I do not think that anyone's ability to write should bar them from discussion. We cannot expect perfection from others. Instead we should try to understand them as human beings, and interpret them with generosity and kindness.
(from internal email)
>Let me be clear: this was an arbitrary decision. It was different than what I'd talked with our senior team about yesterday. I woke up this morning in a bad mood and decided to kick them off the Internet. I called our legal team and told them what we were going to do. I called our Trust & Safety team and had them stop the service. It was a decision I could make because I'm the CEO of a major Internet infrastructure company.
LinkedIn, of course, wants to get all the benefit of the public Internet while providing as little as they can in return. This, coming from someone who used to work at LinkedIn.
These companies have built their fortunes on the public Internet, and now that they are successful they refuse to pay homage to the platform that gave them their success. It's very clearly anti-competitive, and bad for users. LinkedIn should be forced to compete on the quality and differentiation of their service, not by holding their users' public data hostage from competitors.
The judge who issued this injunction, Edward Chen, is also the judge presiding over the class action on whether Uber drivers are independent contractors.
LinkedIn has full control over this; it's their site. What they are fighting for is the ability to choose who gets public access to various pieces of information; a choice its members do not get.
HiQ argued that LinkedIn has a monopoly on "the professional networking market" and is unfairly exploiting that monopoly to gain an advantage in the data analytics market. HiQ showed that LinkedIn might be developing an analytics product that competes directly with their Skill Mapper product.
Or do they mean the 'public' profile which you see when logged in? If so, this would be a real case, because this is awesome data I would like to scrape and which you could build interesting business cases with.
That is actually dangerous. Why can some startup or some judge tell me to whom I can serve content and to whom I cannot?
I was going to do some experiments with larger datasets from businesses in a region, but quickly found that's not possible.
I mean, it is completely crazy; it is not LinkedIn's data, it is OUR data.
Recently I was setting up my new phone and thought about installing their app and I thought to myself, why?
Eventually that thought came back to me when I was attempting to update my profile and simply decided to delete it entirely.
This is the same outcome most of us wanted between Swartz and JSTOR, and perhaps with Malamud and PACER. No technical control can be in the right place, but we can hope for a common understanding (maybe eventually law?) that terms of service may demand or prohibit some things, but not just anything.
On the other hand, privacy is an issue too. LinkedIn lets you download a spreadsheet with the email addresses of all your connections, and if you have a lot of connections you will regularly get email messages from life coaches, "managing directors", software development outsourcers, "SEO experts", and all kinds of BS artists.
I tried a year ago and obviously it was impossible.
Anyone know whether this is right?
And BTW in case you're not aware, if you hold data from any EU citizens you'll be required to comply with the GDPR regardless of where you're located.
How this interacts with Safe Harbour I have no idea.
How can LinkedIn argue that Google should be allowed to scrape but other third parties cannot?
For instance, "buried in 6 layers of obfuscated XML" and "accessible in O(N^3) time" would both be implementations that are not "blocking" the data but they would still be extremely difficult to use.
What I saw instead was a subset of subprocesses in isolation from each other, presented in an admittedly artistic fashion. It's impressive, and maybe the purpose is more to whet one's appetite for more information rather than be informative in itself, but that's not really what I was expecting or hoping for.
And you'd shown what the value is long before I asked myself the question "how much?", which I usually ask early in the process, but not here.
At least that is how I found it... great work.
Would be interesting to see conversion figures for something like this.
And this was Alex's follow-up a year on:
Very well done Alex!
It's nice to see something that was designed with maintainability in mind. Designed to be disassembled, repaired and re-assembled later. Impressive engineering.
So different from most consumer products sold today which never use screws and are not designed for repairing. If it breaks down you're expected to buy a new one...
And your ad is one of the best I've seen since the MasterClass ads in my Facebook feed. I felt like the ad was basically free content. I was learning!
Really great video too
I don't mean to underplay the work involved in programming and marketing this project, but just not giving up is perhaps the hardest part of things like this.
I just subscribed to the video course, and I can see the preorder offer is a no-brainer; skimming through the provided PDF, I can see there is enough value in it to easily make it worth the $20 by itself.
So as a suggestion: highlight the PDF and its contents on the preorder page; there is only one mention of it, and it doesn't specify what it contains.
I like to take things apart, and it made me a bit nervous, as each piece was separated, that I would never be able to put it back together :)
Where is the Reddit post for this? You're going to the front page, for sure.
The motorcycle equivalent is this: https://www.youtube.com/watch?v=MkHJuU01-Wk&index=43&list=PL...
I watched about 3/4 of these ^^ videos, really learned a lot about how a combustion engine works.
I did have a slight giggle when the promo at the end says you explain everything about 'modern cars', while you are working on a car introduced 27 years ago.
Of course I understand that disassembling a new car does not make financial sense, I'm not trying to be negative here.
Really, it's a pretty great engine, but a little grumpy at 233k miles.
I have the 2015 version, and there seems to be a new one out now (the 2018 version), but the reviews warn about the many bugs.
I'm really interested in seeing where you go with the 3D modeling. As a coder/DIY mechanic (one of many I'm sure), I'm pretty psyched by how this tech could be used.
I also want to say that I appreciate your price point. I think it's at a good point where it might be less than the potential value of the product, but attracts those who would otherwise drop out of the purchase or seek other means to obtain the media.
How is the course delivered? Downloadable or streaming only? Can I watch it on Linux?
Nice tip of the hat to Luxo Jr. at the end there.
#1 PayPal returned me to an invalid URL after finishing the payment
#2 I've paid & logged in, nevertheless the website still shows me links to "buy the course".
Something something electric motors are far simpler. ;)
How did you make those flying parts? Photoshopping out the holders?
It's going to be really interesting to see our purchasing and maintenance patterns for EVs.
Of course the opposite is actually a brilliant idea (at least at a time when XML was still popular): Take an XML dataset and visualise it in SVG using XSLT transformations (well... yeah... doing it all in XSLT is still insane, but you get the idea).
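The XML-to-SVG idea can be sketched without XSLT at all. Here's a minimal Python version of the same transformation (the `<sales>`/`<item>` dataset and the function name are my own invented examples, not anything from the comment): turn each record in an XML dataset into an SVG `<rect>`, giving you a tiny bar chart.

```python
import xml.etree.ElementTree as ET

# A hypothetical XML dataset of the kind being visualised.
DATA = """<sales>
  <item name="A" value="30"/>
  <item name="B" value="80"/>
  <item name="C" value="55"/>
</sales>"""

def xml_to_svg_barchart(xml_text: str, bar_width: int = 40) -> str:
    """Transform an XML dataset into an SVG bar chart: one <rect> per <item>."""
    root = ET.fromstring(xml_text)
    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                     width="200", height="120")
    for i, item in enumerate(root.findall("item")):
        v = int(item.get("value"))
        ET.SubElement(svg, "rect",
                      x=str(i * (bar_width + 10)),  # space bars 10px apart
                      y=str(100 - v),               # bars grow up from y=100
                      width=str(bar_width),
                      height=str(v),
                      fill="steelblue")
    return ET.tostring(svg, encoding="unicode")

print(xml_to_svg_barchart(DATA))
```

An XSLT stylesheet doing the same thing would just be a `<xsl:template match="item">` emitting the `<rect>` elements; the Python is easier to show self-contained.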
Fun times. And all nearly 20 years ago. It really was ahead of its time. Of course the end finally had to come. The PGMs sold this thing to a certain aviation company with the promise that it would automatically build circuit diagrams from their XML chipset database (because.... XSLT!)
When Vector took over Corel, they very rightly dropped our division like a hot potato (we had something like 3 PGMs per developer). It was quite unfortunate because the developers and QA people I worked with there were some of the best I've ever had the pleasure of working with. I've always waited for something to come of SVG and really wish we had been able to release something that wasn't crazy so that people could see what the potential was.
Edit: In my old age I'm losing track of time. It seems that Vector acquired Corel in 2003, so that's only 15 years ago :-)
To be useful as a vector image format, there should be strict rules (and less cruft). Why is there no libsvg like libjpeg or libpng? Why have interaction as part of an image format?
SVG lives in an uncanny valley between jpeg and flash/js.
I think there is still a big need for a real vector interchange and display format. Right now people pick a "good" subset of SVG. Or even fallback to fonts.
My dream vector format:
- Pixel perfect across implementations at 50%, 100%, 200% renderings. At least grid-aligned lines.
- Lossless roundtrip across apps. Start in Illustrator, edit parts in Inkscape, other parts in Animate, untouched things stay exactly the same.
- Standard zero dependency C reference implementation: Stream in, bitmap out.
Shameless plug, one of my first experiments with SVG+react (+cljs): https://polymeris.github.io/carlos/ Done in one day, without knowing the tech.
ImageTracer is a simple raster image tracer and vectorizer that outputs SVG, 100% free, Public Domain.
SVG hits a performance ceiling as the number of elements increases. Canvas, being a rasterized surface, can handle a rasterized representation of millions of SVG elements.
However, as the Canvas dimensions increase, it hits a performance ceiling of its own.
- loading bar until website is fully loaded
- animated buttons that bounce and flash
- full screen 2 second transitions from page to page
- all in one page, no urls!
feels like 2010 again : ]
All the kB savings are meaningless alongside the performance issues caused by lots of nodes, gradients, etc.
It's great for responsive icons, but for a full interactive UI with illustrations bitmaps are the better choice IMO unless you really need dynamic scaling.
Just because you can do it doesn't mean you should.
Here's a good Hackaday article on converting SVG to PCB file formats: http://hackaday.com/2016/01/28/beautiful-and-bizarre-boards/
PCBModE is one package I tried for a while: http://pcbmode.com/
Boldport uses this toolchain to make beautiful PCBs relying heavily on SVG: https://www.boldport.com/
I had a hard time with all these packages, however, and ended up just hacking it together by hand with python code and outputting in KiCAD format. I wasn't even able to get KiCAD to read/render it properly (too many weird elements), but since OSH Park (where I got my PCBs from) takes KiCAD format directly and gives you a preview, that all worked fine, and when I ordered my PCBs, they arrived in working condition just fine the first time.
So yeah, SVG can do a lot, including make funky PCBs.
I have in mind all those PNG/JPEG flyers for concerts and various events that are dropped on the web here and there. If they were SVG files, it would be easier for search engines to index their content.
Even without that, SVG totally rocks. About a decade ago, I played with SVGWeb and made a showcase carousel presenting screenshots of projects with automated reflections on them (as was common back then: a reversed image at the bottom of the actual image, with a gradient from transparency to white, as if the ground were a mirror). I just had to upload a plain screenshot and everything was automated. I was mind-blown, and surprised we seemed to go the canvas way instead (not so much, in retrospect).
Nowadays, I often make my icons as SVG React components. It makes it so much easier to change their color or saturation on hover, which is very cool. We probably still have a long way to go to exploit all of SVG's potential.
The transparent JPEG-in-SVG files can also be made with nothing but ImageMagick and Bash.
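The trick behind transparent JPEG-in-SVG is to embed two JPEGs: the image itself plus a grayscale "alpha channel" JPEG used as an SVG mask. Here's a rough sketch of the assembly step in Python rather than ImageMagick/Bash (the function name is mine; producing the actual JPEG and mask bytes is left to whatever image tool you use):

```python
import base64

def jpeg_with_alpha_svg(jpeg_bytes: bytes, mask_jpeg_bytes: bytes,
                        width: int, height: int) -> str:
    """Wrap a JPEG plus a grayscale 'alpha' JPEG in an SVG <mask>,
    approximating a transparent JPEG (usually far smaller than a PNG).
    White areas of the mask are opaque, black areas transparent."""
    img = base64.b64encode(jpeg_bytes).decode("ascii")
    mask = base64.b64encode(mask_jpeg_bytes).decode("ascii")
    return f"""<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink"
     width="{width}" height="{height}">
  <defs>
    <mask id="alpha">
      <image width="{width}" height="{height}"
             xlink:href="data:image/jpeg;base64,{mask}"/>
    </mask>
  </defs>
  <image width="{width}" height="{height}" mask="url(#alpha)"
         xlink:href="data:image/jpeg;base64,{img}"/>
</svg>"""
```

The two data URIs keep the result a single self-contained file, same as the ImageMagick approach.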
Unless I misread it?
Would the same be true for SVG? Would it be used mostly for ads, too? Or even, can you selectively block only the SVG used for advertisements?
> Devices are your personal property. We won't force you to have anything you don't want.
Devices are your personal property. The SoC is still a proprietary trade secret, the baseband is still spying on you for the NSA, the GPU is still a closed blob piece of shit. No mainline driver support, bootloader is closed source, firmware is closed source. We own this phone, you don't.
> We will always play well with others. Closed ecosystems are divisive and outdated.
> Devices shouldn't become outdated every year. They should evolve with you.
Devices become outdated because shitty vendors refuse to open source and mainline drivers for their components.
> Technology should assist you so that you can get on with enjoying life.
Technology should be trustable, and a device where you cannot tell if or when the microphone and/or camera are recording and being remotely accessed is anything but.
Not wanting to single Essential out too much here: every vendor goes on and on about how great their phone is for you, while holding as much of a vise grip over the operation of the device as possible to make sure you need to buy another one as soon as possible through planned obsolescence. It's just that the stick-up-the-ass language announcements like these use is really infuriating when the people making them know full well how much they are screwing you over.
The first actually open platform phone is the one that will have longevity. The rest is snake oil about how good care they will take of you, because you can't take care of yourself with your own software that you can trust.
To be fair though, Sprint is one of the easier carriers to work with after T-Mobile. I can't imagine them releasing a phone on AT&T or Verizon, as their process is grueling. I guess since they're selling an unlocked version of their phone, it doesn't really matter to power users. However, most smartphone sales are from contracts sold directly by carriers, so it'll be interesting to see how they'll do in the market with their current strategy (similar to the OnePlus One).
Props to them though. It's not just about carrier certification. Releasing a smartphone is a long complex process. Some engineers at Sprint were briefly talking about how great the phone was, so I have high hopes.
Buy an (old) iPhone.
I've got a 5S -- still perfectly fast for what I use it for (email, youtube, brokerage account, general internet, some small games), and it's getting OS updates and security patches through iOS 11. It's $120 on eBay; a new screen can be had for $13, a new battery for $11. It's solidly designed and there's a gigantic field of accessories and apps.
Maybe titanium and no bezels are worth a price premium, but there's no way it's worth a 5x increase in price.
In the meanwhile, I'll keep buying $120 phones (Moto G4 with Amazon Ads FTW) and keeping them for ~2 years until they break or software updates stop. Even though as a Catholic (Laudato Si, Rerum Novarum) it kills me to waste all those materials every couple of years and be part of the environmental degradation of our planet.
But controlled obsolescence kills me. The real feature that has improved in phones over the past few years, for me, is the software and apps, not the hardware.
- Give me a lighter, snappier OS. Not something clunkier and slower that uses more RAM and GPU/CPU (i.e., battery life).
- Actually support updates to the things for longer than 2-3 years.
- (Not related to this phone) Use stock Android, unless you're removing bloat. Why? Because inevitably there are going to be apps. What I want is a nice flat surface that includes WiFi, Bluetooth, and nice APIs and permissions for those apps to plug into.
- The biggest feature you can give me on a phone? Battery life, Replaceable battery, Data/Cell reception, Speaker/Microphone quality.
- SIM card that's easy to get out.
- Actually, dual SIMs.
- Support for carriers globally.
- And physical keyboards. Something for SSH'ing with.
Good thing they're not doing that!
> We also plan to release new wireless accessories (like our snap-on 360 Camera) every few months. That schedule ensures that the latest technology will always be in the palm of your hand without having to replace your phone. These accessories will also work with other products like Essential Home.
Spoke too soon.
No, it's really not. It's literally just a tool I use for communication.
While both peanut allergy and celiac disease involve pathogenic immune responses, they represent very different types of problems and this study's results do not suggest any relevance to celiac.
The peanut allergies that they are referring to in this study are one of the most striking examples of what's known as a Type I hypersensitivity (IgE-mediated/anaphylaxis). In this type of reaction, high levels of IgE, a class of antibody, generated toward a specific antigen become loaded onto mast cells and, on re-exposure, cause mast cell degranulation and subsequent smooth muscle contraction. For this reason, anaphylactic responses frequently involve closing of the airway, nausea/vomiting, and other dysregulations of smooth muscle activation, and require a strong adrenergic agonist like epinephrine to counteract this activation.
Celiac pathogenesis is not a Type I hypersensitivity. To my knowledge, the exact mechanism of pathogenesis is not known, but it is likely a combination of Type III (immune complex-mediated) and Type IV (T-cell-mediated) hypersensitivities.
Anyway, I'm not trying to ruin anyone's hope here, but this study has no relevance for celiac. What this has shown is that there is the potential for food allergies to be systematically eliminated with long-term increasing exposure to the problematic antigen, in this case, peanut antigen. This has been done for some time with other, less aggressive types of IgE-mediated conditions like dog and cat dander allergies. So in that way, it's not all that surprising of a result, but I'm certainly glad to see that this was able to be done safely. This is really great news for the millions of people out there with anaphylactic food allergies.
All that being said, I do hope that celiac can be managed more effectively with immune-modulatory (or other) treatments in the future and my sympathies go out to those who have been affected by this horrible disease.
I only ask because it seemed to be general knowledge that this was impossible / couldn't be done up until recently. As an outsider looking in, it seems quite obvious, but that's just due to naivete.
Her clinic is relatively unique, in that it will be offering multi-allergen rapid desensitization. Using this procedure, a person can be desensitized to multiple allergens simultaneously, in as little as three months. She can treat milk, egg, wheat, soy, peanut, tree nut, fish, and shellfish allergies.
I started my food side project https://bestfoodnearme.com with the idea in mind that I can catalog dishes at restaurants based on allergies, gluten free etc. Allergic reactions are a very scary thing especially with small children.
I feel like I have to throw away almost all advice they give us about kids these days. These types of things do a lot to undermine the advice of doctors.
I ended up creating a link component that would automatically link to an archive.org version of any URL I marked as "dead". It was so prevalent it had to be automated like that.
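The core of such a link component is tiny. A sketch in Python (the function name and the "2017" default timestamp are mine): the Wayback Machine resolves `/web/<timestamp>/<url>` to the snapshot closest to that timestamp, so you don't need to know an exact capture date.

```python
def archived(url: str, dead: bool, timestamp: str = "2017") -> str:
    """Return the original URL, or a Wayback Machine URL if the link is dead.

    The Wayback Machine redirects /web/<timestamp>/<url> to the snapshot
    nearest to <timestamp>, so any rough date works.
    """
    if not dead:
        return url
    return f"https://web.archive.org/web/{timestamp}/{url}"
```

A link renderer then just calls `archived(href, dead=is_dead)` instead of using `href` directly.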
Another reason why I've been contributing $100/year to the Internet Archive for the past 3 years and will continue to do so. They're doing some often unsung but important work.
The BBC also donated its Networking Club to the Internet Archive: https://archive.org/details/bbcnc.org.uk-19950301
Also, sites are a very volatile medium. I often bookmark pages with interesting information to read later, and it inevitably happens once in a while that a site went down and I just can't find the information anymore.
On another note, the more dynamic the web becomes, the harder it will be to archive. So if you think the 1994 content is a problem, wait until you're living in 2040 and want to read some pages from 2017.
> The average lifespan of a web page is 100 days. Remember GeoCities? The web doesn't anymore. It's not good enough for the primary medium of our era to be so fragile.
> IPFS provides historic versioning (like git) and makes it simple to set up resilient networks for mirroring of data.
What's cool isn't how fast some of these technologies become obsolete, such as various Java applets and cgi-bin connected webcams. It's the static content that can survive until the end of time.
Like Nicolas Pioch's Web Museum. Bienvenue!
It really stresses the importance of directly quoting / paraphrasing the content you want in your plain text, and not relying on external resources for posterity.
The MBone was not a "provider", it was an IP multicast network. This was the only way to efficiently stream video content to thousands of simultaneous clients before the advent of CDNs. https://en.wikipedia.org/wiki/Mbone
How I laughed.
I noticed that the wayback machine no longer lists historical sites if the latest/last revision of robots.txt denies access. Has anyone else experienced this?
In the late '90s I helped build one of the first Fortune 500 e-commerce web sites. The website was shut down years ago, but it was viewable on the Wayback Machine as recently as a year ago. The company in question put a deny-all robots.txt on the domain, and now none of the history is viewable.
It's a shame; I used to use that website (and an Easter egg with my name in it) as proof of experience.
I had lots of fun reading them as an Internet-addicted kid -- but several of the links were dead even before it was officially published.
Makes me want to try to write a Markdown-only web browser, which treats native Markdown documents as the only kind of Web page.
And yes, the way I got on the internet in those days was to dial into a public Sprintlink number, then telnet to a card catalog terminal in the Stanford library, and then send the telnet "Break" command at exactly the right time to break out of the card catalog program and have unfettered internet access. Good times.
The web is ephemeral unless somebody archives it. Many companies offer an archive service for your sites for a fee, and archive.org does it to provide a historical record.
Zilch. Nada... couldn't find it anymore. Gone. Something I had easily chanced upon before, I now couldn't find with directed searching. They must have restructured their site.
Answers a question I always had about "Snow Crash" by Neal Stephenson. The main character, Hiro Protagonist (I still giggle at that name), sometimes did work as a kind of data wrangler - "gathering intel and selling it to the CIC, the for-profit organization that evolved from the CIA's merger with the Library of Congress" (Wikipedia).
I always wondered what made that feasible as a sort of profit model, and I guess now I know - that was the state of the internet in 1992, when the book was published. Seems like a way cooler time period for Cyberpunk stuff, I'm almost sad I missed it :(
That was one helluva course, challenging and interesting, and fun all at the same time (and so much "concretely" - lol).
From what I understand, that course is still available through Coursera (which Ng booted up after the ML Class experiment; Udacity was Thrun's contribution after his and Norvig's AI Class, which ran at the same time in 2011).
Anyone have links to interviews or information on Ng's vision? I'd love to hear the details.
What are "advanced simulation tools" ? something like https://github.com/marcotcr/lime ?
So is this Elon Musk's arch-nemesis?
I would love to just use the amp version for all TechCrunch pages. Anyone in the mood to make a chrome extension? (I'm on desktop, and the results are still clean without adblocker)
I wouldn't be surprised if private donations will eventually be responsible for the eradication of Malaria (1000 deaths daily, much more suffering and cost to society).
If you're in tech you're likely to be in a great position to create value beyond your company. For example, donating equity from your startup or a fraction of your income to the charities that can prove they are having the most cost-effective impact on the world:
https://founderspledge.com/
https://www.givingwhatwecan.org/pledge/
Bill Gates and Warren Buffett pledged to give half of their net worth to charity, during their lifetimes or at death. They're practicing what they preach.
"The first question concerns accountability... The Foundation is the main player in several global health partnerships and one of the single largest donors to the WHO. This gives it considerable leverage in shaping health policy priorities and intellectual norms..."
"Depending on what side of bed Gates gets out of in the morning, he remarks, it can shift the terrain of global health..."
"It's not a democracy. It's not even a constitutional monarchy. It's about what Bill and Melinda want..."
"In 2008 the WHO's head of malaria research, Arata Kochi, accused a Gates Foundation 'cartel' of suppressing diversity of scientific opinion, claiming the organization was accountable to no one other than itself."
"As Tido von Schoen-Angerer, Executive Director of the Access Campaign at Médecins Sans Frontières, explains, 'The Foundation wants the private sector to do more on global health, and sets up partnerships with the private sector involved in governance. As these institutions are clearly also trying to influence policymaking, there are huge conflicts of interest... the companies should not play a role in setting the rules of the game.'"
"The Foundation itself has employed numerous former Big Pharma figures, leading to accusations of industry bias..."
"Research by Devi Sridhar at Oxford University warns that philanthropic interventions are radically skewing public health programmes towards issues of the greatest concern to wealthy donors. Issues, she writes, which are not necessarily top priority for people in the recipient country."
More in the article...
I definitely don't mean to diminish the contribution of the Gates Foundation though. I often hear that they're one of the good ones.
- Edit- Nevermind, found it on here: https://en.wikipedia.org/wiki/Cascade_Investment
Hopefully other billionaires can take inspiration from him and recognize that helping the species is a more fulfilling game than "How many 0s in my net worth."
There have been some good words from the foundation regarding the (health, primarily I believe) programs in Tanzania. I wonder if this is towards scaling those projects.
Anyone have the scoop?
Keeping a bit of wiggle room.
Also, maybe they can't utilize all that cash at once. Therefore it would be best to be illiquid until you need the liquidity.
Malaria, low literacy rates, etc., are the byproducts of failed political systems and corruption.
Musk's impact on electric vehicle technology will drain a great deal of despotism from the Middle East as dependence on oil wanes, far more effectively than any philanthropic contribution he might have made would have.
There are a number of technologies that can drastically change the dynamic between the elites (officials) and everyone else worldwide. Our most gifted thinkers and entrepreneurs should be inventing the next printing press or cotton gin, not attending charity functions.
What accounts for this monstrous difference? He has cashed some out over the years, but not ~$80 billion worth.
If great efforts are made to eradicate diseases in an area that has historically been subjected to various diseases, such that the local population has adopted the reproduction strategy of having large numbers of offspring to counter premature deaths, but upon eradication/reduction of said diseases there is significant delay in the abatement of the overproduction strategy, if it abates at all (http://www.unz.com/isteve/the-worlds-most-important-graph/),
and if this population surge then expands beyond its historical borders and causes mass societal disruption on a neighboring continent whose civilization has historically contributed great innovation and wealth to the world at large, and subsequently said wealth and innovation contribution declines because of said societal disruption, do those who sought to eradicate the various diseases harbor some responsibility for the diminished prospects of the world at large?
Granted, these are delicate questions, but I believe asking them has not just validity, but importance.
Certainly, simply ignoring pain, disease, and suffering is almost universally unpalatable.
But when you modify one aspect of a complex system for what appears to be perfectly benevolent reasons, it is not at all surprising to find there could be downstream negative effects, negative enough to far outweigh whatever beneficent contribution you thought you were initially making. How do you make value judgments in such cases?
Gates himself has said, regarding the recent unauthorized population influx into Europe, that it is far better to help people where they are.
So I don't believe he is entirely blind to these potential downstream catastrophic effects.
Shouldn't this example of randomness be pushed to the installation stage, instead of the distribution stage? If Debian's binary package contains a "random" key, then we have a pretty large herd already using it.
Some kind of blockchain-like trust verification system isn't the craziest idea I've ever pitched.
Most content websites have become such a massive crapfest of ad-bloat, bad UX, huge page sizes and general usability hell that it's nigh impossible that I'd be able to reach the actual content of a non AMP site in the first 5-10 seconds of clicking on its link. (On my phone that's an additional 1-2 seconds for registering the tap, and 1-2 seconds for navigating to the browser)
My click-throughs to non AMP websites have reduced considerably.
So say what you may, AMP (or FB Instant or its ilk) will prosper until the mobile web experience stops being so crappy.
(Edit: About a decade ago, when mobile browsers were in their infancy and data plans were slow and limited, I distinctly remember using Opera Mini for mobile browsing because it used to pre-render pages on the server and send a very light payload to the phone. This saved you both data costs and made mobile browsing even realistically possible)
This is like complaining that a hammer is bad for driving screws.
And similarly, the rest of the article seems badly researched.
This tech may seem trivial to broadband users, but has demonstrated itself to be effective in mobile-heavy, low-bandwidth markets (ref India & myntra.com)
> With AMP [chat applications] cannot be used
True currently. There are no chat application amp extensions, yet. This could change in the future. Vendors interested in implementing one for AMP should get involved at http://github.com/ampproject/amphtml
> AMP does not have any markup specific to checkouts
Most web pages move from shopping cart to payment by changing URLs. This would work just fine with an AMP page. There is in fact at least one vendor who has integrated payments with AMP already: https://www.ampproject.org/docs/reference/components/amp-acc...
Also take a look at https://ampbyexample.com/advanced/payments_in_amp/
> AMP does not allow for use of forms
> They really do not support a logged in state, or user preferences. Things like recommended products, or recently viewed products will not work with an AMP page. None of the personalization aspects like "Hi, Lesley" are done with AMP.
See the (perhaps poorly named) https://www.ampproject.org/docs/reference/components/amp-lis... This supports loading content specific to the user, even on a cached amp document.
> if search and filtering are a large part of your site's mobile navigation, AMP will be useless.
This is exactly what amp-bind was built for: https://ampbyexample.com/components/amp-bind/
> Google Analytics is not supported on AMP
Google Analytics is fully supported in AMP. Here's the Google Analytics support page: https://developers.google.com/analytics/devguides/collection...
> If you use a different suite of tracking such as Piwik or kissmetrics, they will not work with AMP.
There is a large list of analytics vendors that have direct support here: https://www.ampproject.org/docs/guides/analytics/analytics-v...
Other vendors can be added with a small amount of configuration. Here's a guide for Piwik, for example: https://www.elftronix.com/guide-to-using-piwik-analytics-wit...
Alternatively, vendors can submit a configuration to the AMP project which is just a few lines of JSON, then the vendor will be supported more directly.
> Ad Revenue is Decreased
The link is to a single article from a year ago. There are many studies pointing to the opposite effect as well.
> A/B testing is not supported
I'm not sure what URLs the author used, but I tried to find a similar overstock recliner page that might be the right one. I found:
The author tries to use a google.com/amp URL, but these redirect when not coming from a search click. Much easier is to take the CDN amp URL, which is served the same way:
I loaded both of these in Chrome, simulated a mobile device, network tab, and throttling with Fast 3G. Here were my results:
* non-AMP: 42 requests, 1.1 MB transferred, Finish: 10.3s, DomContentLoad: 3.38s, Load: 9.52s
* AMP: 35 requests, 408 KB transferred, Finish 5.87s, DomContentLoaded 1.28s, Load: 5.88s
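For what it's worth, the ratios implied by those two runs are easy to compute (a quick sketch using my own numbers above, treating 1.1 MB as ~1126 KB; one throttled run each, so not a real benchmark):

```python
# Quick ratios from the single throttled run of each page above.
non_amp = {"requests": 42, "kb": 1126, "dcl_s": 3.38, "load_s": 9.52}
amp     = {"requests": 35, "kb": 408,  "dcl_s": 1.28, "load_s": 5.88}

bytes_saved = 1 - amp["kb"] / non_amp["kb"]    # fraction of bytes not transferred
dcl_speedup = non_amp["dcl_s"] / amp["dcl_s"]  # DOMContentLoaded speedup

print(f"~{bytes_saved:.0%} fewer bytes, {dcl_speedup:.1f}x faster DOMContentLoaded")
# prints: ~64% fewer bytes, 2.6x faster DOMContentLoaded
```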
I suspect that the author's referenced tool is reporting "fully loaded time" as the time that the last network event ended. AMP pages intentionally delay loading images below the fold to prioritize visible content. This results in some images loading later without impacting the user experience. For example, as I scrolled in the AMP page, the "Finish" time would move ahead to a new time as new images were loaded. With events like analytics triggers, looking at the time the last network event finished is typically a misleading metric and won't work correctly with most AMP documents.
If you load filmstrips in Chrome's Performance tab, you can see this more clearly. Filmstrips display what the page looked like at snapshots in time after loading starts. For my quick test with network throttling, the non-AMP page takes a little over 6s to reach its final state and the AMP page takes about 2.2s. So AMP here is nearly 3x faster as the user would perceive it on a similar connection speed.
It either doesn't load or goes back to the previous page after a few seconds.
That said, AMP clearly isn't for eCommerce. You want dynamic, personalized content for eCommerce. Recommendations based on past pages visited, or search terms, or your location... It's not just about fast loading pages.
Some eCommerce sites may fear that their sales will suffer if someone else gets a page that's in AMP and then Google's new rankings put that page over their own... But that's no reason to convert your site to AMP. It's a good reason to build out landing pages specific to search terms, do paid advertising around keywords, and just generally market your products / site.
Generally speaking, people who come in to product pages straight from Google are just doing price comparison anyway -- it's just a step in the decision journey, but if you've done a proper job of marketing your business, customers that buy tend to go straight to your site and do a search using your own search tools.
AMP has limitations, like any piece of technology. Once you understand the limitations you should be able to plan your attack accordingly.
We are pre-launch but if you want early access and test our automatic generator please email me.
1. You have to embed the AMP version from Google's servers; you can't self-host the AMP JS or run it from another CDN. This makes your site unavailable in, for example, China, relies on Google's systems, and ensures that Google knows every user of your site.
> contain a <script async src="https://cdn.ampproject.org/v0.js"></script> tag inside their head tag.
2. You need to allow Google to cache the content, and all Google products will always link to the Google cache version. You can not opt out of this. You can not ensure users visit your own CDN version. You can not prevent Google from displaying modified versions of the pages (for example, the header UI of AMP pages in Google search, and the swiping between pages gesture).
> Q: Can I stop content from being cached?
> A: No. By using the AMP format, content producers are making the content in AMP files available to be cached by third parties. For example, Google products use the Google AMP Cache to serve AMP content as fast as possible.
3. Pages that use AMP get a massive indirect ranking boost. Yes, they don't get directly boosted, but they get added to the AMP carousel, between the ads and the #1 result, or between the #1 and #2 result. If, for a given search term, none of the top pages have an AMP result, Google will boost the first 3-4 pages that have an AMP result to this place even if they'd organically rank on page 10 or later. In some situations, I've seen results from page 13 boosted to #1.
This is one of the most used features in HTML/CSS for handling images; people are complaining in the GitHub issues while others rewrite the whole Internet in AMP with AMP components.
This is ridiculous. Google just wants to restrict other ad networks' JS, and it recreates HTML/CSS/JS for no real reason.
- 2011: 32nm
- 2012: 22nm
- 2014: 14nm
- 2018?: 10nm
I don't know much about foundry processes, but it seems that it's taking more and more time for smaller and smaller gains, right? At this rate, how long until we reach sub-nanometer? What are the physical limits on these processes, and do they have any implications for end users? Will we be using 2nm CPUs for 50 years?
Would love to hear the thinking of anyone educated on the topic.
Edit: very intrigued by the sustained downvotes on this ¯\_(ツ)_/¯
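For a rough sense of scale, the idealized arithmetic behind the cadence listed above looks like this (a sketch only; node names are marketing labels, so treat the density figures as ballpark):

```python
# Idealized scaling math for the node list quoted above.
nodes = [(2011, 32), (2012, 22), (2014, 14), (2018, 10)]

for (y0, n0), (y1, n1) in zip(nodes, nodes[1:]):
    linear = n1 / n0           # linear feature-size ratio
    density = (n0 / n1) ** 2   # idealized transistor-density gain
    print(f"{n0}nm -> {n1}nm: {y1 - y0} yr, {linear:.2f}x linear, {density:.1f}x density")
```

On this idealized math each step still buys roughly a 2x density gain; what has grown is the time between steps, from one year to four.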
Intel (and AMD) keep pushing more and more proprietary code that can not be read, changed, or removed. No one knows exactly what it does, and it has built-in screen and key recording. It's my advice, and the advice of privacy advocates, that no one should purchase or use any processor made by Intel or AMD until they address these serious issues.
I like to call it Tick, Tock, Clunk.
The CPU's performance has started to matter less and less.
We're in the early stages of deploying a new RESTful stack, and versioning is a hot topic (along with getting people out of the RPC mindset and into a resource-based paradigm). While version bumps should be much less common, we'll probably end up doing something similar, with cascading transformations. Essentially, the old version becomes a consumer of the new version, and as long as the new version continues to hold to its API contract, everything should work with minimal fuss. Of course, that's assuming we don't change the behavior of a service in ways that aren't explicitly defined in the API contract...
For anyone else who's interested, they've written/talked about this a few times over the years, to fill out the picture:
It sounds like their YAML system has changed to be implemented in code instead, which maybe allows the transforms to be a bit more helpful/encapsulated. If anyone from Stripe is here, it would be awesome to know if that's true and why the switch?
In general, the concepts employed by Stripe really encourage better design choices. All changes, responses, request parameters, etc should be documented and then handled automatically by the system. We took this approach in our design, although we don't do it with an explicit "ChangeObject" like Stripe does; it's a great idea though.
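For readers who haven't seen the pattern, here's a minimal sketch of the cascading-transform idea in Python (all version dates, field names, and functions are hypothetical, not Stripe's actual code): each version declares one backward transform, responses are rendered at head, and then folded back through every transform newer than the version the caller pinned.

```python
# Hypothetical sketch of cascading API version transforms: each entry maps a
# response one step back toward the previous (older) version.
VERSIONS = ["2017-01-01", "2017-04-06", "2017-08-15"]  # oldest -> newest

def downgrade_2017_08_15(resp):
    # Newest change: 'status' replaced 'state'; fold it back for older callers.
    resp = dict(resp)
    resp["state"] = resp.pop("status")
    return resp

def downgrade_2017_04_06(resp):
    # Older change: responses gained an explicit 'currency'; drop it again.
    resp = dict(resp)
    resp.pop("currency", None)
    return resp

TRANSFORMS = {
    "2017-08-15": downgrade_2017_08_15,
    "2017-04-06": downgrade_2017_04_06,
}

def render(resp, pinned_version):
    """Render the head-version response, then walk back to the pinned version."""
    for v in reversed(VERSIONS):
        if v <= pinned_version:  # ISO dates compare correctly as strings
            break
        resp = TRANSFORMS[v](resp)
    return resp

head = {"status": "paid", "amount": 999, "currency": "usd"}
print(render(head, "2017-01-01"))
# prints: {'amount': 999, 'state': 'paid'}
```

The nice property is that each transform only knows about one change: supporting N versions costs N small functions rather than N parallel implementations, which is also why per-request overhead tends to stay small.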
Hoping to be able to put out a blog post once we start implementing the system and getting feedback on what works and doesn't work well.
This is a really smart way to do it.
One question is, over the years, wouldn't you add a lot of overhead to each request in transformation? Or do you have a policy where you expire versions that are more than 2 years old, etc? (skimmed through parts of the article so my apologies if you already answered this)
Does anyone know of packages that do this already? I have been contemplating creating one in PHP/Laravel for a long time but haven't had the time yet...
I wish other payment services treated their long-time clients with the same respect (looking straight at you, GoCardless).
Hey @pc, with all the spare time your team has accumulated by using this API model, maybe you could put it to good use. Might I suggest it's time to divert most of your tech resources into creating the next Capture the Flag? Because those were just awesome!
I'm joking, in case it's not obvious (but I would absolutely love another Stripe CTF).
From the perspective of a consumer of Stripe's API, doesn't this make debugging or modifying legacy code a real pain? Let's say I'm using Stripe.js APIs from a few years ago; where do I go to find the docs for that version? Do I need to look at the API changelog and work backwards?
It was a delight to get a peek behind the curtain. :)
Say the change is splitting street into street1 and street2 for an address element. How would I know which version to target to get this feature?
1. Mandatory new field.
2. Field is split. For example, address field is now divided into street1 and street2.
3. Change in datatype.
In the above three cases, we had to force users to upgrade their versions.
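Of the three, case 2 is the one a versioned-transform scheme can sometimes absorb without forcing an upgrade: keep the split fields internally and rejoin them for callers pinned to the old version. A hypothetical sketch (field names invented for illustration):

```python
# Hypothetical backward transform for a field split: the new API stores
# street1/street2, but old callers still receive a single 'street' field.
def downgrade_address(resp):
    resp = dict(resp)
    street1 = resp.pop("street1", "")
    street2 = resp.pop("street2", "")
    resp["street"] = " ".join(part for part in (street1, street2) if part)
    return resp

new_style = {"street1": "221B Baker St", "street2": "Flat B", "city": "London"}
print(downgrade_address(new_style))
# prints: {'city': 'London', 'street': '221B Baker St Flat B'}
```

Cases 1 and 3 are harder because requests flow the other way: an old client simply cannot supply a newly mandatory field, so a forced upgrade is often unavoidable there.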
If you want to 'keep in touch' with people, call or text them. Make an effort to actually interact with the people who matter to you.
Metafilter user blue_beetle first put this idea online when he said "If you are not paying for it, you're not the customer; you're the product being sold" in response to the Digg revolt of 2010. The idea apparently existed for a few decades prior regarding TV advertising. I prefer to think blue_beetle was the one who brought it into the zeitgeist.
Edit: Alex3917 posted a similar idea on HN on 6 May 2010, beating blue_beetle by a couple months. Gotta give credit where it's due: https://news.ycombinator.com/item?id=15030959
The advertising tools are so powerful it is downright scary; the level of targeting one can do with them is just insane.
That's partly the reason why I stopped posting updates - after seeing the depth of the advertising tools.
I don't use Facebook for posting personal updates anymore but only to fuel my business. I realise that the only way I can "choose" to stay out of all these services that track and sell our identity to advertisers is if I have "fuck you money" (money is the currency you exchange for your limited time in order to survive in this world).
First, privacy concerns. FB especially was getting too creepy for me. I felt every action I did was being analyzed and filtered; I felt like I was a lab rat. The fact that these companies know so much about us is pretty scary. I felt like I needed to regain my privacy, fight the system somehow.
The second reason was that I wasn't getting anything substantial that could improve my life overall. All I saw was dumb-ass posts, ignorant comments, passive aggressiveness, the "look at me doing this really mundane thing, but please like my picture so I can feel validated", etc... It feels like a rat race to see which of us has a better life or something. I honestly feel bad for how much time I spent there when I could have applied that time to learning new things.
After more than 6 months without FB, here's what I've learned:
- I still keep in touch with my closest friends; we chat on Slack/iMessage every day. It's actually a good way to know who really misses you: during this time, only about 5% of my FB friends reached out to me by message or phone to ask how things were in life. The other 95%, I really don't even remember most of their names anymore. Just ask yourselves: why do we have to share so much of our lives with so many "friends"? I know we can filter, and create groups, etc... but damn, do you really want to spend your life "managing" relationships, to see who sees what? I find that tiresome.
- I don't feel left out of anything, because I keep track of local events using other sources, I read news from trustworthy websites, and if I need to share anything I just use good old email, or show any pictures from my latest vacation face-to-face on my phone, without having to share anything with anyone.
- I gained more time and have less stress; I don't feel overwhelmed trying to keep track of every social media update. I just don't care. If something important happens I will know sooner or later.
- I no longer have this need to constantly keep posting photos of what I'm doing outdoors or whatever. I don't have the need to feel validated by anyone but myself.
- But most importantly, I regained my privacy, or at least my social footprint is next to nothing at this point. I'm using uBlock, Firefox, DuckDuckGo, and other tools to keep trackers at bay.
I may never completely win this war, but at least my habits aren't being recorded and fed to some ML algorithm.
FB is not just making us another node in a vast network graph; it's also breeding the worst kind of boring grown-ups, who can't do anything worthier than post an FB status condemning something and feel great about their social responsibility.
This is a questionable assertion. Giant tech companies like Oracle and IBM don't tend to expand in this way, they make acquisitions of smaller companies, and use them to enhance the platform capabilities of the larger product.
I'm sure Zuck will be delighted if the "bottom billion" do all sign up and use Facebook, but they're never going to be massively profitable accounts.
Imo the acquisitions of Instagram and WhatsApp show the way that Facebook will go - Instagram adds a new and lucrative ad format, a profitable user segment and a base for adding in ideas from other platforms, such as Snapchat. WhatsApp builds out Facebook's graph and can be mined for intel.
> When you install Puppeteer, it downloads a recent version of Chromium (~71Mb Mac, ~90Mb Linux, ~110Mb Win) that is guaranteed to work with the API.
A lot of the Chrome interface libs around at the moment require you to maintain your own instance of Chrome/Chromium and launch the headless server from your command line, or require a pre-compiled version that can quickly get out of date (https://github.com/adieuadieu/serverless-chrome/tree/master/...). Having this taken care of is a blessing.
Here's a quick hack: https://gist.github.com/rcarmo/cf698b52832d0ec356c147cf9c9ad...
I'm using The Verge for testing because it lazy loads images, and am being clumsy about the scrolling, but it mostly works - I can get 90% of the images to show on the finished PDF.
What I can't seem to get right, though, is creating a single-page PDF where the page height matches the document height perfectly - it always seems to be off by a bit, at least on this site (mine works fine, _sometimes_).
Anyone got an idea of why this is so?
This github page was generated by a markdown file created by a test.js running in a Puppeteer docker container:
The project itself looks exciting.
Sadly, there's probably no money on the line. But you will get your buggy software used by huge corporations for years to come!
Even sped things up a little versus phantom.
> Puppeteer works only with Chrome. However, many teams only run unit tests with a single browser (e.g. PhantomJS).
Is this true? Do teams write unit tests but only test them in a single browser? With test runners like Karma and Testem, running tests concurrently in multiple browsers is easy. You'd be throwing away huge value if for some reason you only decided to test in one vendor's browser.
Full disclosure: I'm maintaining the image.
I've been maintaining (thanks to this team and Headless Chrome) a convenience API based on this feature. Some additional features:
* React checksums for v14 and v15 (v16 no longer uses checksums)
* preboot integration for clean Angular server->client transition
* support for Webpack code splitting
* automatic caching of XHR content
And for crawling, or to delegate your single-page app hosting and server-side rendering entirely: https://www.roast.io/
Haven't done anything before with those serverless approaches.
> Who maintains Puppeteer?
> The Chrome DevTools team maintains the library
C'mon guys, a US master's for 7,000 USD? Are you kidding me? It's totally worth it. In fact, I feel blessed that such a thing even exists. GaTech has been a trailblazer in this regard.
My wife did an online master's degree (at a legit university that also had an online program). You have to be very good at self-pacing, diligence, and learning autonomously. You have to be so good at it, in fact, that the type of person who would succeed in an online master's program is the same type of person who would succeed in self-learning without the master's program.
So if your only goal is to learn, then I say no, it's not worth it.
However, you're in Brazil and not a lifelong programmer. Credentials may work against you if seeking a job in the US. Many US companies look at South America as the "nearshore" talent, much better in quality than devfarms in India, but also still cheaper and -- because of that -- slightly lower in quality than US talent.
In that case, spending $7k and completing the program and getting the degree may help you get a $7k higher salary in your first (or next) job. It may give US companies more confidence in your abilities, as you received a US graduate school education.
So from a financial perspective and the perspective of job opportunities inside the US as a foreigner, then I think it may be worth it. If you don't care about getting US jobs then still probably not worth it.
Best of luck!
Honestly I think your time is better spent working on real projects. In my CS master's program I met many students with no real-world experience. One was a paralegal before school, and after he graduated he became...a paralegal with a CS master's. Experience > degrees, every time.
There's value in the program (algorithms and data structures being the most applicable), but just go in with your eyes open knowing that the degree is not a glass slipper that'll turn you into Cinderella overnight. Too many IMHO falsely believed my program was a jobs program and really struggled to find work in the field.
If you can do it at night while working FT, great but don't take 1-2 years off work. It sounds appealing to be done ASAP but you're unlikely to make up that 60-120K/year in lost wages. Unless you're fabulously wealthy.
Got a job at Google directly because of this program (a few classes like CCA helped a lot with interviews). I'm aware of at least a couple dozen of us from OMS here.
The program cost me dearly. It cost me my relationship with the SO and it cost me my health (staying up late nights, lots of coffee).
* $5k is cheap, it's nothing; the real way you pay for it is with your time.
* The teachers like the flexibility as much as we do. Many are top notch. I took two classes from professors that work at Google (Dr. Starner and Dr. Essa), one at Netflix (Dr. Lebanon), and a few others have their own startups.
* One of the classes was taught by Sebastian Thrun, with a TA at Google, but I think that's changed now.
* The lectures are good, but you have infinite ability to supplement them with Udacity, Coursera, etc.
* You learn squat by watching videos. The true learning happens at 2am when you are trying to implement something, and end up tinkering, debugging, etc. That's when things click.
* The hidden gem is Piazza and some of the amazing classmates who help you out. Lots of classmates work in industry and can explain things a lot better, e.g. actual data scientists and CTOs of data science companies taking the data science class. They were amazing, and I owe my degree to them in part.
* Working full time and taking classes is not easy. Consider quitting and doing it peacefully.
* From within Google, I've heard from people that did the Stanford SCPD (I'm considering it) and also OMSCS. Lots of people that say the SCPD program wasn't worth the time and effort. No one yet that's said the same about the GT program.
I've heard from people that have done the program in-person, and they say the online lectures and materials are significantly better.
A couple of things to consider: As you mentioned, it is more focused on Computer Science than Software Engineering/Development. There are a couple of Software Engineering/Architecture/Testing courses but I haven't taken them so I can't comment on how relevant I think they are to my day job.
It's an incredible bargain... 7-8K for an MS (not an "online MS") from a top-10 school in CS. That on its own makes it worth it for me.
It's not easy and it's not like a typical Coursera/Udacity course. Depending on which courses you take it can be quite challenging (which is a good thing). You typically don't have much interaction with the Professors but there are a lot of TAs and other students to help you along the way.
Here's a reddit in case you haven't come across it that answers many questions:
And here's an awesome course review site that a student built:
(Source: current OMSCS student, hopefully graduating in December)
I made an "informed decision tree" a while back that goes into much more detail about my thought process when signing up for this degree:
I also reviewed the OMSCS program in detail here: https://forrestbrazeal.com/2017/05/08/omscs-a-working-profes...
Hope that helps!
Did I learn a lot?
I learnt a ridiculous amount. For the time+dollar investment it is amazing. The program is definitely not easy either.
It has been amazing to learn the concepts in ML (Dr. Isbell) and AI (Dr Starner) courses and then a few weeks later think "I think I can actually use these concepts in my workplace".
Why the mixed feelings?
Not all courses were of the same quality. Off the top of my head, AI and ML were probably the best two courses. Other well-run courses I would add are computational photography, edutech, and introduction to infosec (aside from the rote learning...); however, some of the other courses gave me a relatively negative experience.
The degree does suck up a lot of time and I would say it is the real deal.
Knowing what I know now, I can't say 100% that I would re-do OMSCS. To be fair to GaTech, I'm not sure whether the challenges I describe above are inherent to an online program and I would personally be more suited to an in-person one, but the experience has definitely been better than Udacity's nanodegree and any MOOC I have sat.
Overall I would say if you do it for the sake of learning and that alone - OMSCS is worth it. For any other reason please don't do it.
The program does have its hiccups here and there. Some courses have been reported as being poorly organized, but these are certainly the minority. Also, you may not receive as much individual attention as you would in an on-campus program. This is offset by the fantastic community of students in the OMSCS program, who provide a support system for each other through online forums/chat. If you are not much of a self-starter and need specific guidance, this program may not be for you.
One thing I'd warn though is that you'll get out of the program what you put into it - so it's really up to you to choose classes that will set up your career the way that you want it.
I'm about halfway through and many of the classes assume that you have the equivalent of an undergrad CS degree. It's not intended to replace an undergrad degree.
That doesn't mean you can't do it, but you're going to spend a lot of time catching up. From what I've seen, the students without a CS degree, even those with significant industry experience, have had a much harder time with the more theoretical classes.
It's also a graduate program, and the classes are pretty rigorous compared to what I did in my undergrad CS degree.
Also keep in mind that admission is fairly competitive. And admission is only probationary. You have to complete 2 foundational classes with a B to be fully accepted.
Here are my thoughts on what people need to succeed as an OMSCS student:
* Be able to program in C, C++, Python, and Java at an intermediate level, and know one of these very well.
* Be able to use a debugger (GDB) and valgrind.
* Be able to administer and configure Linux systems.
* Understand data structures and their implementations (std::set in C++ is RB-tree backed, std::unordered_set is hash-table backed).
* Understand basic networking concepts and key technologies (TCP, UDP, IP, switching, routing, etc.).
* Understand the x86 computer in general.
I've done well so far, but I have the programming/logic background to do the work. If you don't, brush up on the skills listed above before enrolling.
Edit: The class projects are a lot of work. Be prepared to give-up your weekends and evenings. Even if you know the material and the language, it's a job to get through some of the projects.
It's hard for me to estimate how much prep I would need to do to come in to this program and feel comfortable with the tasks at hand.
Cons: I've noticed some students who come to get their MS degree from a reputed institution because it is cheap. Due to coursework pressure, they take shortcuts, like doing group work or discussing solutions when it is prohibited, plagiarizing in assignments, etc.
1 - The people I've seen doing it are learning A LOT - more than in any other online program I've seen.
2 - They're also working A LOT - it intrudes on all aspects of their personal life. It's as much or more work than doing an in person CS degree.
3 - The folks I know don't have CS undergrads, which also makes it more difficult.
Net - it can be worth it if you missed CS as an undergrad, but you'll have to work. You need to ask if there are enough people in Brazil who value the credential (or implied skills) to make it worth the time. The time investment is more expensive than the $s. (It will be thousands of hours)
Would anyone who works full time and gone through this program care to share their thoughts?
Edit: Just found this great article from another comment
The classes are cheap. The hours are long. In the end your grade depends on teammates who haven't been vetted. Three teammates who can't code? You get a C and don't pass.
Course content is extremely dated: UML and SDLC paradigms from the '70s, with Xeroxed PDFs distributed to "learn" from.
This is a money grab.
Otherwise, I think OMSCS is totally worth it. It is hard though. Really hard. I have a family, significant engineering experience, and I find the workload intense. It puts pressure on my family at the same time because I'm not available as much. So I'm taking it very slow, no more than 2-3 courses a year.
It feels great to be 'back at school' after so many years. I love learning new stuff and the challenges of hacking away at low level things. The kind of thing you rarely get to do professionally unless you're very lucky (or not getting paid much). Almost makes me wish I had done a Ph.D.
I don't know if it will help me get a better job or whatever, but it definitely fulfills my own internal itch.
I'm through my second OMSCS semester, and if you want to know whether I think it's worth it... you'll have to read the post ;)
I don't know how it would be looked at in Brazil or what the economic cost/benefit are in terms of your own income. I did know a few folks from the University of Sao Paulo that did grad and postdoc work while I was at GT though, so clearly some people are aware of GT in Brazil. That might be another avenue to get opinions from. I would be interested to hear how the costs compare to an institution that was local to you.
edit: Answered my own question - you can't have two consecutive semesters "off". I.e. the slowest possible pace would be 2 classes in the first year, then 1 class every other semester. So I suppose it would be: spring/summer 'xx: 6 credits (24 remaining); spring 'xx+1: 9 credits; fall 'xx+1: 12 credits; etc.
 - per https://www.reddit.com/r/OMSCS/wiki/index
I don't think it will have an immediate impact on my earnings or place in my company, but I think the long term value of having it far exceeds what I'm paying for it.
Folks say institution-X is the same. I haven't seen one. Princeton or Stanford are, AFAICT, stunningly more expensive, and not purely remote.
This is a "sine qua non" - without this particular option, there is nothing else on the menu for me at this point in my life and career.
However, computer science and software development are not the same thing. If your primary goal is to up your game as a software developer, you might get more out of well-regarded software development books like "The Pragmatic Programmer," "Working Effectively with Legacy Code", or "Design Patterns."
Hope this helps.
If you work for a reputable company, like I do, they do tuition reimbursement. My company just so happens to cover $5200 per year.
So in other words, I am getting the degree completely for free.
I have completed 7 classes so far, and have 3 left, which again, were all paid for.
The classes take a lot of time (see https://omscentral.com), but the learning has been a lot of fun. I loved it.
Does anyone have insight if doing Georgia Tech's - Master of Science in Analytics will help me land such role?
First and most important: your internships and work experience, and what you accomplished during those jobs. They should tell a story of increasing and accelerating personal growth, learning, challenge and passion. If you can share personal or class projects, even better.
After your experiences, your degrees will be considered based on the number of years each typically requires, with early graduation and multiple majors being notable.
1. PhD, if you have one. A STEM PhD was particularly helpful for ML/data science positions, but not required.
2. BS/BA (3-4 year degree)
3. MS/MEng (1-2 year degree)
International students get a raw deal. The online masters will barely help you get a job or launch a career in the US. US universities appear to offer the chance to work for major US companies with a notable university (such as Georgia Tech) on your resume, only to feed their graduates into our broken immigration and work authorization system, H1-B indentured servitude and no replies from the countless companies that have an unspoken higher bar for those needing sponsorship.
To round out a few other contexts HN readers might experience:
If you are an international student considering an on-campus MS/MEng, US universities are charging full price while giving you a credential of limited value and utility. Apply the same comments above, but at a much higher price than GA Tech's OMSCS.
If you are completing/just completed a less notable undergrad degree, paying for a master's program at an elite CS school (like GA Tech) is usually a bad deal. If it's not a requirement for the positions you seek, it won't help your career chances much.
If you have an undergrad degree and your employer will pay/cover your MS/MEng at night/personal time (and that is your passion), awesome and go for it! It will be a lot of work and lost sleep to get everything out of the experience, but a lifelong investment in your growth and experience.
If you are completing/just completed a notable undergrad degree (tier-1, internationally recognized program), you don't need the masters. Feel free to get one for your learning, sense of self and building research connections while you ponder getting a PhD. The hiring and salary benefit will be very small--you are already the candidate every company wants to meet. If you decide to get a PhD, that will open some new doors but take 5+ years to get there.
At my previous company, we made it our forte and team passion to get authorization for employees--given a global pool of candidates and a hiring bar to match. I'm really proud of our effort here given the broken and unfair system. Sadly, many companies do not share this value or cannot justify the time, effort and expense, or cannot scale such a program to a larger number of employees across a less selective bar.
That's something you could learn on your own. But your knowledge of "technologies" is more valuable to employers than a CS degree - especially if you have work experience.
The tech industry isn't like academia ( economics ) where you have to build up credentials. Work on projects that deal with web technologies or even better learn the back end ( databases ) or even the middle tier/server code if you are a front-end developer.
Becoming a full-stack developer ( front-end, middle-tier and especially back-end ) is going to be far more important to employers than whether you know what undecidability or computational theory is.
Degrees are very important if you want to break into the industry ( especially top tier corporations ). But if you already work in the industry, employers want to see the technologies you are competent in.
If your employer is willing to pay for it and you have free time, then go for it. Learning is always a good thing. But if you want to further your career, go learn SQL ( any flavor ) and RDBMS technologies - SQL Server, Postgres, etc ( any you want, but I recommend SQL Server Developer Edition if you are a beginner on Windows, as it is very beginner-friendly from installation to client tools ).
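To give a feel for how little setup is needed to start practicing, here is a minimal sketch using Python's built-in sqlite3 module (the same SQL concepts carry over to SQL Server and Postgres, with dialect differences); the table and column names are made up for illustration:

```python
import sqlite3

# In-memory database, no server installation needed for practice.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, team TEXT, salary REAL)"
)
conn.executemany(
    "INSERT INTO employees (name, team, salary) VALUES (?, ?, ?)",
    [("Ada", "backend", 120000), ("Grace", "backend", 130000), ("Linus", "frontend", 95000)],
)

# Aggregation with GROUP BY: average salary per team.
rows = conn.execute(
    "SELECT team, AVG(salary) FROM employees GROUP BY team ORDER BY team"
).fetchall()
print(rows)  # [('backend', 125000.0), ('frontend', 95000.0)]
```

Joins, indexes, and transactions are the next things worth practicing once queries like this feel natural.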
A full-stack web developer is rare, and you could even sell yourself as an architect/manager. That's the difference between being a $60K web developer and a $200K full-stack developer/architect.
Employers will ignore you the second they find out your master's is not legit.
I work for an ISP and we are trying to write another success story ;) As an ISP, we have tons of constraints in terms of infrastructure. We're not allowed to use any public cloud services. At the same time, the in-house infrastructure is either too limited, or managed via spreadsheets by a bunch of dysfunctional teams.
For my team, Kubernetes has been truly a life saver when it comes to deploying applications. We're still working on making our cluster production-ready, but we're getting there very fast. Some people are already queuing up to get to deploy their applications on Kubernetes :D
What I especially love about Kubernetes is how solid the different concepts are and how they make you think differently about (distributed) systems.
It sure takes a lot of time to truly grasp it, and even more so to be confident managing and deploying it as Ops / SRE. But once you get it, it starts to feel like second nature.
Plus the benefits, in almost any possible way, are huge.
Considering that Kubernetes doesn't modify the kernel, this issue sounds like it is present in mainline, and kernel devs should be involved.
Missing ENA and ixgbevf can be a real performance killer!
It's interesting that the reasons they cite for choosing Kubernetes over alternatives are entirely driven by 'developer experience' and not at all technical. It shows how critical community development, good documentation, and marketing are to building a successful open source project.
Kubernetes is becoming the go-to for folks needing both their own physical metal presence and a cloud footprint too. And the magic of Kubernetes is that it has APIs that can actually give teams the confidence to run and reuse deployment strategies in all environments, even across clouds.
If you are like GitHub and want to use Kubernetes across clouds (AWS, Azure, etc.) and bare metal, and to deploy/customize that infra using Terraform, check out CoreOS Tectonic. It also tackles more of the subtle things that aren't covered in this article, like cluster management, LDAP/SAML authentication, user authorization, etc.
Everyone does this - because Kubernetes' Achilles' heel is its ingress. It is still built, philosophically, as a post-load-balancing system.
This is the single biggest reason why using Docker Swarm is so pleasant.
I found this one so far: https://classroom.udacity.com/courses/ud615
But any extra courses/trainings are always appreciated
We ran another large-footprint container management system (not K8s, but also popular), and when its DNS component started to eat all the CPU on all nodes, the best I could do quickly was scrap the whole thing and replace it with some quick-and-dirty Compose files and manual networking. At least we were back to normal in an hour or so. The obvious steps (recreating nodes) failed, the logs looked perfectly normal, a quick strace/ltrace gave no insights, and debugging the problem in detail would've taken more time.
But that was only possible because all we ran was a small 2.5-node system, not even proper full HA or anything. And it resembled Compose closely enough.
Since then I'm really wary about using larger black boxes for critical parts. Just Linux kernel and Docker can bring enough headache, and K8s on top of this looks terrifying. Simplicity has value. GitHub can afford to deal with a lot of complexity, but a tiny startup probably can't.
Or am I just unnecessarily scaring myself?
Would love to hear more about how this was accomplished. I'm currently exploring a similar issue (pulling per-namespace Vault secrets into a cluster). From what I've found, it looks like more robust secrets management is scheduled for the next few k8s releases, but in the meantime I have been thinking about a custom solution that would poll Vault and update secrets in k8s when necessary.
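For the custom-polling idea, the translation step is the easy part to sketch. A rough illustration, assuming you fetch the key/value payload from Vault yourself (via its HTTP API or a client library) and apply the resulting manifest with kubectl or a Kubernetes client; the secret and namespace names here are invented:

```python
import base64
import json

def vault_to_k8s_secret(name: str, namespace: str, vault_data: dict) -> dict:
    """Build a Kubernetes Secret manifest from a Vault key/value payload.

    Kubernetes expects each value base64-encoded under .data.
    """
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name, "namespace": namespace},
        "type": "Opaque",
        "data": {
            key: base64.b64encode(str(value).encode()).decode()
            for key, value in vault_data.items()
        },
    }

# Example payload as it might come back from Vault's KV engine.
manifest = vault_to_k8s_secret("db-creds", "payments", {"username": "app", "password": "s3cret"})
print(json.dumps(manifest, indent=2))
```

The harder parts are the ones this glosses over: diffing against the existing Secret to avoid needless updates, and triggering a rollout so pods actually pick up the new values.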
We're running a few legacy Ruby apps (which we're moving to Go) in production on Kubernetes. We're using Puma, which is very similar to Unicorn, and it's unclear what the optimal strategy here is. I've not benchmarked this in any systematic way.
For example, in theory you could make a single deployment run a single Unicorn worker, then set resources:requests:cpu and resources:limits:cpu both to 1.0, and then add a horizontal pod autoscaler that's set to scale the deployment up on, say, 80% CPU.
But that gives you terrible request rates, and it will be choking long before it reaches 80% CPU. So it's better to give it, say, 4 workers. At the same time, it's counter-productive to allocate it 4 CPUs, because Ruby will generally not be able to utilize them fully. And more workers obviously mean a lot more memory usage.
I did some quick benchmarking, and found I could give them 4 workers but still constrain to 1 CPU, and that would still give me a decent qps.
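For concreteness, the shape described above (one deployment with CPU request/limit pinned, plus an HPA scaling on CPU) might look roughly like this; the names, image, and numbers are illustrative, not a recommendation:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails-app
spec:
  replicas: 2
  selector:
    matchLabels: {app: rails-app}
  template:
    metadata:
      labels: {app: rails-app}
    spec:
      containers:
      - name: puma
        image: example/rails-app:latest
        env:
        - name: WEB_CONCURRENCY   # Puma workers; 4 per pod as discussed
          value: "4"
        resources:
          requests: {cpu: "1"}
          limits: {cpu: "1"}
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: rails-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: rails-app
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```

Note the HPA compares usage against the *request*, so over-provisioning workers relative to the request (as in the 4-workers-on-1-CPU setup) changes when scaling kicks in.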
I read a lot of Postgres code to get this finished, and I'm happy to say that for a codebase with so many authors, the quality of Postgres' is very high. Like any complex program it can be a little hard to trace through, but most of the naming is pretty self-explanatory, and it comes with some really amazing comments that walk you through the high-level design. I'd definitely recommend taking a look if anyone is curious.
These diagrams are really beautiful, I love the subtle separation in the lines showing that they're made of hyphens. Is there some software that makes these easy to generate?
Thank you for all the effort you put into writing this. It is clearly the product of a lot of effort and craft.
> We could ensure that any process takes out an exclusive lock on a file before reading or writing it, or we could push all operations through a single flow control point so that they only run one at a time. Not only are these workarounds slow, but they won't scale up to allow us to make our database fully ACID-compliant
> SQLite allows multiple processes to have the database file open at once, and for multiple processes to read the database at once. When any process wants to write, it must lock the entire database file for the duration of its update. But that normally only takes a few milliseconds. . . .
>
> If your application has a need for a lot of concurrency, then you should consider using a client/server database. But experience suggests that most applications need much less concurrency than their designers imagine.
Although if I had $100 for every time I've had DB corruption in Postgres over the years...
That being said, since 9.4 (or maybe 9.5) these incidents have mostly stopped happening and it's been remarkably stable.
>AI can't democratize itself (yet?) so I'll help make it more accessible to everyone!
This needs to happen more on the software side. You can buy a world-class quad-gpu machine for about half the cost of a vehicle. You can build a world-class single-gpu machine (what I have) for a fifth of that. It's literally amazing how accessible the hardware is, compared to almost any other science which requires 7 figure funding to get world-class results.
The software, the learning, the math, all need to become more and more accessible so more people can pick up and train an algorithm.
I think you mean Clang and LLVM
Seriously, I wonder if stars like Lattner even need to submit a CV or their mail simply floods with offers.
Compilers (especially open source) may be a stretch, but Tesla AI and Google AI? Can a contract clause protect one company when you're working for the other on the exact same field?
Again, I have no idea what I'm talking about, just curious.
Any users like it ?
The whole Google memo affair revealed that Google employees are not as trustworthy as I thought. All the social media talk of blacklists, and the inquisition tactics from some of the upper management, are bad for business.
I've already gotten emails from clients asking me to change their business Google services to something else ("anything else", in their own words).
Mailpile is an email client (MUA) so you will need a server (MTA). At first you can try it out with your regular ISP, even Gmail. Later you can set up your own server. Setup is a little involved but much less than people tell you and, if you choose a competently run distro, requires very little ongoing maintenance.
With your own server, you can have it working exactly as you like. Export feature? No problem, you have direct access to the maildir, mailbox or the database. Want a catch-all? One switch in the config. You will have little trouble finding a provider who accepts your preferred method of payment, too.
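As an example of the "one switch" claim, here is what a catch-all looks like if the MTA happens to be Postfix (other servers have their own equivalent; domain and user names are placeholders):

```
# /etc/postfix/main.cf (excerpt)
virtual_alias_domains = example.com
virtual_alias_maps = hash:/etc/postfix/virtual

# /etc/postfix/virtual
# Any address at the domain is delivered to the local user "myuser".
@example.com    myuser
```

After editing, rebuild the lookup table and reload: `postmap /etc/postfix/virtual && postfix reload`.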
Sure, you can find the web client sources. How about the server, and the mobile apps? The website makes a big deal of them being open source after all ( https://protonmail.com/blog/protonmail-open-source/ ).
The whole talk about "freedom and privacy" in relation to Bitcoin made me a bit nauseous. These are tech guys. It destroys trust for them to be blabbering nonsense about privacy like this.
Legacy Bitcoin fees can be higher than their actual monthly plans.
Anybody make the switch?
The Export feature has been an open request since before March 2015.
Another feature which would give users a way to get their mail out of PM is the ability to check mail from a client like Outlook or Thunderbird. That has been an open feature request since before February 2015.
They, as with other companies that refuse to listen to their customers, will eventually fail. Of course failure may mean being bought by a larger competitor (and a few of the bad decision-makers cashing out)...
How exactly... by paying with Bitcoin?
And this is coming from a security-conscious company?
Unless I mined the Bitcoins myself, and never spent the remaining 12.45 BTC that I mined (after presumably spending 0.05 BTC on protonmail)... it is far from "Anonymous".
If they started accepting Zcash, however...
Are there major products out there priced in Bitcoin yet?
If you find this type of thing interesting and want to be part of it, we are hiring lots of folks. My team is looking for bioinformaticians, Python hackers, and machine learning people. Please reach out to me if you want to know more firstname.lastname@example.org
> Last year, we raised $5.5 million to prove out the potential of this technology. Now, it's time to make sure that it's safe and ready for the broader population.
Not-so-obvious point #1 is that the presence of cancer-associated mutations in blood != cancer. You find cancer-associated mutations in the skin of older probands, and presumably many of the sampling sites would never turn into melanomas. A more subtle point is that cfDNA is likely generated by dying cells, i.e. a weak cancer signature in blood might also be indicative of the immune system doing its job.
Point #2 is that it's not necessarily about individual mutations, which are, due to the signal-to-noise ratio alluded to above, difficult to pick up. One can also look at the total representation of certain genes in cfDNA (many cancers have gene amplifications or deletions, which are easier to pick up because they affect thousands of bases at the same time), and the positioning of individual sequenced molecules relative to the reference genome. It seems that these positions are correlated with gene activities (transcription) in the cells that the cfDNA comes from, and cancer cells have distinct patterns of gene activity.
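The copy-number idea in point #2 can be shown with a toy calculation (this is a numerical illustration only, not a real bioinformatics pipeline; all counts, gene names, and the 1.5x threshold are invented):

```python
# Toy illustration: gene-level copy-number signal in cfDNA.
# Compare normalized read counts per gene against a healthy baseline;
# an amplified gene shows a coverage ratio well above 1 across
# thousands of bases, which is easier to detect than one mutated position.
baseline = {"ERBB2": 1000, "MYC": 980, "TP53": 1010}   # healthy reference counts
sample   = {"ERBB2": 2900, "MYC": 1020, "TP53": 990}   # patient cfDNA counts

# Normalize by total depth so sequencing library size doesn't confound the ratio.
b_total = sum(baseline.values())
s_total = sum(sample.values())

ratios = {g: (sample[g] / s_total) / (baseline[g] / b_total) for g in baseline}
amplified = sorted(g for g, r in ratios.items() if r > 1.5)
print(amplified)  # ['ERBB2']
```

A real analysis would work with thousands of genomic bins, model noise explicitly, and correct for GC content and mappability, but the normalize-and-compare logic is the same.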
NIPT is a non-invasive blood screening test that is quickly becoming the clinical standard of care. Many insurance companies now cover the entire cost of NIPT screening for at-risk pregnancies (e.g. women of "Advanced Maternal Age" (35yo+)). The debate is moving to whether it should be utilized/covered for average-risk pregnancies as well.
I really do wish detection studies would publish a ROC curve, though, or at least d'.
Is 98% specificity adequate for a cancer test?
I guess most are treatable if caught early?
That, and the article about blood tests, shows there's a lot they're working on in noninvasive or minimally invasive procedures to help catch cancer early on.
Last ditch effort was an experimental drug. Out of 150 people in the trial, he's the only one alive 12 years later. 6 years ago he started a company that is about to hit 8 figures in revenue and over 300 employees.
Life can definitely have a lot of swings.
> Stop just assuming you have a full lifetime to do whatever it is you dream of doing.
Rare and catastrophic events like this that can severely shorten your life are ones you should not plan for. If they occur, that's very unfortunate, and maybe you'll still have a contingency plan that can rearrange your plans to fit the remainder of your life, but he was still right to have arranged his life from the start to maximize his expected (in the statistical sense) impact:
> Before this diagnosis I'd been thinking of my first 35 years, aside from being a ton of fun and travel, as preparation. I felt like I was building a platform (savings, networks, skills, experience) that I could then use in my second act to make a real contribution, to make my mark.
"Most of the time", he would have been right to use the early part of his life to make long-term investments in his development. It's just that this time it turned out to be the wrong choice.
My father had colon cancer and was not expected to survive it. I think he was 69 when he was finally diagnosed. He had put off seeing a doctor because he thought it was "just old age" and it was quite advanced by the time he was diagnosed. He did survive it and died, iirc, just short of his 89th or 90th birthday.
A lot of my relatives have had cancer, some of them more than once. Few of them have died from it. So, I tend to be biased in assuming that it can be conquered, even when the doctors say the odds are long against.
I know I have seen quotes on HN about how the odds are long against startups and other things, but how you can work to overcome those odds. I think it is a bit like that.
Good luck in your journey. Best wishes on your outcome.
The doctor is likely to recommend a colonoscopy. Removal of polyps (a likely source of bleeding) reduces the chances of cancer developing and can only be done during a colonoscopy.
(i) to show that the aggregate does not inform the individual. As a species we are too influenced by aggregate stats. There is huge variation in individual responses.
(ii) history is written by the winners. Those people who did not survive cancer are not here to write about it and those surrounding them tend to want to forget and move on. Survivor bias is a real thing.
Not only is this story a reminder that life is short, but also that we must battle each day to keep perspective when the world and our human limitations are always trying to skew our views.
Given the tensions in the world, seeing things from the other points of view has never been more important.
Thankfully ALL has a very high success rate for treatment and long term survival, but it has certainly made me realize how unpredictable life can be. I was putting things off that I wanted to do with my life and once I finish recovering, I certainly will work to correct that.
Don't wait until tomorrow.
Shit happens; my wife was stabbed on the way back home at 29 and has been in a coma and now vegetative state since. If someone has a good way to deal with this crap, please do let me know.
> if we start thinking about impermanence now, while we still have time to find skillful means to deal with it, then later we will not be caught unaware. Even though in the short term the contemplation of death and impermanence might cause discomfort, in the long term it will actually save us from greater suffering.
She has stage 4 lung cancer, and it's spread to her brain. If she had gone to Stanford, like she wanted, they would have done whole brain radiation to try and blast the brain tumors. The problem with brain tumors is that there is a barrier that keeps bad stuff out of the brain, and it only lets small stuff through; chemo tends to be large.
My buddy started researching and asking questions (he's business/sales, but I think he's an engineer at heart). The 5 year survival rate is less than 1% for stage 4 lung cancer. So he started asking doctors and hospitals "what's your 5 year survival rate". Everyone pointed him to national stats and he said "no, I know those numbers, what are yours?". El Camino Hospital publishes their numbers because they are much, much better: 15%. I know, 15% isn't great but it is a boat load better than under 1%.
So they went there. El Camino has a different approach to this sort of situation, they use some chemo (avastin maybe?) that is small enough to get through the brain barrier. They also did pinpoint radiation.
The results: it's 2 years out, I think 2 years and 1 month, and my buddy's wife isn't fine but she's damn close. She's on an every 3 week chemo cycle, she typically gets 11 good days and 10 crappy-bad days. They both retired (I still pay his health insurance which is a big deal) and bought a travel trailer, do 2-5 days trips up and down California. They are fully aware that they are trying to cram all of their retirement into a few years and so far are doing a great job, their doctor loves it (apparently a lot of cancer patients sit on their butt, just waiting for the next chemo session).
If she had gone to Stanford, while the radiation would have likely wiped out the tumors, it also has this little side effect called dementia, happens very quickly. So this outcome is much, much better and it only happened because my buddy did his homework.
And one sort of cool thing happened: this all started before my company imploded and I gathered the team and said "I want to send Bob on vacation. We've only got so much runway left so if you don't want to use some of that money on Bob, I get it, I won't judge, I'll pay for the vacation myself." It was unanimous, they wanted the vacation to be from the team (I was so proud of them, that's the team I wanted). So we sent them back east to see the fall colors, they had a great time.
It's worth stating that I've watched my mother-in-law and my father die of cancer (and while my love for my dad is pretty obvious, I loved my mother-in-law as well, we got along great). The thing that I've learned is the second you know you have something that is life threatening do whatever you want to do RIGHT NOW. I pushed for the vacation thing for Bob because my mother-in-law didn't want to have friends over "until she was better". I deeply regret not just arranging to have all her friends come in. She died pretty quickly.
So I don't want to be morbid, or show any lack of hope, but there is the possibility that OP is in the best shape he's gonna be. So use that time to have some fun, build some memories, whatever you think is good. If you kick cancer's ass you'll have some memories to look back on, if you don't, your family will have some to hold onto. Do not listen to the doctors, they tend to be overly hopeful and give you a false sense of hope (I get it, it's kind of all they can do, but we would have liked a more realistic view. They let my mother-in-law think she was going back to work).
Good luck, cancer sucks.
Edit: explained that lots of patients don't really live between chemo sessions, and fixed a typo.
Being worried is definitely not a reason to put it off!
But this gets me thinking about a couple of bleedings I had a few months ago and totally dismissed. I will definitively see a doctor soon.
Why isn't the medical community looking for early warnings? Relatively cheap tests for hs-CRP, IL-6, and TNF-alpha could be helpful. Inflammation is a precursor to disease.
Do yourselves a favor and ask your doctor for the hs-CRP test to measure inflammation.
But will this change our perception of people in their 20s-30s who do pursue their dreams and are behind in their careers and finances?
At what point is a quarter/mid-life crisis not just an acceptable adult YOLO?
The author has some initial regrets about building up their savings and network first, and I wonder whether my posting this will bring out additional perspectives.
This puts my own trivial problems into perspective. I hope you make it. God bless.
That highlighted bit should be tattooed on everybody's forehead at birth so you see it every day. Lots of wisdom in there.
Start Here, please!
He was an athlete who ran in marathons. Never smoked or drank. But cancer still took him. Such a terrible disease.
Everyone who has left: Uber (travel ban and #DeleteUber), Disney (Paris withdrawal), Tesla/SpaceX (Paris withdrawal). And now Merck, Under Armour and Intel (all left after the failure to condemn white supremacists).
There's something I can get behind. Angry people have a positive feedback loop, so if we all collectively talked about Monads there'd be fewer problems.
Pretty worrying to observe the polarization in the US where IMO both left and right are full of extremist views and moderates have no audible voice.
You, the thing listening to this advice, is just a small part of a greater whole, much of which you (the thing listening to this advice) are not consciously aware of. This is because you were built by your genes to be good mainly at one thing: reproducing. That's all your genes care about. They don't care about your happiness or achievements or having a fulfilled life. In fact, they don't even really "care" about reproducing, except the same way that water cares about flowing downhill.
Your negative emotions are real. The pain you feel is real. But it's not you. It's something that is being done to you. In that regard it is exactly the same as physical pain, which is also not part of you, but rather something done to you. The fact that you're depressed is no more a character flaw than the fact that it hurts when you skin your knee.
When I feel bad I generally do a quick mental checklist: Have I drunk any water lately? Have I eaten? Have I had my pills? Usually the answer to at least one of those is 'no' and I feel better after correcting them. Then since my need for a warm place to sleep is pretty well settled, it jumps up a few notches to checking when the last time I cuddled someone was, how I feel about my current project, etc.
But I went farther than that.
I also learned to accept that my ideas and behaviors are inconsistent and it helped me better understand the world and myself.
And finally, I learned to accept that I sometimes do bad things (and desire to do even more bad things) and it made me a better person.
(Disclaimer: my experiments have n=1, no control group and subjective measurements chosen in hindsight.)
Over the years, I've managed to train myself to be amused by feeling bad. Because it's so pointless and silly. And so I smile, feel happy, and consider what needs done. Sometimes what needs done is sleeping for 15 hours. But that's very different from refusing to get out of bed.
> those who generally allow such bleak feelings as sadness, disappointment and resentment to run their course reported fewer mood disorder symptoms than those who critique them or push them away, even after six months
Yet the comments so far are mostly about "controlling your thoughts", rather than accepting negative emotions (and maybe looking at what they're trying to tell you rather than killing the messengers).
How does one determine that someone is really "feeling bad about feeling bad", rather than satisfying some expectation of the experimenter or doing something else? Moreover, cause and effect can be slippery: maybe those with more extreme problems tend to be caused to do the thing the experimenter thinks is a mistake, rather than what they do having an effect.
worrying about worrying about worrying
Getting over homesickness seems to be about being able to accept the bad thoughts as you would do at home and move on, rather than "feeling bad about feeling bad" in the form of wishing you could just go home.
I'm fortunate enough to have never experienced it, but I imagine serious grieving is like this too.
Emotions are adaptations. They trigger in particular circumstances and they cause particular behaviors. The entire point of having an emotion is to engage in emotionally appropriate behavior until the circumstance has passed. Positive emotions are attractors towards the triggering class of circumstances, while negative emotions are repellers.
"Feeling bad about feeling bad" pretty much works out to be something like "naive gradient descent on the function that maps attention onto emotional valence". This doesn't solve the underlying situations that cause you to feel bad - it's like not looking at pictures of your dead wife, rather than grieving, getting support from friends and family, and then moving on with your life.
The more I am busy (with meaningful work), the less time I have to feel bad about feeling bad, or to just think about this.
This doesn't mean one should escape into random work, but there might be a slight correlation.
Having 64-bit by default fixes such problems. That's a good move. Well done, moz://a.
Yeah, Flash is 32-bit, but Flash has been dead for the greater part of the decade. Not sure how it could possibly be the case that Moz was still shipping 32-bit Firefox as the default until yesterday.
Any reason to do so? It'd have to be some fringe use case of '<4GB RAM system & user doesn't need fast codecs'. If it's a closed business environment I'd imagine they're on IE
That said, kudos for making this the default.
Back in the early 80's my dad ( a chess teacher from Odessa ) immigrated to NYC ( with just $150 ). My mom worked 3 jobs to put him through Yeshiva Uni where he learned Cobol, JCL, and Fortran.
He ended up getting hired right away by Lehman Bros, and realized he was sitting on a gold-mine. Tons of well-educated immigrants were coming onto the golden-paved shores, with 0 knowledge of computer programming.
So we upgraded our family 1.5-bdrm apt in Jackson Heights, Queens, to a modest 3-bedroom tower apt in Forest Hills, where he proceeded to lecture evening and weekend classes by the droves (all on 1 white board!). My job ( I was 13 at the time ) was to serve everyone instant coffee and bagels.
One day I was curious and asked him, "Dad, these folks can barely speak English.. how are they going to even pass their interviews?" He looked up from the hand-drawn spreadsheet where he kept a strict record of students: "Oh, that part is easy. I already know all their future managers." It was the perfect funnel.
best of luck fellas!
> Are you legally authorized to work in the United States?

Just a note: We are currently unable to offer income-based repayment outside of the United States. You can still attend Lambda School, but you would have to pay at least $10,000 up-front.
$10,000 - that's about as expensive as a high-quality bachelor's degree (in Germany) at a private university via distance learning, where someone can pay flexibly, btw. Additionally: we are talking about 6 semesters' worth of material and study, and study time can be extended for free.
What you are offering is a 6-month crash course after which someone will have _NO_ degree whatsoever. I also doubt how much computer science you can teach in that time. A normal CS curriculum spends about one module (one semester's worth) on just the introduction to programming, and probably 2 modules (that you would do in 2 semesters) on computer science basics like computer architecture, et cetera... and there are so many good resources already available, including lectures from incredible professors at some of the greatest universities.
Also: you have to create the learning resources once and can take on as many students as you want without any additional cost - great for you, but it seems like selling snake oil to me. Besides the resources and apparently live online group work ("group work happens live and interactive"), I am unsure what it is you provide for possibly $30k in cash. One success finances the cost you incur from an incredible number of failures, and it's not clear to me whether that one guy finding a job will have done so because of your awesome curriculum and support.
It seems to me like anyone who can possibly finance a proper education some other way should (and I want to repeat: it's a 6-month crash course, not a degree).
$30k seems steep, and while $0 upfront is appealing, that kind of strategy can be exploitative ( like 0%-deposit financing ). It can be super appealing to someone who doesn't have much money, but they end up paying a lot more than they really should.
Not saying this is bad (maybe it's a good thing), but when I look through it, I'm getting alarm bells.
I also have questions about the quality of instruction. There are some big name institutions listed, but that doesn't necessarily indicate quality instruction. The best researchers are often emphatically not the best instructors, and for this venture instructors are much more important.
I sincerely hope this is successful; perhaps this can prompt traditional institutions to be more innovative (in delivery, instruction, finance, all of it).
Aside from that, you're essentially giving your students a loan and then having them repay it once they start earning money. How is this any different than a regular student loan (but with way more risk on your end)?
An Irish degree costs 3,000 euros a year to enroll. Even over four years, that would be cheaper than what's on offer here, with a worldwide-recognized qualification.
On the whole, it seems like a very American solution to an American problem; education is just far too expensive in the States. * I'm a beneficiary of free state education from Ireland.
Best of luck in the effort, I do applaud you for it especially in the US where there are always tech shortages but...I really wish your country was in a place where your type of business didn't have to exist. A country where only those who can afford to be educated can receive education. I'm saying this as someone living in Nepal where those who can't pay can't learn either.
I really wish education was subsidized, worldwide! Alas, never going to happen.
Not that accelerating the success of those destined to succeed is in any way a bad thing (it is, undoubtedly, a good thing, and can certainly make for a good business), but my interest is much more in helping those succeed who otherwise wouldn't. That seems like a much harder challenge and one that can have more interesting ramifications for things like social mobility and income inequality.
You'd almost need a skills camp before the bootcamp that would be less choosy and take the top 20-30% of those students into a further bootcamp... but at that point the economics start to break down (why it'd likely need to be pro-bono).
Anyway, that was a big tangent! I think what you're doing is interesting. Do you have a twitter account set up if we're interested in following along?
What these programs need most of all are cheap dormitories that offer room and board. (Close proximity to students can also help provide one another with a support network.)
Asking since I've looked at this model before, and while I'm not able to find it, I recall court cases that ruled against this type of financial agreement.
Just a note: We are currently unable to offer income-based repayment outside of the United States. You can still attend Lambda School, but you would have to pay at least $10,000 up-front.
What if I want to take this program from outside of the USA and want to work remotely? Would this count as working in the USA? (Which is one of the questions on the form)
For example, could I sell you a car and say "the price is 1% of all your future earnings"?
Legally I guess I'd have to say "The car is $30k, but I'm giving you a loan at 999% interest, with repayment demands capped at 1% of your salary (upon production of salary evidence), and the loan to be forgiven in full in the case of death".
One pedantic but perhaps important nit to pick: I would not recommend the language "especially if you come from a lower-class background". Lower income, reduced opportunity, lower education, etc but lower class reads like you're coming from a position of privilege that looks down on potential students.
I'm interested in how you select applicants but your website doesn't let me look at the application process. Your funding model means selecting the right students is crucial as you carry the risk. How do you select from the candidates? Are they meant to be already quite experienced? I looked through the curriculum and it seems very intense even for someone with a bit of coding experience. How do you make sure people keep up the pace?
> "At that point we take 17% of income for two years"
This is insane for too many reasons. What happens when the applicant changes jobs, goes abroad, gets fired, quits, or creates her own company?
Your schedule is full-day learning. So now I wonder: are you targeting people who are out of jobs? It's a little hard to understand your criteria for recruiting students (I have not paid too much attention yet).
- Do you plan to extend the syllabus to something more fundamental? Asking because you have some C++ and already go further than bootcamps, so I'm wondering if you intend to pursue this direction and get closer to a formal CS education.
- Do you plan to work on giving some sort of degrees later? Some countries require degrees for immigration, which is a door that bootcamps do not open. By offering degrees, you open the global market to your students, and in an increasingly connected world, it can make a huge difference. I basically owe my quality of life to the mobility I got thanks to my degree, and this is something I wish to everyone.
Massive Kudos to you guys! I hope you will eventually offer the same service to students outside of the US. The market is large.
Would I still be on the hook to pay you 17% of my income despite not landing a programming job?
This is similar to how all university tuition fees in the UK work, with slightly different numbers. A very sensible solution to the astronomical cost of education, provided people want to get jobs.
In the current UK scheme you pay 9% of the amount you earn over £21,000, or nothing while you earn less than that. The debt is written off after something like 25 years if unpaid.
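A minimal sketch of that repayment rule, using the £21,000 threshold and 9% rate from the comment above (the exact figures change over time, and the function name and whole-pound simplification are mine, so treat this as illustrative only):

```go
package main

import "fmt"

// annualRepayment computes the yearly repayment under the UK-style
// income-contingent scheme described above: 9% of income over the
// £21,000 threshold, and nothing below it. Amounts are whole pounds.
func annualRepayment(income int) int {
	const threshold = 21000
	if income <= threshold {
		return 0
	}
	return (income - threshold) * 9 / 100
}

func main() {
	fmt.Println(annualRepayment(18000)) // below threshold: 0
	fmt.Println(annualRepayment(30000)) // (30000-21000)*9% = 810
}
```

The key property is the one the comment calls "sensible": repayment scales with what you earn, not with what you borrowed.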
I've trained 3 people to code using a similar meta-course I built. It's more of an apprenticeship scheme (they work for me once they get good enough). https://github.com/z-dev/learn-programming
From that experience I think this idea could do well. I think no-win-no-fee education like this will be popular. I also think that a scheme like this could do a lot of social good and help people who might not want to go to University.
Good luck :)
I know a fellow who was very much "into computers," decided to become a programmer, signed up for the (offline) classes, and then had massive trouble trying to understand how a do-while construct works. He was in his mid-20s back when it happened, spoke 3 languages, and had a day job as a nurse. He did finish the course, got his grades, but ultimately lost all interest and never got into IT.
The incentives at play here seem a bit out of whack...
If you can't get hired because you went to a "dev bootcamp" or just aren't good enough with code after spending all that time in school, they'll come after you for that car mechanic paycheck.
I also noticed a mini web dev bootcamp; when will this be launched?
1) Risk is pooled in the institution rather than distributed amongst the students, which is the textbook way to deal with uncorrelated risk.
2) The incentive of the university and the student are aligned as much as possible
3) By putting costs and benefits into a form with equal time horizons, disadvantaged students no longer need to rely on the generosity of governments or private lenders for upfront cash.
The only thing I'd always questioned was whether such a scheme as described above could pass legal muster, as it bears a resemblance to involuntary servitude and also requires access to income statements. I've never heard of anyone but the govt placing liens on income. I'm hoping a workaround has been found, because from a strictly incentive-based analysis, this model has the potential to do to modern education what patent law did to manufacturing.
Any idea what the acceptance rate is like?
What sort of time frame will you hear back in about whether you got accepted or not?
It feels like a very fine line between altruism and taking advantage of ignorance.
* Underestimation of difficulty whether through cynicism (burn the devs) or cluelessness
* Inadequate training and the expectation that devs can just piggyback learning technology x from scratch whilst writing production software using it
* Trying to use one off contracts as a way of building resellable products
* Insistence that all devs time must be billable and trying to defy gravity in ignoring skills rot etc. through lack of investment in training
* Expectation that devs can be swapped between technologies without problems
* Swapping people in and out of projects as if this will not affect progress
* Deliberate hoarding of information as a means of disempowering devs
All of this inevitably leads to a bunch of pissed off devs. The ones that are happy to eat it become the golden boys and get promotions. Those that point out the bullshit leave once they can and are replaced with the desperate at the bottom who sooner or later arrive at the same position of wanting to leave once they realise what's going on. I think tech can be pretty miserable if you are not in the upper echelon of lucky types that can score a position at a Google, Facebook etc.
Oh and a couple more:
* Give no feedback unless things go wrong
* Treat your highly educated, intelligent and motivated devs like children by misusing agile in order to micromanage them
You might think startups are small enough that this couldn't happen but that was actually where my worst experience was. The founders are visibly in a meeting with a couple people, maybe "suits", maybe not. They come out of the meeting and the next day your priorities are rewritten. Cool beans, that's a thing that can happen and that's not my issue. My issue is, why? What are the goals we are trying to hit now? What's the plan? Why is that better than the old plan?
This is especially important IMHO for more senior engineers responsible for architecture and such, because those matters can greatly affect the architecture. Telling me why lets me start getting a grasp on what parts of the code are long term and what can be considered a short-term hack, what scaling levels I need to shoot for, and all sorts of other things that are very hard to determine if you just come to me with "And actually, our customers need a new widget to frozzle the frobazz now more than they need to dopple the dipple now."
Not necessarily the biggest issue, there's a lot of other suggestions here that are probably bigger in most places, but this is one that has frustrated me.
(I'll also say this is one you may be able to help fix yourself, simply by asking. If you are in that senior role I think you pretty much have a professional obligation to ask, and I would not be shy about working that into the conversation one way or another.)
* Spending too much on marketing/sales before people want the product. They usually just end up burning their brand if the product is too low quality.
* Too much focus on building multiple small features rather than focusing on the value proposition.
* Trying to negotiate deadlines for product development. "We don't have two months to finish this. Let's do this in one." In software estimation, there's the estimate, the target, and the commitment. If the commitment and estimate are far off, it should be questioned why, not negotiated.
* Hiring two mediocre developers at half the salary of one good developer. They usually can't solve problems past a certain threshold.
* Importing tech talent, rather than promoting. Usually the people who have built the product have a better understanding of the tech stack than someone else they import.
* Startups that rely on low quality people to skimp on the budget. These people later form the DNA of the company and make it difficult to improve, if they're not the type who improve themselves.
There seems to be some sort of quasi-religious belief in the fundamental averageness of humans; consequently the difference between developer salaries at any company varies by maybe 50%, whereas the productivity varies by at least a full order of magnitude.
Until "management" realizes this, the only way that a developer on the upper end of the productivity scale can capture their value is to found their own company. I sometimes wonder what would happen if some company simply offered to pay 3X the market rate and mercilessly filter the results.
* Reorganizing seemingly for the sake of reorganizing. Result: Every time the new organization has settled somewhat and people know who to interact with to make things flow smoothly, everything is upended and back to square one.
* Trying to make our products buzzword compliant without understanding the consequences - we've on occasion been instructed to incorporate technologies which are hardly fit for purpose simply because 'everyone else is doing it' (Where 'everyone' is the companies featured in whatever magazine the CEO leafed through on his latest flight. Yes, I exaggerate a bit for effect.)
* Misguided cost savings; most of what hardware we use, we buy in small quantities - say, a few hundred items a year, maximum. Yet purchasing is constantly measured on whether it is able to source an 'equivalent' product at a lower price. Hence, we may find ourselves with a $20,000 unit being replaced by a $19,995 one - order quantity, 5/year - and spend $10,000 on engineering hours to update templates, redo interfaces &c.
* Assuming a man is a man is a man and that anyone is easily and quickly replaceable (except management, of course) - and not taking the time and productivity loss associated with training new colleagues into account.
Edit: An E-mail just landed in my inbox reminding me of another:
* Trying to quantify anything and everything: one focuses on the metrics which are easy to measure, rather than the ones which matter. As a result, the organization adapts and focuses on the metrics being measured, not the ones which matter - with foreseeable consequences for productivity.
Here's what happens when a manager tries to fill tickets himself: his sense of control of the project is derived not from relationships of trust and cooperation with his reports, but from direct involvement in the code. So naturally, any challenging or critical piece of code ends up getting written by him (because otherwise, how could he be confident about it?)
The manager is essentially holding two jobs at once so they end up working late or being overly stressed at work.
The devs will feel intimidated to make architecture decisions, because they know if they do something their manager doesn't like, it will get refactored.
They will also feel as if they are only given the "grunt work" as all the challenging work is taken on by their manager.
The code itself is in a constant state of instability because there is a tension between the manager needing the other employees' help to get the code written on time, while also needing to have that complete control and mastery over the code that can only come from writing it yourself. So people's work gets overwritten continually.
This is very bad and it's very common - managers should learn to delegate as that is an essential part of their job. If they can't delegate they should remain as an individual contributor and not move into management.
The first chapter says: "The major problems of our work are not so much technological as sociological in nature." Sorry Google Memo Dude. DeMarco and Lister called it in the 80s.
Speaking of DeMarco, he also wrote a book about controlling software projects before Peopleware. Then in 2009 he denounced it. 
To understand control's real role, you need to distinguish between two drastically different kinds of projects:
* Project A will eventually cost about a million dollars and produce value of around $1.1 million.
* Project B will eventually cost about a million dollars and produce value of more than $50 million.
What's immediately apparent is that control is really important for Project A but almost not at all important for Project B. This leads us to the odd conclusion that strict control is something that matters a lot on relatively useless projects and much less on useful projects. It suggests that the more you focus on control, the more likely you're working on a project that's striving to deliver something of relatively minor value.
https://www.computer.org/cms/Computer.org/ComputingNow/homep...
https://en.wikipedia.org/wiki/Peopleware:_Productive_Project...
* Building one more generation of product than the market supports (so you build a new version when the market has moved on to something new).
* Rewarding productivity over quality.
* Managing to a second-order effect. For example, when Nestlé bought Dreyer's they managed to 'most profit per gallon', which rewarded people who substituted inferior (and cheaper) components; that led to lower overall sales, which led to lower overall revenue. Had they managed to overall revenue they might have caught the decline sooner.
* Creating environments where nobody trusts anyone else and so no one is honest. Leads to people not understanding the reality of a situation until the situation forces the disconnect into the mainstream.
* Rewarding popular employees differently than rank and file. Or generally unevenly enforcing or applying standards.
* Tolerating misbehavior out of fear of losing an employee. If I could fire anyone in management who said, "Yeah but if we call them on it they will quit! See what a bind that puts us in?" I believe the world would be a better place.
There are lots of things, that is why there are so many management books :-)
The worst is when you get two or more managers attending the same meeting. Then nothing will get done as they eat up all of the meeting time arguing about business rules, magnifying the complexity of the system until you end up with some Rube Goldberg chain of logic that they will completely forget minutes after they've left the meeting. A good manager knows to trust their employees and only intervenes to make sure those employees have the resources they need to do their jobs. The most effective managers are humble and respect the expertise of the experts they hire.
1) believing you can dramatically change the performance of an employee -- it's very rare to save someone and less experienced managers always believe they can.
1.5) corollary to the above: not realizing the team is aware and waiting for you to fix the problem and won't thank you for taking longer to do what's necessary.
2) believing that people don't know what you're thinking -- people see you coming a mile off.
3) thinking you can wait to fix a compensation problem until the next comp review -- everyone waits too long on these.
4) believing HR when they tell you that you can't do something that's right for your team -- what they're really saying is that you have to go up the ladder until you find someone who can force them to make an exception.
5) not properly prioritizing the personal/social stuff -- at least this is my personal failing, and why ultimately management has not stuck for me.
6) believing your technical opinion matters -- I've seen way too many VP's making technical decisions that they are too far from the work to make, trust your team!
It'd be fun to see a list of these from the non-management point of view. I'd start off with the inverse of #6 above:
1) believing your technical opinion matters -- the business is what ultimately matters.
- Focusing on fixing problems, rather than preventing problems
- Acting as yes-men to bad upper-management strategy, thereby creating a layer of indirection between the people who think it's a good plan vs the engineers who can explain why it's not quite that easy
- Trying to use software tools (e.g. Jira's burndown charts) to quantitatively/"objectively" measure engineers
* Preaching about the virtues of a flat organizational structure, but making unilateral decisions.
* Hiring people for a particular challenging job, but have them work on menial unchallenging tasks.
* Creating multiple layers of management for a tiny team.
* Facilitating post mortems that would be better facilitated by a neutral third party.
* Using vague management speak as a deliberate strategy to never be held responsible for anything.
* Rewarding politics with promotions.
* Marginalizing experienced employees.
* Talking too much about culture.
* Trying to be the company thought leader instead of helping people do their best work.
* Assuming that everyone underneath you views you as a career mentor.
* Negging employees.
* New hire managers: Firing incumbent employees after you've only been on the job for a few weeks.
* New hire managers: Not doing 1:1s with everyone who reports to you.
* New hire managers: Creating sweeping changes like re-orgs after a few weeks on the job.
* New hire managers: Doing things a certain way because it worked well at a previous company.
* New hire managers: Changing office work hours to suit your personal life.
(Similar problems can happen when a bunch of people with no management skills decide to found a company and start hiring people.)
Estimates get shortened. Technical decisions are overruled for business or political reason. Warnings about undesirable outcomes are ignored. Sheer impossibility deemed surmountable.
I feel this is the worst mistake by management because the technical people are the ones who suffer for it. Overtime, inferior software, frustration, technical debt, lack of quality, are all things management doesn't really care about because they can always just push people harder to get what they want.
None of this was really necessary. Every single year, they watched the backlog of work gradually climb over the course of the summer. Then, around September, they began insisting on overtime at psychological gunpoint to try to clear the backlog. It would have been entirely possible to allow people who met certain quality standards to work some overtime during the summer and cap how much could be worked. People could have competed for overtime slots instead of feeling forced into it. It would have worked vastly better for everyone.
Of course, an elegant solution like that takes a bit more planning on the end of management. Simply demanding extra hours at a certain point is a simpler, brute force method. But, I felt it had a lot of downside to it and was mostly avoidable for the company in question.
It makes me wonder how many companies basically create drama of this sort. Because this crisis was entirely created by management, IMO. There was zero reason they had to wait until it hit a certain volume and then force overtime on us.
- I'll repeat: "misusing agile in order to micromanage"
- Requiring all decisions to be signed off on
One of your teams writes messy code? Don't try to educate them. Instead, enforce strict coding standards that forbid all but the most basic complexity. Everyone else now has to make their code more verbose and objectively worse, while the problem team still writes bad code, but now they make even more of it in neater formatting.
2) Raise wages only for people who threaten to leave.
3) Run a high tech software development shop but have an IT department that assumes everyone only ever needs Excel and Outlook.
Ports are blocked. Local computer admin is locked. Updates are forced, delayed and centralized. Hardware is underpowered. Network blocks ping.
4) Demand to be in full control.
Make sure nobody does anything you don't understand. Shoot down experiments you can't see the point of, even if they're small. Hire skilled and experienced people, but demand that you can understand everything they do.
5) Let random people deal with hiring and interviews.
Hiring is both a hard and sensitive process. On one hand you are giving people an impression of your workplace, and on the other hand you are trying to evaluate the skill of someone who has a different skill set than yourself.
Giving this job to some burnt out elitist asshole who throws resumes in the garbage because they did or didn't include a cover letter, or a wannabe drill sergeant who tries to be "tough" and "test them under pressure" during interviews, gives you a bad rep in tech circles and doesn't help you hire skilled people. Giving it to someone who can't be bothered to reply to applicants or update them on rejections is also shitty.
6) Open fucking landscape workplaces.
Everywhere I've worked, the folks running the show have too much ego and political capital invested in products or projects that are turds. The result is massive financial losses for the business.
I had the luck to work with great managers at Amazon. From what I've seen, programmers are driving the company there, or at least they often have a say, and the power that comes with it. On my team, decisions relative to product development were clearly strongly driven by us. Seems to work pretty well for Amazon.
A lot of people see a building full of books and wonder why it can't be replaced by a bank of terminals and Google. I won't get in to the relative merits of dead trees vs. electrons, and largely don't care about it. What that line of thought misses is two-fold: the librarians and the community space.
Decent librarians are hugely underrated resources. Great ones can be incredible. Maybe natural language systems will become good enough in my lifetime to handle some of the vague requests librarians routinely manage to match to the right book, but the leaps of association to related topics, the knowledge of the edge cases of information classification to navigate them well, and the general mass of knowledge they accumulate is massively useful to have on hand. And so few people take advantage of it.
Meeting spaces in this context (both formal, sign-up-for-your-group and informal) serve an important role as well. It seems like they're becoming rarer as government buildings use security as an excuse to close to the public, and when calling around to private groups with spaces, those that previously did that sort of thing have been much more reluctant to do so when I've tried to organize things over the last several years.
To personalize this a bit, I grew up in a poor family. One thing that was heavily emphasized to me was the value of learning - I think it was a reaction to missed opportunities. Who knows what would have happened, but I do know that my college essays (written referencing library books, building on interests fostered in the math and the American Lit sections) would have been very different without them, and I kinda doubt I would have gotten a free ride to a top-10 school if I had been only drawing on what public school offered.
 Anecdata alert!
I think donating to the Internet Archive would be a better donation, with a lot more benefit to society than funding physical libraries.
Libraries solve one of the world's most important problems: keeping society's important information and history safe. Websites are not immune to this problem. They require maintenance. When a webpage goes down it's gone forever. Without something like the Internet Archive, we would not have a modern-day library equivalent for the web. We are losing a lot of important information. Physical libraries today are, in comparison, much less important than digital ones.
Libraries should evolve with the change of technology and move their function from curation of and access to information to something that is able to benefit more people. Books occupy volume, and removing them would make more room for desks and rooms that people with no access to quiet areas could use to be more productive.
I think one of the best places for a mega-philanthropist to invest would be in the time and places that kids spend outside of public schools. Many of the biggest disadvantages in opportunities for kids are created when they fall behind before and after school and during summers, relative to kids who are better off socioeconomically. These disadvantages compound and are lasting. Safe places to engage in healthy recreation, productive endeavors, and getting something nutritious to eat that they wouldn't otherwise have access to would go a long way for underprivileged youth and have an impact for the rest of their lives.
Raise taxes on people like Bezos and Gates for the needs of society.
Talk about diversity: the library is a place where you get to see people from all walks of life outside the Silicon Valley bubble (different race, age, handicap). It builds a learning community where people have the opportunity to help each other at a more human level.
Then do the same with legal records, although that is more of a legal problem than a money problem.
I don't know how it is in the US, but for instance German libraries offer to loan ebooks: http://www.onleihe.net/
Donating ereaders and rights to ebooks to libraries seems more effective than printed books.
Big caveats here are Amazon's monopoly position, DRM and copyright and loans for ebooks vs. physical books.
I co-founded Peer 2 Peer University, a non-profit that brings people together in learning circles to take online courses. When we switched from online-only to face-to-face meetings in public libraries, we started teaching adults who had fallen out of the education system and who were not benefiting from online courses. And I can't say enough positive things about the librarians who we work with.
Bezos should spend (or not spend) in ways and on things he values, to maximize what he gets out of what he's earned.
(P.S. libraries compete with his book-selling business! Why wouldn't he rather sell a library pass on a Kindle for a monthly subscription?)
I would hope we're going to make large strides in these in his lifetime. If we could effectively funnel more into R&D sooner, we'd all see the benefits sooner. Cancer(s), for example, might be cured in, say, 2060 with our current effort, but if we solved the problem by 2030, hundreds of millions would benefit.
With technology becoming cheaper and cheaper, right now, you can get a good enough computer to access all the world's information (the Internet) for $25. I can imagine that price being more like $5 in the next 10 years.
Since the goal of investing is planning for the future, it'd be an enormous waste to spend all of that money on the antiquated concept of a library. It will be about as useful as investing the money in VHS tapes.
Better education-related goals for billionaires: make more of the copyrighted information available for free public use. Help increase access to good quality internet and computers in poor communities.
Just books, staff and facilities: the three things that libraries always need, won't become obsolete in a few years, and are equally available to all patrons in an area.
Yes, public libraries need to evolve to meet their community's needs as they change. But just as a new coat of paint or solar-powered lighting doesn't strengthen an aging bridge, focusing on the flair rather than the core of what makes a library a library would be foolhardy.
Carnegie's legacy, the example used in the article, doesn't translate to the present.
If Bezos wanted to democratize information in a comparable way, perhaps he could underwrite universal access to high-speed Internet. Many many parts of the country still do not have reliable, high-speed, low latency Internet connections.
If the Kindle ever was jailbroken, well then the kid or whoever just learned about jailbreaking/hacking. Without Wifi or LTE support, likely no one would really bother.
I find this one inspiring:https://www.ted.com/talks/curtis_wall_street_carroll_how_i_l...
It seems to imply that someone, who wasn't competent enough to make billions of their own, is somehow more apt to know how to better spend them than the one that actually did.
just keep the money in banks. that's what they do.
I'm not rich in the popular sense of the word (besides having the fortune of being American middle class), but I do have investments by virtue of almost never spending on consumer goods. And having no wife or kids. My coworkers, after years of seeing me drive the same beater, correctly assume I'm in better shape financially, and some have the audacity to jokingly ask me to put them in my will.
Now, I will not deny that I am an extremely fortunate person who is cognitively able, like Bezos or anyone well-connected with material wealth, but what's with the 'he should donate to this cause instead'?
It's his money. He could buy a fleet of yachts, set them on fire, and upload the video footage - why shouldn't he be allowed to do that? At what arbitrary level of wealth does 'his' money become everyone else's money?
Seriously. Since when do we pick someone else's pocket and decide what to do with his money?
If things keep getting digitized at the current speed, all the knowledge of the world will be accessible online in our lifetime.
Unless you believe that a large percentage of citizens will not be able to afford a device for accessing the internet, libraries are a waste of money.
Oh and since librarians were mentioned, if AI keeps advancing, we will be able to have a conversation with a search engine within 30 years. So who needs a librarian?
For example, I don't think many on the right disagree that funding prenatal care is a good thing--but some major prenatal care providers, such as Planned Parenthood, also provide abortion services and so some politicians want to cut all their funding to make sure none of the Federal money goes to abortions. A whole bunch of women's health services get cut in order to make sure there is no chance the money ends up helping abortions.
So I'd like to see some billionaire, or some well-funded charity like the Gates Foundation, build several clinics that provide free abortions around the country in the states with the least restrictions on abortions, and fund a program that provides free travel to and from those clinics for women in the states with restrictive laws that have forced most such clinics to close.
Then organizations like Planned Parenthood can get completely out of the abortion business, taking away the major excuse that is used to cut their funding.
State legislators can stop spending a lot of time coming up with new ways to try to shut down abortion clinics in their states (because shutting down such clinics will no longer stop the abortions), and state attorneys general can stop wasting time defending those attempts in court, and maybe they will finally realize that the best way to reduce abortions is to make it so people don't need them in the first place. Maybe then states like Texas can drop their idiotic "abstinence only" approach to sex education (which has resulted in soaring teen pregnancy rates...) and switch to something actually effective.
Edit: any downvoters care to name specific objections? That Planned Parenthood provides a lot of useful women's health services that are not related to abortion should not be controversial. That abortion is the main reason Congress wants to completely defund PP should also not be controversial. That "abstinence only" programs are a massive failure is pretty well documented. That many states keep passing abortion restrictions which then get challenged and often struck down as unconstitutional is not controversial.
> To make it a real thing I'd start by calling morestack manually from a NOSPLIT assembly function to ensure we have enough goroutine stack space (instead of rolling back rsp) with a size obtained maybe from static analysis of the Rust function (instead of, well, made up).
> It could all be analyzed, generated and built by some "rustgo" tool, instead of hardcoded in Makefiles and assembly files.
Maybe define a Go target to teach Rust about the Go calling conventions? You may also want to look at "xargo", which is purpose-built for stripping or customising "std" and for working with targets that ship no binary stdlib.
Is there some issue with this approach that I'm missing? Is the additional process overhead really enough that it's worth bending over backwards to avoid it?
C is still the common denominator; you'd think interop would be easy, but it's hard. Years ago, when LLVM was showing promise and Google was going to get Python running on top of it, I was hopeful.
I guess nowadays it's a better design to run separate processes and have your languages communicate out of process (pipes, http, etc) rather than in-process.
Go strives to find defaults that are good for its core use cases, and only accepts features that are fast enough to be enabled by default, in a constant and successful fight against knobs.
He is just writing a more direct, manual version of cgo in assembly, one that bypasses a lot of what cgo does in order to be much faster.
> Before anyone tries to compare this to cgo
The only meaningful message in this blog is that it's possible to write a faster cgo, that's it. Comparing it to cgo is the only useful possible outcome, but...
> But to be clear, rustgo is not a real thing that you should use in production. For example, I suspect I should be saving g before the jump, the stack size is completely arbitrary, and shrinking the trampoline frame like that will probably confuse the hell out of debuggers. Also, a panic in Rust might get weird.
So when you actually fix all those things, you might be back where cgo was at the beginning.
This guy comes across as a classic "but I wanna be cool" hacker who discovers that when you bypass all the normal protections in a library and make some kind of direct custom call, things can be faster.
I guess so what?