hacker news with inline top comments    22 May 2016 Best
How Technology Hijacks People's Minds medium.com
1531 points by prostoalex  2 days ago   404 comments top 72
lars 2 days ago 23 replies      
Wow, I'm glad this was posted.

I think the way we use technology today will be looked back on the same way we look back at naive cigarette smoking in the 1950s.

Modern app design isn't about creating things that are good for the user, but about creating want in the user. This is a problem.

For example, there are several studies showing that using Facebook in general makes people less happy. User happiness just happens to not be necessary for Facebook to be a successful business.

Go to a developer conference by one of the big tech companies, and speakers generally aren't talking about doing good things for the user. They'll use euphemisms like "increasing engagement". There's concepts like "permission priming", psychological tricks to get the user to do what you want. There's books written about how to maximize app addictiveness. It's stuff that mildly screws over the user, and it guides the product designs that affect the lives of billions of people. It's not good.

greggman 1 day ago 2 replies      
I had a mildly negative reaction to this article, which seems different from most people's reactions here.

Yelp example: I get that Yelp is limiting my options, but what did I do before Yelp? Pretty much I just went to the 4 or 5 places I already knew instead of seeing if there was something new nearby. I've tried hundreds of new places because of recommendations from Google Maps or Yelp that I likely would not have tried otherwise. Certainly not before I had this thing in my pocket that let me research at the moment of desire instead of having to plan ahead.

Similarly I look for meetups and have been to way more activities that I would have gone to in the past.

On the slot machine idea: Before smartphones and apps I'd flip through a magazine and hope there'd be something interesting. Before digital TV, back when channel surfing tuned into each channel immediately instead of taking 3-5 seconds, I'd flip through channels hoping to discover an interesting program. When I buy a book, I'm gambling that it's going to catch me. When I go to a new restaurant, I'm gambling that it's going to be delicious (it often isn't).

I guess I don't really see the difference between that and many of the things listed on that list.

I also do uninstall apps. I've un-followed 90% of my connections to keep my feed actually relevant to me.

Sure, I do spend too much time (in my own opinion) on the net for various things. Hopefully I can keep it under control.

It got more scary for me toward the end, where he seemed to be calling for government regulation: a digital "bill of rights", an FDA for Tech. No doubt he's assuming he'll be on the committees that decide what's best for everyone else.

forgottenpass 2 days ago 5 replies      
Apparently google has an Ethicist.

Ctrl-F: Ad

Huh. Zero mentions of the thing that manipulates people by design.

I guess I don't blame him, because even with a blind spot to ads it would seem Google still doesn't give two shits about his input. He explicitly calls out a problem YouTube feature, which he certainly would have voiced internally before externally. And he doesn't work at Google anymore.

Google must want results from ethicists the way tobacco companies of old wanted results from in-house scientists.

Jtsummers 2 days ago 8 replies      
Kind of an interesting idea about how our tools (generic) present us with limited menus and effectively restrict our options.

Facebook has expanded (barely) the options for basic responses to posts (no longer just like, but also a handful of emoticons to express laughter, anger, sadness, etc.). Not as full an option as when using the comment box, but for quick responses it allows for greater expressiveness. At least people don't have to "like" the tragic news of their friend's family deaths anymore.

But then look at Allo, announced by Google yesterday, with its @google bot that will help people decide how to make basic, trivial responses to pictures posted to threads. (I'll try to find the link later, but the demo was with a graduation photo, and a few suggested responses like "You look great!" or "Congratulations, so happy for you" or something similar.)

By pushing the job of coming up with options to tools, like the choice of restaurants and bars to Yelp, we narrow our worlds. We limit our expressiveness and creativity.

I don't know that I have a point, just some thoughts.

imgabe 2 days ago 3 replies      
The bit about choices reminds me of the soda cup debacle from when Bloomberg proposed limiting the size of soda containers that some venues could sell.

Everyone griped about how it limited their "freedom of choice", but nobody asked why those particular sizes were the choices available in the first place. 7-11 and others decided that they could add $0.05 worth of soda and charge $0.25 more and make more money. People would buy it because, look, you get 50% more for only a quarter!

Meanwhile the choice of buying less was never presented as an option.

bikamonki 2 days ago 11 replies      
Does anyone else feel the slot machine effect here on HN is the karma displayed on the top-right? After a submission or comment, is that number the first thing you check when you come back to the site?
danr4 2 days ago 1 reply      
I enjoyed the article.

The most important takeaway for me was " now companies have a responsibility to reduce these effects by converting intermittent variable rewards into less addictive, more predictable ones with better design"

I think most of the techniques listed actually cause pain for users, the same way addictions do. I think a lot of people are aware at some level that they are giving in to temptation, and it makes them feel worse about themselves.

On the contrary, when an app makes a prediction and nails it, I tend to appreciate it much more, and feel it helped me rather than lured me. My only gripe with predictability is it usually entails giving up a big portion of my privacy.

In my idealist mind, somewhere in the future, personal privacy will be a default state of mind for service providers. Total self-privacy combined with life analytics (Lifelytics?) which empower streamlining one's routine is a dream I hope I witness come true.

blabla_blublu 1 day ago 1 reply      
My 2 cents (opinion) on this and how I am struggling to cope with tech usage in my life.

I noticed that I spend a disproportionately high amount of time "consuming" rather than "creating". Recently I have been making efforts to create more: writing, drawing, painting. Something, anything, so that the brain can spend some time coming up with new things instead of just reading/passively participating.

Technology has empowered a lot of us to create more, but it has also played a huge part in 'consumption', since it is so easy to swipe up and get the next article and the next and the next. Not to forget passive reading where I just skim through without actually paying any kind of attention to detail.

kreutz 1 day ago 0 replies      
This is not specific to technology and can essentially be traced back to capitalism, in every industry. Businesses have nothing to gain by making product decisions around "will this help the customer's well-being?". If it does not help them sell more, do more, make more, it does not matter. All consumer-facing companies (apps, games, food, travel) are gamified to grow the business, regardless of whether there is a government agency to influence them. I'm all for making product and business decisions around these ideas, but these psychological tricks have been applied to consumers for decades, long before the Internet.
wslh 2 days ago 2 replies      
I was really interested in the article until I read: "I spent the last three years as Google's Design Ethicist...". In the article he is focusing on the application level, but the issue is at the form-factor level.

One simple example: if you give a child a Simon game app based on the original Simon game [1], he/she will probably end up switching to YouTube and watching stupid videos, but if you give him/her the "limited" form factor version of the game, the child will have more fun.

[1] https://en.wikipedia.org/wiki/Simon_(game)

siglesias 1 day ago 1 reply      
Anybody else struck by the irony of the glaring "Don't miss Tristan Harris's next story" dialog at the bottom of the article?

Note the verbiage exploiting Fear of Missing Something Important.

guard-of-terra 2 days ago 0 replies      
When people are given a menu of choices, they rarely ask:

  - what's not on the menu?
  - why am I being given these options and not others?
  - do I know the menu provider's goals?
  - is this menu empowering for my original need, or are the choices actually a distraction? (e.g. an overwhelming array of toothpastes)
It's so much more about politics (elections of all levels) rather than about technology!

lilcarlyung 2 days ago 2 replies      
Sooo... These are the things that I need to do to build a successful app?
nnd 1 day ago 1 reply      
Nothing new here. If anyone is interested in learning about these mechanisms in depth, there is a great book, "Hooked" by Nir Eyal.

The real problem here is the users. If they keep unconsciously falling for the same tricks over and over again, rather than taking a stand and rejecting manipulative products, there is no incentive for product makers to create products which are _not_ manipulative. You can compare it to eating junk food rather than choosing healthy options.

AndrewKemendo 1 day ago 1 reply      
This is classic game theory.

You lose as a company if you don't build in these addictive features. So everyone does it because, as he made clear, attention is currently the currency of business. All of these companies, and new companies, would have to agree not to build these behaviors in.

The idea of an FDA or bill of rights for technology is great in the holistic macro sense, but I think unrealistic as it is not aligned with the interests of companies.

It's the same with any externalities, be it pollution or labor exploitation: things that have a clear nexus with bad outcomes, but that we struggle to limit because of overwhelming business interests.

Animats 1 day ago 1 reply      
Hm. Maybe we need a system for mail and messaging which puts most items into a bin to be read later. Once every N hours, it shows you the "later" items.

(One real headache is the demise of third-party messaging clients. You can't write a Twitter client any more, or a Facebook client, or a WhatsApp client, or a Slack client. This is a big problem, because the vendor clients work for the man, not for you. With email, you're still in control, but not with the proprietary systems.)
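A minimal sketch of the "read later" bin described above, in Python. All names here are invented for illustration; a real client would persist messages and use wall-clock time rather than an injected one:

```python
from dataclasses import dataclass, field

BATCH_SECONDS = 4 * 3600  # release the "later" bin every N hours

@dataclass
class BatchedInbox:
    pending: list = field(default_factory=list)   # held-back items
    visible: list = field(default_factory=list)   # what the user sees now
    last_release: float = 0.0

    def deliver(self, msg, urgent=False):
        # Urgent items bypass the bin; everything else waits for the batch.
        (self.visible if urgent else self.pending).append(msg)

    def tick(self, now):
        # Called periodically; moves the bin into view once per interval.
        if now - self.last_release >= BATCH_SECONDS:
            self.visible.extend(self.pending)
            self.pending.clear()
            self.last_release = now
```

The point of the design is that notifications would only fire on the batch release, so the client, not the sender, decides when you get interrupted.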

Mendenhall 2 days ago 1 reply      
There are companies that have whole teams of "addiction specialists" that some video game companies hire to give talks to their producers. I know this from personal experience, they are literally trying to make you addicted and have that goal in mind.
AngrySkillzz 2 days ago 0 replies      
This is what everyone means when they say "the medium is the message."
tdaltonc 1 day ago 2 replies      
> We need our smartphones, notifications screens and web browsers to be exoskeletons for our minds and interpersonal relationships that put our values, not our impulses, first. People's time is valuable. And we should protect it with the same rigor as privacy and other digital rights.

I think that the only way we can do this is for our technology to make our values impulsive.

tlb 2 days ago 0 replies      
How can one separate the inherent addictiveness of social approval (which we evolved to cope with) from the added addictiveness due to slot-machine rewards?

(Discrete measures of approval, like a count of Likes, add their own variance in addition to the inherent variability of how much people liked something. That discretization is a property of the digital system, and adds the variance of a binomial distribution.)

So when you say something IRL, you can gauge a lot of fine gradations of approval in the way people react. But online, where you get a small count of upvotes, the quantization adds variance and makes it more addictive.

The added variance is most significant at small sample sizes. People whose submissions get hundreds of votes might find the process less addictive than people whose submissions get a handful of votes.

Would HN be less addictive if the upvote process was more analog, somehow? Say, by adding up the duration people held down the mouse button for?
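The quantization point above is easy to check numerically: if each of n readers upvotes independently with probability p, the observed count is binomial, adding variance n * p * (1 - p) on top of the variance in p itself, and that extra noise is proportionally largest for small audiences. A quick simulation sketch (the numbers are illustrative, not from this thread):

```python
import random

def upvote_count(p, n):
    """Simulate n readers, each upvoting independently with probability p."""
    return sum(1 for _ in range(n) if random.random() < p)

def sample_variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

random.seed(0)
n, p = 20, 0.3  # small audience, so quantization noise dominates
counts = [upvote_count(p, n) for _ in range(10_000)]
# The binomial model predicts Var = n * p * (1 - p) = 4.2 here;
# the simulated sample variance should come out close to that.
print(round(sample_variance(counts), 2))
```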

amelius 1 day ago 0 replies      
> That's why I spent the last three years as Google's Design Ethicist caring about how to design things in a way that defends a billion people's minds from getting hijacked.

Every time I see the doodle, I have the feeling I'm being hijacked. After playing with the doodle I often forget what I came to Google for. Just saying.

xufi 2 days ago 3 replies      
It saddens me whenever I go out with my friends and I try to have a conversation, that everyone is glued to their phones and can't have a face-to-face conversation.
tmaly 1 day ago 0 replies      
Great post, the whole Yelp part really spoke to me as I have been working on my own side project to help me find better food dishes. I am trying to wrap my mind around this slot machine concept as I really do not want people using my project like this.

Any suggestions on UI would be greatly welcome.

ca98am79 1 day ago 0 replies      
> Imagine a digital bill of rights outlining design standards that forced the products used by billions of people to support empowering ways for them to navigate toward their goals.

This scares me

jonstokes 2 days ago 1 reply      
First thought: I gotta get off all of these apps. I always knew they were messing with me, and now I know how!

Second thought: Our UX designer needs to read this ASAP... it's basically a "best practices" guide for making a social app that shows "traction".

cynoclast 1 day ago 1 reply      
I loved the part about the "friction required to enact choices". Like how it's outstandingly easy to get a bank account, and how, as part of the law regarding privacy, banks are required to provide a method for you to opt out of them selling your data: that method involves them sending you a paper privacy notice, and to opt out you have to send them a written letter (no form is provided), to an address you have to write out yourself, at your own cost.

That always infuriated me.

It should be as easy as signing up.

jamesmiller5 1 day ago 0 replies      
This resonates with another great article, "The Slow Web": http://jackcheng.com/the-slow-web .
henrik_w 1 day ago 1 reply      
I just finished reading "Deep Work" by Cal Newport - great book. The central theme of the book is that you need distraction-free focus to do your best work (especially relevant for programmers, I think). Social media goes against this - constantly checking HN, Twitter or FB breaks this focus. He basically recommends quitting altogether. A good start is to try to go a full day without checking any social media - it's harder than it sounds.
knowaveragejoe 2 days ago 2 replies      
One nitpick: maybe it's just the grocery stores near me, but the pharmacies are almost always located near the front, right by where you walk in. Milk is often in the back, however.
Kinnard 2 days ago 0 replies      
The menu options metaphor applies so well to government. Especially given our current predicament.
excalibur 1 day ago 1 reply      
He had me until the end.

"Peoples time is valuable." True

"And we should protect it with the same rigor as privacy and other digital rights." LOLOLOLOL

Lxr 1 day ago 0 replies      
> But grocery stores want to maximize how much people buy, so they put the pharmacy and the milk at the back of the store.

Early competitors of Google thought that accurate search results only served to drive users away from their site, and that search was therefore an unimportant part of an overall "Internet company". It would be interesting to see a grocery chain with the philosophy of "find what you want really fast" rather than "make it really difficult to find what you want so you spend longer here" - by the same analogy, it could be very successful.

devy 1 day ago 0 replies      
Tristan Harris, the OP of this Medium post, lists on his main project site [1] a table comparing today's engaging/addictive/time-sink product characteristics with the time-well-spent products he advocates. Of the 11 points, two have the most profound sociological impact or inertia:

* Success Metrics: measure success by net positive contributions rather than interactions

* Business Model: use non-engagement based advertising rather than engagement advertising.

I fully applaud Tristan's vision and mission but am skeptical of how quickly companies, VCs, and society can adopt it.

[1]: http://timewellspent.io/

lisper 1 day ago 1 reply      
This is not just a problem at the consumer level. My entire career I have been told that I couldn't do X because it wasn't "industry standard" or "best practice" or some other code word for "not on the menu."
thaw13579 1 day ago 0 replies      
The article does a good job of showing examples, but strangely, it doesn't connect this in any way with the vast related research in psychology and economics. This makes me skeptical of the claims of expertise...
dspoka 1 day ago 0 replies      
I love how on one side there is an engineer at Google talking about how addictive technology is, and then on the same day someone else at Google releases an API to make mobile phones into even more clever slot machines: https://developers.google.com/awareness/?ref=producthunt
sbierwagen 1 day ago 1 reply      
A note of slight relevance: fastmail.com's native client doesn't actually have a manual refresh button. You only get a notification for new mail if the server pushes one to you.
grok2 1 day ago 0 replies      
I think this is not necessarily right -- existing technology limits people's choices and sometimes directs them to behave in a certain way, but it's not like people don't know they are being manipulated or that they are being given limited choices; they live with what's available if it is satisfactory enough. But if it doesn't really satisfy them, people do go out of their way to get what they want... mostly. At least that's what I see from a data point of 1.
vicbrooker 2 days ago 1 reply      
I can't see a way for this style of design to be feasible that doesn't rely on a subscription-based business model.

As long as users allow free + advertising to be the standard way to build a dominant tech company, I would assume anyone that tries to compete by not optimising for advertising (e.g. by reducing friction for users) will lose. Which is a damn shame.

Based on my limited knowledge of the history of news media, there's a cycle between free + advertising and paid + high quality. Intuitively it should apply to other verticals too, and I hope that, in reality, it does.

ljk 1 day ago 1 reply      
Very eye-opening read.

few random thoughts:

> Hijack #7: Instant Interruption vs. Respectful Delivery... By contrast, Apple more respectfully lets users toggle "Read Receipts" on or off

Unfortunately, the iPhone has a new feature to reply to messages immediately even from the lock screen. Looks like everyone is guilty of this.

> now that you know I've seen the message, I feel even more obligated to respond.

I've always tried to purposefully not respond right away even when it said I'd read it already, but it does feel weird/rude.

tsunamifury 2 days ago 1 reply      
It's called "railroading" and it's been around forever. The only new trick is to add more options that still railroad you in the same general direction.

The problem is that human minds are smarter than you think: they tend to disengage when they sense this, and they strongly want to get off the rails. The user does this by quitting and/or deleting the service. Over time you realize it's best to give the user all their options; they are then more likely to keep using your product.

kercker 1 day ago 0 replies      
The author apparently has his own agenda, after seeing the time well spent link all over the article.
ommunist 22 hours ago 0 replies      
This is another perspective on what Nick Carr noticed in his brilliant "The Shallows", and it is invaluable. I am glad Google is not entirely evil.
1_2__3 1 day ago 0 replies      
I'd say it's worse, because now that we don't pay for things (ad support ftw?) the service providers have a much stronger incentive to keep you on the site than they do to make you happy. They've slowly but surely developed methods that do exactly that, leaving us all wondering why we spend so much time on sites/in apps that we don't actually enjoy.

This isn't going to change.

AlexandrB 2 days ago 2 replies      
Some of the design anti-patterns he's describing smell a lot like the broken window fallacy [1]. Sure, apps can get big on getting users addicted and "engaged", but there's no actual economic benefit being produced by such design - just tons of opportunity cost to users.

[1] https://en.wikipedia.org/wiki/Parable_of_the_broken_window

danvoell 1 day ago 0 replies      
Would it be possible to build an app interface which interacts with these apps yet works on behalf of the user? I might be willing to pay for something like this.
gentleteblor 1 day ago 0 replies      
I wonder what impact this knowledge would have on the public's perception of Silicon Valley (and the tech industry in general) if it got very popular. The current thinking seems to be something akin to: Oil & Gas Industries bad, Tobacco Industry bad, Fast Food Industry Bad, Big Pharma bad. Startups Good! Silicon Valley Good. Tech will fix everything. But it's the same old game.
tdaltonc 1 day ago 0 replies      
If you're interested in learning more about these power and how to use them for good, consider joining us:



hypertexthero 1 day ago 0 replies      
A book on a similar theme from before the internet is [Ways of Seeing][1] by John Berger. The typography, set in bold throughout, doesn't do the text any favors, but the writing and information is good.

[1]: https://en.wikipedia.org/wiki/Ways_of_Seeing

bcoughlan 1 day ago 0 replies      
> What if your email client gave you empowering choices of ways to respond, instead of "what message do you want to type back?"

Communicating directly via writing or talking is the only thing here that is not driven by limited choices.

It reminds me of the koan about the expressiveness of the command line compared to point and click:


One evening, Master Foo and Nubi attended a gathering of programmers who had met to learn from each other. One of the programmers asked Nubi to what school he and his master belonged. Upon being told they were followers of the Great Way of Unix, the programmer grew scornful.

"The command-line tools of Unix are crude and backward," he scoffed. "Modern, properly designed operating systems do everything through a graphical user interface."

Master Foo said nothing, but pointed at the moon. A nearby dog began to bark at the master's hand.

"I don't understand you!" said the programmer.

Master Foo remained silent, and pointed at an image of the Buddha. Then he pointed at a window.

"What are you trying to tell me?" asked the programmer.

Master Foo pointed at the programmer's head. Then he pointed at a rock.

"Why can't you make yourself clear?" demanded the programmer.

Master Foo frowned thoughtfully, tapped the programmer twice on the nose, and dropped him in a nearby trashcan.

As the programmer was attempting to extricate himself from the garbage, the dog wandered over and piddled on him.

At that moment, the programmer achieved enlightenment.

thesrcmustflow 1 day ago 0 replies      
> One major reason why is the #1 psychological ingredient in slot machines: intermittent variable rewards.

I first came across this concept in a Hello Internet podcast, and it's amazing how much you see it in just about everything on the internet once you've heard of it.

Dowwie 1 day ago 0 replies      
Much of this content is associated with the field of Behavioral Economics. For instance, in Hijack #1, the author is speaking of what academia has called "Choice Architecture". For more about that, you can freely access multiple studies published by Thaler and Sunstein.
johnchristopher 1 day ago 0 replies      
tldr; : don't use social networks on your smartphone or you're going to miss out on real life.
galfarragem 1 day ago 0 replies      
I found it funny when the author also uses manipulative tricks:

- he finishes the article with two links (one direct and one indirect) for the same website;

- he (or the software) underestimates the time needed to read the article.

hughw 1 day ago 0 replies      
/me furiously redesigning our app to be a slot machine....
tomc1985 1 day ago 0 replies      
Perhaps we are in need of a cultural shift that emphasizes "doing it right" instead of "making all the money"
have_faith 2 days ago 2 replies      
Technology doesn't hijack our minds, people do. It's an age old practice of blaming the tools.

The logical next step might be to ask well then how do we fix people? (aka, society). Maximum freedom in society means maximum freedom to be manipulated by others.

Perhaps ironically, the closer we get to solving world hunger and eradicating diseases and so on the closer we become to overpopulation and overcrowding our little planet that once seemed so large. Our ineptitude at cooperating with each other and our ability to manipulate each other so easily is probably the only thing stopping highly accelerated human progress, and thus our own demise.

Absurdism looks more appealing every day.

JoeDaDude 1 day ago 0 replies      
I could not help but notice the block at the end that asked one to "Read Tristan's next blog" with a button to Follow him.
krosaen 1 day ago 0 replies      
tard 1 day ago 1 reply      
how come this article only got 2 points[1] when someone submitted it here a day ago, but this time it's at the top of everything?

[1] https://news.ycombinator.com/item?id=11726766

frogpelt 1 day ago 1 reply      
timewellspent.io contains a perfect example of indirectly hijacking our agency: the video play button.

That little triangle has almost become impossible to resist. My two-year-old can spy it from across the room and he runs over and begs me to click it.

What if we stopped using icons that have programmed our brains?

imacomputer2 1 day ago 0 replies      
Estimated reading time is 12 minutes? Who reads that fast!
cdnsteve 1 day ago 0 replies      
A brilliant article.
mck- 1 day ago 0 replies      
How Hacker News Hijacks your day...
astazangasta 1 day ago 0 replies      
It's not "technology" that is hijacking people's minds, it's specific companies who are doing it. You don't blame the magician's hat for fooling you.

The problem, as usual, is that technology is slave to the boring, insipid demands of capital to get us to click on ads and purchase more snow-pants.

hackaflocka 1 day ago 0 replies      
Nothing "the government" can't solve!!!
bobwaycott 1 day ago 1 reply      
I spend most of my internet time on HN. It makes me wish HN tracked this and, each time I load the page, prompted me with a "Do you really want to spend the next n minutes here?"
marknutter 1 day ago 1 reply      
Interesting read, but is the author being serious when they suggest that the FDA should be setting standards for how software UX is designed? I can't even begin to imagine how much of an unmitigated disaster that would be.
zepto 1 day ago 1 reply      
Have you considered that the manipulation of people's minds in consumer societies might actually be part of the reason why there is so much exploitation of people in more vulnerable communities?
fsiefken 2 days ago 1 reply      
Snow Crash
Pulce 1 day ago 0 replies      
Likes were at 666 when I went to the page... that hijacks me.
tacos 2 days ago 0 replies      
Novelty, not technology. The same cognitive behavior is exploited on restaurant menus.
Reverse Engineering a Mysterious UDP Stream in My Hotel gkbrk.com
1300 points by gkbrk  1 day ago   167 comments top 30
Animats 19 hours ago 4 replies      
At least they play music. I once stayed in a Howard Johnson's Motor Lodge in Pittsburgh near CMU, where, at corridor intersections, they had speakers to generate some ambient noise to mask voices from the rooms. Most places that do this use white noise, or water sounds. But this was Pittsburgh. They were playing faint machinery noises - whirr, chunk, etc. At first I thought someone had left a PA microphone open in the boiler room or something, but no, it was deliberate.
tharshan09 1 day ago 7 replies      
Can you send your own UDP packets to the elevator then?
janci 1 day ago 3 replies      
Now I know what to do in a hotel!


tonyedgecombe 1 day ago 4 replies      
I was in an elevator a while ago when there was a ringing followed by a voice trying to sell PPI services (a regular source of spam in the UK). The emergency system was just an embedded phone.
daveguy 18 hours ago 2 replies      

Revelation/Disappointment -- it is elevator music.

Or is it? Maybe he gave up too quick. Maybe that is how they disguise the secret spy transmissions!


ReedJessen 20 hours ago 0 replies      
This is really well written. Good length for one quick bus ride. I was on pins and needles until the end. Great blog post.
omash 1 day ago 1 reply      
That was just the carrier, hiding the steganographic payload.
dkopi 19 hours ago 2 replies      
Binwalk (https://github.com/devttys0/binwalk) is a great tool for finding potential files within a given binary stream. It has an incredible list of supported file types.
mhd 1 day ago 0 replies      
You had to follow that shaggy dog a long time, but it finally led you to the girl from Ipanema.
detaro 1 day ago 3 replies      
Reminds me how surprised I was when I found out that many IP phone installations use multicast strictly to distribute on-hold music to all phones, instead of the phones pulling files from a server or storing them locally.
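For the curious, joining such a multicast group from Python takes only a few lines; this sketch is modeled on what the article does, but the group address and port below are placeholders (use whatever your packet capture shows):

```python
import socket

def make_membership_request(group, iface="0.0.0.0"):
    """Pack an ip_mreq struct: multicast group address + local interface."""
    return socket.inet_aton(group) + socket.inet_aton(iface)

def open_multicast_socket(group, port):
    """Bind a UDP socket and join the given multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))  # listen on all interfaces for this port
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group))
    return sock

def dump_stream(group, port, count=10):
    """Print the source and size of the next `count` datagrams."""
    sock = open_multicast_socket(group, port)
    for _ in range(count):
        data, addr = sock.recvfrom(2048)
        print(f"{addr[0]}:{addr[1]} -> {len(data)} bytes")

# Usage (blocks until packets arrive; group/port are placeholders):
# dump_stream("239.255.255.250", 1900)
```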
SoonDead 1 day ago 1 reply      
Excellent story, although the URL spoiled it for me; it might be worth changing it to something more vague.
gnicholas 9 hours ago 0 replies      
A story with a decidedly less innocuous outcome: https://medium.com/@nicklum/my-hotel-wifi-injects-ads-does-y...
kw71 21 hours ago 2 replies      
Next time you see a multicast stream, try playing it with vlc.
ngcazz 5 hours ago 0 replies      
As someone who doesn't really grok either Python or IP programming, I really enjoyed how simple yet educative (and ultimately, funny) this blog post was. Definitely will be trying this exercise the next time I end up at a hotel with a laptop.

I wonder how hard it might be to hijack the stream to have the receivers play your own packets.

buttershakes 17 hours ago 0 replies      
It's probably a microphone hidden in his room that encodes the data steganographically into elevator music.

More seriously, no investigation as to what happens when you try to inject your own data?

Namidairo 1 day ago 0 replies      
Have you tried throwing the raw recording against SoX? Worked pretty well when I had an unknown recording to play for which I had guessed the format. (Which turned out to be IMA-ADPCM with reversed byte ordering)
nowprovision 1 day ago 1 reply      
ah damn, not a network guy, but couldn't they put the lifts on their own subnet and avoid polluting the constrained airwaves? good read :)
caylorme 13 hours ago 0 replies      
Now just copy the headers and send some multicast audio of your own to hijack the elevator and bathroom audio :) Could make for a good prank.
djabatt 20 hours ago 1 reply      
Great network engineering. I think we all need a set of tools that allow us to find out if we are being bugged by all the networked smart TVs, printers, VOIP phones, etc. Not to mention the Amazon Echo. I'd dig it if someone had a Wireshark/Python app that allowed everyone to listen to what the Amazon Echo was sending to Amazon.
deepsun 23 hours ago 2 replies      
Does it drain battery of mobile devices not listening to the port?

Have you tried multicasting your own audio to the same port? That might have been fun.
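Sending your own audio is the mirror image of the capture: build RTP packets and fire them at the same group and port. A sketch — the payload type, group, and port would be guesses taken from the capture, and real phones may ignore streams with an unexpected SSRC or sequence numbering:

```python
import socket
import struct

def build_rtp_packet(seq, timestamp, ssrc, payload, payload_type=0):
    """Build a minimal RTP packet (RFC 3550): version 2, no padding,
    extension, or CSRC list. payload_type=0 is PCMU (G.711 mu-law),
    a guess at what telephony gear would expect."""
    header = struct.pack(
        "!BBHII",
        2 << 6,                  # V=2, P/X/CC all zero
        payload_type & 0x7F,     # marker bit clear
        seq & 0xFFFF,
        timestamp & 0xFFFFFFFF,
        ssrc & 0xFFFFFFFF,
    )
    return header + payload

def send_to_group(packet, group, port):
    """Fire one packet at a multicast group; whether any phone plays it
    is up to the receivers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(packet, (group, port))
    sock.close()
```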

known 23 hours ago 1 reply      
I'd start with

tcpflow -p -C -i eth0 port 80 | grep -oE '(GET|POST|HEAD) .* HTTP/1.[01]|Host: .*'

daveheq 21 hours ago 2 replies      
I love knowing I get spammed with elevator music over WiFi at a hotel the whole time.
cmarcond06 19 hours ago 2 replies      
Is this legal? What if the media was from Hotel Cameras?
KillerRAK 1 day ago 0 replies      
I applaud the effort and your curiosity. Any good tunes on that stream? Perhaps you're working on a remix?
selectiveupvote 16 hours ago 0 replies      
Okay, I got a good laugh out of this one and appreciate it.
binaryanomaly 1 day ago 0 replies      
Hehe, nice story! Congrats ;)
lynxaegon 1 day ago 0 replies      
haha! Best reverse engineering of elevator music ever :)
m00dy 1 day ago 0 replies      
Nice story
matiasb 1 day ago 0 replies      
It was an ex-NSA agent talking to his mom :)
punnerud 17 hours ago 0 replies      
Do you know you are a nerd when you laugh out loud after reading this?

My girlfriend: Was that something I would also laugh at?
Me: Most likely not ;)

Google supercharges machine learning tasks with TPU custom chip googleblog.com
820 points by hurrycane  3 days ago   273 comments top 45
luu 3 days ago 15 replies      
I'm happy to hear that this is finally public so I can actually talk about the work I did when I was at Google :-).

I'm a bit surprised they announced this, though. When I was there, there was this pervasive attitude that if "we" had some kind of advantage over the outside world, we shouldn't talk about it lest other people get the same idea. To be clear, I think that's pretty bad for the world and I really wished that they'd change, but it was the prevailing attitude. Currently, if you look at what's being hyped up at a couple of large companies that could conceivably build a competing chip, it's all FPGAs all the time, so announcing that we built an ASIC could change what other companies do, which is exactly what Google was trying to avoid back when I was there.

If this signals that Google is going to be less secretive about infrastructure, that's great news.

When I joined Microsoft, I tried to gently bring up the possibility of doing either GPUs or ASICs and was told, very confidently, by multiple people, that it's impossible to deploy GPUs at scale, let alone ASICs. Since I couldn't point to actual work I'd done elsewhere, it seemed impossible to convince folks, and my job was in another area, so I gave up on it, but I imagine someone is having that discussion again right now.

Just as an aside, I'm being fast and loose with language when I use the word impossible. It's more that my feeling is that you have a limited number of influence points, and I was spending mine on things like convincing my team to use version control instead of mailing zip files around.

bd 3 days ago 3 replies      
So now open sourcing of "crown jewels" AI software makes sense.

Competitive advantage is protected by custom hardware (and huge proprietary datasets).

Everything else can be shared. In fact it is now advantageous to share as much as you can; the bottleneck is the number of people who know how to use the new tech.

abritishguy 3 days ago 5 replies      
I think this shows a fundamental difference between Amazon (AWS) and Google Cloud.

AWS's offerings seem fairly vanilla and boring. Google are offering more and more really useful stuff:

- cloud machine learning

- custom hardware

- live migration of hosts without downtime

- Cold storage with access in seconds

- bigquery

- dataflow

manav 3 days ago 2 replies      
Interesting. Plenty of work has been done with FPGAs, and a few have developed ASICs like DaDianNao in China [1]. Google though actually has the resources to deploy them in their datacenters.

Microsoft explored something similar to accelerate search with FPGAs [2]. The results show that the Arria 10 (20nm latest from Altera) had about 1/4th the processing ability at 10% of the power usage of the Nvidia Tesla K40 (25w vs 235w). Nvidia Pascal has something like 2/3x the performance with a similar power profile. That really bridges the gap for performance/watt. All of that also doesn't take into account the ease of working with CUDA versus the complicated development, toolchains, and cost of FPGAs.

However, the ~50x+ efficiency increase of an ASIC though could be worthwhile in the long run. The only problem I see is that there might be limitations on model size because of the limited embedded memory of the ASIC.

Does anyone have more information or a whitepaper? I wonder if they are using eASIC.

[1]: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=701142...

[2]: http://research.microsoft.com/pubs/240715/CNN%20Whitepaper.p...

semisight 3 days ago 5 replies      
This is huge. If they really do offer such a perf/watt advantage, they're serious trouble for NVIDIA. Google is one of only a handful of companies with the upfront cash to make a move like this.

I hope we can at least see some white papers soon about the architecture--I wonder how programmable it is.

mrpippy 3 days ago 3 replies      
Bah, SGI made a Tensor Processing Unit XIO card 15 years ago.

evidence suggests they were mostly for defense customers:


jhartmann 3 days ago 5 replies      
3 generations ahead of Moore's law??? I really wonder how they are accomplishing this beyond implementing the kernels in hardware. I suspect they are using specialized memory and an extremely wide architecture.

Sounds like they also used this for AlphaGo. I wonder how badly off we were on AlphaGo's power estimates. Seems everyone assumed they were using GPUs; sounds like they were not, at least partially. I would really LOVE for them to market these for general use.

asimuvPR 3 days ago 1 reply      
Now this is really interesting. I've been asking myself why this hadn't happened before. It's been all software, software, software for the last decade or so. But now I get it. We are at a point in time where it makes sense to adjust the hardware to the software. Funny how things work. It used to be the other way around.
breatheoften 3 days ago 1 reply      
A podcast I listen to posted an interview with an expert last week saying that he perceived that much of the interest in custom hardware for machine learning tasks died when people realized how effective GPUs were at the (still-evolving-set-of) tasks.


I wonder how general the gains from these ASICs are and whether the performance/power-efficiency wins will keep up with the pace of software/algorithm-du-jour advancements.

RIMR 3 days ago 2 replies      
Somewhat off topic, but if you look at the lower-left hand corner of the heatsink in the first image, there's two red lines and some sort of image artifact.


They probably didn't mean to use this version of the image for their blog - but I wonder what they were trying to indicate/measure there.

danielvf 3 days ago 1 reply      
For the curious, that's a plaque on the side of the rack showing the Go board at the end of AlphaGo vs Lee Sedol Game 3, at the moment Lee Sedol resigned and AlphaGo won the tournament (of five games).
nkw 3 days ago 1 reply      
I guess this explains why Google Cloud Compute hasn't offered GPU instances.
fiatmoney 3 days ago 3 replies      
I'm guessing that the performance/watt claims are heavily predicated on relatively low throughput, kind of similar to ARM vs Intel CPUs -- particularly because they're only powering it & supplying bandwidth via what looks like a 1x PCIe slot.

IOW, taking their claims at face value, a Nvidia card or Xeon Phi would be expected to smoke one of these, although you might be able to run N of these in the same power envelope.

But those bandwidth & throughput / card limitations would make certain classes of algorithms not really worthwhile to run on these.

bravo22 3 days ago 2 replies      
Given the insane mask costs at lower geometries, the ASIC is most likely a Xilinx EasyPath or Altera HardCopy. Otherwise the amortization of the mask and dev costs -- even for a structured cell ASIC -- over 1K units wouldn't make much sense versus the extra cooling/power costs for a GPU.
Coding_Cat 3 days ago 0 replies      
I wonder if we will be seeing more of this in the (near) future. I expect so, and from more people than just Google. Why? Look at the problems the fabs have had with the latest generation of chips, and as chips grow smaller the problems will probably rise. We are already close to the physical limit of transistor size. So it is fair to assume that Moore's law will (hopefully) not outlive me.

So what then? I certainly hope the tech sector will not just leave it at that. If you want to continue to improve performance (per-watt) there is only one way you can go then: improve the design at an ASIC level. ASIC design will probably stay relatively hard, although there will probably be some technological solutions to make it easier with time, but if fabrication stalls at a certain nm level, production costs will probably start to drop with time as well.

I've been thinking about this quite a bit recently because I hope to start my PhD in ~1 year, and I'm torn between HPC or Computer Architecture. This seems to be quite a pro for Comp. Arch ;).

phsilva 3 days ago 1 reply      
I wonder if this architecture is the same Lanai architecture that was recently introduced by Google on LLVM. http://lists.llvm.org/pipermail/llvm-dev/2016-February/09511...
taliesinb 3 days ago 0 replies      
I don't know much about this sort of thing but I wonder if the ultimate performance would come with co-locating specialized compute with memory, so that the spatial layout of the computation on silicon ends up mirroring the abstract dataflow dag, with fairly low-bandwidth and energy efficient links between static register arrays that represent individual weight and grad tensors. Minimize the need for caches and power hungry high bandwidth lanes, ideally the only data moving around is your minibatch data going one way and your grads going the other way.

I wonder if they're doing that, and to what degree.

harigov 3 days ago 3 replies      
How is this different from - say - synthetic neurons that IBM is working on, or what nvidia is building?
nathan_f77 3 days ago 1 reply      
I'm thinking that this has the potential to change the context of many debates about the "technological singularity", or AI taking over the world. Because it all seems to be based on FUD.

While reading this article, one of my first reactions was "holy shit, Google might actually build a general AI with these, and they've probably already been working on it for years".

But really, nothing about these chips is unknown or scary. They use algorithms that are carefully engineered and understood. They can be scaled up horizontally to crunch numbers, and they have a very specific purpose. They improve search results and maps.

What I'm trying to say is that general artificial intelligence is such a lofty goal that we're going to have to understand every single piece of the puzzle before we get anywhere close, including building custom ASICs and writing all of the software by hand. We're not going to accidentally leave any loopholes open where AI secretly becomes conscious and decides to take over the world.

Bromskloss 3 days ago 2 replies      
What are the capabilities that a piece of hardware like this needs to have to be suitable for machine learning (and not just one specific machine learning problem)?
cschmidt 3 days ago 1 reply      
This seems very similar to the "Fathom Neural Compute Stick" from Movidius:


TensorFlow on a chip....

isseu 3 days ago 0 replies      
Tensor Processing Unit (TPU)

Using it for over a year? Wow

revelation 3 days ago 0 replies      
There is not a single number in this article.

Now these heatsinks can be deceiving for boards that are meant to be in a server rack unit with massive fans throwing a hurricane over them, but even then that is not very much power we're looking at there.

hyperopt 3 days ago 1 reply      
The Cloud Machine Learning service is one that I'm highly anticipating. Setting up arbitrary cloud machines for training models is a mess right now. I think if Google sets it up correctly, it could be a game changer for ML research for the rest of us. Especially if they can undercut AWS's GPU instances on cost per unit of performance through specialized hardware. I don't think the coinciding releases/announcements of TensorFlow, Cloud ML, and now this are an accident. There is something brewing and I think it's going to be big.
saganus 3 days ago 3 replies      
Is that a Go board stuck to the side of the rack?

Maybe they play one move every time someone gets to go there to fix something? or could it be just a way of numbering the racks or something eccentric like that?

hristov 3 days ago 3 replies      
It is interesting that they would make this into an ASIC, given how notoriously high the development costs for ASICs are. Are those costs coming down? If so, life will get very hard for the FPGA makers of the world soon.

It would be interesting to see what the economics of this project are, i.e. what the development costs and costs per chip are. Of course it is very doubtful I will ever get to see the economics of this project, but it would be interesting.

protomok 3 days ago 0 replies      
I'd be interested to know more technical details. I wonder if they're using 8-bit multipliers, how many MACs running in parallel, power consumption, etc.
j-dr 3 days ago 1 reply      
This is great, but can google stop putting tensor in the name of everything when nothing they do really has anything to do with tensors?
__jal 3 days ago 0 replies      
My favorite part is what looks like flush-head sheet metal screws holding the heat sink on.

No wondering where you left the Torx drivers with this one.

j1vms 3 days ago 2 replies      
I wouldn't be surprised if Google is looking to build (or done so already) a highly dense and parallel analog computer with limited precision ADC/DACs. I mean that's simplifying things quite a bit, but it would probably map pretty well to the Tensorflow application.
aaronsnoswell 3 days ago 2 replies      
I'm curious to know: is this announcement something that an expert in these sorts of areas could have predicted (or did predict?) months or years ago, given Google's recent jumps forward in machine learning products? Can someone with more knowledge about this comment?
eggy 3 days ago 0 replies      
Pretty quick implementation.

On the energy savings and space savings front, this type of implementation coupled with the space-saving, energy-saving claims of going to unums vs. float should get it to the next order of magnitude. Come on, Google, make unums happen!

paulsutter 3 days ago 0 replies      
> Our goal is to lead the industry on machine learning and make that innovation available to our customers.

Are they saying Google Cloud customers will get access to TPUs eventually? Or that general users will see service improvements?

nxzero 3 days ago 0 replies      
Is there any way to detect what hardware is being used by the cloud service if you're using the cloud service? (Yes, I realize this question is a bit of a paradox, but figured I'd ask.)
mistobaan 3 days ago 0 replies      
Another point is that they will be able to provide much higher computing capabilities at a much lower price point than any competitor. I really like the direction the company is taking.
eggy 3 days ago 1 reply      
I think the confluence of new technologies, and the re-emergence / rediscovery of older technologies is going to be the best combination. Whether it goes that way is not certain, since the best technology doesn't always win out. Here, though, the money should, since all would greatly reduce time and energy in mining and validating:

* Vector processing computers - not von Neumann machines [1].

* Array languages new, or like J, K, or Q in the APL family [2,3]

* The replacement of floating point units with unum processors [4]

Neural networks are inherently arrays or matrices, and would do better on a purpose-designed vector array machine, not a re-purposed GPU, or even the TPU in the article, in a standard von Neumann machine. Maybe a non-von Neumann architecture like the old Lisp Machines, but for arrays, not lists (and no, this is not a modern GPU -- the data has to stay on the processor, not be offloaded to external memory).

I started with neural networks in the late 80s/early 1990s, and I was mainly programming in C: matrices and FOR loops. I found J, the array language, many years later, unfortunately. Businesses have been making enough money off of the advantage of the array processing language A+, then K, that the per-seat cost of KDB+/Q (database/language) is easily justifiable. Other software like RiakTS is looking to get in the game using Spark/Shark and other pieces of kit, but a K4 query is 230 times faster than Spark/Shark, and uses 0.2GB of memory vs. 50GB. Similar technologies just don't fit the problem space as well as a vector language. I am partial to J being a more mathematically pure array language in that it is based on arrays. K4 (soon to be K5/K6) is list-based at the lower level, and is honed for tick data or time series data. J is a bit more general purpose or academic in my opinion.

Unums are theoretically more energy efficient and compact than floating point, and take away the error-guessing game. They are being tested with several different language implementations to validate their creator's claims and practicality. The Mathematica notebook that John Gustafson modeled his work in is available to download for free from the book publisher's site. People have already done some exploratory investigations in Python, Julia, and even J. I believe the J one is a 4-bit implementation based on unums 1.0. John Gustafson just presented unums 2.0 in February 2016.

[1] http://conceptualorigami.blogspot.co.id/2010/12/vector-proce...

[2] jsoftware.com

[3] http://kxcommunity.com/an-introduction-to-neural-networks-wi...

[4] https://www.crcpress.com/The-End-of-Error-Unum-Computing/Gus...

swalsh 3 days ago 0 replies      
I wonder if opening this up as a cloud offering is a way to get a whole bunch of excess capacity (if it needs it for something big?) but have it paid for.
dharma1 3 days ago 0 replies      
hasn't made a dent in Nvidia's share price yet
amelius 3 days ago 2 replies      
One question: what has this got to do with tensors?
camkego 3 days ago 1 reply      
Does anyone have links to the talk or the graphs?
ungzd 3 days ago 0 replies      
Does it use approximate computing technology?
niels_olson 3 days ago 0 replies      
I like that the images are mislabeled :)
LogicFailsMe 3 days ago 1 reply      
Perf/W, the official metric of slow but efficient processors. How many times must we go down this road?

Let's see this sucker train AlexNet...

rando3826 3 days ago 1 reply      
Why use an ANKY in the title? Using an ANKY (acronym no one knows yet) is bad writing, makes readers feel dumb, etc. Google JUST NOW invented that acronym; sticking it in the title like just another word we should understand is absolutely ridiculous.
simunaga 3 days ago 2 replies      
In what sense is this great news? Yes, it's progress, so what? After all, you -- programmers -- earn money for your jobs and pretty soon you might not have one. Because of these kinds of great news -- "Whayyy, this is really interesting, AI, machine learning. Aaaaa!".

"I'll get fired, won't have money for living and AI will take my place, but the world will be better! Yes! Progress!"

Who will benefit from this? Surely not you. Why are you so ecstatic then?

Chrome removes Backspace to go back chromium.org
682 points by ivank  3 days ago   579 comments top 104
klodolph 2 days ago 12 replies      
I'm going to take a somewhat contrarian view and say, "Thank you, Chrome developers."

It's always easy to tell apart the people who know shortcuts from the people who don't, if you watch them use their computers. Someone with a few shortcuts on tap will zoom around their monitors, switching between mouse and keyboard only when necessary.

But there are a few shortcuts and user interface quirks that are too outdated and weird, and only serve to surprise and annoy us. They hail from an earlier age when people were still figuring things out in new UI paradigms. For example, these days, you expect the scroll wheel to scroll up and down in a scrolling view. However, my coworker was changing some project settings in Visual Studio the other day, and he tried to scroll through the settings while a drop-down menu in the settings had focus. It scrolled through the menu options, selecting them, instead of scrolling through the view. He had to cancel the changes he was making and open the window again, because he couldn't remember what was originally selected.

This is the worst kind of surprise. Something you thought was just supposed to let you look at different parts of the interface instead modified the data you were looking at. Backspace to go back is a similar surprise. It's supposed to delete text, but instead it can navigate away from a page entirely, if you are in the wrong state when you press backspace. For the same reason, I'm even getting sick of the old middle mouse button paste, since it's too easy to press when I'm scrolling.

Forward and back navigation are already mapped to alt + left and right arrow. Let's reserve backspace for deleting text. (I'm not happy that it sometimes means "navigate up a level", but that might tell you what kind of computer I had growing up.)

Jedd 3 days ago 21 replies      
Chrome / Chromium have a habit of making these arbitrary changes that seriously annoy some (arguably small) percentage of their users, while claiming that it makes it simpler / better for everyone else, while explaining impatiently why it's infeasible to make the now missing feature a configuration option.

Evidently the kinds of people that can't be bothered going into the Advanced Configuration Settings page would be confused by an additional item in the Advanced Configuration Settings page.

I never used the backspace button for back (though it's probably what's mapped to my mouse button #8 - I'll know on the next upgrade), but I did get mightily annoyed by two changes a while back, and am always happy to bring them up whenever there's a story about Chrom* devs doing this kind of thing.

1. snap-to-mouse - while dragging the scrollbar, if you move the mouse further than ~80 pixels away from the scrollbar column, the page jumps back to the original location - apparently MS Windows users love this feature, but chrome/chromium is the only application I've found on GNU/Linux that does this, and

2. clicking inside the URL bar selects the whole contents - apparently MS Windows users are used to this feature, but chrome/chromium is the only application I've found on GNU/Linux that does this.

No idea what the defaults are for OSX, and, really, it doesn't matter - these features should be sensitive to extant defaults on whatever desktop environment the browser finds itself running on.

ruipgil 3 days ago 7 replies      
I might be in the minority here, but I think that using backspace to go back is counterintuitive. In my mind backspace is for deleting something, and I always worry about that.
floatboth 2 days ago 5 replies      
Good. I always set browser.backspace_action to do nothing in Firefox, because this is SO infuriating. You think you have a text field focused but you actually don't (e.g. accidental mouse click removed the focus), you press Backspace and BOOM! suddenly you're on the previous page.

Ctrl/Cmd+[ and ] is the real shortcut!

oneeyedpigeon 3 days ago 8 replies      
One of the contributors states:

"Building an extension for this should be very simple."

Why on earth isn't there just a generic keyboard-shortcut preference where I can control every possible browser action and its associated keyboard shortcut? In fact, why isn't this available at an OS level? Surely it would remove a lot of unnecessary duplicate code.

dandare 3 days ago 3 replies      
"We have UseCounters showing that 0.04% of page views navigate back via the backspace button and 0.005% of page views are after a form interaction. The latter are often cases where the user loses data. Years of user complaints have been enough that we think it's the right choice to change this given the degree of pain users feel by losing their data and because every platform has another keyboard combination that navigates back."

Personally I am shocked that the Chromium team ignored years of user complaints before they decided to fix what their own usability studies found to be a worthless yet painful gimmick.

ChrisArgyle 3 days ago 3 replies      
Analysis from Chrome devs here https://codereview.chromium.org/1854963002

Though I am a frequent user of backspace in Chrome I'm inclined to agree with their decision. Almost no one is using it and casual users are confused by it.

I'll just wait for someone to implement the feature in an extension.

kibwen 3 days ago 4 replies      
This is going to sound hyperbolic, I'm sure, but backspace-as-back is enormously important to my browsing experience. When I recently installed Ubuntu I had a small moment of panic when I realized that hitting backspace in Firefox performed some Ubuntu-specific thing rather than navigating backwards (as it does in Windows), but fortunately there's an about:config pref to re-enable the behavior. Just my two cents.
FollowSteph3 3 days ago 0 replies      
I think this is very good. I can't tell you the number of times I've lost form data by hitting backspace.

For those wondering how: if you do ctrl-backspace to erase a word, it's very easy to miss, especially as you transition between word delete and single-character delete.

The other common use case for errors is when you think you're in a field editing and you're actually not -- bam, you just lost all your form data.

I also like the idea that backspace is for text editing and not for a second function such as navigation. For Enter, yes, but not for backspace.

EdSharkey 3 days ago 4 replies      
This feels a bit like how Esc was nerfed over the years in Firefox and others until it essentially did nothing. It used to mean STOP. All sockets were closed, the page stopped loading, and I think way waaay back, even animated gifs stopped cycling and JavaScript timeouts and intervals were cancelled.

Single-page webapps were the death of Esc, it was too confusing to users to have a page suddenly hang because they pressed Esc for some reason and all the XHR connections silently closed. "Stopping" just no longer made sense.

Just going to need to train the old timers on the new key strokes. It is sad though when convenient controls are taken away.

gjvc 3 days ago 4 replies      
This is most annoying. I have used this for the past twenty years and have not lost form data using it. In any event, chrome seems to remember form contents upon navigating back to a form page.

Leave my muscle memory alone please.

pfarnsworth 2 days ago 0 replies      
Thank GOD. So many times I've been filling out forms and sometimes I hit backspace to delete something, and maybe I clicked on a dropdown, but it goes back one page and I lose everything. Not the end of the world, but pretty annoying and I'm glad they're removing this.
spo81rty 3 days ago 1 reply      
This has always been annoying when doing it on accident. Good riddance!
crazygringo 3 days ago 1 reply      
Finally! It's about time. I don't know who ever thought having a command that didn't use a modifier key was a good idea -- it's not just about losing form data (even if that's protected against), a webpage can have all sorts of "state" you don't want to lose.

Also, what's so hard about tapping Cmd+Left or Ctrl+Left to go back? It's all I've ever done: incredibly intuitive, and simple to do with one hand (using the right Cmd key), at least on most keyboards I've seen.

djwbrown 2 days ago 0 replies      
Cumulative time wasted using 'Command+[': none.
Cumulative time wasted due to overloading of the backspace key: hours.

Relying on current context to determine the behavior of backspace was a terrible idea from the start. To hell with your muscle memory. Re-learn a shortcut that makes sense, and which will save you time one day, rather than insisting with hacker-machismo that you've never lost data in a form.

dhd415 2 days ago 1 reply      
I think comment #32 (https://bugs.chromium.org/p/chromium/issues/detail?id=608016...) is worth highlighting:

 If you can fill out a formular field correctly without losing focus, you are not part of Chrome's target audience. edit: Had to type this four times due to accidently going back.

nikanj 2 days ago 0 replies      
I can't count the number of times I've noticed a typo in a form, hit shift-tab one time too many or few, hit backspace and ended up losing all of the info I filled in. The forward button mostly just leads to "resubmit form data?", instead of bringing me back.
itslennysfault 2 days ago 0 replies      
master race!!!

...but seriously, if I had a dollar for every time I've tried to hit "delete" (backspace on mac) to delete something I had selected in a web app and had it navigate back losing my unsaved changes I'd have a couple bucks.

It's rare, but it's annoying when it does happen.

greggman 2 days ago 0 replies      
Oh thank you thank you THANK YOU!!!!

I can't tell you how many times I've lost data because of backspace! Good riddance.

Now, please also get rid of pull down to refresh in iOS Chrome because that has also lost me data a ton of times as well. I don't even know who uses that feature. I don't need to refresh most pages and if I do there are better ways.

jneal 3 days ago 0 replies      
I've personally always used alt+left to go back. I know backspace does the same thing, but the only reason I know that is because I seem to hit it more frequently than you'd expect while not focused on a form field, causing my browser to go back unexpectedly. I've never lost data, though; it always seems to persist when I go forward.
hosh 2 days ago 0 replies      
The very first comment says:

"How is someone who grew up in terminal times expected to navigate back when using a two-button mouse?"

I grew up in terminal times. I was lucky that, while growing up, I had access to my father's Unix account through the university. Not only that, I do all of my development work on the terminal (via tmux, vim, and spacemacs). I like the terminal. I love keyboard shortcuts. Keeping my hands in the home row -- awesome!

The backspace in the browser has always struck me as a misfeature. I've lost data when typing in forms.

In contrast, when I browse a page, I rarely hit the back button. I'm more likely to open a link in a new page when I am doing serious research.

Times move on. Some things are lost, and our civilization is the worse for it. This is not one of those cases.

slavik81 2 days ago 0 replies      
My first instinct was to bemoan its loss, but after thinking about it, I make this mistake far too often.

I actually just lost a draft of an annual self-assessment to this. I wanted to delete some text, but I guess I didn't have focus in the text box, and hit back. The form was created by an awful website (PeopleSoft/Oracle), so hitting forward didn't bring my data back.

Sure, it was just 20 minutes of work. Sure, a better website would have had the fields autosaved, or at least not have broken the browser autofill. Sure, I could have written it in a different program and then pasted into a browser.

But seriously, that should never happen. Not like that.

mstade 2 days ago 1 reply      
I wonder if this is in any way related to the exceptionally annoying thing on google.com, where if you hit backspace it doesn't navigate back, but starts removing characters from your search. It does this with other keypresses too, presumably so you can just keep typing till you find whatever you're looking for, but it's a flagrant disregard for my action of moving focus from the input field.

In any event, I use backspace to navigate back all the time, so this is sure to annoy me to no end. Especially since I use multiple browsers, and it'll be hard to break habits. Ah well..

davb 3 days ago 0 replies      
They say 0.04% of page views are a result of pressing backspace. 0.04% sounds small, but imagine how many page views per month there are, globally, with Chrome. That's a significant number.

Backspace sure is an unusual navigation choice these days, and perhaps wouldn't make sense to code into new software. But in browsers, backspace to navigate back is expected behaviour.

This isn't the first time the Chrome or Chromium teams have made sweeping changes based on usage stats, pissing off the minority who use those features and pushing ever closer to a browser with only the lowest common denominator features that everyone uses.
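davb's scale point can be made concrete with a back-of-envelope calculation. The monthly page-view total below is a made-up illustrative assumption; only the two percentages come from the Chrome team's stats.

```javascript
// Illustrative only: assume Chrome serves on the order of a trillion
// page views per month (a guess, not a published figure).
const assumedMonthlyPageViews = 1e12;

// Percentages quoted by the Chrome team in the issue tracker.
const backspaceNavRate = 0.04 / 100;  // page views navigated back via backspace
const afterFormRate = 0.005 / 100;    // ...that follow a form interaction

const backspaceNavs = assumedMonthlyPageViews * backspaceNavRate;
const riskyNavs = assumedMonthlyPageViews * afterFormRate;

console.log(Math.round(backspaceNavs)); // ~400 million backspace navigations
console.log(Math.round(riskyNavs));     // ~50 million potentially data-losing ones
```

Under that (hypothetical) traffic assumption, even the "tiny" 0.005% works out to tens of millions of form interactions per month.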

_pferreir_ 2 days ago 0 replies      
As a web application developer, I second the motion to officially thank the Chrome development team for this. "Backspace" triggering "back" is a usability disaster, and not only for inexperienced users. We recently had issues with a 3rd party editor widget losing focus due to a bug, which led to people accidentally triggering "back" and losing their data (it was a rich text field, so you can imagine how much of a problem that was). Sure, the problem here was the widget, but using such a commonly pressed key as the shortcut for a potentially destructive operation is a recipe for disaster. More advanced users have the option to use a custom extension, or even mouse gestures. Just develop an "Advanced Chrome" plugin and the problem will be solved.

As a side note, it's interesting to see how such a small change (which, as mentioned above is even reversible) can trigger such an outcry. I've read stuff such as "I've been using this shortcut for 20 years" or "I don't want an extension"... are those even arguments? Yes, applications should be "user-centred" but the "user" here is a collective of thousands or millions of people with their own incompatible opinions. There is a (very good) reason for this change and I've seen zero achievable solutions that would not imply it.
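The core trick behind extensions like Backstop (mentioned further down the thread) is small. This is a hedged sketch of the idea, not any extension's actual code: swallow Backspace in the capture phase unless focus is in an editable element.

```javascript
// Decide whether Backspace should mean "delete a character" for this element.
function isEditable(el) {
  if (!el) return false;
  if (el.isContentEditable) return true;
  if (el.tagName === 'TEXTAREA') return true;
  if (el.tagName === 'INPUT') {
    // Only text-like inputs consume Backspace.
    return /^(text|search|url|email|password|tel|number)$/i.test(el.type);
  }
  return false;
}

// Pulled out as a function taking the document, so it can be exercised
// outside a browser; a real content script would pass `document`.
function installBackspaceGuard(doc) {
  doc.addEventListener('keydown', (e) => {
    if (e.key === 'Backspace' && !isEditable(doc.activeElement)) {
      e.preventDefault(); // block the browser's "navigate back" default
    }
  }, true); // capture phase, ahead of page handlers
}

// In a real content script: installBackspaceGuard(document);
```

The same shape works in reverse for re-enabling the shortcut once Chrome removes it: listen for Backspace outside editable fields and call `history.back()` instead of `preventDefault()`.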

hkjgkjy 2 days ago 2 replies      
Hermel 2 days ago 0 replies      
Finally! I don't know how often I accidentally navigated away from a page by pressing backspace while writing in a textbox.
jbb555 3 days ago 0 replies      
They don't spend any time fixing everything that's broken with modern computers, instead they spend time changing things that weren't broken. Great.
retbull 2 days ago 0 replies      
Good fuck that was annoying. I actually came up with a new work flow for all browsers because of this. I always open links in a new tab so if I want to go back I have to close the tab and if I want to go forward I will middle click to open in a new tab. This only falls apart when I run into a multi-page form or application that requires text. When that happens I hate that backspace goes back.
pbiggar 2 days ago 0 replies      
Well done Chrome. While it's the way that I go back and I now need to change my habits, this is the kind of hard decision that you need to make to have a really great product. They weighed the upsides and downsides, and pissed off a small subset of people (esp on HN who are likely to be the backspace-as-back users) to make a better experience. Bravo!
Pharylon 2 days ago 0 replies      
I guess I can uninstall Backstop now (a Chrome extension that literally exists only to disable backspace to go back). I've been running it for years now.

For you 1% of people that actually use the Backspace key for going back, I'm sure someone will come up with an extension to re-enable it, don't worry.

marcusarmstrong 3 days ago 0 replies      
Finally! I can get rid of my third party extension to get rid of this insane behavior.
Mithaldu 3 days ago 1 reply      
So they applied the wrong fix, to a problem that had been solved a decade ago.

The problem: Moving away from a form can result in data loss.

Their solution: Make it harder to move away?

The actual solution implemented more than a decade ago: Cache history completely and make it easy to move forward and backward in a tab's history while maintaining form contents.
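Mithaldu's cache-and-restore approach can also be approximated in page code when the browser's own history cache doesn't do it. A rough sketch under assumptions — the storage key, event wiring, and helper names are illustrative, not from any browser implementation:

```javascript
// Collect named field values into a plain object (the draft we'd persist).
function serializeFields(fields) {
  const draft = {};
  for (const el of fields) {
    if (el.name) draft[el.name] = el.value;
  }
  return draft;
}

// Write a saved draft back into matching fields (restore after navigating
// forward again).
function restoreFields(fields, draft) {
  for (const el of fields) {
    if (el.name && draft[el.name] !== undefined) el.value = draft[el.name];
  }
}

// Hypothetical wiring in a real page:
//   const form = document.querySelector('form');
//   form.addEventListener('input', () => sessionStorage.setItem(
//     'draft', JSON.stringify(serializeFields(form.elements))));
//   window.addEventListener('pageshow', () => restoreFields(
//     form.elements, JSON.parse(sessionStorage.getItem('draft') || '{}')));
```

`sessionStorage` scopes the draft to the tab, which roughly matches the back/forward case Mithaldu describes.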

okonomiyaki3000 3 days ago 0 replies      
Thank the gods! I can't imagine who ever thought backspace as back was a good idea in the first place.
grandalf 3 days ago 2 replies      
I've never used backspace for nav intentionally, and it has caused annoying data loss for me a few times.

It's never made sense to me why this behavior was ever added to browsers. The logical choice would have been the left arrow key (since there is a corresponding right arrow).

brudgers 3 days ago 1 reply      
Having a friend who often operates their keyboard by the old stick-between-the-teeth method, I'd like to see an analysis demonstrating that this breaking change improves accessibility. Particularly since the alternative posed in the thread is the chorded Alt-Left.
Kadin 2 days ago 0 replies      
Striking a blow for mediocrity. Ugh.

If there was really a problem with data loss, the better solution would seem to be warning the user before navigating away from the page. Removing a widely-used single-key behavior in order to protect users from themselves seems like a bad prioritization.

It'd be nice if we could still have software that is unashamedly not trying to target some sort of Archie Bunker "low information" user. Even the big Linux distros seem obsessed with making things easy for some hypothetical moron-in-a-hurry, at the expense of actual users who know what they're doing. It's unfortunate, and it seems to be a sort of antipattern that's infected a lot of software design. It wasn't always this way: there used to be an expectation that users would learn to use software, and that like any tool, if misused you could mess things up. Somewhere along the line, we've decided that it's unacceptable to tell users that they need to learn how to use software instead of blindly stabbing at it and expecting it to protect them.

I'm not against sane defaults or warning users before they really do something horrible, but the current trend towards ripping out anything and everything that might possibly be 'confusing' seems to be far overstepping the mark.

Firefox isn't much better, but at least they haven't Nerfed the back button.

jdelaney 2 days ago 0 replies      
I wrote a quick Chrome extension to fix this for those interested.

Extension: https://chrome.google.com/webstore/detail/back-to-backspace/...

Source: https://github.com/j-delaney/back-to-backspace

djrconcepts 2 days ago 0 replies      
Great news! I have never intentionally hit backspace to go back, and yet I've hit backspace by accident and been taken back many times. Quite annoying when it happens.
azinman2 2 days ago 1 reply      
I love how most of the comments on the bug tracker are "I've never lost data so therefore no one has and this should go back because I'm used to it."

Typical myopic power users...

YeGoblynQueenne 2 days ago 1 reply      
We have UseCounters showing that 0.04% of page views navigate back via the backspace button and 0.005% of page views are after a form interaction. The latter are often cases where the user loses data. Years of user complaints have been enough that we think it's the right choice to change this given the degree of pain users feel by losing their data and because every platform has another keyboard combination that navigates back.

We're doing this via a flag so that we can control this behavior should there be sufficient outcry.

Oh dear lord, that's a horrible idea. You make a change to your software to fix a problem that is not caused _by_ your software? If a form is confusing enough that the user thinks they have focus when they don't and ends up losing data then that's an issue with the form, isn't it? Not the browser and not the button.

davesque 2 days ago 0 replies      
I agree with this. Accidentally going back when you lose focus on a text field is super annoying.
kbenson 2 days ago 0 replies      
About time! I've been seriously annoyed a number of times when doing text/data entry (on this very site!) where somehow I removed focus from the input, and then tried to erase some text, only to find it going back a page, and my input is gone when I browse back forward (this problem is exacerbated by inputs that don't exist until Javascript creates them from some page event).

When using a laptop with a sensitive touchpad, this can get really bad.

logicallee 2 days ago 0 replies      
I'm a bit late to the party (already 408 comments) but, guys, here is an example of what happens currently in many browsers:

[animated GIF not preserved in this archive]

(this example is not prescriptive, it's just what happens)

At any rate, the GIF shows the current situation. You should watch it.

I actually wrote to this app creator that they should throw up a confirmation window ( like these https://www.google.com/search?q=confirm+navigation&tbm=isch )

but the fact is that the browser is the one that decided to navigate away. Now what's very interesting, is that even in this, HN's, thread we have people saying "Yes!!!" and people saying "No!!!" to the change.

So people who simply have never used backspace for navigation, like me, have many times accidentally touched backspace, or thought we were focused on a form, and ended up losing data (because the page didn't throw up a confirmation window after navigating back, and after clicking forward the page is blank again). Meanwhile, other people, who have no convenient single key they can use to navigate back, have come to rely on it. I'm not sure what the solution is, but here's the current situation so everyone understands it.

samuellb 2 days ago 0 replies      
I welcome this change, because I've lost a lot of form data with the backspace key. Not specifically in Chrome, because I remember having the same problem with IE 6 at the time when Firefox was still in alpha and was called "Phoenix".

Now I wish that the Thunderbird developers would also remove or change their single-letter shortcuts that are easy to mis-type, e.g. "A" for archive, which creates an undeletable "Archive" folder in your mail account. There's a bugzilla issue for it here: https://bugzilla.mozilla.org/show_bug.cgi?id=615957

emodendroket 2 days ago 0 replies      
I have to say, I find it a lot more common that I accidentally lose focus inside a text box and go back than that I intentionally use that shortcut.
swingedseraph 2 days ago 0 replies      
Do I like this? Yes. Should it be an immutable part of the interface and not configurable? No. That's ridiculous.
avehn 2 days ago 1 reply      
People who cannot use a mouse or see a screen, who rely exclusively on keyboard commands, will be greatly affected by this change.

Global Accessibility Awareness Day https://www.w3.org/WAI/perspectives/

soheil 2 days ago 0 replies      
This is actually a no-brainer. Many times I have accidentally tapped my mousepad while typing and taken the focus away from a textarea, then noticed a typo, tried to delete it, and baaam, you're no longer on that page and possibly all the text you typed has gone into the abyss.
whoisthemachine 2 days ago 0 replies      
Positive change in my book. This key was always overloaded, leading to unintentional usages. Using backspace as a "navigate back in history" shortcut never worked reliably for me in any of the browsers I've used extensively (Chrome, FF, and IE).
jasonm23 2 days ago 0 replies      
Wontfix - just apply the worst possible, cruddy fix and shut down discussion.

Forgive me if I do not applaud.

chrisdotcollins 15 hours ago 0 replies      
Thank you so much for this. Accidental backspaces have been so frustrating to me at times. I'm glad sharing my telemetry was able to assist you in seeing these problems.

I am looking forward to this update.

Osiris 2 days ago 0 replies      
This is why I switched to Vivaldi. All the shortcut keys are customizable. It also includes mouse gestures for quick back/forward with the mouse. I prefer having the choice rather than someone else dictating.
mcrmonkey 2 days ago 0 replies      
ffs this is stupid. Backspace has always been "back" in browsers, and it really vexes me when some versions of Firefox on some Linux distros do this. Two hands have to be used to action it, because the right alt key is either not mapped (on some OSes) or is AltGr. Backspace works well when moving quickly too - one finger from home row and bam.

Rather than making this the fix, they should probably look at the bug that's causing the user to go back when the form element is focused.

What's next? Take away the space bar for moving through the page?

An about:config option needs to be present to allow the user to switch between what they want. Sure, extensions could fix this too, but I don't really want a 3rd-party extension to re-enable what's a tried and tested keyboard shortcut. Additionally, what happens if that dev's account gets hacked and the extension modded for malice? Or if the dev pulls the thing, in a way similar to the node.js module issue a month or so ago.

This part is worrying though:

We have UseCounters showing that 0.04% of page views navigate back via the backspace button and 0.005% of page views are after a form interaction.

Where is that data being gathered from and how?

Additionally what is classed as a form interaction ?

YeGoblynQueenne 2 days ago 0 replies      
We have UseCounters showing that 0.04% of page views navigate back via the backspace button and 0.005% of page views are after a form interaction.

That's from the linked issue, the one that actually made the change.

So, um. What is "UseCounters"? Does this mean that when you're entering text in a form Chrome is registering your keypresses?

daveloyall 2 days ago 0 replies      
I got to this thread late, sorry.

Here's the thing about Chrome... They don't want power users.

Remember when you first switched to Chrome? That sleek little pastel-colored window, elegantly fast. It worked on most websites. It was notably fast on Gmail, which at the time was the slowest website you spent a lot of time on.

You didn't mind that Chrome wasn't configurable. You might even have thought that it would become more configurable over time.

You were wrong. You were never the target audience.

I once had an infuriating (to me, at the time) argument with a Googler who was responsible for an internal app that performed better in Firefox than in Chrome. He said "Use Firefox!". I didn't get it at the time. He was a power user, all his co-workers were power users, and thus the internal app was only used by power users... They all used Firefox! At least for real work... (Pretty sure they all had Chrome on hand for Mail and Maps, etc...) Anyway, the internal app correctly targeted Firefox.

Meanwhile, back in time, when Chrome came out, Firefox started hemorrhaging users. Mozilla reacted. Today, it's as fast or faster than Chrome for most sites I use. And it's configurable!

If you are reading this and don't have the latest beta or Nightly FF installed, you should go do so now! Really, trying Firefox after being away for years will make you smile and renew your faith in humanity. :)

But unfortunately, this story doesn't end there...

I think some Firefox devs see Chrome as a role model... Maybe they want to compete with Google for those users who are not you! As a small example, I offer this: https://bugzilla.mozilla.org/show_bug.cgi?id=1118285 Note the posts that are marked "Comment hidden (advocacy)". You can click the [+] to show what was hidden (comments from power users).

There are niche browsers for power users, and there are extensions... But there isn't a mainstream browser for power users because power users aren't mainstream.

I'm just describing the problem (well, I hope!), I'm sorry but I don't have a solution.

usaphp 2 days ago 0 replies      
I am glad they are making this change. I've lost my form data by accidentally going back while trying to erase something in a text field so many times.
eropple 2 days ago 0 replies      
So I'm going to need another Chrome extension, further exacerbating the gong show that is Chrome battery life, for something I use all the time.


math0ne 2 days ago 0 replies      
The amount of times I've accidentally navigated away from a form by hitting that damn shortcut!
lr 2 days ago 0 replies      
On OS X, Command-left bracket has worked on Chrome, Safari, Firefox (and probably more) for years. Not sure about Windows or Linux, but keyboard shortcuts are very well established across browsers in this way (like Command-L, and all of the Emacs bindings like Control-A, Control-E, etc.).
copperheart 2 days ago 1 reply      
Big thanks to the Chrome devs for this. I applaud and personally appreciate the decision, but wonder why a navigation shortcut like this couldn't be made into an option for others to enable or disable based on their preference.
Kiro 2 days ago 0 replies      
As someone who constantly gets screwed by this: finally!

Example from literally one minute ago: a cool thing was going on in a Twitch stream and I wanted to hype in the chat, but I misclicked the chat box, so backspace went back to the stream list instead, making me miss the moment.

ogreveins 3 days ago 0 replies      
I would very much like them to revert this change. Using backspace to go back has been in my browsing habits since I began using the internet. Alt+left or right is annoying. Either give us a checkbox or revert it. Please. Pretty please.
SeriousM 2 days ago 0 replies      
"If you can fill out a formular field correctly without losing focus, you are not part of Chrome's target audience.

edit: Had to type this four times due to accidently going back."

Made my day.

perezdev 2 days ago 1 reply      
>Are you suggesting that the only remaining options are Alt-Left (a two-hand key combo for that I have to move my mouse hand towards the keyboard, and then back)

I guess no one told this guy that a standard keyboard has two ALT keys.

jijojv 2 days ago 0 replies      
Thank you. This is the right fix for the 99% of users who'd otherwise lose data.
dghughes 2 days ago 0 replies      
Wouldn't it make more sense to have a pop up "Are you sure you want to navigate away?" solution instead?

This is the very definition of throwing the baby out with the bathwater.
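The pop-up dghughes describes already exists as something pages can opt into: the standard `beforeunload` event. A minimal sketch (the dirty-flag wiring is illustrative; note that modern browsers show their own generic "leave this page?" text rather than a custom message):

```javascript
let dirty = false;                     // set when the form has unsaved edits

function markDirty() { dirty = true; }
function markSaved() { dirty = false; }

// Standard beforeunload contract: call preventDefault / set returnValue to
// ask the browser to show its confirmation dialog.
function onBeforeUnload(e) {
  if (!dirty) return undefined;        // nothing unsaved: navigate silently
  e.preventDefault();                  // modern browsers
  e.returnValue = '';                  // legacy browsers require this
  return '';
}

// Hypothetical wiring in a real page:
//   window.addEventListener('beforeunload', onBeforeUnload);
//   form.addEventListener('input', markDirty);
```

The catch, and arguably why Chrome changed the default instead, is that this only helps on sites whose developers remembered to add it.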

april1stislame 2 days ago 0 replies      
Never lost data on Firefox by going forward after accidentally going back while writing in a form, but whatever... Google only wants dumb users who can't see past what they're doing.
rocky1138 3 days ago 0 replies      
I don't mind removing backspace, but this better not remove the functionality of my back button on my mouse. That's one of the worst things about having to boot into OSX at work.
backtoyoujim 2 days ago 0 replies      
I bet that eventually the "quit application" feature for Mac OS X is going to be offshored.
XorNot 2 days ago 0 replies      
I'm surprised people think this is a bad idea? I know of no one who uses backspace this way in the browser.
mixedCase 2 days ago 0 replies      
Why must every browser out there suck? Servo seems like the last hope if it gets integrated into FF or a FF-like browser.
henvic 2 days ago 0 replies      
What the hell? Just ask the user whether they intended to go backwards when a form is in focus, or something.
jordache 3 days ago 0 replies      
On the topic of annoying details that browser makers overlook:

In Safari, when opening a new tab, the focus is not on the address bar. I always have to hit Cmd+L before I start typing. The address bar focus works when you don't have a homepage defined (so a blank page), but who doesn't configure a default home page? arrghh

andrei_says_ 2 days ago 0 replies      
It's the most frequently used key for me when I browse.

Any way to add it back? Maybe an extension?

yAnonymous 3 days ago 0 replies      
I hope they also remove support for forward/back mouse buttons. I keep accidentally pressing those.
sammorrowdrums 3 days ago 0 replies      
Good riddance! This is such a terrible double-purpose binding. When a common typing key is not just bound, but bound in a way that is often destructive, it just needs to die.

Anyone who thinks this shouldn't die is basically a bad person. It was an affliction, and one of the poorest design choices in history. :-p

lpsz 3 days ago 1 reply      
Sometimes, it's better without these features. E.g. on Mac, dragging left in the browser is a gesture for going back to the previous page, and I can't count how many times I've accidentally triggered that while filling out a web form or interacting with a page. Isn't the back button and the keyboard shortcut enough?
dc2 2 days ago 0 replies      
I just hit backspace to go back to HN after reading this... and it didn't work.
rietta 2 days ago 0 replies      
NOOO! That's how I go back! And use the space bar to scroll up and down!
jdhzzz 2 days ago 0 replies      
Thank you.
MrBra 2 days ago 0 replies      
Is there a setting to re-enable it?
ryanlol 2 days ago 0 replies      
Because adding customizable keybinds is too difficult? Hell, if configs looking scary to normal users is a problem why not just have a json/sqlite/whatever file in the profile directory?
tehchromic 2 days ago 0 replies      
Hip hip hooray!!!
autoreleasepool 2 days ago 0 replies      
I can finally uninstall BackspaceMeansBackspace!
ravenstine 2 days ago 0 replies      
kruhft 2 days ago 0 replies      
Good, now I don't have to fix one of my sites, which uses a custom keyboard handler for input, to handle backspace 'properly'. What a pain.
bluhue 2 days ago 0 replies      
Space-bar next!
OJFord 2 days ago 0 replies      

I've only ever done this by accident.

monochromatic 2 days ago 0 replies      
Morons don't know how to use our web browser? Better break it!
optimuspaul 2 days ago 0 replies      
finally, now maybe I can go back to Chrome.
homero 2 days ago 0 replies      
mdholloway 2 days ago 0 replies      
Thank god.
givinguflac 2 days ago 0 replies      
One more plus for Vivaldi.
dredmorbius 2 days ago 0 replies      
Google Chrome has fixed a longstanding UI/UX bug and state overload of the Backspace key.

Backspace key in Chrome browser no longer navigates backward, but instead is limited to its initial and rightful role: deleting the previous character under the pointer (mouse / text cursor).

I swear by His Noodliness I'd ranted on this at G+ some time ago, though unfortunately since Microsoft Bing Search isn't available on Google+, I cannot actually find shit in a useful fashion.

That said, I applaud this change, thumb my nose at the fuckwits who are bitching about it, and note again the Flaw of Averages: One Size Fits None.

As to the justification of not relying on Backspace for Navigation:

I ordinarily take exception to blame-the-user / taunt-the-user practices, and should hasten to explain my own here.

Learning a New Backward Navigation Method is a Temporary Training Inconvenience.

Repeatedly losing Vast Quantities of Newly Composed Content is an Irrevocable User State Loss.

Among the canons of human-computer interface design is this: Thou shalt not fuck with thine users' State.

Which by definition makes those who fail to make this distinction fuckwits. Perhaps only ignorant fuckwits, a curable state, though quite possibly and regrettably stupid fuckwits, a State of Extreme Durability in my experience.

The larger fault is arguably the lack of a clear stateful separation of editing from _browsing_ modes in Web browsers. Editing involves creating novel user state which can be easily lost through capricious client behavior, such as, to draw on a randomly selected example, fucking overloading the backspace key with the behavior of "delete my highly considered and Very Important Message to the Universe by immediately and irrevocably moving off this page".

It's with some irony that I note that console-based Web browsers rarely have this problem. The w3m browser, for example, when editing a text field, dumps the user into a local full-powered editor, and in fact defaults to the one specified by the user's environment ($VISUAL, $EDITOR, etc.). The result is that a "primitive" browsing tool actually has an exceptionally powerful editing environment.

(At this point, the Emacs users in the room are of course laughing and pointing at me, but they in fact entirely substantiate my claim in doing so. And, my dear good friends, I've given not inconsiderable thought to actually joining you, as it seems that via Termux, a commandline environment for Android, emacs and all its capabilities are in fact available to me, and may vastly surpass the Android applications environment in capabilities. The fact that Viper is a well-established and long-standing component of the Emacs landscape means that the One True Operating System now does in fact have a useful editor.)

Chrome has other utterly unredeemable failures on Android, including an utter lack of ad-blocking capabilities. But for the task of composing and editing, this is a nice touch.

But it does raise one further point: why is editing via Web tools so abysmally poor?

Despite various deficiencies, the G+ app actually compares favourably to a number of other platforms, and virtually all Web-editable tools. Reddit and Ello stand out particularly. As much as I love the Reddit Enhancement Suite full-screen editor (it's a browser extension for Firefox and Chrome desktop), it's not available on Android. Meaning I've got to jump through Multiple Divers Hoops in order to compose long-form content on Reddit. Android's various content-creation deficiencies make this a tedious process. This accounts for some of my Diminished Output in recent months.

In particular, Firefox/Android has proven Exceptionally Capable at Losing My Shit, at least in memory not exceptionally distant (considering I've owned my present Samsung[tm] Infernal Device[r] only since October last), a characteristic which makes me Exceptionally Leery of Embarking on Enterprises of Extensive Prose Composition within that context.

Given the, shall we say, exceptional advancement of text-composition in other contexts, I find this particular failure mode of the Browser Development Community in General most unpardonable.


exabrial 3 days ago 0 replies      
THANK YOU!!!!!! Progress
imaginenore 3 days ago 0 replies      
smegel 2 days ago 0 replies      
So it takes 51 versions before common sense kicks in?

Have a pat on the back.

soperj 2 days ago 0 replies      
And i'll never use chrome again.
optforfon 3 days ago 1 reply      
Anyone want to place bets on how long till Firefox copies them?
kaonashi 2 days ago 0 replies      
Next up: remove form submit on return from textarea fields.
alexc05 2 days ago 1 reply      
> "We're doing this via a flag so that we can control this behavior should there be sufficient outcry."

I love that they decided to do this. I think the justification for taking it away is really good.

I also think that the decision to disable via "flag" shows some prescience with respect to how the public reacts to things.

Great move and a template for "sound product development".

IvanK_net 3 days ago 1 reply      
I wanted to use Ctrl+N, Ctrl+O and Ctrl+T shortcuts in my webapp. I reported a bug 3 years ago https://bugs.chromium.org/p/chromium/issues/detail?id=321810 which is not fixed yet, but they have "fixed" Backspace ... that seems crazy to me.
Fast.com: Netflix internet connection speed test fast.com
626 points by protomyth  3 days ago   359 comments top 63
exhilaration 3 days ago 7 replies      
This is from Netflix, it downloads Netflix content and reports the speed back.

This is important because unlike your average Internet speed test (which ISPs take pains to optimize), there's a very real possibility that your ISP is happy to let your Netflix experience suffer - assuming they don't throttle it outright - as previously mentioned on HN:



CyrusL 3 days ago 6 replies      
Cool. I just redirected http://slow.com to https://fast.com.
finnn 3 days ago 7 replies      
For those hatin' on speedtest.net and wanting upload numbers, http://speedtest.dslreports.com/ and https://speedof.me/ have both been around for a while. The reason for fast.com is that it tests download speed from Netflix. ISPs can't prioritize it without prioritizing Netflix as well.
nlawalker 3 days ago 3 replies      
What I'd really love to see is this concept provided as a service by all of the big streaming/gaming/large-content-blob providers and aggregated into a single page.

I have absolutely no reason to believe that every well-known "speed test" app/site/utility out there isn't being gamed by my ISP. A speed test that showed me my actual streaming bandwidth from Netflix, actual download speed of an XX MB file from Steam, actual upload bandwidth to some photo-sharing service, and actual latency to XBox Live or some well-trafficked gaming service would be awesome.

bdwalter 3 days ago 1 reply      
Seems like this is really about training consumers to define the quality of their internet by their reachability to the Netflix CDN nodes. Smart move on Netflix's part.
gdulli 3 days ago 4 replies      
When I used to be a Netflix customer it was more the variability of my connection that was an issue and not its "speed" at a given optimal time.

Usually I could begin a stream without problems. But often while streaming (often enough for me to realize streaming was a bad experience) the bitrate dynamically dropped way down to a terrible quality in response to what I imagine were poor network conditions. Netflix no doubt sees this dynamic quality adjustment as a feature, and preferable to buffering, but I chose an HD stream and I'd rather even see an SD quality video that I could be sure would stay that quality than switching between HD and very low bitrate, fuzzy, artifacty video.

I don't blame Netflix for the quality of my connection, but streaming is just not as reliable as cable and it's not one of those Moore's Law type things where throwing more processing power or memory fixes the network issues.

vessenes 3 days ago 0 replies      
I like the idea of getting ISPs into internal conflict: the folks responsible for making sure that speed checks like speedtest.net run quickly will be fighting the folks responsible for throttling Netflix.

But, I think the throttling folks will ultimately win. In that case, I guess Netflix is laying out a good case for consumers to complain, so it's win-win.

ejcx 3 days ago 0 replies      
This is super awesome! It's a good speedtest that works on mobile, which I had not been able to find.

Funny thing is I found this in the source.

  <!-- TODO: add code to remove this script for prod build -->
  <!--<script>
    document.write('<script src="http://' + (location.host || 'localhost').split(':')[0] + ':8081/livereload.js?snipver=1"></' + 'script>')
  </script>-->
Not a big deal, but kind of funny.

kcorbitt 3 days ago 2 replies      
Really nice and easy to use -- the test starts way quicker than speedtest.net.

However, am I missing something, or does this only test downloading? I guess that makes sense for Netflix's use case, but I'm usually at least as interested in knowing my upload speed, because with typical asymmetric connections that can be a bigger bottleneck for video calls and content-production workloads.

victorNicollet 3 days ago 2 replies      
Very interesting, and it confirmed my suspicions that my ISP throttles me (or at least, tries to).

I'm using Numericable from Paris and got 18Mbps to Netflix, 40Mbps to their comparison test. By going through an SSH tunnel (which makes a 230km detour through Roubaix), I get 39Mbps to both Netflix and control.

I am rather surprised that the bandwidth loss caused by the SSH tunnel is so small.

jedberg 3 days ago 4 replies      
Oh man this is awesome. I can't wait till people start calling their ISPs claiming they aren't getting the speeds they pay for, only for the poor agent to have to explain how peering agreements work.
gregmac 2 days ago 0 replies      
Some observations about this:

For me, it's getting stuff from https://*.cogeco.isp.nflxvideo.net -- which indicates my ISP (Cogeco) is part of their Open Connect [1] program with an on-network netflix cache.

Other people are reporting downloads from https://*.ix.nflxvideo.net, which appears to be the Netflix cloud infrastructure.

It downloads data from 5 URLs every time, but their sizes fluctuate, something like ~25MB, ~25MB, ~20MB, ~2.2MB, ~1.2MB.

The contents of each response appear to be the same (though truncated at a different place), beginning with:

 5d b9 3c a9 c3 b4 20 30 b9 bc 47 06 ab 63 22 11
`file` doesn't recognize what this is.


Since it's https, ISPs shouldn't be able to easily game this (eg: make this go fast, but still throttle video content).

So one potential way would be to only start throttling after 25MB is downloaded (or after a connection is open for ~2 minutes): does anyone know how Netflix actually streams? If they have separate HTTP sessions for 'chunks' of a video, then presumably this wouldn't work.

They could see if a user visits fast.com and then unthrottle for some amount of time. I'm not sure if ISPs have the infrastructure to do a complex rule like this though (anyone know?). I also think this would be relatively easy for users to notice (anytime they visit fast.com, their netflix problems disappear for a while) and there would be a pretty big backlash about something so blatant.

[1] https://openconnect.netflix.com/en/
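The arithmetic such a test performs is straightforward. A hedged sketch (fast.com's actual implementation isn't public; this only illustrates the calculation implied by the multi-file download observed above):

```javascript
// Sketch of the throughput calculation a browser speed test performs:
// total bytes received across all parallel downloads, divided by wall time,
// converted from bytes/second to Mbit/s. This is an illustration, not
// fast.com's actual code.
function throughputMbps(totalBytes, elapsedSeconds) {
  return (totalBytes * 8) / elapsedSeconds / 1e6; // bits per second -> Mbit/s
}

// e.g. five parallel ~25 MB downloads all completing within 10 seconds:
console.log(throughputMbps(5 * 25e6, 10)); // 100
```

Because the files are fetched concurrently, the measurement reflects the aggregate of all open connections, which is why multi-WAN setups report their combined bandwidth.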

callmeed 3 days ago 1 reply      
Most interesting is comparing it to the ISP speed tests:




Fast.com is reporting about 1/2 the speed of these for me (2 seem to use the same Ookla speed test).

mofle 3 days ago 0 replies      
I made a command-line app for it: https://github.com/sindresorhus/fast-cli
zodPod 3 days ago 1 reply      
I'd bet this is a move to make the ISPs that are throttling them look bad. If people start using it to check their speeds, then, since they're downloading actual Netflix content, a throttling ISP will make the connection look slower than advertised, and more people will likely complain.

I like it. It's suitably evil!

danr4 3 days ago 2 replies      
This is good but my god what a waste of a domain name :(
janpieterz 3 days ago 3 replies      
Odd, on a dedicated 500 mbit line I've now gotten 6 different results, ranging from 350-500. Speedtest.net indicates a stable 500+ mbit line, downloads from very fast servers always max it out at 500 as well.

Besides stabilizing it a bit, getting the upload on there would be amazing; it's certainly a lot easier on the eye than speedtest.net.

mrbill 3 days ago 1 reply      
Interesting. Even over multiple tests, I get almost exactly 1/3rd the download bandwidth speed to NetFlix that I do testing with speedtest.net.
_jomo 3 days ago 2 replies      
I also like speedof.me which tests latency, download, and upload, but purely using HTML5/JS (unlike speedtest.net with its Flash app)
tigeba 3 days ago 4 replies      
Just for a reference point, I'm getting about 350 on Google Fiber in Kansas City.
smaili 3 days ago 4 replies      
Not to sound ignorant, but what's the point? Why would Netflix go through the trouble of acquiring what I suspect to be a fairly expensive domain just to show how fast one's internet speed is?
iLoch 3 days ago 1 reply      
Description would be nice for anyone on mobile who doesn't want to needlessly waste bandwidth.
nodesocket 3 days ago 1 reply      
While cool, I can't believe they bought and use fast.com for something so simple. Fast.com has to be worth some coin. Anybody have any idea what that domain is worth?
pazra 3 days ago 0 replies      
This is nice and great that it loads quickly with no bloat or distractions. Not sure about the domain name though, as it's not immediately obvious what the site is for.
stanleydrew 3 days ago 2 replies      
I'm pretty sure Google is about to release a speed test tool embedded directly into its SRP for speed-test-related queries.

Similarly to how they eliminated the need for third-party IP address checking tools by returning your actual IP address when you search for "what's my ip address".

pgrote 3 days ago 1 reply      
The amount of data netflix will collect from this is exciting! I can only imagine the stories it will tell once hundreds of thousands of people use it. It would be fantastic to see how the agreements between ISPs and netflix affect the data transfer rates.
isomorphic 3 days ago 0 replies      
I have multiple WAN connections (multiple ISPs). This actually (correctly) reports the aggregate download speed!

Obviously if they are "downloading multiple files," they aren't waiting for them to complete synchronously.

loganabbott 3 days ago 1 reply      
I prefer the speed test here: https://www.voipreview.org/speedtest No flash or silverlight required and a lot more details
danvoell 3 days ago 1 reply      
I feel like you could do more with this domain. Cool little tool though.
manmal 3 days ago 1 reply      
I have absolutely terrible Netflix quality on my Samsung TV sometimes, but it shows 68MBit here. Makes me wonder whether the firmware is to blame..
erickhill 3 days ago 2 replies      
Thanks Xfinity. For my home service fast.com should redirect to slow.com. 5.2 Mbps (it's sold at 50 Mbps with asterisks everywhere).
smhenderson 3 days ago 2 replies      
OK, so I get 48 on fast.com and decided to use the link to compare on speedtest.net. There I get 101 down, 112 up.

So while 48 seems very fast to me (I get 19 at work), it's a lot less than 101. Is Verizon throttling the connection or is Netflix not giving me more than ~50? At what point is the cap on Netflix's side and not the client connection?

athenot 3 days ago 2 replies      
This is interesting.

Test 1: on Comcast but connected to company's VPN: 48Mb/s

Test 2: on Comcast but not on the VPN: 11Mb/s

k4rtik 3 days ago 0 replies      
Is it inflating the results shown on Wi-Fi?

I am on an Early 2014 MacBook Air and my current link speed is 144 Mbit/s according to Network Utility, but fast.com shows between 210 and 230 Mbps on each run.

Speedtest.net results are consistent as before at ~38 Mbps, which is what I would expect from the routers around me.

lemiffe 3 days ago 1 reply      
Only downlink? For me uplink is more important, and I suspect for others as well (gaming/streaming).
jasallen 3 days ago 0 replies      
Wow, "fast.com" is one helluva valuable piece of DNS real estate Netflix is throwing at this.
IgorPartola 3 days ago 1 reply      
Yup, and it nicely confirms that (a) my Charter connection is in fact 65 Mbps down and (b) I can't get faster internet where I live.

Oh, and 5 Mbps up is just ridiculous. That's what I get with my business plan. Back up a TB of data to the cloud? Yeah, that'll take weeks.

vonklaus 3 days ago 1 reply      
My internet speed (according to fast.com) is 0. Adblock & uBlock off on the site & fast.com uses https. Not sure why it wouldn't be working, no VPN in middle. Anyone else having issues?

edit: Speedtest.net was ~38mbps down. Is a Netflix subscription necessary for this?

ahamdy 3 days ago 0 replies      
The download speed is absolutely incorrect. I live in a 3rd world country and have a 2Mb connection; I get a max download rate of 200kB/s, but fast.com is showing a download speed of 1.2Mb. I really wish it were true
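One likely source of confusion here is units: speed tests like fast.com report megabits per second, while download managers report kilobytes per second. Plain arithmetic (not anything from the site itself) shows the two figures are closer than they look:

```javascript
// Convert an advertised line rate in Mbit/s to the kB/s a download manager shows.
const mbps = 2;                 // advertised line speed
const kBps = (mbps * 1000) / 8; // 8 bits per byte
console.log(kBps);              // 250
```

So a 2 Mbit line tops out around 250 kB/s, and an observed 200 kB/s download rate is roughly what such a connection should deliver.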
zmitri 3 days ago 1 reply      
Speed is halved using fast.com (140 down) vs speedtest (287 down) and I'm currently on Paxio in Oakland http://www.paxio.com
kilroy123 3 days ago 2 replies      
Why doesn't Netflix just try to bypass ISPs by rolling out their own service?

Big ISPs are starting to cap data, to stop/slow-down netflix. They should just put out their own high speed service like Google.

nodesocket 3 days ago 0 replies      
Interesting looking at Chrome developer tools, lots of magic and interesting payloads.

The HTTP Via header is interesting, as it lists the AWS instance that served the request and its region: i-654a87b8 (us-west-2)

EpicEng 3 days ago 0 replies      
Well... I just found out that my connection went from 20Mb to ~120Mb recently. I have no idea when this happened and my bill hasn't changed.
myrandomcomment 3 days ago 2 replies      
On AT&T U-verse in Palo Alto area. $72 p/month for 24mb now with even more data caps.

Fast.com: ~23mb
Speedtest.net: ~38mb

Hum, I wonder which is right and which is the ISP screwing with traffic?

mrmondo 3 days ago 0 replies      
Just tried it on our 300/300Mbit link at work; lots of people working today, so it'll be under heavy use, but:

- Netflix: 240Mbit/s

- Speedtest: 293Mbit/s

known 3 days ago 0 replies      
dangson 3 days ago 0 replies      
Not surprisingly since this is downloading Netflix content, it doesn't work when I'm connected through Private Internet Access VPN.
narfz 3 days ago 0 replies      
is there a bandwidth cap? i constantly get 160Mbps but i know for sure that our office line can do way more. speedtest.net is always close to 900Mbps. maybe speedtest.net has an endpoint within the ISP backbone and netflix doesn't? or is it the peering between AWS and my ISP?
caludio 3 days ago 0 replies      
Mhh, I get consistently (much) lower Mbps with Firefox than with Chrome. Is that how it's supposed to be? Is it my network maybe?
martin-adams 3 days ago 0 replies      
Cool tool. I'd love to know the story behind how Netflix managed to use such a lucrative domain.
JustSomeNobody 3 days ago 1 reply      
How long until ISPs catch on and make sure fast.com is given a high priority?

I don't see how this will accomplish anything for Netflix.

vadym909 3 days ago 2 replies      
Wow- this is awesome. I hated speedtest.net
bodytaing 3 days ago 0 replies      
This is an awesome alternative to the other speed tests because it's very minimal and has a clutter-free UI.
mrmondo 3 days ago 0 replies      
Doesn't work at all well for me here in Melbourne on 4g. Netflix: 7Mbit, speedtest.net: 39Mbit
hacks412 3 days ago 0 replies      
Is this a way for them to optimize who they deliver faster streaming services to?
parfe 3 days ago 3 replies      
730mbps to fast.com while only 700mbps on speedtest.net (with 853mbps up).
techaddict009 3 days ago 1 reply      
No upload speed results?
arnorhs 3 days ago 0 replies      
man, i'd love to see something like this for twitch streams. i feel like i have problems with twitch streams at specific times per day.
wil421 3 days ago 0 replies      
On my laptop:

First test: 55mbps

Second: 35mbps

Third: 22mbps

Fourth: 22mbps

Speedtest: right at 36mbps every time.

It seems to be more stable on my cellphone.

philjackson 3 days ago 4 replies      
SPEED MEGATHREAD, post your speed/location/ISP below here:

44Mbps / London, uk / BT

jefurii 3 days ago 2 replies      
Yawn, only checks download speed.
developer545 3 days ago 6 replies      
It's surprising that people on Hacker News don't seem to understand the basics of how the Internet works.
Google Home home.google.com
574 points by stuartmemo  3 days ago   455 comments top 72
cheald 3 days ago 22 replies      
When I was younger, I dreamed of something like this. Voice control for my home! A Star Trek computer that I can interact with conversationally! I just say what I want and it happens!

Now, I just see an internet-connected microphone in a software black box which I can only interpret as a giant frickin' security liability. I want this, but unless it's open source top-to-bottom, I won't ever actually put one in my home. We know too much about how these things can be abused for me to ever seriously consider it without being able to verify for myself what it's doing and why.

t0mbstone 3 days ago 10 replies      
Please, please, please be a completely open, extensible platform...

I want to be able to control my Apple TV with my Google Home device.

I want to be able to control my Phillips Hue and LiFX bulbs.

I want to be able to build my own custom home automation server endpoints and point my Google Home commands at them.

I want to be able to remote start my car with a voice command.

I want to be able to control my Harmony remote, and all of the devices connected to my Harmony hub.

I want to be able to access my Google calendar.

I want to be able to make hands-free phone calls to anyone on my Google contacts.

If my grandmother falls, I want her to be able to call 911 by talking to the Google Home device.

I want to be able to ask wolfram alpha questions by voice.

I want to be able to have a back-and-forth conversation to arrive at a conclusion. I don't want to have to say a perfectly formulated command like, "Add an event to my calendar on Jan 1, 2016 at 2:00 pm titled go to the pool party". I want to be able to say, "Can you add an event to my calendar?", and then answer a series of questions. I hate having to formulate complex commands as a single sentence.

I want to be able to have a Google Home device in each room, without having to give each one its own wake-up word. Just have the closest one to me respond to my voice (based on how well it can hear me).

I want to be able to play music on all of my Google Home devices at the same time, and have the music perfectly synchronized.

This is my wish list. I am currently able to do more than half of these items with Amazon Echo, but I had to do a bunch of hacking and it was a pain in the ass.

If Google Home can deliver on these points, I would switch from Amazon Echo in a heartbeat.

koolba 3 days ago 4 replies      
RFP - Request For Project

1. Train Google Home to recognize Amazon Echo's voice as its owner.

2. Train Amazon Echo to recognize Siri's voice as its owner

3. Train Siri to recognize Google Home's voice as its owner

4. Kick start some kind of endless loop between the three of them.

frik 3 days ago 1 reply      
Google, thanks for shutting down Freebase.com on 2 May 2016. By taking it offline and using it (as the Knowledge Graph) for Google Home, you effectively locked out all competitors. WikiData is a far cry from, and a fraction of the size of, what Freebase was.

Freebase was a large collaborative knowledge base consisting of data composed mainly by its community members. It was an online collection of structured data harvested from many sources, including individual, user-submitted wiki contributions. Freebase aimed to create a global resource that allowed people (and machines) to access common information more effectively.


Google is using a lot of collaboratively collected data from the now-closed Freebase and from Wikipedia without giving anything back.

will_brown 3 days ago 2 replies      
When Windows10/Cortana was released my buddy attached a mixer/switch to his PC allowing him to wire input mics and sound speakers to every room in his house.

And though I can't see any personal use for such a device, he swears it has changed his life, even though the only thing I believe he does with it is tell Cortana to play Van Halen first thing when he wakes up.

protomikron 3 days ago 6 replies      
Ok, controversial opinion:

"[...] and manage everyday tasks"

What exactly do we want to automate at home? I think this whole home automation and smart home stuff is complete bullshit. Obviously there are some nice things, like "play me song xyz", but IMHO it is completely oversold. There just aren't that many things to automate at home.

And this does not mean that I think 640K are enough memory for everyone.

fizzbatter 3 days ago 1 reply      
I'm dying for an Echo / Home that is fully api friendly and allows custom keywords. I want to buy an interface to my own home assistant. I want a hackers friend.

Sure, offline-capable would be great too, but for now just give me the damn api hooks. :s

edit: Note that i believe Echo has a pretty good API. I just don't want to talk to echo haha. I want to talk to my system.

JarvisSong 3 days ago 1 reply      
ITT smart hackers asking for more features and noting the privacy implications. Unfortunately, this, Echo, and others are coming for the masses, the masses who have everything public on Facebook and won't really understand the issues until it's too late. Give it a few years and 'everyone' will have a Star-Trek-like home computer experience. What can we do to turn the tide in favor of privacy and security? Or do we just trust Google/Amazon will do the right thing?
deprave 3 days ago 2 replies      
A company that makes money by collecting and selling access to personal information about people is offering to put a microphone in your home.

If you need a product like this, for the sake of your privacy, buy an Echo.

izolate 3 days ago 6 replies      
Looks like something that should've been under the Nest brand. Whatever happened to that?
pbnjay 3 days ago 1 reply      
I find it odd that Google is going to take so long to get this out the door - "later this year" seems like ages. Did they start on the hardware that late?

Amazon has what, 6 months to get more competitive on the search/trivia front? or this is going to kill it.

free2rhyme214 3 days ago 2 replies      
Competition against Amazon Echo is always positive for consumers.
swalsh 3 days ago 1 reply      
As someone who runs a small ecommerce company i'm really hoping the next platform is open, and not owned by Amazon (or Google). I sell products where purchasing them would be fantastic via a voice interface. If Amazon owns it though, there's no way I'm going to get any fraction of that business. The ownership of these voice platforms is a huge risk for market competition. The voice interface naturally lends itself to "choose the first choice that fits my parameters, and let's go with it". If you say "Alexa, book me a taxi to the airport", Alexa chooses who takes you. Being the priority choice is a huge advantage for whoever wins that. It's just so much power in the hands of so few. It's the opposite of what the internet should've been.
grownseed 3 days ago 1 reply      
The page linked here is basically an ad with no content (yet it manages to have a scrollbar no matter the window size...). Tried to look for actual specs but couldn't find anything, does anybody have anything more substantial?

On another note, is there a way to just get some sort of remote microphone array (I think that's what it's called on the Echo) and set up Alexa/Google/Cortana/... directly on a PC?

zitterbewegung 3 days ago 0 replies      
I'm confident that Amazon won't kill off Alexa (due to its success). I am not so confident about Google Home: if it isn't widely successful, it could be killed off in the future just like Revolv, bricking the device. It is good that Alexa is getting competition though.
beilharz 3 days ago 1 reply      
This gives me a 404.
shogun21 3 days ago 0 replies      
I am impressed Amazon was able to make a new product category. It's only a matter of time before Apple announces their take on Siri Home.
xiphias 3 days ago 0 replies      
"Always on call" - it just brought back the worst memories of waking up at 3am
partiallypro 3 days ago 0 replies      
Microsoft, where are you? Cortana on a device similar to the chip Master Chief has, done right, could be incredibly popular. Especially since Cortana is on every platform and completely agnostic, unlike Google Home and Echo. Give it the same extensible API as Cortana has on Windows 10, etc., and it could be a home run. Don't let Google and Amazon eat your lunch here.

I do wonder though how Google/Microsoft/Apple will handle there being multiple instances of their devices able to take commands. So if I say "Hey Cortana" or "Ok Google" will each device have to sort of communicate with the other to only activate the one that is closest?

bbunqq 3 days ago 4 replies      
You too can bring a slice of 1984 into your home with this lovely crafted listening device!
enibundo 3 days ago 0 replies      
Does anyone else feel as this kind of stuff (I'd put it in the same bag as the apple Watch, and the amazon something) is completely useless?

Personally, I feel we need to use less technology in everyday life.

blabla_blublu 3 days ago 0 replies      
Competition in this space is welcome! Can't wait to see what sets it apart from Echo. Given Google's propensity to sell ads, it will be interesting to see if customers are willing to put a device like this in their house.

Reminded me of a humidifier for some reason - http://www.amazon.com/Aromatherapy-Essential-Oil-Diffuser-co...

struct 3 days ago 0 replies      
Looks neat, let's hope Google leads in 3rd party applications too and not just in appearance. Also interesting that they specifically gave a shout out to the Alexa team.
Roritharr 3 days ago 0 replies      
I love how they put the LG MusicFlow speaker in the Home presentation. I've been regretting that purchase for about a year now. I can rely on it not working 70% of the time: seemingly crashing, or creating its own mesh wifi even though it's plugged into ethernet or attached to my home wi-fi...

If they can't get the third party vendors to get their Google Cast integration up to the reliability level of a Chromecast Audio, they should stop supporting this.

bobwaycott 3 days ago 0 replies      
Can we get the link changed to https://home.google.com/? Non-HTTPS just 404s.
dmritard96 3 days ago 1 reply      
I think the most interesting thing in the echo and now google home narrative is that these are subsets of phones. Speakers, microphones and internet connections with only two substantial differences - they are powered 100% of the time and they have better speakers/acoustics. It will be interesting to see if those are substantial enough to overwhelm the obviousness of doing these through the phone in your pocket.
pluc 3 days ago 3 replies      
It's blocked in Canada.


evolve2k 3 days ago 1 reply      
There's no way I'm putting something like this, collecting data directly for Google in my house.

Anyone else have privacy concerns?

lazyjones 3 days ago 4 replies      
What's the business model for Google Home? Will it suddenly blurt out an advertising message in the middle of the night, or will it rather include subtle product placements in otherwise harmless answers?

Remember, it's made by a company that thinks it's appropriate to put text ads on the first spots of your search results, in increasingly confusing ways.

xenihn 3 days ago 0 replies      
Hey, I have that pasta strainer. The one that's being used to store citrus fruits for some reason...
gopher2 3 days ago 0 replies      
Yeah, I'm sticking with Echo because business model.
walrus01 3 days ago 1 reply      
How many months from release until the FISA court issues a secret order to turn one of these on 24x7x365 in a suspect's home, and stream the audio to the FBI "counter terrorism" people investigating a subject?
djloche 3 days ago 0 replies      
Voice controlled computer interactivity doesn't appeal to me, and double unappealing is the skynet factor to the whole thing.

Home automation doesn't need nor should it require signing over your privacy.

kristianc 3 days ago 0 replies      
The search queries that get sent to Google are probably the least interesting part of this to them. Sure, Google will get some additional search queries and be able to target you slightly better, but it's a rounding error in terms of the data they already have.

The interest in this on Google's side is on having a permanently connected 'listener' on your network to identify which devices you're running and when. If it's running through your WiFi network, Google is going to know about it.

mattmaroon 3 days ago 0 replies      
I love my Echo but it has a couple weak points, all of which could be solved by a competent platform. I can't, for instance, just tell Alexa to play new podcasts from my lists or directly from the net (except through TuneIn, which sucks.) It doesn't work with many home automation devices. Its AI is not that great when it comes to non-Amazon services.

I'm hopeful the Android platform will make this a better device.

imh 3 days ago 0 replies      
In what world is "Always on call" an appealing phrase?
jug 3 days ago 0 replies      
This is interesting but to be honest I already have this on my phone, which is with me not only in my living room, but even in the street.
wodenokoto 3 days ago 1 reply      
What is this? All I get is:

 404. That's an error. The requested URL / was not found on this server. That's all we know.

jredwards 3 days ago 0 replies      
Google Nope
theideasmith 3 days ago 0 replies      
The website is down now. For those who want to check it out, here's the link: https://web.archive.org/web/20160518173022/https://home.goog...
alexc05 3 days ago 2 replies      
I really don't want to come off as super negative here ... but am I the only one who finds this one UGLY?

Compare to some of the other devices from previous years, and competitors:




It sort-of looks like a cheap air freshener. Maybe it'd grow on me, but I kinda think it is ugly.

Someone should manufacture a range of "tchotchke skins"

https://www.google.ca/search?q=tchotchke&tbm=isch so it could sit on your counter and look like something that you'd be happy to mix in with the rest of your decor. (angels, golden lucky-cats, porcelain hands, googly-eyed-wooden-owls https://s-media-cache-ak0.pinimg.com/736x/78/b5/80/78b580270...)

Anything to stop that thing from looking like a plug-in air freshener, really.

gcr 3 days ago 0 replies      
Will they offer a rebate for this device to burned Revolv users?

Doing that would be a great gesture.

As it stands, I would be wary of purchasing one of these. How long would it last before Google tires of it?

ck2 3 days ago 0 replies      
So it's echo/alexa by Google?


Is there going to be a patent war?

machbio 3 days ago 0 replies      
Hope this is not as disappointing as Onhub, it would be helpful if they have a rich api to start with and not promise that the APIs are coming later...
sickbeard 3 days ago 1 reply      
Remember when voice commands for your computer came out? It was cool but nobody talks to their computer. They won't be talking aimlessly in their kitchen either.
ComodoHacker 3 days ago 0 replies      
Next step: chemically analysing your kitchen fumes and flavors in nearly real time to profile your gastronomic habits.
gambiting 3 days ago 0 replies      
In 4 years Google will drop support for it leaving you with a pretty paperweight. Not interested, not from Google.
exodust 3 days ago 0 replies      
That page is so simple, yet even Google devs are "powering" these simple pages with multiple JS files. Why? Is it laziness? Or just some belief that Angular is required now for even "hello world"?

When viewing source I initially thought 'great, a nice clean HTML page'... after all, it's just 3 images fading between each other and a simple form.

But then at the bottom we see Angular, Angular Animate, Angular Scroll, and a fourth Main JS file. Way to set an example Google.

conjectures 3 days ago 0 replies      
How is this different to having a smartphone on your person? Other than using an additional plug socket.
irrational 3 days ago 0 replies      
So to use it I have to get up and go to wherever it is plugged in? Why wouldn't I just use my phone which is always on me?
lamein 3 days ago 0 replies      
People don't care about their privacy anymore. Many of us do care about it, but we are not the majority.

This project relies on that fact.

mathpepe 3 days ago 0 replies      
When the danger is so near we admire the foresight of those warning about it. Kudos to the FSF.
raajg 3 days ago 0 replies      
Another Amazon Echo. Not at all interested.

I wish there was a text box:

Please never send me the latest updates about Google Home.

pbreit 3 days ago 0 replies      
Please support 3rd party streaming audio.
paulftw 3 days ago 0 replies      
Amazon has Echo, Google has Nest and now Home.

What could Apple's Project Titan be if not a smart home device?

swasheck 3 days ago 0 replies      
kinda looks like my wife's essential oil diffuser. it'd fit right in if i wanted one.
sgnelson 3 days ago 0 replies      
How long before we find out the NSA has access to this and the Amazon Echo?
ilaksh 3 days ago 0 replies      
Only a one-sentence explanation, unless I missed something. It's an Echo competitor.
Joof 3 days ago 0 replies      
Initially, after Snowden, I thought, "the government and governments around the world will crack down on this behavior now".

I was naive. Nobody cares. Now they viciously support such practices. As long as that exists, I can't buy into datamining devices. And it will always exist.

educar 3 days ago 0 replies      
Seriously, never in my wildest dreams did I think that technology would come down to this. Like many others, I dreamed a future where I could have an automated assistant at home. Just not this way! It's really all about ads and mining data, isn't it.
tempodox 3 days ago 0 replies      
Now, we can volunteer for the Big Brother experience.
csrm123 3 days ago 0 replies      
What happened to "Don't be Evil"?
King-Aaron 3 days ago 0 replies      
Sucked in, anyone who bought a Nest.
Kinnard 3 days ago 1 reply      
Why wasn't this done under Nest?
58028641 3 days ago 0 replies      
till google disables it ...
Oletros 3 days ago 0 replies      
And I suppose it will be another US only product/service from Google
dharma1 3 days ago 0 replies      
looking forward to replacing my Echo Dot with this
zozo123 3 days ago 0 replies      
bache 3 days ago 1 reply      
romanovcode 3 days ago 1 reply      
Haha, no thank you. I don't want Google to listen to everything I say in my house.

Next thing you know, it's going to tell me: "Smith! Put more effort in those crunches!".

jayfuerstenberg 3 days ago 0 replies      
I'm not so lazy that I can't hold my phone and google for something. Pass.
nkg 3 days ago 1 reply      
This morning a friend of mine got his gmail hacked, which means his Play, Maps, Music and everything got hacked also.

With Google Home, add your "everyday tasks" and voice history to this! ^^

Online tracking: A 1-million-site measurement and analysis princeton.edu
588 points by itg  3 days ago   268 comments top 28
randomwalker 3 days ago 11 replies      
Coauthor here. I lead the research team at Princeton working to uncover online tracking. Happy to answer questions.

The tool we built to do this research is open-source https://github.com/citp/OpenWPM/ We'd love to work with outside developers to improve it and do new things with it. We've also released the raw data from our study.

ultramancool 3 days ago 4 replies      
As soon as I saw these APIs being added I immediately dropped into about:config and disabled them. How the hell do these people think this is a good idea to do without asking any permissions?

Put these in your user prefs.js file on Firefox:

  user_pref("dom.battery.enabled", false);
  user_pref("device.sensors.enabled", false);
  user_pref("dom.vibrator.enabled", false);
  user_pref("dom.enable_performance", false);
  user_pref("dom.network.enabled", false);
  user_pref("toolkit.metrics.ping.enabled", false);
  user_pref("dom.gamepad.enabled", false);

Here's my full firefox config currently:


Privacy on the web keeps getting harder and harder. Of course this should only be used in conjunction with maxed out ad blockers, anti-anti-adblockers, privacy badger and disconnect.

We need browsers to start asking permission. When you install an app on Android or iOS it says "here's what it's going to use, do you want this?". The mere presence of the popup would annoy people and discourage sites from using these APIs.

brudgers 2 days ago 2 replies      
Google has a vested interest in information leakage. I have a suspicion that the Chromium project expresses a strategic desire to shape the direction of browser development away from stopping those leaks. The idea of signing into the browser with an identity is a core feature and in Google's branded version, Chrome, the big idea is that the user is signed into Google's services.

Google only pitches the idea of multiple identities in the context of sharing devices among several people: https://support.google.com/chrome/answer/2364824?hl=en - and even then doesn't do much to surface the idea: https://www.google.com/search?hl=en&as_q=multiple+identities...

rdancer 3 days ago 3 replies      
This is the kind of nonconsensual, surreptitious user tracking that the EU privacy directive 2002/58/EC concerns itself with, not those redundant, stupid cookie consent overlays.
f- 2 days ago 0 replies      
Although the emphasis on the actual abuse of newly-introduced APIs is much needed, it is probably important to note that they are not uniquely suited for fingerprinting, and that the existence of these properties is not necessarily a product of the ignorance of browser developers or standards bodies. For the most part, these design decisions were made simply because the underlying features were badly needed to provide an attractive development platform - and introducing them did not make the existing browser fingerprinting potential substantially worse.

Conversely, going after that small set of APIs and ripping them out or slapping permission prompts in front of them is unlikely to meaningfully improve your privacy when visiting adversarial websites.

A few years back, we put together a less-publicized paper that explored the fingerprintable "attack surface" of modern browsers:


Overall, the picture is incredibly nuanced, and purely technical solutions to fingerprinting probably require breaking quite a few core properties of the web.

pmlnr 3 days ago 2 replies      
So... what we need is a browser which claims it supports these things but blocks them or provides false data on request, and which looks as ordinary as possible to "regular" browser fingerprinting.

Is anyone aware of the existence of one?

anexprogrammer 3 days ago 3 replies      
Colour me unsurprised. Disappointed though.

I'm glad I disabled WebRTC when I first discovered it could be used to expose local IP on a VPN.

These "extension" technologies should all be optional plugins. Preferably install on demand, but a simple, obvious way to disable would be acceptable. (ie more obvious than about:config)

Not a great deal can be done about font metrics, other than my belief that websites shouldn't be able to ferret around in my fonts to see what I have. It's not like it's a critical need for any site.

jimktrains2 3 days ago 6 replies      
NoScript is an all-or-nothing approach. Are there any JS-blockers that allow API-level blocks?
cptskippy 3 days ago 1 reply      
All of this makes me wonder whether some of these interfaces should be more closely guarded by the user agent.

Perhaps instead of a site probing for capabilities, they should instead publish a list of what the site/page can leverage and what it absolutely needs to work. Maybe meta tags in the head or something like the robots.txt. Browsers can then pull the list and present it to the end user for white-listing.

You could have a series of tags similar to noscript to decorate broken portions of sites if you wanted to advertise missing features to users and, based on what features they chose to enable/disable for the site, the browser would selectively render them.

kardos 3 days ago 3 replies      
So given this information, how can we poison the results that the trackers get?
codedokode 3 days ago 1 reply      
Some methods of fingerprinting are probably used to distinguish between real users and bots. Bots can use patched headless browsers that are masqueraded as desktop browsers (for example, as the latest Firefox or Chrome running on Windows). Subtle differences in font rendering or missing audio support can be useful for detecting the underlying libraries and platform. Hashing is used to hide the exact matching algorithm from scammers.

There are a lot of people trying to earn money by clicking ads with bots.

Edit: and by the way disabling JS is an effective method against most of the fingerprinting techniques.

wodenokoto 3 days ago 0 replies      
What annoys me the most is how many useless cycles these trackers use to track me.
MichaelGG 3 days ago 0 replies      
WebRTC guys get around this by stating fingerprinting is game over, so don't even bother. They ignore that they are going against the explicitly defined networking (proxy) settings. Browsers are complicit in this. If the application asks "should I use a proxy", then ignores it, silently, wherever it wants, that's deceptive and broken.

There are still zero (0) use cases for having WebRTC data channels enabled in the background with no indicator.

If all these APIs are added, the web will turn into a bigger mess than it is. They can't prompt for permissions too much. So they'll skip that, like WebRTC does.

ape4 3 days ago 0 replies      
Seems like browsers should ask the user's permission to use these html5 features. Then whitelist. For example, a site that does nothing with audio should be denied access to the audio stack.
pjc50 3 days ago 1 reply      
I think it's time for HTML--, which would contain no active content at all and simply be a reflowable document display format.
aub3bhat 3 days ago 1 reply      
There is an acceptable tradeoff between pseudo-anonymous access through browsers and non-anonymous access through native apps.

Interpreting this research as a reason for crippling the web or browsers would be a giant mistake. Crippling browsers will only work against users, who will then be forced into installing apps by companies.

Two popular shopping companies in India did exactly this: they completely abandoned their websites and went native-app only. This, combined with the large set of permissions requested by the apps, led to a worse experience in terms of privacy for consumers. As the announcement of Instant Apps at Google I/O demonstrates, the web as an open platform is in peril, and its demise will only be hastened by blindly adopting these types of recommendations.

Essentially, the web as an open platform will be destroyed in the name of perfect privacy, only to be replaced by inescapable walled gardens. Consider instead that the web allows a motivated user to employ evasion tactics while still offering usability to those who are not interested in privacy. Native apps, where Apple needs a credit card on file before you can install anything, offer no such opportunity.

I am happy that Arvind (author of the paper) in another comment recommends a similar approach:

"""Personally I think there are so many of these APIs that for the browser to try to prevent the ability to fingerprint is putting the genie back in the bottle. But there is one powerful step browsers can take: put stronger privacy protections into private browsing mode, even at the expense of some functionality. Firefox has taken steps in this direction: https://blog.mozilla.org/blog/2015/11/03/firefox-now-offers-... Traditionally all browsers viewed private browsing mode as protecting against local adversaries and not trackers / network adversaries, and in my opinion this was a mistake."""


makecheck 2 days ago 0 replies      
Over 3,000 top sites use the font technique, and from the description it sounds really wasteful (choosing and drawing in a variety of fonts for no reason other than to sniff out the user).

Each font is probably associated with a non-trivial caching scheme and other OS resources, not to mention the use of anti-aliasing in rendering, etc. So a web page, doing something you don't even want, is able to cause the OS to devote maybe 100x more resources to fonts than it otherwise would?

A simple solution would be to set a hard limit, such as 4 fonts maximum, for any web site, and to completely disallow linked domains from using more.

cdnsteve 3 days ago 1 reply      
After reading this it makes me want to disable JavaScript entirely, along with cookies, and go back to text browsing. I've been using Ghostery on my phone, it's been pretty good.
wyldfire 3 days ago 3 replies      
Whoa, what's the use case for exposing battery information?
radicalbyte 3 days ago 0 replies      
Of course this is something you'd do. Throw it together with all of the other information you can glean from a browser (referrer, IP) and you can get a match with a very high confidence level.

Shops can do the same with baskets: you find that people are identified either by one very rare feature that recurs often, or by their little graph of 4-5 items that correlates 99% with them.
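A toy version of that basket re-identification idea (the data is invented for illustration):

```javascript
// Count how often each item occurs across anonymous baskets; the rarest
// item in a basket is that basket's strongest identifying feature.
function itemCounts(baskets) {
  const counts = {};
  for (const basket of baskets) {
    for (const item of new Set(basket)) {
      counts[item] = (counts[item] || 0) + 1;
    }
  }
  return counts;
}

function rarestItem(basket, counts) {
  // Sort the basket's distinct items by global frequency, ascending.
  return [...new Set(basket)].sort((x, y) => counts[x] - counts[y])[0];
}

const baskets = [
  ['milk', 'bread', 'saffron'], // 'saffron' occurs nowhere else...
  ['milk', 'bread', 'eggs'],
  ['bread', 'eggs', 'milk'],
];
const counts = itemCounts(baskets);
const identifier = rarestItem(baskets[0], counts); // ...so it pins down shopper 0
```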

buremba 3 days ago 2 replies      
All these things make websites the new apps. Most probably we won't need many desktop applications a few years from now.
youaretracked 2 days ago 0 replies      
Since the original web-based ad campaigns were launched, we have been tracked. Serious web analytics companies know these tactics already.

So what exactly is the research contribution being made here? What's new and interesting?

chatmasta 2 days ago 0 replies      
If you want to see a live demo of all the ways your browser can fingerprint you, this is a great website: https://www.browserleaks.com/
id122015 3 days ago 0 replies      
I think it's similar to how the Absolute Computrace rootkit identifies Android and Lenovo devices. Each hardware component has a unique ID: your ethernet, bluetooth, even microphones and batteries.
coygui 2 days ago 0 replies      
Would it be more secure to use Tor than a traditional browser? The only drawback is the longer RTT.
jkot 3 days ago 1 reply      
Malware filtering is needed.
tomkin 3 days ago 1 reply      
Ahhh. Remember when this was just a Flash problem, and getting rid of Flash was going to rid the world of evil?

Spoiler: that didn't happen.

ysleepy 3 days ago 2 replies      
Well, who would have guessed. Surprise surprise.

The web is such a shit technology.

Firebase expands to become a unified app platform googleblog.com
529 points by Nemant  3 days ago   202 comments top 41
mayop100 3 days ago 19 replies      
(firebase founder here) I'm thrilled to finally be able to show everyone what we've been working on over the last 18 months! When I said "big things are coming" in the HN comments back when our acquisition was announced, I was talking about today : )

We're really excited about these new products. There are some big advances on the development side, with a new storage solution, integrated push messaging, remote configuration, updates to auth, etc. Perhaps more important, though, are the new solutions for later in your app's lifecycle. We've added analytics, crash reporting, dynamic linking, and a bunch more so that we can continue to support you after you've built and launched your app too.

I'd suggest reading the blog post for more info: https://firebase.googleblog.com/2016/05/firebase-expands-to-...

This is the first cut of a lot of new features, and we're eager to hear what the Hacker News community thinks. I look forward to your comments!

primitivesuave 3 days ago 4 replies      
Firebase is an incredibly powerful tool, and in a sense is a "democratizing force" in web development. Now anyone can build a complete web application without needing to know anything about setting up servers, content delivery networks, AWS (which is still quite difficult to use), and scaling. I teach kids as young as 10 years old to build iOS apps and websites with Firebase - they can develop locally and push to Firebase hosting with a single command. After exploring this new update, I can say with confidence that literally everything is easy-to-use now.

Whenever there is a Firebase announcement there are many replies along the lines of "this won't work for me because it's owned by Google, may be discontinued, doesn't have on-premise solution, etc". If these are your thoughts then you are missing the point of Firebase. It enables small web development shops like mine to focus on building beautiful web applications without having to give up manpower toward backend engineering. The cost of using Firebase is peanuts compared to the savings in employee hours.

Perhaps some day we will have to migrate elsewhere, but I find that possibility extremely unlikely, because the sheer amount of effort it took to create the Google-y version means this is a long-term play.

zammitjames 3 days ago 0 replies      
We were part of the Early Access Program for the expanded Firebase and used it to build our music collaboration app Kwaver. With the new features, they did a nice job of collecting a bunch of related mobile products (Analytics, Push Notifications, Storage, Authentication, Database, Deep Linking, etc) into a pretty cohesive platform, and it's saved us a bunch of time.

With Firebase Analytics we can track events, segment audiences (according to user properties: active, dormant, inactive) and take action according to the user segment. We are able to send push notifications (also using Firebase) to dormant male users who play the piano, for example. Another cool feature is Remote Config, which gives you the option to ship a number of separate experiences and track the user interaction. Like A/B testing, but way more flexible.

For us, the best product is the existing database product they had, as it really improves our user experience to ditch the 'pull to refresh' button and have our app respond to changes live.

We have been waiting for Google to provide developers a more complete mobile solution for a while now, and they've done it superbly through Firebase!

Feedback: it would be really cool if Firebase could implement UTM codes to be able to track user acquisition, and to be able to automate actions according to user properties.

Shameless plug: if you're a musician (or a music fan), we'd really appreciate it if you could download our music collaboration app, try it out and give us feedback. It's available for free on the App Store; the following link will redirect you there later today. http://kwaver.com

timjver 3 days ago 1 reply      
I love Firebase, but the Swift code in the iOS guide is of really low quality. For example (https://firebase.google.com/docs/database/ios/save-data#dele...):

  if currentData.value != nil, let uid = FIRAuth.auth()?.currentUser?.uid {
      var post = currentData.value as! [String : AnyObject]
      var stars: Dictionary<String, Bool>
      stars = post["stars"] as? Dictionary<String, Bool> ?? [:]
      // ...
  }
What this should really be:

  guard let post = currentData.value as? [String : AnyObject],
        uid = FIRAuth.auth()?.currentUser?.uid else {
      return FIRTransactionResult.successWithValue(currentData)
  }
  let stars = post["stars"] as? [String : Bool] ?? [:]
  // ...

chatmasta 3 days ago 2 replies      
Interesting that Google is doubling down where Facebook divested. The obvious difference is that Google has a cloud platform and Firebase is a funnel into it, whereas Facebook had nothing to funnel Parse users into.

I wonder if Facebook will ever launch a cloud platform. They've got the computing resources for it.

bwship 3 days ago 0 replies      
We've been using the Firebase platform for a while now. It's pretty cool to see them expand from 3 products to ~15 overnight. I'm most excited about their analytics and crash reporting. I must say that their system has been one of the best we have used in a long time, and I am really excited to see other aspects like analytics and ads being housed under this same umbrella, as I think it is going to help with development time overall. One area that I'd like to see improved, though, is a deeper querying language for the database; even better would be a way to automatically export the system in realtime to a Postgres database for better SQL-type analytics.
davidkhess 3 days ago 1 reply      
The concern I've always had with Firebase is the lack of a business logic layer between clients and the database. This tends to force the business logic into the clients themselves.

Trying to change the schema if you have Firebase clients deployed that can't be instantly upgraded via a browser refresh (i.e. iOS and Android mobile apps) seems an extremely challenging task.

ivolo 3 days ago 0 replies      
We used the original Firebase database product to build http://socrates.io/ 3.5 years ago, and I remember getting Socrates running in a few hours. I'm looking forward to seeing them up the bar on speed of development / ease for their next 10 products :) Nice work team!
mybigsword 3 days ago 4 replies      
Way too risky to use for startups. Google may discontinue this project at any time, and then you have to spend months rewriting everything for another database. If Google open-sources it and we are able to install it on premise and patch it without Google, that would be OK. So I would recommend using PostgreSQL instead.
fredthedinosaur 3 days ago 1 reply      
When will it support a count query? Right now, to be able to count the number of children I have to download all the data. Count is such an important feature for me.
fahrradflucht 3 days ago 2 replies      
I have built apps with Firebase in the past, and the feature I missed the most was performing scheduled tasks on the database. Now we are getting this BIG app platform update and this feature is still not in there. AWS Lambda with Scheduled Events it is, for a long time to come :sad-panda:
dudus 3 days ago 0 replies      
Even if you don't want to use any Firebase service you might still want to use it only for Analytics. Drop the firebase SDK in the App and you are done. Free, unlimited and unsampled Analytics reports for your App.



joeblau 3 days ago 1 reply      
I remember walking into Firebase's offices about 4 years ago, when it was 4 people on Townsend St in SOMA in a 300-square-foot shared office space. It's amazing to see how far they've come; congrats to the whole team.
skrebbel 3 days ago 1 reply      
As a current Firebase customer, I'm pretty thrilled about all this (especially since I was afraid Google would pull a Facebook here). However, there are quite a bunch of API changes and absolutely no info about how long the old JS library, endpoints, etc. are going to keep working. Should I get stressed out?
maaaats 3 days ago 5 replies      
This may be a stupid question, but: what do you use it for? Can't anyone basically edit the client code and do whatever they want with your data? I've only used Firebase for prototyping.
oceankid 3 days ago 1 reply      
The thought of reliable, managed hosting is interesting.

But how does one extend an app beyond storing and fetching data? What if you want to run a background job to send emails, parse a complex CSV, or create a custom PDF and write it to Firebase storage?

WalterSear 3 days ago 0 replies      
I'm in talks with a company regarding building an application for users in developing countries, where Android 2.0 is still the dominant OS version.

Firebase 2.0 looks like a great fit for their needs otherwise, but is the new SDK backward-compatible with Android 2.0?

albeva 3 days ago 0 replies      
I think services like Firebase are a very scary thing. Too much dependence on one vendor, too much black-box magic, too much logic that is beyond your control. And services like this contribute to a general dumbing down of software developers. We're heading towards a world of script kiddies, where HTML and JS rule and all complex logic is handled and controlled by service providers. Is it a good thing? You can deliver fast, but in the long term is it worth it?
pier25 3 days ago 1 reply      
So how would one address server side logic?

Like for example doing something with the data before sending it to the client?

blairanderson 3 days ago 0 replies      
From my experience with the new API, it's a little less intuitive and the documentation is worse. I think it's rad that Google invested a ton of resources into Firebase.

We have been super successful with Firebase, and are proponents of using it as a notification system rather than a datastore. That would be easy, but unwise. Use it to notify clients of changes so they can fetch data: read from Firebase, write to your own server/DB.

ddxv 3 days ago 0 replies      
This appears to just be a way to limit the growth of third-party tracking tools, which threaten Google by encouraging user acquisition from many sources.

I say this because they don't specifically say they will postback events to advertising networks other than Google's.

robotnoises 3 days ago 2 replies      
I don't think it was explicitly mentioned in the keynote, but it looks like they updated pricing:


Can't find the old pricing now, but it seems similar, just with fewer plan types.

Philipp__ 3 days ago 0 replies      
It looks like it is here to stay... But that surprise Parse shutdown will leave me asking, what if...
1cb134b57283 3 days ago 0 replies      
As a server engineer already having trouble finding a new job, how worried should I be about this?
mcv 3 days ago 0 replies      
I intend to use Firebase as at least a temporary backend while developing my app. Maybe I'll move to a real server later, but during development it's really easy to just have some place you can shoot json at. And I can always add interaction later by having some other application listen to it.

I don't really need the actual realtime communication stuff all that much (though it might turn out to be useful), but just a lightweight place to store json is really useful.

wiradikusuma 3 days ago 1 reply      
For the Firebase/Google Cloud Platform engineers: does this mean Google Cloud Endpoints is being phased out? If I'm already using Google Cloud Endpoints, should I move to Firebase? What's the advantage?
tszming 3 days ago 0 replies      
The biggest problem with any Google cloud service nowadays is that you don't know whether it was, or will be, blocked in China. Of course, that's fine if you don't care about users in China.
aj0strow 3 days ago 0 replies      
I've had only good experiences with Firebase. They added an HTTP API, web hosting, multiple security-rule preprocessors (a pain point), and got faster and cheaper. Yeah, only good things.
robotnoises 3 days ago 0 replies      
Not expressly mentioned anywhere that I've seen: the Free plan now includes custom domains + SSL cert. Under the previous firebase.com, that was $5 a month.

Sounds good to me!

intellegacy 3 days ago 1 reply      
Is there a tutorial that explains how to set up a backend for user-taken videos for an iOS app?

One thing I liked about Parse was that its documentation was newbie-friendly.

gcatalfamo 3 days ago 1 reply      
Can somebody explain the new Firebase reframing towards GCP? Maybe with another provider analogy? (e.g., Firebase is to GCP as Parse was to Facebook)
welanes 3 days ago 0 replies      
FYI, the new docs on data structure mention "rooms", which was an example in the old docs. It should read "messages" or "conversations": https://firebase.google.com/docs/database/web/structure-data...
kawera 3 days ago 1 reply      
Question: would Firebase be a good option where the desktop/web app is the main access point, with mobile being secondary (around 3:1)?
Kiro 3 days ago 1 reply      
I'm building a simple web app where I want signed-in users to be able to add a JSON object to a database and then list all JSON objects publicly. Only the user who created the object should be able to edit it. Is this a good use case for Firebase, or should I look into something else?
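For what it's worth, that access pattern maps quite directly onto Firebase's database security rules. A sketch, where the `objects` path and `owner` field are invented for illustration (not taken from the docs):

```json
{
  "rules": {
    "objects": {
      ".read": true,
      "$objectId": {
        ".write": "auth != null && (!data.exists() || data.child('owner').val() === auth.uid)",
        ".validate": "newData.child('owner').val() === auth.uid"
      }
    }
  }
}
```

Anyone can list the objects, any signed-in user can create a new one, and only the stored `owner` can overwrite it afterwards.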
eva1984 3 days ago 0 replies      
Feels like the new WordPress/Drupal/CMS, just in the app space.
ssijak 3 days ago 1 reply      
What is the state of the AngularFire library? There are no guides for Angular in the new documentation. And when will AngularFire for Angular 2 be ready to use?
dmitriz 3 days ago 1 reply      
Is user email confirmation finally supported by Firebase? Last time I checked it wasn't.
themihai 3 days ago 0 replies      
"... and earn more money." Is this really necessary on the homepage? Sounds like a old misleading spam page
sebivaduva 3 days ago 0 replies      
For all of you looking for a real-time API platform that's open source and not owned by a cloud giant: come join us in building telepat.io
Blixz 3 days ago 3 replies      
So, still no offline persistence for JS. What a huge disappointment.
choward 3 days ago 1 reply      
Provide a self hosting option or GTFO.
Tesla Announces $2B Public Offering to Accelerate Model 3 Ramp Up bloomberg.com
399 points by dismal2  3 days ago   179 comments top 11
jboydyhacker 3 days ago 18 replies      
The big surprise here isn't that Tesla was doing an offering - it's that Goldman put out a huge research note 24 hours before the offering while actually participating in said offering.

Super bad form, and it just goes to show the community: don't trust investment bankers. Such bad form.

Animats 3 days ago 2 replies      
Well, $1.4 billion for Tesla, $0.6 billion for Musk personally, and an option for Goldman Sachs to get $0.21 billion.[1] Tesla stock is down in after-hours trading, but that doesn't mean much. If the stock is down significantly at the close tomorrow, the market didn't like this.

It's a legit offering. The company intends to build a big factory and make stuff. Real capital assets will be bought with that money. It's not to sell stuff at a loss to gain market share in hopes of raising prices later. (Looking at you, Uber.)

Tesla just hired Audi's head of manufacturing, Peter Hochholdinger. About a week ago, the previous two top people in manufacturing quit, right after Musk announced he wanted the production line running two years sooner. Maybe Hochholdinger can do it.

[1] https://www.sec.gov/Archives/edgar/data/1318605/000119312516...

jernfrost 3 days ago 11 replies      
Why do people keep spouting this nonsense that Tesla is losing money on EVERY car? They make money on every car; otherwise they wouldn't be selling any cars. They lose money due to their high R&D.
vessenes 3 days ago 2 replies      
This is not a surprise; there's an old saw that I think I first read in a Buffett annual report. It says that financing tends to alternate forms for companies in terms of what makes sense: debt -> equity -> debt -> equity.

Equity offering seems likely to be much cheaper than debt right now; Tesla has great mindshare among consumers, and lots of doubters on the professional investor side.

crabasa 3 days ago 3 replies      

  echo "Tesla to offer $1.4 billion shares, remaining to be sold by Elon Musk. Musk is exercising options to buy 5.5m shares and will boost overall holdings on net basis. Developing... " | wc
         1      30     176
News articles and tweets are converging at an alarming rate.

jgalt212 3 days ago 0 replies      
It's pretty obvious at this point that Tesla's number one product is their stock. Which makes it no different from a number of other high fliers.

At first they were an innovative car company. Then the stock price shot well above the level sustainable by an electric car company. Elon realized this, and then built the Gigafactory. We're not just a car company, we're a power company!

Now they are raising more equity off of an inflated stock price. I'd stay away from this one.

Not a total hater (Tesla cars are great), but one of these days Elon's moon shots and obsession with the stock price will catch up with him (he'll still be rich) and his investors (they may be significantly less rich).

11thEarlOfMar 3 days ago 0 replies      
It's neither here nor there, but I feel like Bugs Bunny in "High Diving Hare", and Musk just raised the platform another 50 feet:


marvin 3 days ago 1 reply      
From the press release, it appears that the capital raise is "only" $1.4 billion -- the remainder is Elon Musk selling shares to cover his tax liability for simultaneously exercising options from 2009. Hopefully 1.4 billion is enough.
slantaclaus 3 days ago 0 replies      
Tesla has a really great business. They're not just cars, they're batteries. Their home battery for storing solar energy is a huge deal at least in terms of future cash flows. Also, they're a white label supplier of batteries to companies like Toyota and Mercedes. Anyway--new long term TSLA shareholder here. Bought in at $205.
mjbellantoni 3 days ago 1 reply      
Anyone have thoughts as to why they're selling stock as opposed to issuing bonds?
syngrog66 2 days ago 0 replies      
When you have 375k $1,000 preorder deposits, it's probably an ideal time to raise investment.
Reason: A new interface to OCaml facebook.github.io
601 points by clayallsopp  4 days ago   283 comments top 50
Cyph0n 4 days ago 4 replies      
This looks very interesting. I've always had OCaml in mind but never actually got around to using it in a project. Facebook could have done a better job describing what exactly this is, but they do provide a good overview at the end of the page (strangely!) [1].

In summary, Reason [2] is a new language (correction: interface to OCaml) that shares a part of the OCaml compiler toolchain and runtime. I don't know of any language that uses a similar approach, that is, plugging into an existing compiler toolchain. I guess a reasonable yet inaccurate analogy would be Reason -> OCaml is like Elixir -> Erlang or Clojure -> Java.

I hope Reason can provide OCaml with the extra push needed to bring it into the mainstream PL space and more widespread adoption.

[1]: http://facebook.github.io/reason/#how-reason-works

[2]: https://github.com/facebook/reason

mhd 4 days ago 1 reply      
I hope this doesn't sound like trolling, but JavaScript's syntax is now a selling point? I kinda-sorta get the reason why people want an actual JavaScript stack on the backend, but I never heard that syntax/semantics brought people from e.g. Rails to Node.

Sure, OCaml isn't even the nicest syntax in the ML family, but I'm not sure whether that's worth it, especially considering that almost any "X-like" language often turns out to be an Uncanny Valley for "X" programmers -- close enough to make some frustrating errors.

e_d_g_a_r 4 days ago 3 replies      
I for one welcome the syntax. I run the OCaml meetup in Silicon Valley, and syntax is definitely an issue for newcomers. This makes it easier for other programmers to instantly jump into OCaml/ML rather than ask what `in` is or what `let foo = function` means, etc.

EDIT: Hosting a meetup this Friday at 6pm in San Francisco about Reason and how to instantly start using it: http://www.meetup.com/sv-ocaml/events/231198788/

jameshart 4 days ago 2 replies      
Wonder if this project has anything to do with Eric Lippert's move to Facebook (https://ericlippert.com/2016/02/08/facebook/ - Eric has also been producing a series of blog posts implementing a Z-Machine interpreter in OCaml to run mini-Zork on, starting here: https://ericlippert.com/2016/02/01/west-of-house/). Eric was on the C# compiler team at Microsoft and previously worked on JScript.
civilian 4 days ago 4 replies      
I know that it's common to have namespace collisions, but their logo is so similar to Reason magazine's. https://reason.com/
avsm 4 days ago 1 reply      
There's a screencast fresh off the presses on the info page at https://ocaml.io/w/Blog:News/A_new_Reason_for_OCaml

I'm finally going to switch away from my ancient nvi setup and use Atom instead! MirageOS recently moved all our libraries over to using the new PPX extension point mechanism in OCaml instead of the Camlp4 extensible grammar. This means that MirageOS libraries should be compatible with Reason out of the box -- so it'll be possible to build unikernels from a slick editor interface quite soon hopefully!

alex_muscar 4 days ago 2 replies      
Nice to see that OCaml is getting so much love at facebook. Unfortunately, adding a new syntax that's almost OCaml, but not quite, doesn't seem like such a great idea. While it might make the language accessible to more people, it runs the risk of fragmenting the community.

I know syntax is subjective, but some of the choices seem a bit odd. For example, declaring variants and using their constructors looks like Haskell, but the semantics is still OCaml. In Haskell, constructors are first class so they can be passed as functions, and partially applied. It makes sense that their declaration and use looks like function declaration and function calls. In OCaml they are not first class, that is, you can't pass them as arguments, or partially apply them. That's why it makes sense for the declaration to look like a tuple, and the use to look like a function applied to a tuple--well, somewhat, you can still argue that it's still confusing because you might expect to be able to apply the constructor to a tuple variable, but well, such is life :). Unless constructors are first class in Reason--it doesn't look like it from a quick scan through the docs--this particular syntactic difference is of dubious value, and, worse, it can be misleading to newcomers.

Also, changing `match` to `switch` seems gratuitous as well, and it also loses some of the meaning of the original. i.e. "I want to match this value against this set of patterns".

Finally, I know that using `begin` and `end` for blocks is verbose and Pascal-ish--which people seem to hate for some reason--but using { } for scopes looks out of place, and leads to awkward cases like this:

 try { ... } { | Exn => ... };
I don't mean for this to sound ranty, or like I'm picking on Reason. I think it's good that facebook is trying to spice things up in the OCaml community.

MichaelGG 4 days ago 4 replies      
I started off a bit skeptical with the <- renaming to =. Mutability should be rare enough that <- makes things stand out. But apart from that I think I rather like this syntax, on the whole. Not a fan of semicolons. It also makes me appreciate F#'s #light syntax (now its default). Using whitespace really clarifies stuff, and there's always in and ; for fallback.

What's OCaml's status with multithreading? Are there any proposals for more flexible operators, so there doesn't need to be different operators for different numerics? (F# solves this by allowing inlined functions.)

greyhat 4 days ago 1 reply      
The slowness in Firefox appears to be solely due to this:

  @media (min-width: 1180px) {
    body:not(.no-literate) .content-root {
      background-color: #fdfcfc;
      -webkit-box-shadow: inset 780px 0 #fff, inset 781px 0 #e7e7e7,
        inset 790px 0 3px -10px rgba(0,0,0,0.05);
      box-shadow: inset 780px 0 #fff, inset 781px 0 #e7e7e7,
        inset 790px 0 3px -10px rgba(0,0,0,0.05);
    }
  }
Removing it in the Firefox style editor restores normal performance.

Edit: And they have commented out the box-shadow! Hah.

TY 4 days ago 5 replies      
Ok, it might be the end of the day for me and I'm denser than usual, but I can't understand what this is. An OCaml-to-JS transpiler?

Checked this out, but the reason still eludes me: https://ocaml.io/w/Blog:News/A_new_Reason_for_OCaml (pun intended)

chenglou 4 days ago 0 replies      
I've worked on the Atom plugin for this, itself written in Reason and compiled to JS using js_of_ocaml: https://github.com/facebook/reason/tree/7f3b09a75cacf828dd6b....

Having worked with Reason, JavaScript, and the bridge between the two, most of my errors seem to fall on the JavaScript side. So I guess the type system's indeed working =).

hellodevnull 4 days ago 8 replies      
Site doesn't load in Firefox. Works in Chrome.
mseri 4 days ago 1 reply      
I love OCaml, but that's a really nice reshape of OCaml syntax! And apparently things will be interoperable. I am really curious to see where it goes.

EDIT: and they want to use and maintain compatibility with ppx. Great news

haches 4 days ago 0 replies      
If you'd like to play with Reason you can do it online here:


Of course, you can also create your own Reason projects.

ipsum2 4 days ago 1 reply      
Looking at http://facebook.github.io/reason/mlCompared.html it looks like regular OCaml, with a sprinkling of JS syntax.
Paul_S 4 days ago 3 replies      
Website fries the CPU (FF).
nikolay 4 days ago 1 reply      
Nice, but I always wonder why function is abbreviated as the longer unambiguous fun and not just fn?!
akhilcacharya 4 days ago 1 reply      
Do want to learn this - does anybody know any interesting projects that can take advantage of the OCaml ecosystem and functional aspects?
cwyers 4 days ago 2 replies      
I really wish they'd taken the pipeline (|>) operator from F#, if they were going to rework OCaml.
grhmc 4 days ago 1 reply      
I'm seeing "BUILD SYSTEMS RAPIDL" over here on Linux.
robohamburger 4 days ago 2 replies      
I took ocaml for a spin a couple months ago and compared to more recently created languages it seems a bit crufty.

If they can simplify the build system to be on par with something like cargo that would be swell.

Also: having rust style traits or haskell classes would be amazing. Also macros that aren't obscure and hard to use compiler plugins please :)

Hopefully it ends up being more than just questionable sugar around ocaml and actually adds some sorely needed language features.

honua 4 days ago 1 reply      
What problems would be well solved by Reason/OCaml?
bjz_ 4 days ago 2 replies      
Would be nice to see modular implicits like those that are being proposed for OCaml. It's a shame to not have any form of ad-hoc polymorphism.
oblio 4 days ago 4 replies      
Has anyone here built something say, over 10k lines in Ocaml? How is the development experience? IDEs, debuggers, linters, deployment, etc.
incepted 4 days ago 2 replies      
Interesting, but since they are designing a revised syntax, I wish they had gotten rid of OCaml's semicolons. These stand out in 2016.
konschubert 4 days ago 1 reply      
> A new, developer experience for rapidly building fast, safe systems.

The comma placement suggests that developer is an adjective for experience.

xvilka 4 days ago 1 reply      
It would be nice if they made it work on Windows platforms. There is already an issue for that[1]. It also depends on Windows support in OCaml itself and in opam[2].

[1] https://github.com/facebook/reason/issues/470

[2] https://github.com/ocaml/opam/issues/2191

SwellJoe 4 days ago 5 replies      
So, I know OCaml is impressively fast. And, I know OCaml is impressively terse ("concise" may be a more positive term). But, I wonder what would make one choose OCaml (or a variant of it like this) over some of the other new or old languages that exhibit some excellent characteristics for modern systems. In particular, a convincing concurrency story seems mandatory. I don't know enough to know if OCaml (or this variant) has a convincing concurrency story, and nothing on the front page of website tells me.

So, why do I want to learn this, rather than, say, Go or Elixir?

swuecho 4 days ago 3 replies      
Does it provide a usable standard lib? If so, I may try to use it in a side project.
ubertaco 4 days ago 4 replies      
As excited as I was to see a big new thing in OCaml-land, I have to say my excitement died down as I read on.

I don't really see most of the changes as improvements.

Having a different, explicitly-noticeable syntax for mutable updates is nice, because it calls out mutability (which should be used sparingly).

I don't see extra braces as necessarily an improvement, given that OCaml's local scopes are already quite unambiguous thanks to "let ... in". On that note, removing "in" and just going with semicolons removes another "smelly-code-callout" by making it less obvious what's imperative and what's functional.

I actually don't like ambiguity between type annotation and value assignment in my records. It's clear in current OCaml that {a: int} is a type declaration and {a = 1} is a value declaration/assignment. Moving to colons-for-record-values is at best a bikesheddy, backwards-incompatible change for change's sake, and at worst a breaking change that makes code less clear.

Speaking of making code less clear, how is "int list list" not clear? It's an int-list list. As in, a list of int-lists. So of course it should parse as "(int list) list". Why change to backwards annotations? Just to prevent existing code from working as-is, and making people used to reading ML spend extra brain cycles on remembering that your types read the opposite way?

And they make a huge deal out of their type for tuples being "(a, b)" instead of "(a * b)". Yeah, okay, I get it. It's not that big a deal, since people are used to reading product types as, well, products.

The other thing that seems weird to me is the need to change to a "fat arrow" instead of a "skinny arrow", again for no real reason. In fact, it just makes it more likely that you'll confuse it with a comparison operator. Nobody tries to type ">-", but people try to type ">=" all the time. You're just switching for the sake of switching, and it's not an improvement.

Their example code of their replacement for match...with is especially egregious. If you showed me the OCaml snippet and the Reason snippet unlabelled, I would think that the OCaml snippet is the new-and-improved version, since it's much more compact, much less noisy, and reads more like what it's trying to do ("match my_variable with either SomeValue x or SomeOtherValue y").

Another thing they make a lot of noise about is requiring fewer parens in some places. But then, they also require more parens in other places. So...okay? I guess? Not really a win.

And why rename equality operators? Are you really going to tell me that people prefer that their languages have "==="?

johnhenry 4 days ago 0 replies      
Wondering how, or even if, this compares to elm? http://elm-lang.org/
yegle 4 days ago 1 reply      
This is the new low of search engine unfriendly :-(
mark_l_watson 4 days ago 0 replies      
Reason looks interesting. I have had a 5 year run of alternating between really liking Haskell, and sometimes thinking that my own development process was too slow using Haskell. I am putting Reason on my try-it list.

Documentation suggestion: add examples for string manipulation.

cm3 4 days ago 1 reply      
I miss dead code elimination the most, especially when building code that uses Core.
breatheoften 4 days ago 1 reply      
Is Facebook using mirage or similar ocaml unikernel tool chain? Is part of the goal of reason to make a more approachable syntax available for authoring code that will run inside next-generation containers?
partiallypro 4 days ago 0 replies      
Does anyone else's Firefox absolutely slow to a crawl on this page?

Edit: just doesn't load at all on Edge. Does load in Chrome/Opera and surprisingly IE 12 but doesn't load the logo's font.

zem 4 days ago 2 replies      
i noticed this in the examples:

 | List p (List p2 (List p3 rest)) => false /* 3+ */
has the regular list destructuring in pattern match syntax been removed? that's pretty sad, if so - lists are the default data structure in ocaml, and it's worth retaining some special syntax for cons especially in pattern matches.

elcapitan 3 days ago 0 replies      
Is there an overview in which regard this differs from "classical" Ocaml?
stuartaxelowen 4 days ago 3 replies      
Can we please keep using parens for function invocation? Leaving them out hurts readability.
querulous 4 days ago 0 replies      
if this had come out five years ago i'd probably be all over it, but i think i'd rather just use rust at this point. different syntax but better safety and it's not like the ocaml ecosystem has a lot to offer
andrew_wc_brown 4 days ago 0 replies      
Everything reads like double talk. Not sure what I would want to use this for.
intrasight 4 days ago 0 replies      
Pretty disappointed that they'd release something that butchers Firefox.
molotok 4 days ago 0 replies      
Fry Firefox RAPID.
aerovistae 4 days ago 0 replies      
fixxer 4 days ago 1 reply      
Why rtop?
ulber 4 days ago 7 replies      
This page is completely unusable due to lag. From the other comments it seems this is FF specific. One would think FB would have the resources to test new pages at least on common browsers before publishing.

Edit: The fix came quickly though.

carapace 4 days ago 0 replies      
Another site that is useless with JS disabled. Nice work.
ClosureChain 4 days ago 0 replies      
I wonder if the people at Propellerheads will sue Facebook for using the name of their software https://www.propellerheads.se/reason
zump 4 days ago 2 replies      
Facebook just won't let OCaml die.
devit 4 days ago 5 replies      
It seems to me that Rust would be pretty much strictly better than this.

In particular Rust has similar syntax, seems to have all Reason's features plus the linear types and regions/borrowing that allow memory and concurrency safety while still being able to mutate memory and not being forced to use GC.

They are aware of Rust since they cite it in their page, so I wonder why they decided to create this instead of using Rust.

It would be nice if they explained this in the FAQ.

I guess it might be useful if you have an OCaml codebase to interface with but don't already know OCaml, but given the relative obscurity of OCaml that seems a pretty narrow use (and also Facebook isn't known to make extensive use of it, afaik).

Play Store and Android Apps Coming to Chromebooks googleblog.com
378 points by ojn  2 days ago   201 comments top 27
spot 2 days ago 6 replies      
from the post:

> Schools in the US are now buying more Chromebooks than all other devices combined -- and in Q1 of this year, Chromebooks topped Macs in overall shipments to become the #2 most popular PC operating system in the US*.

that's pretty amazing actually. congrats to google & the chromebook team!

radarsat1 2 days ago 4 replies      
I'm curious just on the technical side, what does this mean for the many apps that include ARM code? (i.e. apps that use the NDK) Will there be some emulation, or do apps generally ship with multi architecture?

Edit: Ok, the answer is, both. Thanks ;)

caffinatedmonk 2 days ago 6 replies      
I'm curious why they didn't mention something so game changing as this in the keynote.
sharms 2 days ago 3 replies      
This is a big move and will majorly impact desktop / laptop computing. Now the entire ecosystem of Android apps (even Microsoft Office, Snapchat, Photoshop Express) is going to be available, and arguably this platform is much more complete than say, Universal Apps (Microsoft
magnumkarter 2 days ago 3 replies      
This is great!!! I wonder if it will be possible to install the Play Store in Chromium OS. I know that Chromium has some support for installing Android .apk files.
hackaflocka 2 days ago 2 replies      
To the Googlers on here -- any idea when it'll come to Chrome browser on other platforms. I really hope Google doesn't artificially delay that to boost Chrome OS penetration.
bonaldi 2 days ago 3 replies      
No support for the original Pixel? It's more powerful than quite a few on the list. Damn.
stkoelle 2 days ago 1 reply      
Intellij for Android, would help a lot some developers ;-)
dharma1 2 days ago 0 replies      
I hope they will open source this, so we would get Android apps on other Linux distros too. That would be a great win for Linux app ecosystem
gvurrdon 2 days ago 0 replies      
Does anyone know how permissions would be handled? There are some Android apps I'd like to install on a Chromebook but I certainly don't want them to get access to my contacts.
chrisper 2 days ago 2 replies      
Is there a way to try out ChromeOS without owning a chromebook?
jimmcslim 2 days ago 4 replies      
Why are Chromebooks such a US phenomenon... Here in Australia retail availability is pretty dire. I wonder if this development might see that start to change?
pgrote 2 days ago 1 reply      
While this is a great step forward, I am disappointed in the list of chromebooks supported.

I looked over the list and cannot find a common thread as to what is supported and what isn't. Does anyone know?

My Acer C720 with an i3 isn't on the list, but my Toshiba Chromebook 2 with lesser specs is on the list.

pawelkomarnicki 2 days ago 0 replies      
Well, time to shred my "Samsung Chromebook", and maybe get something newer or just give up with this "gazillion of models and revisions" bullshit I hated about Windows years back :/
ralmidani 2 days ago 0 replies      
Hopefully this leads to the release of ARM devices with more than 32GB of storage.
jbigelow76 2 days ago 3 replies      
I'd be more interested in seeing Electron apps on ChromeOS before Android apps. Not expecting that to happen, mind you; Electron on ChromeOS probably does nothing to move the Google ecosystem forward.
asimuvPR 2 days ago 0 replies      
Google: What does this mean for ARC users?
headmelted 2 days ago 1 reply      
Obviously there's no-one in the world that didn't know this was coming, but even so, I feel for the Remix OS guys.

I assumed at the time their objective was to be acqui-hired by Google, but I can't see why there would be a reason for that now, or how they'd hope to compete in this situation.

Congratulations to the Chrome O/S and Android teams. I was briefly on a Chromebook when my laptop packed in, and but for the absence of solid developer tools, I'd have stayed forever. There's a lot to be said for convenience.

genieyclo 2 days ago 2 replies      
After the Android Chrome app gets extensions, what's the point of keeping ChromeOS alive? It's the only thing Android's missing that ChromeOS has.
headmelted 2 days ago 0 replies      
For anyone that hasn't yet played with a Chromebook and is interested in this, x86 builds of Chromium OS:


This isn't exactly the same (no play store yet), but it'll let you get a feel for the OS and its merits.

koolba 2 days ago 3 replies      
Will apps run natively on Chromebooks or will my fart app slow down because it's being emulated?
jimjimjim 2 days ago 2 replies      
year of the linux desktop?
superobserver 2 days ago 0 replies      
This is really great news and I hope they execute this right. As liberating as crouton is, I still find myself wanting Android apps for the ease of access.
TazeTSchnitzel 2 days ago 1 reply      
Coming soon: Chrome OS made into merely an alternative Android home screen, and Chromebooks becoming Droidbooks.
_pmf_ 2 days ago 0 replies      
There's virtually no app that I feel thrilled to use on my laptop.
ncr100 2 days ago 0 replies      
I assume Google IAB will be supported on Chromebooks, too?

Cross-device purchase restoration, etc?

dandare 2 days ago 0 replies      
Finally Sonos on Chromebook!
Academics Make Theoretical Breakthrough in Random Number Generation threatpost.com
382 points by oolong_decaf  4 days ago   157 comments top 23
tptacek 3 days ago 0 replies      
I'm sure this is as important to computer science as the article claims, but not having even read the paper I can say pretty confidently that it isn't going to have much of an impact on computer security. Even if it became far easier to generate true random numbers, it wouldn't change (a) how we generate randomness at a systems level or (b) what goes wrong with randomness.

Our problem with cryptography is not the quality of random numbers. We are fine at generating unpredictable, decorrelated bits for keys, nonces, and IVs. Soundly designed systems aren't attacked through the quality of their entropy inputs.

The problem we have with randomness and entropy is logistical. So long as our CSPRNGs need initial, secret entropy sources of any kind, there will be a distinction between the insecure state of the system before it is initialized and the (permanent) secure state of the system after it's been initialized. And so long as we continue building software on general purpose operating systems, there will be events (forking, unsuspending, unpickling, resuming VMs, cloning VMs) that violate our assumptions about which state we're in.

Secure randomness isn't a computational or cryptographic problem (or at least, the cryptographic part of the problem has long been thoroughly solved). It's a systems programming problem. It's back in the un-fun realm of "all software has bugs and all bugs are potential security problems".

It's for that reason that the big problem in cryptography right now isn't "generate better random", but instead "factor out as much as possible our dependence on randomness". Deterministic DSA and EdDSA are examples of this trend, as are SIV and Nonce-Misuse Resistant AEADs.

(unsound systems frequently are, but that just makes my point for me)

hannob 4 days ago 2 replies      
While this may be an interesting theoretical result it almost certainly has zero practical implications for cryptography.

We already know how to build secure random number generators. Pretty much every real world problem with random numbers can be traced back to people not using secure random numbers (or not using random numbers at all due to bugs) or using random number generators before they were properly initialized (early boot time entropy problems).

This random number thing is so clouded in mystery and a lot of stuff gets proposed that solves nothing (like quantum RNGs) and stuff that's more folklore than anything else (depleting entropy and the whole /dev/random story). In the end it's quite simple: You can build a secure RNG out of any secure hash or symmetric cipher. Once you seeded it with a couple of random bytes it's secure forever.
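The "secure hash as RNG" construction the comment describes is small enough to sketch. This is a toy illustration, not a vetted design: the class name, the counter ratchet, and the domain-separation prefix are my own choices, and real code should use os.urandom or a standardized DRBG.

```python
import hashlib

class HashDRBG:
    """Toy deterministic generator built from SHA-256: seed it once with
    real entropy, then ratchet a counter through the hash.
    Illustrative only -- use os.urandom or a vetted DRBG in production."""

    def __init__(self, seed: bytes):
        # Derive an internal key from the seed (prefix is an arbitrary
        # domain separator chosen for this sketch).
        self.key = hashlib.sha256(b"drbg-init" + seed).digest()
        self.counter = 0

    def random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            self.counter += 1
            out += hashlib.sha256(
                self.key + self.counter.to_bytes(8, "big")
            ).digest()
        return out[:n]

drbg = HashDRBG(b"thirty-two bytes of real entropy")
print(drbg.random_bytes(16).hex())
```

As the comment says, the entire security of this rests on the initial seed actually being unpredictable; everything after that is just hashing.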

oolong_decaf 4 days ago 0 replies      
Here's a link to the actual paper: http://eccc.hpi-web.de/report/2015/119/
electrograv 4 days ago 3 replies      
> We show that if you have two low-quality random sources (lower-quality sources are much easier to come by), two sources that are independent and have no correlations between them, you can combine them in a way to produce a high-quality random number

"Independent and no correlations" sounds like a crippling assumption if you want to use any two deterministic PRNGs. How can you possibly guarantee they're completely un-correlated and independent without seeding them with collectively more bits of entropy than you can get out of the combined system?

I'm not sure what "independent" is even supposed to mean for a deterministic sequence, which by definition is recursively dependent.

beambot 4 days ago 3 replies      
Reminds me of the Von Neumann method of using a biased coin to generate unbiased random coin flips: http://web.eecs.umich.edu/~qstout/abs/AnnProb84.html

(Edit: not the algo itself, just the notion of combining randomness.)

deckar01 4 days ago 0 replies      
> Abstract:

> We explicitly construct an extractor for two independent sources on n bits, each with min-entropy at least log^C(n) for a large enough constant C. Our extractor outputs one bit and has error n^(-Ω(1)). The best previous extractor, by Bourgain, required each source to have min-entropy .499n.

> A key ingredient in our construction is an explicit construction of a monotone, almost-balanced boolean function on n bits that is resilient to coalitions of size n^(1-δ), for any δ > 0. In fact, our construction is stronger in that it gives an explicit extractor for a generalization of non-oblivious bit-fixing sources on n bits, where some unknown n-q bits are chosen almost polylog(n)-wise independently, and the remaining q = n^(1-δ) bits are chosen by an adversary as an arbitrary function of the n-q bits. The best previous construction, by Viola, achieved q = n^(1/2-δ).

> Our explicit two-source extractor directly implies an explicit construction of a 2^((log log N)^O(1))-Ramsey graph over N vertices, improving bounds obtained by Barak et al. and matching independent work by Cohen.


Dagwoodie 4 days ago 8 replies      
What makes randomness so hard? I had this crazy thought awhile back and wondering if it would work out:

Say you took a small disk-shaped object like a hockey puck with a window on it and you filled it with sand. 50% white sand and 50% black sand. Inside the puck would be blades that are attached to a motor and rotated slowly to constantly change the pattern. The pattern formed in the window would be truly random, wouldn't it? You could mount this to a PCIE card with a camera...

dave2000 4 days ago 2 replies      
What is the possibility that this is an attack on cryptography; convince people that it's safe to produce random numbers this way using an inaccurate "proof" and then have an easy/easier time decrypting stuff produced by anyone who uses it?
wfunction 4 days ago 1 reply      
Could someone explain why XORing the outputs of the two sources isn't optimal?
jaunkst 4 days ago 5 replies      
I have always wondered why not introduce physical randomness into cryptography. Let's take scalability out of the question and look at the problem at the fundamental level. If we used a box of sand that shifted each time a random number was requested and a camera to scan and produce a number from this source, would it not be more random than any other method? I'm not a professional in this field; I am just truly asking why not..
Cieplak 4 days ago 2 replies      
Does this imply that XORing /dev/urandom with /dev/random is a good practice?

PS: Thanks for clarifying @gizmo686. The arch linux wiki suggests that urandom re-uses the entropy pool that dev/random accumulates, so this is indeed a BAD idea.

I found this helpful as well:

Overall, their construction reminds me quite a bit of a double pendulum, which is one of the simplest examples of deterministic chaos.

Houshalter 4 days ago 1 reply      
I read the article and the comments and I'm still confused why this is important.

I mean it sounds trivial. Why not take the hash of the first random number, and xor it with the second random number. Then optionally hash the output and use that as a seed for an RNG. If any part of the process isn't very random, that's fine, it's still nearly impossible to reverse and doesn't hurt the other parts.
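The recipe in the comment is indeed trivial to write down; the catch is that its security argument leans on the hash behaving like a random oracle, whereas the paper's contribution is an extractor with an unconditional proof. A hypothetical sketch of the hash-and-xor idea (function name and mixing scheme are mine, not from the paper):

```python
import hashlib

def naive_combine(sample_a: bytes, sample_b: bytes) -> bytes:
    """Heuristic two-source mixer: hash both samples together, then XOR
    with a hash of the second sample alone, so neither input fully
    determines the output by itself. No proven extraction guarantees."""
    mixed = hashlib.sha256(sample_a + sample_b).digest()
    mask = hashlib.sha256(b"mask" + sample_b).digest()
    return bytes(x ^ y for x, y in zip(mixed, mask))

print(naive_combine(b"weak sample one", b"weak sample two").hex())
```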

marshray 4 days ago 1 reply      
How is this different than taking two independent bits with < 1 bit entropy and XORing them together to combine their entropy? (up to a max of 1 full bit)
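For i.i.d. biased bits the XOR intuition can be made exact: XORing two independent bits multiplies their biases (the "piling-up" identity), so it helps but never reaches a perfectly fair bit unless one input is already fair. A quick numeric check:

```python
def xor_bias(e1, e2):
    """Bias of (a XOR b) for independent bits with P(a=1) = 1/2 + e1 and
    P(b=1) = 1/2 + e2. Expanding P(a != b) gives 1/2 - 2*e1*e2, so the
    bias of the XOR is -2*e1*e2: biases multiply."""
    p1, p2 = 0.5 + e1, 0.5 + e2
    return p1 * (1 - p2) + (1 - p1) * p2 - 0.5

print(xor_bias(0.3, 0.3))  # about -0.18: far closer to fair, not fair
print(xor_bias(0.3, 0.0))  # 0.0: one truly fair input fixes the output
```

The harder setting in the paper is general min-entropy sources, where the bits need not be independent of each other within a source, so this per-bit argument does not directly apply.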
kovvy 3 days ago 0 replies      
How well does this handle a biased source of random numbers in one or more of the inputs? If someone has set up your random number source to be more easily exploitable (or just done a really bad job setting it up), does combining it with another poor source with this approach mean the results are still useful?
csense 4 days ago 1 reply      
"...if you have two low-quality random sources...you can combine them in a way to produce a high-quality random number..."

I tried to skim the paper, but it's really dense. Can someone who understands it explain how what they did is different than the obvious approach of running inputs from the two sources through a cryptographically strong hash function?

wfunction 4 days ago 4 replies      
Isn't "Independent and no correlations" redundant? How can two random variables be independent but correlated?
nullc 4 days ago 0 replies      
But can anyone extract the algorithm from the paper?


mirekrusin 4 days ago 0 replies      
Can someone explain why it's considered so hard to get randomness? I mean, you can take an old radio and hear random noise; is it hard to build a tiny antenna into the computer?
bootload 4 days ago 0 replies      
another article via UT (Uni. Texas), "New Method of Producing Random Numbers Could Improve Cybersecurity" ~ http://news.utexas.edu/2016/05/16/computer-science-advance-c...
Bromskloss 4 days ago 1 reply      
> A source X on n bits is said to have min-entropy at least k if

Can a rigorous definition of "source" be found somewhere?

nullc 3 days ago 0 replies      
But can anyone extract an algorithm from the paper? :)
roschdal 4 days ago 5 replies      
We show that if you have two low-quality random sources (lower-quality sources are much easier to come by), two sources that are independent and have no correlations between them, you can combine them in a way to produce a high-quality random number,

So Math.random() * Math.random() ? :)
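For what it's worth, multiplying two uniform samples makes things measurably worse than either input alone: the product is strongly skewed toward zero. A quick check (Python standing in for the JS snippet):

```python
import random

random.seed(1)
# The product of two uniform [0, 1) samples is not uniform: its density
# is -ln(x) on (0, 1), so values pile up near zero.
products = [random.random() * random.random() for _ in range(100000)]
below_quarter = sum(p < 0.25 for p in products) / len(products)
print(below_quarter)  # roughly 0.60; a uniform sample would give 0.25
```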

ninjakeyboard 3 days ago 0 replies      
praise RNGesus!
LinkedIn password leak kaspersky.com
395 points by trumpeter  2 days ago   204 comments top 34
kazinator 2 days ago 2 replies      
> If you're not sure how strong your password is, test sample passwords with our password checker here.

That is irrelevant in the face of leaked passwords; what matters most in that situation is that your password is something other than your leaked one.

If the passwords were leaked due to being stored in plain-text, no amount of complexity would protect them, obviously.

Don't use the same password on multiple sites. If your LinkedIn password is leaked, you don't want that same password to grant access to your bank account. That's just as important as how strong the password is, if not more.

If some site has suffered a password leak, and you're a user of that site, you must change the password on that site, and also on all other sites where you happened to use the same password. Do it as quickly as possible without worrying how strong the new passwords are. Then change later to stronger ones.

A password's strength is inversely proportional to how often you change it. For instance, if you happen to change a password every week (for the sake of argument---few people likely do), and it takes a month to crack on the best available hardware cluster, then you're probably okay. If you change only once a year, you're much less okay; a surreptitious password breach could happen, and two months of cracking later, the attackers have your password. Meanwhile, you're still months away from changing it, not knowing there had been a breach.

By the time users learn about a breach---if ever---they should assume that their passwords have been cracked, because some unknown amount of time has passed between the actual break and the discovery. The discovery will likely stem from the fact that some of the "lower hanging" passwords have been cracked and accounts start being misused. The site admins can then only guess from various circumstantial information (logs or whatever other breadcrumbs left behind) about when the leak might have occurred.

wglb 2 days ago 4 replies      
1: Change your password. RIGHT NOW. If you're not sure how strong your password is, test sample passwords with our password checker here. Seriously?

Keep in mind that these estimates are based on some bogus entropy estimation. If a password-cracking guy runs the correct dictionary past the hashes your password generates, the time to crack might be as small as, well, the first guess tried. For example, run the passphrase Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn1 past the Kaspersky brute-force estimator and you get 10,000 centuries. But this is clearly false, as indicated in http://arstechnica.com/security/2013/08/thereisnofatebutwhat.... They clearly "cracked" this in far less time: "in a matter of minutes".
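The gap between the two attack models is easy to put in numbers. Everything below is an assumption chosen for illustration (character-set size, dictionary size, guessing rate), not a measurement:

```python
import math

# Naive strength meters model an attacker brute-forcing the character
# space; a phrase lifted from a known corpus falls to a dictionary of
# candidate phrases instead.
charset = 70                  # rough printable-character pool (assumed)
length = 44                   # characters in the quoted passphrase
naive_guesses = charset ** length
dictionary_guesses = 10**9    # candidate phrases incl. mangling (assumed)
rate = 10**10                 # guesses/second on a cracking rig (assumed)

print(math.log2(naive_guesses))              # ~270 bits by the naive model
print(dictionary_guesses / rate, "seconds")  # dictionary attack: 0.1 seconds
```

Same password, two models: centuries by the meter's math, a fraction of a second once the attacker knows where the phrase came from.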

warrenpj 2 days ago 1 reply      
The best security that an individual can get from passwords is clearly achieved by using a password manager and generating a unique random password for each site, and changing high-value passwords periodically. (It's arguably already impossible for a human to generate or remember enough good passwords, and either way it gets harder as computers get better at guessing human-generated passwords.)

However, from the point of view of someone implementing an authentication system, passwords on their own are broken. There will be a significant fraction of users who re-use their password at a site with minimal-effort security. If you subscribe to the idea that computer professionals have a moral duty to safeguard people's private information entrusted to them, then password-only authentication is just broken.

The solution is either to spend the money to implement a multi-factor authentication system (with a secure password database and fraud detection) or to use a federated identity service (even just sending a one-time login code via email is fine). The latter is simple and takes even less effort than implementing a password system from scratch.

There should be fines (at the very least) for having an unsalted password database with more than X number of users.

stephenitis 2 days ago 0 replies      
useful tool to check your emails https://haveibeenpwned.com

https://haveibeenpwned.com/PwnedWebsites hasn't been updated with this one yet, because the list hasn't leaked in its entirety.

Also change your password: https://www.linkedin.com/psettings/change-password

benologist 2 days ago 3 replies      
I got an email from them this morning about this, it just smells like all their other junkmail begging me to +1 their active users.

Why don't they invalidate the passwords all at once instead of letting -- someone -- use the potentially compromised passwords again...

oxguy3 2 days ago 3 replies      
Woo, I created my LinkedIn profile in 2015, so I should be safe since the leak is supposedly from 2012. If anyone else isn't sure when they made their LinkedIn, you can see your join date here (ctrl+f "Member since"): https://www.linkedin.com/psettings/
Tharkun 2 days ago 2 replies      
I got an e-mail from LinkedIn today saying that I would be forced to reset my password upon my next login. They didn't say why. I guess this explains it.
zeveb 2 days ago 5 replies      
Aaaand that's why I use 'pwgen -s 22' to generate a unique password for every single site I use. I don't care if a salted password database is stolen; heck, as soon as I change my password I don't even care if a plaintext database is stolen.

Why -s? Because it means each password is a complete word, and may easily be double-clicked in a password list (which is nice, because selection is copy in X).

Why 22 characters? Because 22 mixed-case letters and digits are just over 128 bits of entropy.

Say it with me:

 pwgen -s 22
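The entropy claim checks out. A quick sketch of the arithmetic (assuming pwgen -s draws uniformly from the 62 mixed-case letters and digits):

```python
import math

alphabet = 62   # a-z, A-Z, 0-9
length = 22
bits = length * math.log2(alphabet)  # ~130.99 bits of entropy
```

Just over the 128 bits claimed; 21 characters would fall slightly short at about 125 bits.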

benzor 2 days ago 2 replies      
Question for the more security-savvy among you: If the leak happened in 2012 and I've changed my password since then (it's listed in your account page [1]), do I need to change it again?

Logic tells me I've got nothing to worry about, even considering potential password reuse, if they've all changed since then.

[1] https://www.linkedin.com/psettings/account

luso_brazilian 2 days ago 1 reply      
Considering the amount of "growth hacking" LinkedIn does (did?), sending too many emails to too many people, this breach can be much more dangerous than usual.

People raise eyebrows when they get phishing emails, but when one appears to come from LinkedIn itself, vouched for by your social and professional circle, it becomes much more credible and easier to fall for.

koyao 2 days ago 1 reply      
And LinkedIn is now asking me to enter my phone number:

"Add an extra layer of security to your account. Add your phone number."

Leaking my email / password is bad enough; I'm not going to give them my phone number so they can do more damage!

jasonpeacock 2 days ago 2 replies      
Also, why is the 2FA option hidden under "Privacy" and not right next to the Change Password option?

You'd think they would want to advertise 2FA better...

may 2 days ago 0 replies      
You can see how long you've been a LinkedIn member by going to your Privacy & Setting page, where it displays at the top.


joelthelion 2 days ago 4 replies      
Do we know how strong their hashing scheme was?

Edit: SHA-1... You'd think a site as big as LinkedIn would have strong hashing...

ryanlol 2 days ago 1 reply      
My theory is that this data leaked via custhelp.com, the filename of the data dump I have (linkedin.cfg) seems to support that.

This would also explain LinkedIn's initial "confusion" regarding the hack.

jedmeyers 2 days ago 2 replies      
> test sample passwords with our password checker here.

Do NOT do that with your exact password though :)

heartsucker 2 days ago 0 replies      
I'm going to use this to recommend a CLI for strong, memorizable passwords (if you're not using something like KeePass).


 $ pip install diceware
 $ diceware -n 8 -d ' ' --no-caps
 proton hunts blake 31 pope pivot taped plain
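For the curious, the numbers behind that command: each word drawn uniformly from the standard 7776-word diceware list contributes log2(7776) ≈ 12.9 bits, so eight words give roughly 103 bits. A sketch (the six-word toy list only illustrates the sampling; real security requires the full list):

```python
import math
import secrets

WORDLIST_SIZE = 7776                       # standard diceware list (5 dice)
bits_per_word = math.log2(WORDLIST_SIZE)   # ~12.92 bits per word
total_bits = 8 * bits_per_word             # ~103 bits for `diceware -n 8`

# Generation is just uniform sampling with a CSPRNG:
toy_list = ["proton", "hunts", "blake", "pope", "pivot", "taped"]
phrase = " ".join(secrets.choice(toy_list) for _ in range(8))
```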

vermooten 2 days ago 3 replies      
Who cares if their LinkedIn account gets hacked? In my case they'll be able to see 500+ recruitment agents I've never heard of as my 'contacts'.
electic 2 days ago 1 reply      
Folks, this is becoming a common occurrence. Use a password generator and password vault to protect against this type of scenario.
sleepychu 2 days ago 0 replies      
I'm pretty sure the right move for me is going to be to just delete my account. I mainly just receive recruiter spam from it.
open-source-ux 2 days ago 3 replies      
As someone who isn't versed in security issues, can anyone explain how security breaches like this one (and Adobe etc.) occur?

I'm assuming (and I may be completely wrong) that some kind of software monitors whether the database of customer details is being downloaded. If a download is detected, an alert is issued. Does software like this exist? Or are there other measures that guard against these data breaches?

tudorw 2 days ago 2 replies      
I've read zero reports of people breaking into houses, finding a piece of paper down the back of a cabinet with lots of passwords on it and no site names, then using those passwords randomly to gain access to an unknown system... A 'software' or 'online' password manager seems like a terrible idea: all your eggs in one convenient basket. If Sony and VISA and the NSA are unable to secure their systems 100% of the time, I doubt the maker of your password manager will fare much better over the long term.
JumpCrisscross 2 days ago 0 replies      
A useful HaveIBeenPwned feature would be a list of pwned passwords connected to my email address.

Yes, I know - don't reuse and use a password manager. But not everyone follows best practice. Knowing which password motifs to absolutely not reuse would be helpful.

awinter-py 2 days ago 0 replies      
beyond linkedin logins, they also have a zillion email passwords from the bad old days before oauth.
mkhpalm 2 days ago 0 replies      
What's interesting to me is that their spam emails to change your password showed up at a whole bunch of group email addresses I am a member of. So at some point LinkedIn went and harvested email addresses that reached my inbox and made a bunch of bad assumptions to include those as secondary addresses for me. I can only assume it was their mobile app, which is now forever uninstalled on all my devices. I simply cannot have them doing that.
jjm 2 days ago 0 replies      
For those that have forgotten, https://news.ycombinator.com/item?id=4073309

Back then there were issues. If I remember correctly, there was some nodejs even after this with no bcrypt.

gggggggg 2 days ago 0 replies      
Anyone know how I can get a copy of the list? I want to see if the email/password combination I used back then is still in regular circulation on other sites I use.
DyslexicAtheist 2 days ago 0 replies      
is it even verified that the data isn't again the warmed up stuff that surfaced from LinkedIn's 2012 breach? This is quite common these days.
noja 2 days ago 0 replies      
> test sample passwords with our password checker here.

And you just lost my trust Kaspersky, congratulations.

20andup 2 days ago 0 replies      
Just points out the fact that we should use password generators for all web sites that requires one.
misiti3780 2 days ago 2 replies      
this might be a dumb question - but if the password was unique to that account AND you have 2 factor auth enabled, is there any reason you need to change the password ?

So if some hacker somehow manages to reverse a salted bcrypt hash of my unique password, he still can't get in without my cell phone.
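Right - reversing a salted bcrypt hash means paying the full cost of the slow hash for every guess, against every user's individual salt. A sketch of why that's expensive, using PBKDF2 from the standard library as a stand-in for bcrypt (the iteration count is an illustrative assumption):

```python
import hashlib
import os

# Each user gets a random salt, so precomputed tables are useless and
# every guess must run the full, deliberately slow key derivation.
salt = os.urandom(16)
ITERATIONS = 100_000  # illustrative work factor

def slow_hash(password: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)

stored = slow_hash("hunter2")  # what the site keeps, alongside the salt

# The attacker pays ITERATIONS hash invocations per candidate, per user:
guesses = ["password", "letmein", "hunter2"]
found = next(g for g in guesses if slow_hash(g) == stored)
```

And even once the password is recovered, 2FA still demands the second factor - though changing the password anyway closes off other services where it might have been reused.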

adamredwoods 2 days ago 0 replies      
2-step authentication?
ILoveMonads 2 days ago 0 replies      
I'm amazed LinkedIn is as big as it is. They have a big, new building in Sunnyvale and lots of employees--too many, it seems, for a simple social network. I drive past their HQ a few times a week when I'm in Sunnyvale and see their employees, who don't look like other tech employees, waddling down the street to the McDonalds on the corner of Mathilda and Delray.
MikeJougrty 2 days ago 1 reply      
So basically, if I get interviewed by a company and I get asked why I don't have a LinkedIn account, is it legitimate to respond by saying that LinkedIn sucks in many different ways, including password breaches?
A language that compiles to Bash and Windows Batch github.com
386 points by onecooldev24  2 days ago   141 comments top 31
thinkpad20 2 days ago 3 replies      
The idea is interesting, but ultimately the utility of this seems limited. The differences between windows and Unix are more than just the shell language involved; shell scripts are typically deeply intertwined with the system in question, to the point where it's often not the case that a bash script will even run reliably across different Unix systems, much less on windows. Also, you can already run bash on windows, so once again the problem doesn't seem to be the language per se. I can only imagine how difficult it would be not just to design a script that would work properly on both platforms, but to debug the cases where it didn't.

Also, as others have noted this language doesn't support redirection, which in my mind makes it practically useless beyond toy applications. I've written hundreds of bash scripts and I don't think any of them didn't make heavy use of pipes and redirection. I'm also not sure if the language supports running processes in the background, traps, magic variables like $!, extra options, subshells, file globbing, etc, all things that many real-world scripts use. Bash scripts often use many programs available on Unix systems as well, such as find, grep, readlink, xargs, and other things that aren't part of the language per se. Unless those are ported over too, writing useful scripts would be almost impossible.

Finally, I don't think the author has made a convincing argument that such a language is even needed, when languages like perl/python/ruby exist for when the task is complex enough to require a more powerful language. On the other hand, if the project is (as I suspect) purely for fun and interest, then by all means :)

scandox 2 days ago 3 replies      
You've got to admire this and at the same time you can't help but see the shadow of two more letters on the end of its name...
BYVoid 2 days ago 3 replies      
I am the author of Batsh. Batsh is a toy language I developed 2.5 years ago in a hackathon. It was just for me to play with OCaml. Feel free to play with it at http://batsh.org/.
jtwebman 2 days ago 2 replies      
The project hasn't been worked on in over a year also it isn't the language that is much different but the tools. Are you going to write grep on windows? If not the language really doesn't matter.
legacy2013 2 days ago 2 replies      
This is cool, but I hope the new Linux Subsystem in Windows 10 will propagate enough that everything can be written in bash
xrstf 2 days ago 2 replies      
Is there a way to get this as a precompiled Windows binary? I really want to give this a try, but I'm just sooo lazy and don't want to install OPAM if there's maybe a way around that.
onedognight 2 days ago 0 replies      
I was disappointed to see that the compiler was not written in batsh and therefore not self hosting. Seems like a missed opportunity.
pm 2 days ago 2 replies      
Was hoping the website was a .it domain.
Someone 2 days ago 1 reply      
Weird that it doesn't seem to support basename (1) and dirname (1) and their Windows equivalents (e.g. %~dp0%)

Quite a few of my scripts need to find files relative to the location of the script, or compute the path to an output file by replacing the extension of an input file.

tomcam 2 days ago 1 reply      
I must have done it wrong. In the late 80s, I created a Windows batch-to-executable compiler called Builder (extending the language greatly, to the point of menus as language constructs). Made a decent living off it, too!
nikolay 2 days ago 0 replies      
I posted Batsh long ago, but it's totally impractical. The multiplatform PowerShell is the best option. Anyway, we need a new shell and NGS [0] looks like it. Neither zsh nor Fish offers what a shell scripting language needs, so that you don't have to reach for Python, Go, or another language for slightly more complex stuff!

[0]: https://news.ycombinator.com/item?id=11734622

ricksplat 2 days ago 0 replies      
A couple of slight nitpicks: isn't this more like a translator? I understand a "compiler" to be something that condenses high-level structures down to more fundamental CPU-oriented structures (as opposed to an "interpreter", which executes the structures as presented). If you're compiling into Bash or Windows batch, you're converting from one high-level representation into another, though you could debate whether these are technically "high level" or not. I will concede that this Batsh language does indeed look a lot nicer than either - it seems to be in the "general purpose" style of C or JavaScript rather than the more targeted style of bash or batch, which are specifically for the domain of operating system scripting. I guess this might mean it's actually lower level. The whole thing is pretty cool though :-)
moondev 2 days ago 0 replies      
Since docker is on windows now, this should run script.sh in current directory pretty easily.

docker run -v %CD%:/opt -w /opt debian:jessie ./script.sh

kelvin0 2 days ago 0 replies      
Cool stuff! Would have been useful for me when all I knew was C++! Unfortunately, I use Python for exactly this type of scripting. Oh and I also use it for stuff like Django, lxml, requests, wxPython, OpenCV, PyGame, NumPy, Reportlab ...
jheriko 2 days ago 1 reply      
how is this top item? seems unfinished, unpolished, not very useful and very run of the mill as an achievement.

no easily found windows installer and has obscure dependencies whilst claiming it has none etc. etc.

zwieback 2 days ago 4 replies      
Looks nice but wouldn't you just use something like Python nowadays?
juped 2 days ago 1 reply      
Excellent idea. The only thing I thought was "why not PowerShell", but XP compatibility is probably the reason. (It exists but isn't preinstalled for that version.)
fpoling 2 days ago 0 replies      
I like the idea of compiling to bash from a much saner language, even without Windows compatibility. With containers and slimmed-down OSes like CoreOS, Python and friends may not be available. Besides, for some tasks Python startup time is just unacceptable, so using Bash could be a good option if not for the awkward syntax, to put it mildly.
exabrial 2 days ago 0 replies      
"Bash is the assembly code of Unix?"
incepted 2 days ago 1 reply      
Isn't it ironic to write this tool in a language that is not available on Windows[1]?

[1] "Windows support is comming soon. "https://ocaml.org/docs/install.html

marshray 2 days ago 0 replies      
I was yearning for something like this literally just last night. Still am, tempted to give it a try.
chris_wot 2 days ago 1 reply      
I love the fact that someone, somewhere needed this, and then someone, somewhere created it.
niutech 2 days ago 0 replies      
There is also Pash: PowerShell + Bash (http://pash.sourceforge.net)
youjiuzhifeng 2 days ago 0 replies      
Cygwin provides another way to run the bash shell on windows. It is really helpful with full features of some commands like 'find' 'sed' 'awk'.
evincarofautumn 2 days ago 1 reply      
How should I pronounce the name? I read batsh as a homophone for batch [b], which is unfortunate. Is it supposed to be pronounced as two syllables [b.] or something like that?
eagsalazar2 2 days ago 2 replies      
I've never tried but can't you already just run your node or ruby or whatever scripts on both Windows, Mac, Linux? Is there some reason targeting bash and batch is important?
sedatk 2 days ago 1 reply      

 @echo off
 set java=/usr/bin/java
that's not how it works. that's not how any of this works.

knocte 2 days ago 0 replies      
Once GNU/Windows became a thing recently (https://msdn.microsoft.com/en-us/commandline/wsl/about), the idea of Batsh became obsolete to me.

And anyway, I have already been using cross-platform .fsx scripts for quite some time. KTHXBYE

emmelaich 2 days ago 0 replies      
This would be a candidate for the scripting language in redo

Paging @apenwarr

meunier 2 days ago 0 replies      
Opportunity to name this batshit wasted.
agjmills 2 days ago 0 replies      
Relevant XKCD: https://xkcd.com/927/
Open Whisper Systems Partners with Google on End-To-end Encryption for Allo whispersystems.org
329 points by ThatGeoGuy  3 days ago   212 comments top 14
robert_foss 3 days ago 7 replies      
To me it seems like Open Whisper Systems are accepting a lot of concessions in order to have Signal included into products. The trust I once had for moxie is quickly dissipating.

* Privacy is only provided in Allo in a secondary mode. Not by default.

* Federation of the Signal protocol has been rejected for non-technical reasons.

Also, on a personal note, the desktop client requiring chrome is pretty awful.

cm3 3 days ago 4 replies      
Has anyone given this


more thought and whether one should avoid Signal and work with a more friendly project that doesn't seemingly fail at its desire to have widespread use of the protocol and actually tried to sue WireApp? WireApp's now approved as a non-infringing implementation in Rust, so that's great for reliability.

Edit: The suing part was initiated by Wire as a response to Moxie demanding GPL compliance over their claim Wire is infringing. I got that backwards.

tptacek 3 days ago 7 replies      
This is fantastic news. The two largest messaging platforms on the Internet will both be using Signal protocol.

I could ask for more: E2E could be the default for Allo, and it isn't. That's not great. But the E2E you get when you ask for it will apparently be best-in-class.

Jarwain 3 days ago 2 replies      
What I'm curious about, and think would be really neat, is if one could take advantage of the shared Signal Protocol to send messages cross-platform. Specifically, sending an encrypted message to a Whatsapp user from Allo. Or to a Signal user from Whatsapp. Or any combination/permutation really.
Roritharr 3 days ago 2 replies      
I really wonder what the people of allo.im are thinking now.
NetStrikeForce 3 days ago 1 reply      
I'm not sure I got it right.

Is Google going to be scanning all my conversations to give me suggestions on what to say next? Really?

I understand the price of things like Gmail, where I get a robust email system in exchange of scanning my emails and mining my data. I got something very good from Google, they got my data. Not the best of the deals I ever made, but it has (had?) a strong appeal.

On the other hand I don't understand this Allo thing: There's no appeal in the smart assistant, it doesn't bring anything I want to have.

lawnchair_larry 3 days ago 0 replies      
For someone concerned about privacy, it's baffling to me that we'd be forced into sharing our phone number in order to communicate.
superkuh 3 days ago 3 replies      
Does this require a phone number like the rest of Open Whisper Systems products?
sigmar 3 days ago 0 replies      
Great news! Hopefully this also means that identity verification (through a key fingerprint) will be available in Allo (and in Duo?)
chinathrow 3 days ago 4 replies      
That is awesome - now we also need to kill metadata collection. Is this feasible?

Oh and off-the-record was there on Hangouts/Gtalk before - I used it but the chats were replicated across clients (e.g. Pidgin vs gmail.com) - so not really off-the-record (i.e. they lied).

mahyarm 3 days ago 0 replies      
I wonder how many other signal protocol integrations are in progress...
dang 3 days ago 1 reply      
Please don't do this here.
dang 2 days ago 1 reply      
Please don't do this. Personal attacks are not ok on HN, regardless of how wrong or annoying someone is.

We detached this subthread from https://news.ycombinator.com/item?id=11728339 and marked it off-topic.

p0ppe 3 days ago 5 replies      
Why didn't Google just develop this in house? It almost feels like they're admitting to having no credibility on privacy without an external partner.
Bootcamps vs. College triplebyte.com
384 points by kwi  2 days ago   427 comments top 82
brudgers 2 days ago 8 replies      
There are several factors that don't enter this analysis.

1. Bootcamps can be selective over a range of non-academic criteria such as interview skills, personal hygiene, and prior work experience. Or to put it another way, unlike a public university, a bootcamp can select for culture fit, both within its internal cohorts and in the workplaces it targets.

2. Bootcamps tend to attract people with previous work experience: someone more likely to have several years of working to keep a roof over their head than a recent CS grad. There's a difference between a junior programmer with their first real job and a junior programmer who has spent six years working crappy jobs [or good ones].

3. Bootcamps have much more latitude to train for employment and employability. Listening to Jeff Meyerson's hours of bootcamp love songs, those interviews have left me with the distinct impression that doing so is common.

4. Bootcamp grads may come out with a stronger alumni network that can provide recent feedback about interview processes like Triplebyte's. Going in with some idea of what's coming is likely to produce better results.

5. Bootcamps don't have to report their "failures". There's no independent oversight or accountability of the sort common in university education. A "C student" may simply find it impossible to graduate a bootcamp. The bootcamps are free to shape their "graduate" pool however they wish.

lloyd-christmas 2 days ago 5 replies      
How is this possible?

I think one key aspect that is missing is that boot camp graduates aren't straight off the barista lineup. I took one at age 28 after having worked in a technical role in finance since undergrad. The average age of my class was probably 29. Beyond just time in the workforce, I had a double major in math and economics with a minor in applied statistics. Had I dropped "Behavioral Economics" and taken "Data Structures" along with some other random course, I could have switched my Econ major to a minor in CS. Many people in my class were of a similar background.

madmax96 2 days ago 11 replies      
I have a slightly more pessimistic view of the situation:

Sure, bootcamp grads can write a web application just fine; after all, it's usually only CRUD. But what value are they bringing to an organization? Why would I pay them the same amount as a college graduate who undoubtedly has more total knowledge not only in CS, but in other areas as well? Ideally, a college should expose students to a diverse range of knowledge, each tidbit providing additional value to an organization. If I just wanted an application constructed, I could offshore the job and get it done cheaper.

Yes, a well-run bootcamp might be a better __coding__ education than a computer science degree, but coding is the easy part. There are other valuable skills that aren't being taught (i.e. the ability to communicate clearly, how to do research, how to learn independently) that make an organization strong.

We aren't in the coding business, we're in the building business. Code is simply a means to an end.

AlldenKope 2 days ago 1 reply      
Companies focus too much on attracting talent, not enough on developing it.

If both of these screened avenues of entry to software development are as promising as these metrics indicate (each with their pros and cons) here are some potential larger takeaways for companies:

1) Invest in the continuous development of your employees, regardless of their background and seniority

2) Hire for teams, and diversify teams with both CS and BC grads

3) Hire more people in general (maybe on a probationary period)

Fit to small teams with the goal of cultivating experientially diverse teams, and spend significant time developing employees - junior and senior.

Any intellectual work should involve continuous learning and development. If the company's focus is restricted to current projects, or on the bottom line, or if managers enforce strict division of labor, an organization will warp to optimize for those metrics and become less adaptable to inevitable changes in the market (or within the company) and the company will fail to compete - or at minimum incur major opportunity costs.

What these metrics suggest is that if you take relatively successful candidates and invest in their individual development, both in depth and breadth, that investment will pay off. You'll create engineers who find better solutions to problems and - more importantly - who find better problems to solve.

HNcow 2 days ago 6 replies      
I'm in the process of hiring a junior position and have no bias towards college grads or bootcamp grads. The only negatives towards boot camp grads I've seen so far is:

1) One candidate had no idea what the terms "Class" or "OOP" even meant. I'm FINE with them not understanding stuff like sorts/advanced data structures, but he ACTUALLY had 0 idea what an int was. No lie!

2) I wish there wasn't such a heavy reliance on MongoDB in most of these programs. Some do have SQL as well, but I feel like 80% of workplaces will be dealing with SQL, so I'm not sure what the focus on Mongo is all about if the purpose of these programs is to make you hireable. I think it's that it's an easier concept to relay since you're working with JSON everywhere already, but I've seen a bunch of people have a very strong bias towards Mongo to the point where they seem to not understand why you even would use SQL.

3) This part might get me in trouble here, but we are a small company in NJ and budgeting 50k for the junior zero-experience position. Most of these bootcamps in Brooklyn or Manhattan instill that you should be making 60k minimum and not even look at anything else. I disagree with that personally, but I realize it is possible for grads to make this (especially in NYC). I've just come across a few that scoff at the pay we offer, and I do understand it, but some of my higher-ups who don't really feel comfortable with the bootcamp concept don't think they are worth it.

Obviously there are a lot of pros with hiring them as well. I think typically they are the more qualified candidates skill wise. None of the ones we've come across have been a great fit so far though, but I think it's because of how close to NYC we are. These programs are based there, and we have trouble competing with the salaries there. That's why we have been having more luck finding college grads from the NJ area though, they don't have these kind of higher expectations.

kemiller2002 2 days ago 6 replies      
Boot camps have their place, but they are not a replacement for a traditional CS degree. I have met good and bad programmers from both types of programs (some from well respected colleges who I still wonder how they exactly passed), but here's the thing, I don't care about practical skills. I care about the person being able to think.

All those concepts that they teach in CS isn't about knowing the name of an algorithm, it's about thinking abstractly. I honestly don't care if a recent grad knows how to use IDE x or even much about source control. I can easily teach them that. I can't easily teach a person how to understand pointers or pass functions as parameters. I don't need someone who can write code; I need someone who can look at a problem and realize that we can cut the amount of work we have to do by understanding programming concepts at an abstract level. It is very hard to achieve this in a 12 week course. Can some people do this? Sure, they may have the background from a previous career that aids them in this, but they are the exception and not the rule.

ammon 2 days ago 6 replies      
I'm happy to answer any questions about this (I expect it to be controversial). When we started Triplebyte one year ago, I was pretty skeptical of bootcamps. Doing credential-blind interviews and seeing what some bootcamp grads can do, however, has won me over. Clearly there are a lot of bad bootcamp grads (and probably a lot of bad bootcamps). But the model is working really well at the top.
Jormundir 2 days ago 2 replies      
These results aren't very surprising because this is about interviewing performance. The goal of bootcamps is "teach you enough to get a job"; they're basically gaming the interview process by teaching to the test. University programs on the other hand are "teach you CS theory"; learning to interview well is up to the student and the specific school's offering of interview training.

I think there's a strong argument to make that university programs are too focused on theory, when the vast majority of their students are going to go out and get practical engineering jobs. I don't want the pendulum to swing too far to the practical side, though, because then you lose the long-term benefits of getting a CS degree. Although, schools can certainly buff up their practical material.

Anecdotally; when I participate in hiring, I tend to discount the bootcamp grads. Maybe it's unfair, but my experience hiring them has been that they know how to interview well, and know their tools well, but when you compare them a year in, they're pretty far behind their university counterparts. I see a plateau, where it's hard for a lot of bootcamp grads to move from doing generic web development to designing more challenging systems. Obviously it depends on the individual, but this seems to be a categorical struggle for bootcamp grads with little technical background. A lot of companies really just need more people doing web development, so being open to the bootcamp pool is essential, and ruling out bootcamp grads is silly.

jhchen 2 days ago 3 replies      
It was not long ago that Computer Science degrees itself faced a similar challenge, against more well-rounded liberal arts programs, championed and prided by the Ivy Leagues. Today MIT and Stanford are ahead by the strength of their more practical engineering degrees. The data from Triplebyte supports the same narrative, just in greater granularity: businesses value practical skills.

There is value in being balanced and diversity, but this applies to teams, not necessarily individuals. Not everyone on your engineering team needs to be an architect. After your globally distributed, fault tolerant, realtime, highly available system is designed, somebodys got to build it. And most startups or software teams have no business even trying to design such a system in the first place.

In the US, my generation was told we all needed four year degrees. We dont. Some jobs and some roles certainly but the entire population of future adults?

There is an engineering shortage in the US because everyone was too busy getting four-year degrees in more well-rounded fields. Meanwhile Apple needs tens of thousands of engineers who could have been trained by two-year vocational programs - programs the US apparently considered itself above for our children - and thus cannot meet its business needs.

And yet this data from Triplebyte is incredibly encouraging, because while we screwed up the educational policy, it may not be so difficult to fix.

caconym_ 2 days ago 1 reply      
It makes a lot of sense that bootcamp grads would outdo fresh college grads on "web system design"; they've presumably spent most of their bootcamp time focusing heavily on web systems. Stuff like load balancers/reverse proxies, distributed message queues, NoSQL DBs, etc. may be totally foreign to a lot of fresh college grads, while a bootcamp grad can probably be expected to have a not-too-shabby understanding of how those components fit together.

The "practical programming" bit is a little more depressing, though it does ring somewhat true based on what I've seen in real life. How people can spend 4 years programming and still consistently fail at building decent abstractions, I have no idea.

Also, where is the "neither" category? There are dozens of us... dozens!

WWKong 2 days ago 2 replies      
Go to college. Life is long, and it is not about passing your first interview. The real world is complex and ever-changing. The point of going to college is not to acquire coding skills to pass the interview; it is about facing real-world challenges: people, responsibilities, complicated decisions, uncomfortable situations, etc. And hopefully at the end of it you are better prepared to take on life. It is a harder path than going to a coder factory. Take the hard path.
felix_thursday 2 days ago 0 replies      
There's something to be said about a person doing a bootcamp. Not only is it a drastic career pivot, but choosing to invest in yourself like that is a huge sign of maturity, growth mindset, and awareness. It's no surprise that a bootcamp grad can quickly get up to speed in their first professional dev environment.

I did the WDI bootcamp through GA, and loved the experience. My motivations weren't to become a full-time web dev, but to become a much better, more well-rounded product manager. It's paid off 5x over so far.

There are a ton of garbage bootcamps out there, and it's unfair to lump them all together with the good ones; it's unfortunate that the bad ones exist at all. While you can't replace the deep technical and theoretical understanding you get with a classic CS degree, if your goal is to build web apps, do you really need the formal education, or can you learn that on the job?

bunnymancer 2 days ago 6 replies      
Bootcamper here,

Of course 3 months is going to get you running with a solid basic knowledge of your stuff.

In what world would low-level, algorithms and data structures be doable in 3 months?

Point is, I don't think Bootcamps and Colleges are comparable.

It's like comparing a woodworker and a forester.

There's a place for each, and they're not the same positions.

Now, here's my big question:

If your interview includes practical programming, web system design, algorithms, and low-level system design...

What in the nine hells are you hiring for?

Had it been for a trucker position, you'd be asking for a driving license, laws and regulations, engine design, and car physics.

For reference: https://i.imgur.com/sh7LJgj.jpg

morgante 2 days ago 2 replies      
I'd love to see some more mathematical analysis of these differences. In particular, I suspect that while the averages are similar, the distributions look extremely different.

Specifically, the average engineer out of either a bootcamp or college is pretty mediocre. But the top 10% of engineers are mostly college graduates and definitely not bootcampers. This is because the best developers are overwhelmingly passionate about development and have been doing it since high school. If you love programming, you might go to college to get a firmer academic standing. You definitely won't go to a bootcamp: if you've been programming for 5 years, a 3-week bootcamp makes no sense.

On the other hand, when it comes to the bottom tier I suspect bootcampers are a lot better. This is mostly because the bottom tier of CS graduates are atrociously bad. Regrettably, it is possible to graduate with a degree in CS without ever having written a single program by yourself. They slink by mostly through cramming for exams and "collaborating" with peers. My impression is that bootcamps are actually less tolerant of this behavior: you won't make it through a bootcamp without ever programming autonomously.
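A toy sketch of the distributional point above (all scores are invented for illustration, not Triplebyte data): two groups can share the exact same average while one has a far stronger top end and the other a far higher floor.

```python
import statistics

# Invented scores: identical means, very different spread.
college  = [40, 55, 62, 70, 95, 98]   # wide spread: weak bottom, strong top
bootcamp = [64, 66, 69, 71, 74, 76]   # compressed: higher floor, lower ceiling

assert statistics.mean(college) == statistics.mean(bootcamp) == 70
print(max(college) - max(bootcamp))  # 22: the top end favors the college group
print(min(bootcamp) - min(college))  # 24: the bottom end favors the bootcamp group
```

This is why an analysis of averages alone, as the parent suggests, can hide exactly the differences hiring managers care about.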

ogrev 2 days ago 1 reply      
This is basically a warning to every single person going through bootcamps right now: your skills are not special. You can be replaced with ease. Unless you differentiate yourself through what you learn, either on the job or after the camp, and demonstrate it through your work, your job will be kaput. That's basically what all of those Everyone Can Code advertisements were trying to achieve: make these skills a commodity.

Good luck.

harlanji 2 days ago 0 replies      
This is the most honest comparison I've read so far.

I dropped out of high school because I was making good money by 18... kept working, saw my own limitations, and did a BS degree in 3 years, graduating at 26. That was 5 years ago today, actually :)

I see this same distinction in practice, thanks Triplebyte for quantifying it. If I were staffing an engineering team, I'd absolutely take junior engineers from bootcamps and senior engineers with university backgrounds. I like the surgical model from The Mythical Man Month, and have seen elements of it working by hiring junior test engineers of varying technical backgrounds and training them.

I think a BS degree in CS makes a lot more sense when you're hitting the edge of your capability as an independent contributor--many may never need it, some will love going on a few year sabbatical and earning their 'piece of paper' (as I did).

The biggest factor that gave me an edge was that I had lots and lots of context for all the content of my classes, and I took notes every single day, Beginner's Mind style, and didn't try to test out of intro classes... even CS 101 with Scheme. I was also able to work on my mentoring/leadership skills with classmates.

avs733 2 days ago 1 reply      
There is a simple confounding variable here that unfortunately Triplebyte can't touch with a ten-foot pole: age and work experience.

College is largely about transitioning children to adults (we can argue that separately); the personal and professional development that students go through over 4 years is vast. They are becoming adults in many frames, including understanding the world and technology as systems. They aren't just learning to code; they are learning how to think.

To the extent that I know (warning: anecdata), bootcamps presume a lot more worldly knowledge, attract and expect more grown-up students, get students with a direct interest in web/software/apps, and are much more likely to get career-transitioning students (going by the people I know who have bootcamp'ed). Those students have a much broader knowledge base to build on, which will help them in some areas and hurt them in others. I would be curious whether Triplebyte has any data they can look at on that.

Simply said: a 22-year-old college student with a CS degree and a 35-year-old BC grad may look similar on metrics but function entirely differently as employees, in both the short and the long term. Caveat emptor: figure out what you need.

humbleMouse 2 days ago 4 replies      
I think a well-run bootcamp is a better coding education than college computer science. The only thing most college grads have on bootcamp people is algorithm knowledge. This is easy to fix: just teach algorithms in bootcamp. It really isn't that hard to understand.

Ideal bootcamp:

- Angular or any MVVM data-binding framework

- OOP and n-tier patterns

- Stored procedures/ORM/SQL training

- Web services: SOAP/REST

The college grads I work with tend to have written a couple shitty programs that don't really do anything, and their "final project" was hooking up a database to a business logic layer.

source: I have taught in bootcamps before and work with lots of new college comp sci grads now.

superuser2 2 days ago 0 replies      
If I were stranded on an island with the laptop I used in college and a power source, I'd have a pretty good idea of how to stumble through:

- A multithreaded UNIX-like operating system with user programs, system calls, and a filesystem, with reasonable (if not entirely optimal) caching strategies.

- A TCP/IP stack for that operating system.

- An authenticated, encrypted channel over my TCP/IP stack with forward secrecy, building up from a pseudorandom function to a stream cipher, RSA with OAEP, Diffie-Hellman, etc.

- Network services from the RFCs in C (we did a router and IRC).

- A high-level programming language with support for both functional and OO idioms based on the typed lambda calculus with recursion, lists, records, tuples, ref cells, subtyping, etc.

- A lexer, typechecker, and interpreter for that language using parser generator tools, a recursive descent parser, or a shift-reduce parser in a pushdown automata model.

- A formal specification of the evaluation and typing rules and a type soundness proof for that language.

- A distributed KV store with Paxos, Raft, or Byzantine Generals running on my encrypted channel and written in my language (we used 0MQ and were given a 0MQ broker that could be told to drop messages for testing purposes).

- Greedy, dynamic programming, network flow, and ILP algorithms with proofs of correctness and efficiency.

My class work repositories put me about three quarters of the way there.

I'm sure bootcamps can teach people enough to tread water in a dynamic language web framework, and that meets real business needs and adds real value. But college is a chance to go deeper.

I know nobody is paying us to build our own lightsabers. But - and call me old fashioned if you'd like - I think a professional ought to be able to build his own lightsaber anyway.

jedberg 2 days ago 0 replies      
> It backs up the assertion that algorithm skills are not used on the job by most programmers, and atrophy over time.

This was the most interesting part to me. I'd love to see more on this.

I've always found it silly to ask algorithm questions of senior engineers. There seems to be an exponential falloff of that knowledge as one gets further from graduation.

danellis 2 days ago 2 replies      
I swear, articles like this are going to cause me to have an existential crisis. I started learning programming as a child in the 80s. More than 30 years later, I like to think that I've acquired a lot of valuable knowledge and experience across a broad range of topics, and yet... when I hear about people training for three months and walking into decent jobs, I start to wonder what actually differentiates me at all.

For the sake of my ego, I'd love to hear that these bootcamp graduates have shallow, fragile knowledge in a narrowly focused area.

nappybrainiac 2 days ago 0 replies      
I'm not sure that a comparison between bootcamps and college is viable.

College is not just about learning to code. You also learn to deal with professors and how to get the best grades out of them. You figure out how much you can drink without the glaring hangover that interferes with your morning philosophy class. You sign those forms to get credit cards that haunt you till you have a job. If you're smart, choose a good college, and get really lucky, you might actually learn something and get a job after graduation.

Boot camps are about learning to code, creating networks, and passing interviews for tech jobs. You can't pledge, hang out with the furries, paint your face with your college colors for the football game at the weekend, or struggle to figure out if your summer course fulfills the requirement for your social science elective.

These two places of learning can peacefully co-exist and each one has its purpose.

I even think that it would be good for some CS grads to walk into a bootcamp to explore something new and expand their knowledge.

Bootcamp replacing college? I don't think so. Not till bootcamps have long lines of students trying to change their course selections at the registrar's office.

There are some options that lie somewhere in the middle...

dontscale 2 days ago 0 replies      
I think the debate about colleges vs. bootcamps is an apples-to-oranges comparison.

Algorithms are commoditized into libraries. Web design has been commoditized with templates.

Open-ended programming is still more complicated, but putting apps on the web today is easier than static HTML just 5 years ago. Parts of programming will continue being commoditized.

So if it's easy to create something and put it out there, the great and all-important challenge that faces developers today is making it matter.

danso 2 days ago 0 replies      
I wouldn't be surprised that a bootcamp grad could beat a college CS student in practical web knowledge. Stanford has a web applications elective, CS142 [0]...in the previous years, it focused on Rails [1]; this year, it moved to the MEAN stack. In both syllabi, a week is spent on learning HTML/CSS alone...this year, I believe they spend a couple weeks learning JavaScript.

This class is an elective, which means that students aren't expected to know HTML/CSS/JS before taking it, though the core CS classes (Java, C) are prereqs. This also means that students who don't take 142 could graduate without having any practical knowledge of web development.

That said, it's not because the CS students couldn't actually learn practical web dev, and as others have said here, the best bootcampers are often folks who have a STEM background already.

[0] http://web.stanford.edu/class/cs142/

[1] http://web.stanford.edu/~ouster/cgi-bin/cs142-winter14/index...

pbiggar 2 days ago 2 replies      
It's not discussed, but I would guess that the best CS grads beat the best BC grads, but average/bad BC grads beat average/bad CS grads.
lsadam0 2 days ago 2 replies      
> Bootcamp grads match or beat college grads on practical skills, and lose on deep knowledge.

I feel as though you are attempting to lower the bar of what is acceptable in order to sell something :). The word 'practical' is thrown around in this article without much of a definition. Are we talking about making simple web pages?

I've just finished conducting a round of interviews for a junior-level position, and based on this experience I highly doubt I will be considering bootcamp graduates in the future. As an example, for a question which involved sorting an integer array and providing a method GetElementAt(index)... 95% of the bootcamp applicants implemented the sort within the GetElementAt method, so that the entire array is re-sorted with every single call. A handful of CS grads made the same mistake, but most of them did not. Is this sort of oversight excused in the name of 'practical' programming? Or in your definition, is this considered deep knowledge?
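To make the mistake concrete, here is a sketch in Python (the class names are my own, and this is a paraphrase of the question, not the interviewer's exact prompt): the naive version pays a full O(n log n) sort on every access, while sorting once up front makes each access O(1).

```python
class SortedAccessNaive:
    """The common mistake: re-sorts the entire array on every call."""
    def __init__(self, values):
        self.values = list(values)

    def get_element_at(self, index):
        # O(n log n) work repeated on every single access
        return sorted(self.values)[index]


class SortedAccess:
    """Sort once in the constructor; each access is then a plain index."""
    def __init__(self, values):
        self.values = sorted(values)

    def get_element_at(self, index):
        return self.values[index]  # O(1)
```

Both classes return identical results; the difference only shows up as wasted work, which is exactly why it is a question of engineering judgment rather than correctness.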

lordnacho 2 days ago 0 replies      
Well this is interesting to me, as I've recently worked with a bootcamp graduate, and I've been looking over my brother's shoulder while he finishes his CS degree at Columbia.

- Bootcamp lady was very able on the iOS project we were working on. She seemed to know where things were in Xcode, and she understood Obj-C and Swift (no embarrassing questions about what classes are). She didn't seem to know about other environments (and said so), but we were doing an iOS project.

- Ivy league guy seems to have touched every common language (c, c++, Python, HTML/JS/CSS, R, and more), along with common tools (vim, pyCharm, tmux, gcc, VC++, laundry list). I was surprised by how practical it was, actually. I thought it would be obscure algorithms the whole way, but I guess they take the theory and essentially force you to learn the practical aspects by implementing things in relevant stacks.

- Bootcamp lady was very good working in our little MVP team. Understood how common management ideas like Agile work. Conscientious with looking at the Trello board, asking questions in Slack. Not sure if this is just her personality, or because they tell you how software teams work.

- Ivy league guy had lots of group projects, but they tended to be dysfunctional. There was always someone shirking. Some people had no clue what was being built or how to compile it. There didn't seem to be any management oversight, just blind "let's get this piece done" type organisation.

- Degree guy has way more breadth. He was routinely looking at machine learning, implementing demos with scikit, setting up VMs for himself, looking at assembly, looking at SQL optimisation, and other diverse tasks. Bootcamp grad didn't need this stuff, but also would need significant training to get to that level.

- Ambitions were similar. My background is in financial code, and they both want to do that. Bootcamp grad has quite a mountain to climb, particularly with things that take more explanation than MVC. She has a good attitude, so if someone would teach her she could do it. My brother is better positioned though, and would need less teaching to reach the same place.

DougWebb 2 days ago 1 reply      
After reading through all of the comments so far, my impression is that bootcamps are for training the developers whose jobs will be automated away in the coming years, and college is for training the developers who will be writing the code that automates those jobs.
madiathomas 2 days ago 0 replies      
Bootcamps are filling a void which has existed for a long time in the CS industry. Most of the time, a CS grad is hired to do a job that can be done by someone with little programming knowledge. I feel it is a waste of resources to hire a CS grad to build a CRUD app with a maximum of 5 users on a very good day.

Now companies can use people from bootcamps for those kinds of jobs and use CS grads for the deep, high-level stuff. Surely some top bootcampers will be able to do the high-level stuff too.

pbiggar 2 days ago 2 replies      
> This does not leave bootcamp grads equivalently skilled to university grads. If you want to do hard algorithmic or low-level programming, youre still better served by traditional CS eduction.

Or, if I may suggest, a low-level/algorithmic bootcamp.

somecodemonkey 2 days ago 0 replies      
A bootcamp without years of experience will not replace a Computer Science degree. It lacks the depth to build a solid foundation. While this is just an anecdote, every company I have worked for refuses to hire bootcamp grads.
RankingMember 2 days ago 1 reply      
I think that both college and boot-camp styles of training have their place. I'd think my default inclination would be to want to hire the CompSci majors to do the deep-scope planning/figuring and use boot-camp hires to do the grunt-work of supporting that vision.

It's important to note that this is just my initial inclination. I have no expectation that there won't be instances of boot-camp hires being better than CompSci hires in cases. It really comes down to the particular person, and hopefully any hiring process would do a decent job of evaluating each person.

enricobruschini 2 days ago 0 replies      
The real big difference that Americans too often forget is that college (and all of the education system below college) gives you the structural mindset to break down complex problems and find solutions. It creates your way of thinking and your rational side. Bootcamps, instead, just teach you how to execute some actions. It's like the difference between colleges and industrial schools.
partycoder 2 days ago 0 replies      
Having interacted with a lot of both lately, my impression is that bootcamp graduates focus mostly on functional requirements.

They have a hard time identifying non-functional requirements, assessing and mitigating risk, and start getting confused when things go low level.

In my experience, all "friendly" technologies have sharp edges somewhere, where you start getting exposed to low-level issues. When you face these issues, there's no guarantee the answer will be on Stack Overflow, and you will appreciate having learned some theory.

lxe 2 days ago 2 replies      
Do you remember your college web programming courses? The curriculum is always woefully out of date, and it seems that traditional undergraduate programs don't focus on updating it. This makes sense -- there are very few academic research areas that deal with practical web applications, and this is obviously mirrored in your undergraduate classes.

Don't forget -- universities are also research institutions, while bootcamps are not, and the coursework will reflect this.

babbeloski 2 days ago 0 replies      
Bootcamps make people employable, for sure; I work with a team of people mostly from bootcamps. I think the problem with some of them is that they don't actually like programming. Learning new things, and change in general, is met with a lot of feet-dragging. Don't get me wrong, I know a lot of programmers that are just 9-to-5'ers. It just seems like anytime there's any extra effort involved, a ton of justification and selling needs to happen.
Mc_Big_G 2 days ago 0 replies      
That's because bootcamps specifically teach you how to pass interviews. I'm a senior developer and I suck at interviews, because I haven't taken my spare time to specifically study for them. I don't remember how to reverse-sort a b-tree or whatever inane questions are asked, because I don't need to know that to do my job spectacularly. It's actually kind of a joke that bootcamp grads interview better.
wanderr 1 day ago 0 replies      
The problem I have with bootcampers is that the bootcamps greatly overpromise where graduates will be, skill-wise, when they finish, so expectations are way beyond junior developer roles. They also usually come out of bootcamps with a very narrow skillset: they can somewhat easily whip out a web app with a simple backend and rudimentary data sets using whatever framework their bootcamp focused on. Increase the complexity of the project even just a little and they get stuck.

There are some diamonds in the rough and some bootcamps are better than others but in general I'd much rather see someone who learned by hacking on things alone than a college or bootcamp grad.

douche 2 days ago 0 replies      
So would the best of both worlds be the combination? Four-year CS program for the fundamentals and the deep knowledge, then the summer after graduating (or really, senior spring, when the coasting sets in) a bootcamp-style training on practical development?

I don't think that a traditional CS degree makes you code enough to become a good software engineer. I certainly wouldn't have gotten enough practice actually writing code if I just did my coursework and didn't dabble in other things, like game development. Let alone other practical skills, like debugging/profiling (barely touched upon), source control (likewise), testing (completely ignored), project management/estimation (noop...).

I'm still amazed and horrified that I took a Data Structures and Algorithms course that required nothing beyond proofs and a little pseudocode - not a line of actual, working code. It could be tailor-made for really understanding memory-management or TDD.

megapatch 2 days ago 0 replies      
This is obviously comparing different things. Bootcamps and College are not replacing each other. But there is a difference between learning because you are hungry for knowledge (college) and learning because you are hungry for food (boot camp, you need to do your job). The former makes you better in the trade.
norea-armozel 2 days ago 0 replies      
My employer has been hiring a few people from some local coding bootcamps here in the Minneapolis area. Most of them are very decent at programming, so I'm not sure if they had any experience prior to their bootcamps or not, but I can't say I have any complaints about those they've hired. I've never had to fix any of their code since they've been on the job, either. Sometimes they need more help, since they didn't get the discrete structures or software architecture knowledge that I did from my traditional CS degree. Honestly, I think that's something you should pick up on the job or have been taught in high school (I'm biased, of course).
balls187 2 days ago 0 replies      
> Weve found bootcamp grads...worse at algorithms and understanding how computers work.

Solution: hire a college grad and send them to a boot camp.

ArkyBeagle 2 days ago 0 replies      
Programming isn't all one thing. You have to have what amounts to an epistemology about the system you're working on right now or you're going to break things.

A degree improves the chances of this. About half of what I do is teach these things, on the job. Just being able to classify a systems error can be daunting - is it a show stopper, or an ignore, or something in between?

I see bootcamps as being fine for getting people into seats, but the rest takes a long time.

Finally, employability and what (IMO) CS/programming should be about are diverging rapidly. This was not always so. This is starting to be a real problem.

savrajsingh 2 days ago 0 replies      
One of my close friends said it best: "savrajsingh, Lebron James doesn't care if you start playing basketball."
JustSomeNobody 2 days ago 0 replies      
Likely, you see more people getting a CS degree because they feel it leads to a good job than you do among people going to bootcamps. The people going to bootcamps are more likely to be doing it because they've done some development and really like it.

Now, what bootcamps aren't going to give you is the breadth of a CS degree. But if you're getting a CS degree just for the money, you're not picking things up very well either.

So, I can see where a certain % of CS students and bootcampers are roughly equivalent.

I feel if you're very interested in CS, get a college degree, and do really well in college, you're going to come out ahead of someone taking a 3-month bootcamp. I also feel there's more opportunity for CS degrees; i.e., one probably isn't going to see too many 3-month bootcampers doing real-time development. (I'm talking real real-time, not that buzzword web real-time.)

Fiahil 2 days ago 0 replies      
Could it be possible to combine both worlds? Getting a university degree by spending four years with a strong focus on practical skills and an intense workload [1].

To my knowledge, only the top tier of American colleges (MIT, Berkeley, Stanford, ...) come close to that achievement. But in France, where I live, I had the opportunity to go to a private school "specialized" in computer science (Epitech, 42, if you wanna look them up) that was mostly an "enlarged bootcamp" from year one to year three. It was kind of funny, for me, when my peers from traditional schools ended up discovering version control in their final internship.

[1]: Once you replace the shitty paper exams with actual projects in programming classes, you'll be amazed by how much you increase student proactivity.

shubhamjain 2 days ago 1 reply      
I started programming before college, and I was always on my own; I never had a programmer friend until I started working. I was always able to get things done, but the code I wrote was something that should never have gone into production. It took lots of mistakes, a lot of reading, and shooting my own foot to finally start writing worthy code after like 2-3 years. (Although there were code bases I worked with that were way worse!)

One thing I am curious about: does a bootcamp make you proficient enough to avoid those mistakes and contribute directly to the application? I am pretty sure it would have been a lot of help if someone could have pointed out the mistakes I was making in my code, but I am not sure it would have been enough.

redschell 2 days ago 1 reply      
I think there's a great opportunity for bootcamps to help people like me. I'm currently a pre-sales professional, and have been for a few years now. I'm closing in on 28, and while I've been served very well by developing product expertise, my background isn't in CompSci, and I've never actually formally learned to code. If I want to be a good Solutions Architect down the line, and I certainly do, this could be how I bridge the skill gap.

Sure, I could learn most of what I need to know on my own time, but this might be a great way to get it done quickly in a batch and then move on to applying it in a very practical way with my customers.

seattledev14 2 days ago 0 replies      
When you think about it as skills training vs. college, I think it delivers on its promise.

In college, most people don't declare a major until their sophomore or junior year, so the idea that the competition is a 4-year degree is a bit misplaced. Code schools don't teach music appreciation, though there are a lot of musicians. Bootcamps offer an intensive at 40+ hours a week vs. a two-hour class two days a week.

Can you deliver skills-based training in 10 weeks? The placement rates would say yes. Do some schools focus on placement while others focus on taking tuition? That's true as well.

Look to find the school that has a placement track record.

egonschiele 2 days ago 2 replies      
Hey, I wrote an algorithms book aimed at bootcampers! The epub is out today, print book to follow: http://amzn.com/1617292230

I'm hoping this will be an easy-to-read algorithms book for bootcamp grads. Here's a sample chapter for anyone interested: https://manning-content.s3.amazonaws.com/download/f/a75f93d-...

Philipp__ 2 days ago 0 replies      
They definitely are comparable. But I think going to both would be the best thing, if there is the time and energy. Just as I thought, college gives you most of the theoretical stuff. If you are not used to working on your own on side projects and are taking college for granted, then you aren't off to a great start. But if you are used to doing something besides college, whether it is a paid job or some tinkering projects you do in free time and later put on GitHub, then the need for a bootcamp maybe isn't present. So hitting it somewhere in the middle might be best...
AlexeyMK 2 days ago 0 replies      
I'm most curious to see what the stats are deeper into the funnel, specifically:

- At what rate do bootcamp grads vs new grads get offers (intro --> offer at portfolio companies)?

- Is the above metric significantly different for different classes of companies (segmented by company size, field, or "CRUD-iness" of the company)?

As a former hiring manager at a "much harder than CRUD" company, I remember looking at some bootcampers and saying "I wish we could interview these people, but the knowledge gap is just too significant".

seanhandley 2 days ago 0 replies      
We've hired a couple of junior developers lately that had no college experience but significant online training and the experience so far has been very positive.

Given how long it takes universities to update course materials, I'm not sure they can compete with this kind of education programme. It's true that a lot of the fundamental computer science is missing but with senior devs on the scene, any gaps can be filled with an afternoon around a whiteboard.

AJRF 2 days ago 0 replies      
One thing I realised in my final year of university is how much marketing universities do towards the job market.

They shop their students and curriculum around to employers all over the country (some on an international level).

There is going to be a lot of inertia involved when it comes to hiring from universities that most bootcamps don't even consider or spend time addressing. I don't think they give universities any cause for concern, and won't for some time.

swalsh 2 days ago 0 replies      
So basically, the guy who builds a rafter is a woodworker, the guy who nailed the rafter to the structure is a woodworker and the guy who made the dining room table is a woodworker. Each guy is important, but the skill level and education time are different. Not every rough carpenter needs to have an extensive education in fine carpentry to be successful in their area of woodworking.
lyime 2 days ago 0 replies      
It's hard to measure hunger. My intuition is that it plays a big role when it comes to finding success after going through a bootcamp.
andrewfromx 2 days ago 0 replies      
When I read "4 years", I don't remember doing nothing but code for all 4 years during my CS degree. Part of the appeal must be that you focus on just coding intensely for a short period of time. I'm thinking back on my 4 years at pitt.edu and my god did I waste a lot of time. If you distill it all down, maybe it does == 3 months at a good camp.
baron816 2 days ago 0 replies      
All you need to know: some companies can use bootcampers very effectively, some cannot. It all depends on what the company is doing. It's evident that since many companies have found great success while employing bootcampers that the skills they provide are useful.
vparikh 2 days ago 1 reply      
I would love it if Computer Science grads took a boot camp course - one that covers css/html/javascript, any MVC framework, ntier patterns, ORM/SQL/NoSQL training. Because from my experience, they apparently don't teach any of that in comp-sci school.
puppers 2 days ago 0 replies      
Universities don't necessarily teach students programming. They teach them Computer Science.

Bootcamps teach students programming, definitely not CS. I highly doubt they could teach a student 4 years of CS material in 3 months.

serge2k 2 days ago 0 replies      
> How to use an editor is something that a traditional CS degree program would never think of teaching.

Of course not. Why would they ever do that. It falls into the same bucket as version control. It's useful, but go learn it yourself because it's not that hard.

kbuchanan 2 days ago 0 replies      
I think this supports the hypothesis that schooling (secondary, post secondary, bootcamps, whatever) is first and foremost a sorting mechanism. Bootcamps have discovered _one_ avenue for quickly assessing and sorting students into a career they can succeed at.
data4lyfe 2 days ago 0 replies      
So triplebyte still can't infer anything about how well a software engineer performs on their job from the metrics that they are gathering though if they're basing performance on how well they do on their coding questions and interviews?
brandonmenc 2 days ago 0 replies      
Bootcamps seem to encapsulate and accelerate the "I taught myself to program in middle school and high school" experience for adults who missed that boat - which is great.

The results make a lot more sense when you look at it that way.

personjerry 2 days ago 5 replies      
I think this misses a huge point: College is a huge factor in social development; This is extremely important not only for developing software on a team, but to developing a healthy lifestyle in and out of the workplace.
strathmeyer 2 days ago 0 replies      
Triplebyte figured out in twenty minutes that I didn't learn enough while getting a CS degree at CMU in order to get a programming job so... good luck to them.
provemewrong 2 days ago 0 replies      
I find the whole premise a bit amusing, because in my country the prime target audience for bootcamps are undergrad CS students or fresh graduates.
Kinnard 2 days ago 0 replies      
I would love to see a break out for people who are neither bootcamp grads nor college grads but who are completely self-taught, like me :)

Surely they've received some applicants in this category.

emodendroket 2 days ago 0 replies      
This is neat, although as someone who attended neither (well, not for computers anyway) I guess I can't do the solipsistic thing and look for myself.
findjashua 2 days ago 0 replies      
I'm not sure why this is a surprise. Computer Science and Software Engineering are different things, the only common factor being programming.
mmkx 2 days ago 0 replies      
Nice ad.
soneca 2 days ago 0 replies      
I believe not all bootcamps are equal.

Is there anywhere a curated list of good, recommended, worth your money bootcamps?

forgotAgain 2 days ago 1 reply      
I wonder how many of the engineers at this weeks Google I/O or the next Apple Dev conference went to bootcamps.
genzoman 2 days ago 0 replies      
Whether you come from a CS background or a bootcamp background, the proof is in the pudding: can you answer the whiteboard questions? If so, you pass, and nobody cares where you went (or did not go) to school.

if that's not enough, revise the whiteboard question.

andrewvc 2 days ago 14 replies      
What a load of crap.

What, they bred the capacity for abstract thought into you in college?

College attracts a generally higher quality applicant pool. You're mistaking selection bias for an effect.

Let me tell you, I've interviewed programmers from all over. There are boatloads of people with CS degrees with close to zero capacity for creative thinking. There are also boatloads of CS grads who can barely code their way out of a while loop (true story!).

I've spent my career (no CS degree!) working alongside CS grads. I've gone further, faster, than most of them. I've had to deal with this kind of idiotic commentary over and over again.

CS grads are always surprised that I never got a degree (oh, I never would have guessed! You're different, it's those OTHER people without degrees who are idiots). Four years of school + the associated debt creates a big incentive to believe that you got a square deal out of college.

Ologn 2 days ago 0 replies      
> it still just seems hard to believe that 3 months can compete with a 4-year university degree.

Yes, it is very hard to believe. Impossible, actually.

> Bootcamps are intense. Students complete 8 hours of work daily

In-class time is not the gauge for college. Students are supposed to spend at least three hours studying for every hour spent in class. On top of that are office hours with the professor, as well as contact with the TAs or study labs.

If my courseload for a semester is Calculus 102, Theory of Computation, Algorithms 201, Principles of Programming Languages, and Computer Architecture, I don't see how it is different than a bootcamp because a bootcamp is "more intense". I don't know how you can get more intense than juggling these five topics.

> Traditional CS programmers spend significant amounts of time on concepts like NP-completeness and programming in Scheme...But it is not directly applicable to what most programmers do most of the time. Bootcamps are able to show outsized results by relentlessly focusing on practical skills...How to use an editor is something that a traditional CS degree program would never think of teaching.


I took a course in OS principles and then one in distributed systems. The first course covered mutual exclusion somewhat, the second much more. I spent quite a lot of time writing complex Java programs that handled mutual exclusion well. Guess what I am doing today, years after that course? Writing a complex Java program that uses mutual exclusion. I only took that second course because it fit my schedule, but it has come in very handy over the years.
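The pattern the parent describes is the bread-and-butter of those OS and distributed-systems courses. A minimal mutual-exclusion sketch in Python (the parent's work was in Java; this is just an illustration of the concept):

```python
import threading

class Counter:
    """Shared counter guarded by a lock: the textbook mutual-exclusion pattern."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        # Without the lock, this read-modify-write would be a race condition.
        with self._lock:
            self._value += 1

    @property
    def value(self):
        with self._lock:
            return self._value

counter = Counter()
threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(10_000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 40000 -- deterministic because every update holds the lock
```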

Insofar as NP-completeness being "academic CS", I have unfortunately seen too many bugs ( https://bugs.freedesktop.org/show_bug.cgi?id=3188 , https://sourceforge.net/p/jedit/bugs/3278 etc.) where people did not heed the polynomial growth of algorithms.
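The linked bugs are instances of the same trap: an algorithm whose cost grows polynomially being fed inputs far larger than the author anticipated. A hypothetical Python sketch of the difference:

```python
def dedupe_quadratic(items):
    """O(n^2): the membership test rescans the growing result list every time."""
    result = []
    for x in items:
        if x not in result:  # linear scan -> quadratic overall
            result.append(x)
    return result

def dedupe_linear(items):
    """O(n): a set gives constant-time membership checks."""
    seen = set()
    result = []
    for x in items:
        if x not in seen:
            seen.add(x)
            result.append(x)
    return result

# Both produce identical output; only the growth rate differs.
data = list(range(2000)) * 2
assert dedupe_quadratic(data) == dedupe_linear(data) == list(range(2000))
```

Both versions pass every small test; the quadratic one only falls over once a user feeds it a file a thousand times bigger than the developer ever tried.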

They're trying to dumb down what you can't dumb down.

The reality can be seen if you look around a SoMa startup and wonder where all the grey-haired programmers went. Where did those programmers who were in their mid-20s in the late 1990s, programming for the dot-com startups, in an even more inflated market, go? Where are the grey-haired, balding programmers in your company?

And this bootcamp is the answer. Just look at the real estate prices and you know the market has heated up. Naval Ravikant turned down $600 million last year because he said there weren't enough places to invest that. Despite talk of perhaps some cooling since the beginning of the year, things are pretty hot. So get some kid to go to a bootcamp for a few months. They can only get their hands on one real programmer, but they can hire a few of these bootcamp kids to do a few MVP's, or maybe code some features up, which the real programmer will have to fix later.

What happens to these kids later, who have no foundation in what they're doing, who have no deeper understanding of what they're doing?

> programming in Scheme...How to use an editor is something that a traditional CS degree program would never think of teaching.

That's because a traditional CS degree program teaches you to write your own editor if need be. Stallman went to MIT and wrote Emacs, Bill Joy went to Berkeley and wrote vi.

What the hell point is there to teaching an editor? I was using Eclipse with Android plugins a year ago, now I'm using Android Studio. University is to teach concepts which will exist decades from now, not the Javascript library framework du jour.

The ones who will make out on this are the bootcamps, and the companies who can use these kids when the market is hot and will dump them when their usefulness is over. Just like what happened in 2000 (or 2008). You'll see what your bootcamp and two years working at a failed startup amounts to when the economy cools, job listings dry up and the posted ones say "BSCS required". Being able to cut and paste from Stack Overflow and use frameworks other people wrote and extended is not an educational foundation.

There are a lot of strawman arguments on the other side. Yes, the hardest working, brightest bootcamp graduate is probably better than the laziest, dullest person who managed to graduate from some third-rate college and get a CS degree. And so forth. None of that detracts from the point though.

eastWestMath 2 days ago 0 replies      
This just in: web dev body shop is perfectly happy with bootcamp grads.
puppetmaster3 2 days ago 0 replies      
Also cheaper and faster, a good way to save tax resources maybe?
indatawetrust 2 days ago 0 replies      
> Note: We are only accepting applications from programmers.


DaveParkerCF 2 days ago 0 replies      
Over the last three years at Code Fellows in Seattle (www.CodeFellows.com) we've seen the market change a lot for students, hiring companies and curriculum.

At launch, there was a lot of pent up demand. 400 people applied for a Ruby class of 25. Most that took that first class had been self taught and in the surveys said they had been hacking at projects for an average of 18 months. Code school was a way to speed their path into a professional developer role (note developer, not engineer).

The majority of students today already have a degree and are looking to switch careers, average age of ~30. They are looking for skills to transition so in that way, going back to college isn't an option unless it's for advanced degree. The same is true for the veterans that are transitioning to the workforce, they have been in a very structured environment and want to speed through job ready training vs. four more years at college.

"Stack switchers" tend to be the top of the compensation range. If you have 10 years of .Net experience and want to switch to iOS. You'll earn top dollar. If you don't have much real world experience you'll land an entry level JavaScript job with that skill.

The needs of hiring companies have also shifted as the market has matured. There are more "code school grads" in the market looking for jobs, so the process of screening needs to be better, interviews need to be improved, and tools like triplebyte.com improve transparency of skills. Hiring junior developers has never been the preference for employers. Everyone would rather hire both skill and experience. But when you're competing with larger companies in a hot job market, you'll often take junior talent that is a good culture fit.

By culture fit I mean a combination of past education, work experience and new skills. Combine that with work ethic and desire and you see why most of the strong code schools have a high (90%+) placement rate.

Curricula have changed as well. Code schools have to be teaching at the front end of the hiring demand. Teaching an old tech stack where job postings are heading down won't work. Review Stack Overflow's recent survey if you're curious about stack preferences.

Code schools are also required to be licensed with each state where they do business. That's a requirement not all schools follow. It's really about consumer protection in that way so check with your state.

The industry is still immature and you're correct that there aren't any reporting standards, e.g. are placement rates reported at 90 or 180 days past graduation, etc.? We're working with a number of companies like the Iron Yard to standardize on reporting and to move to audited results over time. I hope that someday we can apply the same placement-rate standards to other academic institutions. As a dad of college-age kids that would be amazing (note the White House tried that two years ago with a scorecard and the universities said no).

Regarding the debate of should everyone learn to code or no one learn to code? It's a skill, it's not for everyone. It's a job that isn't for everyone. There are a lot of online resources, information sessions and one day courses, start with the low risk version and see if it's for you. With an average starting salary of $71k in Seattle, the compensation appeal is a strong draw for people outside of the tech industry. You may be drawn to the compensation just make sure that you are also drawn to the work.

trich7 1 day ago 0 replies      
I would love to add a few comments. I don't know the specific CS program or coding bootcamp they are comparing here, but I can speak to a general CS degree and the general bootcamp education. Full disclosure: I am a founder of a coding bootcamp.

Nothing can replace a 4-year degree with its basis in theory and multiple subjects, but bootcamps offer people a way to jumpstart a lagging career or make a step into a new one. Being hirable in this industry is saying something, and that is what a great bootcamp should do.

That being said, bootcamps are teaching current, real-world, career-like solutions. Many argue that you don't actually get real-world experience in a 4-year CS degree, due to behind-the-times curricula (a result of the long approval processes that come with accreditation) and long, sometimes boring lectures without much application. Bootcamps take a flipped-classroom, hands-on, immersive approach: less lecture, more project-based learning.

Many developers fall under the 41.8% group of self-taught developers in the recent Stack Overflow study. A very large number of developers in the market are finding their skills in very non-traditional ways. What many CS grads learn is undoubtedly useful, and I would never take anything away from that. But with software expanding into so many different fields, blurring the lines between who is traditionally an "engineer" and who isn't, and with the increasingly rapid pace at which languages, frameworks, and best practices change, there are a lot more opportunities to contribute in code than by cooking up advanced algorithms with linked lists.

In fact, many of our partner employers were frustrated by the lack of applicable, modern technology competency by the CS grads they were interviewing. As only one piece of anecdotal evidence to this: we've had various CS grads take our programs because (as they described) they only learned languages that were not anywhere to be found in the companies they were interviewing with.

Companies are starting to recognize that those who apply themselves in a bootcamp are able to learn quickly and adapt to new technologies and projects easily. Employers are looking for someone to get the job done with the skill set that matches the technologies that they practice. Many employers don't care if employees have acquired that skill set in a garage when they were 12 years old, at MIT, or at a coding bootcamp.

But bootcamps aren't for everyone; you really have to apply yourself and consume content quickly. If those requirements are met, bootcamp attendees really can excel! I know because at ours we have had so many success stories just like the aforementioned, where a student truly applies themselves, lands an amazing job or starts a hot tech company, and truly changes the trajectory of their life. Plus it happens in a tenth of the time of a 4-year degree and at a fraction of the cost ;)

Bootcamps also offer more personalized mentoring. Being able to see delegates, resonate with students emotionally, pick up on subtle nuances of communication, and respond appropriately is the very essence of education. I believe passionately that training and coaching are not about getting something from one head to another, but are an intimate dance that transforms both parties.

I hope that helps a bit.

analognoise 2 days ago 1 reply      
Of course employers love bootcamps - they need business logic monkeys who lack the fundamentals and therefore don't increase in value over time as much as the people who actually put in the time with said fundamentals.

It's cheaper to have somebody who doesn't have a real education.

AWS X1 instances 1.9 TB of memory amazon.com
339 points by spullara  3 days ago   179 comments top 22
jedbrown 3 days ago 1 reply      
Does anyone have numbers on memory bandwidth and latency?

The x1 cost per GB is about 2/3 that of r3 instances, but you get 4x as many memory channels if you spec the same amount of memory via r3 instances, so the cost per memory channel is more than twice as high for x1 as for r3. DRAM is valuable precisely because of its speed, but the speed itself is not cost-effective with the x1. As such, the x1 is really for the applications that can't scale with distributed memory. (Nothing new here, but this point is often overlooked.)
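The parent's arithmetic can be made explicit. Taking the ratios from the comment itself (x1 memory at roughly 2/3 the per-GB cost, r3 giving 4x the channels for the same capacity; these are the comment's figures, not current AWS pricing):

```python
# Ratios quoted in the comment above (illustrative, not live pricing).
x1_cost_per_gb_ratio = 2 / 3   # x1 cost per GB relative to r3
r3_channels_per_gb_ratio = 4   # memory channels per GB, r3 relative to x1

# Cost per memory channel for x1, relative to r3: cheaper GBs spread
# across 4x fewer channels still come out more expensive per channel.
x1_cost_per_channel_ratio = x1_cost_per_gb_ratio * r3_channels_per_gb_ratio
print(round(x1_cost_per_channel_ratio, 2))  # 2.67 -- "more than twice as high"
```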

Similarly, you get a lot more SSDs with several r3 instances, so the aggregate disk bandwidth is also more cost-effective with r3.

lovelearning 3 days ago 14 replies      
This is probably a dumb question, but what does the hardware of such a massive machine look like? Is it just a single server box with a single motherboard? Are there server motherboards out there that support 2 TB of RAM, or is this some kind of distributed RAM?
MasterScrat 3 days ago 3 replies      
As a reference the archive of all Reddit comments from October 2007 to May 2015 is around 1 terabyte uncompressed.

You could do exhaustive analysis on that dataset fully in memory.

ChuckMcM 3 days ago 4 replies      
That is pretty remarkable. One of the limitations of doing one's own version of mass analytics is the cost of acquiring, installing, configuring, and then maintaining the hardware. Generally I've found AWS to be more expensive but you get to "turn it on, turn it off" which is not something you can do when you have to pay monthly for data center space.

It makes for an interesting exercise to load in your data, do your analytics, and then store out the meta data. I wonder if the oil and gas people are looking at this for pre-processing their seismic data dumps.

1024core 3 days ago 1 reply      
Spot instances are about $13 - $19/hr, depending on zone. Not available in NorCal, Seoul, Sydney and a couple of other places.
dman 3 days ago 4 replies      
Going to comment out the deallocation bits in all my code now.
pritambarhate 3 days ago 4 replies      
Question for those who have used monster servers before:

Can PostgreSQL/MySQL use such type of hardware efficiently and scale up vertically? Also can MemCached/Redis use all this RAM effectively?

I am genuinely interested in knowing this. Most of the times I work on small apps and don't have access to anything more than 16GB RAM on regular basis.

vegancap 3 days ago 8 replies      
Finally, an instance made for Java!
krschultz 3 days ago 8 replies      
A bit under $35,000 for the year.
realworldview 3 days ago 0 replies      
Recompiling tetris with BIGMEM option now...
Erwin 3 days ago 0 replies      
I'm curious about this AWS feature mentioned: https://aws.amazon.com/blogs/aws/new-auto-recovery-for-amazo...

We've experimented with something similar on Google Cloud, where an instance that is considered dead has its IP address and persistent disks taken away, then attached to another instance (live or freshly created). It's hard to say whether this can recover from all failures without having experienced them, or whether it works better than what Google claims it already does (moving workloads off failing hardware). Anyone with practical experience with this type of recovery, where you don't duplicate your resource requirements?

zbjornson 3 days ago 0 replies      
How does this thing still only have 10 GigE (plus 10 dedicated to EBS)? It should have multiple 10 Gig NICs that could get it to way more than that.
jayhuang 3 days ago 0 replies      
Funny how the title made me instantly think: SAP HANA. After not seeing it for the first 5 paragraphs or so, Ctrl+F, ah yes.

Not too surprising given how close SAP and Amazon AWS have been ever since SAP started offering cloud solutions. Going back a couple years when SAP HANA was still in its infancy; trying it on servers with 20~100+ TB of memory, this seems like an obvious progression.

Of course there's always the barrier of AWS pricing.

amazon_not 3 days ago 1 reply      
The pricing is surprisingly enough not terrible. Given that dedicated servers cost $1-1.5 per GB of RAM per month the three year price is actually almost reasonable.

That being said, a three year commitment is still hard to swallow compared to dedicated servers that are month-to-month.

0xmohit 3 days ago 0 replies      
Wow! http://codegolf.stackexchange.com/a/22939 would now be available in production.
manav 3 days ago 0 replies      
Hmm around $4/hr after a partial upfront. I'm guessing that upfront is going to be just about the cost of a server which is around $50k.
micro-ram 3 days ago 2 replies      
What happened to the other 16 threads?

18(core) * 4(cpus) * 2(+ht) = 144
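Spelling the arithmetic out (128 is the vCPU count AWS advertises for x1.32xlarge):

```python
cores_per_socket = 18
sockets = 4
threads_per_core = 2  # hyper-threading

hardware_threads = cores_per_socket * sockets * threads_per_core
advertised_vcpus = 128  # x1.32xlarge

print(hardware_threads)                     # 144
print(hardware_threads - advertised_vcpus)  # 16 -- the "missing" threads
```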

ben_jones 3 days ago 0 replies      
I'd be guilty if I ever used something like this and under utilized the ram.

"Ben we're not utilizing all the ram."

"Add another for loop."

mrmondo 3 days ago 0 replies      
I'm taking it this is so people can run NodeJS or MSSQL on AWS now? Heh, sorry for the jab - what could this be used for considering that AWS' top tier provisioned storage IOP/s are still so low (and expensive)?

Something volatile running on a RAM disk maybe?

samstave 3 days ago 2 replies      

Thats amazing.

samstave 3 days ago 3 replies      
16GB of ram should be enough for anyone.

Edit, y'all don't get the reference: famous computer urban legend...


0xmohit 3 days ago 1 reply      
Encouraging folks to write more inefficient code?

I'd be interested in hearing what Gates [1] has to say about it, though.

[1] "640 kB ought to be enough for anybody"

Going dark: online privacy and anonymity for normal people troyhunt.com
347 points by danso  3 days ago   114 comments top 19
sixhobbits 3 days ago 2 replies      
I'm surprised he doesn't mention NoScript, Privacy Badger, etc. "Normal people" should be more concerned about the highly detailed profiles that companies are building based on browsing habits. "Normal people" read about data breaches and embarrassing leaks that force politicians to resign. "Normal people" know nothing about the behind-the-scenes tracking that goes on when you google medical symptoms[0] or visit pages which have Facebook like buttons as footers[1].

Yes, this article is targeted at people who don't understand the problem of using their .gov email address to sign up for dodgy sites, but think about whether you'd rather have your bank statement made public or a large, visualizable data set representing most of your browsing history.

I would love to see more work done on privacy through noise/obfuscation, such as that started by AdNauseam[2] and TrackMeNot[3] - not necessarily publishing your credit card details online as suggested in another comment here, but making random search queries and clicking on random ads when your device is idle. Most of us have sufficient processing power and bandwidth for the overhead not to be a problem. It's sad that both add-ons seem to have failed to make a splash and have fallen out of active development (the end of 2015 marks the last commits for both projects, which is too soon to pronounce them dead, but they definitely don't seem to be hives of activity).
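For illustration, the noise approach TrackMeNot takes can be sketched as: compose decoy queries from a local term list and issue them on a jittered schedule. A hypothetical Python sketch (the real extension harvests terms from RSS feeds and tries to mimic human timing):

```python
import random

# Hypothetical seed vocabulary; TrackMeNot builds its term list from news feeds.
VOCABULARY = ["weather", "recipes", "flights", "headphones", "marathon",
              "gardening", "python", "mortgage", "museum", "podcast"]

def random_query(rng, min_terms=1, max_terms=3):
    """Compose a decoy search query from a few random vocabulary terms."""
    n = rng.randint(min_terms, max_terms)
    return " ".join(rng.sample(VOCABULARY, n))

def decoy_schedule(rng, count, mean_gap_s=300):
    """Return (delay, query) pairs with exponentially jittered gaps.
    A real client would sleep for `delay` seconds, then submit `query`."""
    return [(rng.expovariate(1 / mean_gap_s), random_query(rng))
            for _ in range(count)]

rng = random.Random(42)  # seeded only so this sketch is reproducible
for delay, query in decoy_schedule(rng, 3):
    print(f"after {delay:6.1f}s search: {query}")
```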

[0] http://motherboard.vice.com/read/looking-up-symptoms-online-...

[1] http://www.allaboutcookies.org/cookies/cookie-profiling.html

[2] http://adnauseam.io/

[3] http://cs.nyu.edu/trackmenot/

xrorre 3 days ago 2 replies      
I appreciate the intention of this article, written for people only starting to change their surfing habits in light of Snowden. But the examples of tools they should use are not thought out very well.

First: Freedome by F-Secure is closed source and there is no OpenVPN alternative. Always choose a VPN that has OpenVPN so that users can configure the connection to their needs. No need for this bloated mess.

Second: Whilst disposable Google accounts might seem like a good idea, there are any number of ways for Google to cross-correlate a disposable identity with your actual identity using fingerprinting captchas or even your screen resolution. Google does this to spot serial re-registrations and to stop people gaming Google Plus voting rings and spammers in general.

Third: Be careful of online websites offering fake-name services. Most of this data is generated server-side and logged for the purposes of cross-correlation with your IP address and user-agent string. Quite possibly the vast majority of fake-identity sites are run by LEAs.

- I like to write some quick and dirty ruby gems to generate fake identities because then it can't be correlated. (The names are pulled in from disparate sources and I always ensure true-randomness).
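A minimal sketch of that local-generation idea in Python, using the `secrets` CSPRNG; the word lists here are hypothetical placeholders for the "disparate sources" the parent pulls from:

```python
import secrets

# Hypothetical word lists; the parent's point is to pull these from
# disparate local sources rather than a web service that can log requests.
FIRST_NAMES = ["Avery", "Jordan", "Morgan", "Riley", "Quinn"]
LAST_NAMES = ["Calloway", "Mercer", "Whitfield", "Hollis", "Drake"]
MAIL_DOMAINS = ["example.com", "example.org"]

def fake_identity():
    """Build a throwaway identity with a CSPRNG (secrets), not random()."""
    first = secrets.choice(FIRST_NAMES)
    last = secrets.choice(LAST_NAMES)
    handle = f"{first.lower()}.{last.lower()}{secrets.randbelow(1000):03d}"
    return {
        "name": f"{first} {last}",
        "email": f"{handle}@{secrets.choice(MAIL_DOMAINS)}",
    }

identity = fake_identity()
print(identity["name"], identity["email"])
```

Because everything happens locally, there is no server to correlate the generated identity with your IP address.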

- In terms of email, use things like Riseup which use TLS at every hop so that passive dragnets can't sniff the password. 99% of all IMAP and SMTP services can be passively sniffed because they use weak STARTTLS.

- Use 'honeywords' in an email to correlate different emails with different activities. For example:

  john.doe+shopping@riseup.net
  john.doe+gaming@riseup.net
  john.doe+correspondant@riseup.net
This way you can whitelist those addresses for the purposes of filtering out spam and phishing attempts.
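This works because most mail systems deliver `user+tag@domain` to `user@domain` while preserving the tag, so the whitelist filter is a few lines (illustrative Python sketch):

```python
def split_plus_address(address):
    """Split 'john.doe+shopping@riseup.net' into (mailbox, tag, domain)."""
    local, _, domain = address.partition("@")
    mailbox, _, tag = local.partition("+")
    return mailbox, tag or None, domain

# Tags from the comment above; add one per service you sign up for.
ALLOWED_TAGS = {"shopping", "gaming", "correspondant"}

def is_whitelisted(address):
    """Accept mail only if it was sent to one of our known plus-tags."""
    _, tag, _ = split_plus_address(address)
    return tag in ALLOWED_TAGS

print(split_plus_address("john.doe+shopping@riseup.net"))
# ('john.doe', 'shopping', 'riseup.net')
print(is_whitelisted("john.doe+lottery-winner@riseup.net"))  # False
```

A phishing mail sent to the bare address, or to a tag you never handed out, fails the check immediately.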

apecat 3 days ago 1 reply      
Great article.

The only real omission I noticed is the lack of mention of advanced browser fingerprinting techniques that can be used against browsers, even if caches are emptied, 'porn modes' activated, VPNs opnened. As demonstrated here by the EFF's Panopticlick initiative. https://panopticlick.eff.org/

One of the most important points to remember about the anonymity provided by the Tor project is that the Tor Browser is painstakingly hand-crafted to avoid many of these problems. In other discussions about Tor it is worryingly common to see other ways to route browser traffic through Tor, without mention of the implications.
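For context on what Panopticlick measures: an attribute shared by a fraction p of browsers contributes -log2(p) bits of identifying information, and roughly independent attributes add up. A sketch with made-up frequencies (not EFF's measured data):

```python
import math

# Illustrative attribute frequencies (fraction of browsers sharing yours);
# Panopticlick measures these empirically from visitor data.
attribute_freqs = {
    "user_agent": 1 / 1500,
    "screen_resolution": 1 / 10,
    "timezone": 1 / 20,
    "installed_fonts": 1 / 5000,
}

def surprisal_bits(p):
    """Bits of identifying information from an attribute with frequency p."""
    return -math.log2(p)

total = sum(surprisal_bits(p) for p in attribute_freqs.values())
for name, p in attribute_freqs.items():
    print(f"{name:18s} {surprisal_bits(p):6.2f} bits")
print(f"{'total':18s} {total:6.2f} bits")
# 2**total browsers would be needed before a collision is expected
```

This is why Tor Browser works so hard to make every user look identical: each normalized attribute drives its surprisal toward zero bits.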

For those interested, here's a recent look into the Tor Browser system by one of the developers.


huuu 3 days ago 3 replies      
Doesn't this create a risk of committing fraud and identity theft in some countries?

I can understand it wouldn't be a crime to create a random email address but creating a fake house address and using this for payments sounds a little tricky.

btrask 3 days ago 5 replies      
This is just a list of more things for them to clamp down on.

I'm thinking about going in the opposite direction, and broadcasting all of my personally identifying information (credit card, SSN, etc). Obviously I would have to set aside a large amount of time to deal with issuing fraud reports, and make sure that I wasn't risking anything that I can't afford to lose--but it does seem simpler in some ways.

After all, if you don't have anything to hide, you're bulletproof, right?

rkrzr 3 days ago 2 replies      
TLDR: Use a VPN + Incognito mode + fake email and info

The VPN hides your IP. Incognito mode prevents your cookies from giving away your identity. And the fake info helps with things like sites being hacked and the data being dumped online.

amelius 3 days ago 4 replies      
> Going dark: online privacy and anonymity for normal people

Caveat: normal people don't care about such things.

jrcii 3 days ago 4 replies      
I really object to the language of "dark" to describe privacy or anonymity, which are thereby painted with a sinister connotation.
ChefDenominator 2 days ago 1 reply      
The article recommends going to Fake Name Generator (tm) to get a random online identity. The page is not encrypted and looks very, very fishy.

That page recommends going to Social Security Number Registry. Again, an unencrypted totally scammy looking page. If you enter a random name and select a random state, it will 'verify' that your identity has been stolen. Then, if you click on 'Validate', you can enter your SSN (unencrypted, of course).

I don't even know how to code, and this is a news site for hackers? This tripe makes it to the top of the front page?

mirimir 3 days ago 0 replies      
It's a good piece, but the treatment of VPNs is bad. There's a new site about choosing a VPN service: https://thatoneprivacysite.net/ It summarizes a huge amount of information, for 159 VPN services.
descript 2 days ago 0 replies      
It is so difficult to balance productivity/convenience and privacy/security.

Only recently did I stop worrying about privacy/security, and frankly my online experience is much better. I can now participate in any services/apps that catch my eye, I now save CC data at some sites, don't have a VPN/Tor slowing traffic and giving me cloudflare walls/"im not a bot" verification, don't have noscript/ublock/privacy badger breaking most sites, can sync across devices and backup online.

Having both secure & private online behavior is a massive inconvenience. You basically can't participate in the online world as it exists. (There are definitely opportunities to create secure/private versions of existing tools)

maglavaitss 3 days ago 0 replies      
This submission has some more tips for preserving your privacy https://news.ycombinator.com/item?id=11706680
ikeboy 3 days ago 0 replies      
The SMS receiving sites don't work so well IME. They tend to use a single number for everyone, and the demand by spammers etc is so much higher than the free supply that for any given service, your number will probably already be blocked. Or the receiving will be unreliable, etc. I've gotten it to work sometimes, but usually not. Definitely too hard and time consuming for "normal people".

Is there a site that sells phone numbers for viop and sms for bitcoin without requiring identity?

tmaly 2 days ago 0 replies      
As I am cranking away on some Go services on my laptop on my local coffee shop wifi, I see log entries pop up of people trying to access php pages.

I go and ask the staff, and they said their POS is full of some weird software.

A good VPN provider is worth it, but finding one that will not keep logs on you is another story.

bunkydoo 2 days ago 1 reply      
Here's the thing, if you are a normal person - you aren't going to read a guide on something like this. I have a 1 sentence guide on this for the 'normal person' - If you wouldn't want your grandma to see it, just don't enter it in an internet browser.
astazangasta 2 days ago 0 replies      
I'm interested in 'phishing and malware protection', which I think means all my traffic gets reported to Google. This plus Google Analytics means the electric eye is on me wherever I go. Tips to browse safely without these?
fulafel 3 days ago 1 reply      
The article exemplifies why the widespread misappropriation of the VPN term is unfortunate (in the same series as "router" for NAT boxes...); it serves to confuse people about the potential of real overlay networks.
coldpie 3 days ago 0 replies      
Is this really still the best way to pay for stuff anonymously online? Lie to a financial institution? I understand the desire to avoid fraud, but boy does that irk me. Hrmm...
kevingrahl 3 days ago 3 replies      
Skimmed the article, saw that he recommended using Googlemail. Looked at the title of the post again. Looked at the Googlemail recommendation. Laughed and made a mental note not to trust "Troy Hunt".
Improving Docker with Unikernels: Introducing HyperKit, VPNKit and DataKit docker.com
291 points by samber  4 days ago   37 comments top 6
kevinmgranger 3 days ago 1 reply      
Docker's go-p9p now makes for the 3rd implementation of 9P in Go:

docker/go-p9p: https://github.com/docker/go-p9p

rminnich/ninep: https://github.com/rminnich/ninep

rminnich/go9p: https://github.com/rminnich/go9p

There's also the Andrey Mirtchovski and Latchesar Ionkov implementation of go9p, but all I can find is a dead Google Code link from here: http://9p.cat-v.org/implementations

pjmlp 4 days ago 2 replies      
With lots of OCaml love it seems, from a quick glance through the source repositories.
tachion 3 days ago 1 reply      
I wonder if we'll see a move towards getting Docker working on FreeBSD using either Jails or bhyve finally, since it talks about using bhyve hypervisor... That would be really great.
kordless 4 days ago 4 replies      
Seems like only a year ago Docker changed how it used Virtualbox to boot VMs using machine (and caused me endless amounts of suffering trying to figure out how to fix it). Now it would seem they are getting rid of Virtualbox entirely with their own VM...which needs contributions.
chuhnk 3 days ago 0 replies      
Very interesting work. I find go-p9p quite fascinating and think it could really have broader applications. Docker, if you see this: I actually think you're on to something for microservice development that's native to the Docker world. I've been trying to come up with ways of replicating the Unix philosophy of programs that do one thing well and the use of pipes, but was always limited in my thinking in terms of http, json, etc.

My advice, as a guy who's currently building something in the microservice space, explore this further. Spend some time building fit for purpose apps with this and see where it goes.

andrew_wc_brown 4 days ago 1 reply      
I guess I just want to know the takeaway, e.g. will it consume less memory on Mac?
Fox 'uses' a gameplay video from YouTube, and removes the original with DMCA torrentfreak.com
349 points by juanito  2 days ago   75 comments top 14
ascagnel_ 1 day ago 5 replies      
I've said it before, and it bears repeating here: ContentID is not, has not been, and will never be the DMCA. It was developed by YouTube so Viacom would drop the suit that would likely have stripped YouTube's safe harbor protections under the DMCA.

ContentID takedowns are not DMCA takedowns. They operate on a different, much less strict standard. Anyone who works with YouTube can flag any video for any reason (see Scripps taking down a public-domain NASA video[0]), and the content is removed immediately without giving the initial uploader a right to contest it (it can be restored later). A YouTube user has way fewer rights under ContentID than they do under the DMCA. If you are found in violation of ContentID, you must fight both YouTube and the claimant to have your case heard under the DMCA.

[0] http://motherboard.vice.com/blog/nasa-s-mars-rover-crashed-i...

Ralfp 2 days ago 2 replies      
SmarterEveryDay told a story in one of his videos about a slow-mo clip of a tattoo machine in action being ripped from one of his videos by the large press publisher Bauer and reuploaded on Bauer's Facebook page. Then Bauer went on to file a DMCA claim against his original, uploaded by him to his own Facebook page.


just_observing 2 days ago 3 replies      
Until there is a financial penalty for wrongful use of the DMCA this type of thing will continue.

They use robots to create reports (maybe not in this case) because for them there is simply zero downside.

That downside needs to exist.

swang 1 day ago 1 reply      
On a similar note, a game YouTuber, Jim Sterling (Jimquisition), was frustrated that when he put out videos with some clips of video games or other movies, companies would come in and claim ownership and monetize his videos. He didn't like that, since he wanted to be ad-free and these companies would essentially force his viewers to watch ads that he wouldn't even get any money for. These large companies didn't care, because it's free money for them.

So he figured out how to get around stupid contentID flags. He put in clips of videos from different companies and spliced them into his videos. This caused a couple of companies to claim copyright over his video, but when there are multiple claimants for one video, Google's system doesn't give any of them any ad money. So in effect he gets to continue to use clips while not having any companies monetize his viewers.

Orangeair 2 days ago 3 replies      
There is no reason this kind of glitch should even be happening. There should be some kind of date attribute that gets attached to the ContentID data that Fox submits, and YouTube should reject takedowns for which that date is later than the date the video in question was uploaded. This is just ridiculous.
nomercy400 1 day ago 0 replies      
The currently implemented ContentID system isn't really at fault here. It is being effective in protecting Family Guy content appearing on Youtube. It looks at the reference video, analyzes it, and flags every video that matches the reference video by a certain percentage, and deals with it.

The real problem here is that FOX copied the exact contents of an existing 7-year old youtube video in their Family Guy episode (maybe the creators tried to give a tribute). The ContentID rules state that you should have exclusive rights to the content in a specific region, which they don't.

Thinking about this again, the ContentID system is also at fault here: it is blatantly ignoring the upload date for videos on YouTube. I don't know how the ContentID system works, but you should at least give the date on which the copyright of the reference content (=Family Guy episode) starts. If FOX tried to upload the Family Guy episode, the ContentID system should give a warning that it uses already existing, older content. Now Youtube/Google, that can't be so hard to implement, now can it?

Also, YouTube should handle complaints about incorrect takedowns a bit better, like keeping a reputation score for parties using the ContentID system. Say you start at 100. If you do incorrect takedowns, your reputation decreases. If you do correct takedowns, it rises (come up with some statistically sound metric). If it falls below 50, you are excluded from the ContentID system for breaking the rules, or somebody could sue the party or something. Make it transparent (yearly report) to the public, so they can judge.
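The reputation scheme sketched above is easy to prototype. The constants below (start at 100, cut off at 50, a heavier penalty for wrongful takedowns) are the comment's numbers plus my own assumptions, not anything YouTube actually does:

```c
/* Toy model of a ContentID claimant's reputation. All constants
   are illustrative assumptions. */
typedef struct {
    double score;
} Claimant;

#define REP_START  100.0
#define REP_CUTOFF  50.0

static void record_takedown(Claimant *c, int was_correct)
{
    /* Asymmetric update: a wrongful takedown costs more than a
       correct one earns (an assumption, to make abuse expensive). */
    c->score += was_correct ? 1.0 : -5.0;
    if (c->score > REP_START)
        c->score = REP_START;  /* cap at the starting value */
}

static int may_use_content_id(const Claimant *c)
{
    return c->score >= REP_CUTOFF;
}
```

With these numbers, eleven wrongful takedowns in a row would drop a fresh claimant from 100 to 45 and lock them out of ContentID until the score is rebuilt.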

Flashuk100 2 days ago 2 replies      
This is eerily similar to the infamous "You made this?...I made this" comic.
wtfishackernews 1 day ago 0 replies      
YouTube really needs to find a proper solution to this problem. But at least they acknowledge it and are taking steps to improve.


cordite 1 day ago 0 replies      
The same kind of thing happened to NASA back in 2012 with some news group. Both little and big names can screw people over with this.
jokoon 1 day ago 1 reply      
I know that my argument would not stand in court, but when I read this, it makes it harder for me to be against illegal downloading.

Although I prefer buying content directly from the artist, like Louis CK does. As long as it seems "fair", I buy.

That where piracy comes from: a rational argument about a skewed market.

ramy_d 1 day ago 0 replies      
Wow, I never considered the combination of a large broadcasting company that produces original content using sampled material from the internet and then automatically sending ContentID takedowns for said sampled content. It's like a takedown feedback loop.
Kiro 1 day ago 0 replies      
So how often are these bogus takedowns rectified? Will this be? Seems blatantly easy to fix if someone at YouTube just gives it a 5 minute look.
ivanstame 1 day ago 0 replies      
This is ridiculous...
blazespin 1 day ago 1 reply      
Maybe Fox contacted NES and acquired the rights to Double Dribble and the clip.
My wife has complained that OpenOffice will never print on Tuesdays (2009) launchpad.net
418 points by hardmath123  4 days ago   155 comments top 29
Animats 4 days ago 8 replies      
Did this get fixed, 7 years later?

Yesterday, we had a story about Microsoft's disk management service using lots of CPU time if the username contained "user". Microsoft's official reply was: don't do that.

I once found a bug in Coyote Systems' load balancers where, if the USER-AGENT ended with "m", all packets were dropped. They use regular expressions for various rules, and I suspect someone typed "\m" where they meant "\n". Vendor denied problem, even after I submitted a test case which failed on their own web site's load balancer.

Many, many years ago, I found a bug in 4.3BSD which prevented TCP connections from establishing with certain other systems during odd numbered 4 hour periods. It took three days to find the bug in BSD's sequence number arithmetic. A combination of signed and unsigned casts was doing the wrong thing.
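For readers unfamiliar with the class of bug: TCP sequence numbers live in modulo-2^32 arithmetic, so "less than" has to be computed on the wrapped difference. A minimal sketch of the standard idiom (my reconstruction, not the actual 4.3BSD code):

```c
#include <stdint.h>

/* Wrap-safe "a comes before b" for 32-bit sequence numbers.
   The unsigned subtraction wraps modulo 2^32; casting the
   difference to a signed type recovers the direction. Mixing
   signed and unsigned operands without the explicit casts is
   exactly the kind of thing that fails for half the sequence
   space. */
static int seq_lt(uint32_t a, uint32_t b)
{
    return (int32_t)(a - b) < 0;
}
```

With this idiom, a number just below the wrap point still compares as "before" one just above it; get the casts wrong and the comparison flips for part of the number space, which is plausibly how a failure could track time of day (BSD derived initial sequence numbers from a clock).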

sampsonetics 4 days ago 1 reply      
Reminds me of my favorite bug story from my own career. It was in my first year or two out of college. We were using a commercial C++ library for making HTTP calls out to another service. The initial symptom of the bug was that random requests would appear to come back with empty responses -- not just empty bodies, but the entire response was empty (not even any headers).

After a fair amount of testing, I was somehow able to determine that it wasn't actually random. The empty response occurred whenever the size in bytes of the entire request (headers and body together) was exactly 10 modulo 256, for example 266 bytes or 1034 bytes or 4106 bytes. Weird, right?

I went ahead and worked around the problem by putting in a heuristic when constructing the request: If the body size was such that the total request size would end up being close to 10 modulo 256, based on empirical knowledge of the typical size of our request headers, then add a dummy header to get out of the danger zone. That got us past the problem, but made me queasy.

At the time, I had looked at the code and noticed an uninitialized variable in the response parsing function, but it didn't really hit me until much later. The code was something like this:

  void read_status_line(char *line) {
      char c;
      while (c != '\n') {
          c = read_next_byte();
          *(line++) = c;
      }
  }
Obviously this is wrong because it's checking c before reading it! But why the 10 modulo 256 condition? Of course, the ASCII code for newline is 10. Duh. So there must have been an earlier call stack where some other function had a local variable storing the length of the request, and this function's c variable landed smack-dab on the least-significant byte of that earlier value. Arrrrgh!
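The natural fix (my sketch, not the vendor's actual patch) is to read each byte before testing it. Here read_next_byte is simulated from a fixed buffer purely to keep the sketch self-contained:

```c
/* Hypothetical stand-in for the library's byte source, reading
   from a fixed buffer so the example runs on its own. */
static const char *response;
static char read_next_byte(void)
{
    return *response++;
}

/* Fixed version: assign c before comparing it, so the loop never
   examines an uninitialized variable. */
static void read_status_line(char *line)
{
    char c;
    do {
        c = read_next_byte();
        *(line++) = c;
    } while (c != '\n');
    *line = '\0';  /* terminator added for this sketch only */
}
```

The do-while guarantees the first comparison sees a byte that was actually read, so no leftover stack value (like the least-significant byte of an earlier length) can terminate the loop early.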

mpeg 4 days ago 2 replies      
The title reminds me of "the 500 mile email"


icambron 4 days ago 5 replies      
The most interesting part of this story to me is actually that his wife noticed that the printer didn't work on Tuesdays. I'd have never, ever put that together, no matter how many times I saw it succeed or fail. I'd actually be more likely to figure it out by debugging the CUPS script than I would be observing my printer's behavior. Can a lot of people pick up on correlations like that? "Ever notice how it's always Tuesday when the printer won't work?"
mazda11 4 days ago 1 reply      
My most memorable bugfix was when I was temporarily on a team that did email encryption/decryption. They had one customer where some mails could not be decrypted; they had been fighting with this for one year, and no one could figure out what was going on. I told them to do a dump for a week with the good and bad emails. After one week I was given the dump of files, looked at the count of bad vs. good, did some math in my head and said: "Hmm, it appears that about 1/256 mails is bad. That could indicate that the problem is related to a specific byte having a specific value in the random 256-bit AES key. If there is a specific value giving problems it is probably 0x00, and I would guess the position is the last or first byte."

I did a check by decoding all S/MIME mails to readable text with openssl - sure enough, all bad emails had 0x00 as the least significant byte. Then I looked at the ASN.1 spec and discovered it was a bit vague about whether the least significant byte had to be there if it was 0x00. I inserted a line into the custom-written IBM 4764 CCA driver, written in C and called via JNI. Then all emails decrypted.

The team dropped their jaws - they had been fighting with it for a year and I diagnosed the bug only by looking at the good/bad ratio :)

I might remember some details wrong - but the big picture is correct :)

alblue 4 days ago 2 replies      
The TL;DR is that the "file" utility was miscategorising files that had "Tue" in the first few bytes as an Erlang JAM file, with knock-on effects for PostScript files generated with a header comment containing Tue in the date.
nilstycho 4 days ago 1 reply      
The weirdest case of my tenure as a neighborhood computer tech was a personal notebook computer that would not boot up at the customer's apartment. Of course we assumed user error, but further investigation revealed that if the computer were running as it approached the home, it would bluescreen about a block away.

We guessed it was due to some kind of RF interference from a transmitter on the apartment building. Removing the WiFi module and the optical drive had no effect, so we further guessed it was interference within the motherboard or display. Rather than investigate further, we replaced the notebook at that point.

mark-r 4 days ago 0 replies      
I have an anecdote, which isn't mine but comes from someone I know personally. This guy was working as a service tech, and was called out to diagnose a problem with a computer that had been recently moved. It worked most of the time, but any attempt to use the tape drive failed within a certain number of seconds (this was long ago, when tape drives were still a thing). Everything had worked fine before the move, and diagnostics didn't show anything out of place. Then he happened to look out the window - this was a military installation, and there was a radar dish rotating nearby. The failures occurred exactly when the radar dish was pointed their direction. It turns out the computer had been moved up one floor, which strengthened the interference just enough to cause the failure.
kazinator 4 days ago 0 replies      
But "Tue" is not at the fourth byte in the example, which has:

 %%CreationDate: (Tue Mar 3 19:47:42 2009)
Something munged the data. Perhaps some step which removes all characters after %%, except those in parentheses?

 %%(Tue Mar 3 ...)
Now we're at the fourth byte. Another hypothesis is that the second incorrect match is kicking in. That is to say, some fields are added above %%CreationDate such that the Tue lands on position 79. The bug that was fixed in the magic database is this:

  -+4    string    Tue Jan 22 14:32:44 MET 1991    Erlang JAM file - version 4.2
  -+79   string    Tue Jan 22 14:32:44 MET 1991    Erlang JAM file - version 4.2
  ++4    string    Tue\ Jan\ 22\ 14:32:44\ MET\ 1991    Erlang JAM file - version 4.2
  ++79   string    Tue\ Jan\ 22\ 14:32:44\ MET\ 1991    Erlang JAM file - version 4.2
(This is a patch of a patch: a fix to an incorrect patch.) There are two matches for this special date which identifies JAM files: one at offset 4, but a possible other one at offset 79, which will cause the same problem.

The real bug here is arguably the CUPS script. It should identify the file's type before munging it. And it shouldn't use a completely general, highly configurable utility whose data-driven file classification system is a moving target from release to release! This is a print script, so there is no reason to suspect that an input file is a Doom WAD file, or a Sun OS 4 MC68000 executable. The possibilities are quite limited, and can be handled with a bit of custom logic.

Did Brother people write this? If so, I'm not surprised.

Nobody should ever write code whose correct execution depends on the "file" utility classifying something. That is, not unless you write your own "magic" file and use only that file; then you're taking proper ownership of the classification logic, such that any bugs are likely to be your own.

The fact that file got something wrong here is a red herring; the file utility is wrong once in a while, as anyone knows who has been using various versions of it regularly for a few decades. Installations of the utility are only suitable for one-off interactive use. You got a mystery file from out of the blue, and need a clue as to what it is. Run file on it to get an often useful opinion. It is only usable in an advisory role, not in an authoritative role.

Adaptive 4 days ago 4 replies      
I've noticed that printing is still one of the poorest UX aspects of *nix/OSS and regularly seems to suffer from errors so egregious that they can only be attributed to OSS devs not dogfooding these features. I'm assuming they just don't print much (I mean, we ALL print less than 20 years ago, but all the more reason to test these features which, when you need them to work you REALLY need them to work).
t0mek 4 days ago 0 replies      
During my studies I had a course called "Advanced Network Administration". I learnt about the OSPF routing protocol and its Quagga [1] implementation and I had to prepare a simple installation that consisted of 3 Linux machines. They were connected with cheap USB network adapters.

After everything was configured I started the Quagga daemons and somehow they just didn't want to talk to each other. I've opened tcpdump to see what happens and the OSPF packets were exchanged properly. After a while the communication and routing was established. I thought that maybe the services just needed some time to discover the topology.

I restarted the system to see if it was able to come up automatically, but the problem reoccurred - the daemons just didn't see each other. Again, I launched tcpdump, tweaked some settings and now it worked - until it didn't a few minutes later.

It took me a long time to find out that the diagnostic tool I'd used had actually changed the observed infrastructure (like in the quantum world). tcpdump enables promiscuous mode on the network interfaces, and apparently this was required for Quagga to run on the cheap USB ethernet adapters. I used ifconfig promisc and after that OSPF worked stably.

[1] http://www.nongnu.org/quagga/

carapace 4 days ago 1 reply      
Stuff like this is why I find "Synthetic Biology" so fucking scary.
pif 4 days ago 0 replies      
CERN: LEP data confirm train time tables http://cds.cern.ch/record/1726241

CERN: Is the moon full? Just ask the LHC operators http://www.quantumdiaries.org/2012/06/07/is-the-moon-full-ju...

BrandonM 4 days ago 0 replies      
Near the end of that post, the commenter suggested a fix that includes the most qualified Useless Use of Cat entry[0] that I've ever seen!

 cat | sed ... > $INPUT_TEMP
[0] http://porkmail.org/era/unix/award.html#cat

chris_wot 4 days ago 0 replies      
Wait till you see where they found the print server!


krylon 3 days ago 0 replies      
One of our users complained that she could no longer print PDF documents. Everything else, Word, Excel, graphics, worked fine, but when she printed a PDF ... the printer did emit a page that - layout-wise - pretty much looked like it was supposed to, except all the text was complete and utter nonsense.

Or was it? I took one of the pages back to my desk, and later in the day I had an idle moment, and my eyes wandered across the page. The funny thing is, if I had not known what text was supposed to be on the page, I would not have noticed, but the text was not random at all. Instead, all the letters had been shifted by one place in the alphabet (i.e. "ABCD" became "BCDE").

I went back to the user and told her to check the little box that said "Print text as graphics" in the PDF viewer's printing dialog, and voila - the page came out of the printer looking the way it was supposed to.

Printing that way did take longer than usual (a lot longer), but at least the results were correct.

To this day, I have no clue where the problem came from, and unfortunately, I did not have the time to investigate the issue further. I had never seen such a problem before or after.

In a way it's part of what I like about my job: These weird problems that seem to come out of nowhere for no apparent reason, and that just as often disappear back into the void before I really understand what is going on. It can be oh-so frustrating at times, but I cannot deny that I am totally into weird things, so some part of me really enjoyed the whole experience.

gchadwick 4 days ago 0 replies      
Surely the real bug is the reliance on the 'file' utility in the first place? It attempts to quickly identify a file that could be literally anything, so it's not surprising (and indeed should be expected) that sometimes it gets it wrong.

I don't know the details of the CUPS script, but presumably it can only deal with a small number of different file types. Implementing its own detection to positively identify PS vs. whatever other formats it deals with vs. everything else would be far more robust.

kinai 4 days ago 0 replies      
I once had a case with a desktop system where, when you sat down and started typing, it often hard-reset. Turned out Dell had left some metal piece in the case which was hanging between the case and the motherboard (in those few millimeters), and some stronger desk vibration caused a short circuit.
mark-r 4 days ago 1 reply      
I love the modification that pipes the output of cat into sed; doesn't he realize that cat is redundant at that point?
gsylvie 4 days ago 0 replies      
Here's a great collection of classic bug reports (including the never-printing-on-tuesdays): https://news.ycombinator.com/item?id=10309401
sklogic 4 days ago 0 replies      
No, it is a cups bug indeed. File was never guaranteed to be precise in the first place, it is not a good idea to rely on it.
rcthompson 4 days ago 0 replies      
I once found a bug in a weather applet that only occurred when the temperature exceeded 100 degrees. The 3-digit temperature caused a cascade of formatting issues that rendered part of the applet unreadable. I believe the author used Celsius, and so would never have encountered this bug on their own.
DonHopkins 4 days ago 2 replies      
My 6502-based FORTH systems would sometimes crash for no apparent reason after I tweaked some code and recompiled it. Whenever it got into crashy mode, it would crash in a completely different way, on a randomly different word. I'd put some debugging code in to diagnose the problem, and it would either disappear or move to another word! It was an infuriating Heisenbug!

It turns out that the 6502 has a bug [1] that when you do an indirect JMP ($xxFF) through a two byte address that straddles a page boundary, it would wrap around to the first byte of the same page instead of incrementing the high half of the address to get the first byte of the next page.

And of course the way that an indirect threaded FORTH system works is that each word has a "code field address" that the FORTH inner loop jumps through indirectly. So if a word's CFA just happened to straddle a page boundary, that word would crash!

6502 FORTH systems typically implemented the NEXT indirect threaded code inner interpreter efficiently by using self modifying code that patched an indirect JMP instruction on page zero whose operand was the W code field pointer. [2]

JMP indirect is a relatively rare instruction, and it's quite rare that it's triggered by normal static code (since you can usually catch the problem during testing), but self modifying code has a 1/256 chance of triggering it!
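The hardware behavior is small enough to model directly; this is a sketch of the documented NMOS quirk, not emulator-grade code:

```c
#include <stdint.h>

/* NMOS 6502 JMP ($xxFF) quirk: the high byte of the target is
   fetched from the start of the SAME page, because the low byte
   of the pointer increments without carrying into the high byte. */
static uint16_t jmp_indirect_buggy(const uint8_t *mem, uint16_t ptr)
{
    uint8_t  lo      = mem[ptr];
    uint16_t hi_addr = (uint16_t)((ptr & 0xFF00u) | ((ptr + 1u) & 0x00FFu));
    return (uint16_t)(lo | ((uint16_t)mem[hi_addr] << 8));
}
```

So a code field at $02FF picks up its high byte from $0200 instead of $0300 - and since any 16-bit pointer has a 1-in-256 chance of ending in $FF, self-modifying NEXT loops hit it sooner or later.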

A later version of the chip, the 65C02, fixed that bug. It could manifest in either compiled FORTH code or the assembly kernel. The FIG FORTH compiler [3] worked around it at compile time by allocating an extra byte before defining a new word if its CFA would straddle a page boundary. I defined an assembler macro for compiling words in the kernel that automatically padded in the special case, but the original 6502 FIG FORTH kernel had to be "checked and altered on any alteration" manually.

[1] http://everything2.com/title/6502+indirect+JMP+bug

[2] http://forum.6502.org/viewtopic.php?t=1619

"I'm sure some of you noticed my code will break if the bytes of the word addressed by IP straddle a page boundary, but luckily that's a direct parallel to the NMOS 6502's buggy JMP-Indirect instruction. An effective solution can be found in Fig-Forth 6502, available in the "Monitors, Assemblers, and Interpreters" section here. (The issue is dealt with at compile time; there is no run-time cost. The word CREATE pre-pads the dictionary with an unused byte in the rare cases when the word about to be CREATEd would otherwise end up with a code-field straddling a page boundary.)"

[3] http://www.dwheeler.com/6502/FIG6502.ASM

  ; The following offset adjusts all code fields to avoid an
  ; address ending $XXFF. This must be checked and altered on
  ; any alteration, for the indirect jump at W-1 to operate !
  ;
  .ORIGIN *+2

  .WORD DP     ;)
  .WORD CAT    ;| 6502 only. The code field
  .WORD CLIT   ;| must not straddle page
  .BYTE $FD    ;| boundaries
  .WORD EQUAL  ;|
  .WORD ALLOT  ;)

GigabyteCoin 3 days ago 0 replies      
"tue" means "kill" in french... I wonder if a french programmer somewhere had something to do with this?
lifeisstillgood 4 days ago 2 replies      
And this is why we won't ever get AI. Humans seem to only manage to get to a certain level of complex before it all gets too much.

There are supposedly people in Boeing who understand literally every part of a 747, the wiring and the funny holes in the windows. But there is probably no one who understands all parts of Windows 10.

We're doomed to keep leaping like dolphins to reach a fish held too high by a sadistic Orlando world trainer

gregschlom 4 days ago 2 replies      
So what's the lesson here? What should we learn from that?
broodbucket 4 days ago 1 reply      
Is it just me or does this get posted every month?
meeper16 4 days ago 1 reply      
Yet another reason I don't let OpenOffice or any Linux UIs slow me down. It's all about the command line and always will be.
Forty Percent of the Buildings in Manhattan Could Not Be Built Today nytimes.com
317 points by strivedi  2 days ago   225 comments top 21
humanrebar 2 days ago 6 replies      
> New York's zoning rules were intended to create less cramped quarters, but they also have consequences for the number of aggregate apartments in the city. Such limitations can quickly decrease the supply of housing, and most likely drive up rents. If every tenement in the city were reconfigured in these ways, they would be less crowded, but there would also be fewer apartments to go around.

Another part of the article says almost 3/4 of the square footage in Manhattan was built between 1900 and 1930. I'm not sure how these regulations are supposed to have anything but a profound effect on rents. I can understand that people want to preserve aesthetics, but at what cost?

There are many working class people who have unconscionable commutes into Manhattan partly because of NIMBY zoning laws.

brudgers 2 days ago 0 replies      
To me, there's a potential implication in the headline that doesn't quite paint the right picture. Today's zoning code expresses a plan for dealing with the good and bad aspects of previously constructed buildings.

Today's zoning code deals with the height and bulk and uses of existing buildings as facts when determining the hygienic requirements of future buildings. Existing non-conformities are part of the logistical plan for handling change. The tightening of rules over time is the result of the strain prior laxity places on resources today.

dankohn1 2 days ago 1 reply      
This is great analysis. Zoning laws still leave plenty of opportunity for new construction, and Mayor De Blasio has made major changes to encourage new construction, which is the only potential solution for the high housing costs on the East and West US coasts.

For an example of what zoning laws were trying to avoid, look at images of Gotham City from Tim Burton's 1989 Batman, where the buildings grow outwards as they go up like trees trying to absorb all sunlight. http://illusion.scene360.com/wp-content/uploads/2015/03/tim-...

Manhattan is less dense today than it was a hundred years ago, but its density can and should increase as taller, healthier buildings are added. http://www.vox.com/2014/9/23/6832975/manhattan-population-de... (Written from the 22nd floor of the first LEED Platinum certified apartment building in Manhattan.)

gregwtmtno 2 days ago 6 replies      
No one wants to go back to the days of tenements, but we need to relax these zoning rules. We need more housing stock at every income level except ultra-luxury.
Spooky23 2 days ago 3 replies      
The headline is a bullshit statement, and the reporter should know that. I don't expect click bait from the NYT.

Urban zoning isn't the same as the burbs. Most of those buildings could be built today, but would require a variance. The buildings that would "never get built" today wouldn't be a result of zoning, but the ADA -- the need to have ramps eliminates new construction of walk-ups and the requirements for wheelchair accessible elevators increases the cost of construction, reduces square footage and makes it too expensive to build buildings similar to many common Manhattan buildings.

In the case of NYC in the last decade, they also require paying off politicians. If you follow NY news, you'll notice that the US Attorney has been very busy investigating that practice.

Xcelerate 2 days ago 10 replies      
A little bit unrelated to the article, but why has the US quit building skyscrapers for the most part? I know there's a few in the works (Salesforce tower) but generally speaking, it seems like the skyscrapers that exist in most major cities were built long ago and they don't plan on adding any more.
nxzero 2 days ago 1 reply      
Having done a residential housing startup, violated zoning, and talked in person to current or former heads of zoning in a number of cities, there's got to be a better solution.

Taking a step back, it might be worth understanding how this all got started: http://ny.curbed.com/2013/3/15/10263912/the-equitable-buildi...

If you understand the history and common zoning laws, you'll quickly start to see a pattern, that being it's a reactionary system that's often designed by politics, not science.

I personally have given up on the topic, but hope someone is able to make some progress.

patmcguire 1 day ago 0 replies      
The bit about developers demolishing all but 1/4 of a building and rebuilding to upgrade while keeping the zoning is nuts.


edwingustafson 2 days ago 3 replies      
Same is true of cars on the road -- some or all vehicles from past model years would fail to meet this year's automotive regulations.
jdnier 2 days ago 0 replies      
When I first saw the article title, I thought it might be about all the building materials and specialist skills required for construction that are no longer available or practical, not to mention the cost of building with those materials and techniques now. Zoning issues aside, I bet many of those buildings really couldn't be built today.
gbourne1 23 hours ago 0 replies      
The zoning laws actually help the diversity, aesthetics aside. As the laws change, the buildings change with them: some bulky and tall, now skinny and short. The buildings of each era are shaped by the changing laws.
coldtea 1 day ago 0 replies      
>New York's zoning rules were intended to create less cramped quarters, but they also have consequences for the number of aggregate apartments in the city. Such limitations can quickly decrease the supply of housing, and most likely drive up rents.

Sure, so?

Obviously, if you allow squashing 10-20 people into every 1,000 sq ft you could lower the rents -- but unless you aspire to be an urban slum, you should have some limits in place, even if they raise rents.

snlacks 1 day ago 0 replies      
Why does it have to be simple enough for everyone to understand? We're talking about building in one of the most expensive parts of the world, where only the richest organizations can afford to build, and it affects millions of people directly, plus the state and national image. Making it easy isn't necessarily going to protect the interests of the city as a whole.
tomohawk 1 day ago 0 replies      
Big ball of mud. The code that is the most impenetrable and hardest to refactor lasts the longest.
sandworm101 2 days ago 0 replies      
Junk headline. The day after any building code change, all that came before could not be built again.

A better story would have been how past building codes shaped many NY icons. The Empire State Building's shape isn't some architectural masterpiece; it is a diagram of the building code at the time. It fills exactly as much space as was allowed.

anizan 1 day ago 0 replies      
Is there any zoning for safe spaces? Does anyone know? Or do I need to build my own bunker, like Switzerland did for each and every one of its citizens during the Cold War? Btw, is the Cold War over yet? Or is it just going through a thaw right now?
Shivetya 2 days ago 1 reply      
Okay I cannot find it, but is there a square footage requirement per occupant for new living spaces?
kazinator 1 day ago 0 replies      
That helps give NY its grit. If you want some pink little buildings, go to Miami Beach.
hackaflocka 2 days ago 0 replies      
Times change. Situations change. It's understandable.
Mz 1 day ago 0 replies      
It's a shame that so much about urban planning just proves the saying "That government is best which governs least."
hiou 2 days ago 2 replies      
I'm curious what all of these increase-housing-supply fanatics think about cities in the Rust Belt (Cleveland, St. Louis, etc.) with an oversupply of housing, which makes them a hotbed of crime and gang activity?
The 9 lines of code that Google allegedly stole from Oracle majadhondt.wordpress.com
357 points by nkurz  3 days ago   195 comments top 50
jbob2000 3 days ago 9 replies      
Looks like something I've written a hundred times. It's a common pattern, you could "steal" this just through organically writing a program.

The more I hear about this case, the more I realize it's just a bunch of lawyers trying to pad their bank accounts. No sane engineer would claim this is infringement.

sgc 3 days ago 0 replies      
Given that the same guy wrote both the original code and the supposedly infringing code, I have a bit of an analogy here from personal experience in another field.

For a while I worked in translation, and I translated a couple of books for the same author. One of the later books quoted about a page from the first one I had translated a couple of years earlier. I just translated it again because it was faster than finding the passage in my other translation (first point). Afterwards, I went back out of curiosity and checked the two translations against each other. I was quite surprised to see that in one full page of translation, after years of further experience, only one or two prepositions had been meaninglessly changed (point two).

Some things are just so obvious that the same guy doing the same thing years apart will produce the same results, especially if he is an expert in his craft. Unless there is some way to prove otherwise, this point of the case should be definitively dropped.

nedsma 3 days ago 4 replies      
Dear goodness. And there are tens if not hundreds of people involved in trying to prove/disprove this case and they're all getting some hefty money. What a waste of human intellect and time.
guelo 3 days ago 1 reply      
This article is from 2012 and is very outdated. The "famous 9 lines" are not being contested anymore. Google lost that case. The current trial is about whether Google's copyright infringement constituted "fair use".
AdmiralAsshat 3 days ago 4 replies      
One thing I've been thinking about as I've read through the trial:

It's my understanding (I am a wee lad compared to the grizzled vets here, so bear with me) that most of our common *nix tools were written during the UNIX days and were technically proprietary (awk, grep, cut, etc). When Linux came around, these tools were "ported" to become GNU tools and completely rewritten on the backend, while still keeping the same name so that existing UNIX developers would feel at home using the same tools on Linux, BSD, etc.

The key point here is that they intentionally kept the same command names, for familiarity's sake.

Given that, could one make the analogy that a command name would be similar to an "API" and should also have been illegal by Oracle's logic?

worldsayshi 3 days ago 2 replies      
Wow, this is legal bullshitting beyond comprehension. It is the equivalent of one engineer copycatting the way another engineer moves his arm when fastening a screw. Giving anything beyond 5 minutes of attention to this in a court is an insult to society.
gvb 3 days ago 0 replies      
More relevant information: "What are the 37 Java API packages possibly encumbered by the May 2014 Oracle v Google decision?"


From the #1 answer (it is worth clicking the link and reading the full answer):

 java.awt.font  java.beans  java.io  java.lang
 java.lang.annotation  java.lang.ref  java.lang.reflect  java.net
 java.nio  java.nio.channels  java.nio.channels.spi  java.nio.charset
 java.nio.charset.spi  java.security  java.security.acl  java.security.cert
 java.security.interfaces  java.security.spec  java.sql  java.text
 java.util  java.util.jar  java.util.logging  java.util.prefs
 java.util.regex  java.util.zip  javax.crypto  javax.crypto.interfaces
 javax.crypto.spec  javax.net  javax.net.ssl  javax.security.auth
 javax.security.auth.callback  javax.security.auth.login  javax.security.auth.x500  javax.security.cert
 javax.sql

foobarrio 3 days ago 6 replies      
I thought "not obvious to a practitioner of the craft" was a requirement for a patent, no? Give 10 programmers the task of writing rangeCheck() and you'll end up with very similar-looking code.
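As a sketch of that claim: the helper below is a hypothetical clean-room version, written only from the contract "validate the half-open range [from, to) against an array of length len." The names (`checkRange`, `from`, `to`, `len`) are made up for this comment, yet the three comparisons and even the exception choices land almost inevitably on the same shape as the disputed snippet.

```java
public class RangeCheckSketch {
    // Hypothetical clean-room range check for the half-open
    // range [from, to) over an array of length len.
    static void checkRange(int len, int from, int to) {
        if (from > to)   // reversed range is a caller bug
            throw new IllegalArgumentException("from(" + from + ") > to(" + to + ")");
        if (from < 0)    // below the first element
            throw new ArrayIndexOutOfBoundsException(from);
        if (to > len)    // past the end (to == len is fine: the bound is exclusive)
            throw new ArrayIndexOutOfBoundsException(to);
    }

    public static void main(String[] args) {
        checkRange(10, 0, 10);          // whole array: accepted silently
        try {
            checkRange(10, 2, 11);      // upper bound past the end
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("rejected toIndex 11");
        }
    }
}
```

Any of the 10 hypothetical programmers might order the three checks differently or word the messages differently, but the structure is forced by the contract itself.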
holtalanm 3 days ago 5 replies      
am I the only one that when looking at the implementation sees that there is a major flaw in the code?

if(toIndex > arrayLen) does not handle the case in which toIndex == arrayLen, which should still throw an ArrayIndexOutOfBoundsException if we are dealing with 0-based indexes.

Please correct me if I am wrong.
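For what it's worth, java.util.Arrays documents toIndex as an exclusive upper bound for methods like Arrays.sort(a, fromIndex, toIndex), so toIndex == arrayLen is the legal "whole array" case and the plain `>` comparison is correct: the last element actually touched is a[toIndex - 1]. A minimal demonstration of that convention:

```java
import java.util.Arrays;

public class ExclusiveBoundDemo {
    public static void main(String[] args) {
        int[] a = {5, 3, 8, 1, 9, 2, 7, 4, 6, 0};
        // toIndex is exclusive: passing a.length sorts the whole array,
        // so the range check must accept toIndex == arrayLen.
        Arrays.sort(a, 0, a.length);
        System.out.println(Arrays.toString(a));
        // Only toIndex > a.length would reach past the end:
        try {
            Arrays.sort(a, 0, a.length + 1);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("toIndex 11 rejected");
        }
    }
}
```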

Aelinsaar 3 days ago 1 reply      
Incredible. The amount of money being set to the fire for the sake of something that even a student knows is utter crap.
devy 3 days ago 0 replies      
Sort of off topic, but does anyone know who this Tim Peters is who created Timsort? The Python docs and Wikipedia[1] have virtually no bio for him, even though he's a very well-known Python contributor and his code has become a legacy in this billion-dollar lawsuit, among other things (like the Zen of Python[2]).

[1]: https://en.wikipedia.org/wiki/Timsort

[2]: https://www.python.org/dev/peps/pep-0020/

ZeroGravitas 3 days ago 0 replies      
The worst part is that the programmer only "stole" these lines as he was contributing an improvement back to the OpenJDK and wanted his stuff to be compatible. Which adds one more level of absurdity.
hermannj314 3 days ago 0 replies      
Yeah, a bunch of jurors will be ruined financially while being forced to watch billionaires fight over how to best split up their empire. Sortition is how you spell slavery in the 21st century.
enibundo 3 days ago 1 reply      
As a software engineer, I get sad when I read news like this.
CiPHPerCoder 3 days ago 2 replies      
This code is ugly anyway:

  private static void rangeCheck(int arrayLen, int fromIndex, int toIndex {
      if (fromIndex > toIndex)
          throw new IllegalArgumentException("fromIndex(" + fromIndex +
              ") > toIndex(" + toIndex + ")");
      if (fromIndex < 0)
          throw new ArrayIndexOutOfBoundsException(fromIndex);
      if (toIndex > arrayLen)
          throw new ArrayIndexOutOfBoundsException(toIndex);
  }
Missing a closing paren in the function prototype, among other things.

  private static void rangeCheck(int arrayLen, int fromIndex, int toIndex)
  {
      if (fromIndex > toIndex) {
          throw new IllegalArgumentException(
              String.format("fromIndex(%d) > toIndex(%d)", fromIndex, toIndex)
          );
      }
      if (fromIndex < 0) {
          throw new ArrayIndexOutOfBoundsException(fromIndex);
      }
      if (toIndex > arrayLen) {
          throw new ArrayIndexOutOfBoundsException(toIndex);
      }
  }
There you go Google, Oracle, et al. I release this snippet under MIT / WTFPL / CC0. You're welcome.

erikb 3 days ago 0 replies      
When the content of a trial is 9 lines of code, then of course the topic is not really the 9 lines of code. It's just a way to gain something else. Everybody involved probably knows that.

I personally am very happy if powerhouses fight each other with lawsuits instead of giving me a sword and asking me to die for them. In that regard I feel humanity has come quite far over the last centuries.

tantalor 3 days ago 1 reply      
cognivore 3 days ago 1 reply      
That has to be a joke. By pursuing this, Oracle just makes themselves look like idiots to anyone who actually has any technical knowledge.

So, they're idiots.

gsylvie 3 days ago 0 replies      
"I May Not Be Totally Perfect But Parts of Me Are Excellent" - I think this is a useful article to read when considering the 9 lines of code, because copyright law tends to treat novels, pop songs, and software code as the same: http://fairuse.stanford.edu/2003/09/09/copyright_protection_...
Twisell 3 days ago 1 reply      
This is total FUD. (EDIT: because those lines of code are already out of every discussion to be held in the current retrial -- they have already been ruled out; the only remaining question is fair use.)

This trial should now be entirely focused on whether Google "stole" the API SSO under a fair use exception and shall be relieved.

The preceding phases of this case already determined that:

- those nine lines are not significant
- Google used the API SSO without the consent of Sun/Oracle and without any license
- the API SSO of Java is indeed copyrightable (this was ruled on appeal and confirmed by the Supreme Court)

This retrial is only happening because judge Alsup did a half-baked first trial and the appeals court returned the case to him after invalidating his bad ruling about the non-copyrightability of APIs.

For those who seek deep insights about this case, take a look at Florian Mueller's blog: http://www.fosspatents.com

He pretty accurately predicted the reversal of the first ruling, against the opinion of many mainstream analysts. And he frequently publishes links to public court documents so you can make up your mind by yourself.

EDIT: If you downvote, please argue; otherwise it's very suspicious. I'm totally open to discussion, but I can't fight against a hidden lobbyist activity that systematically downvotes diverging views.

EDIT2: I edited the first sentence to be more explanatory. I've seen I got some upvotes, but the silent bashing seems to continue. Again, please argue!

I don't get why the name of this blogger unleashes so much passion when he actually always publishes documents and links to actual rulings. Yes, he clearly doesn't write as elegantly as some, and yes, he's by now pretty opinionated, but why so much hate?

laverick 3 days ago 1 reply      
Uhm. That code wouldn't even compile...
draw_down 3 days ago 0 replies      
> Every company tries to control its developers actions, but does management really know what goes into the software?

This is backwards; developers do what management allows. If management cares to know what goes into the software, they will know. There are ways to know. Whether business people want to pay for that is a different matter. Of course they don't, for this precise reason -- so they can throw up their hands and say, "those darn developers!"

chiefalchemist 3 days ago 0 replies      
Actual code aside, I would think this should strike fear into the hearts and minds of any dev who wishes to change jobs but doesn't change industries / product type. I would think that, push comes to shove, employers will opt for less direct experience, else they'll fear inheriting "a temporary measure" they didn't ask for. That is, suddenly, experience might not be as valuable as it used to be.
sleepychu 3 days ago 0 replies      
I'm pretty sure I've seen


0xmohit 3 days ago 0 replies      
Thankfully the patent system didn't exist when the number system was developed. Otherwise one would need to pay a royalty for counting.
meganvito 3 days ago 0 replies      
At the university I graduated from, a professor would definitely mark this as plagiarism and give an F unless strict rules of attribution were followed. Most OpenJDK source files have the usual license header as their first lines. Maybe I am a late student of the JDK, or maybe the court may grant an exception. Finally, you have to consider for yourself: what does it mean to contribute to open source?
curiousgal 3 days ago 0 replies      
>if i is between 0 and 11 before retrieving the ith element of a 10-element list.

Shouldn't i be between 0 and 9?

foldablechair 3 days ago 0 replies      
Reminds me of all those court cases over "stolen" logos: with a small, fixed set of geometric primitives, the probability of coincidence is just high. Of course, some people believe all art is imitation and nothing ever gets created from first principles.
Matt3o12_ 3 days ago 0 replies      
Does anyone have an idea what is really going on?

I've heard people say that Google actually copied the API structure (which is copyrightable), but I've also heard that this lawsuit was actually about Google using a wrong (or missing) license. And I've heard that Google also manipulated the developer community by only propagating "we only copied 7 lines of code" and "big evil Oracle sues us".

From what I know, Google used Java's API structure but did not include a license. They could have paid Oracle for a license to use it commercially, or they could have used the GPL from OpenJDK and been bound by its restrictions. What they did instead was not include a license at all, because they did not want to pay Oracle but also did not want to be bound by the GPL (which might complicate things with phone manufacturers that change the code).

Could anyone tell me what the fuck this lawsuit is actually about?

eps 3 days ago 0 replies      
Am I reading this correctly that it's actually buggy?

It doesn't work properly if the array is zero-based, nor if it's 1-based. Nor does it work properly whether toIndex is meant to be included in the range or excluded from it.

nutate 3 days ago 0 replies      
The resonance with left-pad and the questions of "how exactly to we share super simple code" evolves through so many different prisms. From legal to organizational to not invented here to...
Tloewald 3 days ago 0 replies      
Is it just me or does this code seem to have an off-by-one error (i.e. throwing on toIndex > arrayLen and not toIndex >= arrayLen, given that the lower bound check implies zero-based arrays)?
knodi123 3 days ago 0 replies      
Interesting that these 9 lines were apparently re-typed by hand, or possibly even from memory.... or so I suppose based on the missing close-paren on the first line...
chenster 3 days ago 0 replies      
Thanks for wasting court time on nonsense like this. Things like this squat in our legal system and yield absolutely nothing.
cm2187 3 days ago 0 replies      
There is a lot of vested interest in this case and I do not know the author of this article. Are we sure the claim is down to the implementation of this function?
mark242 3 days ago 1 reply      
A void function that does nothing but throw exceptions. Scala engineers everywhere cringe at the thought of converting this kind of code to native Scala.
rootlocus 3 days ago 0 replies      

 > Google owes Oracle between $1.4 billion and $6 billion in damages if liable
In what damages, exactly?

shubhamjain 3 days ago 0 replies      
Perhaps, someone should make a software that checks code to see if it is infringing any copyright. :)
meganvito 3 days ago 0 replies      
I would leave one last comment: doing "cheap things" is habitual.
eb0la 3 days ago 0 replies      
I bet you can get a similar code from BSD, EMACS, Ingres, or any venerable open source codebase and use it as prior art against that patent claim.

Ok, maybe those venerable codebases don't have exception handling like Java, but you can prove they had the same logic maybe 10 or 20 years before that code was written.

udkl 3 days ago 0 replies      
Naively, that's roughly $155 million to $670 million per line of code.
masters3d 3 days ago 0 replies      
One billion dollars per line.
Oletros 3 days ago 0 replies      
This case is not about rangeCheck; it's about the declarations of the 37 Java API packages.
BurningFrog 3 days ago 0 replies      
Is this what the whole case rests on, or is it just one of many details?
hathym 3 days ago 0 replies      
wow, each line costs nearly one billion dollars
smaili 3 days ago 2 replies      
tldr -

  private static void rangeCheck(int arrayLen, int fromIndex, int toIndex {
      if (fromIndex > toIndex)
          throw new IllegalArgumentException("fromIndex(" + fromIndex +
              ") > toIndex(" + toIndex + ")");
      if (fromIndex < 0)
          throw new ArrayIndexOutOfBoundsException(fromIndex);
      if (toIndex > arrayLen)
          throw new ArrayIndexOutOfBoundsException(toIndex);
  }

vladaionescu 3 days ago 0 replies      
Pretty sure that the only reason they copied that code was that they didn't know how to do it themselves.
Eat, sleep, code, repeat is such bullshit signalvnoise.com
342 points by ingve  1 day ago   173 comments top 42
ChuckMcM 21 hours ago 4 replies      
I don't know if the author knows anyone who is an artist, but if they knew an artist it might add perspective on the t-shirt.

There is a type of person who is driven to express their ideas and emotions. They can be moody and quiet or loud and extroverted but they get consumed by the process of creation and go into something like a trance state while creating.

I have been that type of person, and when I let myself go, I would spend hours and hours writing and rewriting code as the design evolved. I thought I was just oddly broken until I got to college and met an art student sculpting a rock at 3AM on my way back from the computer science building (it's a weird thing to hear a rock hammer going 'tink, tink, tink' at 3 in the morning). I watched in amazement for a while until they stopped and we talked. It turned out they had a very similar "mode" to mine: thinking about the work until suddenly they could "see it", at which point they were compelled to chip away the rock and let the rest of the world see it. I felt the same way about programs; all at once it seemed I just "knew" how it should go together and would work through the night banging it out.

It isn't a terrible or horrible thing; it's something which is very satisfying. And it's also true that no one else may appreciate your creation, so you have to be ready to just be happy with creating it :-).

Later in life I met people who were programmers who just "turned it on and off" like a spigot, they go into work, get their assignment, turn on the programming spigot and write code, then turn it off again and go home to their life. Their t-shirt might read "code, get paid, go home."

I read the headline and disagreed, I read the article and realized the author isn't driven to create with code. That is totally understandable, it is a small percentage of the population that is. But the phrase speaks to that small percentage and not to them. Like art, if it doesn't speak to your soul then just ignore it and move on.

rplst8 1 day ago 10 replies      
Generally, this is an easy stance for young, unmarried, and otherwise unattached people to take toward their careers. A warning to those who think this will enhance their opportunities later in life: it won't. Sure, there are a few great employers out there that will reward you with loyalty (or massive sums of money), but that is the exception.

After a life of nothing but work, no matter how much you love it at the time, you will find yourself middle-aged and burnt out -- and possibly regretting your entire career choice. This is coming from a person who was the class computer nerd, who ate, slept, and drank everything computer- and software-tech-related for 25+ years.

I'm not saying tech is bad career choice - it isn't. But once you get burnt out at something, digging ditches out in the sunshine (or rain for that matter) looks much more appealing.

jaywunder 1 day ago 12 replies      
I think the OP misses the point of the shirt. Lots of different subcultures and activities have this type of shirt. Someone could just as well wear a shirt that says "Eat. Sleep. Hike. Repeat." and people wouldn't actually think the person _only_ hikes. The shirt is completely ironic, because to suggest someone _only_ codes all day is ridiculous. The OP misses the irony.
smoyer 22 hours ago 3 replies      
I'm married and have four kids, though two have now left the nest. When I read this, I think about all the times I set aside my code to do something more important - interact with my kids, nurture my wife, and pursue hobbies outside the technology world (I run, bike and sail).

But ... you might be surprised to learn that I love that tee-shirt. I think there are many who are misguided enough to ignore balance in their lives completely (this shirt is not for them). For me, technology is my main hobby. I get paid for it at work (aren't I lucky), sometimes I get paid for it at home (I have a consultancy) but most of the time at home, I'm coding (or even building electronics) for fun. Other times, it's a project I'm playing with that I wouldn't otherwise get to do at work.

In any case, I'm not advocating living an unbalanced life. Nor am I describing a life of crunch-time (my home projects very rarely have deadlines). What I'm saying is that I truly love the time I spend on technology and identify with the shirt as a statement of that affection.

EDIT: It's a rainy Saturday morning here ... I got the lawn cut last night, my daughter is away playing viola at a concert and I'm kind of chilling with my wife and son. While we're siting in the living room watching the rain, I'm updating the Ansible scripts I use to keep my laptops, and workstations up-to-date [1].

I've got two hardware projects planned for next winter - I'm going to get my COSMAC ELF [2] running again (my first computer based on an RCA 1802) and I'm going to turn my old Sun E450 [3] into a TEC-based mini-fridge for my office (if anyone needs Sun E450 parts let me know).

[1] https://github.com/selesy/workstation

[2] http://www.cosmacelf.com/

[3] http://www.tech.proact.co.uk/i/sun_enterprise_450.jpg

shrugger 1 day ago 2 replies      
Yeah, fuck enthusiasm! /s

I can understand that we might want to avoid the obsessive behaviors of some developers that could be considered unhealthy, but blaming a simple catch-phrase isn't going to get us anywhere.

What if the shirt said, "Coding is rad" ?

I say that all the time, and people just give me that look that says, "Ha, what a nerd" and it's fine for me because I can maintain my enthusiasm. Other people need a little bit more of a push to continue to push themselves to be the best developers that they can be.

Should developers be monks that only exist to program? No.
Should developers take pride in spending their time learning and improving? Yes.
Should developers be upset over a t-shirt that has no impact whatsoever on their life? No.

I hate this sort of reactionary outrage, it's more counterproductive than buying the stupid shirt is.

uhtred 1 day ago 1 reply      
I get the feeling those t-shirts were thought up by someone who doesn't "code" for a living (i.e. a higher-up management type). Before I got professional full-time work as a developer, I was obsessed with "coding". Now I've become a little jaded through working with horrible spaghetti-code legacy systems, solving problems I don't care about, while doing all this "agile" stuff. So now I've become obsessed with trying to think of business ideas / products I can create to help me escape it all (and just program for fun, or as a means to run a business or supply a product).
k__ 1 day ago 1 reply      
One day, everyone has to decide what they want from life...

We only have finite time and need to use it carefully.

I was a big nerd in school, and from 11-17 I put most of my time into video games and coding. This helped me greatly in my career, but I ended up without any of the typical experiences people have at that age.

Later I decided I also want different things from life. Finding partners, learning instruments and getting fit was a huge cut for my coding skills.

What I learned was that there are always people better than you, but they paid for it with their lifetime. You don't need to be the best developer around; it's okay if you're average.

But I have to admit, my girlfriend made me an ESCR shirt and I like to wear it, so people "think" I'm 100% dev, haha.

closed 1 day ago 0 replies      
I didn't see anything alarming in this phrase, but from responses in this thread it's clear it strikes a nerve with some people.

And that's really helpful to know. I came into the thread seeing nothing wrong with it, but now have a sense that it might be a bigger issue than I had realized.

Another commenter mentioned that many groups use the ol' "Eat, {VERB}, Sleep, Repeat" slogan on t-shirts to express enthusiasm for something, rather than a top-down expectation. But maybe the issue of separating work / personal time is hot enough in tech that the slogan ends up reminding people (like OP) that there's a risk others will try to convince you to take it seriously as a requirement for your career.

This kind of thing comes up in academia, too--where there's a pervasive I-love-science-so-much-I-work-late-into-the-night-every-day mentality. Eat, research, tell people how late you were up researching, sleep, repeat.

hunvreus 23 hours ago 0 replies      
The first featured post from the same author reads "How I fell in love with a programming language" [1] and dissects his "love" for Kotlin (the same way DHH "loves" Ruby).

Being a dad, I'm pretty sure he's able to appreciate the difference between him saying "I love Kotlin" and "I love my son". Well, that t-shirt is no different.

I've often liked the no-nonsense talk coming out of 37Signals/Basecamp, but this sounds like over-reaching.

[1]: https://m.signalvnoise.com/how-i-fell-in-love-with-a-program...

lsiunsuex 1 day ago 3 replies      
While I don't agree that programming needs to be all-consuming, spend enough time around new programmers and you'll see a difference in passion: someone who does programming because they can earn a paycheck vs. someone who does it because they truly enjoy it.

You don't get that passion by doing it 9-5.

Programming CAN be all consuming if you let it and for a while, I did. I had so much work to do be it at my day job or side projects that it left 0 time to do anything else. Because of that, I gained weight and generally feel like crap most of the time (let's not even talk about actually sleeping through the night)

It's important to take some time and do something else. I'm trying to make it a point to spend at least an hour a day bike riding or roller blading or just even spending time with my wife. You'll go insane otherwise.

"Eat, sleep, code, repeat" may have originated from this: https://www.youtube.com/watch?v=wBoRkg5-Ieg but I don't think that's the first occurrence of a "repeat" phrase...

riebschlager 1 day ago 3 replies      
I think the larger issue here is that few people are going to drop $25 for a t-shirt that says, "EAT. SLEEP. CODE. MAINTAIN A HEALTHY BALANCE OF BOTH PROGRAMMING AND NON-PROGRAMMING RELATED ACTIVITIES."
jayd16 1 day ago 1 reply      
I'm against long hours as much as the next guy but these shirts are just meant to be enthusiastic. Pick your battles guys...
gtrubetskoy 21 hours ago 0 replies      
I share the sentiment of the article, but have another issue. What is this obsession with "coding"? As in producing lines of code? I'd encourage programmers to "code" less and think more. And the best time to think is away from the keyboard, perhaps on a walk, or with your family.

Consider that some of the cleverest things in programming, be it algorithms like sorting, hashing or whatever, ideas and inventions that actually make the computers do what they do can usually be implemented in very few lines of code or explained in a page of text, but may have taken collective lifetimes to come up with.

louprado 23 hours ago 0 replies      
At least it is still "eat, sleep". Soon it will be "soylent, provigil".
jack9 1 day ago 0 replies      
That's a funny sentiment, because here I am on a Saturday morning (like all mornings for the last 30 years) browsing the internet for news with a particular interest in programming.
havetocharge 23 hours ago 0 replies      
The OP is overreacting big time. Offended and horrified by a humorous shirt. First world problems.
omarish 23 hours ago 0 replies      
We should be careful to not conflate intensity (at one's work) with being a one-dimensional human being. Intensity is generally a good thing. Being one-dimensional is not.

Regardless of whether you're just starting out and are the most intense programmer ever, or if you've been at it for 20+ years, there's immense value in understanding and being proficient at many things other than software.

Many big discoveries happen at the intersection of two fields. A lot of our latest breakthroughs in AI are based on our newfound understanding of the human mind. Same with computational genomics. Black Scholes for option pricing. The list goes on.

anoplus 23 hours ago 0 replies      
I am bothered as well by the "Eat, sleep, code, repeat" phrase. We must not make things complicated just to create work. Work should lead to freedom. I understand it is scary when a job is no longer relevant. But this is not a technological problem; it is a political problem.

BTW, I think Universal Basic Income is a great answer to the speed at which work becomes irrelevant. UBI gives exactly the confidence to evaluate work without pressure and bias, which I believe leads to even greater freedom and productivity.

dham 22 hours ago 0 replies      
I'll never forget the years working Christmas and Thanksgiving at the movie theater, or bagging groceries at Harris Teeter pretty much every holiday (that was actually a huge step up because I didn't have to work Christmas Day). Sure, work sucks sometimes, but I keep those days in the back of my head at all times, and I never take what I have for granted. While my roommates in my late teens and early twenties sat around smoking weed, playing video games, and racking up student debt, I worked on paying cash for community college and learned more and more about programming, which I had already started learning at an early age.

I still get up every day happy to go to work. I haven't dreaded work in 9 years. I have a 10-month-old child now, and no, I don't eat-sleep-code, but it's actually given me more motivation to start small SaaS companies on the side. Just side projects to see if they stick. Bring in a little extra cash. If my employer wants a little extra work here and there, that's fine. If they begin to expect it every day, then I'll just move on to the next thing, but I'd sure as hell rather sit at a desk and do something I enjoy than bag groceries or do construction.

omarforgotpwd 21 hours ago 0 replies      
There is a song called "Eat Sleep Rave Repeat" which is about taking drugs and going to raves every day. "Eat sleep code repeat" is, I think, just an attempt at a funny play on that song. Obviously eat-sleep-anything-repeat is not a healthy lifestyle. Pretty funny how this joke was lost on the author and the entire thread.
nzoschke 1 day ago 0 replies      
I know this phrase from the electronic music artist Fatboy Slim's "Eat, Sleep, Rave, Repeat"

(Warning, explicit lyrics)


It's a fun dance song, but the literal total opposite of a healthy lifestyle.

49531 22 hours ago 0 replies      
One hard issue I've been coping with is the fact that coding was my hobby before it was my profession. I think that's a fairly normal thing for developers. I mean, who wouldn't take a job doing what they were already doing for enjoyment?

It's not that I want an unbalanced life, but when your hobby merges into your career it becomes hard to diversify. I'm fortunate that I have a wife and kids to help balance me out; without them I'm sure I'd be coding for most of my free time.

Kiro 1 day ago 0 replies      
What's with the articles lately discouraging hardcore programming? Are we hackers or not? I just want to sit in a cellar with a big Unix beard, drink Jolt cola and do programming 24/7.
a-guest 22 hours ago 0 replies      
The "Eat. Sleep. Code. Repeat." mentality -- if understood in the context of workplace environments where, for all practical purposes, this is the expected modus operandi -- should give pause to any person who thinks the unexamined life is not worth living.

This story helps illustrate the mindset from a particular angle: http://www.hobodrifter.com/the-fisherman/

sebringj 22 hours ago 0 replies      
It's bullshit, but... the exception to this is if you're consulting corp-to-corp hourly or doing your own startup -- that's fine -- but otherwise it is exploitative culture propaganda preying on the naive. (Overly dramatic, it's just a fucking t-shirt.) Also, "hump" should be added to this list prior to "sleep", and possibly inserted after "sleep" and after "code" as well, to cancel out the long durations of solitary activity.
fareesh 23 hours ago 1 reply      
I think the post might be misinterpreting the intent behind the phrase. My first impression was that it is a parody of the famous electronic dance song with similar lyrics.
Tiquor 20 hours ago 0 replies      
Every subculture that requires some level of above-normal commitment has a phrase like this. Baseball, ballet, musicians, dancers. It seems the current Medium-post flavor of the month is some semi-formed shot at a tenet of programmer/tech culture. Most end up so overblown, like this one, that they just sound like middle schoolers in the cafeteria.
andrew_wc_brown 1 day ago 2 replies      
Even if you want to, eventually you won't be able to code around the clock.

I'm now 29, but working for startups putting in 8-14 hours a day takes its toll, and now I can't sit at a desk or my acid reflux will kindly tell me to take a walk.

Or if I hit too much stress, I'm out with chest pains for the rest of the day and the next.

What's worse is that when you tell this to others, they don't understand -- those that haven't experienced it can't.

j1vms 21 hours ago 0 replies      
Just be sure to love the shit out of most of what you do, otherwise what you're doing just isn't sustainable for you. There are only a very, very few exceptions to this.

Edit: also, it's almost never good for you to blame others when you are stuck; it's better to be proactive yourself, fix things, and try to get yourself into a better place.

lohengramm 22 hours ago 0 replies      
Personally, I like to focus on one thing for a long time, and then focus on another.

Unfortunately, it is very hard to do so in a society that tries to level everyone in the same way.

The best possible situation for me would be working nonstop for months, then doing something else for months, and so on.

sidcool 1 day ago 1 reply      
I have been trying to be a 10x engineer for the past 10 years and only been somewhat successful.
fullshark 21 hours ago 0 replies      
This guy must be overworked to take such issue with something so innocuous.
DougN7 1 day ago 1 reply      
Everyone figures out that balance is the key to happiness in life. Some are wise and learn from others while they are young. Some of us have to spend many years and figure it out the hard way ourselves, which brings some regret. Be wise!
jcoffland 23 hours ago 0 replies      
Many really good programmers "eat, sleep, code, repeat," some of the time. Highly intelligent people tend to have intense focus. That's how they got to be so intelligent.
galistoca 20 hours ago 0 replies      
I think this whole "you need a work life balance" propaganda can be as misleading as "eat, sleep, code, repeat", especially for younger programmers who have no idea and rely on more experienced idols (such as the guys at Basecamp).

It really depends on what kind of life you want to have. If your priority is more towards making a huge dent in the world, I think trying to have a "work life balance" is a terrible idea. Here I'm speaking statistically because there are rare cases where people stumbled upon success even without putting all their life into it, but I would say these are exceptions, not the rule.

Most extremely successful people have had very abnormal lives -- far from balanced. (Maybe you hear them talking about having a balanced life here and there, but that's them speaking after they have achieved success. Of course, if you spent all your youth working on something, you would want that time back. But I doubt they would be where they are if they had actually done what they now say.)

Take a look at Basecamp for example. I don't want to pick on them but there's no better way to argue with their philosophy than what's going on with them. Sure they were one of the pioneers in their space, sure their co-founder created Ruby on Rails, and I totally admire what they have achieved, but what have they achieved in the last several years otherwise in terms of their own product innovation? They have killed most of its other products, and their main product--Basecamp--is not exactly the mainstream product that everyone uses. If anything it's Slack that will become what Basecamp could have been. I watched one of their interview videos where they were talking about how they thought about building an awesome new product but decided not to because they didn't want to waste time maintaining a new product. Personally I cannot sympathize with that at all.

Of course, it sounds like this is exactly the type of business these guys want, which would let them live the lifestyle they want, but if you ask me, I would choose a life where I create something that has extremely huge impact in the world--hopefully even after I die--even if it means sacrificing a lot of my "lifestyle". The Basecamp guys decided to live a life where they are mildly successful and enjoy their life, but they probably won't achieve anything world-changing if they keep doing what they do. That's fine and I'm sure they don't care, and I'm not saying everyone should live a crazy life, but I'm just saying it's as "such bullshit" to say everyone should live an unambitious life as saying everyone should give up their life to be successful.

kyriakos 23 hours ago 0 replies      
I used to be like that, then I realised there are better things in life. So my conclusion is that you need a balance. Too much work is bad, but too much family time is just as bad.
746F7475 22 hours ago 0 replies      
Man, people really do get butthurt over all kind of stupid shit.
konne88 21 hours ago 0 replies      
Such bullshit. With Soylent you can code while you eat :)
welanes 21 hours ago 0 replies      
They were all sold out of Eat, sleep, code, travel, meet friends, shop, shower, jog, date, travel, watch GoT, procrastinate on HN, repeat t-shirts.
fideloper 23 hours ago 1 reply      
I've had a similar opinion about the phrase "never stop learning", which is a common sentiment but can also be a menacing threat depending on your situation.
albertojacini 23 hours ago 1 reply      
> horrifying

is definitely too much. And the interpretation is too strict as well. Why be outraged by something like this?

imsofuture 23 hours ago 0 replies      
Hyperbolic slogan contains hyperbole, news at 11!
Software Design Patterns Are Not Goals, They Are Tools exceptionnotfound.net
303 points by kiyanwang  3 days ago   163 comments top 30
userbinator 3 days ago 12 replies      
It probably seems like an obvious statement to a lot of HN, but I have a feeling that it isn't to the majority of developers, who for some reason appear to love immense complexity and solving simple problems with complex solutions.

I think a lot of them started with OO, which immediately raises their perception of what is "normal" complexity --- at that point, they're already creating more abstraction than is really necessary. Then they learn about design patterns and all the accompanying "hype" around them, so they think "awesome, something new and shiny to use in my code!" and start putting them in whenever they can, I guess because it feels productive to be creating lots of classes and methods and hooking everything together. It's easier to dogmatically apply design patterns and generate code mindlessly than to think about what the problem actually needs to solve it.

The result is code that they think fulfills all the buzzwordy traits of "good software engineering practice" (robustness, maintainability, extensibility, scalability, understandability, etc.), but in reality is an overengineered brittle monstrosity that is only extensible in the specific ways thought of when it was first designed. That almost never turns out to be the case, so even more abstractions are added (including design patterns) on the next change, on the belief that it will help with the change after that, while leaving the existing useless ones in, and the system grows in complexity massively.

I did not start with OO, I never read the GoF book, and don't really get the obsession with design patterns and everything surrounding them. I've surprised a lot of others who likely have, by showing them how simple the solutions to some problems can be. Perhaps it's the education of programmers that is to blame for this.

The statement could be generalised to "software is not a goal, it is a tool".

Related article: https://blog.codinghorror.com/head-first-design-patterns/

jrochkind1 2 days ago 1 reply      

Design patterns are super useful as tools.

As "goals" they are idiotic. I think lots of people that think they are idiotic have been exposed to them as "goals", or don't realize that's not the point.

I think there is a larger issue here, which is that many kinds of software development, including web dev, has become enormously more complex in many ways than it was when many of us came up.

People coming up now are looking for magic bullets and shortcuts and things they can just follow by rote -- because they are overwhelmed and don't know how to get to competence, let alone expertise, without these things.

It's easy for us to look down on people doing this as just not very good developers -- and the idea of 'software bootcamp' doesn't help, I think it's probably not _possible_ to get to competence through such a process -- but too easy to forget that if we were starting from scratch now we ourselves would likely find it a lot more challenging than we did when we started. There's way more stuff to deal with now.

"Design patterns" are never going to serve as such a magic bullet or thing you can follow by rote, and will often make things worse when used that way -- but so will any other potential magic bullet or thing you can follow by rote. Software doesn't work that way. It's still a craft.

dantheman 3 days ago 2 replies      
Patterns are from software archaeology, they were naming things that were commonly seen and what they were for -- they were helping build a vocabulary to talk about larger constructs.

They are useful if you have a problem and one fits it perfectly, it can help you start thinking about it -- but it might not be a good fit.

In general we should be keeping software as simple as possible, with the understanding that it can be changed and adapted as needed. Often large "pattern" based projects devolve into a morass of unneeded complexity to support a level of flexibility that was never required.

prof_hobart 3 days ago 1 reply      
> In other words, if you ever find yourself thinking, "I know, I'll use a design pattern" before writing any code, you're doing it wrong.

Unless I'm misunderstanding him, I would disagree with this. When you're doing it wrong is when you use a design pattern without understanding what problem its solving, and whether you have that specific problem.

To use his tool analogy - if you're a joiner who turns up to a job thinking "we always need to use a hammer" and start randomly hitting everything, then you've gone wrong. But equally, if you're halfway through knocking a nail in with your shoe and think "Oh look, I'm using the hammer pattern now", you're doing it just as wrong.

If you're looking at two things you need to attach together and you've considered whether glue, a screw, a nail or something else is the most appropriate for this specific job, decide it's the nail and then think - "I need to use my hammer now", then you're doing it right.

gwbas1c 3 days ago 0 replies      
Design patterns aren't the problem. All a design pattern is, is a well-known way of doing something.

When you build a house, do you re-invent how to frame, plumb, wire, and roof it? No. That's all a design pattern is. Choosing the right design pattern is akin to making sure that your basement is made out of cement and your walls framed with wood. (You don't want to put shingles on your countertops!)

The problem is that some developers think they are some kind of magical panacea without really understanding why the pattern was established and what it tries to achieve. These are the over-complicated projects that everyone is complaining about in this thread. (These are the projects where the basement is made with wood or the concrete walls too thick; or the projects where someone decided to put shingles on the countertop.)

I try to pick, establish, and follow design patterns in my code. It helps ensure that I don't spend a lot of time re-learning why some other technique is flawed; and it helps achieve a consistent style that everyone else on the team can work with.

rootlocus 3 days ago 0 replies      
I found both his definition of the adapter pattern and his example to be a bit off. In his example, the adapter extends the external interface instead of the client interface. By definition the adapter must implement the client interface. It's even in the UML diagram displayed on the website he quotes (http://www.dofactory.com/net/adapter-design-pattern)

 > The fact was that I just didn't understand them the way I thought I did.
 > To be clear, I've never read the Gang of Four book these patterns are defined in.

After admitting he has a less-than-desirable understanding of design patterns (proven by his poor example), he makes bold claims like:

 > if you ever find yourself thinking, "I know, I'll use a design pattern" before writing any code, you're doing it wrong.

I'm having problems taking this article seriously.
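The distinction rootlocus draws -- that the adapter implements the client's interface and wraps the incompatible class, rather than extending it -- can be sketched as follows. All names here (`ITemperatureSensor`, `VendorThermometer`, `ThermometerAdapter`) are hypothetical, invented for illustration, not taken from the article:

```cpp
#include <string>

// Hypothetical interface the client code is written against.
struct ITemperatureSensor {
    virtual ~ITemperatureSensor() = default;
    virtual double celsius() const = 0;
};

// Hypothetical third-party class with an incompatible interface.
struct VendorThermometer {
    double readFahrenheit() const { return 98.6; }
};

// The adapter implements the CLIENT interface and wraps the vendor
// class -- it does not extend the vendor class itself.
class ThermometerAdapter : public ITemperatureSensor {
public:
    explicit ThermometerAdapter(VendorThermometer t) : inner_(t) {}
    double celsius() const override {
        // Translate the adaptee's interface into the client's terms.
        return (inner_.readFahrenheit() - 32.0) * 5.0 / 9.0;
    }
private:
    VendorThermometer inner_;
};
```

Client code can then hold an `ITemperatureSensor&` and never know the vendor class exists, which is the point of the pattern.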

MoD411 3 days ago 2 replies      
"Software Design Patterns Are Not Goals, They Are Tools" - I do not understand why this needs to be said in the first place.
golergka 3 days ago 2 replies      
I have been interviewing a lot of developers recently, and one of the best questions I've found is to ask them _why_ they used the MVC pattern in the test assignment (most do). Most developers misunderstand the question at first and either start to explain how MVC works or explain how they would have implemented it without MVC (when you ask people why they did something, they often take it as "you shouldn't have done it"). But even when I clarify the question, a surprising number just can't begin to answer it -- instead they stumble, and at best just say that that's how they have always been taught to do it.
emodendroket 3 days ago 3 replies      
As far as I can tell design patterns are mostly about taking something simple and obvious and using terms to describe it that make it obscure and difficult to understand.
Arzh 2 days ago 0 replies      
This article makes way more sense when he says he never read the Design Patterns book. If he had, he would know that before he started. They explain that the book is a collection of patterns that they have compiled from a bunch of people and from years of experience. The patterns did come about organically, and they were never meant to be the way to design software. They were only trying to come up with a common lexicon for something that they were all already doing.
madeofpalk 3 days ago 0 replies      
I'm reminded of a set of tweets from Harry Roberts about whatever new hot CSS naming convention was popular for the week:

> Modularity, DRY, SRP, etc. is never a goal, it's a trait. Don't let the pursuit of theory get in the way of actual productivity.

> That's not to say Modularity, DRY, SRP, etc. aren't great ideas -- they are! -- but understand that they're approaches and not achievements.

There's nothing super revolutionary about these thoughts, but they've stuck in the back of my mind for a while now.


rhapsodic 3 days ago 1 reply      
A design pattern is a reusable solution to a recurring problem. Too many inexperienced devs forget that part, and use a pattern where the problem it's designed to solve doesn't exist. Had the author read the GoF book (he admits he still hasn't) he might have avoided that pitfall.
awinter-py 3 days ago 1 reply      
design patterns are guru thinking. they're bad ways to describe self-descriptive tricks like callbacks. don't let a person who talks this way write docs ever; they'll focus on 'what's being used' rather than what's happening.

design patterns are like when a consultant creates a stupid name for something that already exists -- the name isn't about expressive power, it's about declaring ownership so the consultant can sell the 'Consulting Method' to solve your problem.

when a phenomenon or trick has an easily understood one-word name, don't let a nonexpert rename it to something nobody understands.

apo 3 days ago 0 replies      
> Here's the problem I have with design patterns like these [Adapter Pattern]: they seem to be something that should occur organically rather than intentionally. We shouldn't directly target having any of these patterns in our code, but we should know what they are so that if we accidentally create one, we can better describe it to others.

It's not clear what the author would have done differently in this example. It's one thing to raise concerns about pattern-first thinking in general, but quite another to spell out what exactly is wrong with reaching for the Adapter Pattern to solve a very specific problem under a given set of constraints. I can imagine a number of situations in which going straight for an Adapter is the only sane choice.

I've come to view with great suspicion any general discussion of programming divorced from its context. Architecture Astronauts and Cowboy Coders can each do a lot of damage if left to their own devices.

badloginagain 3 days ago 1 reply      
Design patterns, OOP, to a large degree programming languages are just tools. You don't hear of craftsmen saying things like "The only thing you really need is a hammer. It's been around longer than the other tools and you can use it on every project". Replace "hammer" with C or Java and you have a legitimate comment on a score of threads.

> What patterns don't help with is the initial design of a system. In this phase, the only thing you should be worried about is how to faithfully and correctly implement the business rules and procedures.

I submit that should be your overriding concern at all times, not just in the design phase. If you have to refactor some code in order to extend it, tie it back to the changed requirement. This forces you to make the least amount of changes, refactoring the least amount of code, breaking the least amount of unit tests, and introducing the least amount of bugs into production.

EliRivers 3 days ago 0 replies      
While we're here, SOLID is a nice acronym that is helpful as a checklist of generally good ideas to consider. It's not a law of physics, it's not compulsory, following it blindly can lead to worse outcomes and if transgressing it leads to a better outcome (with all things considered) then it should be transgressed.
arxpoetica 3 days ago 1 reply      
Just now realizing there is ambiguity around the term "design patterns". Say it in a different crowd, and they'll think you are talking about the kind of design patterns Brad Frost is writing about: http://atomicdesign.bradfrost.com/
EGreg 3 days ago 0 replies      
Goals should include:

 1) Solve the problem
 2) Make it maintainable
 3) Make it extensible
 4) Make it scalable (server)
 5) Optimize it for memory, speed

So the reason to use an existing paradigm and a well-tested framework is because it makes the above easier, especially #2. And over time, #2 winds up saving you a lot of resources and probably saves your project from tanking.

Finally, using an existing well-known platform also lets you hire developers who know what they're doing from the beginning, leading to more productivity and less dependence on any one particular developer. We leverage the knowledge that's already out there.

mirekrusin 3 days ago 0 replies      
His problem may be learning about those concepts from snake-oil sellers - he mentions he didn't bother to read GoF and gets his knowledge from things like http://www.dofactory.com/products/net-design-pattern-framewo... .

My advice is to learn from people like Martin Fowler or Kent Beck, and if you want to look at companies, look at something like ThoughtWorks.

V-2 2 days ago 0 replies      
As pointed out (arguably a bit harshly) in comments under the original article, this is really a strawman argument. That's because that ol' classical GoF book on design patterns -- which the author admits he has not even read -- addresses this concern already. It's still a valid argument, but not exactly a fresh one. And speaking on the subject without even bothering to read the piece widely considered canonical is a bit arrogant.
RangerScience 2 days ago 0 replies      

The point of design patterns is a way to describe what you've made succinctly.


When you set out to do something that you don't yet know how to do, having a crank you can turn to get out functioning code is a good thing.

I think what you mean is "Design Patterns are Tools, not Dogma".

Plus, a lot of design patterns only make sense in typed and/or OOP languages, so under those circumstances, they can't be applied as goals.

exception_e 2 days ago 0 replies      
Kind of relevant to the discussions in this thread: https://en.wikipedia.org/wiki/Rule_of_three_(computer_progra...

When I do hit the magic 3 and can justify restructing code, I consider my options in terms of design patterns (which are very much tools!)

matchagaucho 2 days ago 0 replies      
Stated in other terms, patterns are a means to an end. Not the end goal.

Patterns will organically emerge as the result of ongoing refactoring.

bradenb 3 days ago 3 replies      
> In other words, if you ever find yourself thinking, "I know, I'll use a design pattern" before writing any code, you're doing it wrong.

I completely disagree... if I'm working with a team. I've spent far too many hours trying to fix fragile code that comes about as a result of different devs with different methodologies trying to tie their code together.

id122015 2 days ago 0 replies      
I can say the same thing about programming.

That's why when I read HN I'm trying to understand what you are trying to achieve. Something that goes beyond staying in front of the computer 10 hours a day.

johanneskanybal 3 days ago 0 replies      
"I didn't read the article or the comments but I think you're all wrong, maybe it's bad upbringing or maybe something else but whatever". ok thanks for sharing.
smoreilly 2 days ago 0 replies      
How can someone doing research on these patterns not have read the most basic/important piece of literature on the subject?
projektfu 3 days ago 0 replies      
When I was in college, I assumed (like most) that patterns were received wisdom in how to construct software. Then I actually attended a talk with John Vlissides and realized that patterns were an entirely different thing, closer to the "archaeological" sense dantheman mentioned. In this way, the study of design patterns corresponds better to the study of rhetoric or poetics in human language. "Homeric Simile" could be a design pattern in poetry.

In software, some rigidity of expression might be preferred, and so the design patterns also help us avoid creating new terminology for things that have been appropriately described.

There are places where each pattern might have utility, and I suppose if there is any sense to the term "software architecture" it is in the ability to make sense of what the system should look like in a way that can be explained to the relevant parts of the team.

There is a tendency, as well, among software developers to think that a complicated architecture must be the result of countless stupid decisions, probably made by junior technicians, who were doing things without understanding what's going on. Thus you find people exhorting others for simplicity, and acting like they've done their job at that point. But instead, complicated architecture is the result of compromises and rewrites throughout the software's life, and attempts to discard those old architectures and start afresh with similar tools usually result in an initially simplistic, but ultimately inflexible, design that will eventually evolve into a different complex architecture.

The Linux kernel is an example of a complicated architecture that was designed from a standpoint of simplicity initially, and developed its own object-oriented layer on top of C, with pluggable elements all over, loadable modules, etc., and millions of lines of code. BSD is smaller and more coherent, but also much more limited in scope.

There are also examples like Windows NT, which suffered from being the second system to 3 systems: Windows, OS/2 and VMS. In this kernel, there are so many design features that were included before implementation, that it seems incredible it was ever built. But they persisted and made it happen, and even eventually made it fast, in some cases by working around its design with new compromises and approaches. Still, it lacks the simplicity of a Plan9 or an Oberon, but what it doesn't lack is users.

Anyhow, I digress. What is important to me about patterns is the language that we get from them, and the ability to recognize what's going on in code. They can provide useful hints about implementation gotchas, and they can also help people stop reinventing the wheel.

bjr- 2 days ago 0 replies      
Read the book. Then read the books that inspired the book.
olleicua 2 days ago 0 replies      
Why I don't spend time with Modern C++ anymore linkedin.com
275 points by nkurz  4 days ago   255 comments top 39
jupp0r 4 days ago 6 replies      
In my experience, the opposite of what the author claims is true: modern C++ leads to code that's easier to understand, performs better and is easier to maintain.

As an example, replacing boost::bind with lambdas allowed the compiler to inline functor calls and avoided virtual function calls in a large code base I've been working with, improving performance.

Move semantics also boosted performance. Designing APIs with lambdas in mind allowed us to get rid of tons of callback interfaces, reducing boilerplate and code duplication.

I also found compilation times to be unaffected by using modern C++ features. The main problem is the preprocessor including hundreds of thousands of lines for a single compilation unit. This has been a problem in C and C++ forever and will only be resolved with C++ modules in C++2x (hopefully).

I encourage the author to try pasting some of his code into https://gcc.godbolt.org/ and to look at the generated assembly. Following the C++ core guidelines (http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines) is also a good way to avoid shooting yourself in the foot (which is surprisingly easy with C++, unfortunately).
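jupp0r's point about lambdas replacing callback interfaces can be sketched roughly as follows. The names (`Callback`, `sumOldStyle`, `sumModern`) are hypothetical, not from the parent comment; the general mechanism -- a template parameter lets the compiler see the callable's body and inline it, where a virtual interface forces indirect dispatch -- is what the comment describes:

```cpp
#include <vector>

// Old style: a virtual callback interface forces an indirect call
// for every element, which the compiler generally cannot inline.
struct Callback {
    virtual ~Callback() = default;
    virtual int apply(int x) const = 0;
};

int sumOldStyle(const std::vector<int>& v, const Callback& cb) {
    int total = 0;
    for (int x : v) total += cb.apply(x);  // virtual dispatch each iteration
    return total;
}

// Modern style: a template taking any callable lets the compiler see
// the lambda's body at the call site and typically inline it.
template <typename F>
int sumModern(const std::vector<int>& v, F f) {
    int total = 0;
    for (int x : v) total += f(x);  // no indirection through a vtable
    return total;
}
```

Usage would look like `sumModern(prices, [](int x) { return x * x; })`; pasting both versions into godbolt, as suggested above, makes the difference in generated code visible.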

justsaysmthng 4 days ago 5 replies      
HFT is a pretty limited and extreme application case.

From what I understand - everything is not enough for HFT - network cards, kernel drivers, cables, etc.

You have milliseconds (edit: nanoseconds!) to receive, process, and push your orders before someone else does and gets the prize.

It's an arms race between technologists for the purpose of making a small number of people rich.

I doubt that these requirements apply to other application fields where C++ is used - and it's used almost everywhere, with great success I might add.

In my view C++ is actually a couple of languages mixed into one.

The hard part is knowing which part of the language to use for which part of the problem.

The "modern" C++ solves a lot of the nuisances of the "old" C++, but you can do without these features just fine. I apply them carefully to my code and so far it's been a pleasant experience. Even if I don't use all of the new features, it's nice to know that I can (and I will some day!).

So I don't really buy this rant..

pjc50 4 days ago 4 replies      
There are two separate rants here that aren't delineated well.

1) C++ is too complicated, and therefore hard to reason about and slow to compile.

We're going to argue about this forever, but you'll have to agree that the spec is very large and warty compared to other languages, and that C++ tends to take far longer to compile (this was already a problem a decade ago, it's not specific to "modern" C++).

2) The future of software development will include more of what I'm going to call ""non-isotropic"" software; rather than assuming a flat memory model and a single in-order execution unit, and exerting great effort to pretend that that's still the case, programmers will have to develop effectively on GPUs and reconfigurable hardware. Presumably this speculation is based on the Intel-Altera acquisition.

You can sort of do hardware programming in C (SystemC) but C++ is really not a good fit for hardware. Personally I'd like to see a cambrian explosion of HDLs, but the time is not yet right for that.

It sounds like the author favours the "C with classes" programming style, maybe including smart pointers, and is probably not keen on lambdaization of everything.

hellofunk 4 days ago 5 replies      
This article is not very general. Much of what it tries to convince us is not going to matter for most developers, and has the cost of suggesting modern features are not good for any developers. For example:

>It is not rare to see Modern C++ applications taking 10 minutes to compile. With traditional C++, this number is counted in low seconds for a simple change.

This is simply a bogus statement with respect to what at least 90% of c++ developers do on a daily basis.

I have benchmarked unique_ptr, auto, brace initialization, lambdas, range-based-for and other modern idioms and found them all to be at least as fast, and often faster, than their older counterparts. Now, if I were to instead go off and write template-heavy code using new features, that would be different. But in reality, the vast majority of c++ developers -- I'd wager at least 95% -- are not writing variadic templates on a daily basis (nor should they be).

The memory safety and other benefits of unique_ptr [0] make it one of many modern tools that are a no-brainer to use in nearly all contexts. No, not nearly all contexts; allow me to rephrase: all contexts. It just is, and if you compare its use to manual new/delete code, the benefits are solid and the code is faster.

The author further claims that modern C++ is less maintainable and more complex. The absolute opposite is true in nearly all cases. Using unique_ptr again as an example, it leads to less code, less complex code, more clear code, and better maintainability and code readability. Uniform brace initialization is another example that prevents many common older problems in the language.
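A minimal sketch of the contrast the commenter describes (the Widget type here is hypothetical, purely for illustration):

```cpp
#include <cassert>
#include <memory>
#include <string>

struct Widget {
    std::string name;
    explicit Widget(std::string n) : name(std::move(n)) {}
};

// Old style: every return path (and every exception) must remember to delete.
std::string old_style() {
    Widget* w = new Widget("legacy");
    std::string result = w->name;
    delete w;               // easy to forget, or to skip on an error path
    return result;
}

// Modern style: the destructor runs on every path, so no leak is possible.
std::string modern_style() {
    auto w = std::make_unique<Widget>("modern");
    return w->name;         // unique_ptr frees the Widget automatically
}
```

Both functions behave the same here, but only the unique_ptr version stays leak-free if an exception is thrown between allocation and cleanup.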

FYI the author keeps talking about high frequency trading as an example of why modern C++ is a bad choice. Well, I worked at an HFT firm for a long time until last year; the firm places millions of trades per day and is among the most successful in the markets it trades. And what did we use? Only modern features. Lambdas, auto, unique_ptr, range-fors, even std::async -- everywhere in our code. This author is either naive or political.

I think the title of this article is highly misleading, and the contents are not relevant. Overall, this article is just bad advice for most of us.

[0] https://news.ycombinator.com/item?id=11699954

n00b101 4 days ago 1 reply      
As has always been the case, effective use of modern C++ requires knowing which subset of the language to use and which to avoid.

I agree with the author's criticisms of many C++ features. At the same time, I think that a proper simple, modern subset of C++ exists that is much more productive and safer than C, without sacrificing performance. You can also optimize progressively, for example start with using std::string and std::vector and then replace the stock implementations if they aren't performant on your target architecture. I would not, however, recommend using C++ for GPU kernel code - a mix of C++ for CPU code and C for GPU kernel code works best. It is not ideal, but it's the best toolset available for serious industrial development.

FPGAs are exciting, but they've also been the "next big thing" in general-purpose computing forever. Obviously it makes sense to use FPGAs for certain HFT and embedded applications, but that's not the same as general-purpose computing, which is what C/C++ is for. Not to mention, FPGA compile times can run to hours or even days, which makes most C++ template overhead pale in comparison. I would also say that for IoT, I'm not sure why it is obvious that "$10 FPGAs" should dominate. Why not a $0.50 microcontroller? Or the $5 Raspberry Pi Zero board? Both are eminently programmable in C and even C++. Embedded devices were around long before "IoT" became a buzzword, and we can see that microcontrollers, FPGAs, SoCs, and custom ASICs all have a role to play depending on the application.

typon 4 days ago 1 reply      
If he is complaining about C++ being bad and suggesting Verilog on FPGAs as an alternative, boy do I have some bad news for him.

HDLs (yes including Systemverilog) have 10x worse design than the worst software languages. This is why there are entire companies out there that make high level synthesis tools or high level HDL specification languages (like Bluespec).

And I haven't even said anything about the quality of FPGA tool chains.

kangar00 4 days ago 3 replies      
> If you cannot figure out in one minute what a C++ file is doing, assume the code is incorrect.

This statement at first resonated with me, and then I thought about it: this doesn't reduce the complexity of the overall application or service, it just means that one file is simple. You could have 10,000 short files instead of 1 much longer one; is that any simpler?

jonathankoren 4 days ago 8 replies      
I know why I don't like C++ anymore: it's just no fun. It's slow to compile, the errors are like 6 lines long, full of template and class hierarchy noise that makes it hard to understand what exactly happened, and then of course there's the common coding shortcut of declaring everything auto. (What type is this list? I don't know, it's auto all the way down.) Then there's the whole thing about making constructors but leaving the bodies empty because everything should be in initializer lists now, and now there's wrapped pointers for some reason.

I hated writing modern C++. It was just so depressing and frustrating.

messel 4 days ago 1 reply      
Ok. Try a different language :)?

A single language needed to solve all problems is a fallacy.

I don't see FPGA programming ousting C++, but I expect higher-level languages with strong parallel semantics to gain "market share". You can always call a dedicated process written in optimized C for the hottest components. Compose the rest in Go, Elixir, or any high-level language (Lisp).

Architectures will naturally gravitate to higher level languages that support cleaner composition. The tools and interfaces will push towards higher abstraction without impacting build or run time. Maybe this process is related to Kevin Kelly's inevitable. I'm an optimist here.

halayli 3 days ago 0 replies      
This article is coming from a frustrated developer and lacks any scientific evidence. The frustration (understandably) comes from the overwhelming complexity of the new features and patterns, which barely any compiler can understand.

C++11 onward revamped the language to make up for the lack of progress in the previous 10 years. The majority of C++ developers aren't keeping up with the new features because they are busy with their daily jobs, and they feel that they are falling behind and that the language they thought they knew has changed underneath them.

C++03 already had a steep learning curve, but with C++11 onward that learning curve is orders of magnitude steeper.

On the upside, you can use C++11 without understanding most of the details and it will do the right thing most of the time. And I think that's the bet the language is making.

aninteger 4 days ago 0 replies      
I've come to the conclusion that one should "use C++ when you absolutely have to, and C when you can." There just aren't many areas where C++ is absolutely required when plain old simple C can be used. (Not to mention using higher-level languages if possible.)
DrBazza 4 days ago 1 reply      
There are only two kinds of languages: the ones people complain about and the ones nobody uses. - Stroustrup.

C++30 might end up being what D is today.

shanwang 4 days ago 0 replies      
Such rants appear once every few months on HN, and this one is among the least convincing. Many problems he mentions are not "Modern C++" problems but problems C++ has had from the beginning, and some of them already have reasonable solutions, for example ccache + distcc for speeding up compilation.

The real problem with C++ is the standard committee, the design by committee approach for such a complex language is failing. If C++ is taken over by a company, it will be a much better language.

fsloth 4 days ago 2 replies      
This sounds like it's written from the point of view of implementing something inhouse. I fail to see how FPGA programming will be relevant if one wants to distribute software for consumers (or am I technologically clueless...).
cpwright 4 days ago 0 replies      
I find the beginning and end of the article quite contradictory. Basically that C++ is too complicated; and oh by the way we should start programming FPGAs, which are much harder to get right.

I like modern C++, because I think it simplifies a lot of things (RAII for the win here). Templates let you engage in duck typing, but with (if you are careful) very performant results.
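A minimal sketch of the compile-time duck typing mentioned above (the shape types are hypothetical examples, not from the comment):

```cpp
#include <cassert>

// Neither type shares a base class; both simply expose an area() member,
// which is all the template below requires.
struct Square { double side;   double area() const { return side * side; } };
struct Circle { double radius; double area() const { return 3.14159 * radius * radius; } };

// Duck typing at compile time: works for any T with area(), resolved
// statically, so there is no virtual-call overhead at runtime.
template <typename T>
double doubled_area(const T& shape) {
    return 2.0 * shape.area();
}
```

If T lacks area(), the failure is a compile error rather than a runtime one, which is the performant half of the trade-off the commenter describes.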

Const-me 2 days ago 0 replies      
I never programmed HFT software, but I agree with the criticism of the modern C++.

It's bad that the author hasn't defined what exactly "modern" is. I saw some comments compare Boost with C++14. I think Boost is also modern. Even Alexandrescu's Loki is also modern, even though the book was published in 2001.

I think that modern stuff was introduced in C++ because in the late '90s and early 2000s there was an expectation that C++ would remain dominant for some time. There was a desire to bring higher-level features to the language, to make it easier to learn and safer to use, even at the cost of performance.

People didn't expect C++ would lose its market that fast: very few people now use C++ for web apps or rich GUIs. However, due to inertia and backward compatibility, the features remain in the language.

Personally, I'm happy with C++.

C++ is excellent for systems programming, and also for anything CPU bound. For those you barely need the modern features, and fortunately, they're completely optional: if you don't like them, don't use them.

But if you do need a higher-level language for the less performance-critical parts of the project, I find it better to use another higher-level language and integrate it with the C++ library. Depending on the platform, such a higher-level language could be C#, Lua, Python, or anything else that works for you.

Philipp__ 4 days ago 0 replies      
While some pretty good points were made in this post, I cannot help but feel the OP is a bit biased. Too narrow, so to say.

I feel totally the opposite about the new Modern C++. I guess the thing is that how, where, and when you use it will define your opinion/experience.

aspiringuser 3 days ago 1 reply      
20-year C++ programmer here. I work on multithreaded server code. I stopped using modern C++ features 5 years ago. I'd compare my use of C++ to be roughly equivalent to the use of C++ in the NodeJS project or the V8 project. I'm not a user of Boost.

I have to agree with the author of the article. It takes longer to train developers to write idiosyncratic modern C++ code, and compilation times explode. Compiler support for bleeding-edge C++ features is spotty at best. It's also harder to reason about the correctness of modern C++ code.

dahart 4 days ago 2 replies      
> Today the "Modern Technologist" has to rely on a new set of languages: Verilog, VHDL

That was a complete surprise ending! :)

I like surprise endings, and he makes a lot of good points, whether or not I agree with them. But, I totally wasn't expecting "I'm done with C++ because: hardware." I was expecting because web or because awesome new high performance functional scripting language <X>.

A lot of what he's talking about there will still run compiled software though... FPGA programming and C++ aren't exactly mutually exclusive, right?

stormbrew 3 days ago 0 replies      
One of the biggest users (some would say abusers) of template metaprogramming I know works on HFT software. He trades extremely long compile times for performance at runtime and finds that C++ allows him to do this and maintain a decent architecture (through what amounts to compile-time polymorphism as well as RAII).

For him, it's actually the older features of C++ that have no use. He doesn't use deep class inheritance and never touches virtual functions, for example.
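One common shape of that style of compile-time polymorphism is the CRTP (curiously recurring template pattern); a minimal sketch with invented handler names, not taken from the comment:

```cpp
#include <cassert>

// CRTP sketch: the "interface" is resolved at compile time, so calls
// can be inlined and no vtable or virtual dispatch is involved.
template <typename Derived>
struct OrderHandler {
    int handle(int qty) {
        // static dispatch to the derived class, no virtual call
        return static_cast<Derived*>(this)->on_order(qty);
    }
};

struct CountingHandler : OrderHandler<CountingHandler> {
    int handled = 0;
    int on_order(int qty) { handled += qty; return handled; }
};
```

The trade-off matches the comment: more template machinery and longer compiles in exchange for zero runtime dispatch cost.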

thinkpad20 3 days ago 0 replies      
> After 1970 with the introduction of Martin-Löf's Intuitionistic Type Theory, an inbreed of abstract math and computer science, a period of intense research on new type languages as Agda and Epigram started. This ended up forming the basic support layer for functional programming paradigms. All these theories are taught at college level and hailed as "the next new thing", with vast resources dedicated to them.

This seems pretty dubious. Dependently typed languages and other projects embracing advanced type theory are still the realm of niche enthusiasts. While some of the more academic colleges might teach them in one or two courses, the vast majority of education a CS college student receives will be taught in traditional imperative languages. If "vast resources" have been devoted to Agda and Epigram, then I'm not sure what kind of language should be used to describe the resources devoted to C, C++, Java, etc. Also as the author mentions, Intuitionistic Type Theory has been around since the 70's, in fact the same year that C was introduced. Certainly it hasn't been taking over the CS world by storm since its inception, as he seems to claim.

Beyond that, the author's argument seems to be a bit incoherent. He critiques the readability of Modern C++, but C++ is notoriously hard to understand, including or especially prior to the development of C++11. It's never going to be an easy language to read except to seasoned developers. If anything, modern C++11 seems to provide abstractions that increase readability and safety. He critiques the performance of modern C++, but then he ends up recommending that people ditch C++ entirely and learn VHDL/verilog instead. Not even vanilla C++ is fast enough for him, then why criticize modern C++ on the grounds of performance?

cm3 4 days ago 1 reply      
I recently had to switch a project to -std=c++11 because a header I include now uses C++11 features. This change alone made compilation at least twice if not three times as slow. The new safety and convenience features are nice, but compile times seem to be out of focus and getting slower every year. I don't know how I feel about g++ 6.1 defaulting to -std=gnu++14.
syngrog66 2 days ago 0 replies      
I was once a C++ programmer but migrated first to Java, when I thought it was better designed and more convenient, and then to Python when I wanted less verbosity while having greater freedom to choose between a procedural style or OO.

C++ may still be an ideal choice in some problem spaces, but I think the number and size of them have shrunk as more and better alternative choices have appeared and eaten away at the C++ share.

ausjke 4 days ago 0 replies      
Just started to relearn C++ and Qt for cross-platform GUI programs. C++ is not easy, but its performance is still unbeatable, and in certain use cases, e.g. games, video-related performance-critical apps, or GPU/OpenCL work, C++ still seems to be the sole candidate.
jcbeard 4 days ago 0 replies      
I have a few problems with this article:

> structure leads to complex code that eventually brings down the most desired characteristic of a source code: easiness of understanding.

If done well, the structure of things like variadic templates make libraries easier to use, and make coding faster (granted, code bloat can be an issue with N different function signatures).

>C++ today is like Fortran: it reached its limits

Not quite. Fortran died because, well, object-oriented programming came out and lots of people liked it. And C was always more popular regardless, so C-like C++ was the obvious next choice. There is a lot of cruft in any new library, so some things aren't as performant as if you wrote them in, say, assembly, which is what the author seems to suggest. Yes, if I built a bare-metal iostream-like functionality it would be more performant (ha, used the word :) ). People know iostream isn't that performant. Could it be better? Perhaps. Is it safe? Yes! If you want perf, use the C interface directly. Is that safe to use? Probably not, for the general careless user.

>To handle the type of speed that is being delivered in droves by the technology companies, C++ cannot be used anymore because it is inherently serial, even in massively multithreaded systems like GPUs.

Well, yes, but so is just about every language. People are trained to write sequentially (left to right, top to bottom), with many exceptions... but nonetheless, sequentially. There are very few languages that do multithreading natively. There are lots of additions/libraries for C++ that enable very nice ways to express parallelism, both within the standard (std::thread) and outside it (raftlib.io, hpx (https://github.com/STEllAR-GROUP/hpx), kokkos (https://github.com/kokkos), etc.). There are lots, and some are quite easy to use. C++ is inherently serial, but there is no better way to write. It is fairly easy to pull out "parallel" pieces of code to execute. It is even easier if the programmer gets quick feedback (like the icc loop profiler, etc.) on things like ambiguous references and loop bounds that can be fixed quickly.
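As a minimal sketch of the in-standard std::thread option mentioned above, a toy split-sum (my own example, not from the comment):

```cpp
#include <cassert>
#include <numeric>
#include <thread>
#include <vector>

// Sum a vector on two threads using plain C++11 std::thread:
// one half on a worker thread, the other half on the calling thread.
long parallel_sum(const std::vector<int>& xs) {
    auto mid = xs.begin() + xs.size() / 2;
    long lo = 0;
    std::thread worker([&] { lo = std::accumulate(xs.begin(), mid, 0L); });
    long hi = std::accumulate(mid, xs.end(), 0L);
    worker.join();          // wait before reading lo
    return lo + hi;
}
```

This is exactly the kind of "pull out a parallel piece" refactor the comment describes: the sequential accumulate is unchanged, only the partitioning is new.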

Interesting read, but don't agree at all.

bitL 4 days ago 0 replies      
I agree with the author; I still long for the not-overly-complicated C++ of the '00s, when I could write a super-fast 3D rendering engine without much bloat. I find it very appalling that C++ went from a poster child of imperative programming to implementing monads in its libraries (mind you, monads are used to "simulate" imperative programming in functional programming). Something went wrong there...
sickbeard 4 days ago 0 replies      
His argument about simplicity resonates with me. Sure you can learn variadic templates and all that fancy stuff but in practice when you are working on production software in any company involving more than one person using the code base, it just pays in heaps to write the simplest easiest to understand code; meaning all that nice fancy stuff is almost never used.
hackerweb 3 days ago 0 replies      
How are Verilog and VHDL a "new set of languages"? That set has been around 30 years, almost as long as C with classes.
progman 4 days ago 0 replies      
The problem with modern C++ is that it wants to be everything. Now this behemoth is collapsing under its own weight.

People who are not forced to use C++ should consider other languages which are way cleaner and even more performant. Code written in Ada and Nim for instance is much easier to maintain.

koyote 4 days ago 1 reply      
Am I the only one being redirected to a linkedin sign up screen?
afsafafaf 4 days ago 1 reply      
Wonder if they tried IncrediBuild to reduce their compile time? They are right that C++ - while faster than ever before - takes much longer to compile than many other languages.
Nano2rad 3 days ago 0 replies      
Functional language programs have to run as interpreted. If compiled they will be too bloated.
frozenport 3 days ago 0 replies      
Being an expert FPGA programmer is easy, the problem is that small things take a really, really long time.
sitkack 4 days ago 0 replies      
> "that is where the unicorns are born: by people that can see on both sides of the fence"
blux 4 days ago 1 reply      
Anybody got an idea to which video series of Chandler Carruth he is referring to?
je42 4 days ago 0 replies      
Actually, the author wants Go.
known 4 days ago 0 replies      
Kernel is my new home;
known 4 days ago 0 replies      
Me too :)
ensiferum 4 days ago 2 replies      
It just sounds like someone who couldn't handle C++ whining and making a bunch of blanket statements without really having any proper understanding.

I agree that some of the features, such as lambdas, can lead to hard-to-track bugs (lifetime issues) and difficult-to-follow code when abused. When used well, though, they can lead to simple, elegant, and straightforward code (anyone who tried to use the STL algorithms before lambdas knows what a pita it was most of the time).
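A minimal before/after sketch of that point, using std::count_if (my own toy example):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Pre-C++11: a separate functor struct, defined away from the call site.
struct IsNegative {
    bool operator()(int v) const { return v < 0; }
};

long count_negative_old(const std::vector<int>& xs) {
    return std::count_if(xs.begin(), xs.end(), IsNegative());
}

// C++11: the predicate lives right where it is used.
long count_negative_new(const std::vector<int>& xs) {
    return std::count_if(xs.begin(), xs.end(), [](int v) { return v < 0; });
}
```

Both compute the same thing; the lambda version simply keeps the logic at the call site instead of in a named one-off class.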

Bottom line, if your code base is a mess don't blame the tool. Blame the programmers.

CSS coding techniques mozilla.org
313 points by nachtigall  2 days ago   131 comments top 25
georgestephanis 2 days ago 10 replies      
> This is implied in the previous recommendations, but since it's crucial I want to stress it: Don't use !important in your code.

Ehhhh, sometimes `!important` is a good option. I agree that just for the 'quick fix' it's a bad idea, but it can solve other issues beautifully.

For example: If you're writing a plugin or some sort of module that will inject a block of styled content in websites whose normal styling you have no control over, and you need to not accidentally let the sites override your own styling, it's a great fit.

In that situation, it's used to ensure that your block will display correctly across thousands or millions of websites and not fall victim to some awkward accidental local CSS rule like `#body #content #post strong {}` that some developer thought was a clever idea at one point, but didn't realize the implications of the insane level of specificity that is assigned. (I've seen this happen and the support load that it causes).

Far better to have reasonable selectors targeting only your markup, with `!important` so that sites, if they want to, can override your styling, but they just need to be explicit about it.

Long story short, rigidly holding to rules like these in /all/ situations can lead to far worse results for your users than taking them with a grain of salt and making educated decisions while understanding the implications.

JasonSage 2 days ago 2 replies      
This is a pretty good article about general things to do or to avoid when writing CSS.

If you'd like to go a step further, I really recommend this[1] CSS style guide by the folks at Trello. I try to use it on all my projects. It makes working with CSS so much easier in my experience.

[1] https://gist.github.com/bobbygrace/9e961e8982f42eb91b80

BigJono 2 days ago 3 replies      
> And using JavaScript to do the things CSS is meant for is just a terrible idea.

That's very strong wording, and I couldn't disagree more. I've added in-line styles into my React workflow for several projects and couldn't be happier with it.

To me, React is primarily a tool that lets the developer define their view in terms of their model, separating business logic from display. Styling is a part of that view. Class names are just an extra abstraction between the view and the model, and offer little benefit.

Kmaschta 2 days ago 6 replies      
We've just released Universal.css, which is a great state-of-the-art CSS library. https://github.com/marmelab/universal.css

What do you think, guys?

spdustin 2 days ago 3 replies      
All the people piling on with "!important is evil, nuke it from orbit!" have clearly never been tasked with branding an enterprise product like SharePoint.
crucio 2 days ago 1 reply      
I know not everyone is using react and webpack, but using CSS modules has made CSS incredibly easy and removes the pitfalls that most large projects fall into.

With CSS modules you don't have to think about whether you're being too specific or whether your styles will have side effects, because all of your CSS is locally scoped to just the component you're working with.


huuu 2 days ago 2 replies      
What also helps a lot is to keep in mind that classes are for classifications (categories) and id's for identifications (unique).

I see a lot of people using classes to create unique styles for specific elements. This will make a mess of your stylesheet.

Also try not to think too much in subcategories (nested classes), because it will remove flexibility from your stylesheet. For example:

  .button {}
  .button.text-center {}

is used to create buttons with centered text. But now you need to create another class for other elements that need centered text. So instead this would be more flexible:

  .button {}
  .text-center {}

In both cases you can use <element class="button text-center">.

drinchev 2 days ago 0 replies      
These days, based on the requirements I would suggest using CSS Modules [1].

On the other hand if this would be an overkill ( small landing page ), then basically whatever you would choose it would be okay, because rewriting the stylesheets will take less than a day.

If the requirements don't allow that and you are building a huge complex UI, then I would advise >against styling tags<. There have been at least 100 times in my career where I discovered someone had styled an `a` element that I would like to become a button. My approach is to use class names everywhere.

1 : https://github.com/css-modules/css-modules

ianpurton 2 days ago 4 replies      
I'm a dev so I think I'll carry on using Bootstrap thanks.

It may be a 'bloated framework' but it works and I've spent enough time on CSS already.

babbeloski 2 days ago 1 reply      

- Only use cascading for global theme styles, like type faces

- Don't nest rules with sass or w/e, it's too hard to know what's affecting what in a template

- Naming conventions: Prefix every component class with the titlecased version of the name, have one css file per component that's of the same name. This makes it easy to know where to look for styles when you're working in a template. e.g. styles/header.css would contain:

  .Header-container
  .Header-link
  .Header-title--hover
  .Header-image

blowski 2 days ago 2 replies      
For me personally, the hardest problem in CSS is the inability to automate testing effectively. If I change this declaration to fix this bug, who knows what's going to happen across the rest of the project? It's less likely to break something else if I add a new class to fix this bug.

Of course, I know in the long run, that will lead to even more bugs but I don't have an effective workflow around that.

madethemcry 2 days ago 10 replies      
Great article. It should emphasize the part about !important even more.

!important should be banned. !important divides the good guys from the bad guys. Whenever I see code with !important, I instantly know that guy was lazy, inexperienced, wrongly guided - you name it. And it also makes my life hard if I have to continue working on that same CSS.

The article also misses one of the most useful methods for being more granular with the cascade. To apply a new rule, instead of using a fresh new class or hammering in new style definitions with an id/!important, just repeat the class in your CSS selector. More of the same class means more specific.

 .button.button.button outranks .button.button, which outranks .button.

See here for an example: http://codepen.io/gkey/pen/ONevEe

It's not beautiful and if you're doing css right you won't need it most of the time. But try this before using the big bad boys like an #ID or the !important statement.

regecks 2 days ago 4 replies      
Compounding em is total hell to keep track of in my head, even with a stylesheet open in front of me. I'm not much on top of front-end, but why aren't rem units just better in every way?
z3t4 2 days ago 0 replies      
You should only use classes in CSS! Leave #ids for JavaScript: you won't have to update your JavaScript when you update your CSS, and vice versa.
tmaly 1 day ago 0 replies      
Great post, I did not know about the viewport options. I first learned about specificity from another poster on here who created a store locator module for websites. He pointed me to a 3rd-party JavaScript book written by former engineers at Disqus.

I love using SASS to generate my CSS, and I recently started using bourbon and neat with my Go project.

I ran into issues as I did not want to use gulp or grunt so I put together my own way using a Makefile


jaseflow 2 days ago 2 replies      
I like most of these except the one about not using !important.

I use .flat { margin-bottom: 0 !important } to remove the default bottom margin from certain typographical elements (ie. h1,p,ul). In this case where I absolutely know that I mean what I say when I apply that class, it works great.

quakenul 2 days ago 0 replies      
Overall rather mundane stuff (I kind of expected something more profound when it addressed the "seasoned developers" in the opening paragraph). I recommend csswizardy.com articles for more insightful thoughts/solutions to actual high level css problems.
benologist 2 days ago 1 reply      
I'm partial to BEM from Yandex for my CSS -


Raphmedia 1 day ago 0 replies      
Don't use !important and don't use any id.

If you are reduced to needing either of them, it means that your CSS has gotten out of hand and you are trying to apply band-aids.

If you ever need to use !important or an id to fight selector specificity, take a step back and rework all your CSS.

Source: 5 years of not using either of those as a front-end developer.

manmal 2 days ago 1 reply      
My main takeaway from this article was: be as unspecific as possible. I think making things as specific as possible is what coders learn early on (component thinking etc.). But since we want the whole page/component to profit from the styles we write, this makes a lot of sense.
igotsideas 2 days ago 0 replies      
Having been on projects where !important was used, I'm not able to rationalize why it's okay to use. It's a quick fix and leads to bad css. We have it in a linter at work so nobody tries to pull a fast one :)
z3t4 2 days ago 1 reply      
Be careful, some of these tips won't work in old browsers! Yeah, there are still old browsers out there (mobile devices, enterprise workstations).
xufi 2 days ago 0 replies      
Pretty cool set of tips. I didn't even know about the !important tip.
pistoriusp 2 days ago 0 replies      
The majority of these problems have been solved for me by adopting SMACSS and BEM.
jackgavigan 2 days ago 8 replies      
Am I the only person who thinks that writing CSS should not count as "coding"?

I feel like the term "coding" should be reserved for writing code that will be compiled. Anything that's not compiled (but is still Turing complete) is "scripting". CSS, HTML, etc. is just "tagging".

How I Accidentally Captured the SpaceX Falcon 9 Landing petapixel.com
294 points by electriclove  3 days ago   26 comments top 10
6stringmerc 2 days ago 2 replies      
If the author happens by to read this, I wanted to maybe offer a bit of help with some of the 'in the field' frustration noted:

>This time of year is sea turtle season in the southeast and the threatened turtles that come up on the beach to lay their eggs (and any little ones that hatch) are highly sensitive to light and often get turned around and disoriented by lights on the beach. For that reason, South Carolina (and presumably other states in the area) has instituted a no-lights-on-the-beach policy. Luckily there's enough light pollution that you can at least navigate without a problem, but not being able to use a flashlight to help with focusing, adjusting camera settings, etc., is a bit of an annoyance.

When I was in Costa Rica seeing endangered sea turtles doing their thing, the local guides used red bulbs because they were not disorienting to the sea turtles. The noise from the tourist group was killing me, I gotta say, but hey, I was along for the ride in this case. I can say I learned the red light thing.

So I looked it up real quick and found some info and links from a South Carolina conservation group. They state that the ordinance forbids "disruptive lights". Then they had a link to a site of 'certified' bulbs for use around wildlife. Red is one of the main colors featured:


Thus, using a red light may be okay under the spirit and way the ordinance(s) are written, but calling ahead might be a good idea too.

eganist 3 days ago 4 replies      
I'm not a photography buff by any stretch, so I'm probably flagrantly abusing some language here, but I feel an HDR timelapse of this shot (http://petapixel.com/assets/uploads/2016/05/zgrether_spacex_...) would probably be more beautiful than the end result (http://petapixel.com/assets/uploads/2016/05/zgrether_spacex_...) if only because my brain is jarred by the conflict of a timelapsed landing with a still shot of the stars.

Both are beautiful in any case.

cooper12 3 days ago 2 replies      
Wow it's astounding how much work can go into processing an image. I think he brings up an interesting point when he says he was interested in telling a story rather than depicting reality. Makes one think twice about all those beautiful nature and space shots they see.
neiled 3 days ago 1 reply      
The stars sure are beautiful.
uberdog 2 days ago 1 reply      
I personally like this animated gif that SpaceX tweeted better than the static image the photographer created:


pjungwir 2 days ago 1 reply      
Wow that sure is beautiful, and what luck! Does anyone know what the red flash was right in the middle, just above the water?
zeiss_otus 2 days ago 0 replies      
Woah! His gear costs around 8k.

Sony Alpha a7R II Mirrorless Digital Camera - 3k
Zeiss Otus 28mm f/1.4 ZE Lens - 5k

Anyone who says you need skills in photography is dead wrong, it's all about the gear.

peterwwillis 2 days ago 0 replies      
Great, now I want to spend $10,000 on cameras to photograph nature.
Aelinsaar 2 days ago 0 replies      
That's... so cool. I always love to see the relative motion of celestial bodies, and with a rocket in the foreground?! My jaw actually dropped a little.
chinathrow 2 days ago 0 replies      
You suck at masking ;)
Theranos Voids Two Years of Edison Blood-Test Results wsj.com
297 points by ssclafani  3 days ago   167 comments top 18
aresant 3 days ago 3 replies      
"One family practitioner in a suburb of Phoenix said a Theranos representative dropped off a stack of 20 corrected test reports a few weeks ago. Many of the voided results were for calcium, estrogen and testosterone tests.

The doctor said one corrected report is for a patient she sent to the emergency room after receiving abnormally elevated test results from Theranos in late 2014."

Tort attorneys should be licking their lips.

It would be shocking if Theranos survives this.

Beyond that Walgreens - the largest retail pharmacy chain in the USA and Theranos' wellness center partner - should also be in the crosshairs.

Feels like they should have had some better safeguards for consumers before committing to the 40-store pilot in AZ.

lquist 3 days ago 3 replies      
I'm based in SV, and I see a lot of big name entrepreneurs rallying behind her and I don't understand it. You cannot be cavalier/lean about human life. These people deserve to be jailed.
a_small_island 3 days ago 1 reply      
>That means some patients received erroneous results that might have thrown off health decisions made with their doctors.

Put them in jail.

apo 3 days ago 6 replies      
This is what happens when you try a Minimum Viable Product in healthcare and aren't up-front about slower than expected R&D progress.
_Codemonkeyism 3 days ago 1 reply      
Interesting that they couldn't even use other people's machines:

"A person familiar with the matter said the Arizona lab performed the blood-coagulation tests with a traditional machine from Siemens AG that was programmed to the wrong settings by Theranos.

The Arizona lab also failed several tests to gauge the purity of the water it uses in its Siemens machines, which could affect the accuracy of some blood tests run on the devices, the person said."

taneem 3 days ago 2 replies      
This is likely the beginning of the end. With such a massive loss of trust, especially in the healthcare space, it is hard to see how the company could ever recover in the eyes of customers, investors or employees.
rcarrigan87 3 days ago 3 replies      
Can someone put this into perspective...how often are there major recalls or calibration issues at other, more established labs and testing companies?

Certainly not trying to defend Theranos, just trying to understand how bad this really is. Because it sounds pretty horrible...

oneloop 3 days ago 0 replies      
"Theranos has declined to quantify to Walgreens the scale of its test corrections"

Doesn't seem like they're learning anything.

dvcrn 3 days ago 4 replies      
Using "https://www.google.com/" as Referer (with the 'Referer Control' extension for example) gets around the paywall
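(For anyone curious how that works outside the browser: the extension just attaches the header before the request goes out. Here's a minimal sketch of the same idea using only Python's standard library. The WSJ URL is an illustrative example, and whether a given paywall actually honors a spoofed Referer is site-dependent and can change at any time.)

```python
import urllib.request

# Build a request that claims to arrive via a Google search result.
# Some paywalls relax their gate for search-engine referrals; this
# merely sets the header -- it does not guarantee access.
url = "https://www.wsj.com/"  # example target
req = urllib.request.Request(url, headers={"Referer": "https://www.google.com/"})

# The header is attached to the request object before it is ever sent:
print(req.get_header("Referer"))  # -> https://www.google.com/

# To actually fetch the page, you would then call:
# with urllib.request.urlopen(req) as resp:
#     html = resp.read()
```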
bane 3 days ago 0 replies      
Again, one of the most important interviews with Holmes: https://www.youtube.com/watch?v=MBs-oj7U-bo

Edison is discussed. "We don't use Edison for anything and haven't for a few years now."

tn13 3 days ago 1 reply      
Can someone please summarize what this actually means? The article is behind a paywall and the title is cryptic.

What does "void" mean? Less accurate, completely wrong, completely random? And what does "two years" refer to?

jbuzbee 3 days ago 1 reply      
The class-action lawyers must be salivating over "tens of thousands of corrected blood-test reports"
hathym 3 days ago 0 replies      
if you came here without reading the article, all you need to know is that Theranos is fucked.
radnam 3 days ago 0 replies      
I was extremely optimistic about Theranos, and having to see them go through this is sad on so many levels. Not calibrating standard testing machinery correctly just does not cut it.

One of their notable contributions is to set a precedent in Arizona, where consumers can now order their own tests without a doctor's orders.

I believe consumer awareness of state-of-the-art diagnostic testing and making testing readily accessible can have a fundamental impact on people's wellness.

ps: I am not advocating more testing.

josh_carterPDX 3 days ago 4 replies      
This is not what disruption looks like. This is what happens when you have someone with no domain expertise, but a great idea. We're seeing the same thing happening with Zenefits. Founders with no domain experience need to look at how they're going to enter a market full of incumbents. These incumbent businesses have survived for years because they know how to play the game. They have people in their employ that know how to lobby the right regulatory bodies. Theranos had none of this. So when all of this started coming down, they should have hired the most well-known and respected person in their field to bridge the gap between the past and future. Without that, they are outsiders playing in a game that has been around forever. They were set up for failure before they even began and no one was smart enough to ask "How will they disrupt an industry that has been around for decades?" Just saying, "We're going to make medical tests cheaper and more accessible" was clearly not the right answer.
return0 3 days ago 0 replies      
The reporting by Carreyrou is particularly insistent on putting Holmes front and center in each of the articles in this series. She's in the article subtitle and first image again. I wonder if other execs are also responsible for this disaster.
foobar1962 3 days ago 0 replies      
So when is the Edison estate going to issue a cease order against Theranos for using their name and damaging the reputation?
vonklaus 3 days ago 5 replies      
I still believe in the idea of Theranos, and while I think it is great that Ev Williams was able to secure funding two more times to keep rebuilding different versions of blogger[0], I want to live in a world where we also take huge gambles on hard problems. If we adhere to VC math (we should, as this hypothetical is for VC investing), one of these payouts will be well worth it, e.g. Tesla/SpaceX. So yeah, I'd write down uBeam & Theranos, but you can fuck off if you want the world to stop investing in big ideas.

[0]twitter.com, medium.com

edit: Also, we can assume it isn't physically impossible to use smaller amounts of blood to perform tests. So yeah, it was super obvious from the beginning that some immigrant who happened to be in the right place at the right time and made some money in software at the height of the DOTCOM era couldn't build a sustainable rocket program that rivals those of 1st-world nations. Except it wasn't obvious, and the next big innovation won't be obvious, and if you think it is, you are either building it or just straight up wrong.

The TSA is a waste of money that doesn't save lives and might actually cost them vox.com
258 points by paulpauper  3 days ago   253 comments top 29
two2two 3 days ago 11 replies      
The TSA is the number one reason why I drive instead of fly. From my POV, most of the world's industries have progressed positively, but not air travel. I took a train a couple of years ago and it was a beautiful example of old merging with new: walking through an antique of a train station, iPhone in hand, with my digital ticket ready to board; so easy and pleasant.

At that point I realized that air travel is by far the worst traveling experience money can buy.

If an alternative airport wanted to do things a little differently, such as "fly at your own risk" / "no lifeguard on duty" (aka no TSA b.s.), I'd happily take the "at your own risk" option rather than the TSA-controlled situation we're subjected to currently.

mdorazio 3 days ago 1 reply      
In my opinion, the TSA is basically a very expensive jobs program rather than an actual security organization. This is a big part of why it's going to be hard to get rid of now. According to Wikipedia, the TSA employs over 55,000 people, many of whom would probably have difficulty getting a similar level job if we reverted to a more sane security screening program. Anything that kills thousands of government jobs is hard to get through Congress, even if it's unpopular with the public.
Domenic_S 3 days ago 2 replies      
The TSA is a jobs program with a bit of "throw government contracts to your buddies" mixed in. Same with the military to an extent.

A TSA Screener job is about the closest we'll get to Basic Income: stand around in an airport occasionally groping people for $13-18/hr, plus awesome Federal benefits. Qualifications: essentially none.

rm_-rf_slash 3 days ago 1 reply      
I live in a small city with a small airport. One day, while waiting for my departure plane to arrive, TSA kicked everyone out of the secure gate and back into the insecure terminal, because the plane would not arrive for another half hour and they didn't want to keep watching us in a room with barely 50 seats. Then we had to go through security again once every single passenger had arrived.

The point is that security is fear-motivated. 99% doesn't matter if it isn't 100%, even if logic and probability put the chance of that remaining 1% actually materializing at .001%. So if you let the 1% slip through and something happens, well, who wants to take the blame?

And now we have this mess.

ndirish1842 3 days ago 3 replies      
I wonder how autonomous driving will affect shorter flight commutes. I'll probably never take a car from Philadelphia to LA, but I might prefer to travel by car from Philadelphia to Chicago if I know that I can sleep throughout the car ride (as well as leave whenever is most convenient). When you take into account driving to the airport, checking bags, security, flight delays, baggage claim, and rental cars/driving to your hotel, a 12 hour drive doesn't look nearly as bad, especially when you could leave at 10 PM and wake up at 10 AM arriving at your destination. And it's way less stress compared to the hassle of TSA and flights.
rhino369 3 days ago 8 replies      
It's easy to say that the TSA sucks (it does), but it's hard to propose a workable alternative. Well, alternative 1: stop making us take off our shoes and take out our laptops; it's clear from pre-check that it's not really necessary.

You need some security. That was clear before 9-11. Airports had security and it was pretty similar to how TSA does it right now. You put your bags on an Xray machine, show your ID, and walk through a metal detector.

I'd suggest keeping the government in charge of what procedures to use, but then using private contracts to actually manage the airport security.

The real problem with the TSA isn't that it is intrusive. It's that it is terribly mismanaged and has no incentive to improve the experience.

Although apparently airports can opt out of the TSA.

jonnathanson 3 days ago 2 replies      
The article is exactly right about what needs to be done, and who needs to do it: the airports themselves. No chance any elected official is going to scale back the TSA's screening creep at this point.

The political risks of looking "soft on terrorism" are just too high. Imagine being a politician responsible for a TSA rollback, and then, by dumb luck, a terrorist attack succeeds a short while later. There may be zero correlation, but do you think the media will care? Do you think the public will care? Do you think your political opposition will care? Ha. Your career would be over in a heartbeat. And if your opponents really felt like twisting the knife, they might drum up hearings and lawsuits against you. So call me cynical, but I just don't see any lawmaker or policy wonk sticking his or her neck out anytime soon.

This is why it's in the hands of airports to push for any particular change. They're not running for office.

suprgeek 3 days ago 3 replies      
The article completely misses the point of the TSA. It is not meant to actually make air travel safer. It is there for exactly two reasons:

1) Provide our dear politicians the satisfaction that they "Did something" - Security theater is very useful during election times (Tough on Crime et al)

2) Provide a convenient excuse to expand the govt.'s ability to dictate yet another aspect of people's normal lives. The govt. now has another tool to harass "undesirables": simply put their name on a "No fly", "No Train", "No $SomeOtherThing" list and have their TSA buddies enforce it. Or have the "undesirables" be pulled aside for "random" screenings every single time [1].

[1] http://arstechnica.com/tech-policy/2015/07/citizenfour-filmm...

This is the real purpose of the TSA. Your safety or saving lives is irrelevant.

sehutson 3 days ago 0 replies      
What's crazy is that the article doesn't even mention the effective lost lives in the sheer number of hours people waste by getting to the airport so early.

If you assume 75 years x 365 days x 24 hours, that's 657,000 hours in a fairly typical life. Millions of travelers waiting an hour or more each = a lot of "lives" wasted standing in line.
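The back-of-envelope math checks out; here's the same estimate as a quick Python sketch. The daily-traveler figure is my own illustrative assumption, not something from the comment or the article:

```python
# Hours in a fairly typical 75-year life
hours_per_life = 75 * 365 * 24
print(hours_per_life)  # 657000

# Assume (illustratively) ~2 million US air travelers per day,
# each losing one extra hour to arriving early and standing in line.
travelers_per_day = 2_000_000
wasted_hours_per_year = travelers_per_day * 365

# Express the yearly waste in units of whole human lifetimes
lifetimes_per_year = wasted_hours_per_year / hours_per_life
print(round(lifetimes_per_year))  # 1111
```

Under those assumptions, the lines eat up on the order of a thousand lifetimes' worth of hours every year.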

pmontra 3 days ago 2 replies      
> Airports should kick out the TSA

I'm not American and haven't been there for a long time, so forgive my ignorance. The TSA is an agency of DHS, so I believed that its presence in airports was mandated by the government. Can airports really replace it with anybody they like? If that's the case, why didn't they do it before? Only because the TSA is free, while airports would have to pay private security companies?

makecheck 2 days ago 0 replies      
It is so frustrating to see a lot of the solutions being proposed by the administration: wanting to hire more screeners, blaming passengers for bringing too many pesky bottles of water and pocket knives, etc. They are missing the obvious solution that should be at the top of the list, right in front of their faces: we must REMOVE safeguards to speed things up.

The probability that a bottle of water or anything that looks like water will cause an airline disaster is effectively ZERO. It is not a risk, and not even slightly concerning, period. This is not worth checking even once, even at random, much less millions of times a day.

And pocket knives? They SERVE FOOD WITH KNIVES on planes. They literally give you a knife in first class. If it were someone's goal to obtain a knife on board, they would not need to bring it through security. And frankly, one could argue that knives are the opposite of risky: a few passengers with knives to defend themselves may very well be able to prevent a handful of hijackers from doing anything. Either way, I am strongly on the side of teaching people to band together and defend themselves, not cower and be fearful of everything.

And don't even get me started on having to take off shoes. It is frankly sad that we have been so fixated on ONE piece of clothing, for years and years and years, as a reaction to ONE passenger out of millions, who couldn't even carry out his threat successfully.

Besides, the entire concept of prohibited items does not eliminate risk. There are human beings who are powerful enough and skilled enough to cause serious damage or death all by themselves. They don't need prohibited items; they simply are deadly. A group of passengers that knows how to band together and fight back can subdue anyone, even a passenger who is deadly all by himself.

mwsherman 3 days ago 2 replies      
In terms of $$, by far the biggest cost is in the wasted time of the millions of people who are subjected to this. It's obviously in the billions.
zer00eyz 3 days ago 3 replies      
In the world we live in there is one surefire way to get rid of the TSA: Stop flying.

Sad to say but money is a big motivator, and until the airlines get the message that we don't want to deal with this shit, they aren't going to really push for actual change.

bogomipz 3 days ago 1 reply      
SEATAC in Seattle and the Port Authority in New York and New Jersey have threatened to privatize TSA duties as well. The question is: can they? What's to stop them? Why is it taking so long?

How was this agency not looking at actual travel data, such that they failed to hire more staff as the number of air travelers increased? This was over a two-year period. The idiot in charge of the TSA said they anticipated more people would sign up for TSA PreCheck. At some point in the last two years they couldn't see that this trend wasn't transpiring?

This same idiot said that he was asking Congress for more money for overtime for TSA employees. Great, make the same miserable people work even longer hours. That sounds like a great solution.

He also made a statement to the effect that their "mandate is to keep America safe," yet he seems to not grasp that if we can't get on the plane, it doesn't much matter.

They also seem to blame part of the increased wait on the tragedy in Belgium, but do you mean to tell me that not one person in this agency could see that the departure halls were a huge blind spot?

I imagine that lawmakers in Washington don't have to wait in the long lines like the rest of us. That's generally how the broken stuff in the US stays broken: lawmakers aren't exposed to it. This is true of healthcare as well. Congress has indemnity health plans, which is why they have no idea how bad it is for the rest of us.

carsongross 3 days ago 1 reply      
The TSA is obviously a complete clusterfuck, but it is offering us an important lesson:

Despite everyone hating it, including Big Business, it persists and will likely continue to exist until the U.S. Government collapses. It is nearly impossible to ratchet back a government program dedicated to "security", among other sacred words.

Look at the solutions being offered: add more workers, more bomb dogs, etc.

The system cannot fix itself. Perhaps the system does not want to fix itself.

patrickmay 3 days ago 1 reply      
Airports should replace the TSA with security companies that use El Al's techniques: https://skift.com/2013/11/15/tsas-behavioral-detection-techn...
awinter-py 3 days ago 0 replies      
Love that they're quoting Bruce Schneier in defense. I think he was just being fair-minded because he doesn't want to appear smug. This is a guy who walked through screening with a 'beer belly' (a beer-smuggling device for stadiums) full of gasoline and then blogged about it.
Friedduck 3 days ago 0 replies      
I've had TSA agents look through my wallet, and on a separate flight look through playing cards one by one. I was also let through with no screening once by accident.

I've seen them yell at passengers, drift off, sit around talking with long lines waiting, and every other conceivable offense. Most are fine but there are a lot of exceptions.

They contribute nothing, and I for one fly less frequently because of them.

As to pre-check: at Atlanta that doesn't always get you a short line or fast security wait time.

truehearted47 3 days ago 0 replies      
I also have stopped flying altogether due to the invasion of privacy and feeling like cattle. PLUS, now that there are long lines, the chance of tempers flaring is real. Just witness the violence and hatred in the streets of America these days and watch how the police are unable to control riots... YES, RIOTS... we no longer have protesters; protests are now riots. Airport crowding combined with invasion of privacy, impatience & anger = disaster waiting to happen. The TSA is the terrorist here.
Mendenhall 3 days ago 2 replies      
In my personal experience, what slows it all down the most is the actual people flying. Every time I fly I see countless people wearing tons of metal/jewelry/belts/whatever that they have to take off, often not until they are told to do so; the laptop stays tucked far away until the last moment. They still carry on all sorts of lotions and liquids for some unknown reason. When they exit the scan, they clog up the line by standing right there trying to put everything back on or away.
ccvannorman 3 days ago 0 replies      
The difference between the US and other countries is not that we're stupider. It's that our slightly smarter/more powerful people are much better at manipulating the stupidity of the masses, and much greedier, than those elsewhere. That's why we leveraged fear and pushed hard so that you have to bend over for the TSA every flight.

My question is, what does the US look like without the TSA, and can we ever get there?

bluetidepro 3 days ago 3 replies      
How do we get rid of it, though? I get that it's terrible, and I've heard all these arguments countless times. How do you actually take action, though?
pgrote 3 days ago 2 replies      
I have long looked for an answer outside of security theater as to why the ban on liquids continues. If anyone has an answer, I'd appreciate hearing it.

If you go through a screening line and a liquid is found, the liquid is not tested. It is not handled carefully. It is not thoroughly inspected. It is tossed in the closest garbage can.

If the liquid really did pose a danger, wouldn't it be handled more carefully?

zipwitch 3 days ago 0 replies      
Those who say that the TSA is just a jobs program are missing the point. The TSA is a constant reminder of government presence and the security state; its effectiveness at security or as a jobs program is a minor concern compared to its value as a symbol. And of course, it's growing, spreading its presence to highways, rail, and other forms of public transit.
descript 3 days ago 0 replies      
Air travel should be the same as motor vehicle travel. The only reason there aren't small air taxi companies that offer regional trips for affordable prices is because government has been involved in airplanes since day 1, and it is illegal for private pilots to charge.
aaroninsf 3 days ago 0 replies      
IAMA request: an honest-to-god TSA screener. Not an embedded pinko journalist... someone who actually signed on.
reacweb 3 days ago 0 replies      
The TSA is not a very important issue, but politicians love to discuss this kind of issue, where they can show their talent without hurting their sponsors. It is a good way to distract the public from the more important issues (economy, unemployment, privacy, ...).
rconti 3 days ago 4 replies      
I hate security, though I hate the discomfort of air travel even more. In fact, I just got back to the US from Europe, and the cold that struck 12h after I left lasted 7-10 days (and I rarely get sick!)

That said, am I the only one who doesn't have these long security waits? I typically show up at the airport ~1h before boarding is to begin, and am often at my gate 50min before boarding begins.

I typically fly out of SFO, and I do admit, several journeys ago, I was actually IN LINE at security for 30 minutes which seemed absurdly painful and I was actually starting to sweat being late for boarding. Of course, at SFO they had TONS of extra machinery, they just didn't bother staffing it.

As much as I HATE taking off my watch, Fitbit, ring, car keys, wallet, belt, shoes, and phone, and then the scramble to take my laptop out of my bag as soon as I get room on the table (it becomes a high-pressure situation, as by the time you get to the table you have roughly 8 seconds before you're holding people up!)... the actual lines are quite tolerable.

I typically fly SFO, SEA, SAN, SJC, and fly cross country at least once or twice a year. I just got back from Copenhagen, Frankfurt, Stockholm, Munich airports, and again, no problems. I've fairly recently been to Auckland, Queenstown, Reykjavik, Heathrow, Florence, Paris as well.

There's no doubt most other countries do a better job than the US; the automated machinery for dealing with your possessions to be xrayed (they hold your bin until it's empty and then automatically return it to the beginning of the line!) and the switching between 10-15 security lines so that you're never behind more than a few people was a revelation.

But the actual time in security is rarely all that bad inside or outside the US.

       cached 22 May 2016 15:11:01 GMT