hacker news with inline top comments    .. more ..    20 May 2016 Best
Google supercharges machine learning tasks with TPU custom chip googleblog.com
813 points by hurrycane  1 day ago   268 comments top 45
luu 1 day ago 15 replies      
I'm happy to hear that this is finally public so I can actually talk about the work I did when I was at Google :-).

I'm a bit surprised they announced this, though. When I was there, there was this pervasive attitude that if "we" had some kind of advantage over the outside world, we shouldn't talk about it lest other people get the same idea. To be clear, I think that's pretty bad for the world and I really wished that they'd change, but it was the prevailing attitude. Currently, if you look at what's being hyped up at a couple of large companies that could conceivably build a competing chip, it's all FPGAs all the time, so announcing that we built an ASIC could change what other companies do, which is exactly what Google was trying to avoid back when I was there.

If this signals that Google is going to be less secretive about infrastructure, that's great news.

When I joined Microsoft, I tried to gently bring up the possibility of doing either GPUs or ASICs and was told, very confidently, by multiple people, that it's impossible to deploy GPUs at scale, let alone ASICs. Since I couldn't point to actual work I'd done elsewhere, it seemed impossible to convince folks, and since my job was in another area, I gave up on it, but I imagine someone is having that discussion again right now.

Just as an aside, I'm being fast and loose with language when I use the word impossible. It's more that my feeling is that you have a limited number of influence points, and I was spending mine on things like convincing my team to use version control instead of mailing zip files around.

bd 1 day ago 3 replies      
So now open sourcing of "crown jewels" AI software makes sense.

Competitive advantage is protected by custom hardware (and huge proprietary datasets).

Everything else can be shared. In fact it is now advantageous to share as much as you can; the bottleneck is the number of people who know how to use the new tech.

abritishguy 1 day ago 5 replies      
I think this shows a fundamental difference between Amazon (AWS) and Google Cloud.

AWS's offerings seem fairly vanilla and boring. Google are offering more and more really useful stuff:

- cloud machine learning

- custom hardware

- live migration of hosts without downtime

- Cold storage with access in seconds

- bigquery

- dataflow

manav 1 day ago 2 replies      
Interesting. Plenty of work has been done with FPGAs, and a few have developed ASICs like DaDianNao in China [1]. Google though actually has the resources to deploy them in their datacenters.

Microsoft explored something similar to accelerate search with FPGAs [2]. The results show that the Arria 10 (the 20nm latest from Altera) had about 1/4th the processing ability at 10% of the power usage of the Nvidia Tesla K40 (25W vs 235W). Nvidia's Pascal has something like 2-3x the performance with a similar power profile, which really bridges the gap for performance/watt. All of that also doesn't take into account the ease of working with CUDA versus the complicated development, toolchains, and cost of FPGAs.
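A quick sanity check on those figures (a sketch using only the numbers quoted above; performance is normalized, so only the ratios matter):

```python
# Arria 10 vs. Tesla K40, per the comparison above:
# ~1/4 the throughput at roughly 10% of the power (25 W vs. 235 W).
fpga_perf, fpga_watts = 0.25, 25.0   # normalized: K40 perf = 1.0
gpu_perf, gpu_watts = 1.0, 235.0

fpga_ppw = fpga_perf / fpga_watts
gpu_ppw = gpu_perf / gpu_watts
print(round(fpga_ppw / gpu_ppw, 2))  # ~2.35x better perf/watt for the FPGA
```

So a GPU generation delivering a 2-3x perf/watt improvement would indeed close most of that gap.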

However, the ~50x+ efficiency increase of an ASIC could be worthwhile in the long run. The only problem I see is that there might be limitations on model size because of the limited embedded memory of the ASIC.

Does anyone have more information or a whitepaper? I wonder if they are using eAsic.

[1]: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=701142...

[2]: http://research.microsoft.com/pubs/240715/CNN%20Whitepaper.p...

semisight 1 day ago 5 replies      
This is huge. If they really do offer such a perf/watt advantage, this is serious trouble for NVIDIA. Google is one of only a handful of companies with the upfront cash to make a move like this.

I hope we can at least see some white papers soon about the architecture--I wonder how programmable it is.

mrpippy 1 day ago 3 replies      
Bah, SGI made a Tensor Processing Unit XIO card 15 years ago.

Evidence suggests they were mostly for defense customers:


jhartmann 1 day ago 5 replies      
Three generations ahead of Moore's law??? I really wonder how they are accomplishing this beyond implementing the kernels in hardware. I suspect they are using specialized memory and an extremely wide architecture.

Sounds like they also used this for AlphaGo. I wonder how badly off we were on AlphaGo's power estimates. It seems everyone assumed they were using GPUs; sounds like they were not, at least not entirely. I would really LOVE for them to market these for general use.
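Taking the "three generations ahead" framing at face value, the arithmetic is simple (the 10x advantage below is a hypothetical round number, not a figure from the announcement):

```python
import math

advantage = 10.0                    # hypothetical perf/watt edge over GPUs
generations = math.log2(advantage)  # Moore's law doubles each generation
years_ahead = generations * 1.5     # ~18 months per process generation
print(round(generations, 1))  # 3.3
print(round(years_ahead, 1))  # 5.0
```

In other words, a ~10x perf/watt advantage is worth roughly three process generations, or about five years of Moore's law.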

asimuvPR 1 day ago 1 reply      
Now this is really interesting. I've been asking myself why this hadn't happened before. It's been all software, software, software for the last decade or so. But now I get it: we are at a point in time where it makes sense to adjust the hardware to the software. Funny how things work. It used to be the other way around.
breatheoften 1 day ago 1 reply      
A podcast I listen to posted an interview with an expert last week saying that he perceived that much of the interest in custom hardware for machine learning tasks died when people realized how effective GPUs were at the (still-evolving-set-of) tasks.


I wonder how general the gains from these ASIC's are and whether the performance/power efficiency wins will keep up with the pace of software/algorithm-du-jour advancements.

RIMR 1 day ago 2 replies      
Somewhat off topic, but if you look at the lower-left hand corner of the heatsink in the first image, there's two red lines and some sort of image artifact.


They probably didn't mean to use this version of the image for their blog - but I wonder what they were trying to indicate/measure there.

danielvf 1 day ago 1 reply      
For the curious, that's a plaque on the side of the rack showing the Go board at the end of AlphaGo vs Lee Sedol Game 3, at the moment Lee Sedol resigned and AlphaGo won the tournament (of five games).
nkw 1 day ago 1 reply      
I guess this explains why Google Cloud Compute hasn't offered GPU instances.
fiatmoney 1 day ago 3 replies      
I'm guessing that the performance / watt claims are heavily predicated on relatively low throughput, kind of similar to ARM vs Intel CPUs - particularly because they're only powering it & supplying bandwidth via what looks like a 1X PCIE slot.

IOW, taking their claims at face value, a Nvidia card or Xeon Phi would be expected to smoke one of these, although you might be able to run N of these in the same power envelope.

But those bandwidth & throughput / card limitations would make certain classes of algorithms not really worthwhile to run on these.
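To put the bandwidth concern in numbers (link rates are the standard per-direction PCIe data rates; the model size is a hypothetical round figure):

```python
model_bytes = 100e6   # say, 100 MB of 8-bit weights
pcie1_x1 = 250e6      # PCIe 1.x, one lane: ~250 MB/s per direction
pcie3_x16 = 15.75e9   # PCIe 3.0, sixteen lanes: ~15.75 GB/s

print(model_bytes / pcie1_x1)                  # 0.4 s just to stream the weights once
print(round(1e3 * model_bytes / pcie3_x16, 1)) # 6.3 ms on a full-width link
```

If the weights live on the card and only activations cross the bus, a narrow link matters much less, which may be part of the design.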

Coding_Cat 1 day ago 0 replies      
I wonder if we will be seeing more of this in the (near) future. I expect so, and from more people than just Google. Why? Look at the problems the fabs have had with the latest generation of chips, and as chips grow smaller, the problems will probably mount. We are already close to the physical limit of transistor size. So it is fair to assume that Moore's law will (hopefully) not outlive me.

So what then? I certainly hope the tech sector will not just leave it at that. If you want to continue to improve performance (per-watt) there is only one way you can go then: improve the design at an ASIC level. ASIC design will probably stay relatively hard, although there will probably be some technological solutions to make it easier with time, but if fabrication stalls at a certain nm level, production costs will probably start to drop with time as well.

I've been thinking about this quite a bit recently because I hope to start my PhD in ~1 year, and I'm torn between HPC or Computer Architecture. This seems to be quite a pro for Comp. Arch ;).

bravo22 1 day ago 2 replies      
Given the insane mask costs for lower geometries, the ASIC is most likely a Xilinx EasyPath or Altera HardCopy. Otherwise the amortization of the mask and dev costs -- even for a structured-cell ASIC -- over 1K units wouldn't make much sense versus the extra cooling/power costs for a GPU.
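The amortization point can be made with rough numbers (all figures below are illustrative guesses, not actual costs for this chip):

```python
# Hypothetical full-custom NRE (masks + development) at a small node, in USD.
nre = 10e6
units = 1_000
nre_per_unit = nre / units   # $10,000 of NRE carried by each deployed card

# Versus the power a card might save over a GPU: ~200 W, running 24/7, 3 years.
kwh_saved = 0.2 * 24 * 365 * 3      # ~5,256 kWh
dollars_saved = kwh_saved * 0.10    # ~$526 at $0.10/kWh
print(nre_per_unit, round(dollars_saved))  # 10000.0 526
```

At 1K units the per-card NRE swamps the power savings, which is exactly why a structured ASIC (EasyPath/HardCopy), with far lower NRE, changes the math -- or why the deployment must be much larger than 1K units.
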
phsilva 1 day ago 1 reply      
I wonder if this architecture is the same Lanai architecture that was recently introduced by Google on LLVM. http://lists.llvm.org/pipermail/llvm-dev/2016-February/09511...
nathan_f77 1 day ago 1 reply      
I'm thinking that this has the potential to change the context of many debates about the "technological singularity", or AI taking over the world. Because it all seems to be based on FUD.

While reading this article, one of my first reactions was "holy shit, Google might actually build a general AI with these, and they've probably already been working on it for years".

But really, nothing about these chips is unknown or scary. They use algorithms that are carefully engineered and understood. They can be scaled up horizontally to crunch numbers, and they have a very specific purpose. They improve search results and maps.

What I'm trying to say is that general artificial intelligence is such a lofty goal, that we're going to have to understand every single piece of the puzzle before we get anywhere close. Including building custom ASICs, and writing all of the software by hand. We're not going to accidentally leave any loopholes open where AI secretly becomes conscious and decides to take over the world.

taliesinb 1 day ago 0 replies      
I don't know much about this sort of thing, but I wonder if the ultimate performance would come from co-locating specialized compute with memory, so that the spatial layout of the computation on silicon ends up mirroring the abstract dataflow DAG, with fairly low-bandwidth, energy-efficient links between static register arrays that represent individual weight and grad tensors. Minimize the need for caches and power-hungry high-bandwidth lanes; ideally the only data moving around is your minibatch data going one way and your grads going the other way.

I wonder if they're doing that, and to what degree.

harigov 1 day ago 3 replies      
How is this different from - say - synthetic neurons that IBM is working on, or what nvidia is building?
Bromskloss 1 day ago 2 replies      
What are the capabilities that a piece of hardware like this needs to have to be suitable for machine learning (and not just one specific machine learning problem)?
cschmidt 1 day ago 1 reply      
This seems very similar to the "Fathom Neural Compute Stick" from Movidius:


TensorFlow on a chip....

isseu 1 day ago 0 replies      
Tensor Processing Unit (TPU)

Using it for over a year? Wow

hyperopt 1 day ago 1 reply      
The Cloud Machine Learning service is one that I'm highly anticipating. Setting up arbitrary cloud machines for training models is a mess right now. I think if Google sets it up correctly, it could be a game changer for ML research for the rest of us. Especially if they can undercut AWS's GPU instances on cost per unit of performance through specialized hardware. I don't think the coinciding releases/announcements of TensorFlow, Cloud ML, and now this are an accident. There is something brewing and I think it's going to be big.
saganus 1 day ago 3 replies      
Is that a Go board stuck to the side of the rack?

Maybe they play one move every time someone gets to go there to fix something? or could it be just a way of numbering the racks or something eccentric like that?

j-dr 1 day ago 1 reply      
This is great, but can google stop putting tensor in the name of everything when nothing they do really has anything to do with tensors?
hristov 1 day ago 3 replies      
It is interesting that they would make this into an ASIC, given how notoriously high the development costs for ASICs are. Are those costs coming down? If so, life will get very hard for the FPGA makers of the world soon.

It would be interesting to see what the economics of this project are, i.e., what the development costs and cost per chip look like. Of course it is very doubtful I will ever get to see them, but it would be interesting.

protomok 1 day ago 0 replies      
I'd be interested to know more technical details. I wonder if they're using 8-bit multipliers, how many MACs running in parallel, power consumption, etc.
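For context, here is a sketch of the kind of 8-bit quantized multiply-accumulate an inference chip would implement in silicon (the scale factors and values are illustrative, not anything from Google's design):

```python
def quantize(xs, scale):
    """Map floats to the signed 8-bit range [-127, 127] with a shared scale."""
    return [max(-127, min(127, round(x / scale))) for x in xs]

weights = [0.5, -0.25, 0.125, 1.0]
inputs = [0.2, 0.4, -0.6, 0.1]
w_scale = x_scale = 1.0 / 127

qw, qx = quantize(weights, w_scale), quantize(inputs, x_scale)

# Products accumulate in a wide integer register (int32 in hardware);
# only one rescale back to float happens at the end, so the array of
# multipliers stays tiny 8-bit units instead of 32-bit FPUs.
acc = sum(w * x for w, x in zip(qw, qx))
approx = acc * w_scale * x_scale
exact = sum(w * x for w, x in zip(weights, inputs))
print(round(approx, 4), round(exact, 4))  # 0.025 0.025 -- 8-bit MACs get close
```

The hardware win is that an 8-bit integer multiplier takes a small fraction of the area and energy of a 32-bit floating-point one, so far more MACs fit in the same power budget.
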
__jal 1 day ago 0 replies      
My favorite part is what looks like flush-head sheet metal screws holding the heat sink on.

No wondering where you left the Torx drivers with this one.

eggy 1 day ago 0 replies      
Pretty quick implementation.

On the energy savings and space savings front, this type of implementation coupled with the space-saving, energy-saving claims of going to unums vs. float should get it to the next order of magnitude. Come on, Google, make unums happen!

aaronsnoswell 1 day ago 2 replies      
I'm curious to know: is this announcement something that an expert in these sorts of areas could have predicted (or did predict) months or years ago, given Google's recent jumps forward in machine learning products? Can someone with more knowledge about this comment?
j1vms 1 day ago 2 replies      
I wouldn't be surprised if Google is looking to build (or has done so already) a highly dense and parallel analog computer with limited-precision ADCs/DACs. I mean, that's simplifying things quite a bit, but it would probably map pretty well to the TensorFlow application.
paulsutter 1 day ago 0 replies      
> Our goal is to lead the industry on machine learning and make that innovation available to our customers.

Are they saying Google Cloud customers will get access to TPUs eventually? Or that general users will see service improvements?

eggy 1 day ago 1 reply      
I think the confluence of new technologies and the re-emergence / rediscovery of older technologies is going to be the best combination. Whether it goes that way is not certain, since the best technology doesn't always win out. Here, though, the money should, since all of these would greatly reduce time and energy in training and validating:

* Vector processing computers - not von Neumann machines [1].

* Array languages new, or like J, K, or Q in the APL family [2,3]

* The replacement of floating point units with unum processors [4]

Neural networks are inherently arrays or matrices, and would do better on a purpose-designed vector array machine, not a re-purposed GPU, or even the TPU in the article sitting in a standard von Neumann machine. Maybe a non-von Neumann architecture like the old Lisp Machines, but for arrays, not lists (and no, this is not a modern GPU: the data has to stay on the processor, not be offloaded to external memory).

I started with neural networks in the late '80s and early 1990s, and I was mainly programming in C: matrices and FOR loops. I found J, the array language, many years later, unfortunately. Businesses have been making enough money off of the advantage of the array processing language A+, then K, that the per-seat cost of KDB+/Q (database/language) is easily justifiable. Other software like RiakTS is looking to get in the game using Spark/Shark and other pieces of kit, but a K4 query is 230 times faster than Spark/Shark, and uses 0.2GB of memory vs. 50GB. The similar technologies just don't fit the problem space as well as a vector language. I am partial to J, being a more mathematically pure array language in that it is based on arrays. K4 (soon to be K5/K6) is list-based at the lower level, and is honed for tick data or time-series data. J is a bit more general-purpose or academic, in my opinion.

Unums are theoretically more energy efficient and compact than floating point, and take away the error-guessing game. They are being tested with several different language implementations to validate their creator's claims and practicality. The Mathematica notebook that John Gustafson modeled his work on is available as a free download from the book publisher's site. People have already done some exploratory investigations in Python, Julia, and even J. I believe the J one is a 4-bit implementation based on unums 1.0. John Gustafson just presented unums 2.0 in February 2016.
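A two-line illustration of the floating-point "error-guessing game" that unums are meant to eliminate:

```python
# 0.1 has no exact binary representation, so ten additions of it
# silently accumulate rounding error.
total = sum([0.1] * 10)
print(total == 1.0)  # False
print(total)         # 0.9999999999999999
```

Unums carry an explicit "inexact" bit (and, in 2.0, interval-style bounds), so the result would advertise its own uncertainty instead of quietly pretending to be exact.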

[1] http://conceptualorigami.blogspot.co.id/2010/12/vector-proce...

[2] jsoftware.com

[3] http://kxcommunity.com/an-introduction-to-neural-networks-wi...

[4] https://www.crcpress.com/The-End-of-Error-Unum-Computing/Gus...

mistobaan 1 day ago 0 replies      
Another point is that they will be able to provide much higher computing capabilities at a much lower price point than any competitor. I really like the direction that the company is taking.
swalsh 1 day ago 0 replies      
I wonder if opening this up as a cloud offering is a way to get a whole bunch of excess capacity (if it needs it for something big?) but have it paid for.
dharma1 1 day ago 0 replies      
Hasn't made a dent in Nvidia's share price yet.
amelius 1 day ago 2 replies      
One question: what has this got to do with tensors?
camkego 1 day ago 1 reply      
Does anyone have links to the talk or the graphs?
ungzd 1 day ago 0 replies      
Does it use approximate computing technology?
niels_olson 1 day ago 0 replies      
I like that the images are mislabeled :)
revelation 1 day ago 0 replies      
There is not a single number in this article.

Now these heatsinks can be deceiving for boards that are meant to be in a server rack unit with massive fans throwing a hurricane over them, but even then that is not very much power we're looking at there.

nxzero 1 day ago 0 replies      
Is there any way to detect what hardware is being used by the cloud service if you're using it? (Yes, I realize this question is a bit of a paradox, but figured I'd ask.)
LogicFailsMe 1 day ago 1 reply      
Perf/W, the official metric of slow but efficient processors. How many times must we go down this road?

Let's see this sucker train AlexNet...

rando3826 1 day ago 1 reply      
Why use an ANKY in the title? Using an ANKY (acronym no one knows yet) is bad writing; it makes readers feel dumb, etc. Google JUST NOW invented that acronym, and sticking it in the title like just another word we should understand is absolutely ridiculous.
simunaga 1 day ago 2 replies      
In what sense is this great news? Yes, it's progress, so what? After all, you - programmers - earn money for your jobs and pretty soon you might not have one. Because of these kinds of great news -- "Whayyy, this is really interesting, AI, machine learning. Aaaaa!".

"I'll get fired, won't have money for living and AI will take my place, but the world will be better! Yes! Progress!"

Who will benefit from this? Surely not you. Why are you so ecstatic then?

Chrome removes Backspace to go back chromium.org
668 points by ivank  1 day ago   551 comments top 102
klodolph 22 hours ago 12 replies      
I'm going to take a somewhat contrarian view and say, "Thank you, Chrome developers."

It's always easy to tell apart the people who know shortcuts from the people who don't, if you watch them use their computers. Someone with a few shortcuts on tap will zoom around their monitors, switching between mouse and keyboard only when necessary.

But there are a few shortcuts and user interface quirks that are too outdated and weird, and only serve to surprise and annoy us. They hail from an earlier age when people were still figuring things out in new UI paradigms. For example, these days, you expect the scroll wheel to scroll up and down in a scrolling view. However, my coworker was changing some project settings in Visual Studio the other day, and he tried to scroll through the settings while a drop-down menu in the settings had focus. It scrolled through the menu options, selecting them, instead of scrolling through the view. He had to cancel the changes he was making and open the window again, because he couldn't remember what was originally selected.

This is the worst kind of surprise. Something you thought was just supposed to let you look at different parts of the interface instead modified the data you were looking at. Backspace to go back is a similar surprise. It's supposed to delete text, but instead it can navigate away from a page entirely, if you are in the wrong state when you press backspace. For the same reason, I'm even getting sick of the old middle mouse button paste, since it's too easy to press when I'm scrolling.

Forward and back navigation are already mapped to alt + left and right arrow. Let's reserve backspace for deleting text. (I'm not happy that it sometimes means "navigate up a level", but that might tell you what kind of computer I had growing up.)

Jedd 1 day ago 20 replies      
Chrome / Chromium have a habit of making these arbitrary changes that seriously annoy some (arguably small) percentage of their users, claiming that it makes things simpler / better for everyone else, while explaining impatiently why it's infeasible to make the now-missing feature a configuration option.

Evidently the kinds of people that can't be bothered going into the Advanced Configuration Settings page would be confused by an additional item in the Advanced Configuration Settings page.

I never used the backspace button for back (though it's probably what's mapped to my mouse button #8 - I'll know on the next upgrade), but I did get mightily annoyed by two changes a while back, and am always happy to bring them up whenever there's a story about Chrom* devs doing this kind of thing.

1. snap-to-mouse - while dragging the scrollbar, if you move the mouse further than ~80 pixels away from the scrollbar column, the page jumps back to the original location - apparently MS Windows users love this feature, but chrome/chromium is the only application I've found on GNU/Linux that does this, and

2. clicking inside the URL bar selects the whole contents - apparently MS Windows users are used to this feature, but chrome/chromium is the only application I've found on GNU/Linux that does this.

No idea what the defaults are for OSX, and, really, it doesn't matter - these features should be sensitive to extant defaults on whatever desktop environment the browser finds itself running on.

ruipgil 1 day ago 7 replies      
I might be the minority here, but I think that using the backspace to go back is counter intuitive. In my mind backspace is to delete something, and I always worry about that.
floatboth 22 hours ago 5 replies      
Good. I always set browser.backspace_action to do nothing in Firefox, because this is SO infuriating. You think you have a text field focused but you actually don't (e.g. accidental mouse click removed the focus), you press Backspace and BOOM! suddenly you're on the previous page.

Ctrl/Cmd+[ and ] is the real shortcut!
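For reference, the pref mentioned above goes in user.js (or via about:config); the commonly documented values are 0 = navigate back, 1 = scroll up a page, anything else = do nothing:

```js
user_pref("browser.backspace_action", 2);  // 2 = backspace does nothing
```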

oneeyedpigeon 1 day ago 8 replies      
One of the contributors states:

"Building an extension for this should be very simple."

Why on earth isn't there just a generic keyboard-shortcut preference where I can control every possible browser action and its associated keyboard shortcut? In fact, why isn't this available at an OS level? Surely it would remove a lot of unnecessary duplicate code.

dandare 1 day ago 3 replies      
"We have UseCounters showing that 0.04% of page views navigate back via the backspace button and 0.005% of page views are after a form interaction. The latter are often cases where the user loses data. Years of user complaints have been enough that we think it's the right choice to change this given the degree of pain users feel by losing their data and because every platform has another keyboard combination that navigates back."

Personally I am shocked that the Chromium team ignored years of user complaints before they decided to fix what their own usability studies found to be a worthless yet painful gimmick.
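Taking the quoted counters at face value, the two percentages relate like this:

```python
back_nav = 0.04 / 100     # share of page views reached via backspace-back
after_form = 0.005 / 100  # share of page views: backspace-back after a form
print(round(after_form / back_nav, 3))  # 0.125
```

So roughly one in eight backspace navigations follows a form interaction, i.e. is a likely data-loss event.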

ChrisArgyle 1 day ago 3 replies      
Analysis from Chrome devs here https://codereview.chromium.org/1854963002

Though I am a frequent user of backspace in Chrome I'm inclined to agree with their decision. Almost no one is using it and casual users are confused by it.

I'll just wait for someone to implement the feature in an extension.

kibwen 1 day ago 4 replies      
This is going to sound hyperbolic, I'm sure, but backspace-as-back is enormously important to my browsing experience. When I recently installed Ubuntu I had a small moment of panic when I realized that hitting backspace in Firefox performed some Ubuntu-specific thing rather than navigating backwards (as it does in Windows), but fortunately there's an about:config pref to re-enable the behavior. Just my two cents.
FollowSteph3 1 day ago 0 replies      
I think this is very good. I can't tell you the number of times I've lost form data by hitting backspace.

For those wondering how: if you use ctrl-backspace to erase a word, it's very easy to miss, especially as you transition between word delete and single-character delete.

The other common use case for errors is when you think you're editing in a field and you're actually not: bam, you just lost all your form data.

I also like the idea that backspace is for text editing and not for a second feature such as navigation. For Enter, yes, but not backspace.

samuellb 1 hour ago 0 replies      
I welcome this change, because I've lost a lot of form data with the backspace key. Not specifically in Chrome, because I remember having the same problem with IE 6 at the time when Firefox was still in alpha and was called "Phoenix".

Now I wish that the Thunderbird developers would also remove or change their single-letter shortcuts that are easy to mistype. E.g. "A" for archive, which creates an undeletable "Archive" folder in your mail account. There's a bugzilla issue for it here: https://bugzilla.mozilla.org/show_bug.cgi?id=615957

EdSharkey 1 day ago 3 replies      
This feels a bit like how Esc was nerfed over the years in Firefox and others until it essentially did nothing. It used to mean STOP. All sockets were closed, the page stopped loading, and I think way waaay back, even animated gifs stopped cycling and JavaScript timeouts and intervals were cancelled.

Single-page webapps were the death of Esc; it was too confusing for users to have a page suddenly hang because they pressed Esc for some reason and all the XHR connections silently closed. "Stopping" just no longer made sense.

Just going to need to train the old timers on the new key strokes. It is sad though when convenient controls are taken away.

gjvc 1 day ago 3 replies      
This is most annoying. I have used this for the past twenty years and have not lost form data using it. In any event, chrome seems to remember form contents upon navigating back to a form page.

Leave my muscle memory alone please.

pfarnsworth 23 hours ago 0 replies      
Thank GOD. So many times I've been filling out forms and sometimes I hit backspace to delete something, and maybe I clicked on a dropdown, but it goes back one page and I lose everything. Not the end of the world, but pretty annoying and I'm glad they're removing this.
spo81rty 1 day ago 1 reply      
This has always been annoying when done by accident. Good riddance!
crazygringo 1 day ago 1 reply      
Finally! It's about time. I don't know who ever thought having a command that didn't use a modifier key was a good idea -- it's not just about losing form data (even if that's protected against), a webpage can have all sorts of "state" you don't want to lose.

Also, what's so hard about tapping Cmd+Left or Ctrl+Left to go back? It's all I've ever done, it's incredibly intuitive, and it's simple to do with one hand (using the right Cmd button), at least on most keyboards I've seen.

dhd415 22 hours ago 1 reply      
I think comment #32 (https://bugs.chromium.org/p/chromium/issues/detail?id=608016...) is worth highlighting:

 If you can fill out a formular field correctly without losing focus, you are not part of Chrome's target audience. edit: Had to type this four times due to accidently going back.

nikanj 21 hours ago 0 replies      
I can't count the number of times I've noticed a typo in a form, hit shift-tab one time too many or few, hit backspace and ended up losing all of the info I filled in. The forward button mostly just leads to "resubmit form data?", instead of bringing me back.
greggman 19 hours ago 0 replies      
Oh thank you thank you THANK YOU!!!!

I can't tell you how many times I've lost data because of backspace! Good riddance.

Now, please also get rid of pull down to refresh in iOS Chrome because that has also lost me data a ton of times as well. I don't even know who uses that feature. I don't need to refresh most pages and if I do there are better ways.

itslennysfault 22 hours ago 0 replies      
master race!!!

...but seriously, if I had a dollar for every time I've tried to hit "delete" (backspace on mac) to delete something I had selected in a web app and had it navigate back losing my unsaved changes I'd have a couple bucks.

It's rare, but it's annoying when it does happen.

YeGoblynQueenne 5 hours ago 1 reply      
We have UseCounters showing that 0.04% of page views navigate back via the backspace button and 0.005% of page views are after a form interaction. The latter are often cases where the user loses data. Years of user complaints have been enough that we think it's the right choice to change this given the degree of pain users feel by losing their data and because every platform has another keyboard combination that navigates back.

We're doing this via a flag so that we can control this behavior should there be sufficient outcry.

Oh dear lord, that's a horrible idea. You make a change to your software to fix a problem that is not caused _by_ your software? If a form is confusing enough that the user thinks they have focus when they don't and ends up losing data, then that's an issue with the form, isn't it? Not the browser and not the button.

jneal 1 day ago 0 replies      
I've personally always used alt+left to go back. I know backspace does the same thing, but the only reason I know that is because I seem to hit it more frequently than you'd expect while not focused on a form field, causing my browser to go back unexpectedly. I've never lost data, though; it always seems to persist when I go forward.
mstade 22 hours ago 1 reply      
I wonder if this is in any way related to the exceptionally annoying thing on google.com, where if you hit backspace it doesn't navigate back, but starts removing characters from your search. It does this with other keypresses too, presumably so you can just keep typing till you find whatever you're looking for, but it's a flagrant disregard for my action of moving focus from the input field.

In any event, I use backspace to navigate back all the time, so this is sure to annoy me to no end. Especially since I use multiple browsers, and it'll be hard to break habits. Ah well..

djwbrown 21 hours ago 0 replies      
Cumulative time wasted using 'Command+[': none. Cumulative time wasted due to overloading of the backspace key: hours.

Relying on current context to determine the behavior of backspace was a terrible idea from the start. To hell with your muscle memory. Re-learn a shortcut that makes sense, and which will save you time one day, rather than insisting with hacker-machismo that you've never lost data in a form.

_pferreir_ 21 hours ago 0 replies      
As a web application developer, I second the motion to officially thank the Chrome development team for this. "Backspace" triggering "back" is a usability disaster, and not only for inexperienced users. We recently had issues with a 3rd party editor widget losing focus due to a bug, which led to people accidentally triggering "back" and losing their data (it was a rich text field, so you can imagine how much of a problem that was). Sure, the problem here was the widget, but using such a commonly pressed key as the shortcut for a potentially destructive operation is a recipe for disaster. More advanced users have the option to use a custom extension, or even mouse gestures. Just develop an "Advanced Chrome" plugin and the problem will be solved.

As a side note, it's interesting to see how such a small change (which, as mentioned above, is even reversible) can trigger such an outcry. I've read stuff such as "I've been using this shortcut for 20 years" or "I don't want an extension"... are those even arguments? Yes, applications should be "user-centred", but the "user" here is a collective of thousands or millions of people with their own incompatible opinions. There is a (very good) reason for this change and I've seen zero achievable solutions that would not imply it.

davb 1 day ago 0 replies      
They say 0.04% of page views are a result of pressing backspace. 0.04% sounds small, but imagine how many page views per month there are, globally, with Chrome. That's a significant number.

Backspace sure is an unusual navigation choice these days, and perhaps wouldn't make sense to code in new software. But in browsers, backspace to navigate back is expected behaviour.

This isn't the first time the Chrome or Chromium teams have made sweeping changes based on usage stats, pissing off the minority who use those features and pushing ever closer to a browser with only the lowest common denominator features that everyone uses.

retbull 21 hours ago 0 replies      
Good. Fuck, that was annoying. I actually came up with a new workflow for all browsers because of this: I always open links in a new tab, so if I want to go back I close the tab, and if I want to go forward I middle-click to open in a new tab. This only falls apart when I run into a multi-page form or application that requires text. When that happens, I hate that backspace goes back.
YeGoblynQueenne 4 hours ago 0 replies      
We have UseCounters showing that 0.04% of page views navigate back via the backspace button and 0.005% of page views are after a form interaction.

That's from the linked issue, the one that actually made the change.

So, um. What is "UseCounters"? Does this mean that when you're entering text in a form Chrome is registering your keypresses?
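To answer the question at least in spirit: UseCounters in Chromium record aggregate counts of how often a browser feature fires, not what you typed. A hypothetical sketch of that counting model (this is illustrative only, not Chrome's actual implementation; the feature names are made up):

```javascript
// Hypothetical sketch of aggregate feature counting in the spirit of a
// use counter: bump a per-feature counter when the feature fires, and only
// feature names and counts would ever be reported, never keystroke contents.
const counters = new Map();

function countFeature(name) {
  counters.set(name, (counters.get(name) || 0) + 1);
}

function report() {
  // Aggregate view of what would leave the client in this model.
  return Object.fromEntries(counters);
}

// Two backspace-navigations, one of them right after a form interaction.
countFeature("BackspaceNavigatedBack");
countFeature("BackspaceNavigatedBack");
countFeature("BackspaceNavigatedBackAfterFormInteraction");
```

Under this model the 0.04% / 0.005% figures are ratios of such counters to total page views, which doesn't require logging individual keypresses.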

pbiggar 21 hours ago 0 replies      
Well done Chrome. While it's the way that I go back and I now need to change my habits, this is the kind of hard decision that you need to make to have a really great product. They weighed the upsides and downsides, and pissed off a small subset of people (esp on HN who are likely to be the backspace-as-back users) to make a better experience. Bravo!
Hermel 21 hours ago 0 replies      
Finally! I don't know how often I accidentally navigated away from a page by pressing backspace while writing in a textbox.
jbb555 1 day ago 0 replies      
They don't spend any time fixing everything that's broken with modern computers, instead they spend time changing things that weren't broken. Great.
slavik81 19 hours ago 0 replies      
My first instinct was to bemoan its loss, but after thinking about it, I make this mistake far too often.

I actually just lost a draft of an annual self-assessment to this. I wanted to delete some text, but I guess I didn't have focus in the text box, and hit back. The form was created by an awful website (PeopleSoft/Oracle), so hitting forward didn't bring my data back.

Sure, it was just 20 minutes of work. Sure, a better website would have had the fields autosaved, or at least not have broken the browser autofill. Sure, I could have written it in a different program and then pasted into a browser.

But seriously, that should never happen. Not like that.

Mithaldu 1 day ago 1 reply      
So they applied the wrong fix, to a problem that had been solved a decade ago.

The problem: Moving away from a form can result in data loss.

Their solution: Make it harder to move away?

The actual solution implemented more than a decade ago: Cache history completely and make it easy to move forward and backward in a tab's history while maintaining form contents.

brudgers 1 day ago 1 reply      
Having a friend who often operates their keyboard by the old stick-between-the-teeth method, I'd like to see an analysis demonstrating that the breaking change improves accessibility. Particularly since the alternative posed in the thread is the chorded alt-left.
marcusarmstrong 1 day ago 0 replies      
Finally! I can get rid of my third-party extension for disabling this insane behavior.
Kadin 21 hours ago 0 replies      
Striking a blow for mediocrity. Ugh.

If there was really a problem with data loss, the better solution would seem to be warning the user before navigating away from the page. Removing a widely-used single-key behavior in order to protect users from themselves seems like a bad prioritization.

It'd be nice if we could still have software that is unashamedly not trying to target some sort of Archie Bunker "low information" user. Even the big Linux distros seem obsessed with making things easy for some hypothetical moron-in-a-hurry, at the expense of actual users who know what they're doing. It's unfortunate, and it seems to be a sort of antipattern that's infected a lot of software design. It wasn't always this way: there used to be an expectation that users would learn to use software, and that like any tool, if misused you could mess things up. Somewhere along the line, we've decided that it's unacceptable to tell users that they need to learn how to use software instead of blindly stabbing at it and expecting it to protect them.

I'm not against sane defaults or warning users before they really do something horrible, but the current trend towards ripping out anything and everything that might possibly be 'confusing' seems to be far overstepping the mark.

Firefox isn't much better, but at least they haven't Nerfed the back button.
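The "warn the user before navigating away" alternative this comment suggests already exists in the web platform as the `beforeunload` event, which pages with unsaved input can arm themselves. A minimal sketch (the dirty-tracking is factored into plain functions; the event wiring only runs in a browser):

```javascript
// Sketch of warning before navigation instead of removing the shortcut:
// track whether the page has unsaved form input, and arm a beforeunload
// prompt only while it does.
let formDirty = false;

function markDirty() { formDirty = true; }
function markClean() { formDirty = false; }
function isDirty() { return formDirty; }

if (typeof window !== "undefined") {
  document.addEventListener("input", markDirty);
  document.addEventListener("submit", markClean);
  window.addEventListener("beforeunload", (e) => {
    if (isDirty()) {
      // Modern browsers show their own generic prompt; calling
      // preventDefault / setting returnValue is what triggers it.
      e.preventDefault();
      e.returnValue = "";
    }
  });
}
```

The catch, of course, is that this relies on each site doing it, whereas the Chrome change protects users on sites that don't.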

davesque 22 hours ago 0 replies      
I agree with this. Accidentally going back when you lose focus on a text field is super annoying.
okonomiyaki3000 1 day ago 0 replies      
Thank the gods! I can't imagine who ever thought backspace as back was a good idea in the first place.
emodendroket 22 hours ago 0 replies      
I have to say, I find it a lot more common that I accidentally lose focus inside a text box and go back than that I intentionally use that shortcut.
Pharylon 18 hours ago 0 replies      
I guess I can uninstall Backstop now (a Chrome extension that literally exists only to disable backspace to go back). I've been running it for years now.

For you 1% of people that actually use the Backspace key for going back, I'm sure someone will come up with an extension to re-enable it, don't worry.

avehn 20 hours ago 1 reply      
People who cannot use a mouse or see a screen, who rely exclusively on keyboard commands, will be greatly affected by this change.

Global Accessibility Awareness Day https://www.w3.org/WAI/perspectives/

whoisthemachine 18 hours ago 0 replies      
Positive change in my book. This key was always overloaded, leading to unintentional usages. Using backspace as a "navigate back in history" shortcut never worked reliably for me in any of the browsers I've used extensively (Chrome, FF, and IE).
soheil 20 hours ago 0 replies      
This is actually a no-brainer. Many times I have accidentally tapped on my mousepad while typing, taking the focus away from a textarea, then noticed a typo and tried to delete it, and baaam, you're no longer on that page and possibly all the text you typed has gone into the abyss.
mcrmonkey 14 hours ago 0 replies      
ffs this is stupid. Backspace has always been "back" in browsers, and it really vexes me when some versions of Firefox on some Linux distros do this. Two hands have to be used to action this, because the right alt key is either not mapped (on some OSes) or is AltGr. Backspace works well when moving quickly too: one finger from home row and bam.

Rather than this being the fix, they should probably look at the bug that's causing the user to go back when the form element is focused.

What's next? Take away the space bar for moving through the page?

An about:config thing needs to be present for this to allow the user to switch between what they want. Sure, extensions are possible to fix this too, but I don't really want a 3rd-party extension to re-enable what's a tried and tested keyboard shortcut. Additionally, what happens if that dev's account gets hacked and the extension modded for malice? Or if the dev pulls the thing, in a way similar to the node.js module issue a month or so ago.

This part is worrying though:

We have UseCounters showing that 0.04% of page views navigate back via the backspace button and 0.005% of page views are after a form interaction.

Where is that data being gathered from and how?

Additionally, what is classed as a form interaction?

jasonm23 22 hours ago 0 replies      
Wontfix - just apply the worst possible, cruddy fix and shut down discussion.

Forgive me if I do not applaud.

jdelaney 17 hours ago 0 replies      
I wrote a quick Chrome extension to fix this for those interested.

Extension: https://chrome.google.com/webstore/detail/back-to-backspace/...

Source: https://github.com/j-delaney/back-to-backspace
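For the curious, a minimal "backspace means back" content script could look something like the sketch below (this is a hedged illustration, not the linked extension's actual source). The whole trick is deciding whether focus is inside an editable element before letting Backspace navigate:

```javascript
// Hedged sketch of a minimal backspace-restoring content script.
// Backspace should navigate back only when focus is NOT in an editable
// element; otherwise it must keep its normal delete-character behavior.
function shouldNavigateBack(target) {
  if (!target) return true;
  const editableTags = ["INPUT", "TEXTAREA", "SELECT"];
  return !editableTags.includes(target.tagName) && !target.isContentEditable;
}

// Extension wiring; only meaningful in a browser context.
if (typeof document !== "undefined") {
  document.addEventListener("keydown", (e) => {
    if (e.key === "Backspace" && shouldNavigateBack(e.target)) {
      e.preventDefault();
      history.back();
    }
  });
}
```

Note the `isContentEditable` check: rich-text editors are exactly where the old behavior caused data loss, so a naive tag-name check alone wouldn't be enough.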

djrconcepts 17 hours ago 0 replies      
Great news! I have never intentionally hit backspace to go back, and yet I've hit backspace by accident and been taken back many times. Quite annoying when it happens.
grandalf 1 day ago 2 replies      
I've never used backspace for nav intentionally, and it has caused annoying data loss for me a few times.

It's never made sense to me why this behavior was ever added to browsers. The logical choice would have been the left arrow key (since there is a corresponding right arrow).

SeriousM 9 hours ago 0 replies      
"If you can fill out a form field correctly without losing focus, you are not part of Chrome's target audience.

edit: Had to type this four times due to accidentally going back."

Made my day.

Osiris 22 hours ago 0 replies      
This is why I switched to Vivaldi. All the shortcut keys are customizable. It also includes mouse gestures for quick back/forward with the mouse. I prefer having the choice rather than someone else dictating.
daveloyall 20 hours ago 0 replies      
I got to this thread late, sorry.

Here's the thing about Chrome... They don't want power users.

Remember when you first switched to Chrome? That sleek little pastel-colored window, elegantly fast. It worked on most websites. It was notably fast on Gmail, which at the time was the slowest website you spent a lot of time on.

You didn't mind that Chrome wasn't configurable. You might even have thought that it would become more configurable over time.

You were wrong. You were never the target audience.

I once had an infuriating (to me, at the time) argument with a Googler who was responsible for an internal app which performed better in Firefox than in Chrome. He said "Use Firefox!". I didn't get it at the time. He was a power user, all his co-workers were power users, and thus the internal app was only used by power users... They all used Firefox! At least for real work... (Pretty sure they all had Chrome on hand for Mail and Maps, etc...) Anyway, the internal app correctly targeted Firefox.

Meanwhile, back in time, when Chrome came out, Firefox started hemorrhaging users. Mozilla reacted. Today, it's as fast or faster than Chrome for most sites I use. And it's configurable!

If you are reading this, and don't have the latest beta or Nightly FF installed, you should go do so now! Really, trying Firefox after being away for years will make you smile and renew your faith in humanity. :)

But unfortunately, this story doesn't end there...

I think some Firefox devs see Chrome as a role model... Maybe they want to compete with Google for those users who are not you! As a small example, I offer this: https://bugzilla.mozilla.org/show_bug.cgi?id=1118285 Note the posts that are marked "Comment hidden (advocacy)". You can click the [+] to show what was hidden (comments from power users).

There are niche browsers for power users, and there are extensions... But there isn't a mainstream browser for power users because power users aren't mainstream.

I'm just describing the problem (well, I hope!), I'm sorry but I don't have a solution.

azinman2 22 hours ago 1 reply      
I love how most of the comments on the bug tracker are "I've never lost data so therefore no one has and this should go back because I'm used to it."

Typical myopic power users...

kbenson 22 hours ago 0 replies      
About time! I've been seriously annoyed a number of times when doing text/data entry (on this very site!) where somehow I removed focus from the input, and then tried to erase some text, only to find it going back a page, and my input gone when I browse forward again (this problem is exacerbated by inputs that don't exist until JavaScript creates them from some page event).

When using a laptop with a sensitive touchpad, this can get really bad.

usaphp 22 hours ago 0 replies      
I am glad they are making this change. I've lost my form data by accidentally going back while trying to erase something in a text field so many times.
math0ne 17 hours ago 0 replies      
The number of times I've accidentally navigated away from a form by hitting that damn shortcut!
eropple 21 hours ago 0 replies      
So I'm going to need another Chrome extension, further exacerbating the gong show that is Chrome battery life, for something I use all the time.


MrBra 4 hours ago 0 replies      
Is there a setting to re-enable it?
dghughes 13 hours ago 0 replies      
Wouldn't it make more sense to have a pop up "Are you sure you want to navigate away?" solution instead?

This is the very definition of throwing the baby out with the bathwater.

copperheart 17 hours ago 1 reply      
Big thanks to the Chrome devs for this. I applaud and personally appreciate the decision, but wonder why a navigation shortcut like this couldn't be made into an option for others to enable or disable based on their preference.
swingedseraph 22 hours ago 0 replies      
Do I like this? Yes. Should it be an immutable part of the interface and not configurable? No. That's ridiculous.
lr 23 hours ago 0 replies      
On OS X, Command-left bracket has worked on Chrome, Safari, Firefox (and probably more) for years. Not sure about Windows or Linux, but keyboard shortcuts are very well established across browsers in this way (like Command-L, and all of the Emacs bindings like Control-A, Control-E, etc.).
backtoyoujim 11 hours ago 0 replies      
I bet that eventually the "quit application" feature for Mac OS X is going to be offshored.
Kiro 22 hours ago 0 replies      
As someone who constantly gets screwed by this: finally!

Example from literally one minute ago: a cool thing was going on in a Twitch stream and I wanted to hype in the chat, misclicked the chat box, so backspace went back to the stream list instead, making me miss the moment.

jijojv 20 hours ago 0 replies      
Thank you. This is the right fix for the 99% of users who'd otherwise lose data.
perezdev 21 hours ago 1 reply      
>Are you suggesting that the only remaining options are Alt-Left (a two-hand key combo for that I have to move my mouse hand towards the keyboard, and then back)

I guess no one told this guy that a standard keyboard has two ALT keys.

ogreveins 1 day ago 0 replies      
I would very much like them to revert this change. Using backspace to go back has been in my browsing habits since I began using the internet. Alt+left or right is annoying. Either give us a checkbox or revert it. Please. Pretty please.
andrei_says_ 13 hours ago 0 replies      
It's the most frequently used key for me when I browse.

Any way to add it back? Maybe an extension?

XorNot 16 hours ago 0 replies      
I'm surprised people think this is a bad idea? I know of no one who uses backspace this way in the browser.
rocky1138 1 day ago 0 replies      
I don't mind removing backspace, but this better not remove the functionality of my back button on my mouse. That's one of the worst things about having to boot into OSX at work.
henvic 19 hours ago 0 replies      
What the hell? Just ask the user if he intended to go backwards when there is a form on focus or something.
mixedCase 21 hours ago 0 replies      
Why must every browser out there suck? Servo seems like the last hope if it gets integrated into FF or a FF-like browser.
jordache 1 day ago 0 replies      
On the topic of annoying details that browser makers overlook:

In Safari, when opening a new tab, the focus is not on the address bar. I always have to hit Cmd+L before I start typing. The address-bar focus works when you don't have a homepage defined (so a blank page), but who doesn't configure a default home page? arrghh

hosh 18 hours ago 0 replies      
The very first comment says:

"How is someone who grew up in terminal times expected to navigate back when using a two-button mouse?"

I grew up in terminal times. I was lucky that, while growing up, I had access to my father's Unix account through the university. Not only that, I do all of my development work on the terminal (via tmux, vim, and spacemacs). I like the terminal. I love keyboard shortcuts. Keeping my hands in the home row -- awesome!

The backspace in the browser has always struck me as a misfeature. I've lost data when typing in forms.

In contrast, when I browse a page, I rarely hit the back button. I'm more likely to open a link in a new page when I am doing serious research.

Times move on. Some things are lost, and our civilization is not the better for it. This is not one of those cases.

dc2 22 hours ago 0 replies      
I just hit backspace to go back to HN after reading this... and it didn't work.
yAnonymous 1 day ago 0 replies      
I hope they also remove support for forward/back mouse buttons. I keep accidentally pressing those.
rietta 21 hours ago 0 replies      
NOOO! That's how I go back! And use the space bar to scroll up and down!
sammorrowdrums 1 day ago 0 replies      
Good riddance! This is such a terrible double-purpose binding. When a common typing key is not just bound, but bound in a way that is often destructive, it just needs to die.

Anyone who thinks this shouldn't die is basically a bad person. It was an affliction, and one of the poorest design choices in history. :-p

lpsz 1 day ago 1 reply      
Sometimes, it's better without these features. E.g. on Mac, dragging left in the browser is a gesture for going back to the previous page, and I can't count how many times I've accidentally triggered that while filling out a web form or interacting with a page. Isn't the back button and the keyboard shortcut enough?
jdhzzz 23 hours ago 0 replies      
Thank you.
tehchromic 18 hours ago 0 replies      
Hip hip hooray!!!
autoreleasepool 21 hours ago 0 replies      
I can finally uninstall BackspaceMeansBackspace!
bluhue 20 hours ago 0 replies      
Space-bar next!
OJFord 20 hours ago 0 replies      

I've only ever done this by accident.

monochromatic 14 hours ago 0 replies      
Morons don't know how to use our web browser? Better break it!
mdholloway 12 hours ago 0 replies      
Thank god.
optimuspaul 19 hours ago 0 replies      
finally, now maybe I can go back to Chrome.
logicallee 17 hours ago 0 replies      
I'm a bit late to the party (already 408 comments) but, guys, here is an example of what happens currently in many browsers:




(this example is not prescriptive, it's just what happens)

At any rate, the GIF shows the current situation. You should watch it.



I actually wrote to this app creator that they should throw up a confirmation window ( like these https://www.google.com/search?q=confirm+navigation&tbm=isch )

but the fact is that the browser is the one that decided to navigate away. Now what's very interesting, is that even in this, HN's, thread we have people saying "Yes!!!" and people saying "No!!!" to the change.

So people like me, who have simply never used backspace for navigation, have many times accidentally touched backspace, or thought we were focused on a form, and ended up losing data (because the page didn't throw up a confirmation window after navigating back, and after clicking forward the page is blank again). While other people, who would otherwise have no convenient single key to navigate back, have come to rely on it. I'm not sure what the solution is, but here's the current situation so everyone understands it.

april1stislame 22 hours ago 0 replies      
Never lost data on Firefox by going forward after accidentally going back while writing in a form, but whatever... Google only wants dumb users who can't see past what they're doing.
givinguflac 22 hours ago 0 replies      
One more plus for Vivaldi.
dredmorbius 19 hours ago 0 replies      
Google Chrome has fixed a longstanding UI/UX bug and the state overload of the Backspace key.

Backspace key in Chrome browser no longer navigates backward, but instead is limited to its initial and rightful role: deleting the previous character under the pointer (mouse / text cursor).

I swear by His Noodliness I'd ranted on this at G+ some time ago, though unfortunately since Microsoft Bing Search isn't available on Google+, I cannot actually find shit in a useful fashion.

That said, I applaud this change, thumb my nose at the fuckwits who are bitching about it, and note again the Flaw of Averages: One Size Fits None.

As to the justification of not relying on Backspace for Navigation

I ordinarily take exception to blame-the-user / taunt-the-user practices, and should hasten to explain my own here.

Learning a New Backward Navigation Method is a Temporary Training Inconvenience.

Repeatedly losing Vast Quantities of Newly Composed Content is an Irrevocable User State Loss.

Among the canons of human-computer interface design is this: Thou shalt not fuck with thine users' State.

Which by definition makes those who fail to make this distinction fuckwits. Perhaps only ignorant fuckwits, a curable state, though quite possibly and regrettably stupid fuckwits, a State of Extreme Durability in my experience.

The larger fault is arguably the lack of a clear stateful separation of editing from browsing modes in Web browsers. Editing involves creating novel user state which can be easily lost through capricious client behavior, such as, to draw on a randomly selected example, fucking overloading the backspace key with the behavior of "delete my highly considered and Very Important Message to the Universe by immediately and irrevocably moving off this page".

It's with some irony that I note that console-based Web browsers rarely have this problem. The w3m browser, for example, when editing a text field, dumps the user into a local full-powered editor, and in fact defaults to the one specified by the user's environment ($VISUAL, $EDITOR, etc.). The result is that a "primitive" browsing tool actually has an exceptionally powerful editing environment.

(At this point, the Emacs users in the room are of course laughing and pointing at me, but they in fact entirely substantiate my claim in doing so. And, my dear good friends, I've given not inconsiderable thought to actually joining you, as it seems that via Termux, a commandline environment for Android, emacs and all its capabilities are in fact available to me, and may vastly surpass the Android applications environment in capabilities. The fact that Viper is a well-established and long-standing component of the Emacs landscape means that the One True Operating System now does in fact have a useful editor.)

Chrome has other utterly unredeemable failures on Android, including an utter lack of ad-blocking capabilities. But for the task of composing and editing, this is a nice touch.

But it does raise one further point: why is editing via Web tools so abysmally poor?

Despite various deficiencies, the G+ app actually compares favourably to a number of other platforms, and virtually all Web-editable tools. Reddit and Ello stand out particularly. As much as I love the Reddit Enhancement Suite full-screen editor (it's a browser extension for Firefox and Chrome desktop), it's not available on Android. Meaning I've got to jump through Multiple Divers Hoops in order to compose long-form content on Reddit. Android's various content-creation deficiencies make this a tedious process. This accounts for some of my Diminished Output in recent months.

In particular, Firefox/Android has proven Exceptionally Capable at Losing My Shit, at least in memory not exceptionally distant (considering I've owned my present Samsung[tm] Infernal Device[r] only since October last), a characteristic which makes me Exceptionally Leery of Embarking on Enterprises of Extensive Prose Composition within that context.

Given the, shall we say, exceptional advancement of text-composition in other contexts, I find this particular failure mode of the Browser Development Community in General most unpardonable.


kruhft 20 hours ago 0 replies      
Good, now I don't have to fix one of my sites to handle backspace 'properly' that uses a custom keyboard handler for input. What a pain.
ryanlol 22 hours ago 0 replies      
Because adding customizable keybinds is too difficult? Hell, if configs looking scary to normal users is a problem why not just have a json/sqlite/whatever file in the profile directory?
exabrial 1 day ago 0 replies      
THANK YOU!!!!!! Progress
soperj 22 hours ago 0 replies      
And i'll never use chrome again.
smegel 20 hours ago 0 replies      
So it takes 51 versions before common sense kicks in?

Have a pat on the back.

optforfon 1 day ago 1 reply      
Anyone want to place bets on how long till Firefox copies them?
kaonashi 22 hours ago 0 replies      
Next up: remove form submit on return from textarea fields.
alexc05 23 hours ago 1 reply      
> "We're doing this via a flag so that we can control this behavior should there be sufficient outcry."

I love that they decided to do this. I think the justification for taking it away is really good.

I also think that the decision to disable via "flag" shows some prescience with respect to how the public reacts to things.

Great move and a template for "sound product development".

IvanK_net 1 day ago 1 reply      
I wanted to use Ctrl+N, Ctrl+O and Ctrl+T shortcuts in my webapp. I reported a bug 3 years ago https://bugs.chromium.org/p/chromium/issues/detail?id=321810 which is not fixed yet, but they have "fixed" Backspace ... that seems crazy to me.
Google Home home.google.com
570 points by stuartmemo  1 day ago   451 comments top 72
cheald 1 day ago 22 replies      
When I was younger, I dreamed of something like this. Voice control for my home! A Star Trek computer that I can interact with conversationally! I just say what I want and it happens!

Now, I just see an internet-connected microphone in a software black box which I can only interpret as a giant frickin' security liability. I want this, but unless it's open source top-to-bottom, I won't ever actually put one in my home. We know too much about how these things can be abused for me to ever seriously consider it without being able to verify for myself what it's doing and why.

t0mbstone 1 day ago 10 replies      
Please, please, please be a completely open, extensible platform...

I want to be able to control my Apple TV with my Google Home device.

I want to be able to control my Phillips Hue and LiFX bulbs.

I want to be able to build my own custom home automation server endpoints and point my Google Home commands at them.

I want to be able to remote start my car with a voice command.

I want to be able to control my Harmony remote, and all of the devices connected to my Harmony hub.

I want to be able to access my Google calendar.

I want to be able to make hands-free phone calls to anyone on my Google contacts.

If my grandmother falls, I want her to be able to call 911 by talking to the Google Home device.

I want to be able to ask wolfram alpha questions by voice.

I want to be able to have a back-and-forth conversation to arrive at a conclusion. I don't want to have to say a perfectly formulated command like, "Add an event to my calendar on Jan 1, 2016 at 2:00 pm titled go to the pool party". I want to be able to say, "Can you add an event to my calendar?", and then answer a series of questions. I hate having to formulate complex commands as a single sentence.

I want to be able to have a Google Home device in each room, without having to give each one its own wake-up word. Just have the closest one to me respond to my voice (based on how well it can hear me).

I want to be able to play music on all of my Google Home devices at the same time, and have the music perfectly synchronized.

This is my wish list. I am currently able to do more than half of these items with Amazon Echo, but I had to do a bunch of hacking and it was a pain in the ass.

If Google Home can deliver on these points, I would switch from Amazon Echo in a heartbeat.

koolba 1 day ago 4 replies      
RFP - Request For Project

1. Train Google Home to recognize Amazon Echo's voice as its owner.

2. Train Amazon Echo to recognize Siri's voice as its owner

3. Train Siri to recognize Google Home's voice as its owner

4. Kick start some kind of endless loop between the three of them.

frik 1 day ago 1 reply      
Google, thanks for shutting down Freebase.com on 2 May 2016. By taking it offline and using it (the Knowledge Graph) for Google Home, you effectively locked out all competitors. WikiData is a far cry from what was Freebase, and a fraction of its size.

Freebase was a large collaborative knowledge base consisting of data composed mainly by its community members. It was an online collection of structured data harvested from many sources, including individual, user-submitted wiki contributions. Freebase aimed to create a global resource that allowed people (and machines) to access common information more effectively.


Google is using a lot of collaboratively collected data from the now-closed Freebase and from Wikipedia without giving it back.

will_brown 1 day ago 2 replies      
When Windows 10/Cortana was released, my buddy attached a mixer/switch to his PC, allowing him to wire input mics and speakers into every room in his house.

And though I can't see any personal uses for such a device, he swears it has changed his life, and the only thing I believe he does with it is tell Cortana to play Van Halen first thing when he wakes up.

protomikron 1 day ago 6 replies      
Ok, controversial opinion:

"[...] and manage everyday tasks"

What exactly do we want to automate at home? I think this whole home automation and smart home stuff is complete bullshit. Obviously there are some nice things, like "play me song xyz", but IMHO it is completely oversold. There are just not that many things to automate at home.

And this does not mean that I think 640K are enough memory for everyone.

fizzbatter 1 day ago 1 reply      
I'm dying for an Echo / Home that is fully API-friendly and allows custom keywords. I want to buy an interface to my own home assistant. I want a hacker's friend.

Sure, offline-capable would be great too, but for now just give me the damn api hooks. :s

edit: Note that I believe Echo has a pretty good API. I just don't want to talk to Echo haha. I want to talk to my system.

JarvisSong 1 day ago 1 reply      
ITT smart hackers asking for more features and noting the privacy implications. Unfortunately, this, Echo, and others are coming for the masses, the masses who have everything public on Facebook and won't really understand the issues until it's too late. Give it a few years and 'everyone' will have a Star-Trek-like home computer experience. What can we do to turn the tide in favor of privacy and security? Or do we just trust Google/Amazon will do the right thing?
deprave 1 day ago 2 replies      
A company that makes money by collecting and selling access to personal information about people is offering to put a microphone in your home.

If you need a product like this, for the sake of your privacy, buy an Echo.

izolate 1 day ago 6 replies      
Looks like something that should've been under the Nest brand. Whatever happened to that?
pbnjay 1 day ago 1 reply      
I find it odd that Google is going to take so long to get this out the door - "later this year" seems like ages. Did they start on the hardware that late?

Amazon has what, 6 months to get more competitive on the search/trivia front? Or this is going to kill it.

free2rhyme214 1 day ago 2 replies      
Competition against Amazon Echo is always positive for consumers.
swalsh 1 day ago 1 reply      
As someone who runs a small ecommerce company, I'm really hoping the next platform is open, and not owned by Amazon (or Google). I sell products where purchasing them would be fantastic via a voice interface. If Amazon owns it though, there's no way I'm going to get any fraction of that business. The ownership of these voice platforms is a huge risk for market competition. The voice interface naturally lends itself to "choose the first choice that fits my parameters, and let's go with it". If you say "Alexa, book me a taxi to the airport", Alexa chooses who takes you. Being the priority choice is a huge advantage for whoever wins that. It's just so much power in the hands of so few. It's the opposite of what the internet should've been.
grownseed 1 day ago 1 reply      
The page linked here is basically an ad with no content (yet it manages to have a scrollbar no matter the window size...). Tried to look for actual specs but couldn't find anything, does anybody have anything more substantial?

On another note, is there a way to just get some sort of remote microphone array (I think that's what it's called on the Echo) and set up Alexa/Google/Cortana/... directly on a PC?

zitterbewegung 1 day ago 0 replies      
I'm confident that Amazon won't kill off Alexa (due to its success). I am not so confident about Google: if this isn't widely successful, it could be killed off in the future just like Revolv was, bricking the device. It is good that Alexa is getting competition though.
beilharz 1 day ago 1 reply      
This gives me a 404.
shogun21 1 day ago 0 replies      
I am impressed Amazon was able to make a new product category. It's only a matter of time before Apple announces their take on Siri Home.
xiphias 1 day ago 0 replies      
"Always on call" - it just brings back the worst memories of waking up at 3am.
enibundo 1 day ago 0 replies      
Does anyone else feel like this kind of stuff (I'd put it in the same bag as the Apple Watch, and the Amazon something) is completely useless?

Personally, I feel we need to use less technology in everyday life.

partiallypro 1 day ago 0 replies      
Microsoft, where are you? Cortana on a device similar to the chip Master Chief has could be incredibly popular if done right. Especially since Cortana is on every platform and completely platform-agnostic, unlike Google Home and Echo. Give it the same extensible API Cortana has on Windows 10, etc., and it could be a home run. Don't let Google and Amazon eat your lunch here.

I do wonder though how Google/Microsoft/Apple will handle there being multiple instances of their devices able to take commands. So if I say "Hey Cortana" or "Ok Google", will the devices have to communicate with each other to only activate the one that is closest?

bbunqq 1 day ago 4 replies      
You too can bring a slice of 1984 into your home with this lovingly crafted listening device!
blabla_blublu 1 day ago 0 replies      
Competition in this space is welcome! Can't wait to see what sets it apart from Echo. Given Google's propensity to sell ads, it will be interesting to see if customers are willing to put a device like this in their house.

Reminded me of a humidifier for some reason - http://www.amazon.com/Aromatherapy-Essential-Oil-Diffuser-co...

struct 1 day ago 0 replies      
Looks neat, let's hope Google leads in 3rd party applications too and not just in appearance. Also interesting that they specifically gave a shout out to the Alexa team.
Roritharr 1 day ago 0 replies      
I love how they put the LG MusicFlow speaker on the Home presentation. I've been suffering that mispurchase for about a year now. I can rely on it not working 70% of the time: seemingly crashing, creating its own mesh wifi even though it's plugged into ethernet or attached to my home wi-fi...

If they can't get the third party vendors to get their Google Cast integration up to the reliability level of a Chromecast Audio, they should stop supporting this.

dmritard96 1 day ago 1 reply      
I think the most interesting thing in the echo and now google home narrative is that these are subsets of phones. Speakers, microphones and internet connections with only two substantial differences - they are powered 100% of the time and they have better speakers/acoustics. It will be interesting to see if those are substantial enough to overwhelm the obviousness of doing these through the phone in your pocket.
evolve2k 1 day ago 1 reply      
There's no way I'm putting something like this, collecting data directly for Google in my house.

Anyone else have privacy concerns?

pluc 1 day ago 3 replies      
It's blocked in Canada.


lazyjones 1 day ago 4 replies      
What's the business model for Google Home? Will it suddenly blurt out an advertising message in the middle of the night, or will it rather include subtle product placements in otherwise harmless answers?

Remember, it's made by a company that thinks it's appropriate to put text ads on the first spots of your search results, in increasingly confusing ways.

xenihn 1 day ago 0 replies      
Hey, I have that pasta strainer. The one that's being used to store citrus fruits for some reason...
gopher2 1 day ago 0 replies      
Yeah, I'm sticking with Echo because business model.
walrus01 1 day ago 1 reply      
How many months from release until the FISA court issues a secret order to turn one of these on 24x7x365 in a suspect's home, and stream the audio to the FBI "counter terrorism" people investigating a subject?
jug 1 day ago 0 replies      
This is interesting but to be honest I already have this on my phone, which is with me not only in my living room, but even in the street.
mattmaroon 1 day ago 0 replies      
I love my Echo but it has a couple weak points, all of which could be solved by a competent platform. I can't, for instance, just tell Alexa to play new podcasts from my lists or directly from the net (except through TuneIn, which sucks). It doesn't work with many home automation devices. Its AI is not that great when it comes to non-Amazon services.

I'm hopeful the Android platform will make this a better device.

kristianc 1 day ago 0 replies      
The search queries that get sent to Google are probably the least interesting part of this to them. Sure, Google will get some additional search queries and be able to target you slightly better, but it's a rounding error in terms of the data they already have.

The interest in this on Google's side is on having a permanently connected 'listener' on your network to identify which devices you're running and when. If it's running through your WiFi network, Google is going to know about it.

djloche 1 day ago 0 replies      
Voice controlled computer interactivity doesn't appeal to me, and double unappealing is the skynet factor to the whole thing.

Home automation doesn't need nor should it require signing over your privacy.

imh 1 day ago 0 replies      
In what world is "Always on call" an appealing phrase?
wodenokoto 1 day ago 1 reply      
What is this? All I get is:

 404. That's an error. The requested URL / was not found on this server. That's all we know.

theideasmith 1 day ago 0 replies      
The website is down now. For those who want to check it out, here's the link: https://web.archive.org/web/20160518173022/https://home.goog...
jredwards 1 day ago 0 replies      
Google Nope
alexc05 1 day ago 2 replies      
I really don't want to come off as super negative here ... but am I the only one who finds this one UGLY?

Compare to some of the other devices from previous years, and competitors:




It sort-of looks like a cheap air freshener. Maybe it'd grow on me, but I kinda think it is ugly.

Someone should manufacture a range of "tchotchke skins"

https://www.google.ca/search?q=tchotchke&tbm=isch so it could sit on your counter and look like something that you'd be happy to mix in with the rest of your decor. (angels, golden lucky-cats, porcelain hands, googly-eyed-wooden-owls https://s-media-cache-ak0.pinimg.com/736x/78/b5/80/78b580270...)

Anything to stop that thing from looking like a plug-in air freshener, really.

ComodoHacker 1 day ago 0 replies      
Next step: chemically analysing your kitchen fumes and flavors in nearly real time to profile your gastronomic habits.
gcr 1 day ago 0 replies      
Will they offer a rebate for this device to burned Revolv users?

Doing that would be a great gesture.

As it stands, I would be wary of purchasing one of these. How long would it last before Google tires of it?

machbio 1 day ago 0 replies      
Hope this is not as disappointing as OnHub. It would be helpful if they have a rich API to start with, rather than promising that the APIs are coming later...
conjectures 1 day ago 0 replies      
How is this different to having a smartphone on your person? Other than using an additional plug socket.
exodust 1 day ago 0 replies      
That page is so simple, yet even Google devs are "powering" these simple pages with multiple JS files. Why? Is it laziness? Or just some belief that Angular is required now for even "hello world"?

When viewing source I initially thought 'great, a nice clean HTML page'... after all, it's just 3 images fading between each other and a simple form.

But then at the bottom we see Angular, Angular Animate, Angular Scroll, and a fourth main JS file. Way to set an example, Google.

sickbeard 1 day ago 1 reply      
Remember when voice commands for your computer came out? It was cool but nobody talks to their computer. They won't be talking aimlessly in their kitchen either.
mathpepe 1 day ago 0 replies      
When the danger is so near we admire the foresight of those warning about it. Kudos to the FSF.
irrational 1 day ago 0 replies      
So to use it I have to get up and go to wherever it is plugged in? Why wouldn't I just use my phone which is always on me?
lamein 1 day ago 0 replies      
People don't care about their privacy anymore. Many of us do care about it, but we are not the majority.

This project relies on that fact.

raajg 1 day ago 0 replies      
Another Amazon Echo. Not at all interested.

I wish there was a text box:

Please never send me the latest updates about Google Home.

pbreit 1 day ago 0 replies      
Please support 3rd party streaming audio.
paulftw 1 day ago 0 replies      
Amazon has Echo, Google has Nest and now Home.

What could Apple's Project Titan be if not a smart home device?

sgnelson 1 day ago 0 replies      
How long before we find out the NSA has access to this and the Amazon Echo?
swasheck 1 day ago 0 replies      
kinda looks like my wife's essential oil diffuser. it'd fit right in if i wanted one.
csrm123 1 day ago 0 replies      
What happened to "Don't be Evil"?
Joof 1 day ago 0 replies      
Initially after snowden I thought, "the government and governments around the world will crack down on this behavior now".

I was naïve. Nobody cares. Now they viciously support such practices. As long as that exists, I can't buy into datamining devices. And it will always exist.

educar 1 day ago 0 replies      
Seriously, never in my wildest dreams did I think that technology would come down to this. Like many others, I dreamed a future where I could have an automated assistant at home. Just not this way! It's really all about ads and mining data, isn't it.
bobwaycott 1 day ago 0 replies      
Can we get the link changed to https://home.google.com/? Non-HTTPS just 404s.
King-Aaron 1 day ago 0 replies      
Sucked in, anyone who bought a Nest.
Kinnard 1 day ago 1 reply      
Why wasn't this done under Nest?
58028641 1 day ago 0 replies      
Till Google disables it...
gambiting 1 day ago 0 replies      
In 4 years Google will drop support for it leaving you with a pretty paperweight. Not interested, not from Google.
dharma1 1 day ago 0 replies      
looking forward to replacing my Echo Dot with this
tempodox 1 day ago 0 replies      
Now, we can volunteer for the Big Brother experience.
ilaksh 1 day ago 0 replies      
Only a one-sentence explanation, unless I missed something. It's an Echo competitor.
zozo123 1 day ago 0 replies      
ck2 1 day ago 0 replies      
So it's echo/alexa by Google?


Is there going to be a patent war?

bache 1 day ago 1 reply      
Oletros 1 day ago 0 replies      
And I suppose it will be another US only product/service from Google
romanovcode 1 day ago 1 reply      
Haha, no thank you. I don't want Google to listen to everything I say in my house.

Next thing you know it's going to tell me: "Smith! Put more effort into those crunches!"

jayfuerstenberg 1 day ago 0 replies      
I'm not so lazy that I can't hold my phone and google for something. Pass.
nkg 1 day ago 1 reply      
This morning a friend of mine got his gmail hacked, which means his Play, Maps, Music and everything got hacked also.

With Google Home, add your "everyday tasks" and voice history to this! ^^

Fast.com: Netflix internet connection speed test fast.com
622 points by protomyth  1 day ago   354 comments top 63
exhilaration 1 day ago 7 replies      
This is from Netflix, it downloads Netflix content and reports the speed back.

This is important because unlike your average Internet speed test (which ISPs take pains to optimize), there's a very real possibility that your ISP is happy to let your Netflix experience suffer - assuming they don't throttle it outright - as previously mentioned on HN:



CyrusL 1 day ago 6 replies      
Cool. I just redirected http://slow.com to https://fast.com .
finnn 1 day ago 7 replies      
For those hatin on speedtest.net and wanting upload, http://speedtest.dslreports.com/ and https://speedof.me/ have both been around for a while. The reason for fast.com is that it tests download speed from Netflix. ISPs can't prioritize it without prioritizing Netflix as well.
nlawalker 1 day ago 3 replies      
What I'd really love to see is this concept provided as a service by all of the big streaming/gaming/large-content-blob providers and aggregated into a single page.

I have absolutely no reason to believe that every well-known "speed test" app/site/utility out there isn't being gamed by my ISP. A speed test that showed me my actual streaming bandwidth from Netflix, actual download speed of an XX MB file from Steam, actual upload bandwidth to some photo-sharing service, and actual latency to XBox Live or some well-trafficked gaming service would be awesome.
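An aggregated test like that could be sketched roughly as below. This is purely illustrative: the provider names and endpoint URLs are placeholders, not real test servers, and the only verifiable piece is the arithmetic converting a timed download into megabits per second.

```javascript
// Convert a measured download (bytes transferred over elapsed milliseconds)
// into megabits per second.
function mbps(bytes, ms) {
  return (bytes * 8) / 1e6 / (ms / 1000);
}

// Hypothetical per-provider test endpoints (placeholders, not real URLs).
const endpoints = {
  streaming: "https://example.com/testblob",
  gaming: "https://example.net/testblob",
};

// Time a download from every provider concurrently and report
// per-provider throughput in one aggregated result.
async function aggregateSpeedTest(urls) {
  const entries = await Promise.all(
    Object.entries(urls).map(async ([name, url]) => {
      const start = Date.now();
      const body = await (await fetch(url)).arrayBuffer();
      return [name, mbps(body.byteLength, Date.now() - start)];
    })
  );
  return Object.fromEntries(entries);
}

// Sanity check of the math: a 25 MB download finishing in 2 s is 100 Mbps.
console.log(mbps(25e6, 2000)); // 100
```

`aggregateSpeedTest(endpoints)` would return something like `{ streaming: 93.4, gaming: 41.2 }`; the interesting part is the spread between providers, not any single number.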

bdwalter 1 day ago 1 reply      
Seems like this is really about training consumers to define the quality of their internet by their reachability to the Netflix CDN nodes. Smart move on Netflix's part.
gdulli 1 day ago 4 replies      
When I used to be a Netflix customer it was more the variability of my connection that was an issue and not its "speed" at a given optimal time.

Usually I could begin a stream without problems. But often while streaming (often enough for me to realize streaming was a bad experience) the bitrate dynamically dropped way down to a terrible quality in response to what I imagine were poor network conditions. Netflix no doubt sees this dynamic quality adjustment as a feature, and preferable to buffering, but I chose an HD stream and I'd rather even see an SD quality video that I could be sure would stay that quality than switching between HD and very low bitrate, fuzzy, artifacty video.

I don't blame Netflix for the quality of my connection, but streaming is just not as reliable as cable and it's not one of those Moore's Law type things where throwing more processing power or memory fixes the network issues.

vessenes 1 day ago 0 replies      
I like the idea of getting ISPs into internal conflict: the folks responsible for making sure that speed checks like speedtest.net run quickly will be fighting the folks responsible for throttling Netflix.

But, I think the throttling folks will ultimately win. In that case, I guess Netflix is laying out a good case for consumers to complain, so it's win-win.

ejcx 1 day ago 0 replies      
This is super awesome! It's a good speedtest that works on mobile, which I had not been able to find.

Funny thing is I found this in the source.

  <!-- TODO: add code to remove this script for prod build -->
  <!--<script>
    document.write('<script src="http://' + (location.host || 'localhost').split(':')[0] + ':8081/livereload.js?snipver=1"></' + 'script>')
  </script>-->
Not a big deal, but kind of funny.

gregmac 16 hours ago 0 replies      
Some observations about this:

For me, it's getting stuff from https://*.cogeco.isp.nflxvideo.net -- which indicates my ISP (Cogeco) is part of their Open Connect [1] program with an on-network netflix cache.

Other people are reporting downloads from https://*.ix.nflxvideo.net, which appears to be the Netflix cloud infrastructure.

It downloads data from 5 URLs every time, but their sizes fluctuate, something like ~25MB, ~25MB, ~20MB, ~2.2MB, ~1.2MB.

The contents of each response appears to be the same (though truncated at a difference place), with the beginning starting with:

 5d b9 3c a9 c3 b4 20 30 b9 bc 47 06 ab 63 22 11
`file` doesn't recognize what this is.


Since it's https, ISPs shouldn't be able to easily game this (eg: make this go fast, but still throttle video content).

So one potential way would be to only start throttling after 25MB is downloaded (or after a connection is open for ~2 minutes): does anyone know how Netflix actually streams? If they have separate HTTP sessions for 'chunks' of a video, then presumably this wouldn't work.

They could see if a user visits fast.com and then unthrottle for some amount of time. I'm not sure if ISPs have the infrastructure to do a complex rule like this though (anyone know?). I also think this would be relatively easy for users to notice (anytime they visit fast.com, their netflix problems disappear for a while) and there would be a pretty big backlash about something so blatant.

[1] https://openconnect.netflix.com/en/
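One way to probe the "throttle after ~25 MB" hypothesis above is to compare per-chunk throughput before and after the byte threshold. The chunk sizes below echo the rough numbers observed; the timings are entirely invented for illustration.

```javascript
// Per-chunk throughput in megabits per second.
function throughputMbps(bytes, seconds) {
  return (bytes * 8) / 1e6 / seconds;
}

// [bytes, seconds] per chunk; sizes loosely follow the ~25/25/20/2.2/1.2 MB
// mix, with hypothetical timings that slow down past the threshold.
const chunks = [
  [25e6, 2.0],
  [25e6, 8.0],
  [20e6, 8.0],
  [2.2e6, 0.9],
  [1.2e6, 0.5],
];

// Split chunk throughputs into those starting before vs after 25 MB
// of cumulative transfer.
const threshold = 25e6;
let seen = 0;
const before = [], after = [];
for (const [bytes, secs] of chunks) {
  (seen < threshold ? before : after).push(throughputMbps(bytes, secs));
  seen += bytes;
}

const avg = xs => xs.reduce((a, b) => a + b, 0) / xs.length;
// A large before/after gap would be consistent with byte-based throttling.
console.log(avg(before) > 2 * avg(after)); // true with these illustrative numbers
```

Real measurements would need repeated runs and fresh connections, since TCP slow start alone makes early chunks look slower, not faster.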

victorNicollet 1 day ago 2 replies      
Very interesting, and it confirmed my suspicions that my ISP throttles me (or at least, tries to).

I'm using Numericable from Paris and got 18Mbps to Netflix, 40Mbps to their comparison test. By going through an SSH tunnel (which makes a 230km detour through Roubaix), I get 39Mbps to both Netflix and control.

I am rather surprised that the bandwidth loss caused by the SSH tunnel is so small.

kcorbitt 1 day ago 2 replies      
Really nice and easy to use -- the test starts way quicker than speedtest.net.

However, am I missing something, or does this only test downloading? I guess that makes sense for Netflix's use case, but I'm usually at least as interested in knowing my upload speed, because with typical asymmetric connections that can be a bigger bottleneck for video calls and content-production workloads.

jedberg 1 day ago 4 replies      
Oh man this is awesome. I can't wait till people start calling their ISPs claiming they aren't getting the speeds they pay for, only for the poor agent to have to explain how peering agreements work.
mofle 1 day ago 0 replies      
I made a command-line app for it: https://github.com/sindresorhus/fast-cli
callmeed 1 day ago 1 reply      
Most interesting is comparing it to the ISP speed tests:




Fast.com is reporting about 1/2 the speed of these for me (2 seem to use the same Ookla speed test).

zodPod 1 day ago 1 reply      
I'd bet this is a move to make the ISPs that are throttling them look bad. If people start using it to check their speeds, then since it downloads Netflix content from Netflix, a throttling ISP's connection will look slower than advertised, and more people will likely complain.

I like it. It's suitably evil!

danr4 1 day ago 2 replies      
This is good but my god what a waste of a domain name :(
janpieterz 1 day ago 3 replies      
Odd, on a dedicated 500 mbit line I've now gotten 6 different results, ranging from 350-500. Speedtest.net indicates a stable 500+ mbit line, downloads from very fast servers always max it out at 500 as well.

Besides stabilizing it a bit, getting the upload on there would be amazing, it's certainly a lot nicer for the eye than speedtest.net.

mrbill 1 day ago 1 reply      
Interesting. Even over multiple tests, I get almost exactly 1/3rd the download speed to Netflix that I do testing with speedtest.net.
_jomo 1 day ago 2 replies      
I also like speedof.me, which tests latency, download, and upload but purely using HTML5/JS (unlike speedtest.net with its Flash app).
tigeba 1 day ago 4 replies      
Just for a reference point, I'm getting about 350 on Google Fiber in Kansas City.
smaili 1 day ago 4 replies      
Not to sound ignorant, but what's the point? Why would Netflix go through the trouble of acquiring what I suspect to be a fairly expensive domain just to show how fast one's internet speed is?
nodesocket 1 day ago 1 reply      
While cool, I can't believe they bought and use fast.com for something so simple. Fast.com has to be worth some coin. Anybody have any idea what that domain is worth?
iLoch 1 day ago 1 reply      
Description would be nice for anyone on mobile who doesn't want to needlessly waste bandwidth.
pazra 1 day ago 0 replies      
This is nice and great that it loads quickly with no bloat or distractions. Not sure about the domain name though, as it's not immediately obvious what the site is for.
stanleydrew 1 day ago 2 replies      
I'm pretty sure Google is about to release a speed test tool embedded directly into its SRP for speed-test-related queries.

Similarly to how they eliminated the need for third-party IP address checking tools by returning your actual IP address when you search for "what's my ip address".

pgrote 1 day ago 1 reply      
The amount of data netflix will collect from this is exciting! I can only imagine the stories it will tell once hundreds of thousands of people use it. It would be fantastic to see how the agreements between ISPs and netflix affect the data transfer rates.
isomorphic 1 day ago 0 replies      
I have multiple WAN connections (multiple ISPs). This actually (correctly) reports the aggregate download speed!

Obviously if they are "downloading multiple files," they aren't waiting for them to complete synchronously.

loganabbott 1 day ago 1 reply      
I prefer the speed test here: https://www.voipreview.org/speedtest No flash or silverlight required and a lot more details
danvoell 1 day ago 1 reply      
I feel like you could do more with this domain. Cool little tool though.
manmal 1 day ago 1 reply      
I have absolutely terrible Netflix quality on my Samsung TV sometimes, but it shows 68MBit here. Makes me wonder whether the firmware is to blame..
athenot 1 day ago 2 replies      
This is interesting.

Test 1: on Comcast but connected to company's VPN: 48Mb/s

Test 2: on Comcast but not on the VPN: 11Mb/s

erickhill 1 day ago 2 replies      
Thanks Xfinity. For my home service fast.com should redirect to slow.com. 5.2 Mbps (it's sold at 50 Mbps with asterisks everywhere).
k4rtik 1 day ago 0 replies      
Is it inflating the results shown on Wi-Fi?

I am on a MacBook Air Early 2014 and my current link speed is 144 Mbit/s according to Network Utility, but fast.com shows between 210 to 230 Mbps on each run.

Speedtest.net results are consistent as before at ~38 Mbps, which is what I would expect from the routers around me.

smhenderson 1 day ago 2 replies      
OK, so I get 48 on fast.com and decided to use the link to compare on speedtest.net. There I get 101 down, 112 up.

So while 48 seems very fast to me (I get 19 at work) it's a lot less than 101. Is Verizon throttling the connection or is Netflix not giving me more than ~50? At what point is the cap on Netflix's side and not the client connection?

lemiffe 1 day ago 1 reply      
Only downlink? For me uplink is more important, and I suspect for others as well (gaming/streaming).
jasallen 1 day ago 0 replies      
Wow, "fast.com" is one helluva valuable piece of DNS real estate Netflix is throwing at this.
IgorPartola 1 day ago 1 reply      
Yup, and it nicely confirms that (a) my Charter connection is in fact 65 Mbps down and (b) I can't get faster internet where I live.

Oh, and 5 Mbps up is just ridiculous. That's what I get with my business plan. Back up a TB of data to the cloud? Yeah, that'll take weeks.

ahamdy 1 day ago 0 replies      
The download speed is absolutely incorrect. I live in a 3rd world country and have a 2Mb connection; I get a max download rate of 200 KB/s, but fast.com is showing a download speed of 1.2Mb. I really wish it were true.
vonklaus 1 day ago 1 reply      
My internet speed (according to fast.com) is 0. Adblock & uBlock off on the site & fast.com uses https. Not sure why it wouldn't be working, no VPN in middle. Anyone else having issues?

edit: Speedtest.net was ~38 Mbps down. Is a Netflix subscription necessary for this?

narfz 1 day ago 0 replies      
Is there a bandwidth cap? I constantly get 160Mbps, but I know for sure that our office line can do way more; speedtest.net is always close to 900Mbps. Maybe speedtest.net has an endpoint within the ISP backbone and Netflix doesn't? Or is it the peering between AWS and my ISP?
EpicEng 1 day ago 0 replies      
Well... I just found out that my connection went from 20Mb to ~120Mb recently. I have no idea when this happened and my bill hasn't changed.
zmitri 1 day ago 1 reply      
Speed is halved using fast.com (140 down) vs speedtest (287 down) and I'm currently on Paxio in Oakland http://www.paxio.com
kilroy123 1 day ago 2 replies      
Why doesn't Netflix just try to bypass ISPs by rolling out their own service?

Big ISPs are starting to cap data to stop/slow down Netflix. They should just put out their own high-speed service like Google.

nodesocket 1 day ago 0 replies      
Interesting looking at Chrome developer tools, lots of magic and interesting payloads.

The http header Via is interesting as it lists the AWS instance that served the request and region. i-654a87b8 (us-west-2)

mrmondo 1 day ago 0 replies      
Just tried it on our 300/300Mbit link at work. Lots of people working today so it'll be under heavy use, but:

- Netflix: 240Mbit/s

- Speedtest: 293Mbit/s

myrandomcomment 1 day ago 2 replies      
On AT&T U-verse in Palo Alto area. $72 p/month for 24mb now with even more data caps.

Fast.com: ~23mb
Speedtest.net: ~38mb

Hmm, I wonder which is right and which is the ISP screwing with traffic?

martin-adams 1 day ago 0 replies      
Cool tool. I'd love to know the story behind how Netflix managed to use such a lucrative domain.
dangson 1 day ago 0 replies      
Not surprisingly since this is downloading Netflix content, it doesn't work when I'm connected through Private Internet Access VPN.
caludio 1 day ago 0 replies      
Mhh, I get consistently (much) lower Mbps with Firefox than with Chrome. Is that how it's supposed to be? Is it my network, maybe?
JustSomeNobody 1 day ago 1 reply      
How long until ISPs catch on and make sure fast.com is given a high priority?

I don't see how this will accomplish anything for Netflix.

mrmondo 1 day ago 0 replies      
Doesn't work at all well for me here in Melbourne on 4g. Netflix: 7Mbit, speedtest.net: 39Mbit
vadym909 1 day ago 2 replies      
Wow- this is awesome. I hated speedtest.net
bodytaing 1 day ago 0 replies      
This is an awesome alternative to the other speed tests because it's very minimal and has a clutter-free UI.
hacks412 1 day ago 0 replies      
Is this a way for them to optimize who they deliver faster streaming services to?
parfe 1 day ago 3 replies      
730Mbps to fast.com while only 700Mbps on speedtest.net (with 853Mbps up).
techaddict009 1 day ago 1 reply      
No upload speed results?
arnorhs 1 day ago 0 replies      
Man, I'd love to see something like this for Twitch streams. I feel like I have problems with Twitch streams at specific times of day.
wil421 1 day ago 0 replies      
On my laptop:

First test: 55mbps

Second: 35mbps

Third: 22mbps

Fourth: 22mbps

Speedtest: right at 36mbps every time.

It seems to be more stable on my cellphone.

philjackson 1 day ago 4 replies      
SPEED MEGATHREAD, post your speed/location/ISP below here:

44Mbps / London, uk / BT

known 1 day ago 0 replies      
jefurii 1 day ago 2 replies      
Yawn, only checks download speed.
developer545 1 day ago 6 replies      
It's surprising that people on HackerNews don't seem to understand the basics of how the Internet works.
Online tracking: A 1-million-site measurement and analysis princeton.edu
577 points by itg  1 day ago   262 comments top 28
randomwalker 1 day ago 11 replies      
Coauthor here. I lead the research team at Princeton working to uncover online tracking. Happy to answer questions.

The tool we built to do this research is open-source https://github.com/citp/OpenWPM/ We'd love to work with outside developers to improve it and do new things with it. We've also released the raw data from our study.

ultramancool 1 day ago 4 replies      
As soon as I saw these APIs being added I immediately dropped into about:config and disabled them. How the hell do these people think it's a good idea to ship this without asking for any permissions?

Put these in your user prefs.js file on Firefox:

  user_pref("dom.battery.enabled", false);
  user_pref("device.sensors.enabled", false);
  user_pref("dom.vibrator.enabled", false);
  user_pref("dom.enable_performance", false);
  user_pref("dom.network.enabled", false);
  user_pref("toolkit.metrics.ping.enabled", false);
  user_pref("dom.gamepad.enabled", false);

Here's my full firefox config currently:


Privacy on the web keeps getting harder and harder. Of course this should only be used in conjunction with maxed out ad blockers, anti-anti-adblockers, privacy badger and disconnect.

We need browsers to start asking permission. When you install an app on Android or iOS it says "here's what it's going to use, do you want this?". The mere presence of the popup would annoy people and prevent them from using these APIs.

brudgers 23 hours ago 2 replies      
Google has a vested interest in information leakage. I have a suspicion that the Chromium project expresses a strategic desire to shape the direction of browser development away from stopping those leaks. The idea of signing into the browser with an identity is a core feature and in Google's branded version, Chrome, the big idea is that the user is signed into Google's services.

Google only pitches the idea of multiple identities in the context of sharing devices among several people (https://support.google.com/chrome/answer/2364824?hl=en) and even then doesn't do much to surface the idea. https://www.google.com/search?hl=en&as_q=multiple+identities...

rdancer 1 day ago 3 replies      
This is the kind of nonconsensual surreptitious user tracking that the EU privacy directive 2002/58/EC concerns itself with, not those redundant, stupid cookie consent overlays.
f- 23 hours ago 0 replies      
Although the emphasis on the actual abuse of newly-introduced APIs is much needed, it is probably important to note that they are not uniquely suited for fingerprinting, and that the existence of these properties is not necessarily a product of the ignorance of browser developers or standards bodies. For the most part, these design decisions were made simply because the underlying features were badly needed to provide an attractive development platform - and introducing them did not make the existing browser fingerprinting potential substantially worse.

Conversely, going after that small set of APIs and ripping them out or slapping permission prompts in front of them is unlikely to meaningfully improve your privacy when visiting adversarial websites.

A few years back, we put together a less-publicized paper that explored the fingerprintable "attack surface" of modern browsers:


Overall, the picture is incredibly nuanced, and purely technical solutions to fingerprinting probably require breaking quite a few core properties of the web.

pmlnr 1 day ago 2 replies      
So... what we need is a browser which says it supports these things but blocks them or provides false data on request, and looks as ordinary as possible to "regular" browser fingerprinting.

Is anyone aware of the existence of one?

anexprogrammer 1 day ago 3 replies      
Colour me unsurprised. Disappointed though.

I'm glad I disabled WebRTC when I first discovered it could be used to expose local IP on a VPN.

These "extension" technologies should all be optional plugins. Preferably install on demand, but a simple, obvious way to disable would be acceptable. (ie more obvious than about:config)

Not a great deal can be done about font metrics, other than my belief that websites shouldn't be able to ferret around in my fonts to see what I have. It's not like it's a critical need for any site.

jimktrains2 1 day ago 6 replies      
NoScript is an all-or-nothing approach. Are there any JS-blockers that allow API-level blocks?
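For what it's worth, API-level blocking is roughly what some privacy extensions do: run a script before the page loads and shadow the entry points. A minimal sketch of the idea (the property names are real Web APIs, but `blockApi` and the extension wiring are my own illustration, not a vetted privacy tool):

```javascript
// Shadow an API entry point so page scripts see it as missing.
// This must run before any page script (e.g. from an extension content
// script injected at document_start).
function blockApi(target, name) {
  Object.defineProperty(target, name, {
    value: undefined,
    writable: false,
    configurable: false, // page scripts can't redefine or delete it
  });
}

// Examples (assumed targets; adjust per browser):
// blockApi(Navigator.prototype, 'getBattery');
// blockApi(window, 'AudioContext');
```

The catch is that the absence of an API is itself a fingerprintable signal, so blocking alone doesn't make you blend in.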
cptskippy 1 day ago 1 reply      
All of this makes me wonder how some of these interfaces should be more closely guarded by the user agent.

Perhaps instead of a site probing for capabilities, they should instead publish a list of what the site/page can leverage and what it absolutely needs to work. Maybe meta tags in the head or something like the robots.txt. Browsers can then pull the list and present it to the end user for white-listing.

You could have a series of tags similar to noscript to decorate broken portions of sites if you wanted to advertise missing features to users and, based on what features they chose to enable/disable for the site, the browser would selectively render them.

codedokode 1 day ago 1 reply      
Some methods of fingerprinting are probably used to distinguish between real users and bots. Bots can use patched headless browsers masqueraded as desktop browsers (for example, as the latest Firefox or Chrome running on Windows). Subtle differences in font rendering or missing audio support can be useful for detecting the underlying libraries and platform. Hashing is used to hide the exact matching algorithm from scammers.

There are a lot of people trying to earn money by clicking ads with bots.

Edit: and by the way disabling JS is an effective method against most of the fingerprinting techniques.
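The hashing point is easy to picture: the script never has to ship its raw feature list or matching logic to the client, just a digest that gets compared server-side. A toy sketch (the feature names are illustrative; real scripts hash rendered canvas/audio buffers, not just strings, and FNV-1a is used here only because it's short):

```javascript
// FNV-1a: a tiny non-cryptographic hash, enough to turn a bundle of
// observable browser features into an opaque fingerprint.
function fnv1a(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // multiply by the FNV prime, mod 2^32
  }
  return h.toString(16);
}

// Combine whatever the page can observe into one digest.
function fingerprint(features) {
  // e.g. features = { ua: navigator.userAgent, tz: ..., fonts: [...] }
  return fnv1a(JSON.stringify(features));
}
```

A headless bot with a slightly different font stack or missing audio support ends up in a different digest bucket, which is exactly the bot-detection use described above.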

kardos 1 day ago 3 replies      
So given this information, how can we poison the results that the trackers get?
wodenokoto 1 day ago 0 replies      
What annoys me the most is how many useless cycles these trackers use to track me.
MichaelGG 1 day ago 0 replies      
WebRTC guys get around this by stating fingerprinting is game over, so don't even bother. They ignore that they are going against the explicitly defined networking (proxy) settings. Browsers are complicit in this. If the application asks "should I use a proxy", then ignores it, silently, wherever it wants, that's deceptive and broken.

There's still zero (0) use cases to have WebRTC data channels enabled in the background with no indicator.

If all these APIs are added, the web will turn into a bigger mess than it is. They can't prompt for permissions too much. So they'll skip that, like WebRTC does.

ape4 1 day ago 0 replies      
Seems like browsers should ask the user's permission to use these html5 features. Then whitelist. For example, a site that does nothing with audio should be denied access to the audio stack.
pjc50 1 day ago 1 reply      
I think it's time for HTML--, which would contain no active content at all and simply be a reflowable document display format.
makecheck 23 hours ago 0 replies      
Over 3,000 top sites use the font technique, and from the description it sounds really wasteful (choosing and drawing in a variety of fonts for no reason other than to sniff out the user).

Each font is probably associated with a non-trivial caching scheme and other OS resources, not to mention the use of anti-aliasing in rendering, etc. So a web page, doing something you don't even want, is able to cause the OS to devote maybe 100x more resources to fonts than it otherwise would?

A simple solution would be to set a hard limit, such as 4 fonts maximum, for any web site; and, to completely disallow linked domains from using more.
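For reference, the probing trick those sites use is roughly: render the same string with a candidate font plus a generic fallback, and check whether the metrics differ from the fallback alone. A sketch with the DOM measurement factored out (`measureWidth` here stands in for creating a hidden span and reading `offsetWidth`; in a real page it's a DOM call):

```javascript
// A font counts as "installed" if text rendered in it measures differently
// from the generic fallback alone. Iterating this over 3,000+ candidates is
// the wasteful part described above.
function detectFonts(candidates, measureWidth) {
  const baseline = measureWidth('monospace');
  return candidates.filter(
    (font) => measureWidth(`'${font}', monospace`) !== baseline
  );
}
```

A hard per-site font cap would break this probe outright, though it would also break sites with legitimately rich typography.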

aub3bhat 1 day ago 1 reply      
There is an acceptable tradeoff between pseudo-anonymous access through browsers and non-anonymous access through native apps.

To interpret this research as a reason for crippling the web or browsers would be a giant mistake. Crippling browsers will only work against users, who will then be forced into installing apps by companies.

Two popular shopping companies in India did exactly this: they completely abandoned their websites and went native-app only. This, combined with the large set of permissions requested by the apps, led to a worse privacy experience for consumers. As the announcement of Instant Apps at Google I/O demonstrates, the web as an open platform is in peril, and its demise will only be hastened by blindly adopting these types of recommendations.

Essentially, the web as an open platform will be destroyed in the name of perfect privacy, only to be replaced by inescapable walled gardens. Rather, consider that the web allows a motivated user to employ evasion tactics, while still offering usability to those who are not interested in privacy. Native apps, where Apple needs a credit card on file before you can even install, offer no such opportunity.

I am happy that Arvind (author of the paper) in another comment recommends a similar approach:

"""Personally I think there are so many of these APIs that for the browser to try to prevent the ability to fingerprint is putting the genie back in the bottle.But there is one powerful step browsers can take: put stronger privacy protections into private browsing mode, even at the expense of some functionality. Firefox has taken steps in this direction https://blog.mozilla.org/blog/2015/11/03/firefox-now-offers-....Traditionally all browsers viewed private browsing mode as protecting against local adversaries and not trackers / network adversaries, and in my opinion this was a mistake."""


cdnsteve 1 day ago 1 reply      
After reading this it makes me want to disable JavaScript entirely, along with cookies, and go back to text browsing. I've been using Ghostery on my phone, it's been pretty good.
wyldfire 1 day ago 3 replies      
Whoa, what's the use case for exposing battery information?
radicalbyte 1 day ago 0 replies      
Of course this is something you do. Throw it together with all of the other information you can glean from a browser (referrer, IP) and you can get a match with a very high confidence level.

Shops can do the same with baskets: you find that people are identified either by one very rare feature which recurs often, or by their little graph of 4-5 items which correlates 99% to them.

buremba 1 day ago 2 replies      
All these things are making websites the new apps. Most probably we won't need many desktop applications a few years from now.
chatmasta 21 hours ago 0 replies      
If you want to see a live demo of all the ways your browser can fingerprint you, this is a great website: https://www.browserleaks.com/
id122015 1 day ago 0 replies      
I think it's similar to how the Absolute Computrace rootkit identifies Android and Lenovo devices. Each hardware component has a unique ID: your ethernet, bluetooth, even microphones and batteries.
coygui 21 hours ago 0 replies      
Would it be more secure to use Tor than a traditional browser? The only drawback is the longer RTT.
youaretracked 19 hours ago 0 replies      
Since the original web based ad campaigns were launched we have been tracked. Serious web analytics companies know these tactics already.

So what exactly is the research contribution being made here? What's new and interesting?

jkot 1 day ago 1 reply      
Malware filtering is needed.
tomkin 1 day ago 1 reply      
Ahhh. Remember when this was just a Flash problem, and getting rid of Flash was going to rid the world of evil?

Spoiler: that didn't happen.

ysleepy 1 day ago 2 replies      
Well, who would have guessed. Surprise surprise.

The web is such a shit technology.

Firebase expands to become a unified app platform googleblog.com
524 points by Nemant  1 day ago   201 comments top 41
mayop100 1 day ago 19 replies      
(Firebase founder here) I'm thrilled to finally be able to show everyone what we've been working on over the last 18 months! When I said "big things are coming" in the HN comments back when our acquisition was announced, I was talking about today : )

We're really excited about these new products. There are some big advances on the development side, with a new storage solution, integrated push messaging, remote configuration, updates to auth, etc. Perhaps more important, though, are the new solutions for later in your app's lifecycle. We've added analytics, crash reporting, dynamic linking, and a bunch more, so that we can continue to support you after you've built and launched your app too.

I'd suggest reading the blog post for more info: https://firebase.googleblog.com/2016/05/firebase-expands-to-...

This is the first cut of a lot of new features, and we're eager to hear what the Hacker News community thinks. I look forward to your comments!

primitivesuave 1 day ago 4 replies      
Firebase is an incredibly powerful tool, and in a sense is a "democratizing force" in web development. Now anyone can build a complete web application without needing to know anything about setting up servers, content delivery networks, AWS (which is still quite difficult to use), and scaling. I teach kids as young as 10 years old to build iOS apps and websites with Firebase - they can develop locally and push to Firebase hosting with a single command. After exploring this new update, I can say with confidence that literally everything is easy-to-use now.

Whenever there is a Firebase announcement there are many replies along the lines of "this won't work for me because it's owned by Google, may be discontinued, doesn't have on-premise solution, etc". If these are your thoughts then you are missing the point of Firebase. It enables small web development shops like mine to focus on building beautiful web applications without having to give up manpower toward backend engineering. The cost of using Firebase is peanuts compared to the savings in employee hours.

Perhaps some day we will have to migrate elsewhere, but I find that possibility extremely unlikely, because the clear effort it took to create the Google-y version means this is a long-term play.

zammitjames 1 day ago 0 replies      
We were part of the Early Access Program for the expanded Firebase and used it to build our music collaboration app Kwaver. With the new features, they did a nice job of collecting a bunch of related mobile products (Analytics, Push Notifications, Storage, Authentication, Database, Deep Linking, etc) into a pretty cohesive platform, and it's saved us a bunch of time.

With Firebase Analytics we can track events, segment audiences (according to user properties: active, dormant, inactive) and take action according to the user segment. We are able to send push notifications (also using Firebase) to, for example, dormant male users who play the piano. Another cool feature is Remote Config, which gives you the option to ship a number of separate experiences and track the user interaction. Like A/B testing, but way more flexible.

For us, the best product is the existing database product they had, as it really improves our user experience to ditch the 'pull to refresh' button and have our app respond to changes live.

We have been waiting for Google to provide developers a more complete mobile solution for a while now, and they've done it superbly through Firebase!

Feedback: it would be really cool if Firebase could implement UTM codes, to be able to track user acquisition and automate actions according to User Properties.

Shameless plug: if you're a musician (or a music fan), we'd really appreciate it if you could download our music collaboration app, try it out and give us feedback. It's available for free on the App Store; the following link will redirect you there later today. http://kwaver.com

timjver 1 day ago 1 reply      
I love Firebase, but the Swift code in the iOS guide is of really low quality. For example (https://firebase.google.com/docs/database/ios/save-data#dele...):

 if currentData.value != nil, let uid = FIRAuth.auth()?.currentUser?.uid {
     var post = currentData.value as! [String : AnyObject]
     var stars: Dictionary<String, Bool>
     stars = post["stars"] as? Dictionary<String, Bool> ?? [:]
     // ...
 }
What this should really be:

 guard let post = currentData.value as? [String : AnyObject],
       uid = FIRAuth.auth()?.currentUser?.uid else {
     return FIRTransactionResult.successWithValue(currentData)
 }
 let stars = post["stars"] as? [String : Bool] ?? [:]
 // ...

chatmasta 1 day ago 2 replies      
Interesting that Google is doubling down where Facebook divested. The obvious difference is that Google has a cloud platform and Firebase is a funnel into it, whereas Facebook had nothing to funnel Parse users into.

I wonder if Facebook will ever launch a cloud platform. They've got the computing resources for it.

bwship 1 day ago 0 replies      
We've been using the Firebase platform for a while now. It's pretty cool to see them expand from 3 products to ~15 overnight. I'm most excited about their analytics and crash reporting. I must say that their system has been one of the best we have used in a long time, and I am really excited to see other aspects like analytics and ads being housed under this same umbrella, as I think it is going to help with development time overall. One area that I'd like to see improved, though, is a deeper querying language for the database, or even better, a way to automatically export the system in realtime to a Postgres database for better SQL-type analytics.
davidkhess 1 day ago 1 reply      
The concern I've always had with Firebase is the lack of a business logic layer between clients and the database. This tends to force the business logic into the clients themselves.

Trying to change the schema if you have Firebase clients deployed that can't be instantly upgraded via a browser refresh (i.e. iOS and Android mobile apps) seems an extremely challenging task.

ivolo 1 day ago 0 replies      
We used the original Firebase database product to build http://socrates.io/ 3.5 years ago, and I remember getting Socrates running in a few hours. I'm looking forward to seeing them raise the bar on speed of development / ease of use for their next 10 products :) Nice work team!
mybigsword 1 day ago 4 replies      
Way too risky to use for a startup. Google may discontinue this project at any time, and you'd have to spend months rewriting everything for another database. If Google open-sourced it and we were able to install it on premises and patch it without Google, that would be OK. So I would recommend using PostgreSQL instead.
fredthedinosaur 1 day ago 1 reply      
When will it support a count query? Right now, to count the number of children I have to download all the data. Count is such an important feature for me.
dudus 1 day ago 0 replies      
Even if you don't want to use any Firebase service you might still want to use it only for Analytics. Drop the firebase SDK in the App and you are done. Free, unlimited and unsampled Analytics reports for your App.



fahrradflucht 1 day ago 2 replies      
I have built apps with Firebase in the past, and the feature I missed the most was performing scheduled tasks on the database. Now we are getting this BIG app platform update and this feature is still not in there. Looks like it'll be AWS Lambda with Scheduled Events for a long time to come :sad-panda:
joeblau 1 day ago 1 reply      
I remember walking into Firebase's offices about 4 years ago, when it was 4 people on Townsend St in SOMA in a 300-square-foot shared office space. It's amazing to see how far they've come. Congrats to the whole team.
skrebbel 1 day ago 1 reply      
As a current Firebase customer, I'm pretty thrilled about all this (especially since I was afraid Google would pull a Facebook here). However, there are quite a few API changes and absolutely no info about how long the old JS library, endpoints, etc. are going to keep working. Should I get stressed out?
maaaats 1 day ago 5 replies      
This may be a stupid question, but: what do you use it for? Can't everyone basically edit the client code and do whatever they want with your data? I've only used Firebase for prototyping.
oceankid 1 day ago 1 reply      
The thought of reliable, managed hosting is interesting.

But how does one extend an app beyond storing and fetching data? What if you want to run a background job to send emails, parse a complex CSV, or create a custom PDF and write it to Firebase storage?

albeva 1 day ago 0 replies      
I think services like Firebase are a very scary thing. Too much dependence on one vendor, too much black-box magic, too much logic that is beyond your control. And services like this contribute to a general dumbing-down of software developers. We're heading towards a world of script kiddies, where HTML and JS rule and all complex logic is handled and controlled by service providers. Is it a good thing? You can deliver fast, but is it worth it in the long term?
WalterSear 1 day ago 0 replies      
I'm in talks with a company regarding building an application for users in developing countries, where Android 2.0 is still the dominant OS version.

Firebase 2.0 looks like a great fit for their needs otherwise, but is the new SDK backward compatible with Android 2.0?

blairanderson 1 day ago 0 replies      
From my experience with the new API, it's a little less intuitive and the documentation is worse. I think it's rad that Google invested a ton of resources into Firebase.

We have been super successful with Firebase, and are proponents of using it as a notification system rather than as a datastore. Using it as a datastore would be easy, but unwise. Use it to notify clients of changes so they can fetch data. Read from Firebase; write to your own server/DB.

pier25 1 day ago 1 reply      
So how would one address server side logic?

Like for example doing something with the data before sending it to the client?

mcv 1 day ago 0 replies      
I intend to use Firebase as at least a temporary backend while developing my app. Maybe I'll move to a real server later, but during development it's really easy to just have some place you can shoot json at. And I can always add interaction later by having some other application listen to it.

I don't really need the actual realtime communication stuff all that much (though it might turn out to be useful), but just a lightweight place to store json is really useful.

ddxv 1 day ago 0 replies      
This appears to be just a way to limit the growth of third-party trackers, which threaten Google by encouraging user acquisition from many sources.

I say this because they don't specifically say they will post back events to advertising networks other than Google's.

Philipp__ 1 day ago 0 replies      
It looks like it is here to stay... But that surprise Parse shutdown will leave me asking, what if...
robotnoises 1 day ago 2 replies      
I don't think it was explicitly mentioned in the keynote, but it looks like they updated pricing:


Can't find the old pricing now, but it seems similar, just with fewer plan types.

wiradikusuma 1 day ago 1 reply      
For a Firebase/Google Cloud Platform engineer: does this mean Google Cloud Endpoints is being phased out? If I'm already using Google Cloud Endpoints, should I move to Firebase? What's the advantage?
1cb134b57283 1 day ago 0 replies      
As a server engineer already having trouble finding a new job, how worried should I be about this?
aj0strow 1 day ago 0 replies      
I've had only good experiences with firebase. They added an HTTP api, web hosting, multiple security rule preprocessors (pain point), and got faster and cheaper. Yeah only good things.
robotnoises 1 day ago 0 replies      
Not expressly mentioned anywhere that I've seen: the Free plan now includes custom domains + SSL cert. Under the previous firebase.com, that was $5 a month.

Sounds good to me!

intellegacy 1 day ago 1 reply      
Is there a tutorial that explains how to set up a backend for user-taken videos, for an iOS app?

One thing I liked about Parse was that its documentation was newbie-friendly.

gcatalfamo 1 day ago 1 reply      
Can somebody explain the new Firebase reframing towards GCP? Maybe with another provider analogy? (e.g., Firebase is to GCP as Parse is (was) to Facebook)
welanes 1 day ago 0 replies      
FYI, new docs on data structure mention rooms, which was an example in the old docs. Should read messages or conversations: https://firebase.google.com/docs/database/web/structure-data...
kawera 1 day ago 1 reply      
Question: would Firebase be a good option where the desktop/web app is the main access point, with mobile being secondary (around 3:1)?
eva1984 1 day ago 0 replies      
Feel like the new Wordpress/Drupal/CMS, just in App space.
Kiro 1 day ago 1 reply      
I'm building a simple web app where I want signed in users to be able to add a JSON object to a database and then list all JSON objects publicly. Only the user who created the object should be able to edit it. Is this a good use-case for Firebase or should I look into something else?
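That's pretty much the textbook case for Firebase's security rules: public read, authenticated create, owner-only edit. A sketch of what the rules might look like (untested; the `items` path and `owner` field are names I'm inventing for illustration):

```json
{
  "rules": {
    "items": {
      ".read": true,
      "$itemId": {
        ".write": "auth != null && (!data.exists() || data.child('owner').val() == auth.uid)",
        ".validate": "newData.child('owner').val() == auth.uid"
      }
    }
  }
}
```

The `.write` rule allows a create when nothing exists yet and an edit only by the stored owner; `.validate` pins the `owner` field to the writer's uid.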
tszming 1 day ago 0 replies      
The biggest problem with any Google cloud service nowadays is that you don't know if it was/will be blocked in China. Of course, it's okay if you don't care about users in China.
ssijak 1 day ago 1 reply      
What is the state of the AngularFire library? There are no guides for Angular in the new documentation. And when will AngularFire for Angular 2 be ready to use?
dmitriz 1 day ago 1 reply      
Is user email confirmation finally supported by Firebase? Last time I checked it wasn't.
themihai 1 day ago 0 replies      
"... and earn more money." Is this really necessary on the homepage? Sounds like a old misleading spam page
sebivaduva 1 day ago 0 replies      
For all of you looking for a real-time API platform that's open source and not owned by a cloud giant: come join us in building telepat.io
Blixz 1 day ago 3 replies      
So, Still no offline persistence for JS. What a huge disappointment.
choward 1 day ago 1 reply      
Provide a self hosting option or GTFO.
Tesla Announces $2B Public Offering to Accelerate Model 3 Ramp Up bloomberg.com
396 points by dismal2  1 day ago   178 comments top 11
jboydyhacker 1 day ago 17 replies      
The big surprise here isn't that Tesla was doing an offering; it's that Goldman published a huge research note 24 hours before the offering while actually participating in it.

Super bad form, and it just goes to show the community: don't trust investment bankers.

Animats 1 day ago 2 replies      
Well, $1.4 billion for Tesla, $0.6 billion for Musk personally, and an option for Goldman Sachs to get $0.21 billion.[1] Tesla stock is down in after-hours trading, but that doesn't mean much. If the stock is down significantly at the close tomorrow, the market didn't like this.

It's a legit offering. The company intends to build a big factory and make stuff. Real capital assets will be bought with that money. It's not to sell stuff at a loss to gain market share in hopes of raising prices later. (Looking at you, Uber.)

Tesla just hired Audi's head of manufacturing, Peter Hochholdinger. About a week ago, the previous two top people in manufacturing quit, right after Musk announced he wanted the production line running two years sooner. Maybe Hochholdinger can do it.

[1] https://www.sec.gov/Archives/edgar/data/1318605/000119312516...

jernfrost 1 day ago 11 replies      
Why do people keep spouting this nonsense that Tesla loses money on EVERY car? They make money on every car; otherwise they wouldn't be selling any cars. They lose money due to their high R&D spend.
vessenes 1 day ago 2 replies      
This is not a surprise; there's an old saw that I think I first read in a Buffett annual report. It says that financing tends to alternate forms for companies in terms of what makes sense: debt -> equity -> debt -> equity.

Equity offering seems likely to be much cheaper than debt right now; Tesla has great mindshare among consumers, and lots of doubters on the professional investor side.

crabasa 1 day ago 3 replies      

 echo "Tesla to offer $1.4 billion shares, remaining to be sold by Elon Musk. Musk is exercising options to buy 5.5m shares and will boost overall holdings on net basis. Developing... " | wc 1 30 176
News articles and tweets are converging at an alarming rate.

11thEarlOfMar 1 day ago 0 replies      
It's neither here nor there, but I feel like Bugs Bunny in "High Diving Hare", and Musk just raised the platform another 50 feet:


marvin 1 day ago 1 reply      
From the press release, it appears that the capital raise is "only" $1.4 billion -- the remainder is Elon Musk selling shares to cover his tax liability for simultaneously exercising options from 2009. Hopefully 1.4 billion is enough.
slantaclaus 1 day ago 0 replies      
Tesla has a really great business. They're not just cars, they're batteries. Their home battery for storing solar energy is a huge deal, at least in terms of future cash flows. Also, they're a white-label supplier of batteries to companies like Toyota and Mercedes. Anyway, new long-term TSLA shareholder here. Bought in at $205.
mjbellantoni 1 day ago 1 reply      
Anyone have thoughts as to why they're selling stock as opposed to issuing bonds?
syngrog66 17 hours ago 0 replies      
When you have 375k $1,000 preorder deposits, it's probably an ideal time to raise investment.
jgalt212 1 day ago 0 replies      
It's pretty obvious at this point that Tesla's number one product is their stock. Which makes it no different from a number of other high fliers.

At first they were an innovative car company. Then the stock price shot well above the level sustainable by an electric car company. Elon realized this, and then built the Gigafactory. We're not just a car company, we're a power company!

Now they are raising more equity off of an inflated stock price. I'd stay away from this one.

Not a total hater, Tesla cars are great, but one of these days Elon's moon shots and obsession with the stock price will catch up with him (he'll still be rich) and his investors (they may be significantly less rich).

Reason: A new interface to OCaml facebook.github.io
598 points by clayallsopp  2 days ago   272 comments top 50
Cyph0n 2 days ago 4 replies      
This looks very interesting. I've always had OCaml in mind but never actually got around to using it in a project. Facebook could have done a better job describing what exactly this is, but they do provide a good overview at the end of the page (strangely!) [1].

In summary, Reason [2] is a new language (correction: interface to OCaml) that shares a part of the OCaml compiler toolchain and runtime. I don't know of any language that uses a similar approach, that is, plugging into an existing compiler toolchain. I guess a reasonable yet inaccurate analogy would be Reason -> OCaml is like Elixir -> Erlang or Clojure -> Java.

I hope Reason can provide OCaml with the extra push needed to bring it into the mainstream PL space and more widespread adoption.

[1]: http://facebook.github.io/reason/#how-reason-works

[2]: https://github.com/facebook/reason

mhd 2 days ago 1 reply      
I hope this doesn't sound like trolling, but JavaScript's syntax is now a selling point? I kinda-sorta get the reason why people want an actual JavaScript stack on the backend, but I never heard that syntax/semantics brought people from e.g. Rails to Node.

Sure, OCaml isn't even the nicest syntax in the ML family, but I'm not sure whether that's worth it, especially considering that almost any "X-like" language often turns out to be an Uncanny Valley for "X" programmers -- close enough to make some frustrating errors.

e_d_g_a_r 2 days ago 3 replies      
I for one welcome the syntax. I run the OCaml meetup in Silicon Valley and syntax is definitely an issue for newcomers. This makes it easier for other programmers to instantly just jump into OCaml/ML rather than ask about what is `in` or what is `let foo = function`, etc etc.

EDIT: Hosting a Meetup this Friday at 6pm in San Francisco about Reason and how to instantly start using it: http://www.meetup.com/sv-ocaml/events/231198788/

jameshart 2 days ago 2 replies      
Wonder if this project has anything to do with Eric Lippert's move to Facebook (https://ericlippert.com/2016/02/08/facebook/ - Eric has also been producing a series of blog posts implementing a Z-Machine interpreter in OCaml to run mini-Zork on, starting here: https://ericlippert.com/2016/02/01/west-of-house/). Eric was on the C# compiler team at Microsoft and previously worked on JScript.
civilian 2 days ago 4 replies      
I know that it's common to have namespace collisions, but their logo is so similar to Reason magazine's. https://reason.com/
alex_muscar 2 days ago 2 replies      
Nice to see that OCaml is getting so much love at facebook. Unfortunately, adding a new syntax that's almost OCaml, but not quite, doesn't seem like such a great idea. While it might make the language accessible to more people, it runs the risk of fragmenting the community.

I know syntax is subjective, but some of the choices seem a bit odd. For example, declaring variants and using their constructors looks like Haskell, but the semantics is still OCaml. In Haskell, constructors are first class, so they can be passed as functions and partially applied. It makes sense that their declaration and use look like function declarations and function calls. In OCaml they are not first class; that is, you can't pass them as arguments or partially apply them. That's why it makes sense for the declaration to look like a tuple, and the use to look like a function applied to a tuple--well, somewhat; you can still argue that it's confusing because you might expect to be able to apply the constructor to a tuple variable, but such is life :). Unless constructors are first class in Reason--it doesn't look like it from a quick scan through the docs--this particular syntactic difference is of dubious value and, worse, can be misleading to newcomers.

Also, changing `match` to `switch` seems gratuitous as well, and it also loses some of the meaning of the original. i.e. "I want to match this value against this set of patterns".

Finally, I know that using `begin` and `end` for blocks is verbose and Pascal-ish--which people seem to hate for some reason--but using { } for scopes looks out of place, and leads to awkward cases like this:

 try { ... } { | Exn => ... };
I don't mean for this to sound ranty, or like I'm picking on Reason. I think it's good that Facebook is trying to spice things up in the OCaml community.

avsm 2 days ago 1 reply      
There's a screencast fresh off the presses on the info page at https://ocaml.io/w/Blog:News/A_new_Reason_for_OCaml

I'm finally going to switch away from my ancient nvi setup and use Atom instead! MirageOS recently moved all our libraries over to using the new PPX extension point mechanism in OCaml instead of the Camlp4 extensible grammar. This means that MirageOS libraries should be compatible with Reason out of the box -- so it'll be possible to build unikernels from a slick editor interface quite soon hopefully!

MichaelGG 2 days ago 4 replies      
I started off a bit skeptical with the <- renaming to =. Mutability should be rare enough that <- makes things stand out. But apart from that I think I rather like this syntax, on the whole. Not a fan of semicolons. It also makes me appreciate F#'s #light syntax (now its default). Using whitespace really clarifies stuff, and there's always in and ; for fallback.

What's OCaml's status with multithreading? Are there any proposals for more flexible operators, so there doesn't need to be different operators for different numerics? (F# solves this by allowing inlined functions.)

greyhat 2 days ago 1 reply      
The slowness in Firefox appears to be solely due to this:

 @media (min-width: 1180px) {
   body:not(.no-literate) .content-root {
     background-color: #fdfcfc;
     -webkit-box-shadow: inset 780px 0 #fff, inset 781px 0 #e7e7e7, inset 790px 0 3px -10px rgba(0,0,0,0.05);
     box-shadow: inset 780px 0 #fff, inset 781px 0 #e7e7e7, inset 790px 0 3px -10px rgba(0,0,0,0.05);
   }
 }
Removing it in the Firefox style editor restores normal performance.

Edit: And they have commented out the box-shadow! Hah.

TY 2 days ago 5 replies      
Ok, it might be the end of the day for me and I'm denser than usual, but I can't understand what this is. An OCaml-to-JS transpiler?

Checked this out, but the reason still eludes me: https://ocaml.io/w/Blog:News/A_new_Reason_for_OCaml (pun intended)

hellodevnull 2 days ago 8 replies      
Site doesn't load in Firefox. Works in Chrome.
chenglou 2 days ago 0 replies      
I've worked on the Atom plugin for this, itself written in Reason and compiled to JS using js_of_ocaml: https://github.com/facebook/reason/tree/7f3b09a75cacf828dd6b....

Having worked with Reason, JavaScript, and the bridge between the two, most of my errors seem to fall on the JavaScript side. So I guess the type system's indeed working =).

mseri 2 days ago 1 reply      
I love OCaml, but that's a really nice reshape of OCaml syntax! And apparently things will be interoperable. I am really curious to see where it goes.

EDIT: and they want to use and maintain compatibility with ppx. Great news

ipsum2 2 days ago 1 reply      
Looking at http://facebook.github.io/reason/mlCompared.html it looks like regular OCaml, with a sprinkling of JS syntax.
haches 2 days ago 0 replies      
If you'd like to play with Reason you can do it online here:


Of course, you can also create your own Reason projects.

Paul_S 2 days ago 3 replies      
Website fries the CPU (FF).
nikolay 2 days ago 1 reply      
Nice, but I always wonder why function is abbreviated as the longer unambiguous fun and not just fn?!
akhilcacharya 2 days ago 1 reply      
Do want to learn this - does anybody know any interesting projects that can take advantage of the OCaml ecosystem and functional aspects?
cwyers 2 days ago 2 replies      
I really wish they'd taken the pipeline (|>) operator from F#, if they were going to rework OCaml.
grhmc 2 days ago 1 reply      
I'm seeing "BUILD SYSTEMS RAPIDL" over here on Linux.
robohamburger 2 days ago 2 replies      
I took ocaml for a spin a couple months ago and compared to more recently created languages it seems a bit crufty.

If they can simplify the build system to be on par with something like cargo that would be swell.

Also: having Rust-style traits or Haskell type classes would be amazing. Also, macros that aren't obscure, hard-to-use compiler plugins, please :)

Hopefully it ends up being more than just questionable sugar around ocaml and actually adds some sorely needed language features.

honua 2 days ago 1 reply      
What problems would be well solved by Reason/OCaml?
bjz_ 2 days ago 2 replies      
Would be nice to see modular implicits like those that are being proposed for OCaml. It's a shame to not have any form of ad-hoc polymorphism.
oblio 2 days ago 4 replies      
Has anyone here built something say, over 10k lines in Ocaml? How is the development experience? IDEs, debuggers, linters, deployment, etc.
incepted 2 days ago 2 replies      
Interesting, but since they are designing a revised syntax, I wish they had gotten rid of OCaml's semicolons. These stand out in 2016.
konschubert 2 days ago 1 reply      
> A new, developer experience for rapidly building fast, safe systems.

The comma placement suggests that developer is an adjective for experience.

xvilka 2 days ago 1 reply      
It would be nice if they made it work on Windows platforms. There is already an issue for that[1]. It also depends on the Windows support in OCaml itself and opam[2].

[1] https://github.com/facebook/reason/issues/470

[2] https://github.com/ocaml/opam/issues/2191

SwellJoe 2 days ago 5 replies      
So, I know OCaml is impressively fast. And, I know OCaml is impressively terse ("concise" may be a more positive term). But, I wonder what would make one choose OCaml (or a variant of it like this) over some of the other new or old languages that exhibit some excellent characteristics for modern systems. In particular, a convincing concurrency story seems mandatory. I don't know enough to know if OCaml (or this variant) has a convincing concurrency story, and nothing on the front page of website tells me.

So, why do I want to learn this, rather than, say, Go or Elixir?

swuecho 2 days ago 3 replies      
Does it provide a usable standard lib? If so, I may try to use it in a side project.
johnhenry 2 days ago 0 replies      
Wondering how, or even if, this compares to elm? http://elm-lang.org/
mark_l_watson 2 days ago 0 replies      
Reason looks interesting. I have had a 5 year run of alternating between really liking Haskell, and sometime thinking that my own development process was too slow using Haskell. I am putting Reason on my try-it list.

Documentation suggestion: add examples for string manipulation.

ubertaco 2 days ago 4 replies      
As excited as I was to see a big new thing in OCaml-land, I have to say my excitement died down as I read on.

I don't really see most of the changes as improvements.

Having a different, explicitly-noticeable syntax for mutable updates is nice, because it calls out mutability (which should be used sparingly).

I don't see extra braces as necessarily an improvement, given that OCaml's local scopes are already quite unambiguous thanks to "let ... in". On that note, removing "in" and just going with semicolons removes another "smelly-code-callout" by making it less obvious what's imperative and what's functional.

I actually don't like ambiguity between type annotation and value assignment in my records. It's clear in current OCaml that {a: int} is a type declaration and {a = 1} is a value declaration/assignment. Moving to colons-for-record-values is at best a bikesheddy, backwards-incompatible change for change's sake, and at worst a breaking change that makes code less clear.

Speaking of making code less clear, how is "int list list" not clear? It's an int-list list. As in, a list of int-lists. So of course it should parse as "(int list) list". Why change to backwards annotations? Just to prevent existing code from working as-is, and making people used to reading ML spend extra brain cycles on remembering that your types read the opposite way?

And they make a huge deal out of their type for tuples being "(a, b)" instead of "(a * b)". Yeah, okay, I get it. It's not that big a deal, since people are used to reading product types as, well, products.

The other thing that seems weird to me is the need to change to a "fat arrow" instead of a "skinny arrow", again for no real reason. In fact, it just makes it more likely that you'll confuse it with a comparison operator. Nobody tries to type ">-", but people try to type ">=" all the time. You're just switching for the sake of switching, and it's not an improvement.

Their example code of their replacement for match...with is especially egregious. If you showed me the OCaml snippet and the Reason snippet unlabelled, I would think that the OCaml snippet is the new-and-improved version, since it's much more compact, much less noisy, and reads more like what it's trying to do ("match my_variable with either SomeValue x or SomeOtherValue y").

Another thing they make a lot of noise about is requiring fewer parens in some places. But then, they also require more parens in other places. So...okay? I guess? Not really a win.

And why rename equality operators? Are you really going to tell me that people prefer that their languages have "==="?

yegle 2 days ago 0 replies      
This is the new low of search engine unfriendly :-(
cm3 2 days ago 1 reply      
I miss dead code elimination the most, especially when building code that uses Core.
breatheoften 2 days ago 1 reply      
Is Facebook using mirage or similar ocaml unikernel tool chain? Is part of the goal of reason to make a more approachable syntax available for authoring code that will run inside next-generation containers?
partiallypro 2 days ago 0 replies      
Does anyone Else's Firefox absolutely slow to a crawl on this page?

Edit: just doesn't load at all on Edge. Does load in Chrome/Opera and surprisingly IE 12 but doesn't load the logo's font.

elcapitan 1 day ago 0 replies      
Is there an overview in which regard this differs from "classical" Ocaml?
zem 2 days ago 2 replies      
i noticed this in the examples:

 | List p (List p2 (List p3 rest)) => false /* 3+ */
has the regular list destructuring in pattern match syntax been removed? that's pretty sad, if so - lists are the default data structure in ocaml, and it's worth retaining some special syntax for cons especially in pattern matches.

stuartaxelowen 2 days ago 3 replies      
Can we please keep using parens for function invocation? Leaving them out hurts readability.
querulous 2 days ago 0 replies      
if this had come out five years ago i'd probably be all over it, but i think i'd rather just use rust at this point. different syntax but better safety and it's not like the ocaml ecosystem has a lot to offer
andrew_wc_brown 2 days ago 0 replies      
Everything reads like double talk. Not sure what I would want to use this for.
intrasight 2 days ago 0 replies      
Pretty disappointed that they'd release something that butchers Firefox.
molotok 2 days ago 0 replies      
Fry Firefox RAPID.
aerovistae 2 days ago 0 replies      
fixxer 2 days ago 1 reply      
Why rtop?
ulber 2 days ago 7 replies      
This page is completely unusable due to lag. From the other comments it seems this is FF specific. One would think FB would have the resources to test new pages at least on common browsers before publishing.

Edit: The fix came quickly though.

carapace 2 days ago 0 replies      
Another site that is useless with JS disabled. Nice work.
ClosureChain 2 days ago 0 replies      
I wonder if the people at Propellerheads will sue Facebook for using the name of their software https://www.propellerheads.se/reason
zump 2 days ago 2 replies      
Facebook just won't let OCaml die.
devit 2 days ago 5 replies      
It seems to me that Rust would be pretty much strictly better than this.

In particular Rust has similar syntax, seems to have all Reason's features plus the linear types and regions/borrowing that allow memory and concurrency safety while still being able to mutate memory and not being forced to use GC.

They are aware of Rust since they cite it in their page, so I wonder why they decided to create this instead of using Rust.

It would be nice if they explained this in the FAQ.

I guess it might be useful if you have an OCaml codebase to interface with but don't already know OCaml, but given the relative obscurity of OCaml that seems a pretty narrow use (and also Facebook isn't known to make extensive use of it, afaik).

Play Store and Android Apps Coming to Chromebooks googleblog.com
367 points by ojn  20 hours ago   192 comments top 27
spot 16 hours ago 6 replies      
from the post:

> Schools in the US are now buying more Chromebooks than all other devices combined -- and in Q1 of this year, Chromebooks topped Macs in overall shipments to become the #2 most popular PC operating system in the US*.

that's pretty amazing actually. congrats to google & the chromebook team!

radarsat1 19 hours ago 4 replies      
I'm curious just on the technical side, what does this mean for the many apps that include ARM code? (i.e. apps that use the NDK) Will there be some emulation, or do apps generally ship with multi architecture?

Edit: Ok, the answer is, both. Thanks ;)

caffinatedmonk 18 hours ago 6 replies      
I'm curious why they didn't mention something as game-changing as this in the keynote.
dharma1 5 hours ago 0 replies      
I hope they will open source this, so we would get Android apps on other Linux distros too. That would be a great win for Linux app ecosystem
sharms 20 hours ago 3 replies      
This is a big move and will majorly impact desktop / laptop computing. Now the entire ecosystem of Android apps (even Microsoft Office, Snapchat, Photoshop Express) is going to be available, and arguably this platform is much more complete than say, Universal Apps (Microsoft
gvurrdon 5 hours ago 0 replies      
Does anyone know how permissions would be handled? There are some Android apps I'd like to install on a Chromebook but I certainly don't want them to get access to my contacts.
magnumkarter 20 hours ago 3 replies      
This is great!!! I wonder if it will be possible to install the Play Store in Chromium OS. I know that Chromium has some support for installing Android .apk files.
bonaldi 19 hours ago 3 replies      
No support for the original Pixel? It's more powerful than quite a few on the list. Damn.
stkoelle 19 hours ago 1 reply      
IntelliJ for Android would help a lot of developers ;-)
chrisper 17 hours ago 2 replies      
Is there a way to try out ChromeOS without owning a chromebook?
pawelkomarnicki 7 hours ago 0 replies      
Well, time to shred my "Samsung Chromebook", and maybe get something newer or just give up with this "gazillion of models and revisions" bullshit I hated about Windows years back :/
jimmcslim 16 hours ago 4 replies      
Why are Chromebooks such a US phenomenon... Here in Australia retail availability is pretty dire. I wonder if this development might see that start to change?
headmelted 8 hours ago 0 replies      
For anyone that hasn't yet played with a Chromebook and is interested in this, x86 builds of Chromium OS:


This isn't exactly the same (no Play Store yet), but it'll let you get a feel for the OS and its merits.

pgrote 20 hours ago 1 reply      
While this is a great step forward, I am disappointed in the list of chromebooks supported.

I looked over the list and cannot find a common thread as to what is supported and what isn't. Does anyone know?

My Acer C720 with an i3 isn't on the list, but my Toshiba Chromebook 2 with lesser specs is on the list.

jbigelow76 17 hours ago 3 replies      
I'd be more interested in seeing Electron apps on ChromeOS before Android apps, not expecting that to happen mind you, Electron on ChromeOS probably does nothing to move the Google ecosystem forward.
ralmidani 19 hours ago 0 replies      
Hopefully this leads to the release of ARM devices with more than 32GB of storage.
asimuvPR 19 hours ago 0 replies      
Google: What does this mean for ARC users?
headmelted 19 hours ago 1 reply      
Obviously there's no-one in the world that didn't know this was coming, but even so, I feel for the Remix OS guys.

I assumed at the time their objective was to be acqui-hired by Google, but I can't see why there would be a reason for that now, or how they'd hope to compete in this situation.

Congratulations to the Chrome O/S and Android teams. I was briefly on a Chromebook when my laptop packed in, and but for the absence of solid developer tools, I'd have stayed forever. There's a lot to be said for convenience.

genieyclo 19 hours ago 2 replies      
After the Android Chrome app gets extensions, what's the point of keeping ChromeOS alive? It's the only thing Android's missing that ChromeOS has.
koolba 18 hours ago 3 replies      
Will apps run natively on Chromebooks or will my fart app slow down because it's being emulated?
superobserver 12 hours ago 0 replies      
This is really great news and I hope they execute this right. As liberating as crouton is, I still find myself wanting Android apps for the ease of access.
hackaflocka 18 hours ago 2 replies      
To the Googlers on here -- any idea when it'll come to Chrome browser on other platforms. I really hope Google doesn't artificially delay that to boost Chrome OS penetration.
jimjimjim 18 hours ago 2 replies      
year of the linux desktop?
TazeTSchnitzel 20 hours ago 1 reply      
Coming soon: Chrome OS made into merely an alternative Android home screen, and Chromebooks becoming Droidbooks.
dandare 8 hours ago 0 replies      
Finally Sonos on Chromebook!
ncr100 18 hours ago 0 replies      
I assume Google IAB will be supported on Chromebooks, too?

Cross-device purchase restoration, etc?

_pmf_ 8 hours ago 0 replies      
There's virtually no app that I feel thrilled to use on my laptop.
Academics Make Theoretical Breakthrough in Random Number Generation threatpost.com
381 points by oolong_decaf  2 days ago   155 comments top 23
tptacek 1 day ago 0 replies      
I'm sure this is as important to computer science as the article claims, but not having even read the paper I can say pretty confidently that it isn't going to have much of an impact on computer security. Even if it became far easier to generate true random numbers, it wouldn't change (a) how we generate randomness at a systems level or (b) what goes wrong with randomness.

Our problem with cryptography is not the quality of random numbers. We are fine at generating unpredictable, decorrelated bits for keys, nonces, and IVs. Soundly designed systems aren't attacked through the quality of their entropy inputs.

The problem we have with randomness and entropy is logistical. So long as our CSPRNGs need initial, secret entropy sources of any kind, there will be a distinction between the insecure state of the system before it is initialized and the (permanent) secure state of the system after it's been initialized. And so long as we continue building software on general purpose operating systems, there will be events (forking, unsuspending, unpickling, resuming VMs, cloning VMs) that violate our assumptions about which state we're in.

Secure randomness isn't a computational or cryptographic problem (or at least, the cryptographic part of the problem has long been thoroughly solved). It's a systems programming problem. It's back in the un-fun realm of "all software has bugs and all bugs are potential security problems".

It's for that reason that the big problem in cryptography right now isn't "generate better random", but instead "factor out as much as possible our dependence on randomness". Deterministic DSA and EdDSA are examples of this trend, as are SIV and Nonce-Misuse Resistant AEADs.

(unsound systems frequently are, but that just makes my point for me)

hannob 2 days ago 2 replies      
While this may be an interesting theoretical result it almost certainly has zero practical implications for cryptography.

We already know how to build secure random number generators. Pretty much every real world problem with random numbers can be traced back to people not using secure random numbers (or not using random numbers at all due to bugs) or using random number generators before they were properly initialized (early boot time entropy problems).

This random number thing is so clouded in mystery and a lot of stuff gets proposed that solves nothing (like quantum RNGs) and stuff that's more folklore than anything else (depleting entropy and the whole /dev/random story). In the end it's quite simple: You can build a secure RNG out of any secure hash or symmetric cipher. Once you've seeded it with a couple of random bytes it's secure forever.
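
The parent's "hash or cipher plus a seed" construction can be sketched in a few lines. This is a toy illustration of the idea, not a vetted design -- real applications should read the OS CSPRNG (e.g. os.urandom) rather than hand-roll a generator:

```python
import hashlib

class HashDRBG:
    """Toy deterministic RNG: SHA-256 over a secret seed plus a counter.
    If the seed is unpredictable, distinguishing the output stream from
    random would require breaking SHA-256's PRF-like properties."""

    def __init__(self, seed: bytes):
        # Derive a fixed-size internal key from whatever seed material we got.
        self.key = hashlib.sha256(seed).digest()
        self.counter = 0

    def random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            # Each output block hashes the key with a fresh counter value.
            block = hashlib.sha256(
                self.key + self.counter.to_bytes(8, "big")
            ).digest()
            self.counter += 1
            out += block
        return out[:n]
```

Note this sketch has no forward secrecy or reseeding, which real DRBG designs (e.g. the NIST SP 800-90A ones) add on top.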

oolong_decaf 2 days ago 0 replies      
Here's a link to the actual paper: http://eccc.hpi-web.de/report/2015/119/
electrograv 2 days ago 3 replies      
> We show that if you have two low-quality random sources -- lower quality sources are much easier to come by -- two sources that are independent and have no correlations between them, you can combine them in a way to produce a high-quality random number

"Independent and no correlations" sounds like a crippling assumption if you want to use any two deterministic PSRNGs. How can you possibly guarantee they're completely un-correlated and independent without seeding them with collectively more bits of entropy than you can get out of the combined system?

I'm not sure what "independent" is even supposed to mean for a deterministic sequence, which by definition is recursively dependent.

beambot 2 days ago 3 replies      
Reminds me of the Von Neumann method of using a biased coin to generate unbiased random coin flips: http://web.eecs.umich.edu/~qstout/abs/AnnProb84.html

(Edit: not the algo itself, just the notion of combining randomness.)
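
The Von Neumann trick is short enough to sketch. This is a hypothetical illustration; note it assumes the input bits are independent and identically distributed, which is exactly the assumption the two-source extractor work weakens:

```python
import random

def von_neumann(next_bit):
    """Von Neumann debiasing: draw bits in pairs, emit the first bit of a
    (0,1) or (1,0) pair, and discard (0,0) and (1,1). Both kept pairs occur
    with probability p*(1-p), so the output is exactly fair -- provided the
    input bits are independent and identically distributed."""
    while True:
        a, b = next_bit(), next_bit()
        if a != b:
            return a

# A coin that lands heads (1) 80% of the time:
def biased_coin():
    return 1 if random.random() < 0.8 else 0
```

The cost is throughput: the more biased the source, the more pairs get discarded (an expected 1 / (2*p*(1-p)) pairs per output bit).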

deckar01 2 days ago 0 replies      
> Abstract:

> We explicitly construct an extractor for two independent sources on n bits, each with min-entropy at least log^C(n) for a large enough constant C. Our extractor outputs one bit and has error n^(-Ω(1)). The best previous extractor, by Bourgain, required each source to have min-entropy .499n.

> A key ingredient in our construction is an explicit construction of a monotone, almost-balanced boolean function on n bits that is resilient to coalitions of size n^(1-δ), for any δ > 0. In fact, our construction is stronger in that it gives an explicit extractor for a generalization of non-oblivious bit-fixing sources on n bits, where some unknown n-q bits are chosen almost polylog(n)-wise independently, and the remaining q = n^(1-δ) bits are chosen by an adversary as an arbitrary function of the n-q bits. The best previous construction, by Viola, achieved q = n^(1/2-δ).

> Our explicit two-source extractor directly implies an explicit construction of a 2^((log log N)^O(1))-Ramsey graph over N vertices, improving bounds obtained by Barak et al. and matching independent work by Cohen.


Dagwoodie 2 days ago 8 replies      
What makes randomness so hard? I had this crazy thought awhile back and wondering if it would work out:

Say you took a small disk shaped object like a hockey puck with a window on it and you filled it with sand. 50% white sand and 50% black sand. Inside the puck would be blades that are attached to a motor and rotated slowly to constantly change the pattern. The pattern formed in the window would be truly random wouldn't it? You could mount this to a PCIE card with a camera...

dave2000 2 days ago 2 replies      
What is the possibility that this is an attack on cryptography; convince people that it's safe to produce random numbers this way using an inaccurate "proof" and then have an easy/easier time decrypting stuff produced by anyone who uses it?
wfunction 2 days ago 1 reply      
Could someone explain why XORing the outputs of the two sources isn't optimal?
jaunkst 2 days ago 5 replies      
I have always wondered why we don't introduce physical randomness into cryptography. Let's take scalability out of the question and look at the problem at the fundamental level. If we used a box of sand that shifted each time a random number was requested, and a camera to scan it and produce a number from this source, would it not be more random than any other method? I'm not a professional in this field, I am just truly asking why not..
kovvy 1 day ago 0 replies      
How well does this handle a biased source of random numbers in one or more of the inputs? If someone has set up your random number source to be more easily exploitable (or just done a really bad job setting it up), does combining it with another poor source with this approach mean the results are still useful?
Cieplak 2 days ago 2 replies      
Does this imply that XORing /dev/urandom with /dev/random is a good practice?

PS: Thanks for clarifying @gizmo686. The Arch Linux wiki suggests that urandom re-uses the entropy pool that /dev/random accumulates, so this is indeed a BAD idea.

I found this helpful as well:

Overall, their construction reminds me quite a bit of a double pendulum, which is one of the simplest examples of deterministic chaos.

Houshalter 2 days ago 1 reply      
I read the article and the comments and I'm still confused why this is important.

I mean it sounds trivial. Why not take the hash of the first random number, and xor it with the first random number. Then optionally hash the output and use that as a seed for a RNG. If any part of the process isn't very random, that's fine, it's still nearly impossible to reverse and doesn't hurt the other parts.

csense 2 days ago 1 reply      
"...if you have two low-quality random sources...you can combine them in a way to produce a high-quality random number..."

I tried to skim the paper, but it's really dense. Can someone who understands it explain how what they did is different than the obvious approach of running inputs from the two sources through a cryptographically strong hash function?

marshray 2 days ago 1 reply      
How is this different than taking two independent bits with < 1 bit entropy and XORing them together to combine their entropy? (up to a max of 1 full bit)
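
For a single bit from each source it isn't very different: XOR is the textbook combiner, and the piling-up lemma quantifies how the biases multiply. The catch is general n-bit sources -- bitwise XOR is not an extractor there (for example, two sources each confined to even-parity strings have high min-entropy, yet their XOR always has even parity). A sketch of the single-bit case:

```python
def xor_one_prob(p1: float, p2: float) -> float:
    """P(A xor B = 1) for independent bits A, B with P(A=1)=p1, P(B=1)=p2.
    Writing p_i = 1/2 + e_i gives 1/2 - 2*e1*e2 (the piling-up lemma):
    the bias shrinks multiplicatively toward a fair coin, and one fair
    input makes the output exactly fair -- but you only ever get one
    output bit, and only if the sources really are independent."""
    return p1 * (1 - p2) + p2 * (1 - p1)
```

E.g. two 70%-biased bits XOR to a 42%/58% bit: better, but still biased.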
wfunction 2 days ago 4 replies      
Isn't "Independent and no correlations" redundant? How can two random variables be independent but correlated?
nullc 2 days ago 0 replies      
But can anyone extract the algorithm from the paper?


mirekrusin 2 days ago 0 replies      
Can someone explain why it's considered so hard to get randomness? I mean you can take old radio and you hear random noise, is it hard to create tiny antenna in the computer?
bootload 2 days ago 0 replies      
another article via UT (Uni. Texas), "New Method of Producing Random Numbers Could Improve Cybersecurity" ~ http://news.utexas.edu/2016/05/16/computer-science-advance-c...
Bromskloss 2 days ago 1 reply      
> A source X on n bits is said to have min-entropy at least k if

Can a rigorous definition of "source" be found somewhere?

nullc 1 day ago 0 replies      
But can anyone extract an algorithm from the paper? :)
roschdal 2 days ago 5 replies      
We show that if you have two low-quality random sources -- lower quality sources are much easier to come by -- two sources that are independent and have no correlations between them, you can combine them in a way to produce a high-quality random number,

So Math.random() * Math.random() ? :)
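
Not quite -- besides Math.random() not being a cryptographic source, multiplying two uniform draws doesn't even keep the distribution uniform. A quick check (Python as a stand-in for JS):

```python
import random

def product_samples(n=100_000, seed=123):
    """The product of two independent uniform [0,1) draws is not uniform:
    its density is -ln(x) on (0,1), so mass piles up near zero and the
    mean is 1/4 instead of 1/2."""
    rng = random.Random(seed)
    return [rng.random() * rng.random() for _ in range(n)]
```

Roughly a third of the samples land below 0.1, where a uniform source would put only 10% of them.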

ninjakeyboard 1 day ago 0 replies      
praise RNGesus!
LinkedIn password leak kaspersky.com
386 points by trumpeter  21 hours ago   197 comments top 34
kazinator 14 hours ago 2 replies      
> If you're not sure how strong your password is, test sample passwords with our password checker here.

That is irrelevant in the face of leaked passwords; what matters most in that situation is that your password is something other than your leaked one.

If the passwords were leaked due to being stored in plain-text, no amount of complexity would protect them, obviously.

Don't use the same password on multiple sites. If your LinkedIn password is leaked, you don't want that same password to grant access to your bank account. That just as important than how strong the password is, if not more.

If some site has suffered a password leak, and you're a user of that site, you must change the password on that site, and also on all other sites where you happened to use the same password. Do it as quickly as possible without worrying how strong the new passwords are. Then change later to stronger ones.

A password's strength is inversely proportional to how often you change it. For instance, if you happen to change a password every week (for the sake of argument---few people likely do), and it takes a month to crack on the best available hardware cluster, then you're probably okay. If you change only once a year, you're much less okay; a surreptitious password breach could happen, and two months of cracking later, the attackers have your password. Meanwhile, you're still months away from changing it, not knowing there had been a breach.

By the time users learn about a breach---if ever---they should assume that their passwords have been cracked, because some unknown amount of time has passed between the actual break and the discovery. The discovery will likely stem from the fact that some of the "lower hanging" passwords have been cracked and accounts start being misused. The site admins can then only guess from various circumstantial information (logs or whatever other breadcrumbs were left behind) about when the leak might have occurred.

warrenpj 5 hours ago 1 reply      
The best security that an individual can get from passwords is clearly achieved by using a password manager and generating a unique random password for each site, and changing high-value passwords periodically. (It's arguably already impossible for a human to generate or remember enough good passwords, and either way it gets harder as computers get better at guessing human-generated passwords.)

However, from the point of view of someone implementing an authentication system, passwords on their own are broken. There will be a significant fraction of users who re-use their password at a site with minimal-effort security. If you subscribe to the idea that computer professionals have a moral duty to safeguard people's private information entrusted to them, then password-only authentication is just broken.

The solution is to either: spend the money to implement a multiple factor authentication system (with a secure password database and fraud detection) or use a federated identity service. (Even just sending a one-time login code via email is fine). The latter is simple and takes even less effort than implementing a password system from scratch.

There should be fines (at the very least) for having an unsalted password database with more than X number of users.

wglb 18 hours ago 4 replies      
"1: Change your password. RIGHT NOW. If you're not sure how strong your password is, test sample passwords with our password checker here." Seriously?

Keep in mind that these estimates are based on some bogus entropy estimation. If a password hacking guy runs the correct dictionary past the hashes your password generates, the real crack time might be, well, as small as the first guess tried. For example, run the passphrase Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn1 past the Kaspersky bruteforce estimator and you get 10,000 centuries. But this is clearly false, as indicated in http://arstechnica.com/security/2013/08/thereisnofatebutwhat.... They clearly "cracked" this in far less time: "in a matter of minutes".

stephenitis 18 hours ago 0 replies      
useful tool to check your emails https://haveibeenpwned.com

https://haveibeenpwned.com/PwnedWebsites hasn't been updated yet with this yet because the list hasn't leaked entirely.

Also change your password: https://www.linkedin.com/psettings/change-password

benologist 19 hours ago 3 replies      
I got an email from them this morning about this, it just smells like all their other junkmail begging me to +1 their active users.

Why don't they invalidate the passwords all at once instead of letting -- someone -- use the potentially compromised passwords again...

oxguy3 20 hours ago 3 replies      
Woo, I created my LinkedIn profile in 2015, so I should be safe since the leak is supposedly from 2012. If anyone else isn't sure when they made their LinkedIn, you can see your join date here (ctrl+f "Member since"): https://www.linkedin.com/psettings/
Tharkun 20 hours ago 2 replies      
I got an e-mail from LinkedIn today saying that I would be forced to reset my password upon my next login. They didn't say why. I guess this explains it.
koyao 10 hours ago 1 reply      
And LinkedIn is now asking me to enter my phone number:

"Add an extra layer of security to your account. Add your phone number."

Leaking my email / password is bad enough; I'm not going to give them my phone number for more damages!

benzor 14 hours ago 2 replies      
Question for the more security-savvy among you: If the leak happened in 2012 and I've changed my password since then (it's listed in your account page [1]), do I need to change it again?

Logic tells me I've got nothing to worry about, even considering potential password reuse, if they've all changed since then.

[1] https://www.linkedin.com/psettings/account

luso_brazilian 20 hours ago 1 reply      
Considering the amount of "growth hacking" LinkedIn uses (used?) to do, sending too many emails to too many people, this breach could be much more dangerous than usual.

People raise eyebrows when they get phishing emails, but when one purportedly comes from LinkedIn, vouched for by your social and professional circle, it becomes much more credible and easier to fall for.

zeveb 17 hours ago 5 replies      
Aaaand that's why I use 'pwgen -s 22' to generate a unique password for every single site I use. I don't care if a salted password database is stolen; heck, as soon as I change my password I don't even care if a plaintext database is stolen.

Why -s? Because it means each password is a complete word, and may easily be double-clicked in a password list (which is nice, because selection is copy in X).

Why 22 characters? Because 22 mixed-case letters and digits are just over 128 bits of entropy.

Say it with me:

 pwgen -s 22
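
The 22-character / 128-bit figure in the parent checks out; a quick sketch of the arithmetic (plain Python, using only the numbers the comment itself gives):

```python
import math

# pwgen -s draws each character uniformly at random from upper- and
# lowercase letters plus digits: 26 + 26 + 10 = 62 symbols.
ALPHABET = 62
LENGTH = 22

# Entropy of a uniformly random string is length * log2(alphabet size).
bits = LENGTH * math.log2(ALPHABET)
print(f"{bits:.1f} bits")  # ~131.0 bits, just over the 128-bit target
```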

jasonpeacock 18 hours ago 2 replies      
Also, why is the 2FA option hidden under "Privacy" and not right next to the Change Password option?

You'd think they would want to advertise 2FA better...

may 19 hours ago 0 replies      
You can see how long you've been a LinkedIn member by going to your Privacy & Settings page, where it displays at the top.


tudorw 4 hours ago 1 reply      
I've read zero reports of people breaking into houses, finding a piece of paper down the back of the cabinet with lots of passwords on it and no site names, then using those passwords randomly to gain access to an unknown system... A 'software' or 'online' password manager seems like a terrible idea: all your eggs in one convenient basket. If Sony and VISA and the NSA are unable to secure their systems 100% of the time, I doubt the maker of your password manager will fare much better over the long term.
sleepychu 7 hours ago 0 replies      
I'm pretty sure the right move for me is going to be to just delete my account. I mainly just receive recruiter spam from it.
joelthelion 19 hours ago 4 replies      
Do we know how strong their hashing scheme was?

Edit: SHA-1... You'd think a site as big as LinkedIn would have strong hashing...
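
For anyone wondering what unsalted SHA-1 means in practice, a minimal stdlib sketch (the password is made up, and note that even salted SHA-1 is too fast for real password storage; bcrypt/scrypt/PBKDF2 add a work factor on top of the salt):

```python
import hashlib
import os

def unsalted_sha1(password: str) -> str:
    # Reportedly how the leaked LinkedIn hashes were stored: every user
    # with the same password gets an identical digest, so one pass over
    # a dictionary cracks all of them at once.
    return hashlib.sha1(password.encode()).hexdigest()

def salted_sha1(password: str, salt: bytes) -> str:
    # A random per-user salt forces attackers to attack each hash
    # separately and defeats precomputed rainbow tables.
    return hashlib.sha1(salt + password.encode()).hexdigest()

print(unsalted_sha1("linkedin123") == unsalted_sha1("linkedin123"))  # True
salt_a, salt_b = os.urandom(16), os.urandom(16)
print(salted_sha1("linkedin123", salt_a) == salted_sha1("linkedin123", salt_b))  # False
```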

heartsucker 17 hours ago 0 replies      
I'm going to use this to recommend a CLI for strong, memorable passwords (if you're not using something like KeePass).


  $ pip install diceware
  $ diceware -n 8 -d ' ' --no-caps
  proton hunts blake 31 pope pivot taped plain
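
The `diceware` package above is real, but the core idea fits in a few lines of stdlib Python. This sketch uses a toy placeholder wordlist; the real tool samples from a list of roughly 7776 words (6^5 five-dice outcomes), so 8 words give about 8 x 12.9, roughly 103 bits of entropy:

```python
import secrets

# Toy stand-in for the real ~7776-word diceware list.
WORDLIST = ["proton", "hunts", "blake", "pope", "pivot", "taped", "plain", "orbit"]

def passphrase(n_words: int, delimiter: str = " ") -> str:
    # secrets.choice draws from the OS CSPRNG; random.choice would not
    # be safe for generating credentials.
    return delimiter.join(secrets.choice(WORDLIST) for _ in range(n_words))

print(passphrase(8))
```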

jedmeyers 20 hours ago 2 replies      
> test sample passwords with our password checker here.

Do NOT do that with your exact password though :)

vermooten 18 hours ago 3 replies      
Who cares if their LinkedIn account gets hacked? In my case they'll be able to see 500+ recruitment agents I've never heard of as my 'contacts'.
electic 19 hours ago 1 reply      
Folks, this is becoming a common occurrence. Use a password generator and password vault to protect against this type of scenario.
noja 2 hours ago 0 replies      
> test sample passwords with our password checker here.

And you just lost my trust Kaspersky, congratulations.

JumpCrisscross 14 hours ago 0 replies      
A useful HaveIBeenPwned feature would be a list of pwned passwords connected to my email address.

Yes, I know - don't reuse and use a password manager. But not everyone follows best practice. Knowing which password motifs to absolutely not reuse would be helpful.

mkhpalm 13 hours ago 0 replies      
What's interesting to me is that their spam about changing your password showed up on a whole bunch of group email addresses I am a member of. So at some point LinkedIn went and harvested email addresses that got to my inbox and made a bunch of bad assumptions to include those as secondary addresses for me. I can only assume it was their mobile app, which is now forever uninstalled on all my devices. I simply cannot have them doing that.
awinter-py 19 hours ago 0 replies      
Beyond LinkedIn logins, they also have a zillion email passwords from the bad old days before OAuth.
jjm 11 hours ago 0 replies      
For those that have forgotten, https://news.ycombinator.com/item?id=4073309

Back then there were issues. If I remember correctly, there was some nodejs even after this with no bcrypt.

20andup 7 hours ago 0 replies      
Just points out the fact that we should use password generators for all websites that require one.
gggggggg 13 hours ago 0 replies      
Anyone know how I can get a copy of the list? I want to see if the email/password combination I used back then is still in regular circulation on my other sites.
ILoveMonads 10 hours ago 0 replies      
I'm amazed LinkedIn is as big as it is. They have a big, new, building in Sunnyvale and lots of employees--too many it seems for a simple social network. I drive past their HQ a few times a week when I'm in Sunnyvale and see their employees, who don't look like other tech employees, waddling down the street to the McDonalds on the corner of Mathilda and Delray.
DyslexicAtheist 15 hours ago 0 replies      
Is it even verified that this data isn't just the warmed-up stuff from LinkedIn's 2012 breach surfacing again? This is quite common these days.
ryanlol 20 hours ago 1 reply      
My theory is that this data leaked via custhelp.com, the filename of the data dump I have (linkedin.cfg) seems to support that.

This would also explain LinkedIn's initial "confusion" regarding the hack.

misiti3780 17 hours ago 2 replies      
This might be a dumb question, but if the password was unique to that account AND you have 2-factor auth enabled, is there any reason you need to change the password?

So if some hacker somehow manages to reverse a salted-bcrypted hash of my unique password, he still can't get in without my cell phone.

adamredwoods 16 hours ago 0 replies      
2-step authentication?
open-source-ux 19 hours ago 3 replies      
As someone who isn't versed in security issues, can anyone explain how security breaches like this one (and Adobe etc.) occur?

I'm assuming (and I may be completely wrong) that some kind of software monitors whether the database of customer details is being downloaded. If a download is detected, an alert is issued. Does software like this exist? Or are there other measures that guard against these data breaches?

MikeJougrty 9 hours ago 1 reply      
So basically, if I get interviewed by a company and am asked why I don't have a LinkedIn account, is it legitimate to respond that LinkedIn sucks in many different ways, including password breaches?
A language that compiles to Bash and Windows Batch github.com
381 points by onecooldev24  18 hours ago   135 comments top 31
thinkpad20 16 hours ago 3 replies      
The idea is interesting, but ultimately the utility of this seems limited. The differences between Windows and Unix are more than just the shell language involved; shell scripts are typically deeply intertwined with the system in question, to the point where it's often not the case that a bash script will even run reliably across different Unix systems, much less on Windows. Also, you can already run bash on Windows, so once again the problem doesn't seem to be the language per se. I can only imagine how difficult it would be not just to design a script that would work properly on both platforms, but to debug the cases where it didn't.

Also, as others have noted this language doesn't support redirection, which in my mind makes it practically useless beyond toy applications. I've written hundreds of bash scripts and I don't think any of them didn't make heavy use of pipes and redirection. I'm also not sure if the language supports running processes in the background, traps, magic variables like $!, extra options, subshells, file globbing, etc, all things that many real-world scripts use. Bash scripts often use many programs available on Unix systems as well, such as find, grep, readlink, xargs, and other things that aren't part of the language per se. Unless those are ported over too, writing useful scripts would be almost impossible.

Finally, I don't think the author has made a convincing argument that such a language is even needed, when languages like perl/python/ruby exist for when the task is complex enough to require a more powerful language. On the other hand, if the project is (as I suspect) purely for fun and interest, then by all means :)

scandox 18 hours ago 3 replies      
You've got to admire this and at the same time you can't help but see the shadow of two more letters on the end of its name...
BYVoid 15 hours ago 3 replies      
I am the author of Batsh. Batsh is a toy language I developed 2.5 years ago in a hackathon. It was just for me to play with OCaml. Feel free to play with it at http://batsh.org/.
jtwebman 17 hours ago 2 replies      
The project hasn't been worked on in over a year. Also, it isn't the language that's much different but the tools. Are you going to write grep on Windows? If not, the language really doesn't matter.
niutech 50 minutes ago 0 replies      
There is also Pash: PowerShell + Bash (http://pash.sourceforge.net)
meunier 24 minutes ago 0 replies      
Opportunity to name this batshit wasted.
legacy2013 17 hours ago 2 replies      
This is cool, but I hope the new Linux Subsystem in Windows 10 will propagate enough that everything can be written in bash
xrstf 18 hours ago 2 replies      
Is there a way to get this as a precompiled Windows binary? I really want to give this a try, but I'm just sooo lazy and don't want to install OPAM if there's maybe a way around that.
pm 18 hours ago 2 replies      
Was hoping the website was a .it domain.
onedognight 17 hours ago 0 replies      
I was disappointed to see that the compiler was not written in batsh and therefore not self hosting. Seems like a missed opportunity.
Someone 17 hours ago 1 reply      
Weird that it doesn't seem to support basename(1) and dirname(1) and their Windows equivalents (e.g. %~dp0)

Quite a few of my scripts need to find files relative to the location of the script, or compute the path to an output file by replacing the extension of an input file.

tomcam 16 hours ago 1 reply      
I must have done it wrong. In the late 80s, I created a Windows batch-to-executable compiler called Builder (extending the language greatly, to the point of menus as language constructs). Made a decent living off it, too!
nikolay 15 hours ago 0 replies      
I posted Batsh long ago, but it's totally impractical. The multiplatform PowerShell is the best option. Anyway, we need a new shell, and NGS [0] looks like it. Neither zsh nor Fish offers what a shell scripting language needs, so that you don't have to use Python, Go, or another language for slightly more complex stuff!

[0]: https://news.ycombinator.com/item?id=11734622

moondev 16 hours ago 0 replies      
Since docker is on windows now, this should run script.sh in current directory pretty easily.

docker run -v %CD%:/opt -w /opt debian:jessie ./script.sh

kelvin0 13 hours ago 0 replies      
Cool stuff! Would have been useful for me when all I knew was C++! Unfortunately, I use Python for exactly this type of scripting. Oh and I also use it for stuff like Django, lxml, requests, wxPython, OpenCV, PyGame, NumPy, Reportlab ...
fpoling 9 hours ago 0 replies      
I like the idea of compiling to bash from a much saner language even without Windows compatibility. With containers and slimmed-down OSes like CoreOS, Python and friends may not be available. Besides, for some tasks Python startup time is just unacceptable, so using Bash could be a good option if not for the awkward syntax, to put it mildly.
jheriko 15 hours ago 1 reply      
how is this top item? seems unfinished, unpolished, not very useful and very run of the mill as an achievement.

no easily found windows installer and has obscure dependencies whilst claiming it has none etc. etc.

zwieback 18 hours ago 4 replies      
Looks nice but wouldn't you just use something like Python nowadays?
juped 17 hours ago 1 reply      
Excellent idea. The only thing I thought was "why not PowerShell", but XP compatibility is probably the reason. (It exists but isn't preinstalled for that version.)
youjiuzhifeng 8 hours ago 0 replies      
Cygwin provides another way to run the bash shell on Windows. It is really helpful, with full-featured versions of commands like 'find', 'sed', and 'awk'.
incepted 14 hours ago 1 reply      
Isn't it ironic to write this tool in a language that is not available on Windows[1]?

[1] "Windows support is comming soon." https://ocaml.org/docs/install.html

exabrial 17 hours ago 0 replies      
"Bash is the assembly code of Unix?"
emmelaich 6 hours ago 0 replies      
This would be a candidate for the scripting language in redo

Paging @apenwarr

marshray 17 hours ago 0 replies      
I was yearning for something like this literally just last night. Still am, tempted to give it a try.
evincarofautumn 16 hours ago 1 reply      
How should I pronounce the name? I read batsh as a homophone for batch [b], which is unfortunate. Is it supposed to be pronounced as two syllables [b.] or something like that?
eagsalazar2 16 hours ago 2 replies      
I've never tried but can't you already just run your node or ruby or whatever scripts on both Windows, Mac, Linux? Is there some reason targeting bash and batch is important?
knocte 12 hours ago 0 replies      
Now that GNU/Windows has become a thing (https://msdn.microsoft.com/en-us/commandline/wsl/about), the idea of Batsh is obsolete to me.

And anyway, I have already been using cross-platform .fsx scripts for quite some time. KTHXBYE

sedatk 16 hours ago 1 reply      

  @echo off
  set java=/usr/bin/java
that's not how it works. that's not how any of this works.

ricksplat 6 hours ago 0 replies      
A couple of slight nitpicks: isn't this more like a translator? I understand a "compiler" to be something that condenses high-level structures down to more fundamental CPU-oriented structures (as opposed to an "interpreter", which takes the structures as presented). If you're compiling into Bash or Windows batch, you're converting from one high-level representation into another, though you could debate whether these are technically "high level". I will concede that this Batsh language does indeed look a lot nicer than either; it seems to be of the "general purpose" style of C or JavaScript rather than the more targeted style of bash or batch, which are specifically for the domain of operating system scripting. I guess this might mean it's actually lower level. The whole thing is pretty cool though :-)
chris_wot 16 hours ago 1 reply      
I love the fact that someone, somewhere needed this, and then someone, somewhere created it.
agjmills 5 hours ago 0 replies      
Relevant XKCD: https://xkcd.com/927/
Open Whisper Systems Partners with Google on End-To-end Encryption for Allo whispersystems.org
328 points by ThatGeoGuy  1 day ago   212 comments top 14
robert_foss 1 day ago 7 replies      
To me it seems like Open Whisper Systems are accepting a lot of concessions in order to have Signal included into products. The trust I once had for moxie is quickly dissipating.

* Privacy is only provided in Allo in a secondary mode. Not by default.

* Federation of the Signal protocol has been rejected for non-technical reasons.

Also, on a personal note, the desktop client requiring chrome is pretty awful.

cm3 1 day ago 4 replies      
Has anyone given this


more thought and whether one should avoid Signal and work with a more friendly project that doesn't seemingly fail at its desire to have widespread use of the protocol and actually tried to sue WireApp? WireApp's now approved as a non-infringing implementation in Rust, so that's great for reliability.

Edit: The suing part was initiated by Wire as a response to Moxie demanding GPL compliance over their claim Wire is infringing. I got that backwards.

tptacek 1 day ago 7 replies      
This is fantastic news. The two largest messaging platforms on the Internet will both be using Signal protocol.

I could ask for more: E2E could be the default for Allo, and it isn't. That's not great. But the E2E you get when you ask for it will apparently be best-in-class.

Jarwain 1 day ago 2 replies      
What I'm curious about, and think would be really neat, is if one could take advantage of the shared Signal Protocol to send messages cross-platform. Specifically, sending an encrypted message to a Whatsapp user from Allo. Or to a Signal user from Whatsapp. Or any combination/permutation really.
NetStrikeForce 1 day ago 1 reply      
I'm not sure I got it right.

Is Google going to be scanning all my conversations to give me suggestions on what to say next? Really?

I understand the price of things like Gmail, where I get a robust email system in exchange for Google scanning my emails and mining my data. I got something very good from Google; they got my data. Not the best deal I ever made, but it has (had?) a strong appeal.

On the other hand I don't understand this Allo thing: There's no appeal in the smart assistant, it doesn't bring anything I want to have.

Roritharr 1 day ago 2 replies      
I really wonder what the people of allo.im are thinking now.
superkuh 1 day ago 3 replies      
Does this require a phone number like the rest of Open Whisper Systems products?
lawnchair_larry 1 day ago 0 replies      
For someone concerned about privacy, it's baffling to me that we'd be forced into sharing our phone number in order to communicate.
sigmar 1 day ago 0 replies      
Great news! Hopefully this also means that identity verification (through a key fingerprint) will be available in Allo (and in Duo?)
mahyarm 1 day ago 0 replies      
I wonder how many other signal protocol integrations are in progress...
chinathrow 1 day ago 4 replies      
That is awesome - now we also need to kill metadata collection. Is this feasible?

Oh and off-the-record was there on Hangouts/Gtalk before - I used it but the chats were replicated across clients (e.g. Pidgin vs gmail.com) - so not really off-the-record (i.e. they lied).

dang 1 day ago 1 reply      
Please don't do this here.
dang 18 hours ago 1 reply      
Please don't do this. Personal attacks are not ok on HN, regardless of how wrong or annoying someone is.

We detached this subthread from https://news.ycombinator.com/item?id=11728339 and marked it off-topic.

p0ppe 1 day ago 5 replies      
Why didn't Google just develop this in house? It almost feels like they're admitting to having no credibility on privacy without an external partner.
Bootcamps vs. College triplebyte.com
367 points by kwi  22 hours ago   403 comments top 80
brudgers 20 hours ago 8 replies      
There are several factors that don't enter this analysis.

1. Bootcamps can be selective over a range of non-academic criteria such as interview skills, personal hygiene, and prior work experience. Or to put it another way, unlike a public university, a boot camp can select for culture fit both in its internal cohorts and in the workplaces it targets.

2. Bootcamps tend to attract people with previous work experience: someone more likely to have several years of working to keep a roof over their head than a recent CS grad. There's a difference between a junior programmer with their first real job and a junior programmer who has spent six years working crappy jobs [or good ones].

3. Bootcamps have much more latitude to train for employment and employability. Listening to Jeff Meyerson's hours of bootcamp love songs, those interviews have left me with the distinct impression that doing so is common.

4. Bootcamp grads may come out with a stronger alumni network that can provide recent feedback about interview processes like Triplebyte's. Going in with some idea of what's coming is likely to produce better results.

5. Bootcamps don't have to report their "failures". There's no independent oversight or accountability of the sort common in university education. A "C student" may simply find it impossible to graduate a bootcamp. The bootcamps are free to shape their "graduate" pool however they wish.

lloyd-christmas 21 hours ago 5 replies      
How is this possible?

I think one key aspect that is missing is that boot camp graduates aren't straight off the barista lineup. I took one at age 28 after having worked in a technical role in finance since undergrad. The average age of my class was probably 29. Beyond just time in the workforce, I had a double major in math and economics with a minor in applied statistics. Had I dropped "Behavioral Economics" and taken "Data Structures" along with some other random course, I could have switched my Econ major to a minor in CS. Many people in my class were of a similar background.

madmax96 21 hours ago 10 replies      
I have a slightly more pessimistic view of the situation:

Sure, bootcamp grads can write a web application just fine; after all, it's usually only CRUD. But what value are they bringing to an organization? Why would I pay them the same amount as a college graduate who undoubtedly has more total knowledge not only in CS, but in other areas as well? Ideally, a college should expose students to a diverse range of knowledge, each tidbit providing additional value to an organization. If I just wanted an application constructed, I could offshore the job and get it done cheaper.

Yes, a well-run bootcamp might be a better __coding__ education than a computer science degree, but coding is the easy part. There are other valuable skills that aren't being taught (i.e. the ability to communicate clearly, how to do research, how to learn independently) that make an organization strong.

We aren't in the coding business, we're in the building business. Code is simply a means to an end.

AlldenKope 16 hours ago 1 reply      
Companies focus too much on attracting talent, not enough on developing it.

If both of these screened avenues of entry to software development are as promising as these metrics indicate (each with their pros and cons) here are some potential larger takeaways for companies:

1) Invest in the continuous development of your employees, regardless of their background and seniority

2) Hire for teams, and diversify teams with both CS and BC grads

3) Hire more people in general (maybe on a probationary period)

Fit to small teams with the goal of cultivating experientially diverse teams, and spend significant time developing employees - junior and senior.

Any intellectual work should involve continuous learning and development. If the company's focus is restricted to current projects, or on the bottom line, or if managers enforce strict division of labor, an organization will warp to optimize for those metrics and become less adaptable to inevitable changes in the market (or within the company) and the company will fail to compete - or at minimum incur major opportunity costs.

What these metrics suggest is that if you take relatively successful candidates and invest in their individual development, both in depth and breadth, that investment will pay off. You'll create engineers who find better solutions to problems and - more importantly - who find better problems to solve.

HNcow 21 hours ago 6 replies      
I'm in the process of hiring a junior position and have no bias towards college grads or bootcamp grads. The only negatives towards boot camp grads I've seen so far is:

1) One candidate had no idea what the terms "Class" or "OOP" even meant. I'm FINE with them not understanding stuff like sorts/advanced data structures, but he ACTUALLY had 0 idea what an int was. No lie!

2) I wish there wasn't such a heavy reliance on MongoDB in most of these programs. Some do have SQL as well, but I feel like 80% of workplaces will be dealing with SQL, so I'm not sure what the focus on Mongo is all about if the purpose of these programs is to make you hireable. I think it's that it's an easier concept to relay since you're working with JSON everywhere already, but I've seen a bunch of people have a very strong bias towards Mongo to the point where they seem to not understand why you would even use SQL.

3) This part might get me in trouble here, but we are a small company in NJ and budgeting 50k for the junior zero-experience position. Most of these bootcamps in Brooklyn or Manhattan instill that you should be making a minimum of 60k and not even look for anything else. I disagree with that personally, but I realize it is possible for grads to make this (especially in NYC). I've just come across a few that scoff at us for the pay we have, and I do understand it, but some of my higher ups who don't really feel comfortable with the bootcamp concept don't think they are worth it.

Obviously there are a lot of pros with hiring them as well. I think typically they are the more qualified candidates skill wise. None of the ones we've come across have been a great fit so far though, but I think it's because of how close to NYC we are. These programs are based there, and we have trouble competing with the salaries there. That's why we have been having more luck finding college grads from the NJ area though, they don't have these kind of higher expectations.

kemiller2002 21 hours ago 6 replies      
Boot camps have their place, but they are not a replacement for a traditional CS degree. I have met good and bad programmers from both types of programs (some from well respected colleges who I still wonder how they exactly passed), but here's the thing, I don't care about practical skills. I care about the person being able to think.

All those concepts that they teach in CS aren't about knowing the name of an algorithm; they're about thinking abstractly. I honestly don't care if a recent grad knows how to use IDE x or even much about source control. I can easily teach them that. I can't easily teach a person how to understand pointers or pass functions as parameters. I don't need someone who can write code; I need someone who can look at a problem and realize that we can cut the amount of work we have to do by understanding programming concepts at an abstract level. It is very hard to achieve this in a 12 week course. Can some people do this? Sure, they may have the background from a previous career that aids them in this, but they are the exception and not the rule.
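
For readers outside the field: one of the concepts the parent names, passing functions as parameters, is a one-screen demo in any language with first-class functions. A Python sketch (the function names are just illustrative):

```python
def apply_twice(f, x):
    # f is an ordinary value here: it can be passed in, stored in a
    # variable, and called later like any other function.
    return f(f(x))

def increment(n):
    return n + 1

print(apply_twice(increment, 3))             # 5
print(apply_twice(lambda s: s + "!", "go"))  # go!!
```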

ammon 22 hours ago 6 replies      
I'm happy to answer any questions about this (I expect it to be controversial). When we started Triplebyte one year ago, I was pretty skeptical of bootcamps. Doing credential-blind interviews and seeing what some bootcamp grads can do, however, has won me over. Clearly there are a lot of bad bootcamp grads (and probably a lot of bad bootcamps). But the model is working really well at the top.
Jormundir 19 hours ago 2 replies      
These results aren't very surprising because this is about interviewing performance. The goal of bootcamps is "teach you enough to get a job"; they're basically gaming the interview process by teaching to the test. University programs on the other hand are "teach you CS theory"; learning to interview well is up to the student and the specific school's offering of interview training.

I think there's a strong argument to make that university programs are too focused on theory, when the vast majority of their students are going to go out and get practical engineering jobs. I don't want the pendulum to swing too far to the practical side, though, because then you lose the long-term benefits of getting a CS degree. Although, schools can certainly buff up their practical material.

Anecdotally; when I participate in hiring, I tend to discount the bootcamp grads. Maybe it's unfair, but my experience hiring them has been that they know how to interview well, and know their tools well, but when you compare them a year in, they're pretty far behind their university counterparts. I see a plateau, where it's hard for a lot of bootcamp grads to move from doing generic web development to designing more challenging systems. Obviously it depends on the individual, but this seems to be a categorical struggle for bootcamp grads with little technical background. A lot of companies really just need more people doing web development, so being open to the bootcamp pool is essential, and ruling out bootcamp grads is silly.

norea-armozel 1 hour ago 0 replies      
My employer has been hiring a few people from some local coding bootcamps here in the Minneapolis area. Most of them are very decent at programming, so I'm not sure if they had any experience prior to their bootcamps or not, but I can't say I have any complaints about those they've hired. Never had to fix any of their code since they've been on the job either. Sometimes they need more help since they didn't get the discrete structures or software architecture knowledge that I did from my traditional CS degree. Honestly I think that should be something you pick up on the job or have been taught in high school (I'm biased of course).
jhchen 20 hours ago 3 replies      
It was not long ago that Computer Science degrees themselves faced a similar challenge, against more well-rounded liberal arts programs championed and prized by the Ivy League. Today MIT and Stanford are ahead on the strength of their more practical engineering degrees. The data from Triplebyte supports the same narrative, just in greater granularity: businesses value practical skills.

There is value in being balanced and in diversity, but this applies to teams, not necessarily individuals. Not everyone on your engineering team needs to be an architect. After your globally distributed, fault tolerant, realtime, highly available system is designed, somebody's got to build it. And most startups or software teams have no business even trying to design such a system in the first place.

In the US, my generation was told we all needed four year degrees. We dont. Some jobs and some roles certainly but the entire population of future adults?

There is an engineering shortage in the US because everyone was too busy getting four year degrees in more well rounded fields. Meanwhile Apple needs tens of thousands of engineers that could have been trained by two year vocational programs that the US was apparently above for our children, and thus cannot meet their business needs.

And yet this data from Triplebyte is incredibly encouraging because while we screwed up the educational policy, it may not be so difficult to fix.

WWKong 19 hours ago 2 replies      
Go to college. Life is long and it is not about passing your first interview. Real world is complex and ever changing. The point of going to college is not to acquire coding skills to pass the interview. It is about facing real world challenges: people, responsibilities, complicated decisions, uncomfortable situations etc. And hopefully at the end of it you are better prepared to take on life. It is a harder path than going to a coder factory. Take the hard path.
caconym_ 20 hours ago 1 reply      
It makes a lot of sense that bootcamp grads would outdo fresh college grads on "web system design"; they've presumably spent most of their bootcamp time focusing heavily on web systems. Stuff like load balancers/reverse proxies, distributed message queues, noSQL DBs, etc. may be totally foreign to a lot of fresh college grads, while a bootcamp grad can probably be expected to have a not-too-shabby understanding of how those components fit together.

The "practical programming" bit is a little more depressing, though it does ring somewhat true based on what I've seen in real life. How people can spend 4 years programming and still consistently fail at building decent abstractions, I have no idea.

Also, where is the "neither" category? There are dozens of us... dozens!

felix_thursday 18 hours ago 0 replies      
There's something to be said about a person doing a bootcamp. Not only is it a drastic career pivot, but choosing to invest in yourself like that is a huge sign of maturity, growth mindset, and awareness. It's no surprise that a bootcamp grad can quickly get up to speed in their first professional dev environment.

I did the WDI bootcamp through GA, and loved the experience. My motivations weren't to become a full-time web dev, but to become a much better, more well-rounded product manager. It's paid off 5x over so far.

There are a ton of garbage bootcamps out there -- it's unfair that these exist, period -- and it's unfair to lump them all together. While you can't replace the deep technical and theoretical understanding you get with a classic CS degree, if your goal is to build web apps, do you really need the formal experience, or can you learn that on the job?

bunnymancer 21 hours ago 6 replies      
Bootcamper here,

Of course 3 months is going to get you running with a solid basic knowledge of your stuff.

In what world would low-level programming, algorithms, and data structures be doable in 3 months?

Point is, I don't think Bootcamps and Colleges are comparable.

It's like being a woodworker and a forester..

There's a place for each and it's not the same positions...

Now, here's my big question:

If your interview includes Practical programming, Web system Design, Algorithms and Low level system design...

What in the nine hells are you hiring for?

Had it been for a trucker position you'd be asking for "driving license, laws and regulations, engine design and car physics"..

For reference: https://i.imgur.com/sh7LJgj.jpg

morgante 20 hours ago 2 replies      
I'd love to see some more mathematical analysis of these differences. In particular, I suspect that while the averages are similar, the distributions look extremely different.

Specifically, the average engineer out of either a bootcamp or college is pretty mediocre. But the top 10% of engineers are mostly college graduates and are definitely not bootcampers. This is because the best developers are overwhelmingly passionate about development and have been doing it since high school. If you love programming, you might go to college to get a firmer academic standing. You definitely won't go to a bootcamp: if you've been programming for 5 years, a 3-week bootcamp makes no sense.

On the other hand, when it comes to the bottom tier I suspect bootcampers are a lot better. This is mostly because the bottom tier of CS graduates are atrociously bad. Regrettably, it is possible to graduate with a degree in CS without ever having written a single program by yourself. They slink by mostly through cramming for exams and "collaborating" with peers. My impression is that bootcamps are actually less tolerant of this behavior: you won't make it through a bootcamp without ever programming autonomously.

ArkyBeagle 2 hours ago 0 replies      
Programming isn't all one thing. You have to have what amounts to an epistemology about the system you're working on right now or you're going to break things.

A degree improves the chances of this. About half of what I do is teach these things, on the job. Just being able to classify a system error can be daunting - is it a showstopper, an ignore, or something in between?

I see bootcamps as being fine for getting people into seats, but the rest takes a long time.

Finally, employability and what (IMO) CS/programming should be about are diverging rapidly. This was not always so. This is starting to be a real problem.

madiathomas 4 hours ago 0 replies      
Bootcamps are filling a void which has existed for a long time in the CS industry. Most of the time, a CS grad is hired to do a job that can be done by someone with little programming knowledge. I feel it is a waste of resources to hire a CS grad to build a CRUD app with a maximum of 5 users on a very good day.

Now companies can use people from bootcamps for such kind of jobs and use CS grads for deep and high level stuff. Surely some top bootcampers will be able to do high level stuff too.

ogrev 18 hours ago 1 reply      
This is basically a warning to every single person going through bootcamps right now: your skills are not special. You can be replaced with ease. Unless you differentiate yourself through what you learn, either at your job or after the camp, and demonstrate it through your work, your job will be kaput. That's basically what all of those Everyone Can Code advertisements were trying to achieve: making these skills a commodity.

Good luck.

harlanji 18 hours ago 0 replies      
This is the most honest comparison I've read so far.

I dropped out of high school because I was making good money by 18... kept working, saw my own limitations, and did a BS degree in 3 years, graduating at 26. That was 5 years ago today, actually :)

I see this same distinction in practice, thanks Triplebyte for quantifying it. If I were staffing an engineering team, I'd absolutely take junior engineers from bootcamps and senior engineers with university backgrounds. I like the surgical model from The Mythical Man Month, and have seen elements of it working by hiring junior test engineers of varying technical backgrounds and training them.

I think a BS degree in CS makes a lot more sense when you're hitting the edge of your capability as an independent contributor--many may never need it, some will love going on a few year sabbatical and earning their 'piece of paper' (as I did).

Biggest factor that gave me an edge was I had lots and lots of context for all the content of classes, and I took notes every single day, Beginner's Mind style and didn't try to test past intro classes... even CS 101 with Scheme. I was also able to work on my mentoring/leadership skills with classmates.

avs733 21 hours ago 1 reply      
there is a simple confounding variable here that unfortunately Triplebyte can't touch with a ten-foot pole... age/work experience

College is largely about transitioning children into adults (we can argue that separately). The personal and professional development that students go through over 4 years is vast. They are becoming adults in many frames, including understanding the world and technology as systems. They aren't just learning to code; they are learning how to think.

To the extent that I know (warning: anecdata), bootcamps presume a lot more worldly knowledge, attract and expect more grown-up students, get students with a direct interest in web/software/apps, and are much more likely to get career-transitioning students (judging from the people I know who have bootcamp'ed). Those students have a much broader knowledge base to build on, which will help them in some areas and hurt them in others. I would be curious if Triplebyte has any data they can touch at all looking at that.

Simply said... a 22-year-old college student with a CS degree and a 35-year-old BC grad may look similar on metrics but function entirely differently as employees, in both the short and the long term... caveat emptor, figure out what you need.

superuser2 18 hours ago 0 replies      
If I were stranded on an island with the laptop I used in college and a power source, I'd have a pretty good idea of how to stumble through:

- A multithreaded UNIX-like operating system with user programs, system calls, and a filesystem, with reasonable (if not entirely optimal) caching strategies.

- A TCP/IP stack for that operating system.

- An authenticated encrypted channel over my TCP/IP stack with forward secrecy, by building a pseudorandom function up to a stream cipher, RSA with OAEP, Diffie-Hellman, etc.

- Network services from the RFCs in C (we did a router and IRC).

- A high-level programming language with support for both functional and OO idioms based on the typed lambda calculus with recursion, lists, records, tuples, ref cells, subtyping, etc.

- A lexer, typechecker, and interpreter for that language using parser generator tools, a recursive descent parser, or a shift-reduce parser in a pushdown automata model.

- A formal specification of the evaluation and typing rules and a type soundness proof for that language.

- A distributed KV store with Paxos, Raft, or Byzantine Generals running on my encrypted channel and written in my language (we used 0MQ and were given a 0MQ broker that could be told to drop messages for testing purposes).

- Greedy, dynamic programming, network flow, and ILP algorithms with proofs of correctness and efficiency.

My class work repositories put me about three quarters of the way there.

I'm sure bootcamps can teach people enough to tread water in a dynamic language web framework, and that meets real business needs and adds real value. But college is a chance to go deeper.

I know nobody is paying us to build our own lightsabers. But - and call me old fashioned if you'd like - I think a professional ought to be able to build his own lightsaber anyway.

lordnacho 7 hours ago 0 replies      
Well this is interesting to me, as I've recently worked with a bootcamp graduate, and I've been looking over my brother's shoulder while he finishes his CS degree at Columbia.

- Bootcamp lady was very able on the iOS project we were working on. She seemed to know where things were in Xcode, and she understood Obj-C and Swift (no embarrassing questions about what classes are). She didn't seem to know about other environments (and said so), but we were doing an iOS project.

- Ivy league guy seems to have touched every common language (c, c++, Python, HTML/JS/CSS, R, and more), along with common tools (vim, pyCharm, tmux, gcc, VC++, laundry list). I was surprised by how practical it was, actually. I thought it would be obscure algorithms the whole way, but I guess they take the theory and essentially force you to learn the practical aspects by implementing things in relevant stacks.

- Bootcamp lady was very good working in our little MVP team. Understood how common management ideas like Agile work. Conscientious with looking at the Trello board, asking questions in Slack. Not sure if this is just her personality, or because they tell you how software teams work.

- Ivy league guy had lots of group projects, but they tended to be dysfunctional. There was always someone shirking. Some people had no clue what was being built or how to compile it. There didn't seem to be any management oversight, just blind "let's get this piece done" type organisation.

- Degree guy has way more breadth. He was routinely looking at machine learning, implementing demos with scikit, setting up VMs for himself, looking at assembly, looking at SQL optimisation, and other diverse tasks. Bootcamp grad didn't need this stuff, but also would need significant training to get to that level.

- Ambitions were similar. My background is in financial code, and they both want to do that. Bootcamp grad has quite a mountain to climb, particularly with things that take more explanation than MVC. She has a good attitude, so if someone would teach her she could do it. My brother is better positioned though, and would need less teaching to reach the same place.

humbleMouse 22 hours ago 4 replies      
I think a well-run bootcamp is a better coding education than college computer science. The only thing most grads have on bootcamp people is algorithm knowledge. This is easy to fix. Just teach algos in bootcamp. It really isn't that hard to understand.

Ideal bootcamp:

- Angular or any MV*-style data-binding framework

- OOP and n-tier patterns

- Stored procs/ORM/SQL training

- Web services, SOAP/REST

The college grads I work with tend to have written a couple of shitty programs that don't really do anything, and their "final project" was hooking up a database to a business logic layer.

source: I have taught in bootcamps before and work with lots of new college comp sci grads now.

jedberg 20 hours ago 0 replies      
> It backs up the assertion that algorithm skills are not used on the job by most programmers, and atrophy over time.

This was the most interesting part to me. I'd love to see more on this.

I've always found it silly to ask algorithm questions of senior engineers. There seems to be an exponential falloff of that knowledge as one gets further from graduation.

danellis 17 hours ago 2 replies      
I swear, articles like this are going to cause me to have an existential crisis. I started learning programming as a child in the 80s. More than 30 years later, I like to think that I've acquired a lot of valuable knowledge and experience across a broad range of topics, and yet... when I hear about people training for three months and walking into decent jobs, I start to wonder what actually differentiates me at all.

For the sake of my ego, I'd love to hear that these bootcamp graduates have shallow, fragile knowledge in a narrowly focused area.

enricobruschini 6 hours ago 0 replies      
The real big difference that too often Americans forget is that college (and all the education system below college) gives you the structural mindset to break down complex problems and find solutions. It creates your way of thinking and your rational side. Bootcamps, instead, just teach you how to execute some actions. It's like the difference between colleges and industrial schools.
nappybrainiac 18 hours ago 0 replies      
I'm not sure that a comparison between bootcamps and college is viable.

College is not just about learning to code. You also learn to deal with professors and how to get the best grades out of them. You figure out how much you can drink without the glaring hangover that interferes with your morning philosophy class. You sign those forms to get credit cards that haunt you till you have a job. If you're smart, choose a good college, and get really lucky, you might actually learn something and get a job after graduation.

Boot camps are about learning to code, creating networks, and passing interviews for tech jobs. You can't pledge, hang out with the furries, paint your face with your college colors for the football game at the weekend, or struggle to figure out whether your summer course fulfills the requirement for your social science elective.

These two places of learning can peacefully co-exist and each one has its purpose.

I even think that it would be good for some CS grads to walk into a bootcamp to explore something new and expand their knowledge.

Bootcamp replacing college? I don't think so. Not till bootcamps have long lines of students trying to change their course selections at the registrar's office.

There are some options that lie somewhere in the middle...

danso 20 hours ago 0 replies      
I wouldn't be surprised that a bootcamp grad could beat a college CS student in practical web knowledge. Stanford has a web applications elective, CS142 [0]...in the previous years, it focused on Rails [1]; this year, it moved to the MEAN stack. In both syllabi, a week is spent on learning HTML/CSS alone...this year, I believe they spend a couple weeks learning JavaScript.

This class is an elective, which means that students aren't expected to know HTML/CSS/JS before taking it, though the core CS classes (Java, C) are prereqs. This also means that students who don't take 142 could quite easily graduate without any practical knowledge of web development.

That said, it's not because the CS students couldn't actually learn practical web dev, and as others have said here, the best bootcampers are often folks who have a STEM background already.

[0] http://web.stanford.edu/class/cs142/

[1] http://web.stanford.edu/~ouster/cgi-bin/cs142-winter14/index...

dontscale 20 hours ago 0 replies      
I think the debate about colleges vs. bootcamps is an apples-to-oranges comparison.

Algorithms are commoditized into libraries. Web design has been commoditized with templates.

Open-ended programming is still more complicated, but putting apps on the web today is easier than putting up static HTML was just 5 years ago. Parts of programming will continue being commoditized.

So if it's easy to create something and put it out there, the great and all-important challenge that faces developers today is making it matter.

megapatch 8 hours ago 0 replies      
This is obviously comparing different things. Bootcamps and College are not replacing each other. But there is a difference between learning because you are hungry for knowledge (college) and learning because you are hungry for food (boot camp, you need to do your job). The former makes you better in the trade.
pbiggar 21 hours ago 2 replies      
It's not discussed, but I would guess that the best CS grads beat the best BC grads, but average/bad BC grads beat average/bad CS grads.
lsadam0 21 hours ago 2 replies      
> Bootcamp grads match or beat college grads on practical skills, and lose on deep knowledge.

I feel as though you are attempting to lower the bar of what is acceptable in order to sell something :). The word 'practical' is thrown around in this article without much of a definition. Are we talking about making simple web pages?

I've just finished conducting a round of interviews for a junior-level position, and based on this experience I highly doubt I will be considering bootcamp graduates in the future. As an example, for a question which involved sorting an integer array and providing a method GetElementAt(index), 95% of the bootcamp applicants implemented the sort within the GetElementAt method, so that the entire array is re-sorted with every single call. A handful of CS grads made the same mistake, but most of them did not. Is this sort of oversight excused in the idea of 'practical' programming? Or, in your definition, is this considered deep knowledge?
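The mistake described above can be sketched roughly like this (a hypothetical reconstruction in Python; the question's actual language and exact signature aren't given):

```python
class GetElementNaive:
    """The oversight most applicants made: sort on every single lookup."""

    def __init__(self, values):
        self.values = list(values)

    def get_element_at(self, index):
        # Re-sorts the whole array on each call: O(n log n) per lookup.
        return sorted(self.values)[index]


class GetElementSorted:
    """The expected answer: sort once up front, then each lookup is O(1)."""

    def __init__(self, values):
        self.values = sorted(values)

    def get_element_at(self, index):
        return self.values[index]
```

Both versions return the same answers, so the bug never shows up in output, only in cost -- which is exactly why it works as a probe of whether a candidate thinks about complexity at all.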

DougWebb 21 hours ago 1 reply      
After reading through all of the comments so far, my impression is that bootcamps are for training the developers whose jobs will be automated away in the coming years, and college is for training the developers who will be writing the code that automates those jobs.
douche 13 hours ago 0 replies      
So would the best of both worlds be the combination? Four-year CS program for the fundamentals and the deep knowledge, then the summer after graduating (or really, senior spring, when the coasting sets in) a bootcamp-style training on practical development?

I don't think that a traditional CS degree makes you code enough to become a good software engineer. I certainly wouldn't have gotten enough practice actually writing code if I had just done my coursework and not dabbled in other things, like game development. Let alone other practical skills, like debugging/profiling (barely touched upon), source control (likewise), testing (completely ignored), project management/estimation (noop...).

I'm still amazed and horrified that I took a Data Structures and Algorithms course that required nothing beyond proofs and a little pseudocode - not a line of actual, working code. It could have been tailor-made for really understanding memory management or TDD.

pbiggar 21 hours ago 2 replies      
> This does not leave bootcamp grads equivalently skilled to university grads. If you want to do hard algorithmic or low-level programming, youre still better served by traditional CS eduction.

Or, if I may suggest, a low-level/algorithmic bootcamp.

somecodemonkey 21 hours ago 0 replies      
A bootcamp without years of experience will not replace a Computer Science degree. It lacks the depth to build a solid foundation. While this is just an anecdote, every company I have worked for refuses to hire bootcamp grads.
partycoder 19 hours ago 0 replies      
Having interacted with a lot of both lately, my impression is that bootcamp graduates focus mostly on functional requirements.

They have a hard time identifying non-functional requirements, assessing and mitigating risk, and start getting confused when things go low level.

In my experience, all "friendly" technologies have sharp edges somewhere, where you start getting exposed to low-level issues. When you face these issues, there's no guarantee the answer will be on Stack Overflow, and you will appreciate having learned some theory.

RankingMember 22 hours ago 1 reply      
I think that both college and boot-camp styles of training have their place. My default inclination would be to hire the CompSci majors to do the deep-scope planning/figuring and use boot-camp hires to do the grunt work of supporting that vision.

It's important to note that this is just my initial inclination. I have no expectation that there won't be instances of boot-camp hires being better than CompSci hires in cases. It really comes down to the particular person, and hopefully any hiring process would do a decent job of evaluating each person.

lxe 21 hours ago 2 replies      
Do you remember your college web programming courses? The curriculum is always woefully out of date and seems that traditional undergraduate programs don't focus on updating it. This makes sense -- there are very few academic research areas that deal with practical web applications, and this is obviously mirrored in your undergraduate classes.

Don't forget -- universities are also research institutions, while bootcamps are not, and the coursework will reflect this.

Mc_Big_G 20 hours ago 0 replies      
That's because bootcamps specifically teach you how to pass interviews. I'm a senior developer and suck at interviews because I haven't taken my spare time to specifically study for interviews. I don't remember how to reverse sort a b-tree or whatever inane questions are asked because I don't need to know that to do my job spectacularly. It's actually kind of a joke that bootcamp grads interview better.
provemewrong 5 hours ago 0 replies      
I find the whole premise a bit amusing, because in my country the prime target audience for bootcamps are undergrad CS students or fresh graduates.
savrajsingh 20 hours ago 0 replies      
One of my close friends said it best: "savrajsingh, Lebron James doesn't care if you start playing basketball."
balls187 21 hours ago 0 replies      
> Weve found bootcamp grads...worse at algorithms and understanding how computers work.

Solution: hire a college grad and send them to a boot camp.

JustSomeNobody 20 hours ago 0 replies      
Likely, more people get a CS degree because they feel it leads to a good job than go to bootcamps for that reason. The people going to bootcamps are more likely to be doing it because they've done some development and really like it.

Now, what bootcamps aren't going to give you is the breadth of a CS degree. But if you're getting a CS degree just for the money, you're not picking things up very well either.

So, I can see where a certain % of CS students and bootcampers are roughly equivalent.

I feel if you're very interested in CS, get a college degree, and do really well in college, you're going to come out ahead of someone taking a 3-month bootcamp. I also feel there's more opportunity for CS degrees; i.e., one probably isn't going to see too many 3-month bootcampers doing real-time development. (I'm talking real real-time, not that buzzword web real-time.)

Fiahil 19 hours ago 0 replies      
Could it be possible to combine both worlds? Getting a university degree by spending four years with a strong focus on practical skills and intense workload[1].

To my knowledge, only the top tier of American colleges (MIT, Berkeley, Stanford, ...) come close to that achievement. But in France, where I live, I had the opportunity to go to a private school "specialized" in computer science (Epitech, 42, if you wanna look it up) that was mostly an "enlarged bootcamp" from year one to year three. It was kind of funny, for me, when my peers from traditional schools ended up discovering version control in their final internship.

[1]: Once you replace the shitty paper exams with actual projects in programming classes, you'll be amazed by how much you'll increase student proactivity.

shubhamjain 21 hours ago 1 reply      
I started programming before college and I was always on my own. I never had a programmer friend until I started working. Although I was always able to get things done, the code I wrote is something that should never have gone into production. It took lots of mistakes, a lot of reading, and shooting myself in the foot to finally start writing worthy code after like 2-3 years. (Although there were codebases I worked with that were way worse!)

One thing I am curious about is: does a bootcamp make you proficient enough to avoid those mistakes and contribute directly to the application? I am pretty sure it would have been a lot of help if someone had pointed out the mistakes I was making in my code, but I am not sure it would have been enough.

AlexeyMK 17 hours ago 0 replies      
I'm most curious to see what the stats are deeper into the funnel, specifically:

- At what rate do bootcamp grads vs new grads get offers (intro --> offer at portfolio companies)?

- Is the above metric significantly different for different classes of companies (segmented by company size, field, or "CRUD-eyness" of company)?

As a former hiring manager at a "much harder than CRUD" company, I remember looking at some bootcampers and saying "I wish we could interview these people, but the knowledge gap is just too significant".

egonschiele 20 hours ago 2 replies      
Hey, I wrote an algorithms book aimed at bootcampers! The epub is out today, print book to follow: http://amzn.com/1617292230

I'm hoping this will be an easy-to-read algorithms book for bootcamp grads. Here's a sample chapter for anyone interested: https://manning-content.s3.amazonaws.com/download/f/a75f93d-...

redschell 21 hours ago 1 reply      
I think there's a great opportunity for bootcamps to help people like me. I'm currently a pre-sales professional, and have been for a few years now. I'm closing in on 28, and while I've been served very well by developing product expertise, my background isn't in CompSci, and I've never actually formally learned to code. If I want to be a good Solutions Architect down the line, and I certainly do, this could be how I bridge the skill gap.

Sure, I could learn most of what I need to know on my own time, but this might be a great way to get it done quickly in a batch and then move on to applying it in a very practical way with my customers.

strathmeyer 8 hours ago 0 replies      
Triplebyte figured out in twenty minutes that I didn't learn enough while getting a CS degree at CMU in order to get a programming job so... good luck to them.
Philipp__ 21 hours ago 0 replies      
They definitely are comparable. But I think going to both would be the best thing, if there is time and energy. College gives you most of the theoretical stuff. If you are not used to working on your own on side projects and are taking college for granted, then you aren't off to a great start. But if you are used to doing something besides college, whether it is a paid job or some tinkering projects you do in your free time and later put on GitHub, then the need for a bootcamp maybe isn't present. So hitting it somewhere in the middle might be best...
seanhandley 19 hours ago 0 replies      
We've hired a couple of junior developers lately that had no college experience but significant online training and the experience so far has been very positive.

Given how long it takes universities to update course materials, I'm not sure they can compete with this kind of education programme. It's true that a lot of the fundamental computer science is missing but with senior devs on the scene, any gaps can be filled with an afternoon around a whiteboard.

AJRF 20 hours ago 0 replies      
One thing I realised in my final year of university is how much marketing universities do towards the job market.

They shop their students and curriculum around to employers all over the country (some on an international level).

There is going to be a lot of inertia involved when it comes to hiring from Universities that most Bootcamps don't even consider or spend time doing. I don't think they give universities any cause for concern, and wont, for some time.

babbeloski 21 hours ago 0 replies      
Bootcamps make people employable, for sure; I work with a team of people mostly from bootcamps. I think the problem with some of them is they don't actually like programming. Learning new things and change are met with a lot of feet-dragging. Don't get me wrong, I know a lot of programmers that are just 9-5'ers. It just seems like anytime there's any extra effort involved, a ton of justification and selling needs to happen.
lyime 20 hours ago 0 replies      
It's hard to measure hunger. My intuition is that it plays a big role when it comes to finding success after going through a bootcamp.
andrewfromx 20 hours ago 0 replies      
When I read "4 years," I don't remember doing nothing but code for all 4 years of my CS degree. Part of the appeal must be that you focus on just coding, intensely, for a short period of time. I'm thinking back on my 4 years at pitt.edu, and my god did I waste a lot of time. If you distill it all down, maybe it does == 3 months at a good camp.
swalsh 21 hours ago 0 replies      
So basically, the guy who builds a rafter is a woodworker, the guy who nailed the rafter to the structure is a woodworker and the guy who made the dining room table is a woodworker. Each guy is important, but the skill level and education time are different. Not every rough carpenter needs to have an extensive education in fine carpentry to be successful in their area of woodworking.
baron816 19 hours ago 0 replies      
All you need to know: some companies can use bootcampers very effectively, some cannot. It all depends on what the company is doing. Since many companies have found great success while employing bootcampers, it's evident that the skills bootcamps provide are useful.
vparikh 21 hours ago 1 reply      
I would love it if Computer Science grads took a boot camp course - one that covers CSS/HTML/JavaScript, an MVC framework, n-tier patterns, and ORM/SQL/NoSQL training. Because in my experience, they apparently don't teach any of that in comp-sci school.
seattledev14 18 hours ago 0 replies      
When you think about it as skills training vs. college, I think it delivers on its promise.

In college, most people don't declare a major until their sophomore or junior year, so the idea that the competition is a 4-year degree is a bit misplaced. Code schools don't teach music appreciation, though there are a lot of musicians. Bootcamps offer an intensive at 40+ hours a week vs. a two-hour class two days a week.

Can you deliver skills based training in 10 weeks? The placement rates would say yes. Do some schools focus on placement while others focus on taking tuition... That's true as well.

Look to find the school that has a placement track record.

data4lyfe 21 hours ago 0 replies      
So Triplebyte still can't infer anything about how well a software engineer performs on the job from the metrics they are gathering, though, if they're basing performance on how well candidates do on coding questions and interviews?
brandonmenc 21 hours ago 0 replies      
Bootcamps seem to encapsulate and accelerate the "I taught myself to program in middle school and high school" experience for adults who missed that boat - which is great.

The results make a lot more sense when you look at it that way.

puppers 10 hours ago 0 replies      
Universities don't necessarily teach students programming. They teach them Computer Science.

Bootcamps teach students programming, definitely not CS. I highly doubt they could teach a student 4 years of CS material in 3 months.

personjerry 21 hours ago 5 replies      
I think this misses a huge point: College is a huge factor in social development; this is extremely important not only for developing software on a team, but also for developing a healthy lifestyle in and out of the workplace.
Kinnard 20 hours ago 0 replies      
I would love to see a break out for people who are neither bootcamp grads nor college grads but who are completely self-taught, like me :)

Surely they've received some applicants in this category.

soneca 14 hours ago 0 replies      
I believe not all bootcamps are equal.

Is there anywhere a curated list of good, recommended, worth your money bootcamps?

findjashua 19 hours ago 0 replies      
I'm not sure why this is a surprise. Computer Science and Software Engineering are different things, the only common factor being programming.
emodendroket 20 hours ago 0 replies      
This is neat, although as someone who attended neither (well, not for computers anyway) I guess I can't do the solipsistic thing and look for myself.
mmkx 21 hours ago 0 replies      
Nice ad.
forgotAgain 20 hours ago 1 reply      
I wonder how many of the engineers at this weeks Google I/O or the next Apple Dev conference went to bootcamps.
genzoman 19 hours ago 0 replies      
whether you come from a CS background or a bootcamp background, the proof is in the pudding: can you answer the whiteboard questions? if so, you pass, and nobody cares where you went/did not go to school.

if that's not enough, revise the whiteboard question.

kbuchanan 21 hours ago 0 replies      
I think this supports the hypothesis that schooling (secondary, post secondary, bootcamps, whatever) is first and foremost a sorting mechanism. Bootcamps have discovered _one_ avenue for quickly assessing and sorting students into a career they can succeed at.
Ologn 18 hours ago 0 replies      
> it still just seems hard to believe that 3 months can compete with a 4-year university degree.

Yes, it is very hard to believe. Impossible, actually.

> Bootcamps, are intense. Students complete 8 hours of work daily

In-class time is not the gauge for college. Students are supposed to spend at least three hours studying for every hour spent in class. On top of that are office hours with the professor, as well as contact with the TAs or study labs.

If my courseload for a semester is Calculus 102, Theory of Computation, Algorithms 201, Principles of Programming Languages, and Computer Architecture, I don't see how it is different than a bootcamp because a bootcamp is "more intense". I don't know how you can get more intense than juggling these five topics.

> Traditional CS programmers spend significant amounts of time on concepts like NP-completeness and programming in Scheme...But it is not directly applicable to what most programmers do most of the time. Bootcamps are able to show outsized results by relentlessly focusing on practical skills...How to use an editor is something that a traditional CS degree program would never think of teaching.


I took a course in OS principles and then one in distributed systems. The first course covered mutual exclusion somewhat, the second much more. I spent quite a lot of time writing complex Java programs that handled mutual exclusion well. Guess what I am doing today, years after that course? Writing a complex Java program that uses mutual exclusion. I only took that second course because it fit my schedule, but it has come in very handy over the years.

Insofar as NP-completeness being "academic CS", I have unfortunately seen too many bugs ( https://bugs.freedesktop.org/show_bug.cgi?id=3188 , https://sourceforge.net/p/jedit/bugs/3278 etc.) where people did not heed the polynomial growth of algorithms.

They're trying to dumb down what you can't dumb down.

The reality can be seen if you look around a SoMa startup and wonder where all the grey-haired programmers went. Where did those programmers who were in their mid-20s in the late 1990s, programming for the dot-com startups, in an even more inflated market, go? Where are the grey-haired, balding programmers in your company?

And this bootcamp is the answer. Just look at the real estate prices and you know the market has heated up. Naval Ravikant turned down $600 million last year because he said there weren't enough places to invest that. Despite talk of perhaps some cooling since the beginning of the year, things are pretty hot. So get some kid to go to a bootcamp for a few months. They can only get their hands on one real programmer, but they can hire a few of these bootcamp kids to do a few MVP's, or maybe code some features up, which the real programmer will have to fix later.

What happens to these kids later, who have no foundation in what they're doing, who have no deeper understanding of what they're doing?

> programming in Scheme...How to use an editor is something that a traditional CS degree program would never think of teaching.

That's because a traditional CS degree program teaches you to write your own editor if need be. Stallman went to MIT and wrote Emacs, Bill Joy went to Berkeley and wrote vi.

What the hell point is there to teaching an editor? I was using Eclipse with Android plugins a year ago, now I'm using Android Studio. University is to teach concepts which will exist decades from now, not the Javascript library framework du jour.

The ones who will make out on this are the bootcamps, and the companies who can use these kids when the market is hot and will dump them when their usefulness is over. Just like what happened in 2000 (or 2008). You'll see what your bootcamp and two years working at a failed startup amounts to when the economy cools, job listings dry up and the posted ones say "BSCS required". Being able to cut and paste from Stack Overflow and use frameworks other people wrote and extended is not an educational foundation.

There are a lot of strawman arguments on the other side. Yes, the hardest working, brightest bootcamp graduate is probably better than the laziest, dullest person who managed to graduate from some third-rate college and get a CS degree. And so forth. None of that detracts from the point though.

indatawetrust 16 hours ago 0 replies      
> Note: We are only accepting applications from programmers.


andrewvc 20 hours ago 14 replies      
What a load of crap.

What, they bred the capacity for abstract thought into you in college?

College attracts a generally higher quality applicant pool. You're mistaking selection bias for an effect.

Let me tell you, I've interviewed programmers from all over. There are boatloads of people with CS degrees with close to zero capacity for creative thinking. There are also boatloads of CS grads who can barely code their way out of a while loop (true story!).

I've spent my career (no CS degree!) working alongside CS grads. I've gone further, faster, than most of them. I've had to deal with this kind of idiotic commentary over and over again.

CS grads are always surprised that I never got a degree (oh, I never would have guessed! you're different, it's those OTHER people without degrees who are idiots). Four years of school + the associated debt creates a big incentive to believe that you got a square deal out of college.

eastWestMath 19 hours ago 0 replies      
This just in: web dev body shop is perfectly happy with bootcamp grads.
analognoise 20 hours ago 1 reply      
Of course employers love bootcamps - they need business logic monkeys who lack the fundamentals and therefore don't increase in value over time as much as the people who actually put in the time with said fundamentals.

It's cheaper to have somebody who doesn't have a real education.

DaveParkerCF 16 hours ago 0 replies      
Over the last three years at Code Fellows in Seattle (www.CodeFellows.com) we've seen the market change a lot for students, hiring companies and curriculum.

At launch, there was a lot of pent up demand. 400 people applied for a Ruby class of 25. Most that took that first class had been self taught and in the surveys said they had been hacking at projects for an average of 18 months. Code school was a way to speed their path into a professional developer role (note developer, not engineer).

The majority of students today already have a degree and are looking to switch careers, average age of ~30. They are looking for skills to transition, so in that way, going back to college isn't an option unless it's for an advanced degree. The same is true for the veterans that are transitioning to the workforce; they have been in a very structured environment and want to speed through job-ready training vs. four more years at college.

"Stack switchers" tend to be the top of the compensation range. If you have 10 years of .Net experience and want to switch to iOS, you'll earn top dollar. If you don't have much real world experience you'll land an entry level JavaScript job with that skill.

The needs of hiring companies have also shifted as the market has matured. There are more "code school grads" in the market looking for jobs, so the process of screening needs to be better, interviews need to be improved and tools like triplebyte.com improve transparency of skills. Hiring Junior developers has never been the preference for employers. Everyone would rather hire both skill and experience. But when you're competing with larger companies in a hot job market, you'll often take Junior talent that is a good culture fit.

By culture fit I mean a combination of past education, work experience and new skills. Combine that with work ethic and desire and you see why most of the strong code schools have a high (90%+) placement rate.

Curricula have changed as well. Code schools have to be teaching at the front end of hiring demand. Teaching an old tech stack where job postings are heading down won't work. Review Stack Overflow's recent survey if you're curious about stack preferences.

Code schools are also required to be licensed with each state where they do business. That's a requirement not all schools follow. It's really about consumer protection in that way so check with your state.

The industry is still immature and you're correct that there aren't any reporting standards, e.g. are placement rates reported at 90 or 180 days past graduation, etc.? We're working with a number of companies like the Iron Yard to standardize on reporting and moving to audited results over time. I hope that someday we can apply the same placement rate standards to other academic institutions. As a dad of college age kids that would be amazing (note the White House tried that two years ago with a scorecard and the Universities said no).

Regarding the debate of should everyone learn to code or no one learn to code? It's a skill, it's not for everyone. It's a job that isn't for everyone. There are a lot of online resources, information sessions and one day courses; start with the low risk version and see if it's for you. With an average starting salary of $71k in Seattle, the compensation appeal is a strong draw for people outside of the tech industry. You may be drawn to the compensation; just make sure that you are also drawn to the work.

puppetmaster3 18 hours ago 0 replies      
Also cheaper and faster, a good way to save tax resources maybe?
serge2k 19 hours ago 0 replies      
> How to use an editor is something that a traditional CS degree program would never think of teaching.

Of course not. Why would they ever do that. It falls into the same bucket as version control. It's useful, but go learn it yourself because it's not that hard.

AWS X1 instances 1.9 TB of memory amazon.com
338 points by spullara  1 day ago   178 comments top 22
jedbrown 1 day ago 1 reply      
Does anyone have numbers on memory bandwidth and latency?

The x1 cost per GB is about 2/3 that of r3 instances, but you get 4x as many memory channels if you spec the same amount of memory via r3 instances, so the cost per memory channel is more than twice as high for x1 as for r3. DRAM is valuable precisely because of its speed, but the speed itself is not cost-effective with the x1. As such, the x1 is really for the applications that can't scale with distributed memory. (Nothing new here, but this point is often overlooked.)
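Worked out as arithmetic (a sketch using only the ratios stated above; no absolute AWS prices are assumed):

```python
# Cost-per-channel comparison using only the ratios in the comment:
# x1 costs ~2/3 as much per GB, but an r3 fleet holding the same RAM
# spans ~4x as many memory channels.
x1_cost_per_gb_vs_r3 = 2 / 3
r3_channel_multiple = 4

# Relative cost per memory channel, x1 vs. r3:
cost_per_channel_ratio = x1_cost_per_gb_vs_r3 * r3_channel_multiple
print(cost_per_channel_ratio)  # ~2.67, i.e. "more than twice as high"
```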

Similarly, you get a lot more SSDs with several r3 instances, so the aggregate disk bandwidth is also more cost-effective with r3.

lovelearning 1 day ago 14 replies      
This is probably a dumb question, but what does the hardware of such a massive machine look like? Is it just a single server box with a single motherboard? Are there server motherboards out there that support 2 TB of RAM, or is this some kind of distributed RAM?
MasterScrat 1 day ago 3 replies      
As a reference the archive of all Reddit comments from October 2007 to May 2015 is around 1 terabyte uncompressed.

You could do exhaustive analysis on that dataset fully in memory.

ChuckMcM 1 day ago 4 replies      
That is pretty remarkable. One of the limitations of doing one's own version of mass analytics is the cost of acquiring, installing, configuring, and then maintaining the hardware. Generally I've found AWS to be more expensive but you get to "turn it on, turn it off" which is not something you can do when you have to pay monthly for data center space.

It makes for an interesting exercise to load in your data, do your analytics, and then store out the meta data. I wonder if the oil and gas people are looking at this for pre-processing their seismic data dumps.

1024core 1 day ago 1 reply      
Spot instances are about $13 - $19/hr, depending on zone. Not available in NorCal, Seoul, Sydney and a couple of other places.
dman 1 day ago 4 replies      
Going to comment out the deallocation bits in all my code now.
pritambarhate 1 day ago 4 replies      
Question for those who have used monster servers before:

Can PostgreSQL/MySQL use such type of hardware efficiently and scale up vertically? Also can MemCached/Redis use all this RAM effectively?

I am genuinely interested in knowing this. Most of the times I work on small apps and don't have access to anything more than 16GB RAM on regular basis.

vegancap 1 day ago 8 replies      
Finally, an instance made for Java!
krschultz 1 day ago 8 replies      
A bit under $35,000 for the year.
realworldview 1 day ago 0 replies      
Recompiling tetris with BIGMEM option now...
Erwin 1 day ago 0 replies      
I'm curious about this AWS feature mentioned: https://aws.amazon.com/blogs/aws/new-auto-recovery-for-amazo...

We've experimented with something similar on Google Cloud, where an instance that is considered dead has its IP address and persistent disks taken away, then attached to another (live or just-created) instance. It's hard to say whether this can recover from all failures without having experienced them, or whether it even works better than what Google claims it already does (moving failing servers from hardware to hardware). Anyone with practical experience in this type of recovery where you don't duplicate your resource requirements?

jayhuang 1 day ago 0 replies      
Funny how the title made me instantly think: SAP HANA. After not seeing it for the first 5 paragraphs or so, Ctrl+F, ah yes.

Not too surprising given how close SAP and Amazon AWS have been ever since SAP started offering cloud solutions. Going back a couple of years to when SAP HANA was still in its infancy, trying it on servers with 20~100+ TB of memory, this seems like an obvious progression.

Of course there's always the barrier of AWS pricing.

zbjornson 1 day ago 0 replies      
How does this thing still only have 10 GigE (plus 10 dedicated to EBS)? It should have multiple 10 Gig NICs that could get it to way more than that.
0xmohit 1 day ago 0 replies      
Wow! http://codegolf.stackexchange.com/a/22939 would now be available in production.
manav 1 day ago 0 replies      
Hmm around $4/hr after a partial upfront. I'm guessing that upfront is going to be just about the cost of a server which is around $50k.
micro-ram 1 day ago 2 replies      
What happened to the other 16 threads?

18(core) * 4(cpus) * 2(+ht) = 144

ben_jones 1 day ago 0 replies      
I'd be guilty if I ever used something like this and under utilized the ram.

"Ben we're not utilizing all the ram."

"Add another for loop."

mrmondo 1 day ago 0 replies      
I'm taking it this is so people can run NodeJS or MSSQL on AWS now? Heh, sorry for the jab - what could this be used for considering that AWS' top tier provisioned storage IOP/s are still so low (and expensive)?

Something volatile running in a RAM disk maybe?

samstave 1 day ago 2 replies      

Thats amazing.

amazon_not 1 day ago 1 reply      
The pricing is surprisingly enough not terrible. Given that dedicated servers cost $1-1.5 per GB of RAM per month, the three-year price is actually almost reasonable.

That being said, a three year commitment is still hard to swallow compared to dedicated servers that are month-to-month.

samstave 1 day ago 3 replies      
16GB of ram should be enough for anyone.

Edit, y'all don't get the reference: famous computer urban legend...


0xmohit 1 day ago 1 reply      
Encouraging folks to write more inefficient code?

I'd be interested in hearing what Gates [1] has to say about it, though.

[1] "640 kB ought to be enough for anybody"

Going dark: online privacy and anonymity for normal people troyhunt.com
341 points by danso  1 day ago   113 comments top 19
sixhobbits 1 day ago 2 replies      
I'm surprised he doesn't mention NoScript, Privacy Badger, etc. "Normal people" should be more concerned about the highly detailed profiles that companies are building based on browsing habits. "Normal people" read about data breaches and embarrassing leaks that force politicians to resign. "Normal people" know nothing about the behind the scenes tracking that goes on when you google medical symptoms[0] or visit pages which have Facebook like buttons as footers[1].

Yes, this article is targeted at people who don't understand the problem of using their .gov email address to sign up for dodgy sites, but think about whether you'd rather have your bank statement made public or a large, visualizable data set representing most of your browsing history.

I would love to see more work done on privacy through noise/obfuscation, such as that started by Adnauseum[2] and TrackMeNot[3] - not necessarily publishing your credit card details online as suggested in another comment here, but making random search queries and clicking on random ads when your device is idle. Most of us have sufficient processing power and bandwidth for the overhead not to be a problem. It's sad that it looks like both add-ons have failed to make a splash, and seem to have fallen out of active development (end of 2015 marks the last commits for both projects, which is too soon to pronounce them dead, but they definitely don't seem to be hives of activity).

[0] http://motherboard.vice.com/read/looking-up-symptoms-online-...

[1] http://www.allaboutcookies.org/cookies/cookie-profiling.html

[2] http://adnauseam.io/

[3] http://cs.nyu.edu/trackmenot/

xrorre 1 day ago 2 replies      
I appreciate the intention of this article. Written for people only starting to change their surfing habits in light of Snowden. But the example of the tools they should use are not thought out very well.

First: Freedome by F-Secure is closed source and there is no OpenVPN alternative. Always choose a VPN that has OpenVPN so that users can configure the connection to their needs. No need for this bloated mess.

Second: Whilst disposable Google accounts might seem like a good idea, there are any number of ways for Google to cross-correlate a disposable identity with your actual identity using fingerprinting captchas or even your screen resolution. Google does this to spot serial re-registrations and to stop people gaming Google Plus voting rings and spammers in general.
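A toy sketch of that cross-correlation idea (the attribute names and values here are made up; real fingerprinting draws on many more signals, e.g. canvas rendering and installed fonts):

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    # Hash a stable serialization of browser attributes: two "separate"
    # accounts created from the same browser hash to the same value,
    # even with cookies cleared and a fresh login.
    blob = "|".join(f"{k}={v}" for k, v in sorted(attrs.items()))
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

laptop = {"resolution": "2560x1440", "timezone": "UTC+2", "font_count": 143}
fresh_account_session = dict(laptop)  # same machine, new disposable account
```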

Third: Be careful of online websites offering fake-name services. Most of this data is generated server-side and logged for the purposes of cross-correlation with your IP address and useragent string. Quite possibly the vast majority of fake-identity sites are run by LEA

- I like to write some quick and dirty ruby gems to generate fake identities because then it can't be correlated. (The names are pulled in from disparate sources and I always ensure true-randomness).

- In terms of email, use things like Riseup which use TLS at every hop so that passive dragnets can't sniff the password. 99% of all IMAP and SMTP services can be passively sniffed because they use weak STARTTLS.

- Use 'honeywords' in an email to correlate different emails with different activities. For example:

  john.doe+shopping@riseup.net
  john.doe+gaming@riseup.net
  john.doe+correspondant@riseup.net
This way you can whitelist those addresses for the purposes of filtering out spam and phishing attempts.
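That whitelisting could be sketched like this (assuming riseup.net follows the common `+`-tag convention; the tag set is taken from the example above):

```python
def split_plus_address(addr):
    # Split "base+tag@domain" into the canonical mailbox and its tag.
    local, _, domain = addr.partition("@")
    base, _, tag = local.partition("+")
    return f"{base}@{domain}", (tag or None)

KNOWN_TAGS = {"shopping", "gaming", "correspondant"}

def is_expected(addr):
    # Whitelist mail sent to a known tag; anything else is suspect
    # (likely spam/phishing to a leaked or guessed variant).
    _, tag = split_plus_address(addr)
    return tag in KNOWN_TAGS
```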

apecat 1 day ago 1 reply      
Great article.

The only real omission I noticed is the lack of mention of advanced browser fingerprinting techniques that can be used against browsers, even if caches are emptied, 'porn modes' activated, and VPNs opened. As demonstrated here by the EFF's Panopticlick initiative. https://panopticlick.eff.org/

One of the most important points to remember about the anonymity provided by the Tor project is that the Tor Browser is painstakingly hand-crafted to avoid many of these problems. In other discussions about Tor it is worryingly common to see other ways to route browser traffic through Tor, without mention of the implications.

For those interested, here's a recent look into the Tor Browser system by one of the developers.


huuu 1 day ago 3 replies      
Doesn't this create a risk of committing fraud and identity theft in some countries?

I can understand it wouldn't be a crime to create a random email address but creating a fake house address and using this for payments sounds a little tricky.

btrask 1 day ago 5 replies      
This is just a list of more things for them to clamp down on.

I'm thinking about going in the opposite direction, and broadcasting all of my personally identifying information (credit card, SSN, etc). Obviously I would have to set aside a large amount of time to deal with issuing fraud reports, and make sure that I wasn't risking anything that I can't afford to lose--but it does seem simpler in some ways.

After all, if you don't have anything to hide, you're bulletproof, right?

rkrzr 1 day ago 2 replies      
TLDR: Use a VPN + Incognito mode + fake email and info

The VPN hides your IP. Incognito mode prevents your cookies from giving away your identity. And the fake info helps with things like sites being hacked and the data being dumped online.

amelius 1 day ago 4 replies      
> Going dark: online privacy and anonymity for normal people

Caveat: normal people don't care about such things.

descript 21 hours ago 0 replies      
It is so difficult to balance productivity/convenience and privacy/security.

Only recently did I stop worrying about privacy/security, and frankly my online experience is much better. I can now participate in any services/apps that catch my eye, I now save CC data at some sites, don't have a VPN/Tor slowing traffic and giving me cloudflare walls/"im not a bot" verification, don't have noscript/ublock/privacy badger breaking most sites, can sync across devices and backup online.

Having both secure & private online behavior is a massive inconvenience. You basically can't participate in the online world as it exists. (There are definitely opportunities to create secure/private versions of existing tools)

mirimir 1 day ago 0 replies      
It's a good piece, but the treatment of VPNs is bad. There's a new site about choosing a VPN service: https://thatoneprivacysite.net/ It summarizes a huge amount of information, for 159 VPN services.
ChefDenominator 22 hours ago 1 reply      
The article recommends going to Fake Name Generator (tm) to get a random online identity. The page is not encrypted and looks very, very fishy.

That page recommends going to Social Security Number Registry. Again, an unencrypted totally scammy looking page. If you enter a random name and select a random state, it will 'verify' that your identity has been stolen. Then, if you click on 'Validate', you can enter your SSN (unencrypted, of course).

I don't even know how to code, and this is a news site for hackers? This tripe makes it to the top of the front page?

maglavaitss 1 day ago 0 replies      
This submission has some more tips for preserving your privacy https://news.ycombinator.com/item?id=11706680
ikeboy 1 day ago 0 replies      
The SMS receiving sites don't work so well IME. They tend to use a single number for everyone, and the demand by spammers etc is so much higher than the free supply that for any given service, your number will probably already be blocked. Or the receiving will be unreliable, etc. I've gotten it to work sometimes, but usually not. Definitely too hard and time consuming for "normal people".

Is there a site that sells phone numbers for VoIP and SMS for bitcoin without requiring identity?

bunkydoo 22 hours ago 1 reply      
Here's the thing, if you are a normal person - you aren't going to read a guide on something like this. I have a 1 sentence guide on this for the 'normal person' - If you wouldn't want your grandma to see it, just don't enter it in an internet browser.
tmaly 23 hours ago 0 replies      
As I am cranking away on some Go services on my laptop on my local coffee shop wifi, I see log entries pop up of people trying to access php pages.

I go and ask the staff, and they said their POS is full of some weird software.

a good VPN provider is worth it, but finding one that will not keep logs on you is another story.

fulafel 1 day ago 1 reply      
The article exemplifies why the widespread misappropriation of the VPN term is unfortunate (in the same series as "router" for NAT boxes...); it serves to confuse people about the potential of real overlay networks.
coldpie 1 day ago 0 replies      
Is this really still the best way to pay for stuff anonymously online? Lie to a financial institution? I understand the desire to avoid fraud, but boy does that irk me. Hrmm...
jrcii 1 day ago 4 replies      
I really object to the language of "dark" to describe privacy or anonymity, which are thereby painted with a sinister connotation.
astazangasta 23 hours ago 0 replies      
I'm interested in 'phishing and malware protection', which I think means all my traffic gets reported to Google. This plus Google Analytics means the electric eye is on me wherever I go. Tips to browse safely without these?
kevingrahl 1 day ago 3 replies      
Skimmed the article, saw that he recommended using Googlemail. Looked at the title of the post again. Looked at the Googlemail recommendation. Laughed and made a mental note not to trust "Troy Hunt".
Improving Docker with Unikernels: Introducing HyperKit, VPNKit and DataKit docker.com
289 points by samber  2 days ago   37 comments top 6
kevinmgranger 1 day ago 1 reply      
docker's go-9p now makes for the 3rd implementation of 9p in go:

docker/go-9p https://github.com/docker/go-p9p

rminnich/ninep: https://github.com/rminnich/ninep

rminnich/go9p: https://github.com/rminnich/go9p

There's also the Andrey Mirtchovski and Latchesar Ionkov implementation of go9p, but all I can find is a dead Google Code link from here: http://9p.cat-v.org/implementations

pjmlp 2 days ago 2 replies      
With lots of OCaml love it seems, from a quick glance through the source repositories.
tachion 1 day ago 1 reply      
I wonder if we'll see a move towards getting Docker working on FreeBSD using either Jails or bhyve finally, since it talks about using bhyve hypervisor... That would be really great.
kordless 2 days ago 4 replies      
Seems like only a year ago Docker changed how it used Virtualbox to boot VMs using machine (and caused me endless amounts of suffering trying to figure out how to fix it). Now it would seem they are getting rid of Virtualbox entirely with their own VM...which needs contributions.
chuhnk 1 day ago 0 replies      
Very interesting work. I find go-9p quite fascinating and think it could really have broader applications. Docker, if you see this: I actually think you're on to something for microservice development that's native to the Docker world. I've been trying to come up with ways of replicating the unix philosophy around programs that do one thing well and the use of pipes, but was always limited in my thinking in terms of http, json, etc.

My advice, as a guy who's currently building something in the microservice space, explore this further. Spend some time building fit for purpose apps with this and see where it goes.

andrew_wc_brown 2 days ago 1 reply      
I guess I just want to know the takeaway, e.g. will it consume less memory on Mac?
My wife has complained that OpenOffice will never print on Tuesdays (2009) launchpad.net
417 points by hardmath123  2 days ago   155 comments top 29
Animats 2 days ago 8 replies      
Did this get fixed, 7 years later?

Yesterday, we had a story about Microsoft's disk management service using lots of CPU time if the username contained "user". Microsoft's official reply was not to do that.

I once found a bug in Coyote Systems' load balancers where, if the USER-AGENT ended with "m", all packets were dropped. They use regular expressions for various rules, and I suspect someone typed "\m" where they meant "\n". Vendor denied problem, even after I submitted a test case which failed on their own web site's load balancer.

Many, many years ago, I found a bug in 4.3BSD which prevented TCP connections from establishing with certain other systems during odd numbered 4 hour periods. It took three days to find the bug in BSD's sequence number arithmetic. A combination of signed and unsigned casts was doing the wrong thing.

sampsonetics 2 days ago 1 reply      
Reminds me of my favorite bug story from my own career. It was in my first year or two out of college. We were using a commercial C++ library for making HTTP calls out to another service. The initial symptom of the bug was that random requests would appear to come back with empty responses -- not just empty bodies, but the entire response was empty (not even any headers).

After a fair amount of testing, I was somehow able to determine that it wasn't actually random. The empty response occurred whenever the size in bytes of the entire request (headers and body together) was exactly 10 modulo 256, for example 266 bytes or 1034 bytes or 4106 bytes. Weird, right?

I went ahead and worked around the problem by putting in a heuristic when constructing the request: If the body size was such that the total request size would end up being close to 10 modulo 256, based on empirical knowledge of the typical size of our request headers, then add a dummy header to get out of the danger zone. That got us past the problem, but made me queasy.

At the time, I had looked at the code and noticed an uninitialized variable in the response parsing function, but it didn't really hit me until much later. The code was something like this:

  void read_status_line(char *line) {
      char c;
      while (c != '\n') {
          c = read_next_byte();
          *(line++) = c;
      }
  }
Obviously this is wrong because it's checking c before reading it! But why the 10 modulo 256 condition? Of course, the ASCII code for newline is 10. Duh. So there must have been an earlier call stack where some other function had a local variable storing the length of the request, and this function's c variable landed smack-dab on the least-significant byte of that earlier value. Arrrrgh!
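The failure condition translates into a small simulation (names are hypothetical; `stale_byte` stands in for whatever value the uninitialized `c` inherited from the earlier stack frame):

```python
def buggy_read_status_line(stale_byte, stream):
    # Mirrors the C loop: `c` is tested *before* the first read, so the
    # leftover stack value decides whether the loop body ever runs.
    out = bytearray()
    c = stale_byte
    i = 0
    while c != ord("\n"):
        c = stream[i]
        out.append(c)
        i += 1
    return bytes(out)

# When the request length mod 256 is 10 (== ord('\n')), the stale byte
# is a newline and the parser returns an empty status line immediately.
```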

mpeg 2 days ago 2 replies      
The title reminds me of "the 500 mile email"


mazda11 2 days ago 1 reply      
My most memorable bugfix was when I was temporarily on a team that did email encryption/decryption. They had one customer where some mails could not get decrypted; they had been fighting with this for one year, and no one could figure out what was going on. I told them to do a dump for a week with the good and bad emails. After one week I was given the dump of files, looked at the count of bad vs good, did some math in my head and said: "Hmm, it appears that about 1/256 mails are bad. That could indicate that the problem is related to a specific byte having a specific value in the random 256-bit AES key. If there is a specific value giving problems it is probably 0x00, and the position I would guess being at the last or first byte."

I did a check by decoding all S/MIME mails to readable text with openssl - sure enough, all bad emails had 0x00 as the least significant byte. Then I looked at the ASN.1 spec and discovered it was a bit vague about whether the least significant byte had to be there if it was 0x00. I inserted a line into the custom-written IBM 4764 CCA driver, written in C and called via JNI. Then all emails decrypted.
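The arithmetic behind the diagnosis is easy to check: if exactly one value (0x00) of one key byte triggers the failure, a uniformly random byte hits it 1 time in 256. A toy sketch of that reasoning (names hypothetical):

```c
/* Hypothetical predicate: decryption fails iff the suspect key byte is 0x00. */
static int key_is_bad(unsigned char suspect_byte)
{
    return suspect_byte == 0x00;
}

/* Enumerate all 256 possible byte values to get the expected bad/good ratio. */
static double expected_failure_rate(void)
{
    int bad = 0;
    for (int b = 0; b < 256; b++)
        if (key_is_bad((unsigned char)b))
            bad++;
    return (double)bad / 256.0;
}
```

which comes out to 1/256, about 0.39% - the ratio observed in the dump.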

The team's jaws dropped - they had been fighting with it for 1 year and I diagnosed the bug only by looking at the good/bad ratio :)

I might remember some details wrong- but the big picture is correct :)

icambron 2 days ago 5 replies      
The most interesting part of this story to me is actually that his wife noticed that the printer didn't work on Tuesdays. I'd have never, ever put that together, no matter how many times I saw it succeed or fail. I'd actually be more likely to figure it out by debugging the CUPS script than I would be observing my printer's behavior. Can a lot of people pick up on correlations like that? "Ever notice how it's always Tuesday when the printer won't work?"
alblue 2 days ago 2 replies      
The TL;DR is that the "file" utility was miscategorising files that had "Tue" in the first few bytes as an Erlang JAM file, with knock-on effects for PostScript files generated with a header comment containing Tue in the date.
nilstycho 2 days ago 1 reply      
The weirdest case during my tenure as a neighborhood computer tech was a personal notebook computer that would not boot up at the customer's apartment. Of course we assumed user error, but further investigation revealed that if the computer were running as it approached the home, it would bluescreen about a block away.

We guessed it was due to some kind of RF interference from a transmitter on the apartment building. Removing the WiFi module and the optical drive had no effect, so we further guessed it was interference within the motherboard or display. Rather than investigate further, we replaced the notebook at that point.

mark-r 2 days ago 0 replies      
I have an anecdote, which isn't mine but comes from someone I know personally. This guy was working as a service tech, and was called out to diagnose a problem with a computer that had been recently moved. It worked most of the time, but any attempt to use the tape drive failed within a certain number of seconds (this was long ago, when tape drives were still a thing). Everything had worked fine before the move, and diagnostics didn't show anything out of place. Then he happened to look out the window - this was a military installation, and there was a radar dish rotating nearby. The failures occurred exactly when the radar dish was pointed their direction. It turns out the computer had been moved up one floor, which strengthened the interference just enough to cause the failure.
kazinator 2 days ago 0 replies      
But "Tue" is not at the fourth byte in the example, which has:

 %%CreationDate: (Tue Mar 3 19:47:42 2009)
Something munged the data. Perhaps some step which removes all characters after %%, except those in parentheses?

 %%(Tue Mar 3 ...)
Now we're at the fourth byte. Another hypothesis is that the second incorrect match is kicking in. That is to say, some fields are added above %%CreationDate such that the Tue lands at position 79. The bug that was fixed in the magic database is this:

  -+4    string  Tue Jan 22 14:32:44 MET 1991        Erlang JAM file - version 4.2
  -+79   string  Tue Jan 22 14:32:44 MET 1991        Erlang JAM file - version 4.2
  ++4    string  Tue\ Jan\ 22\ 14:32:44\ MET\ 1991   Erlang JAM file - version 4.2
  ++79   string  Tue\ Jan\ 22\ 14:32:44\ MET\ 1991   Erlang JAM file - version 4.2
(This is a patch of a patch: a fix to an incorrect patch.) There are two matches for this special date which identifies JAM files: one at offset 4, and a possible other one at offset 79 which will cause the same problem.

The real bug here is arguably the CUPS script. It should identify the file's type before munging it. And it shouldn't use a completely general, highly configurable utility whose data-driven file classification system is a moving target from release to release! This is a print script, so there is no reason to suspect that an input file is a Doom WAD file, or a Sun OS 4 MC68000 executable. The possibilities are quite limited, and can be handled with a bit of custom logic.

Did Brother people write this? If so, I'm not surprised.

Nobody should ever write code whose correct execution depends on the "file" utility classifying something. That is, not unless you write your own "magic" file and use only that file; then you're taking proper ownership of the classification logic, such that any bugs are likely to be your own.

The fact that file got something wrong here is a red herring; the file utility is wrong once in a while, as anyone knows who has been using various versions of it regularly for a few decades. Installations of the utility are only suitable for one-off interactive use. You got a mystery file from out of the blue, and need a clue as to what it is. Run file on it to get an often useful opinion. It is only usable in an advisory role, not in an authoritative role.

Adaptive 2 days ago 4 replies      
I've noticed that printing is still one of the poorest UX aspects of *nix/OSS and regularly seems to suffer from errors so egregious that they can only be attributed to OSS devs not dogfooding these features. I'm assuming they just don't print much (I mean, we ALL print less than 20 years ago, but all the more reason to test these features which, when you need them to work you REALLY need them to work).
t0mek 2 days ago 0 replies      
During my studies I had a course called "Advanced Network Administration". I learnt about the OSPF routing protocol and its Quagga [1] implementation and I had to prepare a simple installation that consisted of 3 Linux machines. They were connected with cheap USB network adapters.

After everything was configured I started the Quagga daemons, and somehow they just didn't want to talk to each other. I opened tcpdump to see what was happening, and the OSPF packets were being exchanged properly. After a while, communication and routing were established. I thought that maybe the services just needed some time to discover the topology.

I restarted the system to see if it was able to come up automatically, but the problem recurred - the daemons just didn't see each other. Again, I launched tcpdump, tweaked some settings, and now it worked - until it didn't a few minutes later.

It took me a long time to find out that the diagnostic tool I was using had actually changed the observed infrastructure (like in the quantum world). tcpdump enables promiscuous mode on the network interfaces, and apparently this was required for Quagga to run on the cheap USB ethernet adapters. I enabled promiscuous mode with ifconfig, and after that OSPF worked stably.

[1] http://www.nongnu.org/quagga/

pif 2 days ago 0 replies      
CERN: LEP data confirm train time tables http://cds.cern.ch/record/1726241

CERN: Is the moon full? Just ask the LHC operators http://www.quantumdiaries.org/2012/06/07/is-the-moon-full-ju...

carapace 2 days ago 1 reply      
Stuff like this is why I find "Synthetic Biology" so fucking scary.
BrandonM 2 days ago 0 replies      
Near the end of that post, the commenter suggested a fix that includes the most qualified Useless Use of Cat entry[0] that I've ever seen!

 cat | sed ... > $INPUT_TEMP
[0] http://porkmail.org/era/unix/award.html#cat

krylon 1 day ago 0 replies      
One of our users complained that she could no longer print PDF documents. Everything else, Word, Excel, graphics, worked fine, but when she printed a PDF ... the printer did emit a page that - layout-wise - pretty much looked like it was supposed to, except all the text was complete and utter nonsense.

Or was it? I took one of the pages back to my desk, and later in the day I had an idle moment, and my eyes wandered across the page. The funny thing is, if I had not known what text was supposed to be on the page, I would not have noticed, but the text was not random at all. Instead, all the letters had been shifted by one place in the alphabet (i.e. "ABCD" became "BCDE").

I went back to the user and told her to check the little box that said "Print text as graphics" in the PDF viewer's printing dialog, and voila - the page came out of the printer looking the way it was supposed to.

Printing that way did take longer than usual (a lot longer), but at least the results were correct.

To this day, I have no clue where the problem came from, and unfortunately, I did not have the time to investigate the issue further. I had never seen such a problem before or after.

In a way it's part of what I like about my job: These weird problems that seem to come out of nowhere for no apparent reason, and that just as often disappear back into the void before I really understand what is going on. It can be oh-so frustrating at times, but I cannot deny that I am totally into weird things, so some part of me really enjoyed the whole experience.

chris_wot 2 days ago 0 replies      
Wait till you see where they found the print server!


gchadwick 2 days ago 0 replies      
Surely the real bug is the reliance on the 'file' utility in the first place? It attempts to quickly identify a file that could be literally anything, so it's not surprising (and indeed should be expected) that it sometimes gets it wrong.

I don't know the details of the CUPS script, but presumably it can only deal with a small number of different file types. Implementing its own detection to positively identify PS vs. whatever other formats it deals with vs. everything else would be far more robust.

kinai 2 days ago 0 replies      
I once had a case with a desktop system where, when you sat down and started typing, it often hard-reset. It turned out Dell had left some metal piece in the case, hanging between the case and the motherboard (in those few millimeters), and a stronger desk vibration caused a short circuit.
mark-r 2 days ago 1 reply      
I love the modification that pipes the output of cat into sed; doesn't he realize that cat is redundant at that point?
gsylvie 2 days ago 0 replies      
Here's a great collection of classic bug reports (including the never-printing-on-tuesdays): https://news.ycombinator.com/item?id=10309401
sklogic 2 days ago 0 replies      
No, it is a cups bug indeed. File was never guaranteed to be precise in the first place, it is not a good idea to rely on it.
rcthompson 2 days ago 0 replies      
I once found a bug in a weather applet that only occurred when the temperature exceeded 100 degrees. The 3-digit temperature caused a cascade of formatting issues that rendered part of the applet unreadable. I believe the author used Celsius, and so would never have encountered this bug on their own.
DonHopkins 2 days ago 2 replies      
My 6502-based FORTH systems would sometimes crash for no apparent reason after I tweaked some code and recompiled it. Whenever it got into crashy mode, it would crash in a completely different way, on a randomly different word. I'd put some debugging code in to diagnose the problem, and it would either disappear or move to another word! It was an infuriating Heisenbug!

It turns out that the 6502 has a bug [1] that when you do an indirect JMP ($xxFF) through a two byte address that straddles a page boundary, it would wrap around to the first byte of the same page instead of incrementing the high half of the address to get the first byte of the next page.

And of course the way that an indirect threaded FORTH system works is that each word has a "code field address" that the FORTH inner loop jumps through indirectly. So if a word's CFA just happened to straddle a page boundary, that word would crash!

6502 FORTH systems typically implemented the NEXT indirect threaded code inner interpreter efficiently by using self modifying code that patched an indirect JMP instruction on page zero whose operand was the W code field pointer. [2]

JMP indirect is a relatively rare instruction, and it's quite rare that it's triggered by normal static code (since you can usually catch the problem during testing), but self modifying code has a 1/256 chance of triggering it!

A later version, the 65C02, fixed that bug. It could manifest in either compiled FORTH code or the assembly kernel. The FIG FORTH compiler [3] worked around it at compile time by allocating an extra byte before defining a new word if its CFA would straddle a page boundary. I defined an assembler macro for compiling words in the kernel that automatically padded in the special case, but the original 6502 FIG FORTH kernel had to be "checked and altered on any alteration" manually.
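The hardware bug itself is small enough to model in a few lines. A sketch of the buggy vs. fixed vector fetch (a simulation in C, not 6502 code):

```c
#include <stdint.h>

/* NMOS 6502 JMP ($xxFF) bug: when the two-byte vector straddles a page
 * boundary, the high byte is fetched from the start of the SAME page
 * instead of the next one, because only the low byte of the pointer
 * is incremented. */
static uint16_t jmp_indirect_buggy(const uint8_t mem[65536], uint16_t ptr)
{
    uint8_t lo = mem[ptr];
    uint16_t hi_addr = (uint16_t)((ptr & 0xFF00u) | ((ptr + 1u) & 0x00FFu));
    return (uint16_t)(lo | (mem[hi_addr] << 8));
}

/* What the 65C02 does: carry into the high byte of the pointer. */
static uint16_t jmp_indirect_fixed(const uint8_t mem[65536], uint16_t ptr)
{
    return (uint16_t)(mem[ptr] | (mem[(uint16_t)(ptr + 1u)] << 8));
}
```

With a vector at $02FF pointing to $1234, the buggy fetch reads its high byte from $0200 instead of $0300 and jumps somewhere else entirely, which is exactly what happened whenever a word's code field address landed on $xxFF.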

[1] http://everything2.com/title/6502+indirect+JMP+bug

[2] http://forum.6502.org/viewtopic.php?t=1619

"I'm sure some of you noticed my code will break if the bytes of the word addressed by IP straddle a page boundary, but luckily that's a direct parallel to the NMOS 6502's buggy JMP-Indirect instruction. An effective solution can be found in Fig-Forth 6502, available in the "Monitors, Assemblers, and Interpreters" section here. (The issue is dealt with at compile time; there is no run-time cost. The word CREATE pre-pads the dictionary with an unused byte in the rare cases when the word about to be CREATEd would otherwise end up with a code-field straddling a page boundary.)"

[3] http://www.dwheeler.com/6502/FIG6502.ASM

  ; The following offset adjusts all code fields to avoid an
  ; address ending $XXFF. This must be checked and altered on
  ; any alteration, for the indirect jump at W-1 to operate!
  ;
  .ORIGIN *+2

  .WORD DP      ;)
  .WORD CAT     ;| 6502 only. The code field
  .WORD CLIT    ;| must not straddle page
  .BYTE $FD     ;| boundaries
  .WORD EQUAL   ;|
  .WORD ALLOT   ;)

GigabyteCoin 1 day ago 0 replies      
"tue" means "kill" in french... I wonder if a french programmer somewhere had something to do with this?
lifeisstillgood 2 days ago 2 replies      
And this is why we won't ever get AI. Humans seem to only manage to get to a certain level of complexity before it all gets too much.

There are supposedly people in Boeing who understand literally every part of a 747, the wiring and the funny holes in the windows. But there is probably no one who understands all parts of Windows 10.

We're doomed to keep leaping like dolphins to reach a fish held too high by a sadistic SeaWorld Orlando trainer.

gregschlom 2 days ago 2 replies      
So what's the lesson here? What should we learn from that?
broodbucket 2 days ago 1 reply      
Is it just me or does this get posted every month?
meeper16 2 days ago 1 reply      
Yet another reason I don't let OpenOffice or any Linux UIs slow me down. It's all about the command line and always will be.
The 9 lines of code that Google allegedly stole from Oracle majadhondt.wordpress.com
354 points by nkurz  1 day ago   195 comments top 50
jbob2000 1 day ago 9 replies      
Looks like something I've written a hundred times. It's a common pattern, you could "steal" this just through organically writing a program.

The more I hear about this case, the more I realize it's just a bunch of lawyers trying to pad their bank accounts. No sane engineer would claim this is infringement.

sgc 1 day ago 0 replies      
Given the guy who wrote this wrote both the first and supposedly infringing code, I have a bit of an analogy here from personal experience from another field.

For a while I worked in translating, and I translated a couple of books for the same author. One of the later books quoted about a page from the first one I had translated a couple of years earlier. I just translated it again because it was faster than finding the passage in my other translation (first point). Afterwards, I went back out of curiosity and checked the two translations against each other. I was quite surprised to see that in one full page of translation, after years of further experience, there was only one or two prepositions that were meaninglessly changed (point two).

Some things are just so obvious that the same guy doing the same thing years apart will produce the same results, especially if he is an expert in his craft. Unless there is some way to prove otherwise, this point of the case should be definitively dropped.

nedsma 1 day ago 4 replies      
Dear goodness. And there are tens if not hundreds of people involved in trying to prove/disprove this case and they're all getting some hefty money. What a waste of human intellect and time.
guelo 1 day ago 1 reply      
This article is from 2012 and is very outdated. The "famous 9 lines" are not being contested anymore. Google lost that case. The current trial is about whether Google's copyright infringement constituted "fair use".
AdmiralAsshat 1 day ago 4 replies      
One thing I've been thinking about as I've read through the trial:

It's my understanding (I am a wee lad compared to the grizzled vets here, so bear with me) that most of our common *nix tools were written during the UNIX days and were technically proprietary (awk, grep, cut, etc). When Linux came around, these tools were "ported" to become GNU tools and completely rewritten on the backend, while still keeping the same names so that existing UNIX developers would feel at home using the same tools on Linux, BSD, etc.

The key point here is that they intentionally kept the same command names, for familiarity's sake.

Given that, could one make the analogy that a command name would be similar to an "API" and should also have been illegal by Oracle's logic?

worldsayshi 1 day ago 2 replies      
Wow, this is legal bullshitting beyond comprehension. It is the equivalent of one engineer copycatting the way another engineer moves his arm when fastening a screw. Giving this anything beyond 5 minutes of attention in a court is an insult to society.
gvb 1 day ago 0 replies      
More relevant information: "What are the 37 Java API packages possibly encumbered by the May 2014 Oracle v Google decision?"


From the #1 answer (it is worth clicking the link and reading the full answer):

  java.awt.font
  java.beans
  java.io
  java.lang
  java.lang.annotation
  java.lang.ref
  java.lang.reflect
  java.net
  java.nio
  java.nio.channels
  java.nio.channels.spi
  java.nio.charset
  java.nio.charset.spi
  java.security
  java.security.acl
  java.security.cert
  java.security.interfaces
  java.security.spec
  java.sql
  java.text
  java.util
  java.util.jar
  java.util.logging
  java.util.prefs
  java.util.regex
  java.util.zip
  javax.crypto
  javax.crypto.interfaces
  javax.crypto.spec
  javax.net
  javax.net.ssl
  javax.security.auth
  javax.security.auth.callback
  javax.security.auth.login
  javax.security.auth.x500
  javax.security.cert
  javax.sql

foobarrio 1 day ago 6 replies      
I thought "not obvious to a practitioner of the craft" was a requirement for a patent, no? Give 10 programmers the task of writing rangeCheck() and you'll end up with very similar-looking code.
holtalanm 1 day ago 5 replies      
Am I the only one who, when looking at the implementation, sees a major flaw in the code?

if(toIndex > arrayLen) does not handle the case in which toIndex == arrayLen, which should still throw an ArrayIndexOutOfBoundsException if we are dealing with 0-based indexes.

Please correct me if I am wrong.

Aelinsaar 1 day ago 1 reply      
Incredible. The amount of money being set to the fire for the sake of something that even a student knows is utter crap.
devy 1 day ago 0 replies      
Sort of off topic: does anyone know who Tim Peters, the creator of Timsort, is? The Python docs and Wikipedia[1] have virtually no bio for him, even though he's a very well-known Python contributor and his code became a legacy in this billion-dollar lawsuit, among other things (like the Zen of Python[2]).

[1]: https://en.wikipedia.org/wiki/Timsort

[2]: https://www.python.org/dev/peps/pep-0020/

ZeroGravitas 1 day ago 0 replies      
The worst part is that the programmer only "stole" these lines as he was contributing an improvement back to the OpenJDK and wanted his stuff to be compatible. Which adds one more level of absurdity.
hermannj314 1 day ago 0 replies      
Yeah, a bunch of jurors will be ruined financially while being forced to watch billionaires fight over how to best split up their empire. Sortition is how you spell slavery in the 21st century.
enibundo 1 day ago 1 reply      
As a software engineer, I get sad when I read news like this.
erikb 1 day ago 0 replies      
When the content of a trial is 9 lines of code, then of course the topic is not really the 9 lines of code. It's just a way to gain something else. Everybody involved probably knows that.

I personally am very happy if powerhouses fight each other with lawsuits instead of giving me a sword and asking me to die for them. In that regard I feel humanity has come quite far over the last centuries.

tantalor 1 day ago 1 reply      
cognivore 1 day ago 1 reply      
That has to be a joke. By pursuing this, Oracle just makes themselves look like idiots to anyone who actually has any technical knowledge.

So, they're idiots.

gsylvie 1 day ago 0 replies      
"I May Not Be Totally Perfect But Parts of Me Are Excellent" - I think this is a useful article to read when considering the 9 lines of code, because copyright law tends to treat novels, pop songs, and software code as the same: http://fairuse.stanford.edu/2003/09/09/copyright_protection_...
Twisell 1 day ago 1 reply      
This is total FUD. (EDIT: because those lines of code are already out of every discussion to be held in the current retrial; they have already been ruled out, and the only remaining question is fair use.)

This trial should now be entirely focused on whether Google "stole" the API SSO or should be relieved under a fair use exception.

The preceding phases of this case already determined that: those nine lines are not significant; Google used the API SSO without the consent of Sun/Oracle and without any license; the API SSO of Java is indeed copyrightable (this was ruled on appeal and confirmed by the Supreme Court).

This retrial is only happening because Judge Alsup ran a half-baked first trial, and the appeals court returned the case to him after invalidating his bad ruling on the non-copyrightability of APIs.

For those who seek deep insight into this case, take a look at Florian Mueller's blog: http://www.fosspatents.com

He pretty accurately predicted the reversal of the first ruling, against the opinion of many mainstream analysts. And he frequently publishes links to public court documents, so you can make up your mind for yourself.

EDIT: If you downvote, please give an argument; otherwise it's very suspicious. I'm totally open to discussion, but I can't fight against hidden lobbyist activity that systematically downvotes diverging views.

EDIT2: I edited the first sentence to be more explanatory. I see I got some upvotes, but the silent bashing seems to continue. Again, please give an argument!

I don't get why the name of this blogger unleashes so much passion, when he always publishes documents and links to actual rulings. Yes, he clearly doesn't write as elegantly as some, and yes, he's by now pretty opinionated, but why so much hate?

laverick 1 day ago 1 reply      
Uhm. That code wouldn't even compile...
chiefalchemist 1 day ago 0 replies      
Actual code aside, I would think this should strike fear in the hearts and minds of any dev who wishes to change jobs and doesn't change industries / product type. I would think that push come to shove employers will opt for less direct experience, else they'll fear "a temporary measure" they didn't ask for. That is, suddenly, experience might not be as valuable as it used to be.
sleepychu 1 day ago 0 replies      
I'm pretty sure I've seen


0xmohit 1 day ago 0 replies      
Thankfully the patent system didn't exist when the number system was developed. Otherwise one would need to pay a royalty for counting.
meganvito 1 day ago 0 replies      
At the university I graduated from, a professor would definitely mark this as plagiarism and give an F, unless a strict rule of sourcing was followed. Most OpenJDK source files have the usual license header as their first lines. Maybe I am a late student of the JDK, or maybe in court an exception may prevail. Finally, you have to consider for yourself: what do we mean by contributing to open source?
curiousgal 1 day ago 0 replies      
>if i is between 0 and 11 before retrieving the ith element of a 10-element list.

Shouldn't i be between 0 and 9?

foldablechair 1 day ago 0 replies      
Reminds me of all those court cases over 'stolen' logos: with a small, fixed set of geometric primitives, the probability of coincidence is just high that way. Of course, some people believe all art is imitation and nothing ever gets created from first principles.
chiefalchemist 1 day ago 0 replies      
Code aside. This should strike fear in the hearts and minds of any dev who wishes to change jobs and doesn't change industries / product type. I would think that push come to shove employers will opt for less direct experience.
Matt3o12_ 1 day ago 0 replies      
Does anyone have an idea what is really going on?

I've heard people say that Google actually copied the API structure (which is copyright-able) but I've also heard that this lawsuit was actually about Google using a wrong (or missing license). And I've heard that Google also manipulated the developer community by only propagating "we only copied 7 lines of code" and big evil oracle sues us.

From what I know, Google used Java's API structure but did not include a license. They could have paid Oracle for a license to use it commercially, or they could have used the GPL from OpenJDK and been bound by its restrictions. What they did instead was not include a license at all, because they did not want to pay Oracle but also did not want to be bound by the GPL (which might complicate things with phone manufacturers that change the code).

Could anyone tell me what the fuck this lawsuit is actually about?

eps 1 day ago 0 replies      
Am I reading this correctly that it's actually buggy?

It doesn't work properly if the array is zero-based, nor if it's 1-based. It also doesn't work properly whether toIndex is meant to be included in the range or excluded from it.

chenster 1 day ago 0 replies      
Thanks for wasting court time on nonsense like this. Things like this squat in our legal system and yield absolutely nothing.
nutate 1 day ago 0 replies      
The resonance with left-pad and the question of "how exactly do we share super simple code" evolves through so many different prisms. From legal to organizational to not-invented-here to...
Tloewald 1 day ago 0 replies      
Is it just me or does this code seem to have an off-by-one error (i.e. throwing on toIndex > arrayLen and not toIndex >= arrayLen, given that the lower bound check implies zero-based arrays)?
knodi123 1 day ago 0 replies      
Interesting that these 9 lines were apparently re-typed by hand, or possibly even from memory.... or so I suppose based on the missing close-paren on the first line...
cm2187 1 day ago 0 replies      
There is a lot of vested interest in this case and I do not know the author of this article. Are we sure the claim is down to the implementation of this function?
mark242 1 day ago 1 reply      
A void function that does nothing but throw exceptions. Scala engineers everywhere cringe at the thought of converting this kind of code to native Scala.
rootlocus 1 day ago 0 replies      

 > Google owes Oracle between $1.4 billion and $6 billion in damages if liable
In what damages, exactly?

shubhamjain 1 day ago 0 replies      
Perhaps, someone should make a software that checks code to see if it is infringing any copyright. :)
udkl 1 day ago 0 replies      
Naively, that's $200 million to $800 million per line of code.
meganvito 1 day ago 0 replies      
I would leave my last comment, doing 'cheap things' is/are habitual.
eb0la 1 day ago 0 replies      
I bet you can get similar code from BSD, EMACS, Ingres, or any venerable open source codebase and use it as prior art against that patent claim.

Ok, maybe those venerable codebases don't have exception handling like Java, but you can prove the same logic existed maybe 10 or 20 years before that code was written.

masters3d 1 day ago 0 replies      
One billion dollars per line.
BurningFrog 1 day ago 0 replies      
Is this what the whole case rests on, or is it just one of many details?
hathym 1 day ago 0 replies      
wow, each line costs nearly one billion dollars
smaili 1 day ago 2 replies      
tldr -

  private static void rangeCheck(int arrayLen, int fromIndex, int toIndex {
      if (fromIndex > toIndex)
          throw new IllegalArgumentException("fromIndex(" + fromIndex +
              ") > toIndex(" + toIndex + ")");
      if (fromIndex < 0)
          throw new ArrayIndexOutOfBoundsException(fromIndex);
      if (toIndex > arrayLen)
          throw new ArrayIndexOutOfBoundsException(toIndex);
  }

vladaionescu 1 day ago 0 replies      
Pretty sure that the only reason they copied that code was that they didn't know how to do it themselves.
CiPHPerCoder 1 day ago 2 replies      
This code is ugly anyway:

  private static void rangeCheck(int arrayLen, int fromIndex, int toIndex {
      if (fromIndex > toIndex)
          throw new IllegalArgumentException("fromIndex(" + fromIndex +
              ") > toIndex(" + toIndex + ")");
      if (fromIndex < 0)
          throw new ArrayIndexOutOfBoundsException(fromIndex);
      if (toIndex > arrayLen)
          throw new ArrayIndexOutOfBoundsException(toIndex);
  }
Missing a closing paren in the function prototype, among other things.

  private static void rangeCheck(int arrayLen, int fromIndex, int toIndex) {
      if (fromIndex > toIndex) {
          throw new IllegalArgumentException(
              String.format("fromIndex(%d) > toIndex(%d)", fromIndex, toIndex)
          );
      }
      if (fromIndex < 0) {
          throw new ArrayIndexOutOfBoundsException(fromIndex);
      }
      if (toIndex > arrayLen) {
          throw new ArrayIndexOutOfBoundsException(toIndex);
      }
  }
There you go Google, Oracle, et al. I release this snippet under MIT / WTFPL / CC0. You're welcome.

Oletros 1 day ago 0 replies      
This case is not about RangeCheck, is about the 37 Java classed declaration
draw_down 1 day ago 0 replies      
> Every company tries to control its developers actions, but does management really know what goes into the software?

This is backwards, developers do what management allows. If management cares to know what goes in the software, they will know. There are ways to know. Whether business people want to pay for that is a different matter. Of course they don't, for this precise reason- so they can throw up their hands and say, "those darn developers!"

Software Design Patterns Are Not Goals, They Are Tools exceptionnotfound.net
298 points by kiyanwang  1 day ago   158 comments top 30
userbinator 1 day ago 12 replies      
It probably seems like an obvious statement to a lot of HN, but I have a feeling that it isn't to the majority of developers, who for some reason appear to love immense complexity and solving simple problems with complex solutions. I think a lot of them started with OO, which immediately raises their perception of what is "normal" complexity --- at that point, they're already creating more abstraction than is really necessary. Then they learn about design patterns and all the accompanying "hype" around them, so they think "awesome, something new and shiny to use in my code!" and start putting them in whenever they can, I guess because it feels productive to be creating lots of classes and methods and hooking everything together. It's easier to dogmatically apply design patterns and generate code mindlessly than to think about what the problem actually needs to solve it. The result is code that they think fulfills all the buzzwordy traits of "good software engineering practice" (robustness, maintainability, extensibility, scalability, understandability, etc.), but in reality is an overengineered brittle monstrosity that is only extensible in the specific ways thought of when it was first designed. That almost never turns out to be the case, so even more abstractions are added (including design patterns) on the next change, on the belief that it will help with the change after that, while leaving the existing useless ones in, and the system grows in complexity massively.

I did not start with OO, I never read the GoF book, and don't really get the obsession with design patterns and everything surrounding them. I've surprised a lot of others who likely have, by showing them how simple the solutions to some problems can be. Perhaps it's the education of programmers that is to blame for this.

The statement could be generalised to "software is not a goal, it is a tool".

Related article: https://blog.codinghorror.com/head-first-design-patterns/

jrochkind1 23 hours ago 1 reply      

Design patterns are super useful as tools.

As "goals" they are idiotic. I think lots of people that think they are idiotic have been exposed to them as "goals", or don't realize that's not the point.

I think there is a larger issue here, which is that many kinds of software development, including web dev, has become enormously more complex in many ways than it was when many of us came up.

People coming up now are looking for magic bullets and shortcuts and things they can just follow by rote -- because they are overwhelmed and don't know how to get to competence, let alone expertise, without these things.

It's easy for us to look down on people doing this as just not very good developers -- and the idea of 'software bootcamp' doesn't help, I think it's probably not _possible_ to get to competence through such a process -- but too easy to forget that if we were starting from scratch now we ourselves would likely find it a lot more challenging than we did when we started. There's way more stuff to deal with now.

"Design patterns" are never going to serve as such a magic bullet or thing you can follow by rote, and will often make things worse when used that way -- but so will any other potential magic bullet or thing you can follow by rote. Software doesn't work that way. It's still a craft.

dantheman 1 day ago 2 replies      
Patterns are from software archaeology, they were naming things that were commonly seen and what they were for -- they were helping build a vocabulary to talk about larger constructs.

They are useful if you have a problem and one fits it perfectly, it can help you start thinking about it -- but it might not be a good fit.

In general we should be keeping software as simple as possible, with the understanding that it can be changed and adapted as needed. Often large "pattern" based projects devolve into a morass of unneeded complexity to support a level of flexibility that was never required.

prof_hobart 1 day ago 1 reply      
> ...if you ever find yourself thinking, "I know, I'll use a design pattern" before writing any code, you're doing it wrong.

Unless I'm misunderstanding him, I would disagree with this. When you're doing it wrong is when you use a design pattern without understanding what problem its solving, and whether you have that specific problem.

To use his tool analogy - if you're a joiner who turns up to a job thinking "we always need to use a hammer" and start randomly hitting everything, then you've gone wrong. But equally, if you're halfway through knocking a nail in with your shoe and think "Oh look, I'm using the hammer pattern now", you're doing it just as wrong.

If you're looking at two things you need to attach together and you've considered whether glue, a screw, a nail or something else is the most appropriate for this specific job, decide it's the nail and then think - "I need to use my hammer now", then you're doing it right.

gwbas1c 1 day ago 0 replies      
Design patterns aren't the problem. All a design pattern is, is a well-known way of doing something.

When you build a house, do you re-invent how to frame, plumb, wire, and roof it? No. That's all a design pattern is. Choosing the right design pattern is akin to making sure that your basement is made out of cement and your walls framed with wood. (You don't want to put shingles on your countertops!)

The problem is that some developers think they are some kind of magical panacea without really understanding why the pattern was established and what it tries to achieve. These are the over-complicated projects that everyone is complaining about in this thread. (These are the projects where the basement is made with wood or the concrete walls too thick; or the projects where someone decided to put shingles on the countertop.)

I try to pick, establish, and follow design patterns in my code. It helps ensure that I don't spend a lot of time re-learning why some other technique is flawed; and it helps achieve a consistent style that everyone else on the team can work with.

rootlocus 1 day ago 0 replies      
I found both his definition of the adapter pattern and his example to be a bit off. In his example, the adapter extends the external interface instead of the client interface. By definition the adapter must implement the client interface. It's even in the UML diagram displayed on the website he quotes (http://www.dofactory.com/net/adapter-design-pattern)

 > The fact was that I just didn't understand them the way I thought I did.
 > To be clear, I've never read the Gang of Four book these patterns are defined in.
After admitting he has a less than desired understanding of design patterns (proven by his poor example), he makes bold claims like:

 > if you ever find yourself thinking, "I know, I'll use a design pattern" before writing any code, you're doing it wrong.
I'm having problems taking this article seriously.
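
To make the distinction concrete, here is a minimal sketch of the adapter direction rootlocus describes (all names are hypothetical, invented for illustration): the adapter implements the client interface and wraps the external type, rather than deriving from the external interface.

```cpp
#include <cassert>
#include <string>

// Client-facing interface that the rest of the application depends on.
struct Logger {
    virtual ~Logger() = default;
    virtual std::string log(const std::string& msg) = 0;
};

// Third-party class with an incompatible API that we cannot modify.
struct VendorSink {
    std::string write_record(const std::string& rec) { return "vendor:" + rec; }
};

// The adapter implements the client interface and wraps the external type.
// (The article's example instead derived from the external interface.)
class VendorLoggerAdapter : public Logger {
    VendorSink sink_;
public:
    std::string log(const std::string& msg) override {
        return sink_.write_record(msg);
    }
};
```

Client code holds a `Logger&` and never sees `VendorSink`, which is the whole point of the pattern.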

MoD411 1 day ago 2 replies      
"Software Design Patterns Are Not Goals, They Are Tools" - I do not understand why this needs to be said in the first place.
emodendroket 1 day ago 3 replies      
As far as I can tell design patterns are mostly about taking something simple and obvious and using terms to describe it that make it obscure and difficult to understand.
Arzh 23 hours ago 0 replies      
This article makes way more sense when he says he never read the Design Patterns book. If he had, he would know that before he started. They explain that the book is a collection of patterns that they have compiled from a bunch of people and from years of experience. The patterns did come about organically, and they were never meant to be the way to design software. They were only trying to come up with a common lexicon for something that they were all already doing.
madeofpalk 1 day ago 0 replies      
I'm reminded of a set of tweets from Harry Roberts about whatever new hot CSS naming convention was popular for the week:

> Modularity, DRY, SRP, etc. is never a goal, it's a trait. Don't let the pursuit of theory get in the way of actual productivity.

> That's not to say Modularity, DRY, SRP, etc. aren't great ideas (they are!) but understand that they're approaches and not achievements.

There's nothing super revolutionary about these thoughts, but they've stuck in the back of my mind for a while now.


rhapsodic 1 day ago 1 reply      
A design pattern is a reusable solution to a recurring problem. Too many inexperienced devs forget that part, and use a pattern where the problem it's designed to solve doesn't exist. Had the author read the GoF book (he admits he still hasn't) he might have avoided that pitfall.
awinter-py 1 day ago 1 reply      
design patterns are guru thinking. they're bad ways to describe self-descriptive tricks like callbacks. don't let a person who talks this way write docs ever; they'll focus on 'what's being used' rather than what's happening.

design patterns are like when a consultant creates a stupid name for something that already exists -- the name isn't about expressive power, it's about declaring ownership so the consultant can sell the 'Consulting Method' to solve your problem.

when a phenomenon or trick has an easily understood one-word name, don't let a nonexpert rename it to something nobody understands.

apo 1 day ago 0 replies      
> Here's the problem I have with design patterns like these [Adapter Pattern]: they seem to be something that should occur organically rather than intentionally. We shouldn't directly target having any of these patterns in our code, but we should know what they are so that if we accidentally create one, we can better describe it to others.

It's not clear what the author would have done differently in this example. It's one thing to raise concerns about pattern-first thinking in general, but quite another to spell out what exactly is wrong with reaching for the Adapter Pattern to solve a very specific problem under a given set of constraints. I can imagine a number of situations in which going straight for an Adapter is the only sane choice.

I've come to view with great suspicion any general discussion of programming divorced from its context. Architecture Astronauts and Cowboy Coders can each do a lot of damage if left to their own devices.

badloginagain 1 day ago 1 reply      
Design patterns, OOP, to a large degree programming languages are just tools. You don't hear of craftsmen saying things like "The only thing you really need is a hammer. It's been around longer than the other tools and you can use it on every project". Replace "hammer" with C or Java and you have a legitimate comment on a score of threads.

> What patterns don't help with is the initial design of a system. In this phase, the only thing you should be worried about is how to faithfully and correctly implement the business rules and procedures.

I submit that should be your overriding concern at all times, not just the design phase. If you have to refactor some code in order to extend it, tie it back to the changed requirement. This forces you to make the least amount of changes, refactoring the least amount of code, breaking the least amount of unit tests and introducing the least amount of bugs into production.

EliRivers 1 day ago 0 replies      
While we're here, SOLID is a nice acronym that is helpful as a checklist of generally good ideas to consider. It's not a law of physics, it's not compulsory, following it blindly can lead to worse outcomes and if transgressing it leads to a better outcome (with all things considered) then it should be transgressed.
arxpoetica 1 day ago 1 reply      
Just now realizing there is ambiguity around the terms design patterns. Say it in a different crowd, they'll think you are talking about the kind of design patterns Brad Frost is writing about. http://atomicdesign.bradfrost.com/
RangerScience 19 hours ago 0 replies      

The point of design patterns is to give you a way to describe what you've made succinctly.


When you set out to do something that you don't yet know how to do, having a crank you can turn to get out functioning code is a good thing.

I think what you mean is "Design Patterns are Tools, not Dogma".

Plus, a lot of design patterns only make sense in typed and/or OOP languages, so under those circumstances, they can't be applied as goals.

V-2 22 hours ago 0 replies      
As pointed out (arguably a bit harshly) in comments under the original article, this is really a strawman argument. That's because that ol' classical GoF book on design patterns - which the author admits he has not even read - addresses this concern already. It's still a valid argument, but not exactly a fresh one. And speaking on the subject without even bothering to read the piece widely considered canonical is a bit arrogant.
mirekrusin 1 day ago 0 replies      
His problem may be learning about those concepts from snake-oil sellers - he mentions he didn't bother to read GoF and gets his knowledge from things like http://www.dofactory.com/products/net-design-pattern-framewo... .

My advice is to learn from people like Martin Fowler or Kent Beck and if you want to look at companies, look at something like ThoughtWorks.

exception_e 23 hours ago 0 replies      
Kind of relevant to the discussions in this thread: https://en.wikipedia.org/wiki/Rule_of_three_(computer_progra...

When I do hit the magic 3 and can justify restructuring code, I consider my options in terms of design patterns (which are very much tools!)

matchagaucho 22 hours ago 0 replies      
Stated in other terms, patterns are a means to an end. Not the end goal.

Patterns will organically emerge as the result of ongoing refactoring.

bradenb 1 day ago 3 replies      
> In other words, if you ever find yourself thinking, "I know, I'll use a design pattern" before writing any code, you're doing it wrong.

I completely disagree... if I'm working with a team. I've spent far too many hours trying to fix fragile code that comes about as a result of different devs with different methodologies trying to tie their code together.

id122015 23 hours ago 0 replies      
I can say the same thing about programming.

That's why when I read HN I'm trying to understand what you are trying to achieve. Something that goes beyond staying in front of the computer 10 hours a day.

johanneskanybal 1 day ago 0 replies      
"I didn't read the article or the comments but I think you're all wrong, maybe it's bad upbringing or maybe something else but whatever". ok thanks for sharing.
golergka 1 day ago 2 replies      
I have been interviewing a lot of developers recently, and one of the best questions I've found is to ask them _why_ they used the MVC pattern in the test assignment (most do). Most developers misunderstand the question at first and either start to explain how MVC works or explain how they would've implemented it without MVC (when you ask people why they did something, they often take it as "you shouldn't have done it"). But even when I clarify the question, a surprising number just can't even begin to answer it; instead they stumble and at best just tell me that that's how they have always been taught to do it.
smoreilly 23 hours ago 0 replies      
How can someone doing research on these patterns not have read the most basic/important piece of literature on the subject?
projektfu 1 day ago 0 replies      
When I was in college, I assumed (like most) that patterns were received wisdom in how to construct software. Then I actually attended a talk with John Vlissides and realized that patterns were an entirely different thing, closer to the "archaeological" sense dantheman mentioned. In this way, the study of design patterns corresponds better to the study of rhetoric or poetics in human language. "Homeric Simile" could be a design pattern in poetry.

In software, some rigidity of expression might be preferred, and so the design patterns also help us avoid creating new terminology for things that have been appropriately described.

There are places where each pattern might have utility, and I suppose if there is any sense to the term "software architecture" it is in the ability to make sense of what the system should look like in a way that can be explained to the relevant parts of the team.

There is a tendency, as well, among software developers to think that a complicated architecture must be the result of countless stupid decisions, probably made by junior technicians, who were doing things without understanding what's going on. Thus you find people exhorting others for simplicity, and acting like they've done their job at that point. But instead, complicated architecture is the result of compromises and rewrites throughout the software's life, and attempts to discard those old architectures and start afresh with similar tools usually result in an initially simplistic, but ultimately inflexible, design that will eventually evolve into a different complex architecture.

The Linux kernel is an example of a complicated architecture that was designed from a standpoint of simplicity initially, and developed its own object-oriented layer on top of C, with pluggable elements all over, loadable modules, etc., and millions of lines of code. BSD is smaller and more coherent, but also much more limited in scope.

There are also examples like Windows NT, which suffered from being the second system to 3 systems: Windows, OS/2 and VMS. In this kernel, there are so many design features that were included before implementation, that it seems incredible it was ever built. But they persisted and made it happen, and even eventually made it fast, in some cases by working around its design with new compromises and approaches. Still, it lacks the simplicity of a Plan9 or an Oberon, but what it doesn't lack is users.

Anyhow, I digress. What is important to me about patterns is the language that we get from them, and the ability to recognize what's going on in code. They can provide useful hints about implementation gotchas, and they can also help people stop reinventing the wheel.

bjr- 23 hours ago 0 replies      
Read the book. Then read the books that inspired the book.
olleicua 21 hours ago 0 replies      
EGreg 1 day ago 0 replies      
Goals should include:

 1) Solve the problem
 2) Make it maintainable
 3) Make it extensible
 4) Make it scalable (server)
 5) Optimize it for memory, speed
So the reason to use an existing paradigm and a well-tested framework is because it makes the above easier, especially #2. And over time, #2 winds up saving you a lot resources and probably saves your project from tanking.

Finally, using an existing well known platform also lets you hire developers who know what they're doing from the beginning, leading to more productivity and less dependence on any one particular developer. We leverage the knowledge that's already out there.

Why I don't spend time with Modern C++ anymore linkedin.com
273 points by nkurz  2 days ago   255 comments top 39
jupp0r 2 days ago 6 replies      
In my experience, the opposite of what the author claims is true: modern C++ leads to code that's easier to understand, performs better and is easier to maintain.

As an example, replacing boost::bind with lambdas allowed the compiler to inline functor calls and avoided virtual function calls in a large code base I've been working with, improving performance.

Move semantics also boosted performance. Designing APIs with lambdas in mind allowed us to get rid of tons of callback interfaces, reducing boilerplate and code duplication.
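
The inlining point above can be sketched like this (illustrative code, not from the code base in question): a callable passed through a template parameter keeps its concrete type, so the compiler can inline it, whereas a type-erased std::function (similar in spirit to a stored boost::bind result) forces indirect dispatch.

```cpp
#include <cassert>
#include <functional>

// Type-erased callback: each invocation goes through indirect dispatch,
// which the compiler generally cannot inline across.
int apply_erased(const std::function<int(int)>& f, int x) { return f(x); }

// Template callback: the lambda's concrete type is visible here,
// so the call can be inlined.
template <typename F>
int apply_inline(F&& f, int x) { return f(x); }

int demo() {
    auto twice = [](int v) { return v * 2; };
    return apply_erased(twice, 10) + apply_inline(twice, 11);  // 20 + 22
}
```

Both forms compute the same result; the difference shows up in the generated assembly (e.g. on gcc.godbolt.org), where the template version collapses to straight-line code.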

I also found compilation times to be unaffected by using modern C++ features. The main problem is the preprocessor including hundreds of thousands of lines for a single compilation unit. This has been a problem in C and C++ forever and will only be resolved with C++ modules in C++2x (hopefully).

I encourage the author to try pasting some of his code into https://gcc.godbolt.org/ and to look at the generated assembly. Following the C++ core guidelines (http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines) is also a good way to avoid shooting yourself in the foot (which is surprisingly easy with C++, unfortunately).

justsaysmthng 2 days ago 5 replies      
HFT is a pretty limited and extreme application case.

From what I understand - everything is not enough for HFT - network cards, kernel drivers, cables, etc.

You have milliseconds (edit: nanoseconds!) to receive, process and push your orders before someone else does it and gets the prize.

It's an arms race between technologists for the purpose of making a small number of people rich.

I doubt that these requirements apply to other application fields where C++ is used - and it's used almost everywhere, with great success I might add.

In my view C++ is actually a couple of languages mixed into one.

The hard part is knowing which part of the language to use for which part of the problem.

The "modern" C++ solves a lot of the nuisances of the "old" C++, but you can do without these features just fine. I apply them carefully to my code and so far it's been a pleasant experience. Even if I don't use all of the new features, it's nice to know that I can (and I will some day!).

So I don't really buy this rant..

pjc50 2 days ago 4 replies      
There are two separate rants here that aren't delineated well.

1) C++ is too complicated, and therefore hard to reason about and slow to compile.

We're going to argue about this forever, but you'll have to agree that the spec is very large and warty compared to other languages, and that C++ tends to take far longer to compile (this was already a problem a decade ago, it's not specific to "modern" C++).

2) The future of software development will include more of what I'm going to call "non-isotropic" software; rather than assuming a flat memory model and a single in-order execution unit, and exerting great effort to pretend that that's still the case, programmers will have to develop effectively on GPUs and reconfigurable hardware. Presumably this speculation is based on the Intel-Altera acquisition.

You can sort of do hardware programming in C (SystemC) but C++ is really not a good fit for hardware. Personally I'd like to see a cambrian explosion of HDLs, but the time is not yet right for that.

It sounds like the author favours the "C with classes" programming style, maybe including smart pointers, and is probably not keen on lambdaization of everything.

hellofunk 2 days ago 5 replies      
This article is not very general. Much of what it tries to convince us is not going to matter for most developers, and has the cost of suggesting modern features are not good for any developers. For example:

>It is not rare to see Modern C++ applications taking 10 minutes to compile. With traditional C++, this number is counted in low seconds for a simple change.

This is simply a bogus statement with respect to what at least 90% of c++ developers do on a daily basis.

I have benchmarked unique_ptr, auto, brace initialization, lambdas, range-based-for and other modern idioms and found them all to be at least as fast, and often faster, than their older counterparts. Now, if I were to instead go off and write template-heavy code using new features, that would be different. But in reality, the vast majority of c++ developers -- I'd wager at least 95% -- are not writing variadic templates on a daily basis (nor should they be).

The memory safety and many other benefits of unique_ptr [0] make it one of many modern tools that is a no-brainer to use in nearly all contexts. No, not nearly all contexts, allow me to rephrase: all contexts. It just is, and if you compare its use to manual new/delete code, the benefits are solid and the code is faster.

The author further claims that modern C++ is less maintainable and more complex. The absolute opposite is true in nearly all cases. Using unique_ptr again as an example, it leads to less code, less complex code, more clear code, and better maintainability and code readability. Uniform brace initialization is another example that prevents many common older problems in the language.
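
The unique_ptr point can be sketched in a toy example (not from any real code base): the manual version must remember delete on every path, while the scoped version cleans up automatically.

```cpp
#include <cassert>
#include <memory>

struct Widget {
    int value;
    explicit Widget(int v) : value(v) {}
};

// Manual ownership: every early return and exception path must
// remember to call delete, or the Widget leaks.
int read_manual() {
    Widget* w = new Widget(7);
    int v = w->value;
    delete w;  // easy to forget on other code paths
    return v;
}

// unique_ptr (with C++14's make_unique): the destructor releases
// the Widget on every path, with no cleanup code to maintain.
int read_scoped() {
    auto w = std::make_unique<Widget>(7);
    return w->value;
}
```

The scoped version is both shorter and exception-safe, and the smart pointer adds no per-object overhead over the raw pointer.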

FYI the author keeps talking about high frequency trading as an example of why modern c++ is a bad choice. Well, I worked at a HFT firm for a long time until last year, the firm places millions of trades per day and is among the most successful in the markets it trades. And what did we use? Only modern features. Lambdas, auto, unique_ptr, range-fors, even std::async -- everywhere in our code. This author is either naive or political.

I think the title of this article is highly misleading, and the contents are not relevant. Overall, this article is just bad advice for most of us.

[0] https://news.ycombinator.com/item?id=11699954

n00b101 2 days ago 1 reply      
As has always been the case, effective use of modern C++ requires knowing which subset of the language to use and which to avoid.

I agree with the author's criticisms of many C++ features. At the same time, I think that a proper simple, modern subset of C++ exists that is much more productive and safer than C, without sacrificing performance. You can also optimize progressively, for example start with using std::string and std::vector and then replace the stock implementations if they aren't performant on your target architecture. I would not, however, recommend using C++ for GPU kernel code - a mix of C++ for CPU code and C for GPU kernel code works best. It is not ideal, but it's the best toolset available for serious industrial development.

FPGAs are exciting, but they've also been the "next big thing" in general purpose computing forever. Obviously it makes sense to use FPGAs for certain HFT and embedded applications, but that's not the same as general purpose computing, which is what C/C++ is for. Not to mention, FPGA compile times can run to hours or even days, which makes most C++ template overhead pale in comparison. I would also say that for IOT, I'm not sure why it is obvious that "$10 FPGAs" should dominate. Why not a $0.50 microcontroller? Or the $5 Raspberry Pi Zero board? Both of which are eminently programmable in C and even C++. Embedded devices have been around long before "IOT" became a buzzword, and we can see that microcontrollers, FPGAs, SOCs, and custom ASICs all have a role to play depending on the application.

typon 2 days ago 1 reply      
If he is complaining about C++ being bad and suggesting Verilog on FPGAs as an alternative, boy do I have some bad news for him.

HDLs (yes including Systemverilog) have 10x worse design than the worst software languages. This is why there are entire companies out there that make high level synthesis tools or high level HDL specification languages (like Bluespec).

And I haven't even said anything about the quality of FPGA tool chains.

kangar00 2 days ago 3 replies      
> If you cannot figure out in one minute what a C++ file is doing, assume the code is incorrect.

This statement at first resonated with me, and then I thought about it: this doesn't reduce the complexity of the overall application or service, it just means that one file is simple. You could have 10,000 files instead of 1 much shorter one; is that any more simple?

jonathankoren 2 days ago 8 replies      
I know why I don't like C++ anymore: it's just no fun. It's slow to compile, the errors are like 6 lines long, full of template and class hierarchy noise that makes it hard to understand what exactly happened, and then of course there's the common coding shortcut of declaring everything auto. (What type is this list? I don't know, it's auto all the way down.) Then there's the whole thing about making constructors but leaving the bodies empty, because everything should be in initializer lists now, and now there's wrapped pointers for some reason.

I hated writing modern C++. It was just so depressing and frustrating.

halayli 1 day ago 0 replies      
This article is coming from a frustrated developer and lacks any scientific evidence. The frustration (understandably) is coming from the overwhelming complex new features and patterns that barely a compiler can understand.

C++11 onward revamped the language to make up for the lack of progress in the past 10 years. The majority of C++ developers who aren't keeping up with the new features because they are busy with their daily jobs feel that they are falling behind and that the language they thought they knew has changed underneath them.

C++03 already had a steep learning curve, but with C++11+ that learning curve is orders of magnitude more.

On the upside, you can use C++11 without understanding most of the details and it will do the right thing most of the time. And I think that's the bet that the language is making.

messel 2 days ago 1 reply      
Ok. Try a different language :)?

A single language needed to solve all problems is a fallacy.

I don't see FPGA programming ousting C++, but expect higher level languages with strong parallel semantics to gain "market share". You can always call a dedicated process written in optimized C for the hottest components. Compose the rest in Go, Elixir, or any high level language (Lisp).

Architectures will naturally gravitate to higher level languages that support cleaner composition. The tools and interfaces will push towards higher abstraction without impacting build or run time. Maybe this process is related to Kevin Kelly's inevitable. I'm an optimist here.

aninteger 2 days ago 0 replies      
I've come to the conclusion that one should "use C++ when you absolutely have to and C when you can." There just aren't many areas where C++ is absolutely required when plain old simple C can be used. (Not to mention using higher-level languages if possible).
DrBazza 2 days ago 1 reply      
There are only two kinds of languages: the ones people complain about and the ones nobody uses. - Stroustrup.

C++30, might end up being D, today.

shanwang 2 days ago 0 replies      
Such rants appear once every few months on HN; this one is one of the least convincing. Many problems he mentions are not "Modern C++" problems but problems with C++ from the beginning, and some of them already have reasonable solutions, for example ccache + distcc for speeding up compilation.

The real problem with C++ is the standard committee, the design by committee approach for such a complex language is failing. If C++ is taken over by a company, it will be a much better language.

fsloth 2 days ago 2 replies      
This sounds like it's written from the point of view of implementing something in-house. I fail to see how FPGA programming will be relevant if one wants to distribute software for consumers (or am I technologically clueless...).
Const-me 21 hours ago 0 replies      
I never programmed HFT software, but I agree with the criticism of the modern C++.

It's bad the author hasn't defined what exactly "modern" is. I saw some comments compare boost with C++14. I think boost is also modern. Even Alexandrescu's Loki is modern, even though the book was published in 2001.

I think that modern stuff was introduced in C++ because in the late 90s and early 2000s there was an expectation that C++ would remain dominant for some time. There was a desire to bring higher-level features to the language, to make it easier to learn and safer to use, even at the cost of performance.

People didn't expect C++ would lose its market that fast: very few people now use C++ for web apps or rich GUIs. However, due to inertia and backward compatibility, the features remain in the language.

Personally, I'm happy with C++.

C++ is excellent for system programming, also for anything CPU bound. For those you barely need those modern features, and fortunately, they're completely optional: if you don't like them, don't use them.

But if you do need higher-level language for less performance-critical parts of the project, I find it better to use another higher-level language and integrate it with that C++ library. Depending on the platform, such higher-level language could be C#, Lua, Python, or anything else that works for you.

cpwright 2 days ago 0 replies      
I find the beginning and end of the article quite contradictory. Basically that C++ is too complicated; and oh by the way we should start programming FPGAs, which are much harder to get right.

I like modern C++, because I think it simplifies a lot of things (RAII for the win here). Templates let you engage in duck typing, but with (if you are careful) very performant results.
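
The RAII win mentioned above fits in a few lines (a toy resource counter standing in for a file handle or lock): acquisition happens in the constructor and release in the destructor, so the resource is freed on every exit path.

```cpp
#include <cassert>

int g_open_handles = 0;  // stands in for an external resource count

// RAII wrapper: acquire in the constructor, release in the destructor.
struct ScopedHandle {
    ScopedHandle()  { ++g_open_handles; }
    ~ScopedHandle() { --g_open_handles; }
};

void do_work() {
    ScopedHandle h;
    // ... use the handle; no explicit release needed, even if
    // this function returns early or throws ...
}
```

After do_work() returns (or unwinds), the handle count is back to zero without any cleanup code at the call site.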

syngrog66 17 hours ago 0 replies      
I was once a C++ programmer but migrated first to Java, when I thought it was better designed and more convenient, and then to Python when I wanted less verbosity while having greater freedom to choose between a procedural style or OO.

C++ may still be an ideal choice in some problem spaces but I think the number and size of them has shrunk as more and better alternate choices have appeared and ate away at the C++ share.

aspiringuser 1 day ago 1 reply      
20 year C++ programmer here. I work on multithreaded server code. Stopped using modern C++ features 5 years ago. I'd compare my use of C++ to be roughly equivalent to the use of C++ in the NodeJS project or the V8 project. I'm not a user of Boost.

I have to agree with the author of the article. It takes longer to train developers to write idiomatic modern C++ code, and compilation times explode. Compiler support for bleeding edge C++ features is spotty at best. It's harder to reason about the correctness of modern C++ code.

Philipp__ 2 days ago 0 replies      
While some pretty good points were made in this post, I can't help but feel the OP is a bit biased; the view is too narrow, so to speak.

I feel totally the opposite about modern C++. I guess how, where, and when you use it will define your opinion and experience.

dahart 2 days ago 2 replies      
> Today the "Modern Technologist" has to rely on a new set of languages: Verilog, VHDL

That was a complete surprise ending! :)

I like surprise endings, and he makes a lot of good points, whether or not I agree with them. But, I totally wasn't expecting "I'm done with C++ because: hardware." I was expecting because web or because awesome new high performance functional scripting language <X>.

A lot of what he's talking about there will still run compiled software though... FPGA programming and C++ aren't exactly mutually exclusive, right?

stormbrew 1 day ago 0 replies      
One of the biggest users (some would say abusers) of template metaprogramming I know works on HFT software. He trades extremely long compile times for performance at runtime and finds that C++ allows him to do this and maintain a decent architecture (through what amounts to compile-time polymorphism as well as RAII).

For him, it's actually the older features of C++ that have no use. He doesn't use deep class inheritance and never touches virtual functions, for example.

thinkpad20 1 day ago 0 replies      
> After 1970 with the introduction of Martin-Löf's Intuitionistic Type Theory, an inbreed of abstract math and computer science, a period of intense research on new type languages as Agda and Epigram started. This ended up forming the basic support layer for functional programming paradigms. All these theories are taught at college level and hailed as "the next new thing", with vast resources dedicated to them.

This seems pretty dubious. Dependently typed languages and other projects embracing advanced type theory are still the realm of niche enthusiasts. While some of the more academic colleges might teach them in one or two courses, the vast majority of education a CS college student receives will be taught in traditional imperative languages. If "vast resources" have been devoted to Agda and Epigram, then I'm not sure what kind of language should be used to describe the resources devoted to C, C++, Java, etc. Also as the author mentions, Intuitionistic Type Theory has been around since the 70's, in fact the same year that C was introduced. Certainly it hasn't been taking over the CS world by storm since its inception, as he seems to claim.

Beyond that, the author's argument seems a bit incoherent. He critiques the readability of modern C++, but C++ is notoriously hard to understand, including or especially prior to the development of C++11. It's never going to be an easy language to read except for seasoned developers. If anything, C++11 provides abstractions that increase readability and safety. He critiques the performance of modern C++, but then ends up recommending that people ditch C++ entirely and learn VHDL/Verilog instead. If not even vanilla C++ is fast enough for him, why criticize modern C++ on the grounds of performance?

cm3 2 days ago 1 reply      
I recently had to switch a project to -std=c++11 because a header I include now uses C++11 features. This change alone made compilation at least two if not three times slower. The new safety and convenience features are nice, but compile times seem to be out of focus and getting worse every year. I don't know how I feel about g++ 6.1 defaulting to -std=gnu++14.
ausjke 2 days ago 0 replies      
Just started to relearn C++ and Qt for cross-platform GUI programs. C++ is not easy, but its performance is still unbeatable, and in certain use cases (e.g. games, performance-critical video apps, GPU/OpenCL work) C++ still seems to be the sole candidate.
jcbeard 2 days ago 0 replies      
I have a few problems with this article.

> structure leads to complex code that eventually brings down the most desired characteristic of a source code: easiness of understanding.

If done well, the structure of things like variadic templates make libraries easier to use, and make coding faster (granted, code bloat can be an issue with N different function signatures).

> C++ today is like Fortran: it reached its limits

Not quite. Fortran died because, well, object-oriented programming came out and lots of people liked it. And C was always more popular regardless, so... C-like C++ was the obvious next choice. There is a lot of cruft in any new library, so some things aren't as performant as if you wrote them in, say, assembly, which is what the author seems to suggest. Yes, if I built bare-metal iostream-like functionality it would be more performant (ha, used the word :) ). People know iostream isn't that performant. Could it be better? Perhaps. Is it safe? Yes! If you want perf, use the C interface directly. Is that safe to use? Probably not for the general careless user.

> To handle the type of speed that is being delivered in droves by the technology companies, C++ cannot be used anymore because it is inherently serial, even in massively multithreaded systems like GPUs.

Well, yes, but so is just about every language. People are trained to write sequentially (left to right, top to bottom), with many exceptions... but nonetheless, sequentially. There are very few languages that do multithreading natively. There are lots of additions/libraries to C++ that enable very nice ways to express parallelism, both within the standard (std::thread) and outside it (raftlib.io, hpx (https://github.com/STEllAR-GROUP/hpx), kokkos (https://github.com/kokkos), etc.). There are lots, and some are quite easy to use. C++ is inherently serial, but there is no better way to write. It is fairly easy to pull out "parallel" pieces of code to execute. It is even easier if the programmer gets quick feedback (like the icc loop profiler, etc.) on things like ambiguous references and loop bounds that can be fixed quickly.

Interesting read, but don't agree at all.

bitL 2 days ago 0 replies      
I agree with the author; I still long for the not-overly-complicated C++ of the '00s, in which I could write a super-fast 3D rendering engine without much bloat. I find it appalling that C++ went from being a poster child of imperative programming to implementing monads in its libraries (mind you, monads are used to "simulate" imperative programming in functional programming). Something went wrong there...
hackerweb 1 day ago 0 replies      
How are Verilog and VHDL a "new set of languages"? That set has been around 30 years, almost as long as C with classes.
sickbeard 2 days ago 0 replies      
His argument about simplicity resonates with me. Sure, you can learn variadic templates and all that fancy stuff, but in practice, when you are working on production software in any company where more than one person uses the code base, it pays in heaps to write the simplest, easiest-to-understand code; meaning all that nice fancy stuff is almost never used.
progman 2 days ago 0 replies      
The problem with modern C++ is that it wants to be everything. Now this behemoth is collapsing under its own weight.

People who are not forced to use C++ should consider other languages which are way cleaner and even more performant. Code written in Ada and Nim for instance is much easier to maintain.

Nano2rad 1 day ago 0 replies      
Functional language programs have to run as interpreted. If compiled they will be too bloated.
koyote 2 days ago 1 reply      
Am I the only one being redirected to a linkedin sign up screen?
afsafafaf 2 days ago 1 reply      
Wonder if they tried IncrediBuild to reduce their compile time? They are right that C++ - while faster than ever before - takes much longer to compile than many other languages.
sitkack 2 days ago 0 replies      
> "that is where the unicorns are born: by people that can see on both sides of the fence"
blux 2 days ago 1 reply      
Anybody got an idea to which video series of Chandler Carruth he is referring to?
je42 2 days ago 0 replies      
Actually, the author wants Go.
known 2 days ago 0 replies      
Kernel is my new home;
frozenport 1 day ago 0 replies      
Being an expert FPGA programmer is easy, the problem is that small things take a really, really long time.
known 2 days ago 0 replies      
Me too :)
ensiferum 2 days ago 2 replies      
It just sounds like someone who couldn't handle C++ whining and making a bunch of blanket statements without really having any proper understanding.

I agree that some of the features, such as lambdas, can lead to hard-to-track bugs (lifetime issues) and difficult-to-follow code when abused. When used well, though, they can lead to simple, elegant, and straightforward code (anyone who tried to use the STL algorithms before lambdas knows what a pita it was most of the time).

Bottom line, if your code base is a mess don't blame the tool. Blame the programmers.

How I Accidentally Captured the SpaceX Falcon 9 Landing petapixel.com
291 points by electriclove  1 day ago   26 comments top 10
6stringmerc 21 hours ago 2 replies      
If the author happens by to read this, I wanted to maybe offer a bit of help with some of the 'in the field' frustration noted:

> This time of year is sea turtle season in the southeast and the threatened turtles that come up on the beach to lay their eggs (and any little ones that hatch) are highly sensitive to light and often get turned around and disoriented by lights on the beach. For that reason, South Carolina (and presumably other states in the area) has instituted a no-lights-on-the-beach policy. Luckily there's enough light pollution that you can at least navigate without a problem, but not being able to use a flashlight to help with focusing, adjusting camera settings, etc., is a bit of an annoyance.

When I was in Costa Rica seeing endangered sea turtles doing their thing, the local guides used red bulbs because they were not disorienting to the sea turtles. Noise from the tourist group was killing me, I gotta say, but hey, I was along for the ride in this case. I can say I learned the red light thing.

So I looked up real quick and found some info and links from a South Carolina conservation group. They state the ordinance reads that "disruptive lights" are forbidden. Then they had a link to a site of 'certified' bulbs for use around wildlife. Red is one of the main colors featured:


Thus, using a red light may be okay under the spirit and way the ordinance(s) are written, but calling ahead might be a good idea too.

eganist 1 day ago 4 replies      
I'm not a photography buff by any stretch, so I'm probably flagrantly abusing some language here, but I feel an HDR timelapse of this shot (http://petapixel.com/assets/uploads/2016/05/zgrether_spacex_...) would probably be more beautiful than the end result (http://petapixel.com/assets/uploads/2016/05/zgrether_spacex_...) if only because my brain is jarred by the conflict of a timelapsed landing with a still shot of the stars.

Both are beautiful in any case.

zeiss_otus 1 hour ago 0 replies      
Woah! His gear costs around $8k.

Sony Alpha a7R II Mirrorless Digital Camera - $3k
Zeiss Otus 28mm f/1.4 ZE Lens - $5k

Anyone who says you need skills in photography is dead wrong, it's all about the gear.

uberdog 20 hours ago 1 reply      
I personally like this animated gif that SpaceX tweeted better than the static image the photographer created:


neiled 1 day ago 1 reply      
The stars sure are beautiful.
pjungwir 22 hours ago 1 reply      
Wow that sure is beautiful, and what luck! Does anyone know what the red flash was right in the middle, just above the water?
Aelinsaar 22 hours ago 0 replies      
That's... so cool. I always love to see the relative motion of celestial bodies, and with a rocket in the foreground?! My jaw actually dropped a little.
cooper12 1 day ago 2 replies      
Wow it's astounding how much work can go into processing an image. I think he brings up an interesting point when he says he was interested in telling a story rather than depicting reality. Makes one think twice about all those beautiful nature and space shots they see.
peterwwillis 22 hours ago 0 replies      
Great, now I want to spend $10,000 on cameras to photograph nature.
chinathrow 23 hours ago 0 replies      
You suck at masking ;)
Theranos Voids Two Years of Edison Blood-Test Results wsj.com
294 points by ssclafani  1 day ago   167 comments top 18
aresant 1 day ago 3 replies      
"One family practitioner in a suburb of Phoenix said a Theranos representative dropped off a stack of 20 corrected test reports a few weeks ago. Many of the voided results were for calcium, estrogen and testosterone tests.

The doctor said one corrected report is for a patient she sent to the emergency room after receiving abnormally elevated test results from Theranos in late 2014."

Tort attorneys should be licking their lips.

It would be shocking if Theranos survives this.

Beyond that Walgreens - the largest retail pharmacy chain in the USA and Theranos' wellness center partner - should also be in the crosshairs.

Feels like they should have had some better safeguards for consumers before committing to the 40-store pilot in AZ.

lquist 1 day ago 3 replies      
I'm based in SV, and I see a lot of big name entrepreneurs rallying behind her and I don't understand it. You cannot be cavalier/lean about human life. These people deserve to be jailed.
a_small_island 1 day ago 1 reply      
>That means some patients received erroneous results that might have thrown off health decisions made with their doctors.

Put them in jail.

apo 1 day ago 6 replies      
This is what happens when you try a Minimum Viable Product in healthcare and aren't up-front about slower-than-expected R&D progress.
_Codemonkeyism 1 day ago 1 reply      
Interesting that they couldn't even use other people's machines:

"A person familiar with the matter said the Arizona lab performed the blood-coagulation tests with a traditional machine from Siemens AG that was programmed to the wrong settings by Theranos.

The Arizona lab also failed several tests to gauge the purity of the water it uses in its Siemens machines, which could affect the accuracy of some blood tests run on the devices, the person said."

taneem 1 day ago 2 replies      
This is likely the beginning of the end. With such a massive loss of trust, especially in the healthcare space, it is hard to see how the company could ever recover in the eyes of customers, investors or employees.
rcarrigan87 1 day ago 3 replies      
Can someone put this into perspective...how often are there major recalls or calibration issues at other, more established labs and testing companies?

Certainly, not trying to defend Theranos, just trying to understand how bad this really is. Because it sounds pretty horrible...

oneloop 1 day ago 0 replies      
"Theranos has declined to quantify to Walgreens the scale of its test corrections"

Doesn't seem like they're learning anything.

bane 1 day ago 0 replies      
Again, one of the most important interviews with Holmes: https://www.youtube.com/watch?v=MBs-oj7U-bo

Edison is discussed. "We don't use Edison for anything and haven't for a few years now."

dvcrn 1 day ago 4 replies      
Using "https://www.google.com/" as Referer (with the 'Referer Control' extension for example) gets around the paywall
jbuzbee 1 day ago 1 reply      
The class-action lawyers must be salivating over "tens of thousands of corrected blood-test reports"
hathym 1 day ago 0 replies      
if you came here without reading the article, all you need to know is that Theranos is fucked.
radnam 1 day ago 0 replies      
I was extremely optimistic for Theranos and having to see them go through this is sad on so many levels. Not calibrating standard testing machinery correctly just does not cut it.

One of their notable contributions is to have set a precedent in Arizona, where consumers can now order their own tests without a doctor's orders.

I believe consumer awareness of state-of-the-art diagnostic testing, and making testing readily accessible, can have a fundamental impact on people's wellness.

ps: I am not advocating more testing.

return0 1 day ago 0 replies      
The reporting by Carreyrou is particularly insistent on putting Holmes front and center in each of this series of articles. She's in the article subtitle and first image again. I wonder if other execs are also responsible for this disaster.
josh_carterPDX 1 day ago 4 replies      
This is not what disruption looks like. This is what happens when you have someone with no domain expertise, but a great idea. We're seeing the same thing happening with Zenefits. Founders with no domain experience need to look at how they're going to enter a market full of incumbents. These incumbent businesses have survived for years because they know how to play the game. They have people in their employ that know how to lobby the right regulatory bodies. Theranos had none of this. So when all of this started coming down, they should have hired the most well-known and respected person in their field to bridge the gap between the past and future. Without that, they are outsiders playing in a game that has been around forever. They were set up for failure before they even began and no one was smart enough to ask "How will they disrupt an industry that has been around for decades?" Just saying, "We're going to make medical tests cheaper and more accessible" was clearly not the right answer.
tn13 1 day ago 1 reply      
Can someone please summarize what this actually means? The article is behind paywall and the title is cryptic.

What does "void" mean ? Less accurate, completely wrong, completely random ? What does Two years refer to ?

foobar1962 1 day ago 0 replies      
So when is the Edison estate going to issue a cease order against Theranos for using their name and damaging the reputation?
vonklaus 1 day ago 5 replies      
I still believe in the idea of Theranos, and while I think it is great that Ev Williams was able to secure funding 2 more times to keep rebuilding different versions of blogger[0], I want to live in a world where we also take huge gambles on hard problems. If we adhere to VC math (we should, as this hypothetical is for VC investing), one of these payouts will be well worth it, e.g. Tesla/SpaceX. So yeah, I'd write down uBeam & Theranos, but you can fuck off if you want the world to stop investing in big ideas.

[0]twitter.com, medium.com

edit: Also, we can assume it isn't physically impossible to use smaller amounts of blood to perform tests. So yeah, it was super obvious from the beginning that some immigrant who happened to be in the right place at the right time and made some money at the height of the DOTCOM era working in software couldn't build a sustainable rocket program that rivals those of first-world nations. So it wasn't obvious, and the next big innovation won't be obvious, and if you think it is, you are either building it or just straight up wrong.

The TSA is a waste of money that doesn't save lives and might actually cost them vox.com
257 points by paulpauper  1 day ago   253 comments top 29
makecheck 47 minutes ago 0 replies      
It is so frustrating to see a lot of the solutions being proposed by the administration: wanting to hire more screeners, blaming passengers for bringing too many pesky bottles of water and pocket knives, etc. They are missing the obvious solution that should be at the top of the list, right in front of their faces: we must REMOVE safeguards to speed things up.

The probability that a bottle of water or anything that looks like water will cause an airline disaster is effectively ZERO. It is not a risk, and not even slightly concerning, period. This is not worth checking even once, even at random, much less millions of times a day.

And pocket knives? They SERVE FOOD WITH KNIVES on planes. They literally give you a knife in first class. If it was someone's goal to obtain a knife on board, they would not need to bring it through security. And frankly, one could argue that knives are the opposite of risky: a few passengers with knives to defend themselves may very well be able to prevent a handful of hijackers from doing anything. Either way, I am strongly on the side of teach people to band together and defend themselves, not cower and be fearful of everything.

And don't even get me started on having to take off shoes. It is frankly sad that we have been so fixated on ONE piece of clothing, for years and years and years, as a reaction to ONE passenger out of millions who couldn't even carry out his threat successfully.

Besides, the entire concept of prohibited items does not eliminate risk. There are human beings who are powerful enough and skilled enough to cause serious damage or death all by themselves. They don't need prohibited items; they simply are deadly. A group of passengers that knows how to band together and fight back can subdue anyone, even a passenger who is deadly all by himself.

two2two 1 day ago 11 replies      
TSA is the number one reason why I don't fly and drive instead. From my POV most of the world's industries have progressed positively, but not air travel. I took a train a couple of years ago and it was a beautiful example of old merging with new. Walking through an antique of a train station, iPhone in hand, with my digital ticket ready to board; so easy and pleasant.

At that point I realized that air travel is by far the worst traveling experience money can pay for.

If an alternative airport wanted to do things a little different, such as "fly at your own risk" "no lifeguard on duty", aka no TSA b.s., I'd happily take the "at your own risk" option rather than the TSA controlled situation we're subjected to currently.

mdorazio 1 day ago 1 reply      
In my opinion, the TSA is basically a very expensive jobs program rather than an actual security organization. This is a big part of why it's going to be hard to get rid of now. According to Wikipedia, the TSA employs over 55,000 people, many of whom would probably have difficulty getting a similar level job if we reverted to a more sane security screening program. Anything that kills thousands of government jobs is hard to get through Congress, even if it's unpopular with the public.
Domenic_S 1 day ago 2 replies      
The TSA is a jobs program with a bit of "throw government contracts to your buddies" mixed in. Same with the military to an extent.

A TSA Screener job is about the closest we'll get to Basic Income: stand around in an airport occasionally groping people for $13-18/hr, plus awesome Federal benefits. Qualifications: essentially none.

rm_-rf_slash 1 day ago 1 reply      
I live in a small city with a small airport. One day, while waiting for my departure plane to arrive, TSA kicked everyone out of the secure gate and back into the insecure terminal, because the plane would not arrive for another half hour and they didn't want to keep watching us in a room with barely 50 seats. Then we had to go through security again once every single passenger had arrived.

The point is that security is fear-motivated. 99% doesn't matter if it isn't 100%, even if logic and probability puts that 1% insecurity in a .001% chance of actually happening. So if you let the 1% slip through and something happens, well, who wants to take the blame?

And now we have this mess.

ndirish1842 1 day ago 3 replies      
I wonder how autonomous driving will affect shorter flight commutes. I'll probably never take a car from Philadelphia to LA, but I might prefer to travel by car from Philadelphia to Chicago if I know that I can sleep throughout the car ride (as well as leave whenever is most convenient). When you take into account driving to the airport, checking bags, security, flight delays, baggage claim, and rental cars/driving to your hotel, a 12 hour drive doesn't look nearly as bad, especially when you could leave at 10 PM and wake up at 10 AM arriving at your destination. And it's way less stress compared to the hassle of TSA and flights.
rhino369 1 day ago 8 replies      
It's easy to say that the TSA sucks (it does), but it's hard to propose a workable alternative. Well, alternative 1: stop making us take off our shoes and take out our laptops; it's clear from PreCheck that it's not really necessary.

You need some security. That was clear before 9-11. Airports had security and it was pretty similar to how TSA does it right now. You put your bags on an Xray machine, show your ID, and walk through a metal detector.

I'd suggest keeping the government in charge of what procedures to use, but then using private contracts to actually manage the airport security.

The real problem with the TSA isn't that it is intrusive. It's that it is terribly mismanaged and has no incentive to improve the experience.

Although apparently airports can opt out of the TSA.

jonnathanson 1 day ago 2 replies      
The article is exactly right about what needs to be done, and who needs to do it: the airports themselves. No chance any elected official is going to scale back the TSA's screening creep at this point.

The political risks of looking "soft on terrorism" are just too high. Imagine being a politician responsible for a TSA rollback, and then, by dumb luck, a terrorist attack succeeds a short while later. There may be zero correlation, but do you think the media will care? Do you think the public will care? Do you think your political opposition will care? Ha. Your career would be over in a heartbeat. And if your opponents really felt like twisting the knife, they might drum up hearings and lawsuits against you. So call me cynical, but I just don't see any lawmaker or policy wonk sticking his or her neck out anytime soon.

This is why it's in the hands of airports to push for any particular change. They're not running for office.

suprgeek 1 day ago 3 replies      
The article completely misses the point of the TSA. It is not meant to actually make air travel safer. It is there for exactly two reasons:

1) Provide our dear politicians the satisfaction that they "Did something" - Security theater is very useful during election times (Tough on Crime et al)

2) Provide a convenient excuse to expand the government's ability to dictate yet another aspect of people's normal lives. The govt. now has another tool to harass "undesirables": simply put their name on a "No Fly", "No Train", "No $SomeOtherThing" list and have their TSA buddies enforce it. Or have the "undesirables" pulled aside for "random" screenings every single time [1].

[1] http://arstechnica.com/tech-policy/2015/07/citizenfour-filmm...

This is the real purpose of the TSA. Your safety or saving lives is irrelevant.

pmontra 1 day ago 2 replies      
> Airports should kick out the TSA

I'm not American and haven't been there for a long time, so forgive my ignorance. The TSA is an agency of DHS, so I believed that its presence in airports was mandated by the government. Can airports really replace it with anybody they like? If that is the case, why didn't they do it before? Only because the TSA is free while airports would have to pay private security companies?

sehutson 1 day ago 0 replies      
What's crazy is that the article doesn't even mention the effective lost lives in the sheer number of hours people waste by getting to the airport so early.

If you assume 75 years x 365 days x 24 hours, that's 657,000 hours in a fairly typical life. Millions of travelers waiting an hour or more each = a lot of "lives" wasted standing in line.

mwsherman 1 day ago 2 replies      
In terms of $$, by far the biggest cost is in the wasted time of the millions of people who are subjected to this. It's obviously in the billions.
zer00eyz 1 day ago 3 replies      
In the world we live in there is one surefire way to get rid of the TSA: Stop flying.

Sad to say but money is a big motivator, and until the airlines get the message that we don't want to deal with this shit, they aren't going to really push for actual change.

bogomipz 1 day ago 1 reply      
Sea-Tac in Seattle and the Port Authority of New York and New Jersey have threatened to privatize TSA duties as well. The question is: can they? What's to stop them? Why is it taking so long?

How was this agency not looking at actual travel data, such that it failed to hire more staff as the number of air travelers increased? This was over a two-year period. The idiot in charge of the TSA said they anticipated more people would sign up for TSA PreCheck. At some point in the last two years, could they not see that this trend wasn't transpiring?

This same idiot said that he was asking Congress for more money for overtime for TSA employees. Great, make the same miserable people work even longer hours. That sounds like a great solution.

He also made a statement to the effect that their "mandate is to keep America safe," yet he seems not to grasp that if we can't get on the plane, it doesn't much matter.

They also seem to blame part of the increased wait on the tragedy in Belgium, but do you mean to tell me that not one person in this agency could see that the departure halls were a huge blind spot?

I imagine that lawmakers in Washington don't have to wait in the long lines like the rest of us? That's generally how the broken stuff in the US stays broken b/c lawmakers aren't exposed to it. This is true of healthcare as well. Congress has indemnity health plans which is why they have no idea how bad it is for the rest of us.

carsongross 1 day ago 1 reply      
The TSA is obviously a complete clusterfuck, but it is offering us an important lesson:

Despite everyone hating it, including Big Business, it persists and will likely continue to exist until the U.S. Government collapses. It is nearly impossible to ratchet back a government program dedicated to "security", among other sacred words.

Look at the solutions being offered: add more workers, more bomb dogs, etc.

The system cannot fix itself. Perhaps the system does not want to fix itself.

Friedduck 1 day ago 0 replies      
I've had TSA agents look through my wallet, and on a separate flight look through playing cards one by one. I was also let through with no screening once by accident.

I've seen them yell at passengers, drift off, sit around talking with long lines waiting, and every other conceivable offense. Most are fine but there are a lot of exceptions.

They contribute nothing, and I for one fly less frequently because of them.

As to pre-check: at Atlanta that doesn't always get you a short line or fast security wait time.

patrickmay 1 day ago 1 reply      
Airports should replace the TSA with security companies that use El Al's techniques: https://skift.com/2013/11/15/tsas-behavioral-detection-techn...
awinter-py 1 day ago 0 replies      
Love that they're quoting bruce schneier in defense. I think he was just being fair-minded because he doesn't want to appear smug. This is a guy who walked through the screening with a 'beer belly' (beer smuggling device for stadiums) full of gasoline and then blogged about it.
truehearted47 1 day ago 0 replies      
I also have stopped flying altogether due to invasion of privacy and feeling like cattle PLUS now that there are long lines, the chance of tempers flaring is real. Just witness the violence and hatred in the streets of America these days and watch how the police are unable to control riots...YES RIOTS...we no longer have protesters...protests are now riots. Airport crowding combined with invasion of privacy, impatience & anger = disaster waiting to happen. TSA is the terrorist here.
Mendenhall 1 day ago 2 replies      
In my personal experience, what slows it all down the most is the actual people flying. Every time I fly I see countless people wearing tons of metal/jewelry/belts/whatever that they have to take off, often not until they are told to do so, and the laptop is tucked far away until the last moment. They still carry all sorts of lotions and liquids on for some unknown reason. When they then exit the scan, they clog up the line by standing right there trying to put everything back on or away.
ccvannorman 1 day ago 0 replies      
The difference between the US and other countries is not that we're stupider. It's that the slightly smarter/more powerful people are much better at manipulating the stupidity of masses, and much more greedy, than other countries. That's why we leveraged fear and pushed hard so that you have to bend over for the TSA every flight.

My question is, what does the US look like without the TSA, and can we ever get there?

bluetidepro 1 day ago 3 replies      
How do we get rid of it, though? I get that it's terrible, and I've heard all these arguments countless times. How do you actually take action, though?
pgrote 1 day ago 2 replies      
I have long looked for an answer outside of security theater as to why the ban on liquids continues. If anyone has an answer, I'd appreciate hearing it.

If you go through a screening line and a liquid is found, the liquid is not tested. It is not handled carefully. It is not thoroughly inspected. It is tossed in the closest garbage can.

If the liquid really did pose a danger, wouldn't it be handled more carefully?

zipwitch 1 day ago 0 replies      
Those who say that the TSA is just a jobs program are missing the point. The TSA is a constant reminder of government presence and the security state; its effectiveness at security, or its value as a jobs program, is a minor concern compared to its value as a symbol. And of course, it's growing, spreading its presence to highways, rail, and other forms of public transit.
descript 1 day ago 0 replies      
Air travel should be the same as motor vehicle travel. The only reason there aren't small air taxi companies that offer regional trips for affordable prices is because government has been involved in airplanes since day 1, and it is illegal for private pilots to charge.
aaroninsf 1 day ago 0 replies      
IAMA request: an honest to god TSA screener. Not an embedded pinko journalist... someone who actually signed on.
reacweb 1 day ago 0 replies      
TSA is not a very important issue, but politicians love to discuss this kind of issue, where they can show their talent without hurting their sponsors. It is a good way to distract the public from the more important issues (economy, unemployment, privacy, ...).
rconti 1 day ago 4 replies      
I hate security, though I hate the discomfort of air travel even more. In fact, I just got back to the US from Europe, and the cold that struck 12h after I left lasted 7-10 days (and I rarely get sick!)

That said, am I the only one who doesn't have these long security waits? I typically show up at the airport ~1h before boarding is to begin, and am often at my gate 50min before boarding begins.

I typically fly out of SFO, and I do admit, several journeys ago, I was actually IN LINE at security for 30 minutes which seemed absurdly painful and I was actually starting to sweat being late for boarding. Of course, at SFO they had TONS of extra machinery, they just didn't bother staffing it.

As much as I HATE taking off my watch, fitbit, ring, car keys, wallet, belt, shoes, phone, then the scramble to take my laptop out of my bag as soon as I get room on the table (it becomes a high pressure situation to do the laptop thing as by the time you get to the table you have roughly 8 seconds before you're holding people up!).. the actual lines are quite tolerable.

I typically fly SFO, SEA, SAN, SJC, and fly cross country at least once or twice a year. I just got back from Copenhagen, Frankfurt, Stockholm, Munich airports, and again, no problems. I've fairly recently been to Auckland, Queenstown, Reykjavik, Heathrow, Florence, Paris as well.

There's no doubt most other countries do a better job than the US; the automated machinery for dealing with your possessions to be xrayed (they hold your bin until it's empty and then automatically return it to the beginning of the line!) and the switching between 10-15 security lines so that you're never behind more than a few people was a revelation.

But the actual time in security is rarely all that bad inside or outside the US.

Google's mapping cars discover hundreds of underground gas leaks dallasnews.com
234 points by state_machine  21 hours ago   73 comments top 12
Animats 20 hours ago 5 replies      
That's useful. The mapping cars could carry some other useful sensors. Air quality is obvious. Less obvious is RF leakage from cable TV cable, which can interfere with other spectrum users. Cable companies used to have to check this every two years, but they lobbied to not have to check it.
janesvilleseo 3 hours ago 1 reply      
Slightly off topic, but one idea I had was for Google to share road usage stats with cities and businesses. Right now cities put out an air tube on the road that counts the cars. Google could provide much more insight given they already can tell you road congestion. They could give it away for free to a city and/or make businesses pay for it.
andrenotgiant 21 hours ago 0 replies      
Another hint at what is to come when Google starts organizing the world's _offline_ information.

Google Books and Google Streetview are just the tip of the iceberg.

I would be willing to bet that "massively distributed real-time data collector" was listed as a key business case by whoever pitched self-driving cars at Google.

danso 20 hours ago 1 reply      
Is the collected raw data available? Would be interesting to cross-reference what the Google cars found with 311 complaints.

Here's NYC's 311 data: https://nycopendata.socrata.com/Social-Services/311-Service-...

...I recall there being a category for people calling in gas odor, but the Socrata site seems to be slow/down at the moment.

alex_g 18 hours ago 10 replies      
Why are these sensors not placed on garbage trucks or police cars?
state_machine 21 hours ago 1 reply      
More details on the program: https://www.edf.org/climate/methanemaps and the other cities they've mapped: https://www.edf.org/climate/methanemaps/city-snapshots
willcodeforfoo 13 hours ago 0 replies      
I wonder if accelerometer data would be useful to determine road quality? It may help cash strapped cities like here in Toledo prioritize the bumpiest roads to fix.
ChuckMcM 15 hours ago 1 reply      
This strikes me as an excellent IoT application: gas sensors are cheap, and a simple SMS from an IoT device could report that it has detected gas, along with its GPS coordinates.
TheSpiceIsLife 15 hours ago 2 replies      
From the article:

residents paid as much as $1.5 billion extra between 2000 to 2011 for gas lost to leaks.

Is anyone able to calculate approximately how much gas this is in volume or weight? $1.5 billion divided by the residential gas price per unit?
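A back-of-envelope version of that division, assuming a very rough residential natural gas price of $10 per thousand cubic feet (Mcf); the actual 2000-2011 price varied by year and region, so treat the number below as illustrative only:

```python
# Rough estimate of leaked gas volume from the quoted $1.5B cost figure.
# The $10/Mcf residential price is an assumption, not from the article.
total_cost_usd = 1.5e9
price_per_mcf_usd = 10.0  # assumed price per thousand cubic feet

leaked_mcf = total_cost_usd / price_per_mcf_usd
leaked_cubic_feet = leaked_mcf * 1000

print(f"~{leaked_mcf:,.0f} Mcf, or ~{leaked_cubic_feet:,.0f} cubic feet")
```

At that assumed price the answer comes out to roughly 150 billion cubic feet; halving or doubling the price assumption scales the estimate accordingly.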

retox 16 hours ago 2 replies      
And all they had to sacrifice was personal privacy!
Libui: GUI library in C github.com
306 points by avitex  11 hours ago   136 comments top 22
jventura 7 hours ago 5 replies      
This is a very good approach for the current times!

As far as I understand from the source code, libui is a thin C wrapper that makes calls to each platform's native UI framework. For instance, to create a window on OSX it calls the corresponding Cocoa function, for Linux the corresponding GTK function, etc. So, unlike Qt and GTK, which are heavy cross-platform libraries because they can "draw" their own widgets themselves, this library seems to just call other functions. In that sense, the resulting widgets are as native as they can be!

Qt, GTK and others are from the old days when everyone wanted alternatives to native frameworks that could "draw" their own widgets. That is why Qt and GTK are/were useful for building the native desktop environments of Linuxes. In these modern times, now that everyone has moved to the web, things in desktop-land are far more stable, and it is commonly agreed that Cocoa is perfectly fine for Mac, Win32 is perfectly fine for Windows and GTK is perfectly fine for Linux. This is great news, so now is the best time to make thin wrappers around these stable things so that we can all go make useful software..

As for why C and not anything else: all these native UI frameworks can easily be called from C, and C is the common denominator of other higher-level languages. So, in theory, each programming language that can interface with C (99.9% of them can) can call the functions of this libui. This means that, in theory, we can all start building 100% native desktop applications in our favorite languages with a lightweight library.. Now I'm off to start pylibui.. ;)
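The FFI idea above can be sketched with Python's ctypes. The function names (uiInit, uiNewWindow, uiControlShow, uiMain) follow libui's public header, but treat the exact signatures as assumptions and check ui.h for the version you actually build:

```python
# Sketch: driving a C UI library like libui from Python via ctypes.
# Names and signatures are assumptions based on libui's ui.h.
import ctypes
import ctypes.util

class uiInitOptions(ctypes.Structure):
    # libui's init options struct currently holds just a Size field
    _fields_ = [("Size", ctypes.c_size_t)]

def load_libui():
    """Locate the shared library; returns None if libui isn't installed."""
    path = ctypes.util.find_library("ui")
    return ctypes.CDLL(path) if path else None

def show_window(ui):
    ui.uiInit.restype = ctypes.c_char_p      # returns error string or NULL
    ui.uiNewWindow.restype = ctypes.c_void_p
    ui.uiNewWindow.argtypes = [ctypes.c_char_p, ctypes.c_int,
                               ctypes.c_int, ctypes.c_int]

    opts = uiInitOptions(0)
    err = ui.uiInit(ctypes.byref(opts))
    if err:
        raise RuntimeError(err.decode())

    win = ui.uiNewWindow(b"Hello from Python", 320, 240, 0)
    ui.uiControlShow(ctypes.c_void_p(win))   # uiWindow* is passed as uiControl*
    ui.uiMain()                              # blocks until the window closes

ui = load_libui()
if ui is not None:
    show_window(ui)
```

The same pattern (dlopen the shared library, declare the C signatures, call straight through) is what a pylibui-style binding would wrap in a friendlier API.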

striking 9 hours ago 4 replies      

I'm hoping that a native UI toolkit that can be used via FFI from nearly any language might take a chunk out of the Web-as-application-delivery-platform mindset.

I don't blame anyone for making non-native apps with Electron and HTML5, because it's so difficult to make them work on every platform. But here's to hoping someone finally got it right, and that native applications can take back some ground.

Death to the battery-eaters and memory-fillers. Let the OS do the heavy lifting.

etwigg 6 hours ago 1 reply      
Nice work, but I wish SWT had picked up more steam outside the Java community. It's the same idea, but with a decade+ of banging on the corner cases for the use cases of the Eclipse IDE.

In addition to all the standard controls, it's also got OpenGL and browser embedding since before Electron was cool.

Using it on the JVM is very easy, but there was a short-lived attempt to maintain a C++ API.

- How it looks: https://www.eclipse.org/swt/

- From Jython: https://github.com/danathughes/jythonSWT

- From JRuby: https://github.com/neelance/swt4ruby

- From C++ (defunct): http://www.pure-native.com/

It's built out of small chunks of C code: https://github.com/eclipse/eclipse.platform.swt/search?p=1&q... that are wrapped in a Java API with a straightforward coding style that seems amenable to automatic source translation. And it doesn't need GC - it actually requires the Java programmer to manually dispose resources, which is easy because it always requires objects to have a parent, so everything goes away when the pane / window / whatever gets disposed.

apayan 8 hours ago 4 replies      
andlabs (Pietro) is writing this library in C as a support for the Go ui library he's been building: https://github.com/andlabs/ui

However, as others have pointed out, this will be very useful for many other languages as well. I would love to see a Renaissance of native cross platform apps.

hoodoof 9 hours ago 4 replies      
Juce is another interesting UI lib https://www.juce.com/

Works with Windows, Mac OS, Linux, iOS and Android.

Strangely, juce seems to have a revamped website that has no pictures or screenshots of the juce user interface and its widgets. Weird. Not sure how effective that marketing is...

swah 1 hour ago 0 replies      
I guess Sublime Text uses something like this (custom made of course) and everyone loves Sublime (at least regarding speed and cross platform good looks).
MrBra 3 hours ago 0 replies      
Excuse me for the following dumb question.

In past I've used Java/SWT and C#/WinForms. Both of those come with a concept of the UI thread, which is the thread which you should use to read from and write to the UI.

What I don't understand about these multiplatform libraries is: do they also provide this mechanism, or are they actually only responsible for drawing the widgets and forms, leaving to the developer (and the guest language they are using to leverage the library) to implement that part through threads or some different pattern (i.e. reactor pattern)?

laurentoget 9 hours ago 2 replies      
These APIs are notoriously hard to get right and coherent, and releasing at such an early stage will make it hard to change anything when building up on this.

That said I do not know of any cross platform library in C, so this does seem to fill a niche.

RustyRussell 3 hours ago 1 reply      
I couldn't see how to do other things in the event loop. Like, waiting for socket input. Did I miss it?
kensai 9 hours ago 1 reply      
Very interesting indeed. It supports MacOS X which is a nice addition to my other favorite, the IUP.


mzs 32 minutes ago 1 reply      
Can it handle printing too? That's a cross platform PITA too.
ausjke 9 hours ago 2 replies      
Very interesting. Just built it on ubuntu 16.04 smoothly, the only dependency is gtk-3.0.

What's the difference between libui and libsdl? The latter is also in C and supports many platforms. For large applications I may just use Qt, and for some embedded GUIs I can use libsdl, so what's the goal for libui?

register 9 hours ago 0 replies      
I was looking for this for quite a long time. Choosing C allows one to bind easily to a multitude of "managed" languages. I always found IUP's (http://webserver2.tecgraf.puc-rio.br/iup/) approach very interesting, but it lacks an OS X binding. I will see how the two libraries compare to each other.
lossolo 1 hour ago 0 replies      
Btw cross platform Go GUI by the same author is based on that:


nurettin 8 hours ago 0 replies      
I've done my share of startness-upness and web development on both ASP.NET and Rails. After designing desktop user interfaces using various versions of Delphi, using visual and non-visual components that are bound to databases is a great ease (optionally using an API layer to do server-side processing). The amount of complexity, speed and ease of use you can cram into a single form while remaining responsive and user-friendly, compared to wizard-style web pages, is staggering. The ability to step between dynamically loaded libraries while debugging is such freedom.

Overall it was an easier and more productive development experience for me.

red_admiral 7 hours ago 1 reply      
The main thing holding me back from investigating this further is requiring GTK3+ on linux, when my mint box works just fine with GTK2/Mate.
Rexxar 6 hours ago 1 reply      
Why not make a pure C wrapper around wxWidgets? It uses native controls too, and it's not hard to make C call C++ code.
kodfodrasz 9 hours ago 6 replies      
Why would anyone write a GUI app in C? Almost every alternative is better for the job.
_pmf_ 8 hours ago 3 replies      
> that uses the native GUI technologies of each platform it supports.

WPF is native on Windows, Win32/WinForms is also native on Windows. I'm assuming it uses Win32, although a third party FFI to WPF would be very, very nice.

blub 9 hours ago 2 replies      
This is so funny. Yesterday there was another discussion on how C needs to be replaced due to security/correctness concerns (the K&R topic), today people are fascinated by a UI(!!!) library built in it.

And they actually consider using this library.

hoodoof 9 hours ago 4 replies      
I was recently thinking that operating systems should ditch their custom desktops in favor of a browser based UI.

Microsoft tried this idea out and it didn't work a few years back but it still seems to me that a browser based OS UI would be far more effective than things like Gnome and KDE and all that stuff.

Edit: well I guess Google had the same idea long ago and called it ChromeOS - browser as OS interface.

Vote.org is a non-profit that wants to get the U.S. to 100% voter turnout themacro.com
224 points by shayannafisi  2 days ago   448 comments top 38
baron816 2 days ago 12 replies      
The biggest reason people in the US don't vote is because they don't have enough options so they never get to choose people they really care about. Plus, no individual vote matters. This all has to do with our winner-take-all elections. Countries with proportional representation have much higher voter turnout rates (often in the 80-90% range). That's because you get to vote for the person or party you want, and they'll at least still get a seat in government even if they're just in the opposition. But you still have an incentive to get out and vote and make their position stronger. There are no lost causes or strategic voting.
debracleaver 2 days ago 19 replies      
Hey everyone. Debra Cleaver here, founder of Vote.org. I am ready to answer all of your questions about voting in the US, as long as they are non-partisan. Partisan questions should be directed to your local political party. I'm especially keen to talk about ways we can use technology to modernize the election process. My current focus is how we can use electronic signatures to roll out online voter registration in states that don't have it, and for people who don't have driver's licenses.
partiallypro 2 days ago 3 replies      
Not everyone should vote. This is going to be an unpopular opinion, but democracy only works with informed citizens. If you have a bunch of people voting who are not informed on the issues, you're probably going to have an awful outcome. If you think politicians are bad now, just wait until the populists have the uninformed voting in high numbers (hey, that sounds familiar this year.)

To me, if you don't care enough to take time to vote as it is, you probably don't know the issues, and you probably shouldn't vote. Or maybe you do know the issues, and that's why you're not voting.

There are some cases where this is not true, such as someone having a weird shift; but that's what early voting is for, and that's why most states have legal paid time off for voting (up to 3 hours.) There's also mail-in ballots, absentee voting, etc. Democracy isn't always a good thing, to be frank; and it's undoubtedly why the U.S. was set up as a representative democracy rather than a pure democracy.

igorgue 2 days ago 8 replies      
I have a theory that the USA not making election day a federal holiday is one of the major reasons the working class of America is not voting: usually it takes too long to vote, and it's not paid time off for employees, especially blue-collar ones.

I wonder what are your thoughts about it, and how can it happen in America? I've seen every single proposal be rejected in congress.

seomis 2 days ago 6 replies      
To those commenting with some variation of "only informed citizens should vote," pause and consider how much overlap there is with your idea of what an "informed" voter is with race/class lines. You may be unwittingly (or wittingly in some cases?) insisting that voters in the US should be, disproportionately, wealthier whites.
inanutshellus 2 days ago 13 replies      
I'm likely to be flamed out of existence for saying this, but I'm against 100% voter turnout. A shocking number of people in my social circle get their political opinions by intuition. Never do they watch a debate, nor do they have any idea who the contenders are. When the primaries were running in my home state I asked several of my friends about their opinions and most of them only knew one person from each party that was even running.

They were passionate about their hatred of the opposing party's most-tweeted person, and clueless about what their person's positions were, the states they were from, their voting history, their "moral fabric" as it were...

Point is, screw my friends. If they only Facebook-Care(TM) about politics, they shouldn't be encouraged to vote anyway. They do not get to decide my country's fate.

In fact, I want the opposite.

I want a test. Step one: Name 3 people in your primary and name 3 in the opposing party's primary. Who is your party chair? Who's in the majority in the house? Who're the senate majority and minority leaders?

dragonwriter 2 days ago 0 replies      
US voter turnout is low relative to many other modern democracies (much less compared to the ideal of 100% turnout) because the choices are poor, because of the structure of the electoral system which supports only two viable parties at a time (which two has changed nationally twice in the history of the nation, and when things were more regional there were times when the two locally-viable parties included one of the national parties and one other, such as the Missouri Republican vs. Farmer-Labor period.) This is a fairly well-established effect of the electoral system, evidenced by, among other things, comparative studies of modern democracies.

So, what is Vote.org's plan for dealing with this, which is the fundamental problem in keeping turnout low?

bpodgursky 2 days ago 4 replies      
I'd rather have 100% turnout by the 10% who are actually informed about the issues and candidates.
jimrandomh 2 days ago 2 replies      
There's a very important reason to do this which no one has brought up. US elections have a lot of vote-suppression shenanigans; in some noteworthy cases (including the 2000 Presidential election), fraudulently removing voter registrations, understaffing and obstructing poll locations changed the outcome. This sort of thing becomes much more difficult to execute and much more difficult to get away with if there's an expectation of 100% turnout, as in countries which have mandatory voting; large asymmetric chunks of the population failing to reach the polls no longer look plausibly innocent. I think mandatory voting is worth having for that reason alone, in addition to the other reasons.
boona 2 days ago 1 reply      
To know why this is a bad idea, you might want to watch this video titled The Myth of the Rational Voter https://www.youtube.com/watch?v=XKANfuq_92U.
atria 2 days ago 1 reply      
This is a worthy goal, but is it a good thing to have 100% voter turnout? Every direct democracy has failed since the time of Socrates. After several generations, direct democracies turn into a mob with the majority voting themselves benefits while minorities become permanently disenfranchised. At least voter disinterest allows minorities the possibility of voting as a block and gaining influence in off-year elections. In my opinion, sometimes too much so.
redthrowaway 2 days ago 6 replies      
Not to sound elitist, but is that really a good thing? What sorts of people are going to be simultaneously not motivated to vote, and good at picking the best candidate?
dingo_bat 2 days ago 0 replies      
Why is 100% so important? There will always be a section of people who really do not care about choosing their representative. Why insist on such people voting too? In my opinion it would be a random, ill thought out vote.
rcheu 20 hours ago 0 replies      
I feel like an easy solution is to just pay people $20 when they vote so that it's no longer an irrational decision. People will find a way to register and vote if there's a financial incentive.

Part of the problem right now is that it's hard to convince someone to vote because it's not actually a rational decision. With very high probability, your voting action has no impact on your life, and it takes significant time. Make voting a rational decision, and we'd probably see more people doing it.

jayess 2 days ago 0 replies      
The right to vote also includes the right not to vote.
jjtheblunt 2 days ago 2 replies      
Why stop at 100%? Chicago supposedly has gone far beyond before.
marcoperaza 2 days ago 0 replies      
I don't agree with the stated goal of 100% turnout. As many people should turn out to vote as there are citizens who want to vote. If you're not self-motivated to vote one way or the other, with so much being on the line these days, then maybe it's better that you don't.
swalsh 2 days ago 3 replies      
I've always thought that online voting could be made MORE secure than traditional systems. If you combine cryptography with more traditional layers. It can also be anonymous.

Take an existing online registration, allow a user to login, and "create a password". Take the password, hash it with the users registration id, and a salt, and that becomes the id for a ballot. Now a user can always login, and view their existing vote (as long as they remember their password) however no outside or inside user could directly link a ballot with a voter.

In addition, allow all online votes, and registered users (who voted) to be instantly publicly accessible via API by 3rd party non government organizations so that all results can be monitored.

The hashing algorithm can be the same used by any traditional password system.
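A minimal sketch of the derivation described above, using PBKDF2 (a standard password-hashing primitive) in place of an unspecified hash; the field names and parameters here are illustrative assumptions, and a real election system would of course need far more scrutiny than this:

```python
# Sketch of the anonymous ballot-id idea: derive the id from the voter's
# chosen password, their registration id, and a salt. Parameters are
# illustrative, not a vetted design.
import hashlib

def ballot_id(password: str, registration_id: str, salt: bytes) -> str:
    material = registration_id.encode() + b":" + password.encode()
    # 100,000 PBKDF2-SHA256 iterations, as a typical password-system setting
    return hashlib.pbkdf2_hmac("sha256", material, salt, 100_000).hex()

# The voter can recompute the id later (with their password) to look up
# their ballot, while the stored id alone cannot be directly linked back
# to the registration record.
bid = ballot_id("correct horse battery", "REG-0042", b"election-2016-salt")
print(len(bid))  # 64 hex characters (32-byte digest)
```

One caveat the comment glosses over: if the salt and registration ids are both known to the server, an insider could still brute-force weak passwords to re-link ballots, which is why the password strength carries the anonymity here.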

ausjke 2 days ago 3 replies      
whenever a local election comes up, the first thing I want to know is who is running for what, what their key differences are and, if available, their track records. A quick comparison chart/table would serve the purpose but I rarely if ever find one; hope someone will create a website like that for all, so voters can know the quick facts before voting relatively easily.
pjc50 2 days ago 1 reply      
I recently realised that one big advantage of compulsory voting is that it completely kills attempts to suppress voter registration or differential turnout.

How do you plan on dealing with voter suppression and gerrymandering?

Aoyagi 2 days ago 0 replies      
Why would you want to force people who don't have enough knowledge or interest to make an educated opinion to vote?
afinlayson 2 days ago 0 replies      
It's an easy problem to solve, just have to make incentives for politicians to get more people to vote. Oregon made it an opt out state, and made all votes be mail in ballots. Seems like an easy way to increase democracy.

You could also make any politician's term proportional to the share of the population voting for him.

If 50% of eligible voters vote for someone with 51% of the votes, they should only get 25.5% of the term.

programmarchy 2 days ago 0 replies      
Anarchist here. You can count me out. I won't be consenting to this form of government any time soon.
civilian 2 days ago 0 replies      
I want my brother to tell all of you guys about how it's not worth his time to vote. (Especially in an extreme democrat state like california-- it's unlikely that he'd be the deciding vote.)

But my brother is so aware of the value of his time that he doesn't post on HN.

lujim 2 days ago 1 reply      
Hi Debra,

I've always been skeptical of these kinds of "rock the vote" initiatives because I believe they are motivated not by an overall altruistic love of democracy, but by a desire to sway elections towards the organization's favored political party. What are your thoughts?

klue07 2 days ago 0 replies      
Recent phrack had an article on Internet voting for those interested.


nxzero 2 days ago 1 reply      
100% voter turnout assumes that not voting is not a meaningful expression of someone's right to vote. Unless there's an option to express this within the voting system, there will always be voters that don't vote.

Second, if it's safe enough to bank and file taxes online, it's safe enough to at least take an unofficial, but publicly published, count online; people could for example update their vote during the run-up to the official election.

Some countries have had penalties for not voting. Unclear if this helps, or hurts, the system.

masudhossain 2 days ago 1 reply      
If someone were to vote on your website, how long would it take?

One of the main reasons why people don't vote is not being informed enough. Let's be honest, we hate the media and a lot of people don't know WHAT to believe anymore.

So do you have any plans on educating the people to think more logically and look at statistics for example when choosing to vote? Of course I'm not suggesting you to have any bias towards any candidates, but rather educate the people on what they vote for.

duncan_bayne 2 days ago 0 replies      
To the vote.org people who are here - have you ever considered throwing your weight behind alternative democratic mechanisms, like sortition, for selecting representatives?

This would have the effect of considering all eligible citizens for Government, thereby mitigating many of the problems with voter turnout.

nxzero 2 days ago 0 replies      
Unlikely, given that many states have an absentee voting option, polls are long enough in a day that if someone really wanted to vote they would, etc.

Also heard it's the time of the year, but fact is that there's only so far that you're able to go before you corrupt or bias the system.

tn13 2 days ago 0 replies      
Why? High voter turnout is like high page visits: it does not necessarily translate into good things. Ideally, the people voting should be those who understand the political issues and see that by voting they can truly help their ideology.
smnscu 2 days ago 1 reply      
Related, here's Andrew Kim's reimagining of the US ballot: http://www.minimallyminimal.com/blog/america-elect
jakeogh 2 days ago 2 replies      
Computerphile: Why Electronic Voting is a BAD Idea https://www.youtube.com/watch?v=w3_0x6oaDmI
estrabd 2 days ago 2 replies      
Any idea about what the political ideologies of the full 100% looks like? It's bound to be vastly different that the low percentage that turns out.
ravenstine 2 days ago 1 reply      
Can anyone explain why everyone should vote?
samstave 2 days ago 0 replies      
I would like to see a security disclaimer, just like the one at the airport regarding whether your bag has ever left your possession: "Has anyone at any time come to you and asked you to cast a vote for them, or otherwise attempted to compromise your individual right to vote?"
debracleaver 2 days ago 0 replies      
hey all. it's been great chatting with everyone. i have to go offline for a bit to eat dinner and talk to potential partners, but happy to pick this conversation back up later tonight.
ck2 2 days ago 2 replies      
Wouldn't insisting on motor-voter take it to near 90% ? Most states refusing motor-voter are controlled by the hard right.

Sure hope there is a plan to fund 100% more voting locations and booths.

Because Republicans have figured out a great way to kill voting when they simply just have to give in and let people vote is to defund voting locations.

Unless the plan is to just have everyone vote absentee but that allows for votes to be "lost".

Apple Sent Two Men to My House vellumatlanta.com
254 points by glhaynes  2 days ago   101 comments top 13
citruspi 2 days ago 2 replies      
Context[0], since I didn't see a link in the article. Also, the discussion[1].

[0]: https://blog.vellumatlanta.com/2016/05/04/apple-stole-my-mus...

[1]: https://news.ycombinator.com/item?id=11634600

madeofpalk 2 days ago 0 replies      
This whole saga has been a pretty interested set of events, amusing to watch as a bystander who's been observing Apple for a long time while they go through many changes.

Regardless, reminds me of Eddy Cue driving over to Federighi's house late one night to report a bug[0]:

 When Cue ran into a problem installing a new build of OS X on that iMac, in fact, he could tell as a veteran software tester that the bug might be hard to reproduce, plus he was scheduled to take a trip the very next day. "I called Craig up, said have your guys look at it, I think it would be hard to re-create. He said sure, so I put the iMac in my car and drove it over," as in, to Federighi's house. Cue went on his business trip, Federighi's team fixed the problem, and Cue got his iMac back when he returned, kind of like a Genius Bar for the C-suite.
[0]: http://www.macworld.com/article/3033057/ios/eddy-cue-and-cra...

JarvisSong 2 days ago 5 replies      
It's great that this happened. It's annoying that it indicates the best way to get support from Apple is to make a prominent blog post.
Steko 2 days ago 2 replies      
The other day I played one of my "Jedi Mind Tricks" albums on iTunes (yes this happened in 2016, because reasons) and as each song played it disappeared from the album list. Apparently the songs were being repopulated in another album under "Army of the Pharaohs" because either iTunes (or I?) had decided to rename the artist at some point.

Probably related, possibly dated: you might want your albums from "The Dwarves" and "Dwarves" (don't judge me, man) to be listed together because they are the same band, and it's not my fault ITMS has it wrong, but if you rename one, good old iCloud will happily download a new copy of the one you renamed.

Cheyana 2 days ago 1 reply      
About 20 years ago, when MASM was still for sale on the retail shelf, I called Microsoft with a problem when the help files wouldn't install (from floppy). The person on the other end showed me how to extract them manually, then later in the week I had not one, but two calls back on my answering machine following up and hoping everything was going okay with my installation and if I had any issues to please call them back. Those were the days.
megablast 2 days ago 1 reply      
This was originally from an article titled "Apple stole my music", when in fact it was iTunes that deleted music from his computer.

I doubt the old Apple would have gone to so much trouble; it's good to see the new Apple appearing to be more concerned and open.

mirimir 2 days ago 2 replies      
Back in 2009, Amazon deleted Animal Farm and 1984 from customers' devices ;)
tomc1985 2 days ago 4 replies      
The deletion thing has been known for months. Now Apple does something?

And a special version of iTunes? Does nobody over there collect MP3s?

AJRF 1 day ago 0 replies      
I bet the guy turned on iCloud Music Library and hit replace.
internaut 2 days ago 1 reply      
That's one way to solve edge cases I suppose.

If your assassins tell you to watch Firefly they can finish their work before Season 2. It shows there are good people everywhere.

corndoge 2 days ago 0 replies      
I fail to see how this is interesting
anjc 2 days ago 0 replies      
I'm confused by this article. The engineers don't dispute what their rep said on the phone, and they admit that it was not user error. But they haven't said that it should not have happened.

Maybe I'm misreading? Has anybody said that this was not meant to happen? They've accepted that it deleted their files, but was it a bug or not?

arcticfox 2 days ago 4 replies      
Apple sent two engineers out to look at something that couldn't be repro'd? Super unproductive.

Sounds like solely a PR move. A good one, I think, but a little strange. Basically just "let's listen to some tunes while we score these PR points."

This Biology Book Blew Me Away gatesnotes.com
241 points by Tomte  1 day ago   99 comments top 14
dnautics 1 day ago 4 replies      
Three biology events (not sure whether this book covers them) that are completely underappreciated.

1) The evolution of cyanobacteria as a freak merger of green-sulfur and purple photosynthetic bacteria. Well, biological historians DO understand the importance of this, but the reason the chemistry is important is not well appreciated. Cyanos use water as a reductant - as an electron donor. Normally one does not think of water as a reductant, but as a facilitator of oxidation. This is biology's "great umpolung chemistry" moment.

2) The great oxidation catastrophe. Because oxygen, the oxidized poop of the previous process, is highly toxic, there was a huge extinction event across pretty much all clades. But some of the emergent chemistry (disulfide bonds, e.g.) really enabled structural scaffolding that facilitated higher-order cellular structure. Mitochondria went into hiding inside the reducing environment of a proto-archaeal species and boom - eukaryotes.

3) The size and distance of the earth from the sun. Hydrogen at ambient temperature achieves escape velocity. This means the net chemical trend over billions of years was oxidizing. One wonders if this made the first two chemical processes somewhat inevitable.
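A rough back-of-the-envelope check of the hydrogen point (my own numbers and arithmetic, not the commenter's): the mean thermal speed of atomic hydrogen is actually well below Earth's escape velocity, but it's close enough that the fast tail of the Maxwell-Boltzmann distribution escapes (Jeans escape), which over billions of years drains hydrogen and pushes surface chemistry in the oxidizing direction.

```python
import math

# Compare the rms thermal speed of atomic hydrogen in the exosphere
# (where escape happens, T ~ 1000 K is a rough assumed value) to
# Earth's escape velocity.
k_B = 1.380649e-23      # Boltzmann constant, J/K
m_H = 1.674e-27         # mass of a hydrogen atom, kg
T_exo = 1000.0          # assumed exospheric temperature, K
v_escape = 11_186.0     # Earth's escape velocity, m/s

v_rms = math.sqrt(3 * k_B * T_exo / m_H)  # ~5000 m/s
print(f"v_rms = {v_rms:.0f} m/s, v_esc = {v_escape:.0f} m/s")
```

Heavier gases like oxygen and nitrogen sit much further below the escape threshold, which is why Earth keeps them while steadily losing hydrogen.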

I'd also like to point out that thinking parsimoniously about energy from an evolutionary standpoint is not necessarily productive. For example: there's a lot of junk DNA (VNTRs, e.g.) which does not seem to be subject to aggressive optimization for energy.

rusanu 1 day ago 3 replies      
I have read The Vital Question after hearing about it from BillG's blog. And it did blow me away. I then immediately read Oxygen, by the same author, and found it equally interesting (it goes into more detail on some topics, though some of the ideas in Oxygen are superseded by his later books).

If you can spare an hour, I recommend this video: https://www.youtube.com/watch?v=UGxAB4Weq0U . It's made by the author and covers the ideas about the origin of life presented in The Vital Question.

Also interesting his paper on the double-agent theory of aging: http://www.nick-lane.net/double-agent%20theory.pdf

apo 1 day ago 1 reply      
I really like books that expose holes in scientific knowledge. Books that attempt to fill in those holes with conjecture based on observation are even better. This book does both very well.

However, I did notice some glaring factual errors relating to chemistry (my speciality). Chapter 2 includes the passage:

> Second, and more telling, a major distinction between bacterial and archaeal membranes seems to be purely random - bacteria use one stereoisomer (mirror form) of glycerol, while archaea use the other. ...

This is false. Glycerol is achiral - devoid of stereochemistry. It is its own mirror form.

What Lane may be talking about is lipid hydrolysis, in which functionality at one of the two prochiral oxygens of a lipid is cleaved preferentially, but I haven't followed up on this yet.

Either way, Lane spins a scenario where the two main groups of organisms make two different kinds of glycerol. This simply can't be true.

This isn't merely a minor technical error. Enantiomeric purity and a consistent configuration of amino acids have been hot topics in origin-of-life research, because the kinds of scenarios supported there are quite different than they would be if various species used mixtures of amino acid stereoisomers, or different pure isomers.

rusanu 1 day ago 2 replies      
I have a question for the HN molecular biology aficionados: Where can I find some good critique of Nick Lane's ideas? He sure convinced me, but I would like to see what the experts say about it.
niels_olson 1 day ago 9 replies      
> All complex life on earth shares a common ancestor, a cell that arose from simple bacterial progenitors on just one occasion in 4 billion years.

This strikes me as a commonly held but deeply flawed origin story, and a close read suggests Gates doesn't buy it either, but fell into the literary trap of writing it anyway.

If we believe the first form of life occurred on the sea floor, why shouldn't we believe that new forms of life are spontaneously occurring on the sea floor all the time? By now the overwhelming majority of those new forms are probably eaten by an existing critter, but in the early days there were many, many, many events. And, as he says, there were many, many times when one cell ate another cell. I rather doubt it succeeded in producing eukaryotes only once.

red-indian 1 day ago 3 replies      
I recommend these biology books:

Molecular Cell Biology - Lodish


Molecular Biology of the Cell - Alberts


itodd 1 day ago 1 reply      
Radiolab [as usual] did a great episode on this theory called Cellmates.


ssivark 1 day ago 0 replies      
I haven't read the book, but I wish to make a subtle but important distinction between 'energy' and 'entropy'.

Whenever we say energy in common parlance, we actually mean a source of low-entropy energy. From the perspective of physics, "life" is a non-equilibrium process, so the crucial input is low-entropy stuff (fuel/food/etc.), which can be 'used' by the organism while converting that stuff into high-entropy waste.

As far as we know, energy is always conserved; strictly speaking there is never an energy crisis. It's all about (low) entropy.
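A back-of-the-envelope sketch of this point (my own illustrative numbers, not from the comment): Earth absorbs sunlight and re-radiates the same amount of energy as infrared, so the energy books balance. But entropy per joule of radiation scales roughly as 1/T, so the outgoing radiation carries far more entropy than the incoming - the biosphere lives off that gap.

```python
# Follow one joule of energy through the Earth system: same energy in
# and out, very different entropy attached to it.
E = 1.0          # joules of energy, in and out
T_sun = 5800.0   # K, effective temperature of incoming sunlight
T_earth = 255.0  # K, effective radiating temperature of Earth
S_in = E / T_sun     # entropy arriving with that joule, J/K
S_out = E / T_earth  # entropy leaving with it, J/K
print(f"entropy in : {S_in:.2e} J/K")
print(f"entropy out: {S_out:.2e} J/K")
print(f"ratio      : {S_out / S_in:.1f}x")
```

The roughly 20-fold entropy export per joule is the "low entropy budget" that non-equilibrium processes like life can spend.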

allisthemoist 1 day ago 0 replies      
If you are at all interested in the origin of life and the role of energy therein, I cannot recommend any single source of knowledge more than the following paper as it has literally changed my life: http://onlinelibrary.wiley.com/doi/10.1002/cplx.20191/abstra...

It is written by two of the most intelligent people I think I've ever come across: Eric Smith, who is an external professor at the Santa Fe Institute, and Harold Morowitz, who founded the Krasnow Institute for Advanced Study at George Mason. Both men work in very disparate fields. Morowitz was a specialist in biology, origin-of-life scenarios, and biochemistry, while Smith is a (brilliant) physicist and chemist. However, together they have assembled an encompassing theoretical structure that I am confident will lead science for several decades, once it is gradually integrated into other fields of research - e.g., Jeremy England at MIT has looked at some of the same thermodynamic phenomena using statistical physics (great article on his work: https://www.quantamagazine.org/20140122-a-new-physics-theory...)

Eric Smith actually did a video describing this work while at Santa Fe that is worth a watch: https://www.youtube.com/watch?v=ElMqwgkXguw

This is the first paragraph of the paper mentioned above:

Life is universally understood to require a source of free energy and mechanisms with which to harness it. Remarkably, the converse may also be true: the continuous generation of sources of free energy by abiotic processes may have forced life into existence as a means to alleviate the buildup of free energy stresses. This assertion - for which there is precedent in non-equilibrium statistical mechanics and growing empirical evidence from chemistry - would imply that life had to emerge on the earth, that at least the early steps would occur in the same way on any similar planet, and that we should be able to predict many of these steps from first principles of chemistry and physics together with an accurate understanding of geochemical conditions on the early earth. A deterministic emergence of life would reflect an essential continuity between physics, chemistry, and biology. It would show that a part of the order we recognize as living is thermodynamic order inherent in the geosphere, and that some aspects of Darwinian selection are expressions of the likely simpler statistical mechanics of physical and chemical self-organization.

keithpeter 1 day ago 0 replies      
OA title image was taken in the Micrarium at the Grant Museum of Zoology at UCL. Open to the public and well worth a visit should you happen to live in London. The whole place has a decidedly steam-punk feel with skeletons, brass instruments, and handwritten labels.



Now, I shall have to get the book. I've not read much about biology and energy since finding Schrödinger's What is Life? in the library at school.

restalis 1 day ago 0 replies      
"He makes a persuasive case that complex life must have the traits we see today. And he argues that it would almost certainly develop the same way everywhere. Which means that, if we find complex life on other planets, it will quite likely share the same traits." "...is so compelling that it's hard to imagine any other way"

The explanation is just a hypothesis about the structural evolution of the cells in the living organisms we see all around us. From what I understand, that was only one successful combination of cell structures working under a given set of conditions. Under other conditions, another combination could have formed the basis of later-evolved complex life. Sure, E.T. life would most likely have a similar composition of chemical elements (because of their abundance in the universe, if nothing else), but I can't expect it to necessarily have the same basic structural cell model at its core. I think Mr. Gates' fascination with related problems affected his disposition for healthy criticism here.

jdimov10 1 day ago 0 replies      
"Why does all complex life - every plant and animal you can see - share certain traits, like getting old and reproducing via sex?"

There are plenty of examples of asexual reproduction in plants and animals as well as complex organisms never getting old.

mattdeboard 1 day ago 0 replies      
I like how not even the text loads on that blog if you have JS disabled.
jrcii 1 day ago 5 replies      
"Nick reminds me of writers like Jared Diamond, people who develop a grand theory that explains a lot about the world. He is one of those original thinkers who makes you say: 'More people should know about this guy's work.'"

I have original ideas that explain a lot about the world but no one cares because Bill Gates didn't tell everyone to listen to me.

Can anyone remind me why we care what Bill Gates thinks about biology? If this were a post about the software business, that would be one thing. It's so ubiquitous I don't necessarily expect anyone to understand my point, but this strikes me as a form of worshiping money: because he has money, we care what he thinks about anything and everything. It's a more covert form of the absurdity of reading and caring about what some celebrity likes to eat for breakfast.

Daydream Is Googles Android-Powered VR Platform theverge.com
180 points by T-A  1 day ago   106 comments top 11
mmanfrin 1 day ago 15 replies      
We're less than two months away from the first real releases of VR equipment, and already the ecosystem is fractured into at least four different platforms/SDKs/styles: Oculus, SteamVR/OpenVR/Vive, Google Daydream, and PlayStation VR.

I fear the fracturing will make total adoption lower and slower, as developers will have to choose sides or spend way more time developing for all platforms.

gregmac 1 day ago 2 replies      
Wonder if they'll rename the screensaver in Android TV, also called 'Daydream'? [1]

[1] https://play.google.com/store/apps/details?id=com.google.and...

JacobKyle 1 day ago 2 replies      
The headset doesn't look different from other mobile VR devices. I was hoping/expecting to see something with tango integration for head tracking.
DonHopkins 1 day ago 0 replies      
>"A Daydream home screen will let people access apps and content while using the headset; an early look shows a whimsical forest landscape with the slightly low-poly look that Google has used in Cardboard apps."

Google Bob! ;)

delphinius81 1 day ago 2 replies      
Was anyone able to glean from the video whether the controllers/HMD provide positional tracking, or just orientation tracking? Either way, a more open mobile VR platform sounds great (though I'm a little disappointed that Daydream feels like a "me-too" announcement). I was hoping they would announce solving positional tracking on a mobile device. Nothing all that earth-shattering here at the moment. :/
Lionleaf 1 day ago 1 reply      
This is a big deal. It basically adds the two factors that made Gear VR better than Cardboard to any VR-ready Android phone: OS integration and better IMU sensors. It puts high-quality (but not cutting-edge) VR in everyone's pocket; all you need is a dumb holder with lenses.

My prediction is that this will play a huge role in mass adoption of VR!

mcantelon 1 day ago 0 replies      
TLDR: Google's following in Samsung's Gear VR footsteps with OS optimizations/additional sensors in devices to improve VR experience.
jimrandomh 1 day ago 3 replies      
Have they said anything about latency or asynchronous timewarp? It's a bit too technical to make it into end-user marketing content, but for game developers ATW is a huge deal and it's an area where Cardboard was lagging way behind.
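For readers unfamiliar with the term: asynchronous timewarp reprojects the most recently rendered frame using the freshest head-pose reading just before scanout, so perceived latency tracks the IMU rather than the renderer. A toy sketch of the core idea (my own simplification - a small yaw delta approximated as a horizontal image shift; real implementations do a full rotational reprojection on the GPU, and the 600 px focal length is an assumed value):

```python
import math

def timewarp_shift(yaw_at_render, yaw_at_scanout, focal_px=600.0):
    """Horizontal pixel shift approximating the yaw accumulated
    between rendering a frame and scanning it out (radians in)."""
    delta = yaw_at_scanout - yaw_at_render
    return focal_px * math.tan(delta)

# Head turned 0.02 rad (~1.1 degrees) after the frame was rendered:
shift = timewarp_shift(0.00, 0.02)
print(f"shift image by {shift:.1f} px before scanout")
```

The point of the question stands: without OS/driver support for this kind of late-stage reprojection, Cardboard-style apps are stuck with full render-loop latency.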
moron4hire 1 day ago 0 replies      
It's kind of disheartening to see how much negativity gets heaped towards VR on HN lately.
drzaiusapelord 1 day ago 7 replies      
What's the usage scenario here? I'm just not seeing mobile VR as a usable thing. Does Google expect people to carry goggles around everywhere they go like they do their phone? Worse, the crummy graphics and low framerate in the video seems like a recipe for VR sickness.

I have a Vive at home and it's wonderful for gaming, if a bit undercooked and still unable to deliver a pixel density that makes me happy (Vive 2, perhaps?), but I can't imagine a phone remotely competing with even that underpowered experience.

I can see AR projection built into one's existing glasses a la Google Glass or perhaps what MS is doing with AR, but VR is a totally different beast. Comfort, performance, fov, pixel density, graphics quality, audio quality, "presence," etc really matter. I just don't see Google pulling that off and even if they got close, who exactly is clamoring for VR phones? I suspect Google has just become too mobile centric and is shoehorning in whatever is hot into its Android line and seeing what sticks (instead of refining Android to be a better experience it seems). I'm not sure if that's wise. VR seems to be more at home attached to a powerful computer in a safe indoor space where people can feel free to move around without injury and get a high quality VR experience. You shouldn't be doing VR at the bus stop.

whatnotests 1 day ago 1 reply      
God seriously VR right now is so lame.

By now it should be way, way better.

Whoever is driving this shit off the cliff - please just stop while you're ahead, go back to designing hospital websites or whatever it was before you tried your hand at this, and let someone else do the whole newfangled "VR" thing instead.

       cached 20 May 2016 15:11:01 GMT