Hacker News with inline top comments - 20 May 2016 - Best
WebKit is now 100% ES6 complete twitter.com
754 points by M4v3R  3 days ago   109 comments top 17
zerocrates 3 days ago 0 replies      
The jump in support from Safari 9 (53%) to TP (99%, and then the last little bit from TP to current WebKit) is staggering.
swang 3 days ago 3 replies      
WebKit was probably one of the last major web engines to start integrating ES6 features, yet they have tail call optimization (TCO) done. Can anyone explain what's holding it up in Chrome/FF/Edge?
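For context, the TCO being discussed is ES6 "proper tail calls": a call in tail position can reuse the current stack frame. A minimal sketch (the function is my own illustration, not from the thread):

```javascript
"use strict"; // ES6 only mandates proper tail calls for strict-mode code

// Tail-recursive factorial: the recursive call is the final action, so an
// engine that implements proper tail calls (currently only JavaScriptCore)
// can run it in constant stack space. Other engines compute the same
// result, just with O(n) stack growth.
function factorial(n, acc) {
  if (acc === undefined) acc = 1;
  if (n <= 1) return acc;
  return factorial(n - 1, n * acc); // call in tail position
}

console.log(factorial(5));  // 120
console.log(factorial(10)); // 3628800
```

In an engine without proper tail calls, a deep enough recursion here would still overflow the stack; with them, stack use stays flat.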
pygy_ 3 days ago 2 replies      
Yet only 98% ES5 compliant: https://kangax.github.io/compat-table/es5/

No browser is at 100%, BTW.

Waiting for a WHATWG JavaScript standard that removes these parts from the spec...

coldtea 3 days ago 0 replies      
Heh, after I saw it was at 96-97% last time (when the developer preview versions were announced), I was pretty sure they'd push it to 100% soon, to make a (small) point of it in the WWDC keynote.
taf2 3 days ago 0 replies      
This is nice. But I still think platform features in WebKit such as WebRTC, and fewer bugs in IndexedDB, would be a better place for them to focus... Service Workers too.
acbabis 3 days ago 2 replies      
Does this include native module import/export? I never saw anything to suggest they were even working on that.
nfriedly 3 days ago 2 replies      
Note that this doesn't include ES6 modules. The referenced kangax compatibility table doesn't cover modules, so the tweet is technically correct, but the headline here on HN is a bit misleading.
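To make that distinction concrete: module syntax is a separate parsing goal from classic scripts, which is one reason module support is tracked (and shipped) separately from the rest of ES6. A rough sketch (the `answer` binding is a made-up example):

```javascript
// ES6 module syntax -- valid only inside a module, e.g.:
//   // lib.js:  export const answer = 42;
//   // app.js:  import { answer } from './lib.js';

// The same source that is legal in a module is a SyntaxError when parsed
// with the classic script grammar, which is what new Function() uses.
function isValidScript(src) {
  try {
    new Function(src); // parses src as a classic (non-module) body
    return true;
  } catch (e) {
    if (e instanceof SyntaxError) return false;
    throw e;
  }
}

console.log(isValidScript("const answer = 42;"));        // true
console.log(isValidScript("export const answer = 42;")); // false
```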
kozak 2 days ago 2 replies      
If I had to choose between having all the ES6 features except modules, versus having ES6 modules without all the other ES6 features, I would choose the latter. Sure, I'm biased, because I'm working on a huge single-page app with a gazillion lines of JS code...
joeblau 3 days ago 7 replies      
Is anyone using Safari Technology Preview as their main browser? I was hesitant to use it, but I've heard a few people say that it's stable.
hmottestad 3 days ago 2 replies      
So, today I'm using Babel to transpile my code to ES5.

Does anyone know of any good tooling to serve transpiled code to older browsers and native ES6 to newer ones?
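One common approach is to build two bundles and pick one at load time with a small feature test that merely tries to *parse* representative ES6 syntax. A sketch (the bundle names are hypothetical):

```javascript
// Feature-detect ES6 support by attempting to parse a sample of ES6
// syntax. Parsing only -- nothing in the string is executed.
function supportsES6() {
  try {
    new Function("class X {} const f = (a = 1) => a; let [b] = [2];");
    return true;
  } catch (e) {
    return false;
  }
}

// Pick the matching build artifact (names are made up for illustration).
function bundleFor(baseName) {
  return supportsES6() ? baseName + ".es6.js" : baseName + ".es5.js";
}

console.log(bundleFor("app")); // "app.es6.js" in any ES6-capable engine
```

In a page you would then inject a `<script>` tag pointing at `bundleFor("app")`; Babel with two build configs can emit both bundles from the same source.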

hoodoof 3 days ago 2 replies      
Is ES6 native much faster than transpiled?
feiss 3 days ago 2 replies      
Looking at that table makes me think it's about time to start switching to ES6.
Bahamut 3 days ago 2 replies      
Anyone have any clue when we'll see this land in Safari?
_RPM 3 days ago 2 replies      
Open up the Chrome console, type `class Foo { }`, and it works now out of the box. For what it's worth, when you try to define a class that already exists, it throws an error. It seems that redefining that identifier with the `var` keyword will throw an error too. I don't remember this being the case before.
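The redeclaration behavior described above can be checked directly; a sketch (the helper name is mine):

```javascript
// Evaluate a snippet and report whether it fails with a SyntaxError.
function throwsSyntaxError(src) {
  try {
    eval(src);
    return false;
  } catch (e) {
    return e instanceof SyntaxError;
  }
}

// Redeclaring a class (a lexical binding) in the same scope is an error...
console.log(throwsSyntaxError("class Foo {}; class Foo {}")); // true
// ...as is a `var` that collides with an existing lexical binding.
console.log(throwsSyntaxError("class Bar {}; var Bar;"));     // true
// Plain `var` redeclaration is still allowed, exactly as in ES5.
console.log(throwsSyntaxError("var baz; var baz;"));          // false
```

In the Chrome console each entry is a new script, but lexical bindings persist across entries, which is why typing `class Foo { }` twice also errors there.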
TylerH 2 days ago 1 reply      
Maybe now they can fix all the layout bugs they've been adding to the browser and not fixing.
nailer 3 days ago 0 replies      
I guess this means JavaScriptCore, right? So PhantomJS will also be ES6-complete when it updates.
anarsdk 2 days ago 1 reply      
tfw "complete" has degrees. sigh.

I'm 99% super confident that this is an almost optimal solution.

Google supercharges machine learning tasks with TPU custom chip googleblog.com
792 points by hurrycane  1 day ago   266 comments top 45
luu 1 day ago 15 replies      
I'm happy to hear that this is finally public so I can actually talk about the work I did when I was at Google :-).

I'm a bit surprised they announced this, though. When I was there, there was this pervasive attitude that if "we" had some kind of advantage over the outside world, we shouldn't talk about it lest other people get the same idea. To be clear, I think that's pretty bad for the world and I really wished that they'd change, but it was the prevailing attitude. Currently, if you look at what's being hyped up at a couple of large companies that could conceivably build a competing chip, it's all FPGAs all the time, so announcing that we built an ASIC could change what other companies do, which is exactly what Google was trying to avoid back when I was there.

If this signals that Google is going to be less secretive about infrastructure, that's great news.

When I joined Microsoft, I tried to gently bring up the possibility of doing either GPUs or ASICs and was told, very confidently, by multiple people, that it's impossible to deploy GPUs at scale, let alone ASICs. Since I couldn't point to actual work I'd done elsewhere, it seemed impossible to convince folks, and since my job was in another area, I gave up on it, but I imagine someone is having that discussion again right now.

Just as an aside, I'm being fast and loose with language when I use the word impossible. It's more that my feeling is that you have a limited number of influence points, and I was spending mine on things like convincing my team to use version control instead of mailing zip files around.

bd 1 day ago 3 replies      
So now open sourcing of "crown jewels" AI software makes sense.

Competitive advantage is protected by custom hardware (and huge proprietary datasets).

Everything else can be shared. In fact, it is now advantageous to share as much as you can; the bottleneck is the number of people who know how to use the new tech.

abritishguy 1 day ago 5 replies      
I think this shows a fundamental difference between Amazon (AWS) and Google Cloud.

AWS's offerings seem fairly vanilla and boring. Google are offering more and more really useful stuff:

- Cloud machine learning

- Custom hardware

- Live migration of hosts without downtime

- Cold storage with access in seconds

- BigQuery

- Dataflow

manav 1 day ago 2 replies      
Interesting. Plenty of work has been done with FPGAs, and a few have developed ASICs like DaDianNao in China [1]. Google though actually has the resources to deploy them in their datacenters.

Microsoft explored something similar to accelerate search with FPGAs [2]. The results show that the Arria 10 (20nm, the latest from Altera) had about 1/4th the processing ability at about 10% of the power usage of the Nvidia Tesla K40 (25W vs 235W). Nvidia Pascal has something like 2-3x the performance with a similar power profile. That really bridges the gap in performance/watt. All of that also doesn't take into account the ease of working with CUDA versus the complicated development, toolchains, and cost of FPGAs.
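Back-of-the-envelope on those numbers, taking the rough figures above at face value:

```javascript
// Rough perf/W comparison: FPGA at ~1/4 the throughput of a Tesla K40,
// at 25 W vs 235 W (figures as quoted above; treat them as approximate).
const fpga = { relPerf: 0.25, watts: 25 };
const k40  = { relPerf: 1.0,  watts: 235 };

const perfPerWattRatio = (fpga.relPerf / fpga.watts) / (k40.relPerf / k40.watts);
console.log(perfPerWattRatio.toFixed(2)); // "2.35" -- roughly 2.3x better perf/W
```

Which is why a 2-3x raw-performance jump at similar power roughly closes that perf/W gap, and why a much larger ASIC efficiency advantage would change the calculus entirely.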

However, the ~50x+ efficiency increase of an ASIC could be worthwhile in the long run. The only problem I see is that there might be limitations on model size because of the limited embedded memory of the ASIC.

Does anyone have more information or a whitepaper? I wonder if they are using eASIC.

[1]: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=701142...

[2]: http://research.microsoft.com/pubs/240715/CNN%20Whitepaper.p...

semisight 1 day ago 5 replies      
This is huge. If they really do offer such a perf/watt advantage, they're serious trouble for NVIDIA. Google is one of only a handful of companies with the upfront cash to make a move like this.

I hope we can at least see some white papers soon about the architecture--I wonder how programmable it is.

mrpippy 1 day ago 3 replies      
Bah, SGI made a Tensor Processing Unit XIO card 15 years ago.

Evidence suggests they were mostly for defense customers.


jhartmann 1 day ago 5 replies      
3 generations ahead of Moore's law??? I really wonder how they are accomplishing this beyond implementing the kernels in hardware. I suspect they are using specialized memory and an extremely wide architecture.

Sounds like they also used this for AlphaGo. I wonder how badly off we were on AlphaGo's power estimates. It seems everyone assumed they were using GPUs; sounds like they were not, at least not entirely. I would really LOVE for them to market these for general use.

asimuvPR 1 day ago 1 reply      
Now this is really interesting. I've been asking myself why this hadn't happened before. It's been all software, software, software for the last decade or so. But now I get it. We are at a point in time where it makes sense to adjust the hardware to the software. Funny how things work. It used to be the other way around.
breatheoften 1 day ago 1 reply      
A podcast I listen to posted an interview with an expert last week saying that he perceived that much of the interest in custom hardware for machine learning tasks died when people realized how effective GPUs were at the (still-evolving-set-of) tasks.


I wonder how general the gains from these ASICs are and whether the performance/power-efficiency wins will keep up with the pace of software/algorithm-du-jour advancements.

RIMR 1 day ago 2 replies      
Somewhat off topic, but if you look at the lower-left corner of the heatsink in the first image, there are two red lines and some sort of image artifact.


They probably didn't mean to use this version of the image for their blog - but I wonder what they were trying to indicate/measure there.

danielvf 1 day ago 1 reply      
For the curious, that's a plaque on the side of the rack showing the Go board at the end of AlphaGo vs Lee Sedol Game 3, at the moment Lee Sedol resigned and AlphaGo won the tournament (of five games).
nkw 1 day ago 1 reply      
I guess this explains why Google Cloud Compute hasn't offered GPU instances.
fiatmoney 1 day ago 3 replies      
I'm guessing that the performance/watt claims are heavily predicated on relatively low throughput, kind of similar to ARM vs Intel CPUs - particularly because they're only powering it and supplying bandwidth via what looks like an x1 PCIe slot.

IOW, taking their claims at face value, a Nvidia card or Xeon Phi would be expected to smoke one of these, although you might be able to run N of these in the same power envelope.

But those bandwidth & throughput / card limitations would make certain classes of algorithms not really worthwhile to run on these.

Coding_Cat 16 hours ago 0 replies      
I wonder if we will be seeing more of this in the (near) future. I expect so, and from more people than just Google. Why? Look at the problems the fabs have had with the latest generation of chips; as transistors grow smaller, the problems will probably multiply. We are already close to the physical limit of transistor size. So it is fair to assume that Moore's law will (hopefully) not outlive me.

So what then? I certainly hope the tech sector will not just leave it at that. If you want to continue to improve performance (per-watt) there is only one way you can go then: improve the design at an ASIC level. ASIC design will probably stay relatively hard, although there will probably be some technological solutions to make it easier with time, but if fabrication stalls at a certain nm level, production costs will probably start to drop with time as well.

I've been thinking about this quite a bit recently because I hope to start my PhD in ~1 year, and I'm torn between HPC or Computer Architecture. This seems to be quite a pro for Comp. Arch ;).

bravo22 1 day ago 2 replies      
Given the insane mask costs at smaller geometries, the ASIC is most likely a Xilinx EasyPath or Altera HardCopy. Otherwise the amortization of the mask and dev costs -- even for a structured-cell ASIC -- over a 1K-unit run wouldn't make much sense versus the extra cooling/power costs of a GPU.
phsilva 1 day ago 1 reply      
I wonder if this architecture is the same Lanai architecture that was recently introduced by Google on LLVM. http://lists.llvm.org/pipermail/llvm-dev/2016-February/09511...
nathan_f77 16 hours ago 1 reply      
I'm thinking that this has the potential to change the context of many debates about the "technological singularity", or AI taking over the world. Because it all seems to be based on FUD.

While reading this article, one of my first reactions was "holy shit, Google might actually build a general AI with these, and they've probably already been working on it for years".

But really, nothing about these chips is unknown or scary. They use algorithms that are carefully engineered and understood. They can be scaled up horizontally to crunch numbers, and they have a very specific purpose. They improve search results and maps.

What I'm trying to say is that general artificial intelligence is such a lofty goal that we're going to have to understand every single piece of the puzzle before we get anywhere close. Including building custom ASICs, and writing all of the software by hand. We're not going to accidentally leave any loopholes open where AI secretly becomes conscious and decides to take over the world.

taliesinb 1 day ago 0 replies      
I don't know much about this sort of thing but I wonder if the ultimate performance would come with co-locating specialized compute with memory, so that the spatial layout of the computation on silicon ends up mirroring the abstract dataflow dag, with fairly low-bandwidth and energy efficient links between static register arrays that represent individual weight and grad tensors. Minimize the need for caches and power hungry high bandwidth lanes, ideally the only data moving around is your minibatch data going one way and your grads going the other way.

I wonder if they're doing that, and to what degree.

harigov 1 day ago 3 replies      
How is this different from - say - the synthetic neurons that IBM is working on, or what Nvidia is building?
Bromskloss 1 day ago 2 replies      
What are the capabilities that a piece of hardware like this needs to have to be suitable for machine learning (and not just one specific machine learning problem)?
cschmidt 1 day ago 1 reply      
This seems very similar to the "Fathom Neural Compute Stick" from Movidius.


TensorFlow on a chip....

isseu 1 day ago 0 replies      
Tensor Processing Unit (TPU)

Using it for over a year? Wow

j-dr 21 hours ago 1 reply      
This is great, but can Google stop putting "tensor" in the name of everything when nothing they do really has anything to do with tensors?
saganus 1 day ago 3 replies      
Is that a Go board stuck to the side of the rack?

Maybe they play one move every time someone gets to go there to fix something? or could it be just a way of numbering the racks or something eccentric like that?

hristov 1 day ago 3 replies      
It is interesting that they would make this into an ASIC, given how notoriously high the development costs for ASICs are. Are those costs coming down? If so, life will get very hard for the FPGA makers of the world soon.

It would be interesting to see the economics of this project, i.e., the development costs and the cost per chip. Of course, it's very doubtful I will ever get to see them, but it would be interesting.

protomok 1 day ago 0 replies      
I'd be interested to know more technical details. I wonder if they're using 8-bit multipliers, how many MACs running in parallel, power consumption, etc.
eggy 22 hours ago 0 replies      
Pretty quick implementation.

On the energy savings and space savings front, this type of implementation coupled with the space-saving, energy-saving claims of going to unums vs. float should get it to the next order of magnitude. Come on, Google, make unums happen!

__jal 1 day ago 0 replies      
My favorite part is what looks like flush-head sheet metal screws holding the heat sink on.

No wondering where you left the Torx drivers with this one.

aaronsnoswell 1 day ago 2 replies      
I'm curious to know: is this announcement something that an expert in these sorts of areas could have predicted (or did predict?) months or years ago, given Google's recent jumps forward in machine learning products? Can someone with more knowledge about this comment?
hyperopt 22 hours ago 1 reply      
The Cloud Machine Learning service is one that I'm highly anticipating. Setting up arbitrary cloud machines for training models is a mess right now. I think if Google sets it up correctly, it could be a game changer for ML research for the rest of us. Especially if they can undercut AWS's GPU instances on cost per unit of performance through specialized hardware. I don't think the coinciding releases/announcements of TensorFlow, Cloud ML, and now this are an accident. There is something brewing and I think it's going to be big.
eggy 17 hours ago 1 reply      
I think the confluence of new technologies and the re-emergence / rediscovery of older technologies is going to be the best combination. Whether it goes that way is not certain, since the best technology doesn't always win out. Here, though, the money should, since all of these would greatly reduce time and energy in training and validating:

* Vector processing computers - not von Neumann machines [1].

* Array languages new, or like J, K, or Q in the APL family [2,3]

* The replacement of floating point units with unum processors [4]

Neural networks are inherently arrays or matrices, and would do better on a purpose-designed vector array machine, not a re-purposed GPU, or even a TPU inside a standard von Neumann machine. Maybe a non-von Neumann architecture like the old Lisp Machines, but for arrays, not lists (and no, this is not a modern GPU: the data has to stay on the processor, not be offloaded to external memory).

I started with neural networks in the late 80s/early 1990s, and I was mainly programming in C: matrices and FOR loops. I found J, the array language, many years later, unfortunately. Businesses have been making enough money off of the advantage of the array processing language A+, then K, that the per-seat cost of KDB+/Q (database/language) is easily justifiable. Other software like RiakTS is looking to get in the game using Spark/Shark and other pieces of kit, but a K4 query is 230 times faster than Spark/Shark, and uses 0.2GB of memory vs. 50GB. The similar technologies just don't fit the problem space as well as a vector language. I am partial to J, being a more mathematically pure array language in that it is based on arrays. K4 (soon to be K5/K6) is list-based at the lower level, and is honed for tick data or time-series data. J is a bit more general-purpose or academic in my opinion.

Unums are theoretically more energy efficient and compact than floating point, and take away the error-guessing game. They are being tested with several different language implementations to validate their creator's claims and practicality. The Mathematica notebook that John Gustafson modeled his work on is available to download free from the book publisher's site. People have already done some exploratory investigations in Python, Julia, and even J. I believe the J one is a 4-bit implementation of unums based on unums 1.0. John Gustafson just presented unums 2.0 in February 2016.

[1] http://conceptualorigami.blogspot.co.id/2010/12/vector-proce...

[2] jsoftware.com

[3] http://kxcommunity.com/an-introduction-to-neural-networks-wi...

[4] https://www.crcpress.com/The-End-of-Error-Unum-Computing/Gus...

j1vms 1 day ago 2 replies      
I wouldn't be surprised if Google is looking to build (or has already built) a highly dense and parallel analog computer with limited-precision ADCs/DACs. I mean that's simplifying things quite a bit, but it would probably map pretty well to the TensorFlow application.
paulsutter 1 day ago 0 replies      
> Our goal is to lead the industry on machine learning and make that innovation available to our customers.

Are they saying Google Cloud customers will get access to TPUs eventually? Or that general users will see service improvements?

mistobaan 1 day ago 0 replies      
Another point is that they will be able to provide much higher computing capabilities at a much lower price point than any competitor. I really like the direction that the company is taking.
swalsh 1 day ago 0 replies      
I wonder if opening this up as a cloud offering is a way to get a whole bunch of excess capacity (if it needs it for something big?) but have it paid for.
dharma1 1 day ago 0 replies      
Hasn't made a dent in Nvidia's share price yet.
amelius 1 day ago 2 replies      
One question: what has this got to do with tensors?
camkego 1 day ago 1 reply      
Does anyone have links to the talk or the graphs?
ungzd 1 day ago 0 replies      
Does it use approximate computing technology?
niels_olson 1 day ago 0 replies      
I like that the images are mislabeled :)
revelation 1 day ago 0 replies      
There is not a single number in this article.

Now these heatsinks can be deceiving for boards that are meant to be in a server rack unit with massive fans throwing a hurricane over them, but even then that is not very much power we're looking at there.

nxzero 1 day ago 0 replies      
Is there any way to detect what hardware is being used by the cloud service, if you're using the cloud service? (Yes, I realize this question is a bit of a paradox, but figured I'd ask.)
LogicFailsMe 1 day ago 1 reply      
Perf/W, the official metric of slow but efficient processors. How many times must we go down this road?

Let's see this sucker train AlexNet...

rando3826 1 day ago 1 reply      
Why use an ANKY in the title? Using an ANKY (acronym no one knows yet) is bad writing, makes readers feel dumb, etc. Google JUST NOW invented that acronym; sticking it in the title like just another word we should understand is absolutely ridiculous.
simunaga 19 hours ago 2 replies      
In what sense is this great news? Yes, it's progress, so what? After all, you - programmers - earn money for your jobs, and pretty soon you might not have one. Because of these kinds of "great news" -- "Whayyy, this is really interesting, AI, machine learning. Aaaaa!".

"I'll get fired, won't have money for living and AI will take my place, but the world will be better! Yes! Progress!"

Who will benefit from this? Surely not you. Why are you so ecstatic then?

Google Home home.google.com
566 points by stuartmemo  1 day ago   441 comments top 72
cheald 1 day ago 22 replies      
When I was younger, I dreamed of something like this. Voice control for my home! A Star Trek computer that I can interact with conversationally! I just say what I want and it happens!

Now, I just see an internet-connected microphone in a software black box which I can only interpret as a giant frickin' security liability. I want this, but unless it's open source top-to-bottom, I won't ever actually put one in my home. We know too much about how these things can be abused for me to ever seriously consider it without being able to verify for myself what it's doing and why.

t0mbstone 1 day ago 9 replies      
Please, please, please be a completely open, extensible platform...

I want to be able to control my Apple TV with my Google Home device.

I want to be able to control my Phillips Hue and LiFX bulbs.

I want to be able to build my own custom home automation server endpoints and point my Google Home commands at them.

I want to be able to remote start my car with a voice command.

I want to be able to control my Harmony remote, and all of the devices connected to my Harmony hub.

I want to be able to access my Google calendar.

I want to be able to make hands-free phone calls to anyone on my Google contacts.

If my grandmother falls, I want her to be able to call 911 by talking to the Google Home device.

I want to be able to ask wolfram alpha questions by voice.

I want to be able to have a back-and-forth conversation to arrive at a conclusion. I don't want to have to say a perfectly formulated command like, "Add an event to my calendar on Jan 1, 2016 at 2:00 pm titled go to the pool party". I want to be able to say, "Can you add an event to my calendar?", and then answer a series of questions. I hate having to formulate complex commands as a single sentence.

I want to be able to have a Google Home device in each room, without having to give each one its own wake-up word. Just have the closest one to me respond to my voice (based on how well it can hear me).

I want to be able to play music on all of my Google Home devices at the same time, and have the music perfectly synchronized.

This is my wish list. I am currently able to do more than half of these items with Amazon Echo, but I had to do a bunch of hacking and it was a pain in the ass.

If Google Home can deliver on these points, I would switch from Amazon Echo in a heartbeat.

koolba 1 day ago 4 replies      
RFP - Request For Project

1. Train Google Home to recognize Amazon Echo's voice as its owner.

2. Train Amazon Echo to recognize Siri's voice as its owner

3. Train Siri to recognize Google Home's voice as its owner

4. Kick start some kind of endless loop between the three of them.

frik 1 day ago 1 reply      
Google, thanks for shutting down Freebase.com on 2 May 2016. By taking it offline and using it (as the Knowledge Graph) for Google Home, you effectively locked out all competitors. WikiData is a far cry from, and a fraction of the size of, what Freebase was.

Freebase was a large collaborative knowledge base consisting of data composed mainly by its community members. It was an online collection of structured data harvested from many sources, including individual, user-submitted wiki contributions. Freebase aimed to create a global resource that allowed people (and machines) to access common information more effectively.


Google is using a lot of collaboratively collected data from the now-closed Freebase and from Wikipedia without giving anything back.

will_brown 1 day ago 2 replies      
When Windows 10/Cortana was released, my buddy attached a mixer/switch to his PC, allowing him to wire mics and speakers into every room in his house.

And though I can't see any personal uses for such a device, he swears it has changed his life, and the only thing I believe he does with it is tell Cortana to play Van Halen first thing when he wakes up.

protomikron 1 day ago 6 replies      
Ok, controversial opinion:

"[...] and manage everyday tasks"

What exactly do we want to automate at home? I think this whole home automation and smart home stuff is complete bullshit. Obviously there are some nice things, like "play me song xyz", but IMHO it is completely oversold. There are just not that many things to automate at home.

And this does not mean that I think 640K are enough memory for everyone.

fizzbatter 1 day ago 1 reply      
I'm dying for an Echo / Home that is fully API-friendly and allows custom keywords. I want to buy an interface to my own home assistant. I want a hacker's friend.

Sure, offline-capable would be great too, but for now just give me the damn API hooks. :s

edit: Note that I believe Echo has a pretty good API. I just don't want to talk to Echo, haha. I want to talk to my system.

deprave 1 day ago 2 replies      
A company that makes money by collecting and selling access to personal information about people is offering to put a microphone in your home.

If you need a product like this, for the sake of your privacy, buy an Echo.

JarvisSong 1 day ago 1 reply      
ITT smart hackers asking for more features and noting the privacy implications. Unfortunately, this, Echo, and others are coming for the masses, the masses who have everything public on Facebook and won't really understand the issues until it's too late. Give it a few years and 'everyone' will have a Star-Trek-like home computer experience. What can we do to turn the tide in favor of privacy and security? Or do we just trust Google/Amazon will do the right thing?
izolate 1 day ago 6 replies      
Looks like something that should've been under the Nest brand. Whatever happened to that?
pbnjay 1 day ago 1 reply      
I find it odd that Google is going to take so long to get this out the door - "later this year" seems like ages. Did they start on the hardware that late?

Amazon has, what, 6 months to get more competitive on the search/trivia front? Or this is going to kill it.

free2rhyme214 1 day ago 2 replies      
Competition against Amazon Echo is always positive for consumers.
swalsh 1 day ago 1 reply      
As someone who runs a small ecommerce company, I'm really hoping the next platform is open, and not owned by Amazon (or Google). I sell products where purchasing them via a voice interface would be fantastic. If Amazon owns it, though, there's no way I'm going to get any fraction of that business. The ownership of these voice platforms is a huge risk for market competition. The voice interface naturally lends itself to "choose the first choice that fits my parameters, and let's go with it". If you say "Alexa, book me a taxi to the airport", Alexa chooses who takes you. Being the priority choice is a huge advantage for whoever wins that. It's just so much power in the hands of so few. It's the opposite of what the internet should've been.
grownseed 1 day ago 1 reply      
The page linked here is basically an ad with no content (yet it manages to have a scrollbar no matter the window size...). Tried to look for actual specs but couldn't find anything, does anybody have anything more substantial?

On another note, is there a way to just get some sort of remote microphone array (I think that's what it's called on the Echo) and set up Alexa/Google/Cortana/... directly on a PC?

zitterbewegung 1 day ago 0 replies      
I'm confident that Amazon won't kill off Alexa (due to its success). I am not so confident about this one: if it isn't widely successful, it could be killed off in the future just like Revolv, bricking the device. It is good that Alexa is getting competition though.
beilharz 1 day ago 1 reply      
This gives me a 404.
shogun21 1 day ago 0 replies      
I am impressed Amazon was able to make a new product category. It's only a matter of time before Apple announces their take on Siri Home.
enibundo 19 hours ago 0 replies      
Does anyone else feel like this kind of stuff (I'd put it in the same bag as the Apple Watch and the Amazon something) is completely useless?

Personally, I feel we need to use less technology in everyday life.

xiphias 1 day ago 0 replies      
"Always on call" - it just brings up the worst memories for me of waking up at 3am.
partiallypro 1 day ago 0 replies      
Microsoft, where are you? Cortana on a device similar to the chip Master Chief has would be incredibly popular, and done right it could be just as popular here. Especially since Cortana is on every platform and completely agnostic, unlike Google Home and Echo. Give it the same extensible API as Cortana has on Windows 10, etc., and it could be a home run. Don't let Google and Amazon eat your lunch here.

I do wonder though how Google/Microsoft/Apple will handle there being multiple instances of their devices able to take commands. So if I say "Hey Cortana" or "Ok Google" will each device have to sort of communicate with the other to only activate the one that is closest?

bbunqq 1 day ago 4 replies      
You too can bring a slice of 1984 into your home with this lovely crafted listening device!
blabla_blublu 1 day ago 0 replies      
Competition in this space is welcome! Can't wait to see what sets them apart from Echo. Given Google's propensity to sell ads, it will be interesting to see if customers are willing to put a device like this in their house.

Reminded me of a humidifier for some reason - http://www.amazon.com/Aromatherapy-Essential-Oil-Diffuser-co...

Roritharr 1 day ago 0 replies      
I love how they put the LG MusicFlow Speaker on the Home presentation. I've been suffering that malpurchase for about a year now. I can rely on it not working 70% of the time, seemingly crashing, creating a mesh wifi although plugged into ethernet or attached to my home wi-fi...

If they can't get the third party vendors to get their Google Cast integration up to the reliability level of a Chromecast Audio, they should stop supporting this.

struct 1 day ago 0 replies      
Looks neat, let's hope Google leads in 3rd party applications too and not just in appearance. Also interesting that they specifically gave a shout out to the Alexa team.
dmritard96 1 day ago 1 reply      
I think the most interesting thing in the echo and now google home narrative is that these are subsets of phones. Speakers, microphones and internet connections with only two substantial differences - they are powered 100% of the time and they have better speakers/acoustics. It will be interesting to see if those are substantial enough to overwhelm the obviousness of doing these through the phone in your pocket.
evolve2k 1 day ago 1 reply      
There's no way I'm putting something like this, collecting data directly for Google in my house.

Anyone else have privacy concerns?

pluc 1 day ago 3 replies      
It's blocked in Canada.


lazyjones 1 day ago 4 replies      
What's the business model for Google Home? Will it suddenly splurt out an advertising message in the middle of the night, or will it rather include subtle product placements in otherwise harmless answers?

Remember, it's made by a company that thinks it's appropriate to put text ads on the first spots of your search results, in increasingly confusing ways.

jug 17 hours ago 0 replies      
This is interesting but to be honest I already have this on my phone, which is with me not only in my living room, but even in the street.
gopher2 1 day ago 0 replies      
Yeah, I'm sticking with Echo because business model.
xenihn 1 day ago 0 replies      
Hey, I have that pasta strainer. The one that's being used to store citrus fruits for some reason...
walrus01 1 day ago 1 reply      
How many months from release until the FISA court issues a secret order to turn one of these on 24x7x365 in a suspect's home, and stream the audio to the FBI "counter terrorism" people investigating a subject?
mattmaroon 23 hours ago 0 replies      
I love my Echo but it has a couple of weak points, all of which could be solved by a competent platform. I can't, for instance, just tell Alexa to play new podcasts from my lists or directly from the net (except through TuneIn, which sucks). It doesn't work with many home automation devices. Its AI is not that great when it comes to non-Amazon services.

I'm hopeful the Android platform will make this a better device.

kristianc 1 day ago 0 replies      
The search queries that get sent to Google are probably the least interesting part of this to them. Sure, Google will get some additional search queries and be able to target you slightly better, but it's a rounding error in terms of the data they already have.

The interest in this on Google's side is on having a permanently connected 'listener' on your network to identify which devices you're running and when. If it's running through your WiFi network, Google is going to know about it.

djloche 1 day ago 0 replies      
Voice controlled computer interactivity doesn't appeal to me, and double unappealing is the skynet factor to the whole thing.

Home automation doesn't need nor should it require signing over your privacy.

imh 1 day ago 0 replies      
In what world is "Always on call" an appealing phrase?
wodenokoto 1 day ago 1 reply      
What is this? All I get is:

 404. That's an error. The requested URL / was not found on this server. That's all we know.

theideasmith 23 hours ago 0 replies      
The website is down now. For those who want to check it out, here's the link: https://web.archive.org/web/20160518173022/https://home.goog...
jredwards 1 day ago 0 replies      
Google Nope
alexc05 1 day ago 2 replies      
I really don't want to come off as super negative here ... but am I the only one who finds this one UGLY?

Compare to some of the other devices from previous years, and competitors:




It sort-of looks like a cheap air freshener. Maybe it'd grow on me, but I kinda think it is ugly.

Someone should manufacture a range of "tchotchke skins"

https://www.google.ca/search?q=tchotchke&tbm=isch so it could sit on your counter and look like something that you'd be happy to mix in with the rest of your decor. (angels, golden lucky-cats, porcelain hands, googly-eyed-wooden-owls https://s-media-cache-ak0.pinimg.com/736x/78/b5/80/78b580270...)

Anything to stop that thing from looking like a plug-in air freshener, really.

ComodoHacker 15 hours ago 0 replies      
Next step: chemically analysing your kitchen fumes and flavors in nearly real time to profile your gastronomic habits.
conjectures 16 hours ago 0 replies      
How is this different to having a smartphone on your person? Other than using an additional plug socket.
gcr 1 day ago 0 replies      
Will they offer a rebate for this device to burned Revolv users?

Doing that would be a great gesture.

As it stands, I would be wary of purchasing one of these. How long would it last before Google tires of it?

machbio 23 hours ago 0 replies      
Hope this is not as disappointing as Onhub, it would be helpful if they have a rich api to start with and not promise that the APIs are coming later...
exodust 22 hours ago 0 replies      
That page is so simple, yet even Google devs are "powering" these simple pages with multiple JS files. Why? Is it laziness? Or just some belief that Angular is required now for even "hello world"?

When viewing source I initially thought 'great, a nice clean HTML page'... after all, it's just 3 images fading between each other and a simple form.

But then at the bottom we see Angular, Angular Animate, Angular Scroll, and a fourth Main JS file. Way to set an example Google.

mathpepe 18 hours ago 0 replies      
When the danger is so near we admire the foresight of those warning about it. Kudos to the FSF.
sickbeard 1 day ago 1 reply      
Remember when voice commands for your computer came out? It was cool but nobody talks to their computer. They won't be talking aimlessly in their kitchen either.
irrational 1 day ago 0 replies      
So to use it I have to get up and go to wherever it is plugged in? Why wouldn't I just use my phone which is always on me?
lamein 1 day ago 0 replies      
People don't care about their privacy anymore. Many of us do care about it, but we are not the majority.

This project relies on that fact.

csrm123 16 hours ago 0 replies      
What happened to "Don't be Evil"?
sgnelson 22 hours ago 0 replies      
How long before we find out the NSA has access to this and the Amazon Echo?
raajg 1 day ago 0 replies      
Another Amazon Echo. Not at all interested.

I wish there was a text box:

Please never send me the latest updates about Google Home.

paulftw 1 day ago 0 replies      
Amazon has Echo, Google has Nest and now Home.

What could Apple's Project Titan be if not a smart home device?

pbreit 1 day ago 0 replies      
Please support 3rd party streaming audio.
swasheck 1 day ago 0 replies      
kinda looks like my wife's essential oil diffuser. it'd fit right in if i wanted one.
Joof 1 day ago 0 replies      
Initially after snowden I thought, "the government and governments around the world will crack down on this behavior now".

I was naïve. Nobody cares. Now they viciously support such practices. As long as that exists, I can't buy into datamining devices. And it will always exist.

educar 1 day ago 0 replies      
Seriously, never in my wildest dreams did I think that technology would come down to this. Like many others, I dreamed a future where I could have an automated assistant at home. Just not this way! It's really all about ads and mining data, isn't it.
bobwaycott 1 day ago 0 replies      
Can we get the link changed to https://home.google.com/? Non-HTTPS just 404s.
King-Aaron 23 hours ago 0 replies      
Sucked in, anyone who bought a Nest.
Kinnard 1 day ago 1 reply      
Why wasn't this done under Nest?
gambiting 14 hours ago 0 replies      
In 4 years Google will drop support for it leaving you with a pretty paperweight. Not interested, not from Google.
58028641 1 day ago 0 replies      
till google disables it ...
tempodox 19 hours ago 0 replies      
Now, we can volunteer for the Big Brother experience.
dharma1 1 day ago 0 replies      
looking forward to replacing my Echo Dot with this
ilaksh 1 day ago 0 replies      
Only a one-sentence explanation, unless I missed something. It's an Echo competitor.
zozo123 1 day ago 0 replies      
ck2 1 day ago 0 replies      
So it's echo/alexa by Google?


Is there going to be a patent war?

bache 1 day ago 1 reply      
Oletros 1 day ago 0 replies      
And I suppose it will be another US only product/service from Google
romanovcode 1 day ago 1 reply      
Haha, no thank you. I don't want Google to listen to everything I say in my house.

Next thing you know it's going to tell me "Smith! Put more effort in those crunches!".

jayfuerstenberg 1 day ago 0 replies      
I'm not so lazy that I can't hold my phone and google for something. Pass.
nkg 1 day ago 1 reply      
This morning a friend of mine got his gmail hacked, which means his Play, Maps, Music and everything got hacked also.

With Google Home, add your "everyday tasks" and voice history to this! ^^

Fast.com: Netflix internet connection speed test fast.com
614 points by protomyth  1 day ago   349 comments top 63
exhilaration 1 day ago 7 replies      
This is from Netflix, it downloads Netflix content and reports the speed back.

This is important because unlike your average Internet speed test (which ISPs take pains to optimize), there's a very real possibility that your ISP is happy to let your Netflix experience suffer - assuming they don't throttle it outright - as previously mentioned on HN:



CyrusL 1 day ago 6 replies      
Cool. I just redirected http://slow.com to https://fast.com .
finnn 1 day ago 7 replies      
For those hatin' on speedtest.net and wanting upload numbers, http://speedtest.dslreports.com/ and https://speedof.me/ have both been around for a while. The reason for fast.com is that it tests download speed from Netflix; ISPs can't prioritize it without prioritizing Netflix as well.
nlawalker 1 day ago 3 replies      
What I'd really love to see is this concept provided as a service by all of the big streaming/gaming/large-content-blob providers and aggregated into a single page.

I have absolutely no reason to believe that every well-known "speed test" app/site/utility out there isn't being gamed by my ISP. A speed test that showed me my actual streaming bandwidth from Netflix, actual download speed of an XX MB file from Steam, actual upload bandwidth to some photo-sharing service, and actual latency to XBox Live or some well-trafficked gaming service would be awesome.

bdwalter 1 day ago 1 reply      
Seems like this is really about training their consumers to define the quality of their internet by their reachability to the Netflix CDN nodes.. Smart move on Netflix's part.
gregmac 3 hours ago 0 replies      
Some observations about this:

For me, it's getting stuff from https://*.cogeco.isp.nflxvideo.net -- which indicates my ISP (Cogeco) is part of their Open Connect [1] program with an on-network netflix cache.

Other people are reporting downloads from https://*.ix.nflxvideo.net, which appears to be the Netflix cloud infrastructure.

It downloads data from 5 URLs every time, but their sizes fluctuate, something like ~25MB, ~25MB, ~20MB, ~2.2MB, ~1.2MB.

The contents of each response appear to be the same (though truncated at a different place), with the beginning starting with:

 5d b9 3c a9 c3 b4 20 30 b9 bc 47 06 ab 63 22 11
`file` doesn't recognize what this is.


Since it's https, ISPs shouldn't be able to easily game this (eg: make this go fast, but still throttle video content).

So one potential way would be to only start throttling after 25MB is downloaded (or after a connection is open for ~2 minutes): does anyone know how Netflix actually streams? If they have separate HTTP sessions for 'chunks' of a video, then presumably this wouldn't work.

They could see if a user visits fast.com and then unthrottle for some amount of time. I'm not sure if ISPs have the infrastructure to do a complex rule like this though (anyone know?). I also think this would be relatively easy for users to notice (anytime they visit fast.com, their netflix problems disappear for a while) and there would be a pretty big backlash about something so blatant.

[1] https://openconnect.netflix.com/en/

gdulli 1 day ago 4 replies      
When I used to be a Netflix customer it was more the variability of my connection that was an issue and not its "speed" at a given optimal time.

Usually I could begin a stream without problems. But often while streaming (often enough for me to realize streaming was a bad experience) the bitrate dynamically dropped way down to a terrible quality in response to what I imagine were poor network conditions. Netflix no doubt sees this dynamic quality adjustment as a feature, and preferable to buffering, but I chose an HD stream and I'd rather even see an SD quality video that I could be sure would stay that quality than switching between HD and very low bitrate, fuzzy, artifacty video.

I don't blame Netflix for the quality of my connection, but streaming is just not as reliable as cable and it's not one of those Moore's Law type things where throwing more processing power or memory fixes the network issues.

vessenes 1 day ago 0 replies      
I like the idea of getting ISPs into internal conflict: the folks responsible for making sure that speed checks like speedtest.net run quickly will be fighting the folks responsible for throttling Netflix.

But, I think the throttling folks will ultimately win. In that case, I guess Netflix is laying out a good case for consumers to complain, so it's win-win.

ejcx 1 day ago 0 replies      
This is super awesome! It's a good speedtest that works on mobile, which I had not been able to find.

Funny thing is I found this in the source.

 <!-- TODO: add code to remove this script for prod build -->
 <!--<script>
   document.write('<script src="http://' + (location.host || 'localhost').split(':')[0] + ':8081/livereload.js?snipver=1"></' + 'script>')
 </script>-->
Not a big deal, but kind of funny.

victorNicollet 1 day ago 2 replies      
Very interesting, and it confirmed my suspicions that my ISP throttles me (or at least, tries to).

I'm using Numericable from Paris and got 18Mbps to Netflix, 40Mbps to their comparison test. By going through an SSH tunnel (which makes a 230km detour through Roubaix), I get 39Mbps to both Netflix and control.

I am rather surprised that the bandwith loss caused by the SSH tunnel is so small.

kcorbitt 1 day ago 2 replies      
Really nice and easy to use -- the test starts way quicker than speedtest.net.

However, am I missing something, or does this only test downloading? I guess that makes sense for Netflix's use case, but I'm usually at least as interested in knowing my upload speed, because with typical asymmetric connections that can be a bigger bottleneck for video calls and content-production workloads.

jedberg 1 day ago 4 replies      
Oh man this is awesome. I can't wait till people start calling their ISPs claiming they aren't getting the speeds they pay for, only for the poor agent to have to explain how peering agreements work.
mofle 1 day ago 0 replies      
I made a command-line app for it: https://github.com/sindresorhus/fast-cli
callmeed 1 day ago 1 reply      
Most interesting is comparing it to the ISP speed tests:




Fast.com is reporting about 1/2 the speed of these for me (2 seem to use the same Ookla speed test).

zodPod 1 day ago 1 reply      
I'd bet this is a move to make the ISPs that are throttling them look bad. If people start to use it to check their speeds and they are downloading Netflix content from Netflix and the ISP is throttling, it will look slower than it is and more people will likely complain.

I like it. It's suitably evil!

danr4 1 day ago 2 replies      
This is good but my god what a waste of a domain name :(
janpieterz 1 day ago 3 replies      
Odd, on a dedicated 500 mbit line I've now gotten 6 different results, ranging from 350-500. Speedtest.net indicates a stable 500+ mbit line, downloads from very fast servers always max it out at 500 as well.

Besides stabilizing it a bit, getting the upload on there would be amazing, it's certainly a lot nicer for the eye than speedtest.net.

mrbill 1 day ago 1 reply      
Interesting. Even over multiple tests, I get almost exactly 1/3rd the download bandwidth speed to NetFlix that I do testing with speedtest.net.
_jomo 1 day ago 2 replies      
I also like speedof.me, which tests latency, download, and upload purely using HTML5/JS (unlike speedtest.net with its Flash app)
tigeba 1 day ago 4 replies      
Just for a reference point, I'm getting about 350 on Google Fiber in Kansas City.
smaili 1 day ago 4 replies      
Not to sound ignorant, but what's the point? Why would Netflix go through the trouble of acquiring what I suspect to be a fairly expensive domain just to show how fast one's internet speed is?
nodesocket 1 day ago 1 reply      
While cool, I can't believe they bought and use fast.com for something so simple. Fast.com has to be worth some coin. Anybody have any idea what that domain is worth?
iLoch 1 day ago 1 reply      
Description would be nice for anyone on mobile who doesn't want to needlessly waste bandwidth.
pazra 1 day ago 0 replies      
This is nice and great that it loads quickly with no bloat or distractions. Not sure about the domain name though, as it's not immediately obvious what the site is for.
stanleydrew 1 day ago 2 replies      
I'm pretty sure Google is about to release a speed test tool embedded directly into its SRP for speed-test-related queries.

Similarly to how they eliminated the need for third-party IP address checking tools by returning your actual IP address when you search for "what's my ip address".

pgrote 1 day ago 1 reply      
The amount of data netflix will collect from this is exciting! I can only imagine the stories it will tell once hundreds of thousands of people use it. It would be fantastic to see how the agreements between ISPs and netflix affect the data transfer rates.
isomorphic 1 day ago 0 replies      
I have multiple WAN connections (multiple ISPs). This actually (correctly) reports the aggregate download speed!

Obviously if they are "downloading multiple files," they aren't waiting for them to complete synchronously.

loganabbott 1 day ago 1 reply      
I prefer the speed test here: https://www.voipreview.org/speedtest No flash or silverlight required and a lot more details
danvoell 1 day ago 1 reply      
I feel like you could do more with this domain. Cool little tool though.
athenot 22 hours ago 2 replies      
This is interesting.

Test 1: on Comcast but connected to company's VPN: 48Mb/s

Test 2: on Comcast but not on the VPN: 11Mb/s

manmal 1 day ago 1 reply      
I have absolutely terrible Netflix quality on my Samsung TV sometimes, but it shows 68MBit here. Makes me wonder whether the firmware is to blame..
k4rtik 1 day ago 0 replies      
Is it inflating the results shown on Wi-Fi?

I am on a MacBook Air Early 2014 and my current link speed is 144 Mbit/s according to Network Utility, but fast.com shows between 210 to 230 Mbps on each run.

Speedtest.net results are consistent as before at ~38 Mbps, which is what I would expect from the routers around me.

erickhill 1 day ago 2 replies      
Thanks Xfinity. For my home service fast.com should redirect to slow.com. 5.2 Mbps (it's sold at 50 Mbps with asterisks everywhere).
narfz 13 hours ago 0 replies      
Is there a bandwidth cap? I constantly get 160Mbps, but I know for sure that our office line can do way more. speedtest.net is always close to 900Mbps. Maybe speedtest.net has an endpoint within the ISP backbone and Netflix does not? Or is it the peering between AWS and my ISP?
smhenderson 1 day ago 2 replies      
OK, so I get 48 on fast.com and decided to use the link to compare on speedtest.net. There I get 101 down, 112 up.

So while 48 seems very fast to me (I get 19 at work), it's a lot less than 101. Is Verizon throttling the connection or is NetFlix not giving me more than ~50? At what point is the cap on NetFlix's side and not the client connection?

ahamdy 1 day ago 0 replies      
The download speed is absolutely incorrect. I live in a 3rd world country and have a 2Mb connection; I get a max download rate of 200 KB/s, yet fast.com is showing a download speed of 1.2Mb. I really wish it was true
jasallen 1 day ago 0 replies      
Wow, "fast.com" is one helluva valuable piece of DNS real estate Netflix is throwing at this.
lemiffe 1 day ago 1 reply      
Only downlink? For me uplink is more important, and I suspect for others as well (gaming/streaming).
EpicEng 21 hours ago 0 replies      
Well... I just found out that my connection went from 20Mb to ~120Mb recently. I have no idea when this happened and my bill hasn't changed.
IgorPartola 1 day ago 1 reply      
Yup, and it nicely confirms that (a) my Charter connection is in fact 65 Mbps down and (b) I can't get faster internet where I live.

Oh, and 5 Mbps up is just ridiculous. That's what I get with my business plan. Back up a TB of data to the cloud? Yeah, that'll take weeks.

vonklaus 1 day ago 1 reply      
My internet speed (according to fast.com) is 0. Adblock & uBlock off on the site & fast.com uses https. Not sure why it wouldn't be working, no VPN in middle. Anyone else having issues?

edit: Speedtest.net was ~38mbps down. Is a netflix subscription nec. for this?

martin-adams 19 hours ago 0 replies      
Cool tool. I'd love to know the story behind how Netflix managed to use such a lucrative domain.
mrmondo 1 day ago 0 replies      
Just tried it on our 300/300Mbit link at work; lots of people working today so it'll be under heavy use, but:

- Netflix: 240Mbit/s

- Speedtest: 293Mbit/s

zmitri 1 day ago 1 reply      
Speed is halved using fast.com (140 down) vs speedtest (287 down) and I'm currently on Paxio in Oakland http://www.paxio.com
kilroy123 1 day ago 2 replies      
Why doesn't Netflix just try to bypass ISPs by rolling out their own service?

Big ISPs are starting to cap data to stop/slow down Netflix. They should just put out their own high-speed service like Google.

nodesocket 1 day ago 0 replies      
Interesting looking at Chrome developer tools, lots of magic and interesting payloads.

The http header Via is interesting as it lists the AWS instance that served the request and region. i-654a87b8 (us-west-2)

myrandomcomment 1 day ago 2 replies      
On AT&T U-verse in Palo Alto area. $72 p/month for 24mb now with even more data caps.

Fast.com ~23mb

Speedtest.net ~38mb

Hum, I wonder which is right and which is the ISP screwing with traffic?

dangson 1 day ago 0 replies      
Not surprisingly since this is downloading Netflix content, it doesn't work when I'm connected through Private Internet Access VPN.
caludio 1 day ago 0 replies      
Mhh, I get consistently (much) lower Mbps with Firefox than with Chrome. Is it how it's supposed to be? Is it my network maybe?
JustSomeNobody 1 day ago 1 reply      
How long until ISPs catch on and make sure fast.com is given a high priority?

I don't see how this will accomplish anything for Netflix.

mrmondo 1 day ago 0 replies      
Doesn't work at all well for me here in Melbourne on 4g. Netflix: 7Mbit, speedtest.net: 39Mbit
bodytaing 1 day ago 0 replies      
This is an awesome alternative to the other speed tests because it's very minimal and has a clutter-free UI.
vadym909 1 day ago 2 replies      
Wow- this is awesome. I hated speedtest.net
hacks412 1 day ago 0 replies      
Is this a way for them to optimize who they deliver faster streaming services to?
parfe 1 day ago 3 replies      
730mbps to fast.com while only 700mbps on speedtest.net (with 853mbps up).
techaddict009 1 day ago 1 reply      
No upload speed results?
arnorhs 1 day ago 0 replies      
man, i'd love to see something like this for twitch streams. i feel like i have problems with twitch streams at specific times per day.
wil421 1 day ago 0 replies      
On my laptop:

First test: 55mbps

Second: 35mbps

Third: 22mbps

Fourth: 22mbps

Speedtest: right at 36mbps every time.

It seems to be more stable on my cellphone.

philjackson 1 day ago 4 replies      
SPEED MEGATHREAD, post your speed/location/ISP below here:

44Mbps / London, uk / BT

known 1 day ago 0 replies      
jefurii 1 day ago 2 replies      
Yawn, only checks download speed.
developer545 1 day ago 6 replies      
It's surprising that people on HackerNews don't seem to understand the basics of how the Internet works.
Chrome removes Backspace to go back chromium.org
559 points by ivank  15 hours ago   467 comments top 94
klodolph 9 hours ago 10 replies      
I'm going to take a somewhat contrarian view and say, "Thank you, Chrome developers."

It's always easy to tell apart the people who know shortcuts from the people who don't, if you watch them use their computers. Someone with a few shortcuts on tap will zoom around their monitors, switching between mouse and keyboard only when necessary.

But there are a few shortcuts and user interface quirks that are too outdated and weird, and only serve to surprise and annoy us. They hail from an earlier age when people were still figuring things out in new UI paradigms. For example, these days, you expect the scroll wheel to scroll up and down in a scrolling view. However, my coworker was changing some project settings in Visual Studio the other day, and he tried to scroll through the settings while a drop-down menu in the settings had focus. It scrolled through the menu options, selecting them, instead of scrolling through the view. He had to cancel the changes he was making and open the window again, because he couldn't remember what was originally selected.

This is the worst kind of surprise. Something you thought was just supposed to let you look at different parts of the interface instead modified the data you were looking at. Backspace to go back is a similar surprise. It's supposed to delete text, but instead it can navigate away from a page entirely, if you are in the wrong state when you press backspace. For the same reason, I'm even getting sick of the old middle mouse button paste, since it's too easy to press when I'm scrolling.

Forward and back navigation are already mapped to alt + left and right arrow. Let's reserve backspace for deleting text. (I'm not happy that it sometimes means "navigate up a level", but that might tell you what kind of computer I had growing up.)

Jedd 13 hours ago 19 replies      
Chrome / Chromium have a habit of making these arbitrary changes that seriously annoy some (arguably small) percentage of their users, while claiming that it makes it simpler / better for everyone else, while explaining impatiently why it's infeasible to make the now missing feature a configuration option.

Evidently the kinds of people that can't be bothered going into the Advanced Configuration Settings page would be confused by an additional item in the Advanced Configuration Settings page.

I never used the backspace button for back (though it's probably what's mapped to my mouse button #8 - I'll know on the next upgrade), but I did get mightily annoyed by two changes a while back, and am always happy to bring them up whenever there's a story about Chrom* devs doing this kind of thing.

1. snap-to-mouse - while dragging the scrollbar, if you move the mouse further than ~80 pixels away from the scrollbar column, the page jumps back to the original location - apparently MS Windows users love this feature, but chrome/chromium is the only application I've found on GNU/Linux that does this, and

2. clicking inside the URL bar selects the whole contents - apparently MS Windows users are used to this feature, but chrome/chromium is the only application I've found on GNU/Linux that does this.

No idea what the defaults are for OSX, and, really, it doesn't matter - these features should be sensitive to extant defaults on whatever desktop environment the browser finds itself running on.

ruipgil 13 hours ago 7 replies      
I might be the minority here, but I think that using the backspace to go back is counter intuitive. In my mind backspace is to delete something, and I always worry about that.
floatboth 9 hours ago 5 replies      
Good. I always set browser.backspace_action to do nothing in Firefox, because this is SO infuriating. You think you have a text field focused but you actually don't (e.g. accidental mouse click removed the focus), you press Backspace and BOOM! suddenly you're on the previous page.

Ctrl/Cmd+[ and ] is the real shortcut!

oneeyedpigeon 13 hours ago 8 replies      
One of the contributors states:

"Building an extension for this should be very simple."

Why on earth isn't there just a generic keyboard-shortcut preference where I can control every possible browser action and its associated keyboard shortcut? In fact, why isn't this available at an OS level? Surely it would remove a lot of unnecessary duplicate code.

dandare 13 hours ago 3 replies      
"We have UseCounters showing that 0.04% of page views navigate back via the backspace button and 0.005% of page views are after a form interaction. The latter are often cases where the user loses data. Years of user complaints have been enough that we think it's the right choice to change this given the degree of pain users feel by losing their data and because every platform has another keyboard combination that navigates back."

Personally I am shocked that the Chromium team ignored years of user complaints before they decided to fix what their own usability studies found to be a worthless yet painful gimmick.

kibwen 13 hours ago 4 replies      
This is going to sound hyperbolic, I'm sure, but backspace-as-back is enormously important to my browsing experience. When I recently installed Ubuntu I had a small moment of panic when I realized that hitting backspace in Firefox performed some Ubuntu-specific thing rather than navigating backwards (as it does in Windows), but fortunately there's an about:config pref to re-enable the behavior. Just my two cents.
ChrisArgyle 13 hours ago 3 replies      
Analysis from Chrome devs here https://codereview.chromium.org/1854963002

Though I am a frequent user of backspace in Chrome I'm inclined to agree with their decision. Almost no one is using it and casual users are confused by it.

I'll just wait for someone to implement the feature in an extension.

FollowSteph3 13 hours ago 0 replies      
I think this is very good. I can't tell you the number of times I've lost form data by hitting backspace.

For those wondering how: if you use control-backspace to erase a word, etc., it's very easy to miss, especially as you transition between word delete and single character delete.

The other common error case is when you think you're editing in a field and you're actually not; bam, you just lost all your form data.

I also like the idea that backspace is for text editing and not for a second feature such as navigation. For Enter, yes, but not for backspace.

EdSharkey 13 hours ago 3 replies      
This feels a bit like how Esc was nerfed over the years in Firefox and others until it essentially did nothing. It used to mean STOP. All sockets were closed, the page stopped loading, and I think way waaay back, even animated gifs stopped cycling and JavaScript timeouts and intervals were cancelled.

Single-page webapps were the death of Esc, it was too confusing to users to have a page suddenly hang because they pressed Esc for some reason and all the XHR connections silently closed. "Stopping" just no longer made sense.

Just going to need to train the old timers on the new key strokes. It is sad though when convenient controls are taken away.

gjvc 14 hours ago 3 replies      
This is most annoying. I have used this for the past twenty years and have not lost form data using it. In any event, chrome seems to remember form contents upon navigating back to a form page.

Leave my muscle memory alone please.

pfarnsworth 10 hours ago 0 replies      
Thank GOD. So many times I've been filling out forms and sometimes I hit backspace to delete something, and maybe I clicked on a dropdown, but it goes back one page and I lose everything. Not the end of the world, but pretty annoying and I'm glad they're removing this.
andrei_says_ 17 minutes ago 0 replies      
It's the most frequently used key for me when I browse.

Any way to add it back? Maybe an extension?

Kadin 8 hours ago 0 replies      
Striking a blow for mediocrity. Ugh.

If there was really a problem with data loss, the better solution would seem to be warning the user before navigating away from the page. Removing a widely-used single-key behavior in order to protect users from themselves seems like a bad prioritization.

It'd be nice if we could still have software that is unashamedly not trying to target some sort of Archie Bunker "low information" user. Even the big Linux distros seem obsessed with making things easy for some hypothetical moron-in-a-hurry, at the expense of actual users who know what they're doing. It's unfortunate, and it seems to be a sort of antipattern that's infected a lot of software design. It wasn't always this way: there used to be an expectation that users would learn to use software, and that like any tool, if misused you could mess things up. Somewhere along the line, we've decided that it's unacceptable to tell users that they need to learn how to use software instead of blindly stabbing at it and expecting it to protect them.

I'm not against sane defaults or warning users before they really do something horrible, but the current trend towards ripping out anything and everything that might possibly be 'confusing' seems to be far overstepping the mark.

Firefox isn't much better, but at least they haven't Nerfed the back button.

spo81rty 13 hours ago 1 reply      
This has always been annoying when doing it on accident. Good riddance!
dghughes 32 minutes ago 0 replies      
Wouldn't it make more sense to have a pop up "Are you sure you want to navigate away?" solution instead?

This is the very definition of throwing the baby out with the bathwater.
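
Pages can already opt into exactly that confirmation themselves via the standard `beforeunload` event. A minimal sketch, assuming a single form on the page (the dirty-tracking helper is illustrative; browsers show their own generic prompt and ignore any custom text):

```javascript
// Track whether the form has unsaved edits, and ask the browser to
// confirm before any navigation (back, reload, close) would lose them.
// The decision logic is kept separate from the DOM wiring so it can
// run outside a browser too.
function makeDirtyTracker() {
  let dirty = false;
  return {
    markDirty: () => { dirty = true; },
    markSaved: () => { dirty = false; },
    shouldWarn: () => dirty,
  };
}

// Browser-only wiring; skipped in non-browser environments.
if (typeof window !== 'undefined' && typeof document !== 'undefined') {
  const tracker = makeDirtyTracker();
  const form = document.querySelector('form');
  if (form) {
    form.addEventListener('input', tracker.markDirty);
    form.addEventListener('submit', tracker.markSaved);
  }
  window.addEventListener('beforeunload', (event) => {
    if (!tracker.shouldWarn()) return;   // nothing to lose, navigate freely
    event.preventDefault();              // standard way to request the prompt
    event.returnValue = '';              // legacy requirement in older Chrome
  });
}
```

Sites that do this are protected regardless of the backspace change, since the prompt fires on any navigation, not just the keyboard shortcut.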

dhd415 9 hours ago 1 reply      
I think comment #32 (https://bugs.chromium.org/p/chromium/issues/detail?id=608016...) is worth highlighting:

 If you can fill out a formular field correctly without losing focus, you are not part of Chrome's target audience. edit: Had to type this four times due to accidently going back.

itslennysfault 9 hours ago 0 replies      
master race!!!

...but seriously, if I had a dollar for every time I've tried to hit "delete" (backspace on mac) to delete something I had selected in a web app and had it navigate back losing my unsaved changes I'd have a couple bucks.

It's rare, but it's annoying when it does happen.

greggman 6 hours ago 0 replies      
Oh thank you thank you THANK YOU!!!!

I can't tell you how many times I've lost data because of backspace! Good riddance.

Now, please also get rid of pull-to-refresh in iOS Chrome, because that has lost me data a ton of times as well. I don't even know who uses that feature. I don't need to refresh most pages, and if I do there are better ways.

nikanj 8 hours ago 0 replies      
I can't count the number of times I've noticed a typo in a form, hit shift-tab one time too many or few, hit backspace and ended up losing all of the info I filled in. The forward button mostly just leads to "resubmit form data?", instead of bringing me back.
crazygringo 13 hours ago 1 reply      
Finally! It's about time. I don't know who ever thought having a command that didn't use a modifier key was a good idea -- it's not just about losing form data (even if that's protected against), a webpage can have all sorts of "state" you don't want to lose.

Also, what's so hard with tapping Cmd+Left or Ctrl+Left to go back? It's all I've ever done, incredibly intuitive, and simply to do with one hand (using the right Cmd button), at least on most keyboards I've seen.

mcrmonkey 1 hour ago 0 replies      
ffs this is stupid. Backspace has always been "back" in browsers and it really vexes me when some versions of Firefox on some Linux distros do this. Two hands have to be used to action Alt+Left because the right Alt key is either not mapped (on some OSes) or is AltGr. Backspace works well when moving quickly too: one finger from the home row and bam.

Rather than this being the fix, they should probably look at the bug that's causing the user to go back when the form element is focused.

What's next? Take away the space bar for moving through the page?

An about:config option needs to be present so the user can switch to the behavior they want. Sure, extensions can fix this too, but I don't really want a 3rd-party extension to re-enable what's a tried and tested keyboard shortcut. Additionally, what happens if that dev's account gets hacked and the extension modded for malice? Or if the dev pulls the thing, in a way similar to the Node.js module issue a month or so ago?

This part is worrying though:

We have UseCounters showing that 0.04% of page views navigate back via the backspace button and 0.005% of page views are after a form interaction.

Where is that data being gathered from and how?

Additionally what is classed as a form interaction ?

_pferreir_ 8 hours ago 0 replies      
As a web application developer, I second the motion to officially thank the Chrome development team for this. "Backspace" triggering "back" is a usability disaster, and not only for inexperienced users. We recently had issues with a 3rd party editor widget losing focus due to a bug, which led to people accidentally triggering "back" and losing their data (it was a rich text field, so you can imagine how much of a problem that was). Sure, the problem here was the widget, but using such a commonly pressed key as the shortcut for a potentially destructive operation is a recipe for disaster. More advanced users have the option to use a custom extension, or even mouse gestures. Just develop an "Advanced Chrome" plugin and the problem will be solved.

As a side note, it's interesting to see how such a small change (which, as mentioned above is even reversible) can trigger such an outcry. I've read stuff such as "I've been using this shortcut for 20 years" or "I don't want an extension"... are those even arguments? Yes, applications should be "user-centred" but the "user" here is a collective of thousands or millions of people with their own incompatible opinions. There is a (very good) reason for this change and I've seen zero achievable solutions that would not imply it.

mstade 9 hours ago 1 reply      
I wonder if this is in any way related to the exceptionally annoying thing on google.com, where if you hit backspace it doesn't navigate back, but starts removing characters from your search. It does this with other keypresses too, presumably so you can just keep typing till you find whatever you're looking for, but it's a flagrant disregard for my action of moving focus from the input field.

In any event, I use backspace to navigate back all the time, so this is sure to annoy me to no end. Especially since I use multiple browsers, and it'll be hard to break habits. Ah well..
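
The google.com behaviour described above amounts to a global key listener that refocuses the search box. A hedged sketch of the technique (not Google's actual code; the selector is illustrative):

```javascript
// Route stray keystrokes to a search box: if a printable key (or
// Backspace) is pressed while the box is unfocused, focus the box so
// the browser's default text-editing action lands there instead of
// triggering page-level shortcuts like backspace-as-back.
function isPrintableKey(event) {
  // Single-character keys without modifiers count as typing.
  return event.key.length === 1 &&
         !event.ctrlKey && !event.metaKey && !event.altKey;
}

if (typeof document !== 'undefined') {
  document.addEventListener('keydown', (event) => {
    const box = document.querySelector('input[type="search"]'); // illustrative
    if (!box || document.activeElement === box) return;
    if (isPrintableKey(event) || event.key === 'Backspace') {
      box.focus(); // the default edit action then targets the focused input
    }
  });
}
```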

jneal 13 hours ago 0 replies      
I've personally always used alt+left to go back. I know backspace does the same thing, but the only reason I know that is because I seem to hit it more frequently than you'd expect while not focused on a form field, causing my browser to go back unexpectedly. I've never lost data, though; it always seems to persist when I go forward.
djwbrown 8 hours ago 0 replies      
Cumulative time wasted using 'Command+[': none.
Cumulative time wasted due to overloading of the backspace key: hours.

Relying on current context to determine the behavior of backspace was a terrible idea from the start. To hell with your muscle memory. Re-learn a shortcut that makes sense, and which will save you time one day, rather than insisting with hacker-machismo that you've never lost data in a form.

retbull 8 hours ago 0 replies      
Good fuck that was annoying. I actually came up with a new work flow for all browsers because of this. I always open links in a new tab so if I want to go back I have to close the tab and if I want to go forward I will middle click to open in a new tab. This only falls apart when I run into a multi-page form or application that requires text. When that happens I hate that backspace goes back.
pbiggar 8 hours ago 0 replies      
Well done Chrome. While it's the way that I go back and I now need to change my habits, this is the kind of hard decision that you need to make to have a really great product. They weighed the upsides and downsides, and pissed off a small subset of people (esp on HN who are likely to be the backspace-as-back users) to make a better experience. Bravo!
slavik81 6 hours ago 0 replies      
My first instinct was to bemoan its loss, but after thinking about it, I make this mistake far too often.

I actually just lost a draft of an annual self-assessment to this. I wanted to delete some text, but I guess I didn't have focus in the text box, and hit back. The form was created by an awful website (PeopleSoft/Oracle), so hitting forward didn't bring my data back.

Sure, it was just 20 minutes of work. Sure, a better website would have had the fields autosaved, or at least not have broken the browser autofill. Sure, I could have written it in a different program and then pasted into a browser.

But seriously, that should never happen. Not like that.

Hermel 8 hours ago 0 replies      
Finally! I don't know how often I accidentally navigated away from a page by pressing backspace while writing in a textbox.
davb 13 hours ago 0 replies      
They say 0.04% of page views are a result of pressing backspace. 0.04% sounds small, but imagine how many page views per month there are, globally, with Chrome. That's a significant number.

Backspace sure is an unusual navigation choice these days, and perhaps wouldn't make sense to code into new software. But in browsers, backspace to navigate back is expected behaviour.

This isn't the first time the Chrome or Chromium teams have made sweeping changes based on usage stats, pissing off the minority who use those features and pushing ever closer to a browser with only the lowest common denominator features that everyone uses.

jbb555 13 hours ago 0 replies      
They don't spend any time fixing everything that's broken with modern computers, instead they spend time changing things that weren't broken. Great.
Pharylon 5 hours ago 0 replies      
I guess I can uninstall Backstop now (a Chrome extension that literally exists only to disable backspace to go back). I've been running it for years now.

For you 1% of people that actually use the Backspace key for going back, I'm sure someone will come up with an extension to re-enable it, don't worry.

Mithaldu 11 hours ago 1 reply      
So they applied the wrong fix, to a problem that had been solved a decade ago.

The problem: Moving away from a form can result in data loss.

Their solution: Make it harder to move away?

The actual solution implemented more than a decade ago: Cache history completely and make it easy to move forward and backward in a tab's history while maintaining form contents.
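
Where the browser's history cache falls short, a site can approximate that solution itself. A sketch (the storage key and field handling are illustrative) that persists form fields to `sessionStorage` so a back-and-forward round trip restores them:

```javascript
// Save a draft of the form on every edit and restore it on load, so
// navigating away and back does not lose the user's input. The
// (de)serialization helpers are plain functions, separate from the
// browser-only wiring.
const STORAGE_KEY = 'form-draft'; // illustrative

function serializeFields(fields) {
  return JSON.stringify(fields); // fields: plain { name: value } object
}

function deserializeFields(json) {
  try {
    return JSON.parse(json) || {}; // null/empty input yields an empty draft
  } catch {
    return {}; // corrupted drafts are discarded rather than crashing
  }
}

if (typeof document !== 'undefined') {
  const form = document.querySelector('form');
  if (form) {
    // Restore any saved draft (runs on back/forward navigation too).
    const saved = deserializeFields(sessionStorage.getItem(STORAGE_KEY));
    for (const el of form.elements) {
      if (el.name && saved[el.name] !== undefined) el.value = saved[el.name];
    }
    // Persist the current field values on every input event.
    form.addEventListener('input', () => {
      const fields = {};
      for (const el of form.elements) {
        if (el.name) fields[el.name] = el.value;
      }
      sessionStorage.setItem(STORAGE_KEY, serializeFields(fields));
    });
  }
}
```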

whoisthemachine 5 hours ago 0 replies      
Positive change in my book. This key was always overloaded, leading to unintentional usages. Using backspace as a "navigate back in history" shortcut never worked reliably for me in any of the browsers I've used extensively (Chrome, FF, and IE).
brudgers 12 hours ago 1 reply      
Having a friend who often operates their keyboard by the old stick between their teeth method, I'd like to see an analysis demonstrating that the breaking change improves accessibility. Particularly since the alternative posed in the thread is the chorded alt-left.
davesque 9 hours ago 0 replies      
I agree with this. Accidentally going back when you lose focus on a text field is super annoying.
marcusarmstrong 11 hours ago 0 replies      
Finally! I can get rid of my third party extension to get rid of this insane behavior.
djrconcepts 4 hours ago 0 replies      
Great News! I have never intentionally hit backspace to go back, and yet I've hit backspace on accident and been taken many times. Quite annoying when it happens.
soheil 7 hours ago 0 replies      
This is actually a no-brainer. Many times I have accidentally tapped on my touchpad while typing, taking the focus away from a textarea, then noticed a typo and tried to delete it, and baaam: you're no longer on that page and possibly all the text you typed has gone into the abyss.
avehn 7 hours ago 1 reply      
People who cannot use a mouse or see a screen, who rely exclusively on keyboard commands, will be greatly affected by this change.

Global Accessibility Awareness Day https://www.w3.org/WAI/perspectives/

emodendroket 9 hours ago 0 replies      
I have to say, I find it a lot more common that I accidentally lose focus inside a text box and go back than that I intentionally use that shortcut.
okonomiyaki3000 11 hours ago 0 replies      
Thank the gods! I can't imagine who ever thought backspace as back was a good idea in the first place.
monochromatic 1 hour ago 0 replies      
Morons don't know how to use our web browser? Better break it!
hkjgkjy 8 hours ago 0 replies      
math0ne 4 hours ago 0 replies      
The amount of times I've accidentally navigated away from a form by hitting that damn shortcut!
copperheart 4 hours ago 1 reply      
Big thanks to the Chrome devs for this. I applaud and personally appreciate the decision, but wonder why a navigation shortcut like this couldn't be made into an option for others to enable or disable based on their preference.
jasonm23 9 hours ago 0 replies      
Wontfix - just apply the worst possible, cruddy fix and shut down discussion.

Forgive me if I do not applaud.

daveloyall 7 hours ago 0 replies      
I got to this thread late, sorry.

Here's the thing about Chrome... They don't want power users.

Remember when you first switched to Chrome? That sleek little pastel-colored window, elegantly fast. It worked on most websites. It was notably fast on Gmail, which at the time was the slowest website you spent a lot of time on.

You didn't mind that Chrome wasn't configurable. You might even have thought that it would become more configurable over time.

You were wrong. You were never the target audience.

I once had an infuriating (to me, at the time) argument with a Googler who was responsible for an internal app which performed better in Firefox than in Chrome. He said "Use Firefox!". I didn't get it at the time. He was a power user, all his co-workers were power users, and thus the internal app was only used by power users... They all used Firefox! At least for real work... (Pretty sure they all had Chrome on hand for Mail and Maps, etc...) Anyway, the internal app correctly targeted Firefox.

Meanwhile, back in time, when Chrome came out, Firefox started hemorrhaging users. Mozilla reacted. Today, it's as fast or faster than Chrome for most sites I use. And it's configurable!

If you are reading this, and don't have the latest beta or Nightly FF installed, you should go do so now! Really, trying Firefox after being away for years will make you smile and renew your faith in humanity. :)

But unfortunately, this story doesn't end there...

I think some Firefox devs see Chrome as a role model... Maybe they want to compete with Google for those users who are not you! As a small example, I offer this: https://bugzilla.mozilla.org/show_bug.cgi?id=1118285 Note the posts that are marked "Comment hidden (advocacy)". You can click the [+] to show what was hidden (comments from power users).

There are niche browsers for power users, and there are extensions... But there isn't a mainstream browser for power users because power users aren't mainstream.

I'm just describing the problem (well, I hope!), I'm sorry but I don't have a solution.

Osiris 9 hours ago 0 replies      
This is why I switched to Vivaldi. All the shortcut keys are customizable. It also includes mouse gestures for quick back/forward with the mouse. I prefer having the choice than someone else dictating.
jdelaney 4 hours ago 0 replies      
I wrote a quick Chrome extension to fix this for those interested.

Extension: https://chrome.google.com/webstore/detail/back-to-backspace/...

Source: https://github.com/j-delaney/back-to-backspace
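
The heart of such an extension is small. A content-script sketch (hypothetical, not the linked extension's actual source) that re-binds Backspace outside editable fields:

```javascript
// Restore backspace-as-back: navigate history on Backspace, but only
// when focus is not in an editable element, mirroring Chrome's old
// built-in behavior (Shift+Backspace goes forward).
function isEditable(el) {
  if (!el) return false;
  const tag = (el.tagName || '').toLowerCase();
  return tag === 'input' || tag === 'textarea' || tag === 'select' ||
         el.isContentEditable === true;
}

if (typeof document !== 'undefined') {
  document.addEventListener('keydown', (event) => {
    if (event.key !== 'Backspace') return;
    if (isEditable(document.activeElement)) return; // let it delete text
    event.preventDefault();
    if (event.shiftKey) history.forward();
    else history.back();
  });
}
```

Loaded on every page via a `content_scripts` entry in the extension manifest, this is presumably close to what the linked extension does.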

grandalf 12 hours ago 2 replies      
I've never used backspace for nav intentionally, and it has caused annoying data loss for me a few times.

It's never made sense to me why this behavior was ever added to browsers. The logical choice would have been the left arrow key (since there is a corresponding right arrow).

azinman2 9 hours ago 1 reply      
I love how most of the comments on the bug tracker are "I've never lost data so therefore no one has and this should go back because I'm used to it."

Typical myopic power users...

kbenson 9 hours ago 0 replies      
About time! I've been seriously annoyed a number of times when doing text/data entry (on this very site!) where somehow I removed focus from the input, and then tried to erase some text, only to find it going back a page, and my input is gone when I browse back forward (this problem is exacerbated by inputs that don't exist until Javascript creates them from some page event).

When using a laptop with a sensitive touchpad, this can get really bad.

XorNot 3 hours ago 0 replies      
I'm surprised people think this is a bad idea? I know of no one who uses backspace this way in the browser.
eropple 8 hours ago 0 replies      
So I'm going to need another Chrome extension, further exacerbating the gong show that is Chrome battery life, for something I use all the time.


usaphp 9 hours ago 0 replies      
I am glad they are making this change. I've lost my form data by accidentally going back while trying to erase something in a text field so many times.
swingedseraph 9 hours ago 0 replies      
Do I like this? Yes. Should it be an immutable part of the interface and not configurable? No. That's ridiculous.
jijojv 7 hours ago 0 replies      
Thank you. This is the right fix for 99% users who'd otherwise lose data
lr 10 hours ago 0 replies      
On OS X, Command-left bracket has worked on Chrome, Safari, Firefox (and probably more) for years. Not sure about Windows or Linux, but keyboard shortcuts are very well established across browsers in this way (like Command-L, and all of the Emacs bindings like Control-A, Control-E, etc.).
Kiro 10 hours ago 0 replies      
As someone who constantly gets screwed by this: finally!

Example from literally one minute ago: a cool thing was going on in a Twitch stream and I wanted to hype in the chat, misclicked the chat box, so backspace went back to the stream list instead, making me miss the moment.

perezdev 8 hours ago 1 reply      
>Are you suggesting that the only remaining options are Alt-Left (a two-hand key combo for that I have to move my mouse hand towards the keyboard, and then back)

I guess no one told this guy that a standard keyboard has two ALT keys.

ogreveins 11 hours ago 0 replies      
I would very much like them to revert this change. Using backspace to go back has been in my browsing habits since I began using the internet. Alt+Left or Right is annoying. Either give us a checkbox or revert it. Please. Pretty please.
henvic 6 hours ago 0 replies      
What the hell? Just ask the user if he intended to go backwards when there is a form on focus or something.
mixedCase 8 hours ago 0 replies      
Why must every browser out there suck? Servo seems like the last hope if it gets integrated into FF or a FF-like browser.
rocky1138 12 hours ago 0 replies      
I don't mind removing backspace, but this better not remove the functionality of my back button on my mouse. That's one of the worst things about having to boot into OSX at work.
tehchromic 5 hours ago 0 replies      
Hip hip hooray!!!
dc2 9 hours ago 0 replies      
I just hit backspace to go back to HN after reading this... and it didn't work.
rietta 8 hours ago 0 replies      
NOOO! That's how I go back! And use the space bar to scroll up and down!
jordache 11 hours ago 0 replies      
On the topic of annoying details that browser makers overlook:

In Safari, when opening a new tab, the focus is not on the address bar. I always have to hit Cmd+L before I start typing. The address bar focus works when you don't have a homepage defined (so a blank page), but who doesn't configure a default home page? Arrghh.

yAnonymous 12 hours ago 0 replies      
I hope they also remove support for forward/back mouse buttons. I keep accidentally pressing those.
sammorrowdrums 12 hours ago 0 replies      
Good riddance! This is such a terrible double-purpose binding. When bindings seriously are common typing commands, that are not just bound, but bound in a way that is often destructive, it just needs to die.

Anyone who thinks this shouldn't die is basically a bad person. It was an affliction, and one of the poorest design choices in history. :-p

jdhzzz 10 hours ago 0 replies      
Thank you.
lpsz 13 hours ago 1 reply      
Sometimes, it's better without these features. E.g. on Mac, dragging left in the browser is a gesture for going back to the previous page, and I can't count how many times I've accidentally triggered that while filling out a web form or interacting with a page. Isn't the back button and the keyboard shortcut enough?
hosh 5 hours ago 0 replies      
The very first comment says:

"How is someone who grew up in terminal times expected to navigate back when using a two-button mouse?"

I grew up in terminal times. I was lucky that, while growing up, I had access to my father's Unix account through the university. Not only that, I do all of my development work on the terminal (via tmux, vim, and spacemacs). I like the terminal. I love keyboard shortcuts. Keeping my hands in the home row -- awesome!

The backspace in the browser has always struck me as a misfeature. I've lost data when typing in forms.

In contrast, when I browse a page, I rarely hit the back button. I'm more likely to open a link in a new page when I am doing serious research.

Times move on. Some things are lost, and our civilization is not for the better. This is not one of those cases.

bluhue 7 hours ago 0 replies      
Space-bar next!
autoreleasepool 8 hours ago 0 replies      
I can finally uninstall BackspaceMeansBackspace!
ravenstine 8 hours ago 0 replies      
OJFord 7 hours ago 0 replies      

I've only ever done this by accident.

optimuspaul 6 hours ago 0 replies      
finally, now maybe I can go back to Chrome.
givinguflac 9 hours ago 0 replies      
One more plus for Vivaldi.
logicallee 4 hours ago 0 replies      
I'm a bit late to the party (already 408 comments) but, guys, here is an example of what happens currently in many browsers:




(this example is not prescriptive, it's just what happens)

At any rate, the GIF shows the current situation. You should watch it.



I actually wrote to this app creator that they should throw up a confirmation window (like these: https://www.google.com/search?q=confirm+navigation&tbm=isch)

but the fact is that the browser is the one that decided to navigate away. Now what's very interesting is that even in this, HN's, thread we have people saying "Yes!!!" and people saying "No!!!" to the change.

So people who simply have never used backspace for navigation, like me, have many times accidentally touched backspace or thought we were focused on a form, and ended up losing data (because the page didn't throw up a confirmation window after navigating back, and after clicking forward the page is blank again.) While other people, who have no convenient single key they can use to navigate back, have come to rely on it. I'm not sure what the solution is, but here's the current situation so everyone understands it.

smegel 7 hours ago 0 replies      
So it takes 51 versions before common sense kicks in?

Have a pat on the back.

kaonashi 9 hours ago 0 replies      
Next up: remove form submit on return from textarea fields.
april1stislame 9 hours ago 0 replies      
Never lost data on Firefox by going forward after going back by accident while writing in a form, but whatever... Google only wants dumb users who can't see past what they're doing.
kruhft 7 hours ago 0 replies      
Good, now I don't have to fix one of my sites to handle backspace 'properly' that uses a custom keyboard handler for input. What a pain.
ryanlol 9 hours ago 0 replies      
Because adding customizable keybinds is too difficult? Hell, if configs looking scary to normal users is a problem why not just have a json/sqlite/whatever file in the profile directory?
dredmorbius 6 hours ago 0 replies      
Google Chrome have fixed a longstanding UI/UX bug and state overload of the Backspace key

Backspace key in Chrome browser no longer navigates backward, but instead is limited to its initial and rightful role: deleting the previous character under the pointer (mouse / text cursor).

I swear by His Noodliness I'd ranted on this at G+ some time ago, though unfortunately, since Microsoft Bing Search isn't available on Google+, I cannot actually find shit in a useful fashion.

That said, I applaud this change, thumb my nose at the fuckwits who are bitching about it, and note again the Flaw of Averages: One Size Fits None.

As to the justification of not relying on Backspace for Navigation

I ordinarily take exception to blame-the-user / taunt-the-user practices, and should hasten to explain my own here.

Learning a New Backward Navigation Method is a Temporary Training Inconvenience.

Repeatedly losing Vast Quantities of Newly Composed Content is an Irrevocable User State Loss.

Among the canons of human-computer interface design is this: Thou shalt not fuck with thine users' State.

Which by definition makes those who fail to make this distinction fuckwits. Perhaps only ignorant fuckwits, a curable state, though quite possibly and regrettably stupid fuckwits, a State of Extreme Durability in my experience.

The larger fault is arguably the lack of a clear stateful separation of editing from browsing modes in Web browsers. Editing involves creating novel user state which can be easily lost through capricious client behavior, such as, to draw on a randomly selected example, fucking overloading the backspace key with the behavior of "delete my highly considered and Very Important Message to the Universe by immediately and irrevocably moving off this page".

It's with some irony that I note that console-based Web browsers rarely have this problem. The w3m browser, for example, when editing a text field, dumps the user to a local full-powered editor, and in fact defaults to the one specified by the user's environment ($VISUAL, $EDITOR, etc.). The result is that a "primitive" browsing tool actually has an exceptionally powerful editing environment.

(At this point, the Emacs users in the room are of course laughing and pointing at me, but they in fact entirely substantiate my claim in doing so. And, my dear good friends, I've given not inconsiderable thought to actually joining you, as it seems that via Termux, a commandline environment for Android, emacs and all its capabilities are in fact available to me, and may vastly surpass the Android applications environment in capabilities. The fact that Viper is a well-established and long-standing component of the Emacs landscape means that the One True Operating System now does in fact have a useful editor.)

Chrome has other utterly unredeemable failures on Android, including an utter lack of ad-blocking capabilities. But for the task of composing and editing, this is a nice touch.

But it does raise one further point: why is editing via Web tools so abysmally poor?

Despite various deficiencies, the G+ app actually compares favourably to a number of other platforms, and virtually all Web-editable tools. Reddit and Ello stand out particularly. As much as I love the Reddit Enhancement Suite full-screen editor (it's a browser extension for Firefox and Chrome desktop), it's not available on Android. Meaning I've got to jump through Multiple Divers Hoops in order to compose long-form content on Reddit. Android's various content-creation deficiencies make this a tedious process. This accounts for some of my Diminished Output in recent months.

In particular, Firefox/Android has proven Exceptionally Capable at Losing My Shit, at least in memory not exceptionally distant (considering I've owned my present Samsung[tm] Infernal Device[r] only since October last), a characteristic which makes me Exceptionally Leery of Embarking on Enterprises of Extensive Prose Composition within that context.

Given the, shall we say, exceptional advancement of text-composition in other contexts, I find this particular failure mode of the Browser Development Community in General most unpardonable.


soperj 9 hours ago 0 replies      
And i'll never use chrome again.
imaginenore 13 hours ago 0 replies      
exabrial 13 hours ago 0 replies      
THANK YOU!!!!!! Progress
optforfon 13 hours ago 1 reply      
Anyone want to place bets on how long till Firefox copies them?
IvanK_net 11 hours ago 1 reply      
I wanted to use Ctrl+N, Ctrl+O and Ctrl+T shortcuts in my webapp. I reported a bug 3 years ago https://bugs.chromium.org/p/chromium/issues/detail?id=321810 which is not fixed yet, but they have "fixed" Backspace ... that seems crazy to me.
alexc05 10 hours ago 1 reply      
> "We're doing this via a flag so that we can control this behavior should there be sufficient outcry."

I love that they decided to do this. I think the justification for taking it away is really good.

I also think that the decision to disable via "flag" shows some prescience with respect to how the public reacts to things.

Great move and a template for "sound product development".

Online tracking: A 1-million-site measurement and analysis princeton.edu
540 points by itg  14 hours ago   250 comments top 28
randomwalker 13 hours ago 11 replies      
Coauthor here. I lead the research team at Princeton working to uncover online tracking. Happy to answer questions.

The tool we built to do this research is open-source https://github.com/citp/OpenWPM/ We'd love to work with outside developers to improve it and do new things with it. We've also released the raw data from our study.

brudgers 10 hours ago 2 replies      
Google has a vested interest in information leakage. I have a suspicion that the Chromium project expresses a strategic desire to shape the direction of browser development away from stopping those leaks. The idea of signing into the browser with an identity is a core feature and in Google's branded version, Chrome, the big idea is that the user is signed into Google's services.

Google only pitches the idea of multiple identities in the context of sharing devices among several people: https://support.google.com/chrome/answer/2364824?hl=en and even then doesn't do much to surface the idea. https://www.google.com/search?hl=en&as_q=multiple+identities...

ultramancool 11 hours ago 4 replies      
As soon as I saw these APIs being added I immediately dropped into about:config and disabled them. How the hell do these people think this is a good idea to do without asking any permissions?

Put these in your user prefs.js file on Firefox:

user_pref("dom.battery.enabled", false);

user_pref("device.sensors.enabled", false);

user_pref("dom.vibrator.enabled", false);

user_pref("dom.enable_performance", false);

user_pref("dom.network.enabled", false);

user_pref("toolkit.metrics.ping.enabled", false);

user_pref("dom.gamepad.enabled", false);

Here's my full firefox config currently:


Privacy on the web keeps getting harder and harder. Of course this should only be used in conjunction with maxed out ad blockers, anti-anti-adblockers, privacy badger and disconnect.

We need browsers to start asking permission. When you install an app on Android or iOS it says "here's what it's going to use, do you want this?". The mere presence of the popup would annoy people and prevent them from using these APIs.

rdancer 13 hours ago 3 replies      
This is the kind of nonconsensual, surreptitious user tracking that the EU privacy directive 2002/58/EC concerns itself with, not those redundant, stupid cookie consent overlays.
f- 10 hours ago 0 replies      
Although the emphasis on the actual abuse of newly-introduced APIs is much needed, it is probably important to note that they are not uniquely suited for fingerprinting, and that the existence of these properties is not necessarily a product of the ignorance of browser developers or standards bodies. For the most part, these design decisions were made simply because the underlying features were badly needed to provide an attractive development platform, and introducing them did not make the existing browser fingerprinting potential substantially worse.

Conversely, going after that small set of APIs and ripping them out or slapping permission prompts in front of them is unlikely to meaningfully improve your privacy when visiting adversarial websites.

A few years back, we put together a less-publicized paper that explored the fingerprintable "attack surface" of modern browsers:


Overall, the picture is incredibly nuanced, and purely technical solutions to fingerprinting probably require breaking quite a few core properties of the web.

pmlnr 12 hours ago 2 replies      
So... what we need is a browser that says it supports these things, but blocks them or provides false data on request, and looks as ordinary as possible to "regular" browser fingerprinting.

Is anyone aware of the existence of one?

jimktrains2 13 hours ago 6 replies      
NoScript is an all-or-nothing approach. Are there any JS-blockers that allow API-level blocks?
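For what it's worth, the mechanism such a blocker would use is straightforward: shadow an API property before page scripts run. A minimal sketch with illustrative names (`blockAPI` and `fakeNavigator` are invented for the example; a real extension would target e.g. `Navigator.prototype` from a content script that runs first):

```javascript
// A minimal sketch of API-level blocking, the kind of thing a privacy
// extension could run before any page script executes.
function blockAPI(obj, name) {
  Object.defineProperty(obj, name, {
    value: undefined,     // the API now appears absent
    writable: false,
    configurable: false,  // page scripts can't redefine it back
  });
}

// Stand-in for the real navigator object:
const fakeNavigator = { getBattery: () => Promise.resolve({ level: 0.5 }) };
blockAPI(fakeNavigator, 'getBattery');
console.log(fakeNavigator.getBattery); // undefined
```

Because the property is made non-writable and non-configurable, later page scripts cannot restore the blocked API.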
anexprogrammer 12 hours ago 3 replies      
Colour me unsurprised. Disappointed though.

I'm glad I disabled WebRTC when I first discovered it could be used to expose local IP on a VPN.

These "extension" technologies should all be optional plugins. Preferably install-on-demand, but a simple, obvious way to disable them would be acceptable (i.e. more obvious than about:config).

Not a great deal can be done about font metrics, other than my belief that websites shouldn't be able to ferret around in my fonts to see what I have. It's not like it's a critical need for any site.

codedokode 11 hours ago 1 reply      
Some methods of fingerprinting are probably used to distinguish between real users and bots. Bots can use patched headless browsers masquerading as desktop browsers (for example, as the latest Firefox or Chrome running on Windows). Subtle differences in font rendering or missing audio support can be useful for detecting the underlying libraries and platform. Hashing is used to hide the exact matching algorithm from scammers.

There are a lot of people trying to earn money by clicking ads with bots.

Edit: and by the way disabling JS is an effective method against most of the fingerprinting techniques.

cptskippy 13 hours ago 1 reply      
All of this makes me wonder whether some of these interfaces should be more closely guarded by the user agent.

Perhaps instead of a site probing for capabilities, they should instead publish a list of what the site/page can leverage and what it absolutely needs to work. Maybe meta tags in the head or something like the robots.txt. Browsers can then pull the list and present it to the end user for white-listing.

You could have a series of tags similar to noscript to decorate broken portions of sites if you wanted to advertise missing features to users and, based on what features they chose to enable/disable for the site, the browser would selectively render them.
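Purely as an illustration of this idea (no such capability manifest exists today), a browser could read a declared capability list and whitelist only those features. The tag format and the `parseCapabilities` helper below are invented for the sketch:

```javascript
// Hypothetical sketch only: a site declares the APIs it needs, and the
// browser whitelists just those. This manifest format does not exist.
const metaContent = 'required=fetch,websocket; optional=webaudio';

function parseCapabilities(content) {
  const caps = {};
  for (const part of content.split(';')) {
    const [key, list] = part.trim().split('=');
    caps[key] = list ? list.split(',') : [];
  }
  return caps;
}

console.log(parseCapabilities(metaContent));
// { required: [ 'fetch', 'websocket' ], optional: [ 'webaudio' ] }
```

The browser could then prompt the user once, per site, with exactly this list instead of probing-friendly silence.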

kardos 13 hours ago 3 replies      
So given this information, how can we poison the results that the trackers get?
wodenokoto 13 hours ago 0 replies      
What annoys me the most is how many useless cycles these trackers use to track me.
MichaelGG 11 hours ago 0 replies      
The WebRTC guys get around this by stating that fingerprinting is game over, so don't even bother. They ignore that they are going against the explicitly defined networking (proxy) settings. Browsers are complicit in this. If the application asks "should I use a proxy?", then silently ignores the answer wherever it wants, that's deceptive and broken.

There's still zero (0) use cases to have WebRTC data channels enabled in the background with no indicator.

If all these APIs are added, the web will turn into a bigger mess than it already is. They can't prompt for permissions too often, so they'll skip prompting, like WebRTC does.

ape4 11 hours ago 0 replies      
Seems like browsers should ask the user's permission to use these HTML5 features, then whitelist. For example, a site that does nothing with audio should be denied access to the audio stack.
pjc50 11 hours ago 1 reply      
I think it's time for HTML--, which would contain no active content at all and simply be a reflowable document display format.
makecheck 10 hours ago 0 replies      
Over 3,000 top sites use the font technique, and from the description this sounds really wasteful (choosing and drawing in a variety of fonts for no reason other than to sniff out the user).

Each font is probably associated with a non-trivial caching scheme and other OS resources, not to mention the use of anti-aliasing in rendering, etc. So a web page, doing something you don't even want, is able to cause the OS to devote maybe 100x more resources to fonts than it otherwise would?

A simple solution would be to set a hard limit, such as 4 fonts maximum, for any web site; and, to completely disallow linked domains from using more.

aub3bhat 12 hours ago 0 replies      
There is an acceptable tradeoff between pseudo-anonymous access through browsers and non-anonymous access through native apps.

To interpret this research as a reason for crippling the web or browsers would be a giant mistake. Crippling browsers will only work against users, who will then be forced into installing apps by companies.

Two popular shopping companies in India did exactly this: they completely abandoned their websites and went native-app only. This, combined with the large set of permissions requested by apps, led to a worse privacy experience for consumers. As the announcement of Instant Apps at Google I/O demonstrates, the web as an open platform is in peril, and its demise will only be hastened by blindly adopting these types of recommendations.

Essentially, the web as an open platform will be destroyed in the name of perfect privacy, only to be replaced by inescapable walled gardens. Consider instead that the web allows a motivated user to employ evasion tactics while still offering usability to those who are not interested in privacy, whereas native apps, for which Apple needs a credit card on file to install, offer no such opportunity.

I am happy that Arvind (author of the paper) in another comment recommends a similar approach:

"""Personally I think there are so many of these APIs that for the browser to try to prevent the ability to fingerprint is putting the genie back in the bottle. But there is one powerful step browsers can take: put stronger privacy protections into private browsing mode, even at the expense of some functionality. Firefox has taken steps in this direction https://blog.mozilla.org/blog/2015/11/03/firefox-now-offers-... Traditionally all browsers viewed private browsing mode as protecting against local adversaries and not trackers / network adversaries, and in my opinion this was a mistake."""


cdnsteve 13 hours ago 1 reply      
After reading this it makes me want to disable JavaScript entirely, along with cookies, and go back to text browsing. I've been using Ghostery on my phone, it's been pretty good.
wyldfire 13 hours ago 3 replies      
Whoa, what's the use case for exposing battery information?
radicalbyte 12 hours ago 0 replies      
Of course this is something you do. Throw it together with all of the other information you can glean from a browser (referrer, IP) and you can get a match with a very high confidence level.

Shops can do the same with baskets: you find that people are identified either by one very rare feature which recurs often, or by their little graph of 4-5 items which correlates 99% to them.

chatmasta 8 hours ago 0 replies      
If you want to see a live demo of all the ways your browser can fingerprint you, this is a great website: https://www.browserleaks.com/
buremba 13 hours ago 2 replies      
All these things make websites the new apps. Most probably we won't need many desktop applications in a few years.
youaretracked 6 hours ago 0 replies      
Since the original web based ad campaigns were launched we have been tracked. Serious web analytics companies know these tactics already.

So what exactly is the research contribution being made here? What's new and interesting?

id122015 12 hours ago 0 replies      
I think it's similar to how the Absolute Computrace rootkit identifies Android and Lenovo devices. Each hardware component has a unique ID: your Ethernet, Bluetooth, even microphones and batteries.
coygui 8 hours ago 0 replies      
Would it be more secure to use Tor than a traditional browser? The only drawback is the longer RTT.
jkot 13 hours ago 1 reply      
Malware filtering is needed.
tomkin 11 hours ago 1 reply      
Ahhh. Remember when this was just a Flash problem, and getting rid of Flash was going to rid the world of evil?

Spoiler: that didn't happen.

ysleepy 13 hours ago 2 replies      
Well, who would have guessed. Surprise surprise.

The web is such a shit technology.

JavaScript async/await implemented in V8 googlesource.com
507 points by onestone  3 days ago   217 comments top 25
tanto 3 days ago 7 replies      
The moment I started using async/await (with Babel) combined with the new fetch API, so many libraries became obsolete.

Getting data is as easy as:

  async function main () {
    try {
      const res = await fetch('https://api.github.com/orgs/facebook');
      const json = await res.json();
      console.log(json);
    } catch (e) {
      // handle error
    }
  }
So I'll be quite happy when this lands in modern browsers asap.

bigtones 2 days ago 1 reply      
Thank you to Microsoft and Anders Hejlsberg for inventing Async/Await (and to F# for the original inspiration)https://en.wikipedia.org/wiki/Futures_and_promises
hoodoof 3 days ago 1 reply      
This news is almost as important as the rest of ES2015 being implemented in Webkit.

async/await is the final piece in the puzzle of providing real solutions to callback hell in JavaScript.

ch49 2 days ago 5 replies      
Why is the "async" keyword needed? Can't the JS engine infer from the use of "await" in a function that the function needs to be async? I've been using async/await for a while now, and so many times I've introduced bugs because I forgot to put "async" in front of the function, or put "async" in front of the wrong function. It's simply annoying to have to go back and add "async" when, in the middle of writing a function, I realize I need to use await.
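One common answer is backward compatibility: "await" was a legal identifier name before ES2017, so without the "async" marker the parser can't tell `await (x)` apart from an ordinary function call in old code. The marker also changes the function's observable contract, which a small sketch can show:

```javascript
// `async` is part of the signature: an async function always returns a
// promise, even when its body never awaits, so callers can rely on that
// without reading the body.
async function marked() { return 42; }
function unmarked() { return 42; }

console.log(marked() instanceof Promise);   // true
console.log(unmarked() instanceof Promise); // false
marked().then((v) => console.log(v));       // 42
```

Inferring `async` from the body would mean a small edit inside a function silently changes its return type for every caller.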
dmihal 3 days ago 1 reply      
md224 2 days ago 0 replies      
Looking at Kangax's support table:


Apparently Microsoft Edge seems to have been the first browser to implement it... good job Microsoft!

andrewstuart2 3 days ago 10 replies      
I have yet to see a convincing argument that this feature is necessary or even helpful beyond one-liners. The Q promises API, to me, is the right way to reason about asynchrony. Once you understand closures and first class functions, so much about complex asynchronous flows (e.g. multiple concurrent calls via Q.all, multiple "returns" via callback arguments) become so simple. The "tons of libraries" argument doesn't make tons of sense either. I've done a lot of async and I've never needed anything beyond Q that I can recall.

This feels like a step toward added confusion rather than language unity. Much like the majority of ES6 feels to me: bolted-on features from popular synchronous languages that themselves are only now adding features like lambdas.

I don't want to write faux-event-driven code that hides the eventedness beneath native abstractions. And I definitely don't want to work with developers, new and old, trying to learn the language and whether/when they should use async/await, promises, or fall back to an es5 library or on* event handlers. I want developers who grok functional event-driven code with contextual closures.

ht85 2 days ago 2 replies      
Kind of OT, but can anyone share their experience using Babel's async/await in production instead of regular Promises?

I'd love to hear about people who have used it in large and complex projects, from a debugging standpoint.

As of now, using Bluebird (with its source in a different, blackboxed script), it is possible to follow the code execution through the event loop with async debugging, in a very elegant and enjoyable fashion.

I find async/await much more appealing when coding, but I'm worried about quality of life when hardcore debugging, as in my current project it can make me waste hours at a time when something completely fringe happens.

z3t4 1 day ago 0 replies      
Promises are like cancer, and async/await is just treating the symptoms.

  // Callback
  dataCollection.find('somethingSpecific', function getIds(dataArray) {
    var ids = dataArray.map(item => item.id);
    display(ids);
  });

  // Promise
  var dataArray = dataCollection.find('somethingSpecific');
  var ids = pmap(dataArray, function(item) { // Cancer cell
    return item.id;
  });
  pdisplay(ids); // Cancer cell

  // The cancer grows ...
  function pmap(dataPromise, fn) {
    return dataPromise.then(function(data) {
      return map(data, fn);
    });
  }

  // The cancer grows ...
  function pdisplay(dataPromise) {
    dataPromise.then(function(data) {
      display(data);
    }, function(err) {
      display(err);
    });
  }

serguzest 2 days ago 0 replies      
It has been in Microsoft Edge/Chakra for a while. But I couldn't make it work with Babel + webpack 2. I still needed Babel for static properties and React. It was either that webpack's parsing couldn't recognize async/await, or that webpack executes modules on Node.js, which didn't support async/await at the time. So bundling was failing.

I wonder if there are any babel/webpack gurus out there who can make it work?
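For what it's worth, one Babel 6-era approach was to compile async functions away before the bundler ever sees them, so neither webpack nor Node needs native support. A guess at a typical `.babelrc` for that (not the poster's actual config):

```json
{
  "presets": ["es2015", "react"],
  "plugins": ["transform-async-to-generator"]
}
```

The `transform-async-to-generator` plugin rewrites async/await into generator-based code; whether it fixes this particular webpack 2 setup is untested.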

Oh, btw, I recommend Windows users try Microsoft Edge for debugging and runtime inspection; it is so slick :)

rattray 3 days ago 0 replies      
> [esnext] prototype runtime implementation for async functions

Well, it claims to be a prototype. Can anyone from the team comment?

Quite exciting in any case!

lath 2 days ago 1 reply      
In the meantime there's the co library which provides the next best thing using yield.
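To illustrate, a stripped-down sketch of the idea behind co (not its actual source): drive a generator that yields promises, resuming it with each resolved value. Error propagation back into the generator via `gen.throw` is omitted for brevity.

```javascript
// Minimal co-style runner: each yielded promise pauses the generator
// until it resolves, then resumes it with the resolved value.
function run(genFn) {
  const gen = genFn();
  return new Promise((resolve, reject) => {
    function step(value) {
      const r = gen.next(value);           // resume with last resolved value
      if (r.done) return resolve(r.value); // generator returned: settle
      Promise.resolve(r.value).then(step, reject);
    }
    step(undefined);
  });
}

// Usage: reads almost like async/await, with yield in place of await.
run(function* () {
  const a = yield Promise.resolve(1);
  const b = yield Promise.resolve(2);
  return a + b;
}).then((sum) => console.log(sum)); // 3
```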
chukye 2 days ago 0 replies      
Man, I used to love the times when try/catch was used for exceptions only, and exceptions meant the program was in a bad state. I used to think that when you see a throw, something really bad is going on, not just a simple AJAX failure.

Don't know why people love async/await so much. At the end of the day, all of this (in Node land, for instance) will just be a function call into libuv; that will never change, because the pattern is really good. Why overcomplicate it?

johnhenry 2 days ago 2 replies      
I'm definitely rooting for the feature to be included in the spec as soon as possible, but I'm a little wary when features are added to the engine before being standardized. Object.observe, anyone?
ihsw 3 days ago 3 replies      
Realistically does this mean we will see async/await in node-v7.0?
dclowd9901 2 days ago 2 replies      
I think my problem with this approach is the way it throws away the functional programming paradigm JS has been sharpening over these last few years. When I see await, I'm reminded of Java or .NET, languages that didn't traditionally have functions as first-class citizens.

OTOH, I can't complain about removing dependencies like "when" or "bluebird", and it'll be nice to not have to simulate promise returns in testing.

jdgiese 2 days ago 0 replies      
Super excited to start seeing async/await in some of the browsers. Async/await makes certain function decorator patterns even more useful, e.g. http://innolitics.com/10x/javascript-decorators-for-promise-...
dkuznetsov 3 days ago 0 replies      
I like how it rhymes.
xanderjanz 2 days ago 2 replies      
From all my research, I feel like Promises end up making better code than async await. Am I the only one who thinks that?

Like, what's the equivalent of Promise.all with async/await? And how do you do stuff synchronously after kicking off an async process?
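For the record, a sketch of both answers: `Promise.all` still works inside async functions (await composes with it rather than replacing it), and you can start async work, do synchronous things, then await at the end. `delayed` below is a stand-in for any promise-returning call:

```javascript
// Stand-in async source: resolves with v after a short delay.
const delayed = (v) => new Promise((res) => setTimeout(() => res(v), 10));

async function fetchBoth() {
  const pA = delayed(1);   // kicked off immediately
  const pB = delayed(2);   // both now running concurrently
  // ...synchronous work happens here, before either promise settles...
  const [a, b] = await Promise.all([pA, pB]); // the Promise.all equivalent
  return a + b;
}

fetchBoth().then((sum) => console.log(sum)); // 3
```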

treenyc 2 days ago 2 replies      
What about this Node.js async library,


Maybe a replacement until async is implemented in V8.

derrickroccka 1 day ago 0 replies      
This is so good. Great days for JS. This is going to save us from a lot of headaches.
avodonosov 2 days ago 6 replies      
What's the point of explicitly declaring functions "async" and then explicitly saying "await"? IMHO all of this could be done automatically by the language.
ConAntonakos 2 days ago 0 replies      
So what does this mean in terms of being implemented in Node.js? How soon can we expect that to happen?
z3t4 2 days ago 3 replies      
I can't see how wrapping everything in a promise and a try/catch, plus adding async/await, is any easier than a callback.
jcoffland 2 days ago 0 replies      
The one extra diff I would really like to see in that list is some documentation. The V8 docs have always been too sparse.
Firebase expands to become a unified app platform googleblog.com
518 points by Nemant  1 day ago   197 comments top 41
mayop100 1 day ago 19 replies      
(Firebase founder here) I'm thrilled to finally be able to show everyone what we've been working on over the last 18 months! When I said "big things are coming" in the HN comments back when our acquisition was announced, I was talking about today : )

We're really excited about these new products. There are some big advances on the development side, with a new storage solution, integrated push messaging, remote configuration, updates to auth, etc. Perhaps more important, though, are the new solutions for later in your app's lifecycle. We've added analytics, crash reporting, dynamic linking, and a bunch more, so that we can continue to support you after you've built and launched your app too.

I'd suggest reading the blog post for more info: https://firebase.googleblog.com/2016/05/firebase-expands-to-...

This is the first cut of a lot of new features, and we're eager to hear what the Hacker News community thinks. I look forward to your comments!

primitivesuave 1 day ago 4 replies      
Firebase is an incredibly powerful tool, and in a sense is a "democratizing force" in web development. Now anyone can build a complete web application without needing to know anything about setting up servers, content delivery networks, AWS (which is still quite difficult to use), and scaling. I teach kids as young as 10 years old to build iOS apps and websites with Firebase - they can develop locally and push to Firebase hosting with a single command. After exploring this new update, I can say with confidence that literally everything is easy-to-use now.

Whenever there is a Firebase announcement there are many replies along the lines of "this won't work for me because it's owned by Google, may be discontinued, doesn't have on-premise solution, etc". If these are your thoughts then you are missing the point of Firebase. It enables small web development shops like mine to focus on building beautiful web applications without having to give up manpower toward backend engineering. The cost of using Firebase is peanuts compared to the savings in employee hours.

Perhaps some day we will have to migrate elsewhere, but I find that possibility extremely unlikely because the clear amount of effort it took to create the Google-y version means this is a long-term play.

zammitjames 1 day ago 0 replies      
We were part of the Early Access Program for the expanded Firebase and used it to build our music collaboration app Kwaver. With the new features, they did a nice job of collecting a bunch of related mobile products (Analytics, Push Notifications, Storage, Authentication, Database, Deep Linking, etc) into a pretty cohesive platform, and it's saved us a bunch of time.

With Firebase Analytics we can track events, segment audiences (according to user properties; active, dormant, inactive) and take action according to the user segment. We are able to send push notifications (also using Firebase) to dormant male users who play the piano for example. Another cool feature is Remote Config, which gives you the option to ship a number of separate experiences and track the user interaction. Like A/B Testing but way more flexible.

For us, the best product is the existing database product they had, as it really improves our user experience to ditch the 'pull to refresh' button' and have our app respond to changes live.

We have been waiting for Google to provide developers a more complete mobile solution for a while now, and they've done it superbly through Firebase!

Feedback: it would be really cool if Firebase could implement UTM codes, to be able to track user acquisition and to automate actions according to user properties.

Shameless plug: if you're a musician (or a music fan), we'd really appreciate it if you could download our music collaboration app, try it out, and give us feedback. It's available for free on the App Store; the following link will redirect you there later today. http://kwaver.com

timjver 1 day ago 1 reply      
I love Firebase, but the Swift code in the iOS guide is of really low quality. For example (https://firebase.google.com/docs/database/ios/save-data#dele...):

  if currentData.value != nil, let uid = FIRAuth.auth()?.currentUser?.uid {
    var post = currentData.value as! [String : AnyObject]
    var stars : Dictionary<String, Bool>
    stars = post["stars"] as? Dictionary<String, Bool> ?? [:]
    // ...
  }
What this should really be:

  guard let post = currentData.value as? [String : AnyObject],
        uid = FIRAuth.auth()?.currentUser?.uid else {
    return FIRTransactionResult.successWithValue(currentData)
  }
  let stars = post["stars"] as? [String : Bool] ?? [:]
  // ...

chatmasta 1 day ago 2 replies      
Interesting that Google is doubling down where Facebook divested. The obvious difference is that Google has a cloud platform and Firebase is a funnel into it, whereas Facebook had nothing to funnel Parse users into.

I wonder if Facebook will ever launch a cloud platform. They've got the computing resources for it.

bwship 1 day ago 0 replies      
We've been using the Firebase platform for a while now. It's pretty cool to see them expand from 3 products to ~15 overnight. I'm most excited about their analytics and crash reporting. I must say that their system has been one of the best we have used in a long time, and I am really excited to see other aspects like analytics and ads housed under this same umbrella, as I think it is going to help with development time overall. One area that I'd like to see improved, though, is a deeper querying language for the database, or even better, a way to automatically export the data in realtime to a Postgres database for better SQL-style analytics.
davidkhess 1 day ago 1 reply      
The concern I've always had with Firebase is the lack of a business logic layer between clients and the database. This tends to force the business logic into the clients themselves.

Trying to change the schema if you have Firebase clients deployed that can't be instantly upgraded via a browser refresh (i.e. iOS and Android mobile apps) seems an extremely challenging task.

ivolo 1 day ago 0 replies      
We used the original Firebase database product to build http://socrates.io/ 3.5 years ago, and I remember getting Socrates running in a few hours. I'm looking forward to seeing them up the bar on speed of development / ease for their next 10 products :) Nice work, team!
mybigsword 1 day ago 4 replies      
Way too risky to use for a startup. Google may discontinue this project at any time, and you'd have to spend months rewriting everything for another database. If Google open-sourced it and we were able to install it on premises and patch it without Google, that would be OK. So I would recommend using PostgreSQL instead.
dudus 21 hours ago 0 replies      
Even if you don't want to use any other Firebase service, you might still want to use it just for Analytics. Drop the Firebase SDK into the app and you are done. Free, unlimited, and unsampled analytics reports for your app.



fredthedinosaur 1 day ago 1 reply      
When will it support a count query? Right now, to count the number of children I have to download all the data. Count is such an important feature for me.
fahrradflucht 1 day ago 2 replies      
I have built apps with Firebase in the past, and the feature I missed most was performing scheduled tasks on the database. Now we are getting this BIG app platform update and this feature is still not in there. Looks like it's AWS Lambda with Scheduled Events for a long time to come :sad-panda:
joeblau 1 day ago 1 reply      
I remember walking into Firebase's offices about 4 years ago when it was 4 people on Townsend St in SOMA in a 300 square foot share office space. It's amazing to see how far they've come; Congrats to the whole team.
skrebbel 1 day ago 1 reply      
As a current Firebase customer, I'm pretty thrilled about all this (especially since I was afraid Google would pull a Facebook here). However, there are quite a few API changes and absolutely no info about how long the old JS library, endpoints, etc. will keep working. Should I get stressed out?
maaaats 1 day ago 5 replies      
This may be a stupid question, but: what do you use it for? Can't everyone basically edit the client code and do whatever they want with your data? I've only used Firebase for prototyping.
oceankid 1 day ago 1 reply      
The thought of reliable, managed hosting is interesting.

But how does one extend an app beyond storing and fetching data? What if you want to run a background job to send emails, parse a complex CSV, or create a custom PDF and write it to Firebase Storage?

albeva 18 hours ago 0 replies      
I think services like Firebase are a very scary thing. Too much dependence on one vendor, too much black-box magic, too much logic that is beyond your control. And services like this contribute to a general dumbing down of software developers. We're heading towards a world of script kiddies, where HTML and JS rule and all complex logic is handled and controlled by service providers. Is it a good thing? You can deliver fast, but in the long term is it worth it?
blairanderson 20 hours ago 0 replies      
From my experience with the new API, it's a little less intuitive, with worse documentation. I think it's rad that Google invested a ton of resources into Firebase.

We have been super successful with Firebase, and are proponents of using it as a notification system rather than as a datastore. That would be easy, but unwise. Use it to notify clients of changes so they can fetch data. Read from Firebase; write to your own server/DB.

WalterSear 1 day ago 0 replies      
I'm in talks with a company regarding building an application for users in developing countries, where Android 2.0 is still the dominant OS version.

Firebase 2.0 looks like a great fit for their needs otherwise, but is the new sdk backward compatible to Android 2.0?

mcv 12 hours ago 0 replies      
I intend to use Firebase as at least a temporary backend while developing my app. Maybe I'll move to a real server later, but during development it's really easy to just have some place you can shoot json at. And I can always add interaction later by having some other application listen to it.

I don't really need the actual realtime communication stuff all that much (though it might turn out to be useful), but just a lightweight place to store json is really useful.

pier25 1 day ago 1 reply      
So how would one handle server-side logic?

Like, for example, doing something with the data before sending it to the client?

ddxv 20 hours ago 0 replies      
This appears to just be a way to limit the growth of the third-party trackers which threaten Google by encouraging user acquisition from many sources.

I say this because they don't specifically say they will post back events to advertising networks other than Google's.

wiradikusuma 23 hours ago 1 reply      
For a Firebase/Google Cloud Platform engineer: does this mean Google Cloud Endpoints is being phased out? If I'm already using Google Cloud Endpoints, should I move to Firebase? What's the advantage?
Philipp__ 1 day ago 0 replies      
It looks like it is here to stay... But that surprise Parse shutdown will leave me asking, what if...
robotnoises 1 day ago 2 replies      
I don't think it was explicitly mentioned in the keynote, but it looks like they updated pricing:


Can't find the old pricing now, but it seems similar, just with fewer plan types.

1cb134b57283 1 day ago 0 replies      
As a server engineer already having trouble finding a new job, how worried should I be about this?
aj0strow 1 day ago 0 replies      
I've had only good experiences with firebase. They added an HTTP api, web hosting, multiple security rule preprocessors (pain point), and got faster and cheaper. Yeah only good things.
robotnoises 1 day ago 0 replies      
Not expressly mentioned anywhere that I've seen: the Free plan now includes custom domains + SSL cert. Under the previous firebase.com, that was $5 a month.

Sounds good to me!

intellegacy 1 day ago 1 reply      
Is there a tutorial that explains how to set up a backend for user-taken videos, for an iOS app?

One thing I liked about Parse was that its documentation was newbie-friendly.

gcatalfamo 1 day ago 1 reply      
Can somebody explain the new Firebase reframing towards GCP? Maybe with another provider analogy? (e.g., Firebase is to GCP as Parse was to Facebook)
eva1984 17 hours ago 0 replies      
Feel like the new Wordpress/Drupal/CMS, just in App space.
welanes 1 day ago 0 replies      
FYI, the new docs on data structure mention "rooms", which was an example in the old docs. It should read "messages" or "conversations": https://firebase.google.com/docs/database/web/structure-data...
kawera 1 day ago 1 reply      
Question: would Firebase be a good option where the desktop/web app is the main access point, with mobile being secondary (around 3:1)?
tszming 21 hours ago 0 replies      
The biggest problem with any Google cloud service nowadays is that you don't know if it was, or will be, blocked in China. Of course, it's okay if you don't care about users in China.
Kiro 1 day ago 1 reply      
I'm building a simple web app where I want signed in users to be able to add a JSON object to a database and then list all JSON objects publicly. Only the user who created the object should be able to edit it. Is this a good use-case for Firebase or should I look into something else?
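That maps cleanly onto Firebase's security rules. A rough sketch for this shape of data, assuming each object stores its creator's uid in an `owner` field (the path and field name are assumptions; adapt them to your actual layout):

```json
{
  "rules": {
    "objects": {
      ".read": true,
      "$objectId": {
        ".write": "auth != null && (!data.exists() || data.child('owner').val() === auth.uid)",
        ".validate": "newData.child('owner').val() === auth.uid"
      }
    }
  }
}
```

Read as: anyone may list `/objects`; a signed-in user may create a new object; only the recorded owner may overwrite an existing one; and every write must record the writer as owner.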
dmitriz 17 hours ago 1 reply      
Is user email confirmation finally supported by Firebase? Last time I checked it wasn't.
ssijak 1 day ago 1 reply      
What is the state of the AngularFire library? There are no guides for Angular in the new documentation. And when will AngularFire for Angular 2 be ready to use?
themihai 1 day ago 0 replies      
"... and earn more money." Is this really necessary on the homepage? Sounds like an old misleading spam page.
sebivaduva 1 day ago 0 replies      
For all of you looking for a real-time API platform that's open source and not owned by a cloud giant, come join us in building telepat.io.
Blixz 1 day ago 3 replies      
So, still no offline persistence for JS. What a huge disappointment.
choward 1 day ago 1 reply      
Provide a self hosting option or GTFO.
Tesla Announces $2B Public Offering to Accelerate Model 3 Ramp Up bloomberg.com
394 points by dismal2  1 day ago   175 comments top 11
jboydyhacker 1 day ago 17 replies      
The big surprise here isn't that Tesla was doing an offering - it was Goldman did a huge research note 24 hours before the offering while actually participating in said offering.

Super bad form, and it just goes to show the community: don't trust investment bankers.

Animats 1 day ago 2 replies      
Well, $1.4 billion for Tesla, $0.6 billion for Musk personally, and an option for Goldman Sachs to get $0.21 billion.[1] Tesla stock is down in after-hours trading, but that doesn't mean much. If the stock is down significantly at the close tomorrow, the market didn't like this.

It's a legit offering. The company intends to build a big factory and make stuff. Real capital assets will be bought with that money. It's not to sell stuff at a loss to gain market share in hopes of raising prices later. (Looking at you, Uber.)

Tesla just hired Audi's head of manufacturing, Peter Hochholdinger. About a week ago, the previous two top people in manufacturing quit, right after Musk announced he wanted the production line running two years sooner. Maybe Hochholdinger can do it.

[1] https://www.sec.gov/Archives/edgar/data/1318605/000119312516...

jernfrost 1 day ago 11 replies      
Why do people keep spouting this nonsense that Tesla is losing money on EVERY car? They make money on every car; otherwise they wouldn't be selling any cars. They lose money due to their high R&D spending.
vessenes 1 day ago 2 replies      
This is not a surprise; there's an old saw that I think I first read in a Buffet annual report. It says that financing tends to alternate forms for companies in terms of what makes sense: debt -> equity -> debt -> equity.

Equity offering seems likely to be much cheaper than debt right now; Tesla has great mindshare among consumers, and lots of doubters on the professional investor side.

crabasa 1 day ago 3 replies      

 echo "Tesla to offer $1.4 billion shares, remaining to be sold by Elon Musk. Musk is exercising options to buy 5.5m shares and will boost overall holdings on net basis. Developing... " | wc
       1      30     176
News articles and tweets are converging at an alarming rate.

11thEarlOfMar 1 day ago 0 replies      
It's neither here nor there, but I feel like Bugs Bunny in "High Diving Hare", and Musk just raised the platform another 50 feet:


marvin 1 day ago 1 reply      
From the press release, it appears that the capital raise is "only" $1.4 billion -- the remainder is Elon Musk selling shares to cover his tax liability for simultaneously exercising options from 2009. Hopefully 1.4 billion is enough.
slantaclaus 11 hours ago 0 replies      
Tesla has a really great business. They're not just cars, they're batteries. Their home battery for storing solar energy is a huge deal at least in terms of future cash flows. Also, they're a white label supplier of batteries to companies like Toyota and Mercedes. Anyway--new long term TSLA shareholder here. Bought in at $205.
syngrog66 4 hours ago 0 replies      
When you have 375k $1000 preorder deposits, it's probably an ideal time to raise investment.
mjbellantoni 14 hours ago 1 reply      
Anyone have thoughts as to why they're selling stock as opposed to issuing bonds?
jgalt212 14 hours ago 0 replies      
It's pretty obvious at this point that Tesla's number one product is their stock. Which makes it no different from a number of other high fliers.

At first they were an innovative car company. Then the stock price shot well above the level sustainable by an electric car company. Elon realized this, and then built the Gigafactory. We're not just a car company, we're a power company!

Now they are raising more equity off of an inflated stock price. I'd stay away from this one.

Not a total hater; Tesla cars are great, but one of these days Elon's moon shots and obsession with the stock price will catch up with him (he'll still be rich) and his investors (they may be significantly less rich).

Horizon 1.0: a realtime, open-source JavaScript back end from RethinkDB horizon.io
599 points by coffeemug  2 days ago   127 comments top 47
coffeemug 2 days ago 10 replies      
Hey guys, Slava @ Rethink here. The team is really excited to launch Horizon -- it's based on a lot of feedback from users of very different backgrounds, and we think it will make web development dramatically easier.

I've been up for about twenty-four hours, but I'll be around to answer any questions for the rest of the day.

kcorbitt 2 days ago 2 replies      
> Horizon is distributed as a standalone server for browser-based JavaScript apps. When you're ready to write custom backend code, you can load the Horizon modules directly in Node.js

This is key. Firebase seems great for quick prototypes, but I never built against it because it was unclear (to me at least) how to deal with cases where you want to leave the sandbox and add custom backend functionality that they haven't built out for you. Horizon seems ideal -- it gives you the ease of use of a turnkey system starting out, with the flexibility to customize backend logic/validation later when you need to. That, plus the fact that it's put together by the awesome RethinkDB team, makes it very interesting to me!

lewisl9029 2 days ago 2 replies      
For apps that don't necessarily have the realtime requirement, Kinto [1] is another really cool open-source backend solution that focuses on the offline-first scenario. It's built by the Firefox Sync team and used in production at Mozilla.

That said, at first glance there doesn't seem to be anything stopping anyone from building a caching layer on top of Horizon to enable offline-first functionality. Definitely going to be keeping an eye on this as well!

EDIT: It looks like they are in fact considering offline-first support already [2]!

[1] http://kinto.readthedocs.io/en/latest/overview.html

[2] https://github.com/rethinkdb/horizon/issues/58

avital 2 days ago 2 replies      
(Former Meteor core dev here) This is cool!

Does Horizon also solve "optimistic updates"? If so I'd love to learn more details. For comparison, Meteor keeps a local datastore that updates immediately when data is mutated and then reconciled with the real database.

mattste 2 days ago 0 replies      
I'm continuously impressed by Rethink. When I first used it, I ran into a few issues with changefeeds. Since they work in the open, I could quickly look at the Github issues and see that the fixes/features were already slated for future release milestones. To me, that's a huge win.
nodesocket 2 days ago 2 replies      
Slava, Horizon looks cool, and I'm a huge fanboy of RethinkDB. However, in the demo video you write queries client side. How do you protect against users modifying front-end JavaScript and thus the queries?

 // ex this.props.horizon.order('datetime', 'descending').limit(8).watch()

andrewsomething 2 days ago 0 replies      
If you're looking for a way to check out Horizon and don't already have RethinkDB up and running locally, we've put a One-Click app together over on DigitalOcean: https://www.digitalocean.com/community/tutorials/how-to-use-...
spdustin 2 days ago 0 replies      
From their blog post:

> Horizon isn't prescriptive or particularly opinionated; it's designed to work well with the JavaScript frameworks that you already know and love.

It is seriously a breath of fresh air to read something like that. Bravo, Horizon/RethinkDB team.

inglor 2 days ago 1 reply      
I think it's amazing that it's working with real RxJS streams for changes now - given how Redux store now returns an observable thing, Cycle is Rx based and Angular 2.0 uses a lot of Rx.

This guarantees that things will interop nicely with each other and we'll be able to use Rx all around.

sandstrom 2 days ago 1 reply      
Although this is about horizon, if you haven't looked at RethinkdB you should definitely check it out.

It's an awesome database for a variety of use-cases.

Personally, I think their query language + support for joins makes it much more useable than MongoDB (which I've used quite a bit).

bluecmd 1 day ago 2 replies      
Sorry, but coming from an embedded world the use of Real-time here seems borderline offensive to me. Anyone care to explain what it means in this context? From the dawn of computer science it has been about strict time guarantees - and I highly doubt this is what's being referred to here.
lming 1 day ago 0 replies      
Hey Slava. Promising stuff! One question:

A websocket connection between client and server is great for real-time apps, but for non-real-time apps the unnecessary websocket connections create more work for devops -- specifically DNS and load balancers. Does/will Horizon support clients that only use the Collections API to get/set data, without a websocket (for real-time sync)?

fuzionmonkey 2 days ago 0 replies      
manigandham 2 days ago 2 replies      
> Horizon is built on RethinkDB, a massively scalable, open-source database capable of millions of realtime updates per second.

Have there been any performance benchmarks or reviews of RethinkDB yet? Especially at this scale?

zenlikethat 2 days ago 1 reply      
Saw a demo of this at a realtime meetup in Palo Alto and was pretty impressed. Like everything RethinkDB does I anticipate it will be a fantastic one-two punch of sound engineering and smooth user experience.
jkarneges 2 days ago 0 replies      
At last, an open source alternative to Firebase.
cdata 2 days ago 0 replies      
How does Horizon handle offline network conditions? Are there network + app states (e.g., cold boot, refresh) under which users might not have access to data?
stemuk 2 days ago 0 replies      
Looks really promising to me, relatively similar to www.deepstream.io. However, will there be additional clients, or are any already in development by the community? So far I only found support for Vue.js and React. Polymer elements would be pretty dope... ;)
kevinSuttle 2 days ago 1 reply      
I'm wondering how this works (or if it does) with GraphQL subscriptions. https://github.com/rethinkdb/rethinkdb/issues/3711
jakegarelick 2 days ago 0 replies      
Congrats! I remember seeing someone from your team doing a demo at an SF JavaScript meetup. Looks very interesting.
gjmveloso 2 days ago 0 replies      
Great move, RethinkDB folks!

I'm looking forward start prototyping some apps using Horizon. On a first look seems very promising.


code_coyote 1 day ago 0 replies      
How well does this play with Electron (http://electron.atom.io/ ) as a front-end? I didn't find that many references to it (mostly questions about putting them together).
sqlcook 2 days ago 1 reply      
Excellent work Slava! Have been planning to migrate to RethinkDB for one of the existing projects, great timing with Horizon :D
jmakeig 1 day ago 0 replies      
This looks really cool, but I still don't understand how Rethink plans to cash out their VCs (~$12M raised, according to AngelList). Cloud hosting is trending toward zero and support/consulting only scales with people. I'm not trying to be (intentionally) critical; I'm genuinely curious and hoping there is some business model innovation going on here too.
sgdesign 2 days ago 1 reply      
Congrats on the launch! As a long-time user of Meteor I'm always excited to see other approaches to similar issues.

One thing that I always found a bit lacking with Meteor was pagination. The problem in a nutshell is that the client doesn't know how much data is potentially available on the server unless it requests it, making things like showing "page 1 of 12" type result counts tricky (I talk more about it here: https://www.discovermeteor.com/blog/pagination-problems-mete... )

I'd be curious to know how Horizon deals with that problem?

robotnoises 2 days ago 0 replies      
I love Firebase, but this is very exciting. Congrats on 1.0!
sidcool 2 days ago 1 reply      
How does this differ from Meteor?
caleblloyd 2 days ago 1 reply      
This looks really great! Does Horizon support pluggable transport? I would like to use Ajax paired with EventSource/SSE over HTTP/2 for bidirectional communication instead of Web Sockets.
yesimahuman 2 days ago 0 replies      
Congrats on the launch, this looks like an amazing product that made immediate sense to me. I have a feeling the Ionic community is going to be quite drawn to this over time.

Are there any dockerized versions of this yet?

brianbi 1 day ago 0 replies      
This is super exciting. I have been playing around with it in closed beta and I'm really impressed. Open Source FTW.
amelius 2 days ago 1 reply      
How does access control work? For example, if I have a database containing a table "users", and a table "files", and the files table contains a column "owner" (pointing to the users table), then how do I prevent the client-side database engine from accessing the files which do not belong to the user that is currently logged in?
kevincox 2 days ago 1 reply      
The Horizon blog could really use a feed that I can subscribe too. I would love to head about all of your future news.
absurd 1 day ago 0 replies      
@slava - how would you compare a pouchdb <-> couchdb setup vs horizon?
d0m 2 days ago 2 replies      
Great thinking from the RethinkDB side. One answer I couldn't find on their website is about the offline and optimistic update. It's really tricky to get right. Having a feed of changes is just a small part of having a robust solution.
kinkdr 2 days ago 0 replies      
Congrats guys. As usual, amazing work!
velohx 2 days ago 1 reply      
This looks great. I wrote my first complicated Firebase app a couple weeks ago, and looking forward to rewriting a version of it with Horizon to compare the two.

I couldn't find it in the docs, but what is the equivalent to Firebase.ServerValue.TIMESTAMP?

z3t4 1 day ago 0 replies      
It would be cool to have an object-oriented database with prototype support, rather than table-oriented. I think Rethink is a step in the right direction.
hohohoho 1 day ago 0 replies      
Do you have a timeline on when will GraphQL be supported in horizon?
hoodoof 2 days ago 1 reply      
I'm intrigued but after reading a couple of pages it's not really clear to me what benefits I get or really exactly what this is....

Why should I use it over my current approach ? Why is this compelling?

csmajorfive 2 days ago 1 reply      
This looks cool. Congrats on the launch! Is it a project or a business? Do you intend to make money from hosting? Or something else?
tacone 2 days ago 0 replies      
Looks very interesting!

You may want to make your homepage clearer, before navigating to the FAQ I did not have a clue of what Horizon really does.

rvdm 2 days ago 0 replies      
I'm really excited to see a framework make security and authentication part of its foundation.

Amazing work Horizon team!

nileshtrivedi 2 days ago 2 replies      
Anyone got the chat example working?
ilaksh 2 days ago 0 replies      
Does Horizon handle using npm packages, requires or imports on the front end?
sandGorgon 2 days ago 0 replies      
so .. can one write a spreadsheet app that is able to show results reactively in the front-end and will still reflect that same logic in some kind of backend api ?

Kind of being able to have the same logic in my spreadsheet as well as my api ?

This could be huge!

placeybordeaux 2 days ago 0 replies      
The faq is really pretty strong.

Looks cool!

tacone 2 days ago 1 reply      
What's the gzipped size of the client lib?
Reason: A new interface to OCaml facebook.github.io
589 points by clayallsopp  2 days ago   267 comments top 50
Cyph0n 2 days ago 4 replies      
This looks very interesting. I've always had OCaml in mind but never actually got around to using it in a project. Facebook could have done a better job describing what exactly this is, but they do provide a good overview at the end of the page (strangely!) [1].

In summary, Reason [2] is a new language (correction: interface to OCaml) that shares a part of the OCaml compiler toolchain and runtime. I don't know of any language that uses a similar approach, that is, plugging into an existing compiler toolchain. I guess a reasonable yet inaccurate analogy would be Reason -> OCaml is like Elixir -> Erlang or Clojure -> Java.

I hope Reason can provide OCaml with the extra push needed to bring it into the mainstream PL space and more widespread adoption.

[1]: http://facebook.github.io/reason/#how-reason-works

[2]: https://github.com/facebook/reason

mhd 2 days ago 1 reply      
I hope this doesn't sound like trolling, but JavaScript's syntax is now a selling point? I kinda-sorta get the reason why people want an actual JavaScript stack on the backend, but I never heard that syntax/semantics brought people from e.g. Rails to Node.

Sure, OCaml isn't even the nicest syntax in the ML family, but I'm not sure whether that's worth it, especially considering that almost any "X-like" language often turns out to be an Uncanny Valley for "X" programmers -- close enough to make some frustrating errors.

e_d_g_a_r 2 days ago 3 replies      
I for one welcome the syntax. I run the OCaml meetup in Silicon Valley and syntax is definitely an issue for newcomers. This makes it easier for other programmers to instantly just jump into OCaml/ML rather than asking what `in` is or what `let foo = function` means, etc.

EDIT: Hosting a Meetup this Friday at 6pm in San Francisco about Reason and how to instantly start using it: http://www.meetup.com/sv-ocaml/events/231198788/
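For anyone puzzled by those exact newcomer questions, a minimal sketch of both forms in standard OCaml (names are made up for illustration):

```ocaml
(* `let ... in` introduces a binding scoped to the expression after `in`.
   `function` is shorthand for a one-argument `fun` that immediately
   pattern-matches on its argument. *)
let describe = function
  | [] -> "empty"
  | _ :: _ -> "non-empty"

let result =
  let xs = [1; 2] in
  describe xs
```

Here `describe` is equivalent to `fun lst -> match lst with ...`, and `result` evaluates to `"non-empty"`.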

jameshart 2 days ago 2 replies      
Wonder if this project has anything to do with Eric Lippert's move to Facebook (https://ericlippert.com/2016/02/08/facebook/ - Eric has also been producing a series of blog posts implementing a Z-Machine interpreter in OCaml to run mini-Zork on, starting here: https://ericlippert.com/2016/02/01/west-of-house/). Eric was on the C# compiler team at Microsoft and previously worked on JScript.
civilian 2 days ago 4 replies      
I know that it's common to have namespace collisions, but their logo is so similar to Reason magazine's. https://reason.com/
alex_muscar 1 day ago 2 replies      
Nice to see that OCaml is getting so much love at facebook. Unfortunately, adding a new syntax that's almost OCaml, but not quite, doesn't seem like such a great idea. While it might make the language accessible to more people, it runs the risk of fragmenting the community.

I know syntax is subjective, but some of the choices seem a bit odd. For example, declaring variants and using their constructors looks like Haskell, but the semantics is still OCaml. In Haskell, constructors are first class, so they can be passed as functions and partially applied. It makes sense that their declaration and use looks like function declaration and function calls. In OCaml they are not first class; that is, you can't pass them as arguments or partially apply them. That's why it makes sense for the declaration to look like a tuple, and the use to look like a function applied to a tuple--well, somewhat; you can still argue that it's confusing because you might expect to be able to apply the constructor to a tuple variable, but such is life :). Unless constructors are first class in Reason--it doesn't look like it from a quick scan through the docs--this particular syntactic difference is of dubious value and, worse, can be misleading to newcomers.
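A small illustration of that point about constructors (the `shape` type here is hypothetical):

```ocaml
type shape = Circle of float | Square of float

(* `List.map Circle radii` is a compile error in OCaml: a constructor is
   not a function, so it cannot be passed around or partially applied.
   Eta-expanding it into a lambda works: *)
let circles = List.map (fun r -> Circle r) [1.0; 2.0; 3.0]

let mixed = [Circle 1.0; Square 2.0]
```

In Haskell the equivalent `map Circle radii` would be accepted, which is why the two syntaxes carry different expectations.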

Also, changing `match` to `switch` seems gratuitous as well, and it also loses some of the meaning of the original. i.e. "I want to match this value against this set of patterns".

Finally, I know that using `begin` and `end` for blocks is verbose and Pascal-ish--which people seem to hate for some reason--but using { } for scopes looks out of place, and leads to awkward cases like this:

 try { ... } { | Exn => ... };
I don't mean for this to sound ranty, or like I'm picking on Reason. I think it's good that Facebook is trying to spice things up in the OCaml community.

avsm 2 days ago 1 reply      
There's a screencast fresh off the presses on the info page at https://ocaml.io/w/Blog:News/A_new_Reason_for_OCaml

I'm finally going to switch away from my ancient nvi setup and use Atom instead! MirageOS recently moved all our libraries over to using the new PPX extension point mechanism in OCaml instead of the Camlp4 extensible grammar. This means that MirageOS libraries should be compatible with Reason out of the box -- so it'll be possible to build unikernels from a slick editor interface quite soon hopefully!

MichaelGG 2 days ago 4 replies      
I started off a bit skeptical with the <- renaming to =. Mutability should be rare enough that <- makes things stand out. But apart from that I think I rather like this syntax, on the whole. Not a fan of semicolons. It also makes me appreciate F#'s #light syntax (now its default). Using whitespace really clarifies stuff, and there's always in and ; for fallback.

What's OCaml's status with multithreading? Are there any proposals for more flexible operators, so there doesn't need to be different operators for different numerics? (F# solves this by allowing inlined functions.)
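To make the `<-` point concrete, this is how mutation currently stands out in OCaml (toy `counter` type for illustration):

```ocaml
(* Mutable record fields are declared with `mutable` and updated with
   `<-`, so side effects read differently from `let ... =` bindings.
   The renaming discussed above would write this update with `=`. *)
type counter = { mutable n : int }

let c = { n = 0 }

let () = c.n <- c.n + 1
```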

greyhat 2 days ago 1 reply      
The slowness in Firefox appears to be solely due to this:

 @media (min-width: 1180px) {
   body:not(.no-literate) .content-root {
     background-color: #fdfcfc;
     -webkit-box-shadow: inset 780px 0 #fff, inset 781px 0 #e7e7e7,
       inset 790px 0 3px -10px rgba(0,0,0,0.05);
     box-shadow: inset 780px 0 #fff, inset 781px 0 #e7e7e7,
       inset 790px 0 3px -10px rgba(0,0,0,0.05);
   }
 }
Removing it in the Firefox style editor restores normal performance.

Edit: And they have commented out the box-shadow! Hah.

TY 2 days ago 5 replies      
Ok, it might be the end of the day for me and I'm denser than usual, but I can't understand what this is. An OCaml-to-JS transpiler?

Checked this out, but the reason still eludes me (pun intended): https://ocaml.io/w/Blog:News/A_new_Reason_for_OCaml

hellodevnull 2 days ago 8 replies      
Site doesn't load in Firefox. Works in Chrome.
chenglou 2 days ago 0 replies      
I've worked on the Atom plugin for this, itself written in Reason and compiled to JS using js_of_ocaml: https://github.com/facebook/reason/tree/7f3b09a75cacf828dd6b....

Having worked with Reason, JavaScript, and the bridge between the two, most of my errors seem to fall on the JavaScript side. So I guess the type system's indeed working =).

mseri 2 days ago 1 reply      
I love OCaml, but that's a really nice reshape of OCaml syntax! And apparently things will be interoperable. I am really curious to see where it goes.

EDIT: and they want to use and maintain compatibility with ppx. Great news

ipsum2 2 days ago 1 reply      
Looking at http://facebook.github.io/reason/mlCompared.html it looks like regular OCaml, with a sprinkling of JS syntax.
haches 1 day ago 0 replies      
If you'd like to play with Reason you can do it online here:


Of course, you can also create your own Reason projects.

Paul_S 2 days ago 3 replies      
Website fries the CPU (FF).
nikolay 2 days ago 1 reply      
Nice, but I always wonder why function is abbreviated as the longer unambiguous fun and not just fn?!
akhilcacharya 2 days ago 1 reply      
Do want to learn this - does anybody know any interesting projects that can take advantage of the OCaml ecosystem and functional aspects?
cwyers 2 days ago 2 replies      
I really wish they'd taken the pipeline (|>) operator from F#, if they were going to rework OCaml.
grhmc 2 days ago 1 reply      
I'm seeing "BUILD SYSTEMS RAPIDL" over here on Linux.
robohamburger 2 days ago 2 replies      
I took ocaml for a spin a couple months ago and compared to more recently created languages it seems a bit crufty.

If they can simplify the build system to be on par with something like cargo that would be swell.

Also: having rust style traits or haskell classes would be amazing. Also macros that aren't obscure and hard to use compiler plugins please :)

Hopefully it ends up being more than just questionable sugar around ocaml and actually adds some sorely needed language features.

honua 2 days ago 1 reply      
What problems would be well solved by Reason/OCaml?
bjz_ 2 days ago 2 replies      
Would be nice to see modular implicits like those that are being proposed for OCaml. It's a shame to not have any form of ad-hoc polymorphism.
oblio 2 days ago 4 replies      
Has anyone here built something say, over 10k lines in Ocaml? How is the development experience? IDEs, debuggers, linters, deployment, etc.
incepted 2 days ago 2 replies      
Interesting, but since they are designing a revised syntax, I wish they had gotten rid of OCaml's semicolons. These stand out in 2016.
konschubert 2 days ago 1 reply      
> A new, developer experience for rapidly building fast, safe systems.

The comma placement suggests that developer is an adjective for experience.

xvilka 1 day ago 1 reply      
It would be nice if they made it work on Windows platforms. There is already an issue for that [1]. It also depends on Windows support in OCaml itself and opam [2].

[1] https://github.com/facebook/reason/issues/470

[2] https://github.com/ocaml/opam/issues/2191

SwellJoe 1 day ago 5 replies      
So, I know OCaml is impressively fast. And, I know OCaml is impressively terse ("concise" may be a more positive term). But, I wonder what would make one choose OCaml (or a variant of it like this) over some of the other new or old languages that exhibit some excellent characteristics for modern systems. In particular, a convincing concurrency story seems mandatory. I don't know enough to know if OCaml (or this variant) has a convincing concurrency story, and nothing on the front page of website tells me.

So, why do I want to learn this, rather than, say, Go or Elixir?

johnhenry 1 day ago 0 replies      
Wondering how, or even if, this compares to elm? http://elm-lang.org/
swuecho 2 days ago 3 replies      
Do it provide a usable standard lib? If so, I may try to use it in side project.
mark_l_watson 1 day ago 0 replies      
Reason looks interesting. I have had a 5-year run of alternating between really liking Haskell, and sometimes thinking that my own development process was too slow using Haskell. I am putting Reason on my try-it list.

Documentation suggestion: add examples for string manipulation.

ubertaco 2 days ago 4 replies      
As excited as I was to see a big new thing in OCaml-land, I have to say my excitement died down as I read on.

I don't really see most of the changes as improvements.

Having a different, explicitly-noticeable syntax for mutable updates is nice, because it calls out mutability (which should be used sparingly).

I don't see extra braces as necessarily an improvement, given that OCaml's local scopes are already quite unambiguous thanks to "let ... in". On that note, Removing "in" and just going with semicolons removes another "smelly-code-callout" by making it less obvious what's imperative and what's functional.

I actually don't like ambiguity between type annotation and value assignment in my records. It's clear in current OCaml that {a: int} is a type declaration and {a = 1} is a value declaration/assignment. Moving to colons-for-record-values is at best a bikesheddy, backwards-incompatible change for change's sake, and at worst a breaking change that makes code less clear.

Speaking of making code less clear, how is "int list list" not clear? It's an int-list list. As in, a list of int-lists. So of course it should parse as "(int list) list". Why change to backwards annotations? Just to prevent existing code from working as-is, and making people used to reading ML spend extra brain cycles on remembering that your types read the opposite way?
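The postfix reading is easy to check in a toplevel; a minimal example:

```ocaml
(* The postfix type constructor `list` applies outward: `int list list`
   is (int list) list, i.e. a list of int-lists. *)
let grid : int list list = [ [1; 2]; [3] ]

let row_count = List.length grid                 (* outer lists *)
let first_row_len = List.length (List.hd grid)   (* ints in the first row *)
```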

And they make a huge deal out of their type for tuples being "(a, b)" instead of "(a * b)". Yeah, okay, I get it. It's not that big a deal, since people are used to reading product types as, well, products.

The other thing that seems weird to me is the need to change to a "fat arrow" instead of a "skinny arrow", again for no real reason. In fact, it just makes it more likely that you'll confuse it with a comparison operator. Nobody tries to type ">-", but people try to type ">=" all the time. You're just switching for the sake of switching, and it's not an improvement.

Their example code of their replacement for match...with is especially egregious. If you showed me the OCaml snippet and the Reason snippet unlabelled, I would think that the OCaml snippet is the new-and-improved version, since it's much more compact, much less noisy, and reads more like what it's trying to do ("match my_variable with either SomeValue x or SomeOtherValue y").

Another thing they make a lot of noise about is requiring fewer parens in some places. But then, they also require more parens in other places. So...okay? I guess? Not really a win.

And why rename equality operators? Are you really going to tell me that people prefer that their languages have "==="?

yegle 2 days ago 0 replies      
This is a new low in search-engine unfriendliness :-(
cm3 1 day ago 1 reply      
I miss dead code elimination the most, especially when building code that uses Core.
breatheoften 2 days ago 1 reply      
Is Facebook using Mirage or a similar OCaml unikernel toolchain? Is part of the goal of Reason to make a more approachable syntax available for authoring code that will run inside next-generation containers?
partiallypro 2 days ago 0 replies      
Does anyone Else's Firefox absolutely slow to a crawl on this page?

Edit: just doesn't load at all on Edge. Does load in Chrome/Opera and surprisingly IE 12 but doesn't load the logo's font.

elcapitan 1 day ago 0 replies      
Is there an overview in which regard this differs from "classical" Ocaml?
zem 2 days ago 2 replies      
i noticed this in the examples:

 | List p (List p2 (List p3 rest)) => false /* 3+ */
has the regular list destructuring in pattern match syntax been removed? that's pretty sad, if so - lists are the default data structure in ocaml, and it's worth retaining some special syntax for cons especially in pattern matches.
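For comparison, the cons patterns the comment is asking about, as they read in standard OCaml:

```ocaml
(* `::` destructures head and tail directly in a pattern match. *)
let has_three_or_more = function
  | _ :: _ :: _ :: _ -> true   (* at least three elements *)
  | _ -> false
```

Whether Reason keeps this exact `::` spelling or replaces it with the `List ...` form shown above is what the question is about.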

stuartaxelowen 2 days ago 3 replies      
Can we please keep using parens for function invocation? Leaving them out hurts readability.
querulous 2 days ago 0 replies      
if this had come out five years ago i'd probably be all over it, but i think i'd rather just use rust at this point. different syntax but better safety and it's not like the ocaml ecosystem has a lot to offer
andrew_wc_brown 2 days ago 0 replies      
Everything reads like double talk. Not sure what I would want to use this for.
intrasight 2 days ago 0 replies      
Pretty disappointed that they'd release something that butchers Firefox.
molotok 2 days ago 0 replies      
Fry Firefox RAPID.
aerovistae 2 days ago 0 replies      
fixxer 2 days ago 1 reply      
Why rtop?
ulber 2 days ago 7 replies      
This page is completely unusable due to lag. From the other comments it seems this is FF specific. One would think FB would have the resources to test new pages at least on common browsers before publishing.

Edit: The fix came quickly though.

carapace 2 days ago 0 replies      
Another site that is useless with JS disabled. Nice work.
ClosureChain 2 days ago 0 replies      
I wonder if the people at Propellerheads will sue Facebook for using the name of their software https://www.propellerheads.se/reason
zump 2 days ago 2 replies      
Facebook just won't let OCaml die.
devit 2 days ago 5 replies      
It seems to me that Rust would be pretty much strictly better than this.

In particular Rust has similar syntax, seems to have all Reason's features plus the linear types and regions/borrowing that allow memory and concurrency safety while still being able to mutate memory and not being forced to use GC.

They are aware of Rust since they cite it in their page, so I wonder why they decided to create this instead of using Rust.

It would be nice if they explained this in the FAQ.

I guess it might be useful if you have an OCaml codebase to interface with but don't already know OCaml, but given the relative obscurity of OCaml that seems a pretty narrow use (and also Facebook isn't known to make extensive use of it, afaik).

Twitter to Stop Counting Photos and Links in 140-Character Limit bloomberg.com
386 points by davidbarker  3 days ago   188 comments top 32
askafriend 3 days ago 7 replies      
The fact that it took this long for them to make such an obvious change speaks to how afraid they are (were?) to challenge core assumptions about the product.
krinchan 2 days ago 0 replies      
There's a lot of weird misunderstanding about how Twitter worked in the early days, e.g. "SMS encoded in 7-bit is 140 bytes," or something about native photo URLs. A lot of this, I think, is because people lack context for what the social media and technological landscape looked like in 2006.

Twitter basically started on SMS, back in the day. There wasn't really an app because there weren't major smartphone platforms outside of PalmOS and Blackberry. A lot of my friends made SMS posts to Facebook or LiveJournal, but you never got comments or responses back, so it was very one-way.

That's where Twitter really hooked you back then. You signed up, registered your phone number, and tweets got sent as SMS messages to your phone. The 140 limit provided 18 characters for a username, colon, space, and the tweet. There were commands for following, blocking, etc. and later, direct messaging.
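The budget described above works out neatly as a small sketch. The 160-character single-SMS limit below is my assumption from the GSM standard, not something stated in the comment:

```python
# Sketch of the SMS budget described above; the 160-character
# single-message GSM limit is an assumption, not from the comment.
SMS_LIMIT = 160
TWEET_LIMIT = 140

overhead = SMS_LIMIT - TWEET_LIMIT      # 20 chars for the sender prefix
username_budget = overhead - len(": ")  # 18 chars for the name itself

print(overhead, username_budget)  # 20 18
```

Under that assumption, "username" + ": " + a 140-character tweet fills a single SMS exactly, matching the 18-character figure given above.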

So you got tweets back from people you followed when you sent out a tweet. It really, truly was, as other folks have said in here, mass SMS.

You'd meet someone at a bar, and just send follow NewFunPerson to the Twitter short code and bam, their tweets were texted to you.

All the other stuff that people like about microblogging was just a side effect. Twitter was written to get to people's phones back when the only universal across mobile platforms (in the United States at least) was that you could send a text message. That immediacy, the ability to blast out something quick and get the replies back on your phone, was everything.

Also, at a time when Facebook was still struggling with the fact that "Friends" were a two-way street (Following and Pages weren't a thing yet), the one-way nature of the follow relationship allowed you a lot of access to celebrities with minimal effort on the part of the celebrity. You just found Britney Spears, hit follow, and done. She (rather, her publicist) did exactly nothing to get you there, and now you know there's a new single coming out exclusively at FYE tomorrow. Cha-ching.

Twitter seized upon all the weak points of Facebook, made do with what was available in mobile, and hit gold. After that, when mobile apps hit, Twitter took all those interesting "side effects" of their 140 character limit and built on those instead, pivoting to emphasize microblogging, hashtags, and immediacy, since SMS wasn't there. And, to be honest, these things are very, VERY likely something that you get because of 140 characters.

So yeah, Social Media History 101.

ComputerGuru 3 days ago 4 replies      
What I find stupid is that the length of usernames counts in the message. A message to @myfriend can be longer than a message to @myfriendblessednaycursedwithalongername.
rdancer 2 days ago 3 replies      
I wonder what they will do once people start using http://the.links http://to.write.longer http://messages
Bootvis 3 days ago 0 replies      
I hope they only zero-count the first link. That allows 140 characters of commentary + the link and limits abuse.
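A minimal sketch of that policy, with a hypothetical function name and a deliberately simplistic URL pattern:

```python
import re

# Simplistic URL pattern for illustration only.
URL_PATTERN = re.compile(r'https?://\S+')

def effective_length(tweet: str) -> int:
    # Zero-count only the first URL; later URLs still count toward
    # the limit, which discourages stuffing a tweet with links.
    urls = URL_PATTERN.findall(tweet)
    length = len(tweet)
    if urls:
        length -= len(urls[0])
    return length
```

For example, `effective_length("Read this: http://t.co/abc")` would charge the user only for the commentary, while a second link would count in full.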
eecks 3 days ago 4 replies      
ldong 3 days ago 5 replies      
The beauty of Twitter is tweeting concise messages. I'd be disappointed if the 140-character restriction got removed.
robk 3 days ago 1 reply      
I'm sure the spammers and hashtag abusers will be delighted at this change. More opportunity to stuff a tweet #with #all #sorts #of #extra #hashtags. /s
bbody 3 days ago 0 replies      
A very good idea, I just hope it isn't abused with Tweets full of links.
riffic 3 days ago 0 replies      
Good, they should have done this a long time ago.

It always would have been trivial to remove URLs from the data and throw them into metadata, where character limits do not matter.

Shame it took 2446 days for this basic idea to come to fruition:


hiram112 3 days ago 3 replies      
Real question. I avoided Twitter for years, and finally bit the bullet recently.

The problem is my feed is out of control. Is there a way to filter it?

For example, I want to only match the travel deals by regex (my city), the politics stuff by popularity, etc.

I was under the impression that Twitter closed off their API to 3rd parties, so maybe this is no longer possible.

On a side note, my current company recently switched to a microservice framework based on Twitter's Finagle/Finatra. At first I was grumpy (damn hipsters, another failure a la Node and Mongo), but as I learn more about it and Scala, I'm really impressed!

mrmondo 2 days ago 0 replies      
A lot of people are quick to jump on the criticise Twitter bandwagon here - but I think A) this is a good compromise between giving people what they want and a bit more freedom without losing the core aspect of the service and B) there are plenty of people that like Twitter the way it is and now no longer use sites like Facebook because Twitter is good at what it's good at - sharing small pieces of information, globally, without intruding on your life.
intoverflow2 2 days ago 0 replies      
Would be nice if pic.twitter.com links didn't link to videos that auto-play with sound in their mobile app.
6stringmerc 3 days ago 1 reply      
Could this be seen as some kind of attempt to hold off Snapchat/Instagram photo-sharing competition and keep attention on their platform? Personally I don't like the image taking away from the limit; I feel pretty indifferent about the links part. Overall my impression is that this would be a reasonable change, whereas taking a hammer to the 140 limit as a general concept might not be a proper avenue.
altitudinous 2 days ago 0 replies      
This sounds like the number of photos could become unlimited, since photo count was also constrained by the 140-char limit. That limit kept the quality of attached photos/videos high. So could we expect galleries of low-quality rubbish attached to a tweet?

But maybe this is discussed. I haven't RTFA.

ck2 3 days ago 0 replies      
How are they technically going to SMS the photos and links within 140 characters then?
chestervonwinch 2 days ago 0 replies      
Stack Exchange comments behave the same way, and it has always bugged me. Maybe they will notice this and change the char-count behavior in comments as well?
brianbreslin 2 days ago 0 replies      
I would venture to guess that fewer and fewer of their users are on SMS interfaces for Twitter, and more than 90% are on smartphones. Thus the character limit isn't as big of a deal.
JulianMorrison 2 days ago 0 replies      
They should just quit with the limit. It was an amusing "brand". But it's become a millstone, blocking thoughtful posts and promoting shoutiness.
pmlnr 2 days ago 0 replies      
This is how you break backwards compatibility.
oh_sigh 3 days ago 1 reply      
Expect to see t.co/links-that-look-like-this
Animats 2 days ago 1 reply      
Also, I think Unicode characters only count as one character. This is now totally disconnected from SMS.
Thrymr 3 days ago 0 replies      
They should count the words in photos of text posted to get around the limit, though.
listentojohan 2 days ago 0 replies      
Nice update - Been missing this for a loooong time.
eng_monkey 2 days ago 0 replies      
Wow, disruptive technology!
c3534l 3 days ago 0 replies      
I predict Twitter's character limit will either double every two years or its user base will.
smegel 3 days ago 0 replies      
When all else fails, turn your product into a spamvertising platform.
slantaclaus 3 days ago 0 replies      
RIP twitter
mceoin 3 days ago 0 replies      
aaronsnoswell 3 days ago 2 replies      
Why did this make the front page of HN?
stephenitis 3 days ago 7 replies      
This is long overdue. Can they fix this...

When a Tweet starts with a @username, the only users who will see it in their timeline (other than the sender and the recipient) are those who follow both the sender and the recipient.

".@someone Hello I want to respond to you but in public!"

realitycheckxxx 3 days ago 0 replies      
Twitter popularized those URL-shortening services. It is now kind of like the Chinese foot-binding tradition (making feet smaller to conform to social expectations): popular for a time, but then it went away. The same could happen to the URL-shortening companies out there.
High CPU use by taskhost.exe when Windows 8.1 user name contains user microsoft.com
444 points by ivank  3 days ago   202 comments top 29
skykooler 2 days ago 10 replies      

"To resolve the issue, do not create a user account that contains the string 'user' on the computer."

That's not really a resolution so much as a workaround.

praptak 2 days ago 1 reply      
In other news, the printer won't print on Tuesdays (please print on Mondays instead):


gwenzek 2 days ago 2 replies      
My Indian cousin Useraji doesn't find this funny
accounthere 2 days ago 2 replies      
Patient: "It hurts when I touch my abdomen"

Doctor: "Solution: stop touching your abdomen"

dave2000 2 days ago 3 replies      
From the people who brought you: "if you don't want your contacts to use your wifi for free, change your wifi station name to something with '_optout' in the name."

They're using pretty shit programmers these days I guess. Don't Apple and Google use pretty smart guys? How were they hoping to catch up with these muppets running the show?

kazinator 2 days ago 1 reply      
Do Nvidia updates still make that UpdatusUser account? Oops.
hoodoof 2 days ago 2 replies      
I'd love to see the source code of the bug.
acqq 2 days ago 0 replies      
Just checked, one of the automatically present accounts on the Windows 8.1 machine I have access to is:


It seems to be a mostly undocumented "feature"; searching for something "official", I've just found this "errata":


"Creating or joining a HomeGroup creates the HomeGroupUser$ account and the HomeUsers group, and adds all local accounts to the group."

People have real problems with it:


alkonaut 2 days ago 3 replies      
This has to be trivially solvable in a backwards-compatible manner (at least if the library at fault is one of windows' own which they update with windows update). Win 8.1 is an OS they actively update.

So why isn't this just fixed in the next update?

It would be one thing if they had to research the problem and presented a workaround in the meantime. But they appear to have found the cause! To me it seems it would take ten times longer to find this bug than to fix it. Once they've spent the effort to find it, why not patch it?

rbanffy 2 days ago 1 reply      
I am not sure how I would even write this bug.
0xmohit 2 days ago 0 replies      
Maybe "user" in user name causes recursion. Perhaps the "recursion" isn't trivial to fix.

I hope that Microsoft doesn't use the same reasoning for security vulnerabilities.

pbnjay 2 days ago 2 replies      
The truly mind-blowing thing here is that it's "intermittent" ... so they only _sometimes_ check the username for "user" ??
JdeBP 2 days ago 0 replies      
This is another example of a poorly written MS KnowledgeBase article. It's version 1.0, dated 2015-01-05. It apparently has not been touched in a year despite at least one glaring spelling mistake.

For those who want to know what's going on, see this report from the year before, complete with stack trace:

* http://answers.microsoft.com/en-us/windows/forum/windows8_1-...

Friedduck 13 hours ago 0 replies      
Short of completely re-imaging the machine, you could delete the offending user profile and re-create it. As people have pointed out, renaming a user leaves a messy jumble of old and new, and may not solve the problem.
spriggan3 2 days ago 0 replies      
Is it possible to rename an account previously named "user"? My computer running Windows 8.1 came with such an account name as an administrator.
tempodox 2 days ago 0 replies      
High CPU use when Windows is installed.
ape4 2 days ago 1 reply      
On Windows 8.1 I found that a program with "update" in the name, e.g. MyUpdate.exe, was treated differently by the OS.
zerr 2 days ago 1 reply      
I guess someone who implemented this passed a whiteboard interview...
anon_peace 2 days ago 0 replies      
Maybe this "feature" enables some kind of backdoor, either within the task or a corresponding library. It's strange that the task's behavior depends on whether an account name contains the word "user."
cakes 2 days ago 1 reply      
Hopefully there aren't too many enterprises out there with [domain]\user[0-9]+ type definitions!
gberger 2 days ago 3 replies      
What might be the root cause?
padmabushan 2 days ago 2 replies      
The master fix could be: "Avoid using this operating system."
steve371 2 days ago 1 reply      
Most hilarious KB I read today.
blinkingled 2 days ago 1 reply      
I actually have a couple of Win 8.1 VMs that have VMUser as a user, and I never noticed any CPU usage issues.

But the non-solution is just pathetic response from MS - they used to talk in updates and fixits but with all the focus on Windows 10 now, they hardly seem to have a reason or incentive to care about older OSes.

brett40324 2 days ago 1 reply      
Is this resolved, or can it be replicated on Windows 10?
malkia 2 days ago 0 replies      
Oh strstr() - that witchery... your foul black magic craft has been poisoning people's minds for 40 years...
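A hypothetical Python sketch of the bug class the KB description suggests; the account names and the substring check below are illustrative guesses, not Microsoft's actual code:

```python
# Hypothetical illustration: a check meant to recognize built-in
# accounts (e.g. "UpdatusUser") by substring also fires for any
# human account whose name merely contains "user".
SPECIAL_ACCOUNTS = {"updatususer", "homegroupuser$"}

def is_special_account(name: str) -> bool:
    return "user" in name.lower()            # buggy: substring, not equality

def is_special_account_fixed(name: str) -> bool:
    return name.lower() in SPECIAL_ACCOUNTS  # exact match against a known list

print(is_special_account("Grundhauser"))        # True: false positive
print(is_special_account_fixed("Grundhauser"))  # False
```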
sklogic 2 days ago 1 reply      
Sort of like that hilarious systemd bug with parsing the 'debug' kernel argument.
chris_wot 2 days ago 0 replies      
Sucks to be in the Grundhauser clan right now.
userulluipeste 2 days ago 0 replies      
This is how it looks when a corporation that does not see any plausible reason to throw resources at this kind of problem is in charge. No matter that this is something that affects the paying customer in a very direct, personal way; no matter that this is just one symptom of a possibly wider range of buried issues. Imagine this report occurring in the community alternative ReactOS, and the kind of attention it would receive. ReactOS just released 0.4.1, BTW!
Academics Make Theoretical Breakthrough in Random Number Generation threatpost.com
379 points by oolong_decaf  1 day ago   155 comments top 23
tptacek 1 day ago 0 replies      
I'm sure this is as important to computer science as the article claims, but not having even read the paper I can say pretty confidently that it isn't going to have much of an impact on computer security. Even if it became far easier to generate true random numbers, it wouldn't change (a) how we generate randomness at a systems level or (b) what goes wrong with randomness.

Our problem with cryptography is not the quality of random numbers. We are fine at generating unpredictable, decorrelated bits for keys, nonces, and IVs. Soundly designed systems aren't attacked through the quality of their entropy inputs.

The problem we have with randomness and entropy is logistical. So long as our CSPRNGs need initial, secret entropy sources of any kind, there will be a distinction between the insecure state of the system before it is initialized and the (permanent) secure state of the system after it's been initialized. And so long as we continue building software on general purpose operating systems, there will be events (forking, unsuspending, unpickling, resuming VMs, cloning VMs) that violate our assumptions about which state we're in.

Secure randomness isn't a computational or cryptographic problem (or at least, the cryptographic part of the problem has long been thoroughly solved). It's a systems programming problem. It's back in the un-fun realm of "all software has bugs and all bugs are potential security problems".

It's for that reason that the big problem in cryptography right now isn't "generate better random", but instead "factor out as much as possible our dependence on randomness". Deterministic DSA and EdDSA are examples of this trend, as are SIV and Nonce-Misuse Resistant AEADs.

(unsound systems frequently are, but that just makes my point for me)

hannob 1 day ago 2 replies      
While this may be an interesting theoretical result, it almost certainly has zero practical implications for cryptography.

We already know how to build secure random number generators. Pretty much every real world problem with random numbers can be traced back to people not using secure random numbers (or not using random numbers at all due to bugs) or using random number generators before they were properly initialized (early boot time entropy problems).

This random number topic is clouded in mystery; a lot of stuff gets proposed that solves nothing (like quantum RNGs), and a lot is more folklore than anything else (depleting entropy and the whole /dev/random story). In the end it's quite simple: you can build a secure RNG out of any secure hash or symmetric cipher. Once you've seeded it with a couple of random bytes, it's secure forever.
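A toy sketch of the hash-based construction described above. This is an illustration of the principle only, not a production CSPRNG (use the OS's own generator for real work), and every name in it is made up:

```python
import hashlib

class ToyHashDRBG:
    """Toy generator built from SHA-256 to illustrate the point above:
    seed once with a few truly random bytes, then expand indefinitely.
    A sketch of the principle, not a vetted design."""

    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(b"init" + seed).digest()

    def random_bytes(self, n: int) -> bytes:
        out = b""
        counter = 0
        while len(out) < n:
            # Output blocks: hash of secret state plus a counter.
            out += hashlib.sha256(
                self.state + counter.to_bytes(8, "big")).digest()
            counter += 1
        # Ratchet the state forward so a later state compromise
        # cannot reconstruct earlier outputs.
        self.state = hashlib.sha256(b"ratchet" + self.state).digest()
        return out[:n]
```

The security of the output then rests entirely on the secrecy of the seed and the strength of the hash, which is the comment's point: the hard part is the seeding, not the expansion.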

oolong_decaf 1 day ago 0 replies      
Here's a link to the actual paper: http://eccc.hpi-web.de/report/2015/119/
electrograv 1 day ago 3 replies      
> We show that if you have two low-quality random sources (lower quality sources are much easier to come by), two sources that are independent and have no correlations between them, you can combine them in a way to produce a high-quality random number

"Independent and no correlations" sounds like a crippling assumption if you want to use any two deterministic PSRNGs. How can you possibly guarantee they're completely un-correlated and independent without seeding them with collectively more bits of entropy than you can get out of the combined system?

I'm not sure what "independent" is even supposed to mean for a deterministic sequence, which by definition is recursively dependent.

beambot 1 day ago 3 replies      
Reminds me of the Von Neumann method of using a biased coin to generate unbiased random coin flips: http://web.eecs.umich.edu/~qstout/abs/AnnProb84.html

(Edit: not the algo itself, just the notion of combining randomness.)
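For reference, the Von Neumann trick mentioned above can be sketched in a few lines; the 90/10 coin below is a simulated stand-in for a real biased source:

```python
import random

def von_neumann_bit(biased_source):
    """Turn flips of an independent biased coin into one fair bit:
    for independent flips, P(0 then 1) == P(1 then 0), so the first
    bit of an unequal pair is unbiased regardless of the coin's bias."""
    while True:
        a, b = biased_source(), biased_source()
        if a != b:
            return a

# Simulated 90/10 biased coin as a stand-in for a weak source.
rng = random.Random(0)
biased = lambda: 1 if rng.random() < 0.9 else 0

bits = [von_neumann_bit(biased) for _ in range(10_000)]
print(sum(bits) / len(bits))  # close to 0.5 despite the 0.9 bias
```

Note the cost: the more biased the coin, the more flips get discarded, which is one reason this is debiasing rather than general-purpose extraction.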

deckar01 1 day ago 0 replies      
> Abstract:

> We explicitly construct an extractor for two independent sources on n bits, each with min-entropy at least log^C(n) for a large enough constant C. Our extractor outputs one bit and has error n^(-Ω(1)). The best previous extractor, by Bourgain, required each source to have min-entropy 0.499n.

> A key ingredient in our construction is an explicit construction of a monotone, almost-balanced boolean function on n bits that is resilient to coalitions of size n^(1-δ), for any δ > 0. In fact, our construction is stronger in that it gives an explicit extractor for a generalization of non-oblivious bit-fixing sources on n bits, where some unknown n-q bits are chosen almost polylog(n)-wise independently, and the remaining q = n^(1-δ) bits are chosen by an adversary as an arbitrary function of the n-q bits. The best previous construction, by Viola, achieved q = n^(1/2-δ).

> Our explicit two-source extractor directly implies an explicit construction of a 2^((log log N)^O(1))-Ramsey graph over N vertices, improving bounds obtained by Barak et al. and matching independent work by Cohen.


Dagwoodie 1 day ago 8 replies      
What makes randomness so hard? I had this crazy thought a while back and am wondering if it would work out:

Say you took a small disk-shaped object like a hockey puck with a window on it, and you filled it with sand: 50% white sand and 50% black sand. Inside the puck would be blades attached to a motor, rotating slowly to constantly change the pattern. The pattern formed in the window would be truly random, wouldn't it? You could mount this to a PCIe card with a camera...

dave2000 1 day ago 2 replies      
What is the possibility that this is an attack on cryptography: convince people that it's safe to produce random numbers this way using an inaccurate "proof", and then have an easy/easier time decrypting stuff produced by anyone who uses it?
wfunction 1 day ago 1 reply      
Could someone explain why XORing the outputs of the two sources isn't optimal?
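One way to see it: XOR does shrink the bias of two independent single bits, but it fails as an extractor in the n-bit min-entropy model the paper targets. A sketch of the single-bit calculation, under the standard convention that bias is the deviation from 1/2:

```python
def xor_output_bias(e1: float, e2: float) -> float:
    """If P(a=1) = 1/2 + e1 and P(b=1) = 1/2 + e2 for independent
    bits a and b, then P(a XOR b = 1) = 1/2 - 2*e1*e2: the output
    bias is the product of the input biases (times 2)."""
    p1, p2 = 0.5 + e1, 0.5 + e2
    p_xor = p1 * (1.0 - p2) + (1.0 - p1) * p2  # exactly one bit is 1
    return p_xor - 0.5

print(xor_output_bias(0.1, 0.1))  # about -0.02: much less biased

# For n-bit min-entropy sources, though, plain XOR is not an
# extractor: if both sources happen to fix the same bit position,
# that bit of the XOR is constant no matter how much min-entropy
# the rest of each source carries.
```

So XOR is fine for combining independent biased bits, but a genuine two-source extractor has to handle arbitrary structure in the sources, which is the hard part the paper addresses.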
jaunkst 1 day ago 5 replies      
I have always wondered why we don't introduce physical randomness into cryptography. Let's take scalability out of the question and look at the problem at the fundamental level. If we used a box of sand that shifted each time a random number was requested, and a camera to scan it and produce a number from this source, would it not be more random than any other method? I'm not a professional in this field; I am just genuinely asking why not.
kovvy 1 day ago 0 replies      
How well does this handle a biased source of random numbers in one or more of the inputs? If someone has set up your random number source to be more easily exploitable (or just done a really bad job setting it up), does combining it with another poor source with this approach mean the results are still useful?
Cieplak 1 day ago 2 replies      
Does this imply that XORing /dev/urandom with /dev/random is a good practice?

PS: Thanks for clarifying @gizmo686. The Arch Linux wiki suggests that urandom re-uses the entropy pool that /dev/random accumulates, so this is indeed a BAD idea.

I found this helpful as well:

Overall, their construction rather reminds me of a double pendulum, which is one of the simplest examples of deterministic chaos.

csense 1 day ago 1 reply      
"...if you have two low-quality random sources...you can combine them in a way to produce a high-quality random number..."

I tried to skim the paper, but it's really dense. Can someone who understands it explain how what they did is different than the obvious approach of running inputs from the two sources through a cryptographically strong hash function?

Houshalter 1 day ago 1 reply      
I read the article and the comments and I'm still confused why this is important.

I mean it sounds trivial. Why not take the hash of the first random number, and xor it with the first random number. Then optionally hash the output and use that as a seed for a RNG. If any part of the process isn't very random, that's fine, it's still nearly impossible to reverse and doesn't hurt the other parts.

marshray 1 day ago 1 reply      
How is this different from taking two independent bits with < 1 bit of entropy and XORing them together to combine their entropy? (Up to a max of 1 full bit.)
nullc 1 day ago 0 replies      
But can anyone extract the algorithm from the paper?


wfunction 1 day ago 4 replies      
Isn't "Independent and no correlations" redundant? How can two random variables be independent but correlated?
mirekrusin 1 day ago 0 replies      
Can someone explain why it's considered so hard to get randomness? I mean, you can take an old radio and hear random noise; is it hard to put a tiny antenna in the computer?
bootload 1 day ago 0 replies      
another article via UT (Uni. Texas), "New Method of Producing Random Numbers Could Improve Cybersecurity" ~ http://news.utexas.edu/2016/05/16/computer-science-advance-c...
Bromskloss 1 day ago 1 reply      
> A source X on n bits is said to have min-entropy at least k if

Can a rigorous definition of "source" be found somewhere?

nullc 1 day ago 0 replies      
But can anyone extract an algorithm from the paper? :)
roschdal 1 day ago 5 replies      
We show that if you have two low-quality random sources (lower quality sources are much easier to come by), two sources that are independent and have no correlations between them, you can combine them in a way to produce a high-quality random number,

So Math.random() * Math.random() ? :)

ninjakeyboard 1 day ago 0 replies      
praise RNGesus!
Open Whisper Systems Partners with Google on End-To-end Encryption for Allo whispersystems.org
326 points by ThatGeoGuy  1 day ago   208 comments top 14
robert_foss 1 day ago 7 replies      
To me it seems like Open Whisper Systems is accepting a lot of concessions in order to have Signal included in products. The trust I once had for Moxie is quickly dissipating.

* Privacy is only provided in Allo in a secondary mode. Not by default.

* Federation of the Signal protocol has been rejected for non-technical reasons.

Also, on a personal note, the desktop client requiring Chrome is pretty awful.

cm3 1 day ago 4 replies      
Has anyone given this


more thought, and whether one should avoid Signal and work with a friendlier project that doesn't seemingly fail at its desire to have widespread use of the protocol and that actually tried to sue WireApp? WireApp is now approved as a non-infringing implementation in Rust, so that's great for reliability.

Edit: The suing part was initiated by Wire as a response to Moxie demanding GPL compliance over their claim Wire is infringing. I got that backwards.

tptacek 1 day ago 7 replies      
This is fantastic news. The two largest messaging platforms on the Internet will both be using Signal protocol.

I could ask for more: E2E could be the default for Allo, and it isn't. That's not great. But the E2E you get when you ask for it will apparently be best-in-class.

NetStrikeForce 15 hours ago 1 reply      
I'm not sure I got it right.

Is Google going to be scanning all my conversations to give me suggestions on what to say next? Really?

I understand the price of things like Gmail, where I get a robust email system in exchange for the scanning of my emails and mining of my data. I got something very good from Google; they got my data. Not the best deal I ever made, but it has (had?) a strong appeal.

On the other hand, I don't understand this Allo thing: there's no appeal in the smart assistant; it doesn't bring anything I want to have.

Jarwain 1 day ago 2 replies      
What I'm curious about, and think would be really neat, is if one could take advantage of the shared Signal Protocol to send messages cross-platform. Specifically, sending an encrypted message to a Whatsapp user from Allo. Or to a Signal user from Whatsapp. Or any combination/permutation really.
Roritharr 1 day ago 2 replies      
I really wonder what the people of allo.im are thinking now.
superkuh 1 day ago 3 replies      
Does this require a phone number like the rest of Open Whisper Systems products?
lawnchair_larry 21 hours ago 0 replies      
For someone concerned about privacy, it's baffling to me that we'd be forced into sharing our phone number in order to communicate.
sigmar 1 day ago 0 replies      
Great news! Hopefully this also means that identity verification (through a key fingerprint) will be available in Allo (and in Duo?)
mahyarm 1 day ago 0 replies      
I wonder how many other signal protocol integrations are in progress...
chinathrow 1 day ago 4 replies      
That is awesome - now we also need to kill metadata collection. Is this feasible?

Oh, and off-the-record was there on Hangouts/Gtalk before. I used it, but the chats were replicated across clients (e.g. Pidgin vs gmail.com), so it was not really off-the-record (i.e. they lied).

dang 1 day ago 1 reply      
Please don't do this here.
dang 5 hours ago 1 reply      
Please don't do this. Personal attacks are not ok on HN, regardless of how wrong or annoying someone is.

We detached this subthread from https://news.ycombinator.com/item?id=11728339 and marked it off-topic.

p0ppe 1 day ago 5 replies      
Why didn't Google just develop this in house? It almost feels like they're admitting to having no credibility on privacy without an external partner.
Warren Buffett's Berkshire Hathaway Takes $1B Position in Apple wsj.com
358 points by stevenj  3 days ago   252 comments top 28
jalopy 3 days ago 6 replies      
[edit]Disclosure: I am a Berkshire Hathaway investor and a more-intense-than-average follower of the company and its top management.[/edit]

This is almost certainly a bet by Ted Weschler or Todd Combs - Buffett's chief investing lieutenants.

Buffett has maintained his aversion to tech as he doesn't "understand"[1] it, and I see nothing to indicate he's changed his mind at this stage in the game.

Also - a $1bn investment is relatively small change for Buffett, but fits squarely within the size range of Ted and Todd's reported $8-10b (each) investment warchest.

[1]: Not "understanding" doesn't mean he doesn't or couldn't understand the technology aspects; rather, it means he doesn't have the ability to see which of the participants will survive and thrive in 10 years time to justify an investment today. IBM is a notable exception.

whafro 3 days ago 6 replies      
This isn't a huge bet in BH's world, barely cracking their top 20 holdings list, and it barely registers a blip for Apple, but it's culturally interesting to see Buffett make another no-tech exception for Apple, and it will be interesting to see how much that confidence transfers to the street.

Buffett and Berkshire do tend to love companies that generate cash, and Apple certainly does that. They don't tend to chase massive growth, but rather steady climbs backed by real profits. It seems like a pretty reasonable fit.

lujim 3 days ago 5 replies      
Took. This happened in March at an average price of $109. Today it is around $93. Berkshire just disclosed it today.
MicroBerto 3 days ago 14 replies      
I'm just here to set in stone and state for the record that I believe this will be a losing play. My reasons are here:

Rationale here from a comment over 3 months ago:


> last time I was on an airplane (December), every single older woman over the age of 60 had an iPhone. This means that it's not only reached critical mass (the late majority on the technology adoption curve has been achieved), but now it's no longer hip.

> I'm not sure what will be next, but I'm guessing it won't be Apple's.

> Were I gambling man, I'd have shorted Apple's stock right there after that airplane ride.

So when I'm wrong, y'all can roast me proper.

swalsh 3 days ago 2 replies      
Let's say you're an owner of one of the nation's biggest car insurance companies, and the biggest threat to auto insurance is automated cars. Perhaps the smart thing to do, if you're an investor, is to hedge your bets by investing in a company that is likely to dominate the sector.

Of course Alphabet would be a more logical path, so this theory might be bollocks.

Rainymood 3 days ago 2 replies      
To get around the paywall, follow this link


This bit.ly link redirects you to the first Google hit after searching for this article on Google. For some reason this circumvents the paywall.

Negative1 3 days ago 2 replies      
And here I was thinking "what a great time to buy Apple stock".

I don't think Buffett is betting on technology. From his perspective he is betting on the car and car brand of the future.

Mikeb85 3 days ago 0 replies      
Not a terrible bet. Their dividend yield is fairly respectable at current prices, the price has fallen to around 2012 levels, and Berkshire mostly likes long-term, stable bets.

I personally don't see Apple doing too much in the short term, but they'll certainly continue to be around, and will probably match the market in returns for the foreseeable future.

zhte415 3 days ago 2 replies      
Very interested in how the trade was executed. A big-fish order: was it done in small chunks, by proxies? It was certainly not done in one go.

Experience: even 10 years back, when I was in an investment management team, a $100 million order (of roughly the same magnitude relative to total outstanding) was done painstakingly, often over several days, via various brokers, varying what was done based on intra-hour liquidity. Now liquidity moves in much less than hours and is highly automated, so any HFT firm looking would be interested.

paulpauper 3 days ago 6 replies      
it seems like a lot until you realize it's 1/300 of Berkshire Hathaway and 1/550 of Apple
lingben 3 days ago 0 replies      
"I can tell you this was not a purchase Warren Buffett made. It was one of his two lieutenants"

@BeckyQuick on CNBC this morning

reviseddamage 3 days ago 1 reply      
What I get from the BH signal: well, I don't have to worry and can put AAPL into a retirement portfolio, although it probably means the stock is no longer salacious enough for shorts or immediate-growth investors.
11thEarlOfMar 3 days ago 2 replies      
Seems that there is a significant difference of opinion between Carl Icahn and Warren Buffett.


louprado 3 days ago 2 replies      
A bit off topic, but when you buy that many shares of a public company can you negotiate a discount? Is the trade done directly with Apple, or does Berkshire Hathaway just buy it through Nasdaq ?
shrugger 3 days ago 3 replies      
I swear this is just for headline generation.

I mean, a billion IS a lot of money, but it's a very small risk on BH's part, and an even smaller one on Apple's.

That amount of money is a lot of money relative to other money, but it's not really a big deal relative to either party here.

However, I think that Buffett is just making a statement of 'look how much I think of Apple' by throwing that bn around, perhaps inspiring other investors?

Not really my preferred science...

elcapitan 3 days ago 3 replies      
I thought Buffett doesn't invest in tech companies - is Apple now so far away from tech, and so close to being a product like Coca-Cola?
bernardlunn 3 days ago 0 replies      
Apple is now almost a value stock. With an incredible franchise. Don't bet against Buffett
EGreg 2 days ago 0 replies      
This doesn't make much sense to me. I've expected Apple to have nowhere to go but down for a few years now. I realize this isn't a huge bet, but what are they betting on? Apple for cars?
nxzero 3 days ago 0 replies      
Curious, anyone have any info on the relationship between Jobs and/or Cook and Buffett?
programminggeek 3 days ago 0 replies      
To be clear, it's not even a technology investment. It's more of a clear, easy to understand investment in a hardware product company with great margins, great marketing, great customer loyalty. It's like investing in Coke.
qaq 3 days ago 2 replies      
For everyone dismissing this as a small play for BH: it's not that small, and we have no clue about other possible elements of this position (e.g. derivatives, options, etc.).
darawk 3 days ago 2 replies      
As someone who is short AAPL, my perspective is that they simply don't have room for growth. At the moment, they are essentially a one product company: the iPhone. And yes, that product is insanely profitable, but those numbers are going down, and personally I don't see a way for that trend to change.

Apple has expanded into every available market on the planet (literally). The only way for them to grow at this point is to increase their market share relative to android, or increase world economic growth sufficiently that more people in developing countries can buy iPhones.

Increasing their market share relative to android in any significant way seems incredibly unlikely to me at this point. As technology stagnates (as is happening with smart phones), the premium products lose cachet and the lower-end products start to achieve parity with their premium competition.

The two markets where they could still theoretically hope to achieve more growth/penetration are China and India. But China has been antagonistic to them of late, and has demonstrated an interest in protecting its own incumbents who are now making phones that even Westerners will buy (e.g. Huawei). India on the other hand is a highly tech-oriented culture, and as such has a predilection for customization and control that tends to make them prefer android phones. Not to mention that the CEOs of both Microsoft and Google (the only two competitor platforms to the iPhone) are currently Indian, and both companies have demonstrated a specific interest (especially Google) of expanding in the Indian market. And I think that gives them an advantage that's hard to overstate.

All of this would be fine if the market itself were expanding. But it's not. People are upgrading less and less frequently. This looks to me just like the PC market of 5-10 years ago or so. Things have gotten "good enough" for most people. I certainly no longer feel compelled to have the latest and greatest phone right away, and it seems to me that most people feel the same.

Lastly, there is the possibility that they will create some new category defining product. This is of course a real possibility, but I feel pretty confident that anything they attempt to do in the car market will fall flat on its face. I could certainly be wrong here, but I just don't see how they could possibly offer something so much better than existing cars that i'd want to pay an Apple-level premium for it. Especially when they're competing against someone like Tesla, who has already captured all of the rebellious smart-person cool points in this category.

Of course, I could certainly be spectacularly wrong. And to be honest, if I am, I don't think i'd mind losing the money too much. Because it'd mean that we'd all probably have some cool new product to play with.

vadym909 3 days ago 3 replies      
Wow - this must be hard for Buffett to explain to Bill Gates, his close friend. Did he ever invest in Microsoft?
SixSigma 3 days ago 0 replies      
Maybe he's gambling on Trump's "onshore your cash or else"
mudil 3 days ago 0 replies      
Berkshire has too much money and too few ideas as to what to do with it. Why not the second largest stock with tons of untaxed cash? And they can push for tax amnesty to repatriate it.
mandeepj 3 days ago 3 replies      
Not sure why Buffett decided so late to invest in Apple. Returns will not be as high as in the pre-iPod or iPhone era
ZoeZoeBee 3 days ago 1 reply      
Berkshire Hathaway was down 12% last year, has underperformed the S&P 500 for five years running, bought Apple north of $100, and is heavily invested in the railroads via Burlington Northern, which has seen freight revenue dropping for over a year. Which part of this is not true?
Firefox tops Microsoft browser market share for first time arstechnica.com
338 points by okket  2 days ago   203 comments top 22
mmastrac 2 days ago 7 replies      
Wow. I remember watching these stats, waiting for Firefox to edge past IE in the 2000s. Chrome really took the ball and ran with it. They had the advantage of seeing things that FF did that really worked (tabs, extensions, find-as-you-type) and the intelligence to iterate on that (multi-process support, a world-class debugger) and a really solid foundation in WebKit (née KHTML).

In a way Chrome was in the right place at the right time, with the right bunch of people. If they hadn't launched, Firefox would likely be sitting where Chrome is now, though likely without a bunch of drive to improve things as much as they currently have.

Of course, in an alternate universe where Firefox was #1 and IE #2, we might have had royalty-free video codecs mandated by standards and no W3C-endorsed DRM...

SimeVidas 2 days ago 4 replies      
Mozilla: "We are a non-profit and we protect your privacy; we won't use your data to make money."

Most users: "Nah, man. We'll go with the search engine and ad network company."

shmerl 2 days ago 3 replies      
Recently I experienced a high-profile site throwing some errors in Firefox, and their support rep suggested I use Chrome (of course I didn't). This brought back memories of the "best viewed in IE" times...
AdmiralAsshat 2 days ago 2 replies      
I'm curious how this graph would look if you compared only Linux users (obviously IE and Edge would drop off). It didn't strike me how important Firefox is to the FOSS world until I started experimenting with Linux distros and found that it was consistently pre-installed on flavors of Ubuntu/Debian/Mint/Fedora/FreeBSD. It becomes even more valuable on distros that refuse to pre-load any proprietary software.
jedberg 2 days ago 1 reply      
This isn't entirely surprising to me.

For one, Edge only runs on Windows. Windows is losing market share to Mac OS, iOS and Android (Firefox is on all three as well as Windows).

For two, I personally use Safari, Firefox, Chrome, and Chrome Canary on my box, each for their own purpose, but I have yet to find a need for Edge. There is no site that I go to that works best in Edge.

lucasmullens 2 days ago 0 replies      
About a month ago I saw a news story that said Chrome finally passed IE, based on some other browser usage tracker. Each website that tracks this can have some huge bias, so making a general claim about a browser passing another doesn't really mean much.
hackuser 2 days ago 0 replies      
More significant is that Firefox's market share dropped from 16.1% to 15.6% in three months, all of it lost to Chrome, it appears. Yikes!

What is Chrome doing right now to advance so quickly? Or has that been the rate of increase for a long time?

EdSharkey 2 days ago 2 replies      
This has to be worrisome to the Chrome team. I take this to mean that Firefox is getting faster and less janky with recent releases. Enough Windows users who are in the know and who occasionally kick the Firefox tires are finally settling on it. I don't view this as a knock on the quality of Microsoft products as much as an overall improvement in Firefox.

I'm on Mac, and I believe Chrome software quality has slipped in the past 12 months. Lots more crashes than I have ever seen and some new CSS curiosities. And that's with me rarely running Chrome - Firefox is my primary browser.

Maybe Mac is a much tougher platform to target and isn't the priority for Goog, or perhaps the transition to Blink has been rough. Whatever the reason, Chrome seems to be getting creaky in its old age.

netheril96 2 days ago 1 reply      
Firefox is really bad at rendering Chinese fonts. It chooses different fonts than other browsers, sometimes using a Japanese font to render Chinese text, and the pixelization is just palpable. I know most people here couldn't care less, but the poor internationalization may be one of the reasons for a global disadvantage compared to Chrome. Internationalization is really something open source products have traditionally lacked, as it is neither fun nor sexy.
wodenokoto 2 days ago 3 replies      
Didn't Firefox use to have almost 30% ? Or was that only in select markets?
partiallypro 2 days ago 1 reply      
I prefer Firefox to Chrome, Edge is still unstable but I do like some of the direction they are going in. The lack of extensions I think killed it out of the gate. It's also odd to me that Edge doesn't update through the Windows Store, and instead is updated through Windows Update...doesn't that defeat the purpose of it being a UWA?

I actually find what Opera is doing a lot nicer than any of the others at the moment, though; I find myself using it more and more.

ksec 1 day ago 0 replies      
As some have pointed out, Chrome is often installed by bundling with other software, and usually really bad software at that. But most users who had no idea what Chrome was at the time were pleasantly surprised by how fast it was, and they stuck with it.

In the early 2010s I force-switched the whole company, about 200 people, to Firefox, but a few years down the road everyone had quietly installed Chrome and was using it. It was fast, and they didn't care about anything else. Forcing them back onto Firefox was met with opposition; they said using something inferior was insane.

Complaining to Mozilla fell on deaf ears. It was mostly a management issue from top to bottom; they were so full of themselves, so righteous, that they failed to realize the market was changing. By the time they realized, they had already shrunk below 20% market share and were continuing downwards.

Somewhere along the line, Mozilla changed their tone: it wasn't about market share, it was about the Open Web, and always had been. But how do you keep the Web open if you have no influence on it whatsoever? They wasted resources on Firefox OS, which was a dismal failure. They thought JS was king and that everything, including the OS, should someday be written in JS.

Firefox is dying, and I am glad. Because, as a user from the Netscape era, every time Mozilla / Firebird / Firefox has been reborn, things have gotten better.

Firefox is very noble. But the world has never been about one way or the other.

Luckily, in the past year something has happened within Mozilla. I have no idea what it is because I am not following their posts anymore (e10s is STILL not shipped), but things are getting better. User experience matters: less jank, memory usage kept low, and most importantly Chrome has been getting worse with every release (strange indeed). This means more people are switching back to Firefox.

And for users with specific workflows and hundreds of tabs, Firefox is still the only option on the market.

KORraN 1 day ago 0 replies      
I always wonder how big an influence all kinds of ad blockers have on these statistics. Like a lot of you said, there are a LOT of "forced" installs of Chrome on the computers of not-so-power users. Most of them don't even know what a browser is, so I bet they don't have any ad blocker (unless someone installed it for them). On the other side, we have power users who have ad blockers and (AFAIK) are not visible to scripts from StatCounter and similar companies. So the results for browsers like Firefox, Opera, Vivaldi, and Brave may be understated, am I right?
cpeterso 2 days ago 0 replies      
The numbers just show Firefox losing market share more slowly than IE/Edge, not that Firefox is gaining. Chrome is still climbing. I wonder when Chrome's market share will peak and level out.
xiphias 2 days ago 1 reply      
It must be really hard for the Microsoft CEO. He's doing what he can to fix the problems that Steve Ballmer left, but it's extremely hard for him.
tkubacki 1 day ago 0 replies      
What I like about Firefox and Chrome is that they democratized Linux as a desktop OS. While IE was mainstream, all sites assumed you were on a Windows box. Now it's rare that there's a site I can't use, except a few Silverlight-based dinosaurs.
pessimizer 2 days ago 0 replies      
Chrome's share is Firefox's reward for making their browser indistinguishable from Chrome in every way except that their entire UI freezes every time it hits some bad JavaScript. Great choice, to go from being a great Firefox to a crappy Chrome clone.
beefsack 2 days ago 1 reply      
If Servo ever lands in Firefox I wonder what sort of impact that'd have on market share.
iLoch 2 days ago 0 replies      
What an awful way to present that data...
erikb 2 days ago 0 replies      
Didn't that happen, like, 10 years ago?
dukenuke 2 days ago 9 replies      
Firefox also sends enormous amounts of metadata about your browsing to the cloud and it's very difficult for non-savvy users to turn this off. They call it telemetry and 'safe browsing', but users overlook that every URL is checked against a database of URLs already in Google's 'safe browsing' repository. Firefox is not actually private and their business model can't allow for privacy, because they're in bed with Google.

Use something like Palemoon and configure about:config a bit more and you should be fine. But be very skeptical of Mozilla claiming FF is some privacy enhancing tool. Their plugins ecosystem is also a security nightmare...

hrbrtglm 2 days ago 3 replies      
For me, it's all about the details, and I just can't come back to Firefox for silly reasons, even if I appreciate their stance on privacy.

The tab shape repulses me; I know that's a strong feeling and I can't explain it. The asymmetrical back and forward buttons bother me as well. I can't find how to whitelist domains for accepting cookies in Firefox. Switching profiles is much easier with Chrome. I'm used to Chrome's developer tools.

I don't like either Safari or MS Edge.

If it wasn't for the missing onenote extension, maybe I'd be using opera. Again, all about the details.

AWS X1 instances 1.9 TB of memory amazon.com
338 points by spullara  1 day ago   178 comments top 22
jedbrown 1 day ago 1 reply      
Does anyone have numbers on memory bandwidth and latency?

The x1 cost per GB is about 2/3 that of r3 instances, but you get 4x as many memory channels if you spec the same amount of memory via r3 instances, so the cost per memory channel is more than twice as high for x1 as for r3. DRAM is valuable precisely because of its speed, but that speed is not cost-effective on the x1. As such, the x1 is really for the applications that can't scale with distributed memory. (Nothing new here, but this point is often overlooked.)

Similarly, you get a lot more SSDs with several r3 instances, so the aggregate disk bandwidth is also more cost-effective with r3.
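The commenter's cost argument can be sketched numerically. All prices and hardware figures below are illustrative assumptions for this sketch, not actual AWS rates:

```python
# Assumed figures: one x1 instance vs. enough r3.8xlarge instances to
# match its RAM. None of these numbers are real AWS prices.
X1_PRICE_HR, X1_GB, X1_CHANNELS = 13.34, 1952, 16
R3_PRICE_HR, R3_GB, R3_CHANNELS = 2.66, 244, 8

# Cost per GB of RAM: the x1 comes out at roughly 2/3 of r3.
x1_per_gb = X1_PRICE_HR / X1_GB
r3_per_gb = R3_PRICE_HR / R3_GB

# But matching the x1's RAM with r3 nodes buys 4x the memory channels,
# so cost per channel (a proxy for aggregate bandwidth) favors r3.
r3_nodes = X1_GB // R3_GB                              # 8 nodes to match 1952 GB
channel_ratio = r3_nodes * R3_CHANNELS / X1_CHANNELS   # 4.0
x1_per_channel = X1_PRICE_HR / X1_CHANNELS
r3_per_channel = R3_PRICE_HR / R3_CHANNELS

print(round(x1_per_gb / r3_per_gb, 2))            # ~0.63: x1 cheaper per GB
print(channel_ratio)                              # 4.0: r3 fleet has 4x channels
print(round(x1_per_channel / r3_per_channel, 1))  # ~2.5: x1 pricier per channel
```

With these assumed figures, both halves of the argument hold at once: cheaper per GB, more than twice as expensive per memory channel.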

lovelearning 1 day ago 14 replies      
This is probably a dumb question, but what does the hardware of such a massive machine look like? Is it just a single server box with a single motherboard? Are there server motherboards out there that support 2 TB of RAM, or is this some kind of distributed RAM?
MasterScrat 1 day ago 3 replies      
As a reference the archive of all Reddit comments from October 2007 to May 2015 is around 1 terabyte uncompressed.

You could do exhaustive analysis on that dataset fully in memory.

ChuckMcM 1 day ago 4 replies      
That is pretty remarkable. One of the limitations of doing one's own version of mass analytics is the cost of acquiring, installing, configuring, and then maintaining the hardware. Generally I've found AWS to be more expensive but you get to "turn it on, turn it off" which is not something you can do when you have to pay monthly for data center space.

It makes for an interesting exercise to load in your data, do your analytics, and then store out the meta data. I wonder if the oil and gas people are looking at this for pre-processing their seismic data dumps.

1024core 1 day ago 1 reply      
Spot instances are about $13 - $19/hr, depending on zone. Not available in NorCal, Seoul, Sydney and a couple of other places.
dman 1 day ago 4 replies      
Going to comment out the deallocation bits in all my code now.
pritambarhate 1 day ago 4 replies      
Question for those who have used monster servers before:

Can PostgreSQL/MySQL use such type of hardware efficiently and scale up vertically? Also can MemCached/Redis use all this RAM effectively?

I am genuinely interested in knowing this. Most of the times I work on small apps and don't have access to anything more than 16GB RAM on regular basis.

vegancap 1 day ago 8 replies      
Finally, an instance made for Java!
krschultz 1 day ago 8 replies      
A bit under $35,000 for the year.
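Back-of-the-envelope check, assuming the ~$4/hour effective reserved rate another commenter mentions (an assumption, not a quoted AWS price):

```python
hourly_rate = 4.0           # assumed effective rate after partial upfront, $/hr
hours_per_year = 24 * 365   # 8760
print(hourly_rate * hours_per_year)  # 35040.0 -> roughly $35k for the year
```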
realworldview 1 day ago 0 replies      
Recompiling tetris with BIGMEM option now...
Erwin 1 day ago 0 replies      
I'm curious about this AWS feature mentioned: https://aws.amazon.com/blogs/aws/new-auto-recovery-for-amazo...

We've experiemnted with something similar on Google Cloud, where an instance that is considered dead has its IP address and persistent disks taken away, then attached to another (live or just created instance). It's hard to say whether this can recover from all failures however without having experienced them or even work better than what Google claims it already does (moving around failing servers from hardware to hardware). Anyone with practical experience in this type of recovery where you don't duplicate your resource requirements?

jayhuang 1 day ago 0 replies      
Funny how the title made me instantly think: SAP HANA. After not seeing it for the first 5 paragraphs or so, Ctrl+F, ah yes.

Not too surprising given how close SAP and Amazon AWS have been ever since SAP started offering cloud solutions. Going back a couple years when SAP HANA was still in its infancy; trying it on servers with 20~100+ TB of memory, this seems like an obvious progression.

Of course there's always the barrier of AWS pricing.

zbjornson 19 hours ago 0 replies      
How does this thing still only have 10 GigE (plus 10 dedicated to EBS)? It should have multiple 10 Gig NICs that could get it to way more than that.
0xmohit 1 day ago 0 replies      
Wow! http://codegolf.stackexchange.com/a/22939 would now be available in production.
manav 1 day ago 0 replies      
Hmm around $4/hr after a partial upfront. I'm guessing that upfront is going to be just about the cost of a server which is around $50k.
micro-ram 1 day ago 2 replies      
What happened to the other 16 threads?

18(core) * 4(cpus) * 2(+ht) = 144

ben_jones 1 day ago 0 replies      
I'd be guilty if I ever used something like this and under utilized the ram.

"Ben we're not utilizing all the ram."

"Add another for loop."

mrmondo 1 day ago 0 replies      
I'm taking it this is so people can run NodeJS or MSSQL on AWS now? Heh, sorry for the jab - what could this be used for considering that AWS' top tier provisioned storage IOP/s are still so low (and expensive)?

Something volatile running in a RAM disk maybe?

samstave 1 day ago 2 replies      

Thats amazing.

amazon_not 1 day ago 1 reply      
The pricing is surprisingly enough not terrible. Given that dedicated servers cost $1-1.5 per GB of RAM per month the three year price is actually almost reasonable.

That being said, a three year commitment is still hard to swallow compared to dedicated servers that are month-to-month.
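A sketch of that comparison. The $/GB/month range comes from the comment; the instance's memory size and the hours-per-month figure are assumptions:

```python
ram_gb = 1952                   # assumed X1 memory size
dedicated_per_gb_month = 1.25   # midpoint of the cited $1-1.5/GB/month range
hours_per_month = 730           # ~24 * 365 / 12

dedicated_monthly = ram_gb * dedicated_per_gb_month
dedicated_hourly = dedicated_monthly / hours_per_month
print(round(dedicated_monthly))    # ~2440 $/month for equivalent dedicated RAM
print(round(dedicated_hourly, 2))  # ~3.34 $/hr -- same ballpark as reserved X1
```

If the ~$4/hr effective reserved rate mentioned elsewhere in the thread is right, the per-hour numbers land close to each other, which is why the pricing reads as "almost reasonable".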

samstave 1 day ago 3 replies      
16GB of ram should be enough for anyone.

Edit, y'all don't get the reference: famous computer urban legend...


0xmohit 1 day ago 1 reply      
Encouraging folks to write more inefficient code?

I'd be interested in hearing what Gates [1] has to say about it, though.

[1] "640 kB ought to be enough for anybody"

Going dark: online privacy and anonymity for normal people troyhunt.com
331 points by danso  21 hours ago   105 comments top 19
sixhobbits 16 hours ago 2 replies      
I'm surprised he doesn't mention NoScript, Privacy Badger, etc. "Normal people" should be more concerned about the highly detailed profiles that companies are building based on browsing habits. "Normal people" read about data breaches and embarrassing leaks that force politicians to resign. "Normal people" know nothing about the behind-the-scenes tracking that goes on when you google medical symptoms[0] or visit pages which have Facebook like buttons as footers[1].

Yes, this article is targeted at people who don't understand the problem of using their .gov email address to sign up for dodgy sites, but think about whether you'd rather have your bank statement made public or a large, visualizable data set representing most of your browsing history.

I would love to see more work done on privacy through noise/obfuscation, such as that started by AdNauseam[2] and TrackMeNot[3] - not necessarily publishing your credit card details online as suggested in another comment here, but making random search queries and clicking on random ads when your device is idle. Most of us have sufficient processing power and bandwidth for the overhead not to be a problem. It's sad that it looks like both add-ons have failed to make a splash, and seem to have fallen out of active development (end of 2015 marks the last commits for both projects, which is too soon to pronounce them dead, but they definitely don't seem to be hives of activity).

[0] http://motherboard.vice.com/read/looking-up-symptoms-online-...

[1] http://www.allaboutcookies.org/cookies/cookie-profiling.html

[2] http://adnauseam.io/

[3] http://cs.nyu.edu/trackmenot/

xrorre 14 hours ago 2 replies      
I appreciate the intention of this article, written for people only starting to change their surfing habits in light of Snowden. But the examples of the tools they should use are not thought out very well.

First: Freedome by F-Secure is closed source and there is no OpenVPN alternative. Always choose a VPN that has OpenVPN so that users can configure the connection to their needs. No need for this bloated mess.

Second: Whilst disposable Google accounts might seem like a good idea, there are any number of ways for Google to cross-correlate a disposable identity with your actual identity using fingerprinting captchas or even your screen resolution. Google does this to spot serial re-registrations and to stop people gaming Google Plus voting rings and spammers in general.

Third: Be careful of online websites offering fake-name services. Most of this data is generated server-side and logged for the purposes of cross-correlation with your IP address and useragent string. Quite possibly the vast majority of fake-identity sites are run by LEA

- I like to write some quick and dirty ruby gems to generate fake identities because then it can't be correlated. (The names are pulled in from disparate sources and I always ensure true-randomness).

- In terms of email, use things like Riseup which use TLS at every hop so that passive dragnets can't sniff the password. 99% of all IMAP and SMTP services can be passively sniffed because they use weak STARTTLS.

- Use 'honeywords' in an email to correlate different emails with different activities. For example:

 john.doe+shopping@riseup.net
 john.doe+gaming@riseup.net
 john.doe+correspondant@riseup.net
This way you can whitelist those addresses for the purposes of filtering out spam and phishing attempts.
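The whitelisting step can be automated by extracting the tag from the address's local part — a minimal sketch, using the commenter's example addresses:

```python
def plus_tag(address):
    """Return the '+tag' portion of an email's local part, or None."""
    local = address.split("@", 1)[0]
    if "+" in local:
        return local.split("+", 1)[1]
    return None

print(plus_tag("john.doe+shopping@riseup.net"))  # shopping
print(plus_tag("john.doe@riseup.net"))           # None
```

A mail filter can then match on the returned tag to route or whitelist messages per activity.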

apecat 15 hours ago 1 reply      
Great article.

The only real omission I noticed is the lack of mention of advanced browser fingerprinting techniques that can be used against browsers, even if caches are emptied, 'porn modes' activated, VPNs opnened. As demonstrated here by the EFF's Panopticlick initiative. https://panopticlick.eff.org/

One of the most important points to remember about the anonymity provided by the Tor project is that the Tor Browser is painstakingly hand-crafted to avoid many of these problems. In other discussions about Tor it is worryingly common to see other ways to route browser traffic through Tor, without mention of the implications.

For those interested, here's a recent look into the Tor Browser system by one of the developers.


huuu 18 hours ago 3 replies      
Doesn't this create a risk of committing fraud and identity theft in some countries?

I can understand it wouldn't be a crime to create a random email address but creating a fake house address and using this for payments sounds a little tricky.

btrask 18 hours ago 5 replies      
This is just a list of more things for them to clamp down on.

I'm thinking about going in the opposite direction, and broadcasting all of my personally identifying information (credit card, SSN, etc). Obviously I would have to set aside a large amount of time to deal with issuing fraud reports, and make sure that I wasn't risking anything that I can't afford to lose--but it does seem simpler in some ways.

After all, if you don't have anything to hide, you're bulletproof, right?

rkrzr 16 hours ago 2 replies      
TLDR: Use a VPN + Incognito mode + fake email and info

The VPN hides your IP. Incognito mode prevents your cookies from giving away your identity. And the fake info helps with things like sites being hacked and the data being dumped online.

amelius 14 hours ago 4 replies      
> Going dark: online privacy and anonymity for normal people

Caveat: normal people don't care about such things.

descript 8 hours ago 0 replies      
It is so difficult to balance productivity/convenience and privacy/security.

Only recently did I stop worrying about privacy/security, and frankly my online experience is much better. I can now participate in any services/apps that catch my eye, I now save CC data at some sites, don't have a VPN/Tor slowing traffic and giving me cloudflare walls/"im not a bot" verification, don't have noscript/ublock/privacy badger breaking most sites, can sync across devices and backup online.

Having both secure & private online behavior is a massive inconvenience. You basically can't participate in the online world as it exists. (There are definitely opportunities to create secure/private versions of existing tools)

mirimir 11 hours ago 0 replies      
It's a good piece, but the treatment of VPNs is bad. There's a new site about choosing a VPN service: https://thatoneprivacysite.net/ It summarizes a huge amount of information, for 159 VPN services.
ChefDenominator 9 hours ago 1 reply      
The article recommends going to Fake Name Generator (tm) to get a random online identity. The page is not encrypted and looks very, very fishy.

That page recommends going to Social Security Number Registry. Again, an unencrypted totally scammy looking page. If you enter a random name and select a random state, it will 'verify' that your identity has been stolen. Then, if you click on 'Validate', you can enter your SSN (unencrypted, of course).

I don't even know how to code, and this is a news site for hackers? This tripe makes it to the top of the front page?

maglavaitss 13 hours ago 0 replies      
This submission has some more tips for preserving your privacy https://news.ycombinator.com/item?id=11706680
ikeboy 13 hours ago 0 replies      
The SMS receiving sites don't work so well IME. They tend to use a single number for everyone, and the demand by spammers etc is so much higher than the free supply that for any given service, your number will probably already be blocked. Or the receiving will be unreliable, etc. I've gotten it to work sometimes, but usually not. Definitely too hard and time consuming for "normal people".

Is there a site that sells phone numbers for VoIP and SMS for Bitcoin without requiring identity?

bunkydoo 9 hours ago 1 reply      
Here's the thing, if you are a normal person - you aren't going to read a guide on something like this. I have a 1 sentence guide on this for the 'normal person' - If you wouldn't want your grandma to see it, just don't enter it in an internet browser.
tmaly 10 hours ago 0 replies      
As I am cranking away on some Go services on my laptop on my local coffee shop wifi, I see log entries pop up of people trying to access PHP pages.

I go and ask the staff, and they said their POS is full of some weird software.

a good VPN provider is worth it, but finding one that will not keep logs on you is another story.

fulafel 11 hours ago 1 reply      
The article exemplifies why the widespread misappropriation of the VPN term is unfortunate (in the same series as "router" for NAT boxes...); it serves to confuse people about the potential of real overlay networks.
coldpie 13 hours ago 0 replies      
Is this really still the best way to pay for stuff anonymously online? Lie to a financial institution? I understand the desire to avoid fraud, but boy does that irk me. Hrmm...
jrcii 19 hours ago 4 replies      
I really object to the language of "dark" to describe privacy or anonymity, which are thereby painted with a sinister connotation.
astazangasta 10 hours ago 0 replies      
I'm interested in 'phishing and malware protection', which I think means all my traffic gets reported to Google. This plus Google Analytics means the electric eye is on me wherever I go. Tips to browse safely without these?
kevingrahl 12 hours ago 3 replies      
Skimmed the article, saw that he recommended using Googlemail. Looked at the title of the post again. Looked at the Googlemail recommendation. Laughed and made a mental note not to trust "Troy Hunt".
Improving Docker with Unikernels: Introducing HyperKit, VPNKit and DataKit docker.com
288 points by samber  1 day ago   37 comments top 6
kevinmgranger 1 day ago 1 reply      
docker's go-9p now makes for the 3rd implementation of 9p in go:

docker/go-9p https://github.com/docker/go-p9p

rminnich/ninep: https://github.com/rminnich/ninep

rminnich/go9p: https://github.com/rminnich/go9p

There's also the Andrey Mirtchovski and Latchesar Ionkov implementation of go9p, but all I can find is a dead Google Code link from here: http://9p.cat-v.org/implementations

pjmlp 1 day ago 2 replies      
With lots of OCaml love it seems, from a quick glance through the source repositories.
tachion 1 day ago 1 reply      
I wonder if we'll see a move towards getting Docker working on FreeBSD using either Jails or bhyve finally, since it talks about using bhyve hypervisor... That would be really great.
kordless 1 day ago 4 replies      
Seems like only a year ago Docker changed how it used VirtualBox to boot VMs using machine (and caused me endless amounts of suffering trying to figure out how to fix it). Now it would seem they are getting rid of VirtualBox entirely with their own VM... which needs contributions.
chuhnk 1 day ago 0 replies      
Very interesting work. I find go-9p quite fascinating and think it could really have broader applications. Docker, if you see this: I actually think you're on to something for microservice development that's native to the Docker world. I've been trying to come up with ways of replicating the Unix philosophy of programs that do one thing well, and the use of pipes, but was always limited in my thinking in terms of HTTP, JSON, etc.

My advice, as a guy who's currently building something in the microservice space: explore this further. Spend some time building fit-for-purpose apps with this and see where it goes.

andrew_wc_brown 1 day ago 1 reply      
I guess I just want to know the takeaway, e.g. will it consume less memory on Mac?
Cool URIs don't change (1998) w3.org
296 points by benjaminjosephw  2 days ago   122 comments top 23
Communitivity 2 days ago 1 reply      
It is confusing to a lot of people, but they aren't functionally interchangeable.

Basically you have Uniform Resource Locators (URLs), Uniform Resource Names (URNs), and Uniform Resource Identifiers (URIs). You also have International Resource Identifiers (IRIs), which are URIs with rules allowing for international character sets in things like host names.

Every URN and URL is a URI. However, not every URI is a URN, or a URL.

A URN has a specific scheme (the front part of a URI before the :), but it does not contain instructions on how to access the identified resource. We humans might automatically map that to an access method in our head (e.g., digital object identifier URNs like doi:10.1000/182, which we who have used DOIs know maps to http://dx.doi.org/10.1000/182), but the instruction isn't in the URN.

A URL is not just an identifier but also an instruction for how to find and access the identified resource.

For example, http://example.org/foo.html says to access the web resource /foo.html by using the HTTP protocol over TCP, connecting to the IP address that example.org resolves to, on port 80.

An example of URIs which are not URLs are the MIME content ids used to mark the boundaries within an email (cid scheme), e.g., cid:foo4%25foo1@bar.net.

You can get more information at: https://tools.ietf.org/html/rfc2392
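The distinction is easy to see mechanically: the scheme is simply whatever precedes the first colon, and by itself carries no access instructions. A minimal C sketch (an illustration only, not an RFC 3986 parser):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy the scheme (the part of a URI before the first ':') into `out`.
   Returns 1 on success, 0 if there is no colon or `out` is too small.
   Illustration only: a real parser must follow RFC 3986's grammar. */
int uri_scheme(const char *uri, char *out, size_t outlen)
{
    const char *colon = strchr(uri, ':');
    if (colon == NULL)
        return 0;
    size_t n = (size_t)(colon - uri);
    if (n + 1 > outlen)
        return 0;
    memcpy(out, uri, n);
    out[n] = '\0';
    return 1;
}
```

Running it on the examples above, `http://example.org/foo.html` yields `http`, `cid:foo4%25foo1@bar.net` yields `cid`, and `doi:10.1000/182` yields `doi`; the scheme alone says nothing about how to fetch the resource.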

makecheck 2 days ago 2 replies      
A lot of missteps in the early days of web technologies have made stable URLs impractical, unfortunately.

One problem is that someone decided to include file name extensions. Maybe this happened naturally because web servers made it so easy to expose entire directory structures to the web. And yet, this continues to be used for lots of other things. It is so ridiculous that a ".asp" or ".php" or ".cgi" causes every link, everywhere to depend on your arbitrary implementation details!

Another problem is that many software stacks just don't use their brains when it comes to what would make a useful URL. Years ago I was very frustrated working with an enterprise software company that wanted to sell us a bug-tracking system, and they didn't have simple things like "server.net/123456" to access bug #123456; instead, the URL was something absolutely heinous that wouldn't even fit on a single line (causing wrapping in E-mails and such).

Speaking of E-mail, I have received many E-mails over time that consisted of like TWELVE steps to instruct people on how to reach a file on the web. The entire concept of having a simple, descriptive and stable URL was completely lost on these people. It was always: 1. go to home page, 2. click here, ..., 11. click on annoying content management system with non-standard UI that generates unbookmarkable link, 12. access document. These utterly broken systems began to proliferate and it rapidly reached the point where most of the content that mattered (at least inside companies) was not available in any sane way so deep-linking to URLs became pointless.

zymhan 2 days ago 1 reply      
"There is nothing about HTTP which makes your URIs unstable. It is your organization."

I think this could be applied to more than just how companies manage URLs.

Also, I'm trying to find a post I recently read that talked about how calling URLs "URI"s is just confusing nowadays since almost everyone still only knows the term URL, and they're functionally interchangeable.

pidg 2 days ago 2 replies      
Another addition to their 'Hall of Flame' might be the British Monarchy. A couple of weeks ago, they broke every existing URI when they moved from www.royal.gov.uk to www.royal.uk. Every URL from the old domain gets redirected to the root of the new site.


chias 2 days ago 2 replies      
Interesting, the link as posted here violates the guidelines. Perhaps you meant to link to



seagreen 2 days ago 4 replies      
This is a make-work trap for conscientious people.

If it's more efficient for your business/project to change your URIs when going through a website redesign, go ahead (with the knowledge that you'll lose some traffic, etc.)

Seriously, there's no reason to feel guilty over this. It's not your fault, it's the fault of a system that built two UIs into every website (the website's HTML and the URL bar -- the second of which is supposed to be useful for browsing and navigation just like the first).

If W3C actually cared about links continuing to work, they would fix it at the technical level by promoting content-addressable links instead of trying to fix it at the social level (which will never work anyway, the diligent people that care about these things will always be just a drop in the bucket).
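Content addressing means deriving the link from the bytes themselves rather than from a location. A toy sketch of the idea (real systems such as IPFS use cryptographic hashes, not a 32-bit FNV-1a, so treat this purely as an illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy content address: 32-bit FNV-1a hash of the document body.
   Illustration only -- a real content-addressable system uses a
   cryptographic hash, but the property is the same: the name is
   derived from the bytes, so it cannot silently point elsewhere. */
uint32_t content_address(const char *body)
{
    uint32_t h = 2166136261u;           /* FNV offset basis */
    for (size_t i = 0; body[i] != '\0'; i++) {
        h ^= (uint8_t)body[i];
        h *= 16777619u;                 /* FNV prime */
    }
    return h;
}
```

Identical content always yields the same address, so a "moved" page keeps its name; changed content gets a new name, which is exactly the trade-off such schemes make.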

alistairjcbrown 2 days ago 4 replies      
I remember Jeremy Keith talking about this at dConstruct conference; he put a bet on Long Bets that the URI of the bet wouldn't change [0]

[0] - http://longbets.org/601/

wtbob 2 days ago 6 replies      
On a related note: URIs shouldn't end in extensions (use content negotiation!), content should be readable without executing code (no JavaScript necessary), content should be available in multiple languages (use content negotiation!), and RESTful interfaces should offer a simple forms-based interface for testing, &c.
zeveb 2 days ago 3 replies      
I was recently digging through my old blog's archives, and it was appalling how many URLs from the early 2000s have completely disappeared, even though the sites which served them remain; and how gratifying it was when I was able to reload some fringe resource from 1998 or 2003.

The Web is about webs of durable readable content, not about ephemeral walled-garden apps.

hyperpape 2 days ago 3 replies      
One downside of this: I now feel like I can't create a proper place to keep my writing or other ideas until I carefully think of a URL scheme that I can maintain for eternity.
brightshiny 2 days ago 1 reply      
There's RFC 4151 to keep in mind: the "tag" URI scheme, a scheme for URIs that is independent of URLs (but can use them too). One reason to use it would be to mark a page as being the same resource even though the domain or URL had changed.



I wrote a library for it in Ruby which is how I know about it.

tremon 2 days ago 2 replies      
I wonder if the footnote was also written in 1998:

Historical note: At the end of the 20th century when this was written, "cool" was an epithet of approval particularly among young, indicating trendiness, quality, or appropriateness.

thudson 2 days ago 0 replies      
Cool URLs still work in OmniWeb on a NeXT with a decades old bookmarks file: https://www.flickr.com/photos/osr/17082625625/lightbox

The Mondo 2000 interview bookmarks, not so much.

alpb 2 days ago 1 reply      
The example URL they list from w3.org website (http://www.w3.org/1998/12/01/chairs) is now broken. Great deal of irony.
kerrsclyde 2 days ago 1 reply      
What about case sensitivity? https://www.w3.org/provider/style/uri doesn't work.
prsutherland 2 days ago 1 reply      
Pre-DMCA. If this gets an update, TBL should add legal reasons and law enforcement as a reason for URI change.
nommm-nommm 2 days ago 3 replies      
Shouldn't there be some tradeoffs?

Say I sign up with service x with the name y. My URL is www.x.com/users/y. Years later I delete my account. Someone else signs up with the name y. Now www.x.com/users/y goes to someone else's resources. The old URL is broken.

The only way to prevent this is either to give the user a URL (that they will want to share) that is not meaningful, or to disallow anybody from signing up with a name that was ever in use; and names are a very limited resource.

Neither seems ideal. I do agree in principle that URLs shouldn't change, though.

Hotmail actually has this problem, or at least they used to. They delete accounts that are inactive for a long time and someone else can sign up with that name. The new person can get email addressed to the previous owner.

kazinator 2 days ago 1 reply      
"Except insolvency, nothing prevents the domain name owner from keeping the name."

Ah, the halcyon days of the Internet's adolescence.

This was before fiascos like Mike Rowe, of Canada, having his mikerowsoft.com taken away.

nathancahill 2 days ago 0 replies      
At least one of their example URLs on the page still points (eventually) to the same content: http://www.nsf.gov/cgi-bin/getpub?nsf9814

Although they seem to have not learned anything, and are now using .jsp instead of .pl.

miseg 2 days ago 1 reply      
It's tough!

I have a custom PHP app that includes marketing pages.

I'd like to crowbar Wordpress into the server to serve the marketing pages instead, to make it easier to change text over time.

A .htaccess set of redirect rules may indeed work, but it's hard work to keep all URLs working.

avar 2 days ago 1 reply      
I wonder if TBL would have written the same article 10 years later when search engines had gotten much better.

It's still an inconvenience when a URL moves, but before the likes of Google that used to be a huge inconvenience and it would often take tens of minutes to track down the new location, now it's on the order of tens of seconds at most.

prashnts 2 days ago 1 reply      
Oh well, Hall of fame #1 leads to a 404 for both links. So, I propose that it should be fair to call it (Hall of fame #1).
nxzero 2 days ago 3 replies      
The idea that something cool would stay cool forever sounds like an oxymoron.

For example, Yahoo.com has remained the URI for Yahoo's homepage and will continue to until it's the 404 page. Yahoo.com is not cool.

Warren Buffett and Dan Gilbert Unite in Bid to Acquire Yahoo nytimes.com
288 points by interconnector  3 days ago   129 comments top 23
howlingfantods 3 days ago 4 replies      
Buffett is no fool. Berkshire is providing the financing to Dan Gilbert's group and will receive guaranteed interest as well as an option to convert to equity. I'm sure that financing is jam-packed with warrants and covenants.

Buffett has basically parlayed the prestige of his name into sweetheart deals with provisions that no other company could get (eg. his investment in Goldman Sachs).

ChuckMcM 3 days ago 3 replies      
I find these things amusing: "according to people who aren't authorized to speak publicly", except that they are talking to a reporter, so that's kinda public. But really what they want to do is send other potential players an "oh yeah, it's real, we're bringing it and we're gonna sell this thing; if you want a piece of this you'd better wake up and call us or you're going to lose out" kind of vibe, which attempts to incent other buyers to please make a bid and bring the price up. According to the Credit Suisse banker who helped with a transaction I participated in, the ideal number of buyers is 3, and it helps if at least two of them know each other and are competitive (think Benioff and Ellison, for example).

I can see Microsoft's goal: add it to the Bing group and give Bing the portal, as it already has all the search traffic. Not so clear on Berkshire, though; breaking it up works if you can get it at the right price. I could also see IAC wanting to play; they could use a portal property to link all their front ends together.

gtk40 3 days ago 7 replies      
Looking forward to a redesign of Yahoo! to match Berkshire Hathaway's website: http://www.berkshirehathaway.com/
JoshTriplett 3 days ago 2 replies      
It's not obvious what the acquiring group mentioned in this article would do with Yahoo after they've bought it, to make it worth the price. They'd have the 5th most visited domain name on the Internet, but as Yahoo has demonstrated, visits don't automatically turn into money. (Twitter has a similar problem, and sits at #8, but they have a social aspect that Yahoo doesn't.)

Unless Alibaba comes with the purchase at a discount, or someone wants to acqui-hire whatever talent hasn't already fled, an acquisition doesn't seem even remotely sensible.

A few quick checks suggest that Yahoo's searches-per-day traffic is still decent, at 12.4% of the market (2.2 billion searches/month); perhaps redirecting that to some competing search engine might be worth it for a cheap enough price.

yalogin 3 days ago 9 replies      
Doesn't Buffett famously stay away from tech? What changed? He is buying Yahoo and has invested in Apple.
icc97 3 days ago 0 replies      
Buffett is a fan of the cigar butts with one last puff left [1]. It seems like he's applying a similar philosophy here.

[1]: http://basehitinvesting.com/warren-buffett-letter-on-walter-...

chrisan 3 days ago 2 replies      
This is the emotional Dan Gilbert who enjoys Comic Sans


1024core 3 days ago 1 reply      
As a shareholder, I welcome the competition. :)

Also interestingly: Bain Capital is in the running. In the past, Yahoo has used Bain Capital as consultants to reorg, restructure, etc. It would almost seem like a conflict of interest, since they are acutely familiar with the innards of Yahoo.

Edit: as /u/mcmoose75 mentions below, "Bain Capital" and "Bain Consulting" (the one I was thinking of) are two separate entities.

strictnein 3 days ago 0 replies      
Read this first as "Warren Buffet and Dilbert Unite in Bid to Acquire Yahoo". I think I prefer that headline.
ppierald 3 days ago 0 replies      
Sue Decker, ex-Yahoo CFO and President, sits on the Board of Directors of Berkshire Hathaway. I'm sure she has plenty of insight into the value of the company and the complexities of its business.
jonbarker 3 days ago 0 replies      
I've been following Buffett since the 1990s and his reputation for shunning tech seems to be based on his wise choice not to play in tech in the late 1990s. This seems to be based on relatively simple valuation techniques as well as asking the important question "Do I understand the business?". Of course he missed some winners as a result of this but overall it helped his results. His two tech moves so far, IBM and Apple, don't violate that approach at all so it makes sense.
tgb29 3 days ago 2 replies      
When I consider Yahoo's value, I think of email, fantasy football, news, and tumblr. All four seem to be struggling when compared to their alternatives, but each of the products appear to have great potential value. It's hard to determine the quality of Mayer's work as CEO; some decisions were good, some look bad. I'm not confident she is a product person, and this is based on her management of Tumblr and the lack of development in email functionality and UI.

I could be wrong. I do go to Yahoo news everyday and it's not a bad service. It's fun to think about what the world would be today if Yahoo acquired Facebook LOL.

geogra4 3 days ago 2 replies      
Wonder if he'll bring a chunk of Yahoo to Detroit?
anonql 2 days ago 0 replies      
I'm wary of this. As a Detroiter, I'm not a fan of Gilbert.

Though the prospect of bringing a large tech co to Detroit is nice, Gilbert and his people are very unpleasant to work with. They pay lip service to the importance of technology but generally don't respect tech people, or know how a real tech company operates. Not to mention their questionable morals (politically manipulating the State of Ohio so they could have a casino monopoly, instant mortgages, reverse mortgages, etc.).

Firstly, the Bizdom incubator was a mess. Very poorly run. Not a single successful business came out of it. No actual founders taught students; just ex-QL people or trusted friends, and the only thing they had in common was that none of them had ever started or run a startup. Most of the startups that gained any traction did so by selling to Gilbert's other companies rather than proving that they had a real market - lots of incest going on.

On top of that, some entrepreneurs got straight-up screwed. At a minimum, by highly abusive investment terms (such as Bizdom owning 67% of the company and having the ability to modify operating agreements at will) - and to top it off, multiple founders in their system have had their ideas ripped off by Gilbert's people.

That's on top of their ridiculous real estate ventures. Such as offering startups hip, beautiful office space in Downtown Detroit - in exchange for a percentage of their company (I hear it's over 10%, with very few people biting). The startups that went through their investment funnels were "heavily encouraged" to get space there.

My guess is that if he gets Yahoo, he'll open up an office in Detroit, try to QL-ify it (i.e. make it a sales company that is a fairly close parody of Glengarry Glen Ross), it will flounder for a few years, and either get sold again or just die.

I really wish Detroit had a better advocate.

shirro 3 days ago 0 replies      
I don't know what is going on here, but my bet is it has more to do with rich people doing tricky stuff with money than a vote of confidence in Yahoo, its products, or their potential to make money.

If I was saddled with a dinosaur like Yahoo, I would split it up, try to get some cash, and then rename what was left, and I still think you would just be delaying the inevitable. The Yahoo name has about as much value as Netscape or Novell. It pretty much says: outdated, failed technology company that has been overtaken by the competition.

smegel 3 days ago 1 reply      
This is Buffett, so he must be thinking about this in terms of business, not tech.

Maybe they have calculated they could break Yahoo up and sell the pieces for more than they paid for the whole?

Otherwise I'm out of ideas.

Communitivity 2 days ago 0 replies      
An idea which I find interesting, but which is probably not on target: BH leveraging their investments in Yahoo and Apple to have Apple take over Yahoo. The sense of style which Apple cultivates, applied to digital content curation, combined with a personal digital assistant tweaked for librarian reference-desk responses (Viv-ianne the Librarian).
some_guy1234 7 hours ago 0 replies      
yahoo -> $0. good luck Buffet
kiproping 3 days ago 1 reply      
Is the death of Yahoo due to poor leadership, or did it just die a natural death like Myspace or AOL?
edpichler 2 days ago 0 replies      
I remember reading many times that Warren does not invest in technology, because it's too risky and he only invests in what he understands. So now it seems he has learned.
gcb0 2 days ago 0 replies      
Every time I misclick a New York Times link in my no-JavaScript mobile browser, the site manages to redirect me back to the referrer URL I was at. It's really uncanny: I click a link on HN, and after a page load, I'm back at Hacker News.

Except this one link. What should I think of that? Why is that single link different from every other nytimes.com link?

arjun1296 2 days ago 0 replies      
What interested me in Yahoo was YQL.

After it closed the chatroom services I almost quit using Yahoo.

gopi 3 days ago 0 replies      
So if it happens, does Dan Gilbert slowly move Yahoo workforce to Detroit to save money?
My wife has complained that OpenOffice will never print on Tuesdays (2009) launchpad.net
414 points by hardmath123  2 days ago   154 comments top 29
Animats 2 days ago 8 replies      
Did this get fixed, 7 years later?

Yesterday, we had a story about Microsoft's disk management service using lots of CPU time if the username contained "user". Microsoft's official reply was not to do that.

I once found a bug in Coyote Systems' load balancers where, if the USER-AGENT ended with "m", all packets were dropped. They used regular expressions for various rules, and I suspect someone typed "\m" where they meant "\n". The vendor denied the problem, even after I submitted a test case which failed on their own web site's load balancer.

Many, many years ago, I found a bug in 4.3BSD which prevented TCP connections from establishing with certain other systems during odd numbered 4 hour periods. It took three days to find the bug in BSD's sequence number arithmetic. A combination of signed and unsigned casts was doing the wrong thing.

sampsonetics 2 days ago 1 reply      
Reminds me of my favorite bug story from my own career. It was in my first year or two out of college. We were using a commercial C++ library for making HTTP calls out to another service. The initial symptom of the bug was that random requests would appear to come back with empty responses -- not just empty bodies, but the entire response was empty (not even any headers).

After a fair amount of testing, I was somehow able to determine that it wasn't actually random. The empty response occurred whenever the size in bytes of the entire request (headers and body together) was exactly 10 modulo 256, for example 266 bytes or 1034 bytes or 4106 bytes. Weird, right?

I went ahead and worked around the problem by putting in a heuristic when constructing the request: If the body size was such that the total request size would end up being close to 10 modulo 256, based on empirical knowledge of the typical size of our request headers, then add a dummy header to get out of the danger zone. That got us past the problem, but made me queasy.

At the time, I had looked at the code and noticed an uninitialized variable in the response parsing function, but it didn't really hit me until much later. The code was something like this:

  void read_status_line(char *line) {
      char c;
      while (c != '\n') {
          c = read_next_byte();
          *(line++) = c;
      }
  }
Obviously this is wrong because it's checking c before reading it! But why the 10 modulo 256 condition? Of course, the ASCII code for newline is 10. Duh. So there must have been an earlier call stack where some other function had a local variable storing the length of the request, and this function's c variable landed smack-dab on the least-significant byte of that earlier value. Arrrrgh!
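A sketch of the repaired loop, with a hypothetical `read_next_byte` stub (not the library's real API) so it runs standalone. Reading a byte before testing it removes any dependence on whatever garbage `c` inherits from the stack:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for the library's byte source: walks a buffer. */
static const char *src;
static char read_next_byte(void) { return *src++; }

/* Fixed version of the parsing loop: initialize `c` by reading first,
   so a leftover stack value can never terminate (or fail to terminate)
   the loop. Returns the length of the status line. */
size_t read_status_line(char *line)
{
    size_t n = 0;
    char c = read_next_byte();
    while (c != '\n') {
        line[n++] = c;
        c = read_next_byte();
    }
    line[n] = '\0';
    return n;
}
```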

mpeg 2 days ago 2 replies      
The title reminds me of "the 500 mile email"


icambron 2 days ago 5 replies      
The most interesting part of this story to me is actually that his wife noticed that the printer didn't work on Tuesdays. I'd have never, ever put that together, no matter how many times I saw it succeed or fail. I'd actually be more likely to figure it out by debugging the CUPS script than I would be observing my printer's behavior. Can a lot of people pick up on correlations like that? "Ever notice how it's always Tuesday when the printer won't work?"
mazda11 1 day ago 1 reply      
My most memorable bugfix was when I was temporarily on a team that did email encryption/decryption. They had one customer where some mails could not be decrypted; they had been fighting with this for a year, and no one could figure out what was going on. I told them to do a dump for a week of the good and the bad emails. After one week I was given the dump of files, looked at the count of bad vs. good, did some math in my head and said: "Hmm, it appears that about 1 in 256 mails is bad. That could indicate that the problem is related to a specific byte having a specific value in the random 256-bit AES key. If there is a specific value giving problems it is probably 0x00, and the position I would guess is the last or the first byte."

I did a check by decoding all S/MIME mails to readable text with openssl; sure enough, all bad emails had 0x00 as the least significant byte. Then I looked at the ASN.1 spec and discovered it was a bit vague about whether the least significant byte had to be there if it was 0x00. I inserted one line into the custom-written IBM 4764 CCA driver (written in C, called via JNI), and then all emails decrypted.

The team's jaws dropped: they had been fighting with it for a year, and I diagnosed the bug only by looking at the good/bad ratio :)

I might remember some details wrong, but the big picture is correct :)
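One plausible mechanism for this class of bug (my assumption, not confirmed by the story) is a minimal-length integer encoding dropping a 0x00 octet, leaving 1 key in 256 one byte short; the repair is then to pad the key back to its full length before use. A hypothetical helper, padding at the front purely for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Re-pad a key that arrived one (or more) zero octets short.
   Hypothetical helper: some encoders drop 0x00 octets from a key's
   integer representation, so roughly 1 key in 256 arrives short, and
   the decryptor must restore the zeros before using it as an AES key. */
void pad_key(const uint8_t *in, size_t inlen, uint8_t *out, size_t keylen)
{
    assert(inlen <= keylen);
    memset(out, 0, keylen - inlen);           /* restore the zero octets */
    memcpy(out + (keylen - inlen), in, inlen); /* then the received bytes */
}
```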

alblue 2 days ago 2 replies      
The TL;DR is that the "file" utility was miscategorising files that had "Tue" in the first few bytes as an Erlang JAM file, with knock-on effects for PostScript files generated with a header comment containing "Tue" in the date.
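To see why unescaped spaces were fatal: magic(5) fields are whitespace-separated, so the pattern `Tue Jan 22 14:32:44 MET 1991` degenerated to a test for just `Tue` at offsets 4 and 79. A hypothetical re-creation of the broken rule (not file(1)'s actual code):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simulate the broken magic(5) rule: because the spaces in the JAM
   signature date were not escaped, the pattern the parser actually saw
   was just "Tue", tested at offset 4 (and 79). Illustration only. */
int broken_magic_is_jam(const char *buf, size_t len)
{
    return (len >= 7  && memcmp(buf + 4,  "Tue", 3) == 0)
        || (len >= 82 && memcmp(buf + 79, "Tue", 3) == 0);
}
```

Any file whose fifth byte starts the string "Tue" (say, a munged PostScript header whose creation date was a Tuesday) gets called an Erlang JAM file.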
nilstycho 2 days ago 1 reply      
The weirdest case of my tenure as a neighborhood computer tech was a personal notebook computer that would not boot up at the customer's apartment. Of course we assumed user error, but further investigation revealed that if the computer was running as it approached the home, it would bluescreen about a block away.

We guessed it was due to some kind of RF interference from a transmitter on the apartment building. Removing the WiFi module and the optical drive had no effect, so we further guessed it was interference within the motherboard or display. Rather than investigate further, we replaced the notebook at that point.

mark-r 1 day ago 0 replies      
I have an anecdote, which isn't mine but comes from someone I know personally. This guy was working as a service tech, and was called out to diagnose a problem with a computer that had been recently moved. It worked most of the time, but any attempt to use the tape drive failed within a certain number of seconds (this was long ago, when tape drives were still a thing). Everything had worked fine before the move, and diagnostics didn't show anything out of place. Then he happened to look out the window - this was a military installation, and there was a radar dish rotating nearby. The failures occurred exactly when the radar dish was pointed their direction. It turns out the computer had been moved up one floor, which strengthened the interference just enough to cause the failure.
kazinator 2 days ago 0 replies      
But "Tue" is not at the fourth byte in the example, which has:

 %%CreationDate: (Tue Mar 3 19:47:42 2009)
Something munged the data. Perhaps some step removes all characters after %%, except those in parentheses?

 %%(Tue Mar 3 ...)
Now we're at the fourth byte. Another hypothesis is that the second incorrect match is kicking in. That is to say, some fields are added above %%CreationDate such that the "Tue" lands at position 79. The bug that was fixed in the magic database is this:

 -+4    string  Tue Jan 22 14:32:44 MET 1991    Erlang JAM file - version 4.2
 -+79   string  Tue Jan 22 14:32:44 MET 1991    Erlang JAM file - version 4.2
 ++4    string  Tue\ Jan\ 22\ 14:32:44\ MET\ 1991       Erlang JAM file - version 4.2
 ++79   string  Tue\ Jan\ 22\ 14:32:44\ MET\ 1991       Erlang JAM file - version 4.2
(This is a patch of a patch: a fix to an incorrect patch.) There are two matches for this special date which identifies JAM files: one at offset 4, and a possible other one at offset 79, which will cause the same problem.

The real bug here is arguably the CUPS script. It should identify the file's type before munging it. And it shouldn't use a completely general, highly configurable utility whose data-driven file classification system is a moving target from release to release! This is a print script, so there is no reason to suspect that an input file is a Doom WAD file, or a Sun OS 4 MC68000 executable. The possibilities are quite limited, and can be handled with a bit of custom logic.

Did Brother people write this? If so, I'm not surprised.

Nobody should ever write code whose correct execution depends on the "file" utility classifying something. That is, not unless you write your own "magic" file and use only that file; then you're taking proper ownership of the classification logic, such that any bugs are likely to be your own.

The fact that file got something wrong here is a red herring; the file utility is wrong once in a while, as anyone who has been using various versions of it regularly for a few decades knows. Installations of the utility are only suitable for one-off interactive use. You got a mystery file out of the blue and need a clue as to what it is; run file on it to get an often-useful opinion. It is only usable in an advisory role, not in an authoritative role.
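A minimal sketch of that "bit of custom logic", assuming the filter only ever accepts PostScript and PDF (the magic strings `%!` and `%PDF-` are those formats' real signatures, but real detection has more edge cases):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Positive identification of the only formats a print filter actually
   accepts, instead of asking the general-purpose file(1) oracle.
   Sketch only: real PostScript/PDF detection handles more edge cases. */
typedef enum { FMT_POSTSCRIPT, FMT_PDF, FMT_UNKNOWN } print_fmt;

print_fmt identify_print_job(const unsigned char *buf, size_t len)
{
    if (len >= 2 && memcmp(buf, "%!", 2) == 0)
        return FMT_POSTSCRIPT;      /* DSC files start "%!PS-Adobe-" */
    if (len >= 5 && memcmp(buf, "%PDF-", 5) == 0)
        return FMT_PDF;
    return FMT_UNKNOWN;             /* let the caller reject or convert */
}
```

With a whitelist like this, a stray "Tue" anywhere in the header can never change the classification.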

Adaptive 2 days ago 4 replies      
I've noticed that printing is still one of the poorest UX aspects of *nix/OSS and regularly seems to suffer from errors so egregious that they can only be attributed to OSS devs not dogfooding these features. I'm assuming they just don't print much (I mean, we ALL print less than 20 years ago, but all the more reason to test these features which, when you need them to work you REALLY need them to work).
t0mek 1 day ago 0 replies      
During my studies I had a course called "Advanced Network Administration". I learnt about the OSPF routing protocol and its Quagga [1] implementation, and I had to prepare a simple installation consisting of 3 Linux machines. They were connected with cheap USB network adapters.

After everything was configured I started the Quagga daemons, and somehow they just didn't want to talk to each other. I opened tcpdump to see what was happening, and the OSPF packets were being exchanged properly. After a while, communication and routing were established. I thought that maybe the services just needed some time to discover the topology.

I restarted the system to see if it was able to come up automatically, but the problem reoccurred: the daemons just didn't see each other. Again I launched tcpdump, tweaked some settings, and now it worked, until it didn't a few minutes later.

It took me a long time to find out that the diagnostic tool I was using had actually changed the observed infrastructure (like in the quantum world). tcpdump enables promiscuous mode on the network interfaces, and apparently this was required for Quagga to work on the cheap USB Ethernet adapters. I enabled it permanently with ifconfig's promisc option, and after that OSPF was stable.

[1] http://www.nongnu.org/quagga/

pif 1 day ago 0 replies      
CERN: LEP data confirm train time tables http://cds.cern.ch/record/1726241

CERN: Is the moon full? Just ask the LHC operatorshttp://www.quantumdiaries.org/2012/06/07/is-the-moon-full-ju...

carapace 2 days ago 1 reply      
Stuff like this is why I find "Synthetic Biology" so fucking scary.
BrandonM 1 day ago 0 replies      
Near the end of that post, the commenter suggested a fix that includes the most qualified Useless Use of Cat entry[0] that I've ever seen!

 cat | sed ... > $INPUT_TEMP
[0] http://porkmail.org/era/unix/award.html#cat

krylon 1 day ago 0 replies      
One of our users complained that she could no longer print PDF documents. Everything else (Word, Excel, graphics) worked fine, but when she printed a PDF... the printer did emit a page that, layout-wise, pretty much looked like it was supposed to, except all the text was complete and utter nonsense.

Or was it? I took one of the pages back to my desk, and later in the day I had an idle moment, and my eyes wandered across the page. The funny thing is, if I had not known what text was supposed to be on the page, I would not have noticed, but the text was not random at all. Instead, all the letters had been shifted by one place in the alphabet (i.e. "ABCD" became "BCDE").

I went back to the user and told her to check the little box that said "Print text as graphics" in the PDF viewer's printing dialog, and voila - the page came out of the printer looking the way it was supposed to.

Printing that way did take longer than usual (a lot longer), but at least the results were correct.

To this day, I have no clue where the problem came from, and unfortunately, I did not have the time to investigate the issue further. I had never seen such a problem before or after.

In a way it's part of what I like about my job: These weird problems that seem to come out of nowhere for no apparent reason, and that just as often disappear back into the void before I really understand what is going on. It can be oh-so frustrating at times, but I cannot deny that I am totally into weird things, so some part of me really enjoyed the whole experience.

gchadwick 1 day ago 0 replies      
Surely the real bug is the reliance on the 'file' utility in the first place? It attempts to quickly identify a file that could be literally anything, so it's not surprising (and indeed should be expected) that it sometimes gets it wrong.

I don't know the details of the CUPS script, but presumably it can only deal with a small number of different file types. Implementing its own detection to positively identify PS vs. whatever other formats it deals with vs. everything else would be far more robust.

kinai 1 day ago 0 replies      
I once had a desktop system that would often hard-reset when you sat down and started typing. It turned out Dell had left a piece of metal in the case, hanging between the case and the motherboard (in those few millimeters of clearance), and stronger desk vibrations caused a short circuit.
mark-r 2 days ago 1 reply      
I love the modification that pipes the output of cat into sed; doesn't he realize that cat is redundant at that point?
gsylvie 2 days ago 0 replies      
Here's a great collection of classic bug reports (including the never-printing-on-tuesdays): https://news.ycombinator.com/item?id=10309401
sklogic 1 day ago 0 replies      
No, it is a cups bug indeed. File was never guaranteed to be precise in the first place, it is not a good idea to rely on it.
rcthompson 1 day ago 0 replies      
I once found a bug in a weather applet that only occurred when the temperature exceeded 100 degrees. The 3-digit temperature caused a cascade of formatting issues that rendered part of the applet unreadable. I believe the author used Celsius, and so would never have encountered this bug on their own.
DonHopkins 1 day ago 2 replies      
My 6502-based FORTH systems would sometimes crash for no apparent reason after I tweaked some code and recompiled it. Whenever it got into crashy mode, it would crash in a completely different way, on a randomly different word. I'd put some debugging code in to diagnose the problem, and it would either disappear or move to another word! It was an infuriating Heisenbug!

It turns out that the 6502 has a bug [1] that when you do an indirect JMP ($xxFF) through a two byte address that straddles a page boundary, it would wrap around to the first byte of the same page instead of incrementing the high half of the address to get the first byte of the next page.
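The wraparound is easy to model; a minimal sketch (in Java, assuming a flat 64K memory array - an illustration of the erratum as described above, not an emulator-accurate implementation):

```java
public class Jmp6502 {
    // Resolve JMP (ptr) the way the NMOS 6502 does: when ptr ends in $FF,
    // the high byte of the target is fetched from the start of the SAME page.
    static int buggyIndirectJmp(int[] mem, int ptr) {
        int lo = mem[ptr];
        int hiAddr = (ptr & 0xFF00) | ((ptr + 1) & 0x00FF); // wraps within page
        return (mem[hiAddr] << 8) | lo;
    }

    // What the fixed 65C02 does: the pointer increments across the page boundary.
    static int fixedIndirectJmp(int[] mem, int ptr) {
        return (mem[ptr + 1] << 8) | mem[ptr];
    }

    public static void main(String[] args) {
        int[] mem = new int[0x10000];
        mem[0x02FF] = 0x34; // low byte of the intended target
        mem[0x0300] = 0x56; // intended high byte (first byte of next page)
        mem[0x0200] = 0x12; // byte the NMOS 6502 actually fetches instead
        System.out.printf("buggy: %04X fixed: %04X%n",
            buggyIndirectJmp(mem, 0x02FF), fixedIndirectJmp(mem, 0x02FF));
        // buggy: 1234 fixed: 5634
    }
}
```

With a code field straddling a page boundary, the inner interpreter jumps to a garbage address assembled from the wrong high byte - which is exactly why the crashes looked random and moved around with every recompile.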

And of course the way that an indirect threaded FORTH system works is that each word has a "code field address" that the FORTH inner loop jumps through indirectly. So if a word's CFA just happened to straddle a page boundary, that word would crash!

6502 FORTH systems typically implemented the NEXT indirect threaded code inner interpreter efficiently by using self modifying code that patched an indirect JMP instruction on page zero whose operand was the W code field pointer. [2]

JMP indirect is a relatively rare instruction, and it's quite rare that it's triggered by normal static code (since you can usually catch the problem during testing), but self modifying code has a 1/256 chance of triggering it!

A later version of the chip, the 65C02, fixed that bug. It could manifest in either compiled FORTH code or the assembly kernel. The FIG FORTH compiler [3] worked around it at compile time by allocating an extra byte before defining a new word if its CFA would straddle a page boundary. I defined an assembler macro for compiling words in the kernel that automatically padded in the special case, but the original 6502 FIG FORTH kernel had to be "checked and altered on any alteration" manually.

[1] http://everything2.com/title/6502+indirect+JMP+bug

[2] http://forum.6502.org/viewtopic.php?t=1619

"I'm sure some of you noticed my code will break if the bytes of the word addressed by IP straddle a page boundary, but luckily that's a direct parallel to the NMOS 6502's buggy JMP-Indirect instruction. An effective solution can be found in Fig-Forth 6502, available in the "Monitors, Assemblers, and Interpreters" section here. (The issue is dealt with at compile time; there is no run-time cost. The word CREATE pre-pads the dictionary with an unused byte in the rare cases when the word about to be CREATEd would otherwise end up with a code-field straddling a page boundary.)"

[3] http://www.dwheeler.com/6502/FIG6502.ASM

  ; The following offset adjusts all code fields to avoid an
  ; address ending $XXFF. This must be checked and altered on
  ; any alteration, for the indirect jump at W-1 to operate!
  ;
  .ORIGIN *+2

  .WORD DP     ;)
  .WORD CAT    ;| 6502 only. The code field
  .WORD CLIT   ;| must not straddle page
  .BYTE $FD    ;| boundaries
  .WORD EQUAL  ;|
  .WORD ALLOT  ;)

GigabyteCoin 1 day ago 0 replies      
"tue" means "kill" in french... I wonder if a french programmer somewhere had something to do with this?
lifeisstillgood 2 days ago 2 replies      
And this is why we won't ever get AI. Humans seem to only manage to get to a certain level of complex before it all gets too much.

There are supposedly people in Boeing who understand literally every part of a 747, the wiring and the funny holes in the windows. But there is probably no one who understands all parts of Windows 10.

We're doomed to keep leaping like dolphins to reach a fish held too high by a sadistic Orlando world trainer

chris_wot 1 day ago 0 replies      
Wait till you see where they found the print server!


gregschlom 2 days ago 2 replies      
So what's the lesson here? What should we learn from that?
broodbucket 2 days ago 1 reply      
Is it just me or does this get posted every month?
meeper16 2 days ago 1 reply      
Yet another reason I don't let OpenOffice or any Linux UIs slow me down. It's all about the command line and always will be.
How Intel missed the smartphone market mondaynote.com
292 points by JeremyMorgan  3 days ago   196 comments top 43
vtail 3 days ago 6 replies      
Expect the same fate for many more tech companies.

Although not many people realize it, tech is a _mature_ sector of the U.S. economy now - it grows at about 3% a year, pretty much in line with GDP. But the perception is different - just ask your friends - and the reason is that there are highly visible _pockets of growth_, like Uber/SpaceX/name-your-favorite-unicorn, and people often extrapolate their growth to the rest of the industry.

Now what happens with many tech companies is that they often have a product with exceptional margins and a double-digit growth rate, and it makes sense to invest all available resources into it - better from an ROIC perspective - ignoring all the alternatives, which lack either the volume or the margins to look attractive. This inevitably leads to problems once your exceptional product stops growing, and you realize you have barely invested in anything else.

Much like Intel with x86 and ARM chips, or Qualcomm, or EMC, or RIM, ... - the list goes on and on.

Even when you look at Google, most of their resources are invested in the search/ad business, so when that stops growing - or rather, once they've taken all the share from TV and start growing at GDP rate - they will be in the same boat.

Edit: typos.

Rezo 3 days ago 1 reply      
If you don't cannibalize yourself, someone else will.

Intentionally limiting Atom performance, selling off their ARM division, etc. was all done in order not to harm their main cash cow. By the time they woke up and really tried to push x86 on Android, it was too little, too late.

Just from an engineering perspective it was always going to be a monumental task. Because guess what, that small dev shop with the hit-of-the-month mobile game is not going to bother cross-compiling or testing on the 1% of non-ARM devices. And if "Ridiculous Fishing" or whatever doesn't work flawlessly, your device is broken from the consumer perspective.

But what should really have Intel pissing their pants is the recent AMD x86 license deal with Chinese manufacturers to pump out x86 server-class chips. I'd love to hear whether they're taking it seriously at all, or dismissing it as usual.

sp332 3 days ago 1 reply      
Looks like the article just got deleted? Here's a cache http://webcache.googleusercontent.com/search?strip=1&q=cache... It's not available in the Internet Archive because of robots.txt, which makes me wonder how Google justifies caching it but whatever
samfisher83 3 days ago 5 replies      
Let's look at QCT, which I think is one of the biggest, if not the biggest, ARM chip businesses out there.


"QCT's operating margin fell from 16.9% in fiscal 1Q16 to 5% in fiscal 2Q16. The margin was on the higher end of the low- to mid-single-digit guidance as the ASP (average selling price) of 3G and 4G handsets equipped with Qualcomm chipsets rose by 6% YoY to $205-$211. The price rose due to a favorable product mix and higher content per device."

The margins on cell phone chips are terrible. QCT made $2.5B on $17B in revenue.


Would it really make sense to invest in the cell phone business when every dollar you put in gets you less ROI compared to what you have now? From a finance perspective it would make more sense to return it to the shareholders and let them invest in QCOM if they want to.

markbnj 3 days ago 1 reply      
Take any company that once rode atop a huge market and then suffered through declines as that market changed and you could filter back through the history of their decision making and say the exact same thing Gassee is saying here about Intel. So I'm not sure what point he is trying to make when he says their "culture ate 12,000 jobs." Does he mean their culture of focusing on the market they were currently very successful in, and their culture of not seeing huge paradigm shifts happening just under the surface of it? I suspect he just had a dull axe, and decided the time was finally ripe to sharpen it.
Mendenhall 3 days ago 7 replies      
I found this quote particularly interesting:

"I couldn't see it. It wasn't one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought."

and, perhaps more importantly:

"The lesson I took away from that was, while we like to speak with data around here, so many times in my career I've ended up making decisions with my gut, and I should have followed my gut. My gut told me to say yes."

I personally love data, facts, and hard science, but I find that too often many ignore gut feelings. I almost always follow my gut instinct; it has proven itself to me over and over again, even while taking what are often perceived as long shots, when somehow my gut tells me "you got this".

In particular I would say gather your own data on how often or not your gut instinct is correct and use that as a data point in addition to the hard science, facts etc.

Instinct evolved to keep you alive, it is often wise to not ignore it.

skynetv2 3 days ago 3 replies      
Many are focusing on the 12,000 number and forget that Intel has acquired a whole bunch of companies over the past few years, including McAfee, Altera, etc. Now mix in a change in strategy, and you find yourself with a bunch of people you no longer need: redundant services like HR, IT, test, design, and management, along with another class of engineers whose skill set you no longer need in your new direction.

Organizations continue to evolve and change direction. I am not saying leadership has not failed spectacularly in mobile, but some of the 12,000 jobs are a natural progression of acquisitions.

voiceclonr 3 days ago 0 replies      
Intel's problems have been a long time coming. They lost track of real customer needs and have been operating in their own x86-centric world. It was always "let me build it first and then look for a good use". I am ex-Intel and used to do software things. None of the leadership had a real sense of what customers really wanted, and the h/w architects just built the things they were really good at. And they justified it in unnatural ways ("You know why you need such a fast processor? You can run this flash game at 10K fps").

Things have been getting very efficient on both the client and the server side. With the cloud, they will have some momentum behind them - but long term, I think the glory days are gone where they could just produce chips and someone would take them.

kazinator 3 days ago 0 replies      
This goes back farther than 2005!

At one point at the height of the bubble, Intel was involved in mobile devices.

I worked for a company that developed secure, efficient wireless communication middleware for that space. Our client hardware was mainly Pocket PC and Windows CE at that time.

We partnered with Intel to port our stack to the Linux device they were developing (codenamed "PAWS"). This was around 2000-2001, if I recall.

Things were going very well when, practically overnight, Intel decided to pull out of this market entirely. They shut down the project and that was that.

It didn't bode very well for that little company; we gambled on this Intel partnership and put a lot of resource into that project in hopes that there would be revenue, of course. Oops!

udkl 3 days ago 0 replies      
The post was deleted.

Here is the google cache link for quick access : http://webcache.googleusercontent.com/search?q=cache:https:/...

And here is the raw text(without any links) : http://pastebin.com/e10Yw0zi

jonstokes 3 days ago 12 replies      
I hate this piece. I hate that in 2016, people still believe in Magical ARM ISA Performance Elves. I hate that this is like my sixth or seventh post-Ars comment on HN in as many years debunking the ARM Performance Elves theory.

Back when the PPro was launched there was a low double-digits percentage of the die taken up with the x86 translation hardware (for turning x86 instructions into internal, RISC-like micro-ops). Now that number is some small fraction of a percent. It kills me that I still read this nonsense.

The reason that Intel didn't get into cell phone chips is because margins were (and are) crap, and Intel is addicted to margins. The reason everyone else went ARM is because a) licenses are dirt cheap, and b) everyone knows that Intel is addicted to margins, and they know about all of Intel's dirty tricks and the way Intel leveraged Windows to keep its margins up in the PC space and screw other component vendors, so everyone has seen what happens when you tie yourself to x86 and was like "no thanks".

Of course Intel didn't want to make ARM chips for Apple (or anyone else), because even if they had correctly forecast the volume they'd have no control over the margins because ARM gives them no leverage the way x86 does. If Intel decides it wants to make more money per chip and it starts trying to squeeze Apple, Apple can just bail and go to another ARM vendor. But if Apple went x86 in the iPhone, then Intel could start ratcheting up the unit cost per CPU and all of the other ICs in that phone would have to get cheaper (i.e. shrink their margins) for Apple to keep its own overall unit cost the same (either that or give up its own margin to Intel). Again, this is what Intel did with the PC and how they screwed Nvidia, ATI, etc. -- they just ask for a bigger share of the overall PC unit cost by jacking up their CPU price, and they get it because while you can always pit GPU vendors against each other to see who will take less money, you're stuck with Intel.

(What about AMD? Hahaha... Intel would actually share some of that money back with you via a little kickback scheme called the "Intel Inside" program. So Intel gives up a little margin back to the PC vendor, but they still get to starve their competitors anyway while keeping AMD locked out. So the game was, "jack up the cost of the CPU, all the other PC components take a hit, and then share some of the loot back with the PC vendor in exchange for exclusivity.")

Anyway, the mobile guys had seen this whole movie before, and weren't eager to see the sequel play out in the phone space. So, ARM it was.

The only mystery to me was why Intel never just sucked it up and made a low-margin x86 chip to compete directly with ARM. I assure you there was no technical or ISA-related reason this didn't happen, because as I said above that is flat-earth nonsense. More likely Intel just didn't want to get into a low-margin business at all (Wall St. would clobber them, and low-margin x86 would cannibalize high-margin x86), and by the time it was clear that they had missed the smartphone boat, ARM was so entrenched that there was no clear route to enough volume to make it worthwhile - again, especially given that any phone maker Intel approaches with an x86 phone chip is going to run the other way, because they don't want x86 lock-in.

petra 3 days ago 3 replies      
The main issue here is: Intel doesn't want to work on stuff with lower margins (even though it can make money and is strategically valuable). That's common to many businesses.

Isn't there any financial engineering trick that would enable companies to solve this dilemma, which so often leads to their death?

chillingeffect 3 days ago 1 reply      
FWIW, AMD also struck out HARD in the mobile market. When I interviewed there (just before release of iPhone 3G), I was jazzed about the "imminent" opportunity for mobile CPUs. I expected AMD to scale right down and into the phone, tablet and CPU market. We watched the world tumble around us as if on a ferry boat.

When I voiced the words "mobile CPU" to anyone there, people were oblivious and silent. The company was just out of touch, thinking everyone was going to keep buying tower PCs and laptops. It seemed the only variable they thought customers cared about was performance - people would buy AMD if it was just a little faster. They didn't realize it was simply the wrong product/market fit. Performance wasn't nearly as important, sigh.

adekok 3 days ago 0 replies      
Intel also bet big on Wimax. Which (mostly) ended up going nowhere. That was another multi-billion dollar mistake. And one which cost thousands of jobs at Intel.
shmerl 3 days ago 0 replies      
It is sad. Because in comparison with sickening ARM scene, Intel actually put effort into making open GPU drivers.
hoi 2 days ago 0 replies      
The mobile space was already heating up. Nokia alone in 2005 was selling over 250M devices at around 35-40% market share. They were already using ARM chips. Forecasts for smartphones were already high, since the whole mobile industry was positioning itself to upgrade. It was always a question of getting the telcos to upgrade their 2G networks to 3G before the expected smartphone explosion. Intel should have known this.
tracker1 3 days ago 0 replies      
I'm not sure the "Just You Wait" attitude is without merit; it has just taken longer than thought, and I'm not sure why. Given the transitions through process advancements over the past 6 years or so, we now have CPUs that burst to higher speeds very transparently, at really low power compared to a decade ago. While not suitable for phones, Intel is a pretty great choice for laptops and tablets (OS concerns aside).

I do think that ChromeOS (or whatever it's called) offers most users a distinct difference from what Windows is. That changes things up a lot.

I feel that within 4 years, Intel will have some competitive x86 offerings compared to ARM... On the flip side, by then, ARM will be much more competitive in the server/desktop space than it is today. It's kind of weird, but it will be another round of competition between different vendors all around. I'm not sure what other competitors will come around again.

That's not even mentioning AMD's work at their hybrid CPUs... also, some more competition in the GPU space would be nice.

It really reminds me of the mid-to-late 90's, when you had half a dozen choices of server/workstation architectures to target. These days, thanks to Linux's dominance and flexibility on the server, and even MS's refactoring of Windows, it's easy to target different platforms in higher-level languages.

There will be some very interesting times in the next few years.

frik 3 days ago 2 replies      
I was shocked the other day that Intel discontinued the Atom. So if one needs a cheap SoC board, there are now only ARM boards like the Raspberry Pi, and no cheap ($50) x86/x64-compatible boards with Windows drivers. The Intel Atom 230 (1.6GHz dual core with HT) of 2008 was cheap ($50) and fast (Pentium M based). And on the high-end side there is still no 5GHz, and there has been little single-core improvement since 2004, since the 3GHz Pentium 4. We need more x86/x64 competition to get lower prices and faster cores.
protomyth 3 days ago 0 replies      
I do wonder what would have happened if Intel had concentrated on being a custom chip maker. With continued advancement of a low-powered x86 core plus custom options, it would have been interesting. I guess when both game console manufacturers skip Intel to go with AMD, you really have to wonder where they thought things were going. A company so against customization that it closed its bus to third parties probably isn't going to do well in a custom world.
dredmorbius 2 days ago 0 replies      
Article error: "In 1985, fresh from moving the Macintosh to the x86 processor family, Steve Jobs asked Intel to fabricate the processor that would inspirit the future iPhone. The catch: This wouldn't be an Intel design, but a smaller, less expensive variant of the emerging ARM architecture, with the moderate power appetite required for a phone."

That should be 2005.

bsder 3 days ago 3 replies      
Intel didn't miss the smartphone market. It simply made no sense (and still makes no sense) to go after a less profitable market.

When the margins on x86 cross below that of ARM chips, Intel will come in and destroy all the ARM manufacturers.

inlined 2 days ago 1 reply      
I'd love help understanding how Intel's failure in the smartphone market doesn't also doom its IoT prospects. The x86 chips never took off because they were expensive and used too much energy. If IoT is to be ubiquitous, those barriers seem even more important.

Intel's comment about IoT makes me wonder: do they think they just lost the first mover advantage for mobile & the industry got hooked on ARM ISAs? Do they still believe the story that their chips will become cheaper and more powerful than ARM if they change nothing?

stephenmm 3 days ago 0 replies      
They made several attempts (http://www.cnet.com/news/intel-sells-off-communications-chip...) but I think Intel has a huge cultural problem that does not allow for true innovation outside of its core competencies. I worked for the Marvell group, and while Intel did not know how to be nimble, Marvell had their own issues, which is in part (IMHO) why BlackBerry lost its leadership position. But the biggest problem with Intel is the culture, as you are truly just a cog in the wheel there. Just my $.02
cowardlydragon 3 days ago 1 reply      
Article gone?

Well, I'd guess fundamentally it was about tying yourself to the Microsoft sociopath mothership.

Intel could have taken Linux by the reins and made an OS X-equivalent, and certainly Windows-beating, system decades ago.

But they didn't.

So they missed the boat.

tmaly 3 days ago 2 replies      
Intel has some amazing research. I was at an internal fair back in 1999 when I was doing an internship there in the Product Dev group that did the Itanium chipset.

They had this handheld touch-screen device that resembled a modern smartphone and was driven by voice recognition. This was 1999, way before the iPhone came around.

In my opinion, it is the internal politics and the leadership that have caused them to lose out.

Chip design books in academia were already moving in the direction of low power designs during the late 90s. They just did not take any action.

oldmanjay 3 days ago 0 replies      
I guess the point of the article is that this is a failure of some sort, but it never really makes a strong enough case. The closest it comes is asserting that the executives at Intel should have been able to predict the future. If I were inclined to hate the company for some reason, I would find this emotionally satisfying, but I'm not, so I just find it to be a recap of things everyone already knew, expressed with an unjustified tone of smug finger-wagging.
payne92 3 days ago 0 replies      
I wonder if Intel saying "no" to Apple re: ARM will go down as the same scale decision as Digital Research saying "no" to IBM for an operating system.
darklajid 2 days ago 0 replies      
Time to get out my Maemo (RIP Nokia) and Meego (too bad, Intel) merchandise and go cry in a corner.

  Maemo/Meego
  WebOS
  Firefox OS
All of these were interesting, all failed. I'm still convinced that there's a market for a non Android/iOS solution.

brisance 3 days ago 0 replies      
Article is back up. He accidentally deleted his 9 years' worth of posts. https://twitter.com/gassee/status/732368830190620673
dhimes 3 days ago 0 replies      
"The lesson I took away from that was, while we like to speak with data around here, so many times in my career I've ended up making decisions with my gut, and I should have followed my gut. My gut told me to say yes."

Killer, for us fans of rational strategy.

lquist 3 days ago 1 reply      
A friend of mine at Intel told me that Intel doesn't care that much about not owning this market because the size of the profit pool around server chips is orders of magnitude greater and they have monopolistic control of that market.
draw_down 3 days ago 0 replies      
This case is a real problem for the "data-driven thinking" set. Or rather, the problem with that idea is that you have no way to tell which data you should be looking at. Just because you have numbers doesn't make them the right ones.
elmar 3 days ago 0 replies      
they just forgot that only the paranoid survive.
mastazi 2 days ago 1 reply      
OT but somehow related:

The article contains the sentence "Jobs asked Intel to fabricate the processor" and "Intel styled itself as a designer of microprocessors, not mere fabricator".

Question: according to your own experience, is it common in English to use "fabricate" and "fabricator" as synonyms of "manufacture" and "maker"?

I am much more familiar with the negative meaning (e.g. "fabricated lies") but since I'm not a native speaker my vocabulary might be limited.

ricardobeat 3 days ago 0 replies      
Why was the title editorialized by mods?
nkjoep 3 days ago 0 replies      
Unable to read the story. Seems delete from medium.
albasha 3 days ago 0 replies      
> The author deleted this Medium story
blhack 2 days ago 1 reply      

Intel didn't miss anything; they sell the hardware that powers the infrastructure behind these new, always-connected devices.

EGreg 3 days ago 0 replies      
The author deleted this Medium story.
Agathos 2 days ago 0 replies      
Hey, it's the BeOS guy!
crorella 3 days ago 2 replies      
He just deleted the post :/
visarga 3 days ago 0 replies      
... and also the GPU market.
known 3 days ago 0 replies      
The 9 lines of code that Google allegedly stole from Oracle majadhondt.wordpress.com
349 points by nkurz  1 day ago   190 comments top 50
jbob2000 1 day ago 9 replies      
Looks like something I've written a hundred times. It's a common pattern; you could "steal" this just by organically writing a program.

The more I hear about this case, the more I realize it's just a bunch of lawyers trying to pad their bank accounts. No sane engineer would claim this is infringement.
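For reference, the contested function is roughly this shape - a sketch reconstructed from public accounts of the trial, not presented as the verbatim Sun/Oracle source:

```java
public class RangeCheckSketch {
    // Validate a [fromIndex, toIndex) sub-range of an array of length arrayLen.
    static void rangeCheck(int arrayLen, int fromIndex, int toIndex) {
        if (fromIndex > toIndex)
            throw new IllegalArgumentException(
                "fromIndex(" + fromIndex + ") > toIndex(" + toIndex + ")");
        if (fromIndex < 0)
            throw new ArrayIndexOutOfBoundsException(fromIndex);
        if (toIndex > arrayLen)
            throw new ArrayIndexOutOfBoundsException(toIndex);
    }

    public static void main(String[] args) {
        rangeCheck(10, 2, 10); // valid: toIndex is an exclusive bound
        try {
            rangeCheck(10, -1, 5);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("out of bounds: " + e.getMessage());
        }
    }
}
```

Which is the commenter's point: ask ten programmers to guard a sub-range of an array and most will converge on something very close to this.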

sgc 1 day ago 0 replies      
Given that the guy who wrote this wrote both the original and the supposedly infringing code, I have an analogy from personal experience in another field.

For a while I worked as a translator, and I translated a couple of books for the same author. One of the later books quoted about a page from the first one I had translated a couple of years earlier. I just translated it again because it was faster than finding the passage in my other translation (first point). Afterwards, I went back out of curiosity and checked the two translations against each other. I was quite surprised to see that in one full page of translation, after years of further experience, there were only one or two prepositions that were meaninglessly changed (point two).

Some things are just so obvious that the same guy doing the same thing years apart will produce the same results, especially if he is an expert in his craft. Unless there is some way to prove otherwise, this point of the case should be definitively dropped.

nedsma 1 day ago 4 replies      
Dear goodness. And there are tens if not hundreds of people involved in trying to prove/disprove this case and they're all getting some hefty money. What a waste of human intellect and time.
guelo 1 day ago 1 reply      
This article is from 2012 and is very outdated. The "famous 9 lines" are not being contested anymore. Google lost that case. The current trial is about whether Google's copyright infringement constituted "fair use".
AdmiralAsshat 1 day ago 4 replies      
One thing I've been thinking about as I've read through the trial:

It's my understanding (I am a wee lad compared to the grizzled vets here, so bear with me) that most of our common *nix tools were written during the UNIX days and were technically proprietary (awk, grep, cut, etc.). When Linux came around, these tools were "ported" to become GNU tools and completely rewritten on the backend, while still keeping the same names so that existing UNIX developers would feel at home using the same tools on Linux, BSD, etc.

The key point here is that they intentionally kept the same command names, for familiarity's sake.

Given that, could one make the analogy that a command name would be similar to an "API" and should also have been illegal by Oracle's logic?

worldsayshi 1 day ago 2 replies      
Wow, this is legal bullshitting beyond comprehension. It is the equivalent of one engineer copying the way another engineer moves his arm when fastening a screw. Giving this more than 5 minutes of attention in a court is an insult to society.
gvb 1 day ago 0 replies      
More relevant information: "What are the 37 Java API packages possibly encumbered by the May 2014 Oracle v Google decision?"


From the #1 answer (it is worth clicking the link and reading the full answer):

  java.awt.font
  java.beans
  java.io
  java.lang
  java.lang.annotation
  java.lang.ref
  java.lang.reflect
  java.net
  java.nio
  java.nio.channels
  java.nio.channels.spi
  java.nio.charset
  java.nio.charset.spi
  java.security
  java.security.acl
  java.security.cert
  java.security.interfaces
  java.security.spec
  java.sql
  java.text
  java.util
  java.util.jar
  java.util.logging
  java.util.prefs
  java.util.regex
  java.util.zip
  javax.crypto
  javax.crypto.interfaces
  javax.crypto.spec
  javax.net
  javax.net.ssl
  javax.security.auth
  javax.security.auth.callback
  javax.security.auth.login
  javax.security.auth.x500
  javax.security.cert
  javax.sql

foobarrio 1 day ago 6 replies      
I thought "not obvious to a practitioner of the craft" was a requirement for a patent, no? Give 10 programmers the task to write "rangeCheck()" and you'll end up with very similar-looking code.
holtalanm 1 day ago 5 replies      
Am I the only one who, when looking at the implementation, sees a major flaw in the code?

if(toIndex > arrayLen) does not handle the case in which toIndex == arrayLen, which should still throw an ArrayIndexOutOfBoundsException if we are dealing with 0-based indexes.

Please correct me if I am wrong.
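
A minimal sketch suggesting this is arguably not a flaw, assuming (as in java.util.Arrays) that toIndex is an exclusive upper bound: under that convention, (fromIndex, toIndex) = (0, arrayLen) is the valid whole-array range, and only toIndex > arrayLen is out of bounds. The class name below is invented for illustration.

```java
// Sketch, assuming the java.util.Arrays convention that toIndex is an
// EXCLUSIVE upper bound, so toIndex == arrayLen is a legal whole-array range.
class RangeCheckDemo {
    static void rangeCheck(int arrayLen, int fromIndex, int toIndex) {
        if (fromIndex > toIndex)
            throw new IllegalArgumentException(
                "fromIndex(" + fromIndex + ") > toIndex(" + toIndex + ")");
        if (fromIndex < 0)
            throw new ArrayIndexOutOfBoundsException(fromIndex);
        if (toIndex > arrayLen)
            throw new ArrayIndexOutOfBoundsException(toIndex);
    }

    public static void main(String[] args) {
        rangeCheck(10, 0, 10); // whole 10-element array: accepted, no exception
        try {
            rangeCheck(10, 0, 11); // one past the end: rejected
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("toIndex 11 rejected");
        }
    }
}
```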

Aelinsaar 1 day ago 1 reply      
Incredible. The amount of money being set on fire for the sake of something that even a student knows is utter crap.
devy 1 day ago 0 replies      
Sort of off topic, but does anyone know who this Tim Peters who created Timsort is? The Python docs and Wikipedia[1] have virtually no bio for him, even though he's a very well-known Python contributor and his code has become a legacy in this billion-dollar lawsuit, among other things (like the Zen of Python[2]).

[1]: https://en.wikipedia.org/wiki/Timsort

[2]: https://www.python.org/dev/peps/pep-0020/

ZeroGravitas 1 day ago 0 replies      
The worst part is that the programmer only "stole" these lines as he was contributing an improvement back to the OpenJDK and wanted his stuff to be compatible. Which adds one more level of absurdity.
hermannj314 1 day ago 0 replies      
Yeah, a bunch of jurors will be ruined financially while being forced to watch billionaires fight over how to best split up their empire. Sortition is how you spell slavery in the 21st century.
enibundo 1 day ago 1 reply      
As a software engineer, I get sad when I read news like this.
erikb 1 day ago 0 replies      
When the content of a trial is 9 lines of code, then of course the topic is not really the 9 lines of code. It's just a way to gain something else. Everybody involved probably knows that.

I personally am very happy if powerhouses fight each other with lawsuits instead of giving me a sword and asking me to die for them. In that regard I feel humanity has come quite far over the last centuries.

tantalor 1 day ago 1 reply      
cognivore 1 day ago 1 reply      
That has to be a joke. By pursuing this, Oracle just makes themselves look like idiots to anyone who actually has any technical knowledge.

So, they're idiots.

gsylvie 1 day ago 0 replies      
"I May Not Be Totally Perfect But Parts of Me Are Excellent" - I think this is a useful article to read when considering the 9 lines of code, because copyright law tends to treat novels, pop songs, and software code as the same: http://fairuse.stanford.edu/2003/09/09/copyright_protection_...
Twisell 1 day ago 1 reply      
This is total FUD. (EDIT ND: because those lines of code are already out of every discussion to be held in the current retrial; they have already been ruled out, and the only remaining question is fair use.)

This retrial should now be focused entirely on whether Google's "stealing" of the API SSO falls under a fair-use exception, relieving it of liability.

The preceding phases of this case already determined that:

- those nine lines are not significant
- Google used the API SSO without the consent of Sun/Oracle and without any license
- the API SSO of Java is indeed copyrightable (this was ruled on appeal and confirmed by the Supreme Court)

This retrial is only happening because judge Alsup ran a half-baked first trial, and the appeals court returned the case to him after invalidating his bad ruling on the non-copyrightability of APIs.

For those who seek deep insights into this case, take a look at Florian Mueller's blog: http://www.fosspatents.com

He pretty accurately predicted the reversal of the first ruling, against the opinion of many mainstream analysts. And he frequently publishes links to public court documents, so you can make up your mind for yourself.

EDIT: If you downvote, please make an argument; otherwise it's very suspicious. I'm totally open to discussion, but I can't fight against hidden lobbyist activity that systematically downvotes diverging views.

EDIT2: I edited the first sentence to be more explanatory. I've seen I got some upvotes, but the silent bashing seems to continue. Again, please make an argument!

I don't get why the name of this blogger unleashes so much passion when he always publishes documents and links to actual rulings. Yes, he clearly doesn't write as elegantly as some, and yes, he's by now pretty opinionated, but why so much hate?

laverick 1 day ago 1 reply      
Uhm. That code wouldn't even compile...
chiefalchemist 1 day ago 0 replies      
Actual code aside, I would think this should strike fear into the hearts and minds of any dev who wishes to change jobs without changing industries / product type. I would think that, push come to shove, employers will opt for less direct experience, for fear of "a temporary measure" they didn't ask for. That is, suddenly, experience might not be as valuable as it used to be.
sleepychu 1 day ago 0 replies      
I'm pretty sure I've seen


0xmohit 1 day ago 0 replies      
Thankfully the patent system didn't exist when the number system was developed. Otherwise one would need to pay a royalty for counting.
meganvito 1 day ago 0 replies      
At the university I graduated from, a professor would definitely mark this as plagiarism and give it an F, unless a strict rule of attribution was followed. Most OpenJDK source files have the usual header as their first line. Maybe I am a late student of the JDK, or maybe the court will allow an exception. Finally, you have to ask yourself: what does it mean to contribute to open source?
curiousgal 1 day ago 0 replies      
> if i is between 0 and 11 before retrieving the ith element of a 10-element list.

Shouldn't i be between 0 and 9?

foldablechair 1 day ago 0 replies      
Reminds me of all those court cases over 'stolen' logos: with a small, fixed set of geometric primitives, the probability of coincidence is just high that way. Of course, some people believe all art is imitation and nothing ever gets created from first principles.
chiefalchemist 1 day ago 0 replies      
Code aside. This should strike fear in the hearts and minds of any dev who wishes to change jobs and doesn't change industries / product type. I would think that push come to shove employers will opt for less direct experience.
chenster 19 hours ago 0 replies      
Thanks for wasting court time on nonsense like this. Things like this squat on our legal system and yield absolutely nothing.
Matt3o12_ 1 day ago 0 replies      
Does anyone have an idea what is really going on?

I've heard people say that Google actually copied the API structure (which is copyrightable), but I've also heard that this lawsuit was actually about Google using a wrong (or missing) license. And I've heard that Google also manipulated the developer community by only propagating "we only copied 7 lines of code" and "big evil Oracle sues us".

From what I know, Google used Java's API structure but did not include a license. They could have paid Oracle for a license to use it commercially, or they could have used the GPL from OpenJDK and been bound by its restrictions. What they did instead was not include a license at all, because they did not want to pay Oracle but also did not want to be bound by the GPL (which might complicate things with phone manufacturers that change the code).

Could anyone tell me what the fuck this lawsuit is actually about?

eps 1 day ago 0 replies      
Am I reading this correctly that it's actually buggy?

It doesn't properly work if the array is zero-based, nor does it work if it's 1-based. Nor does it properly work whether toIndex is meant to be included in the range or excluded from it.

nutate 1 day ago 0 replies      
The resonance with left-pad and the question of "how exactly do we share super-simple code" evolves through so many different prisms: from legal to organizational to not-invented-here to...
Tloewald 1 day ago 0 replies      
Is it just me or does this code seem to have an off-by-one error (i.e. throwing on toIndex > arrayLen and not toIndex >= arrayLen, given that the lower bound check implies zero-based arrays)?
knodi123 1 day ago 0 replies      
Interesting that these 9 lines were apparently re-typed by hand, or possibly even from memory.... or so I suppose based on the missing close-paren on the first line...
cm2187 1 day ago 0 replies      
There is a lot of vested interest in this case and I do not know the author of this article. Are we sure the claim is down to the implementation of this function?
mark242 1 day ago 1 reply      
A void function that does nothing but throw exceptions. Scala engineers everywhere cringe at the thought of converting this kind of code to native Scala.
rootlocus 1 day ago 0 replies      

 > Google owes Oracle between $1.4 billion and $6 billion in damages if liable
In what damages, exactly?

udkl 22 hours ago 0 replies      
Naively, at 9 lines, that's roughly $155 million to $670 million per line of code.
shubhamjain 1 day ago 0 replies      
Perhaps, someone should make a software that checks code to see if it is infringing any copyright. :)
meganvito 1 day ago 0 replies      
I'll leave this as my last comment: doing 'cheap things' is habitual.
eb0la 1 day ago 0 replies      
I bet you could get similar code from BSD, Emacs, Ingres, or any venerable open-source codebase and use it as prior art against that patent claim.

Ok, maybe those venerable codebases don't have exception handling like Java, but you can show the same logic existed maybe 10 or 20 years before that code was written.

masters3d 1 day ago 0 replies      
One billion dollars per line.
BurningFrog 1 day ago 0 replies      
Is this what the whole case rests on, or is it just one of many details?
hathym 1 day ago 0 replies      
wow, each line costs nearly one billion dollars
smaili 1 day ago 2 replies      
tldr -

  private static void rangeCheck(int arrayLen, int fromIndex, int toIndex {
      if (fromIndex > toIndex)
          throw new IllegalArgumentException("fromIndex(" + fromIndex + ") > toIndex(" + toIndex + ")");
      if (fromIndex < 0)
          throw new ArrayIndexOutOfBoundsException(fromIndex);
      if (toIndex > arrayLen)
          throw new ArrayIndexOutOfBoundsException(toIndex);
  }

vladaionescu 1 day ago 0 replies      
Pretty sure that the only reason they copied that code was that they didn't know how to do it themselves.
CiPHPerCoder 1 day ago 2 replies      
This code is ugly anyway:

  private static void rangeCheck(int arrayLen, int fromIndex, int toIndex {
      if (fromIndex > toIndex)
          throw new IllegalArgumentException("fromIndex(" + fromIndex + ") > toIndex(" + toIndex + ")");
      if (fromIndex < 0)
          throw new ArrayIndexOutOfBoundsException(fromIndex);
      if (toIndex > arrayLen)
          throw new ArrayIndexOutOfBoundsException(toIndex);
  }
Missing a closing paren in the function prototype, among other things.

  private static void rangeCheck(int arrayLen, int fromIndex, int toIndex) {
      if (fromIndex > toIndex) {
          throw new IllegalArgumentException(
              String.format("fromIndex(%d) > toIndex(%d)", fromIndex, toIndex)
          );
      }
      if (fromIndex < 0) {
          throw new ArrayIndexOutOfBoundsException(fromIndex);
      }
      if (toIndex > arrayLen) {
          throw new ArrayIndexOutOfBoundsException(toIndex);
      }
  }
There you go Google, Oracle, et al. I release this snippet under MIT / WTFPL / CC0. You're welcome.

Oletros 1 day ago 0 replies      
This case is not about rangeCheck; it's about the declarations of the 37 Java packages.
draw_down 1 day ago 0 replies      
> Every company tries to control its developers' actions, but does management really know what goes into the software?

This is backwards: developers do what management allows. If management cares to know what goes into the software, they will know. There are ways to know. Whether business people want to pay for that is a different matter. Of course they don't, for this precise reason: so they can throw up their hands and say, "those darn developers!"

Software Design Patterns Are Not Goals, They Are Tools exceptionnotfound.net
295 points by kiyanwang  16 hours ago   154 comments top 30
userbinator 14 hours ago 12 replies      
It probably seems like an obvious statement to a lot of HN, but I have a feeling that it isn't to the majority of developers, who for some reason appear to love immense complexity and solving simple problems with complex solutions. I think a lot of them started with OO, which immediately raises their perception of what is "normal" complexity --- at that point, they're already creating more abstraction than is really necessary. Then they learn about design patterns and all the accompanying "hype" around them, so they think "awesome, something new and shiny to use in my code!" and start putting them in whenever they can, I guess because it feels productive to be creating lots of classes and methods and hooking everything together. It's easier to dogmatically apply design patterns and generate code mindlessly than to think about what the problem actually needs to solve it. The result is code that they think fulfills all the buzzwordy traits of "good software engineering practice" (robustness, maintainability, extensibility, scalability, understandability, etc.), but in reality is an overengineered brittle monstrosity that is only extensible in the specific ways thought of when it was first designed. That almost never turns out to be the case, so even more abstractions are added (including design patterns) on the next change, on the belief that it will help with the change after that, while leaving the existing useless ones in, and the system grows in complexity massively.

I did not start with OO, I never read the GoF book, and don't really get the obsession with design patterns and everything surrounding them. I've surprised a lot of others who likely have, by showing them how simple the solutions to some problems can be. Perhaps it's the education of programmers that is to blame for this.

The statement could be generalised to "software is not a goal, it is a tool".

Related article: https://blog.codinghorror.com/head-first-design-patterns/

jrochkind1 10 hours ago 1 reply      

Design patterns are super useful as tools.

As "goals" they are idiotic. I think lots of people that think they are idiotic have been exposed to them as "goals", or don't realize that's not the point.

I think there is a larger issue here, which is that many kinds of software development, including web dev, has become enormously more complex in many ways than it was when many of us came up.

People coming up now are looking for magic bullets and shortcuts and things they can just follow by rote -- because they are overwhelmed and don't know how to get to competence, let alone expertise, without these things.

It's easy for us to look down on people doing this as just not very good developers -- and the idea of 'software bootcamp' doesn't help, I think it's probably not _possible_ to get to competence through such a process -- but too easy to forget that if we were starting from scratch now we ourselves would likely find it a lot more challenging than we did when we started. There's way more stuff to deal with now.

"Design patterns" are never going to serve as such a magic bullet or thing you can follow by rote, and will often make things worse when used that way -- but so will any other potential magic bullet or thing you can follow by rote. Software doesn't work that way. It's still a craft.

dantheman 14 hours ago 2 replies      
Patterns are from software archaeology, they were naming things that were commonly seen and what they were for -- they were helping build a vocabulary to talk about larger constructs.

They are useful if you have a problem and one fits it perfectly, it can help you start thinking about it -- but it might not be a good fit.

In general we should be keeping software as simple as possible, with the understanding that it can be changed and adapted as needed. Often large "pattern" based projects devolve into a morass of unneeded complexity to support a level of flexibility that was never required.

prof_hobart 15 hours ago 1 reply      
> In other words, if you ever find yourself thinking, "I know, I'll use a design pattern" before writing any code, you're doing it wrong.

Unless I'm misunderstanding him, I would disagree with this. You're doing it wrong when you use a design pattern without understanding what problem it's solving, and whether you have that specific problem.

To use his tool analogy: if you're a joiner who turns up to a job thinking "we always need to use a hammer" and starts randomly hitting everything, then you've gone wrong. But equally, if you're halfway through knocking a nail in with your shoe and think "Oh look, I'm using the hammer pattern now", you're doing it just as wrong.

If you're looking at two things you need to attach together, you've considered whether glue, a screw, a nail or something else is most appropriate for this specific job, decided it's the nail, and then think "I need to use my hammer now", then you're doing it right.

rootlocus 12 hours ago 0 replies      
I found both his definition of the adapter pattern and his example to be a bit off. In his example, the adapter extends the external interface instead of the client interface. By definition the adapter must implement the client interface. It's even in the UML diagram displayed on the website he quotes (http://www.dofactory.com/net/adapter-design-pattern)

 > The fact was that I just didn't understand them the way I thought I did.

 > To be clear, I've never read the Gang of Four book these patterns are defined in
After admitting he has a less than desired understanding of design patterns (proven by his poor example), he makes bold claims like:

 > if you ever find yourself thinking, "I know, I'll use a design pattern" before writing any code, you're doing it wrong.
I'm having problems taking this article seriously.
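
To make the distinction concrete, here is a hypothetical minimal sketch (Logger, LegacyLogger, and LegacyLoggerAdapter are invented names, not from the article or the GoF book): the adapter implements the client-facing interface and delegates to the incompatible external class, rather than extending it.

```java
// Hypothetical Adapter sketch; all names are invented for illustration.
interface Logger {                      // the interface the client codes against
    void log(String message);
}

class LegacyLogger {                    // external class with an incompatible API
    String lastLine;                    // recorded only so the demo can inspect it

    void writeLine(String severity, String text) {
        lastLine = "[" + severity + "] " + text;
    }
}

class LegacyLoggerAdapter implements Logger {  // implements the CLIENT interface
    private final LegacyLogger legacy;

    LegacyLoggerAdapter(LegacyLogger legacy) {
        this.legacy = legacy;
    }

    @Override
    public void log(String message) {       // translate the call the client makes
        legacy.writeLine("INFO", message);  // into the call the legacy API expects
    }
}

class AdapterDemo {
    public static void main(String[] args) {
        LegacyLogger legacy = new LegacyLogger();
        Logger logger = new LegacyLoggerAdapter(legacy); // client sees only Logger
        logger.log("hello");
        System.out.println(legacy.lastLine); // prints "[INFO] hello"
    }
}
```

Here the client only ever sees Logger, so swapping out the legacy implementation later requires no client changes, which is the point of the pattern.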

gwbas1c 11 hours ago 0 replies      
Design patterns aren't the problem. All a design pattern is, is a well-known way of doing something.

When you build a house, do you re-invent how to frame, plumb, wire, and roof it? No. That's all a design pattern is. Choosing the right design pattern is akin to making sure that your basement is made out of cement and your walls framed with wood. (You don't want to put shingles on your countertops!)

The problem is that some developers think they are some kind of magical panacea without really understanding why the pattern was established and what it tries to achieve. These are the over-complicated projects that everyone is complaining about in this thread. (These are the projects where the basement is made with wood or the concrete walls too thick; or the projects where someone decided to put shingles on the countertop.)

I try to pick, establish, and follow design patterns in my code. It helps ensure that I don't spend a lot of time re-learning why some other technique is flawed; and it helps achieve a consistent style that everyone else on the team can work with.

MoD411 15 hours ago 2 replies      
"Software Design Patterns Are Not Goals, They Are Tools" - I do not understand why this needs to be said in the first place.
Arzh 10 hours ago 0 replies      
This article makes way more sense when he says he never read the Design Patterns book. If he had, he would have known all this before he started. The authors explain that the book is a collection of patterns compiled from a bunch of people and from years of experience. The patterns did come about organically, and they were never meant to be the way to design software. They were only trying to come up with a common lexicon for something they were all already doing.
emodendroket 12 hours ago 3 replies      
As far as I can tell design patterns are mostly about taking something simple and obvious and using terms to describe it that make it obscure and difficult to understand.
madeofpalk 11 hours ago 0 replies      
I'm reminded of a set of tweets from Harry Roberts about whatever new hot CSS naming convention was popular for the week:

> Modularity, DRY, SRP, etc. is never a goal, it's a trait. Don't let the pursuit of theory get in the way of actual productivity.

> That's not to say Modularity, DRY, SRP, etc. aren't great ideas (they are!) but understand that they're approaches and not achievements.

There's nothing super revolutionary about these thoughts, but they've stuck in the back of my mind for a while now.


rhapsodic 13 hours ago 1 reply      
A design pattern is a reusable solution to a recurring problem. Too many inexperienced devs forget that part, and use a pattern where the problem it's designed to solve doesn't exist. Had the author read the GoF book (he admits he still hasn't) he might have avoided that pitfall.
awinter-py 13 hours ago 1 reply      
design patterns are guru thinking. they're bad ways to describe self-descriptive tricks like callbacks. don't let a person who talks this way write docs ever; they'll focus on 'what's being used' rather than what's happening.

design patterns are like when a consultant creates a stupid name for something that already exists -- the name isn't about expressive power, it's about declaring ownership so the consultant can sell the 'Consulting Method' to solve your problem.

when a phenomenon or trick has an easily understood one-word name, don't let a nonexpert rename it to something nobody understands.

RangerScience 6 hours ago 0 replies      

The point of design patterns is a way to describe what you've made succinctly.


When you set out to do something that you don't yet know how to do, having a crank you can turn to get out functioning code is a good thing.

I think what you mean is "Design Patterns are Tools, not Dogma".

Plus, a lot of design patterns only make sense in typed and/or OOP languages, so under those circumstances, they can't be applied as goals.

apo 11 hours ago 0 replies      
> Here's the problem I have with design patterns like these [Adapter Pattern]: they seem to be something that should occur organically rather than intentionally. We shouldn't directly target having any of these patterns in our code, but we should know what they are so that if we accidentally create one, we can better describe it to others.

It's not clear what the author would have done differently in this example. It's one thing to raise concerns about pattern-first thinking in general, but quite another to spell out what exactly is wrong with reaching for the Adapter Pattern to solve a very specific problem under a given set of constraints. I can imagine a number of situations in which going straight for an Adapter is the only sane choice.

I've come to view with great suspicion any general discussion of programming divorced from its context. Architecture Astronauts and Cowboy Coders can each do a lot of damage if left to their own devices.

badloginagain 12 hours ago 1 reply      
Design patterns, OOP, to a large degree programming languages are just tools. You don't hear of craftsmen saying things like "The only thing you really need is a hammer. It's been around longer than the other tools and you can use it on every project". Replace "hammer" with C or Java and you have a legitimate comment on a score of threads.

> What patterns don't help with is the initial design of a system. In this phase, the only thing you should be worried about is how to faithfully and correctly implement the business rules and procedures.

I submit that should be your overriding concern at all times, not just the design phase. If you have to refactor some code in order to extend it, tie it back to the changed requirement. This forces you to make the least amount of changes, refactoring the least amount code, breaking the least amount of unit tests and introducing the least amount of bugs into production.

EliRivers 13 hours ago 0 replies      
While we're here, SOLID is a nice acronym that is helpful as a checklist of generally good ideas to consider. It's not a law of physics, it's not compulsory, following it blindly can lead to worse outcomes and if transgressing it leads to a better outcome (with all things considered) then it should be transgressed.
arxpoetica 13 hours ago 1 reply      
Just now realizing there is ambiguity around the term "design patterns". Say it in a different crowd and they'll think you are talking about the kind of design patterns Brad Frost writes about: http://atomicdesign.bradfrost.com/
V-2 9 hours ago 0 replies      
As pointed out (arguably a bit harshly) in the comments under the original article, this is really a strawman argument. That's because that ol' classical GoF book on design patterns, which the author admits he has not even read, addresses this concern already. It's still a valid argument, but not exactly a fresh one. And speaking on the subject without even bothering to read the piece widely considered canonical is a bit arrogant.
matchagaucho 9 hours ago 0 replies      
Stated in other terms, patterns are a means to an end. Not the end goal.

Patterns will organically emerge as the result of ongoing refactoring.

exception_e 10 hours ago 0 replies      
Kind of relevant to the discussions in this thread: https://en.wikipedia.org/wiki/Rule_of_three_(computer_progra...

When I do hit the magic 3 and can justify restructuring code, I consider my options in terms of design patterns (which are very much tools!)

mirekrusin 12 hours ago 0 replies      
His problem may be learning about these concepts from snake-oil sellers: he mentions he didn't bother to read the GoF book and gets his knowledge from things like http://www.dofactory.com/products/net-design-pattern-framewo... .

My advice is to learn from people like Martin Fowler or Kent Beck, and if you want to look at companies, look at something like ThoughtWorks.

bradenb 13 hours ago 3 replies      
> In other words, if you ever find yourself thinking, "I know, I'll use a design pattern" before writing any code, you're doing it wrong.

I completely disagree... if I'm working with a team. I've spent far too many hours trying to fix fragile code that comes about as a result of different devs with different methodologies trying to tie their code together.

id122015 10 hours ago 0 replies      
I can say the same thing about programming.

That's why, when I read HN, I'm trying to understand what you are trying to achieve. Something that goes beyond staying in front of the computer 10 hours a day.

johanneskanybal 12 hours ago 0 replies      
"I didn't read the article or the comments but I think you're all wrong, maybe it's bad upbringing or maybe something else but whatever". ok thanks for sharing.
smoreilly 10 hours ago 0 replies      
How can someone doing research on these patterns not have read the most basic/important piece of literature on the subject?
golergka 13 hours ago 2 replies      
I have been interviewing a lot of developers recently, and one of the best questions I've found is to ask them _why_ they used the MVC pattern in the test assignment (most do). Most developers misunderstand the question at first and either start to explain how MVC works or explain how they would have implemented it without MVC (when you ask people why they did something, they often take it as "you shouldn't have done it"). But even when I clarify the question, a surprising number just can't even begin to answer it; instead they stumble and at best just say that that's how they have always been taught to do it.
projektfu 11 hours ago 0 replies      
When I was in college, I assumed (like most) that patterns were received wisdom in how to construct software. Then I actually attended a talk with John Vlissides and realized that patterns were an entirely different thing, closer to the "archaeological" sense dantheman mentioned. In this way, the study of design patterns corresponds better to the study of rhetoric or poetics in human language. "Homeric simile" could be a design pattern in poetry.

In software, some rigidity of expression might be preferred, and so the design patterns also help us avoid creating new terminology for things that have been appropriately described.

There are places where each pattern might have utility, and I suppose if there is any sense to the term "software architecture" it is in the ability to make sense of what the system should look like in a way that can be explained to the relevant parts of the team.

There is a tendency, as well, among software developers to think that a complicated architecture must be the result of countless stupid decisions, probably made by junior technicians who were doing things without understanding what's going on. Thus you find people exhorting others to simplicity, and acting like they've done their job at that point. But instead, complicated architecture is the result of compromises and rewrites throughout the software's life, and attempts to discard those old architectures and start afresh with similar tools usually result in an initially simplistic, but ultimately inflexible, design that will eventually evolve into a different complex architecture.

The Linux kernel is an example of a complicated architecture that was designed from a standpoint of simplicity initially, and developed its own object-oriented layer on top of C, with pluggable elements all over, loadable modules, etc., and millions of lines of code. BSD is smaller and more coherent, but also much more limited in scope.

There are also examples like Windows NT, which suffered from being the second system to 3 systems: Windows, OS/2 and VMS. In this kernel, there are so many design features that were included before implementation, that it seems incredible it was ever built. But they persisted and made it happen, and even eventually made it fast, in some cases by working around its design with new compromises and approaches. Still, it lacks the simplicity of a Plan9 or an Oberon, but what it doesn't lack is users.

Anyhow, I digress. What is important to me about patterns is the language that we get from them, and the ability to recognize what's going on in code. They can provide useful hints about implementation gotchas, and they can also help people stop reinventing the wheel.

bjr- 10 hours ago 0 replies      
Read the book. Then read the books that inspired the book.
olleicua 8 hours ago 0 replies      
EGreg 11 hours ago 0 replies      
Goals should include:

  1) Solve the problem
  2) Make it maintainable
  3) Make it extensible
  4) Make it scalable (server)
  5) Optimize it for memory, speed
So the reason to use an existing paradigm and a well-tested framework is because it makes the above easier, especially #2. And over time, #2 winds up saving you a lot resources and probably saves your project from tanking.

Finally, using an existing well-known platform also lets you hire developers who know what they're doing from the beginning, leading to more productivity and less dependence on any one particular developer. We leverage the knowledge that's already out there.

Indefinite prison for suspect who wont decrypt hard drives, US government says arstechnica.co.uk
265 points by LukeB_UK  2 days ago   215 comments top 25
virmundi 2 days ago 5 replies      
The best part of this whole story is the unintended consequence of attack. Don't like someone, encrypt a zip drive with drivel, toss it in his car and call the cops. Say you saw him looking at what could be kiddie porn. The guy doesn't know the password. Life in prison. No excuse.

This applies not merely to Bob in Accounting that's a dick, but to everyone: Congress! Start sniping political enemies. A jump drive here. A hard drive there. Soon, you could have 6 or so Congressional individuals going to jail for a child porn ring. The Feds would think it's a great prisoner dilemma. No one's turning on each other. Again, anonymous tip claiming that the right honorable Representative Duggans was watching kiddie porn late at night in his office. The same tipster told the police that while he was jacking it he thanked Representative O'Connel for the present over the phone (make sure to wait for an actual call so their is evidence).

Sure, eventually all of this will die down. Until then, for a hundred bucks and a few hours you can sit back, eat some popcorn and watch the system implode. Do it right and you'll get years of fun for everyone.

coreyp_1 2 days ago 7 replies      
This is an important constitutional issue! The subject may be deplorable, but the root issue is not.

If it is a "foregone conclusion", then they should have no problem convicting the guy without forcing him to testify against himself. If it is not a "foregone conclusion", then they have been lying and are illegally (unconstitutionally) depriving him of his freedom for months, without even charging him with a crime!

tetrep 2 days ago 1 reply      
I find this interesting, as I would almost certainly have forgotten my password by now; it only takes me 4-6 months of never using a password to forget it. So I wonder what happens if you genuinely can't decrypt your drives, whether from honest forgetting while sitting in jail refusing to, or from some sort of dead man's switch that drops keys after N days.
upofadown 2 days ago 1 reply      
>... it's a "foregone conclusion" that illegal porn is on the drives, ...

Obviously not, if the government needs the suspect to tell them where in the keyspace to find the porn. At this point the porn effectively does not exist on the computer; the government is asking the suspect to produce it for them.

So the question here is: can the government compel someone to help the government find evidence against them?

This reminds me of something a Pakistani coworker said. He said that in the area of Pakistan he grew up in they had the best police force anywhere. There were no unsolved crimes. Someone always confessed...

So this is the same sort of thing. Torture someone long enough with indefinite detention and they will eventually come up with something to indict themselves with. There has to be something illegal in any well used computer.

tptacek 2 days ago 0 replies      
Contempt is appealable, and in stories about appeals for long contempt sentences it appears that the likely sentence for the underlying matter is a factor. So, as a practical matter, this might mean that refusal to unlock a drive will net you the same sentence as if you were tried and found guilty for whatever crime was supposed to be on the drive.
mmf 2 days ago 1 reply      
This effectively makes forgetting one's password a crime with a lifetime jail sentence... Not bad, not bad
Havoc 2 days ago 0 replies      
That's pretty scary.

I've travelled with encrypted drives whose password I didn't know. (Forgot it, and was bringing the drive to someone for their own use after formatting.)

probably_wrong 2 days ago 1 reply      
I agree with the overall idea that this is an interesting/problematic case, but I think the discussion would be better served if we stopped assuming that the authorities here are morons.

No, forgetting your password is not a lifetime sentence. No, not knowing the password for an item you've never seen is not a lifetime sentence either. Refusing to obey a judge's order, however, will get you in trouble. Again, these people are not morons, and if they say the guy is only pretending to have forgotten his password (a dishonest criminal? shocking), they might have a good reason.

silveira 2 days ago 4 replies      
Apart from the obvious assault on the presumption of innocence here, is there a cryptographic file system that stores secrets X and Y such that a key k_x would decrypt content X and a key k_y would decrypt content Y, without revealing to an attacker that there are multiple contents?

If yes, then one could store a real secret X alongside a false secret Y: something plausible enough to be perceived as the real secret. Then, in case of torture, government persecution, etc., the victim could reveal only Y.

jmnicolas 2 days ago 1 reply      
I'm quite surprised that they can break Freenet but can't break FileVault (I remember reading an article about a FileVault master password that is short and brute-forceable, but can't find it atm). I would have bet the other way around.
cypherpunks01 2 days ago 2 replies      
This certainly points towards having more widespread support for plausible deniability, no? Are there any mass encryption tools that are reasonably simple to set up providing this (besides TrueCrypt Hidden Volumes)?

Would someone continue to be held in contempt if they furnished a decrypted drive that didn't contain the information that court held as a "foregone conclusion" that it contained?

BillinghamJ 2 days ago 1 reply      
In the UK, this is actually legal, sadly.
jimrandomh 2 days ago 0 replies      
Seven months imprisoned without trial and counting. The technicalities about contempt and hard drives are a distraction; the real injustice is that, as a routine matter, the US government no longer gives trials without extensive pre-trial punishment.
maremmano 2 days ago 4 replies      
and what happens if I forgot my password?
vox_mollis 2 days ago 0 replies      
> The exam showed that Doe accessed or attempted to access more than 20,000 files with file names consistent with obvious child pornography

Is nobody else alarmed that OS X apparently logs any and all (or at least 20k records of) file accesses by default? This is way too many to be found in the HFS journal, so it's clearly intentionally logging all accesses.

Edit: They also appear to have been able to deanonymize the defendant's FreeNet usage, though this could have easily been OPSEC violations rather than technical shenanigans.

ommunist 2 days ago 0 replies      
Scary sh#t. What if the inmate forgot the password? I can't remember the 4-digit PIN on a year-old card I hardly used.
kragen 2 days ago 0 replies      
Ars Technica chose to illustrate this article with a perspective-distorted screenshot of md5-crypt password hashes, the entire point of which is to prevent anyone who has the hash from recovering the password.
joshfraser 2 days ago 0 replies      
The ACLU or EFF need to jump on this case. The precedent set by this is too important to leave to some randomly assigned public defender.
andai 2 days ago 0 replies      
What would happen if the suspect destroyed keys prior to arrest? (ignoring the similar difficulty of proving this)
jupp0r 1 day ago 0 replies      
One more reason to have plausible deniability features.
astazangasta 2 days ago 3 replies      
Why doesn't the Fifth Amendment cover this?
davideous 2 days ago 2 replies      
Obligatory XKCD:https://xkcd.com/538/
lossolo 2 days ago 3 replies      
"USA land of the free..."Really what happened with your country ? In EU you can't imprison someone for not decrypting hard drive if he says it can incriminate him, everyone understand this but not the biggest democracy in the world?
nluux 2 days ago 1 reply      
Let's not use euphemisms like "kiddie porn"; recognize the danger that child pornography feeds the child-slavery industry. The core issue is that the suspect is potentially hiding his network, clients, or victims' identities. Until he surrenders his hard drives' contents, the truth may never come out.
Play Store and Android Apps Coming to Chromebooks googleblog.com
268 points by ojn  7 hours ago   125 comments top 20
spot 3 hours ago 2 replies      
from the post:

> Schools in the US are now buying more Chromebooks than all other devices combined -- and in Q1 of this year, Chromebooks topped Macs in overall shipments to become the #2 most popular PC operating system in the US*.

that's pretty amazing actually. congrats to google & the chromebook team!

caffinatedmonk 5 hours ago 6 replies      
I'm curious why they didn't mention something as game-changing as this in the keynote.
radarsat1 6 hours ago 4 replies      
I'm curious just on the technical side, what does this mean for the many apps that include ARM code? (i.e. apps that use the NDK) Will there be some emulation, or do apps generally ship with multi architecture?

Edit: Ok, the answer is, both. Thanks ;)

magnumkarter 7 hours ago 3 replies      
This is great!!! I wonder if it will be possible to install the Play Store in Chromium OS. I know that Chromium has some support for installing Android .apk files.
bonaldi 6 hours ago 3 replies      
No support for the original Pixel? It's more powerful than quite a few on the list. Damn.
headmelted 6 hours ago 1 reply      
Obviously there's no-one in the world that didn't know this was coming, but even so, I feel for the Remix OS guys.

I assumed at the time their objective was to be acqui-hired by Google, but I can't see why there would be a reason for that now, or how they'd hope to compete in this situation.

Congratulations to the Chrome O/S and Android teams. I was briefly on a Chromebook when my laptop packed in, and but for the absence of solid developer tools, I'd have stayed forever. There's a lot to be said for convenience.

chrisper 4 hours ago 1 reply      
Is there a way to try out ChromeOS without owning a chromebook?
sharms 7 hours ago 3 replies      
This is a big move and will majorly impact desktop / laptop computing. Now the entire ecosystem of Android apps (even Microsoft Office, Snapchat, Photoshop Express) is going to be available, and arguably this platform is much more complete than say, Universal Apps (Microsoft)
stkoelle 6 hours ago 1 reply      
IntelliJ for Android would help some developers a lot ;-)
jimmcslim 3 hours ago 3 replies      
Why are Chromebooks such a US phenomenon? Here in Australia retail availability is pretty dire. I wonder if this development might see that start to change?
jbigelow76 4 hours ago 1 reply      
I'd be more interested in seeing Electron apps on ChromeOS before Android apps. Not that I expect that to happen, mind you; Electron on ChromeOS probably does nothing to move the Google ecosystem forward.
pgrote 7 hours ago 1 reply      
While this is a great step forward, I am disappointed in the list of chromebooks supported.

I looked over the list and cannot find a common thread as to what is supported and what isn't. Does anyone know?

My Acer C720 with an i3 isn't on the list, but my Toshiba Chromebook 2 with lesser specs is on the list.

ralmidani 6 hours ago 0 replies      
Hopefully this leads to the release of ARM devices with more than 32GB of storage.
asimuvPR 6 hours ago 0 replies      
Google: What does this mean for ARC users?
koolba 5 hours ago 3 replies      
Will apps run natively on Chromebooks or will my fart app slow down because it's being emulated?
genieyclo 6 hours ago 2 replies      
After the Android Chrome app gets extensions, what's the point of keeping ChromeOS alive? It's the only thing Android's missing that ChromeOS has.
hackaflocka 5 hours ago 2 replies      
To the Googlers on here -- any idea when it'll come to Chrome browser on other platforms. I really hope Google doesn't artificially delay that to boost Chrome OS penetration.
TazeTSchnitzel 7 hours ago 1 reply      
Coming soon: Chrome OS made into merely an alternative Android home screen, and Chromebooks becoming Droidbooks.
ncr100 5 hours ago 0 replies      
I assume Google IAB will be supported on Chromebooks, too?

Cross-device purchase restoration, etc?

jimjimjim 5 hours ago 1 reply      
year of the linux desktop?
Why I don't spend time with Modern C++ anymore linkedin.com
273 points by nkurz  1 day ago   250 comments top 39
jupp0r 1 day ago 6 replies      
In my experience, the opposite of what the author claims is true: modern C++ leads to code that's easier to understand, performs better and is easier to maintain.

As an example, replacing boost::bind with lambdas allowed the compiler to inline functor calls and avoided virtual function calls in a large code base I've been working with, improving performance.

Move semantics also boosted performance. Designing APIs with lambdas in mind allowed us to get rid of tons of callback interfaces, reducing boilerplate and code duplication.

I also found compilation times to be unaffected by using modern C++ features. The main problem is the preprocessor including hundreds of thousands of lines for a single compilation unit. This has been a problem in C and C++ forever and will only be resolved with C++ modules in C++2x (hopefully).

I encourage the author to try pasting some of his code into https://gcc.godbolt.org/ and to look at the generated assembly. Following the C++ core guidelines (http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines) is also a good way to avoid shooting yourself in the foot (which is surprisingly easy with C++, unfortunately).

justsaysmthng 1 day ago 5 replies      
HFT is a pretty limited and extreme application case.

From what I understand - everything is not enough for HFT - network cards, kernel drivers, cables, etc.

You have milliseconds (edit: nanoseconds!) to receive, process and push your orders before someone else does and gets the prize.

It's an arms race between technologists for the purpose of making a small number of people rich.

I doubt that these requirements apply to other application fields where C++ is used - and it's used almost everywhere, with great success I might add.

In my view C++ is actually a couple of languages mixed into one.

The hard part is knowing which part of the language to use for which part of the problem.

The "modern" C++ solves a lot of the nuisances of the "old" C++, but you can do without these features just fine. I apply them carefully to my code and so far it's been a pleasant experience. Even if I don't use all of the new features, it's nice to know that I can (and I will some day!).

So I don't really buy this rant..

pjc50 1 day ago 4 replies      
There are two separate rants here that aren't delineated well.

1) C++ is too complicated, and therefore hard to reason about and slow to compile.

We're going to argue about this forever, but you'll have to agree that the spec is very large and warty compared to other languages, and that C++ tends to take far longer to compile (this was already a problem a decade ago, it's not specific to "modern" C++).

2) The future of software development will include more of what I'm going to call "non-isotropic" software; rather than assuming a flat memory model and a single in-order execution unit, and exerting great effort to pretend that that's still the case, programmers will have to develop effectively on GPUs and reconfigurable hardware. Presumably this speculation is based on the Intel-Altera acquisition.

You can sort of do hardware programming in C (SystemC) but C++ is really not a good fit for hardware. Personally I'd like to see a cambrian explosion of HDLs, but the time is not yet right for that.

It sounds like the author favours the "C with classes" programming style, maybe including smart pointers, and is probably not keen on lambdaization of everything.

hellofunk 1 day ago 5 replies      
This article is not very general. Much of what it tries to convince us is not going to matter for most developers, and has the cost of suggesting modern features are not good for any developers. For example:

>It is not rare to see Modern C++ applications taking 10 minutes to compile. With traditional C++, this number is counted in low seconds for a simple change.

This is simply a bogus statement with respect to what at least 90% of c++ developers do on a daily basis.

I have benchmarked unique_ptr, auto, brace initialization, lambdas, range-based-for and other modern idioms and found them all to be at least as fast, and often faster, than their older counterparts. Now, if I were to instead go off and write template-heavy code using new features, that would be different. But in reality, the vast majority of c++ developers -- I'd wager at least 95% -- are not writing variadic templates on a daily basis (nor should they be).

The memory safety and other benefits of unique_ptr [0] make it one of many modern tools that are a no-brainer to use in nearly all contexts. No, not nearly all contexts; allow me to rephrase: all contexts. It just is, and if you compare its use to manual new/delete code, the benefits are solid and the code is faster.

The author further claims that modern C++ is less maintainable and more complex. The absolute opposite is true in nearly all cases. Using unique_ptr again as an example, it leads to less code, less complex code, more clear code, and better maintainability and code readability. Uniform brace initialization is another example that prevents many common older problems in the language.

FYI the author keeps talking about high frequency trading as an example of why modern c++ is a bad choice. Well, I worked at a HFT firm for a long time until last year, the firm places millions of trades per day and is among the most successful in the markets it trades. And what did we use? Only modern features. Lambdas, auto, unique_ptr, range-fors, even std::async -- everywhere in our code. This author is either naive or political.

I think the title of this article is highly misleading, and the contents are not relevant. Overall, this article is just bad advice for most of us.

[0] https://news.ycombinator.com/item?id=11699954

n00b101 1 day ago 1 reply      
As has always been the case, effective use of modern C++ requires knowing which subset of the language to use and which to avoid.

I agree with the author's criticisms of many C++ features. At the same time, I think that a proper simple, modern subset of C++ exists that is much more productive and safer than C, without sacrificing performance. You can also optimize progressively, for example start with using std::string and std::vector and then replace the stock implementations if they aren't performant on your target architecture. I would not, however, recommend using C++ for GPU kernel code - a mix of C++ for CPU code and C for GPU kernel code works best. It is not ideal, but it's the best toolset available for serious industrial development.

FPGAs are exciting, but they've also been the "next big thing" in general-purpose computing forever. Obviously it makes sense to use FPGAs for certain HFT and embedded applications, but that's not the same as general-purpose computing, which is what C/C++ is for. Not to mention, FPGA compile times can take hours or even days, which dwarfs most C++ template overhead. I would also say that for IoT, I'm not sure why it is obvious that "$10 FPGAs" should dominate. Why not a $0.50 microcontroller? Or the $5 Raspberry Pi Zero board? Both of which are eminently programmable in C and even C++. Embedded devices have been around long before "IoT" became a buzzword, and we can see that microcontrollers, FPGAs, SoCs, and custom ASICs all have a role to play depending on the application.

typon 1 day ago 1 reply      
If he is complaining about C++ being bad and suggesting Verilog on FPGAs as an alternative, boy do I have some bad news for him.

HDLs (yes, including SystemVerilog) have 10x worse design than the worst software languages. This is why there are entire companies out there making high-level synthesis tools or high-level HDL specification languages (like Bluespec).

And I haven't even said anything about the quality of FPGA tool chains.

kangar00 1 day ago 3 replies      
> If you cannot figure out in one minute what a C++ file is doing, assume the code is incorrect.

This statement at first resonated with me, and then I thought about it: this doesn't reduce the complexity of the overall application or service, it just means that one file is simple. You could have 10,000 simple files instead of 1 complex one; is that any simpler?

jonathankoren 1 day ago 8 replies      
I know why I don't like C++ anymore: it's just no fun. It's slow to compile, the errors are like 6 lines long, full of template and class-hierarchy noise that makes it hard to understand what exactly happened, and then of course there's the common coding shortcut of declaring everything auto. (What type is this list? I don't know, it's auto all the way down.) Then there's the whole thing about writing constructors but leaving the bodies empty because everything should be in initializer lists now, and now there are wrapped pointers for some reason.

I hated writing modern C++. It was just so depressing and frustrating.

halayli 1 day ago 0 replies      
This article is coming from a frustrated developer and lacks any scientific evidence. The frustration (understandably) comes from the overwhelmingly complex new features and patterns that even a compiler can barely understand.

C++11 onward revamped the language to make up for the lack of progress in the preceding 10 years. The majority of C++ developers who aren't keeping up with the new features because they are busy with their daily jobs feel that they are falling behind, and that the language they thought they knew has changed underneath them.

C++03 already had a steep learning curve, but with C++11 onward that curve is steeper by orders of magnitude.

On the upside, you can use C++11 without understanding most of the details and it will do the right thing most of the time. And I think that's the bet the language is making.

messel 1 day ago 1 reply      
Ok. Try a different language :)?

The idea that a single language must solve all problems is a fallacy.

I don't see FPGA programming ousting C++, but I expect higher-level languages with strong parallel semantics to gain "market share". You can always call a dedicated process written in optimized C for the hottest components, and compose the rest in Go, Elixir, or any high-level language (Lisp).

Architectures will naturally gravitate to higher level languages that support cleaner composition. The tools and interfaces will push towards higher abstraction without impacting build or run time. Maybe this process is related to Kevin Kelly's inevitable. I'm an optimist here.

aninteger 1 day ago 0 replies      
I've come to the conclusion that one should "use C++ when you absolutely have to and C when you can." There just aren't many areas where C++ is absolutely required and plain old simple C can't be used. (Not to mention using higher-level languages where possible.)
Const-me 8 hours ago 0 replies      
I never programmed HFT software, but I agree with the criticism of the modern C++.

It's bad that the author hasn't defined what exactly "modern" means. I saw some comments compare boost with C++14. I think boost is also modern; even Alexandrescu's Loki is modern, even though the book was published in 2001.

I think that modern stuff was introduced in C++ because in the late '90s and early 2000s there was an expectation that C++ would remain dominant for some time. There was a desire to bring higher-level features to the language, to make it easier to learn and safer to use, even at the cost of performance.

People didn't expect C++ to lose its market that fast: very few people now use C++ for web apps or rich GUIs. However, due to inertia and backward compatibility, the features remain in the language.

Personally, I'm happy with C++.

C++ is excellent for systems programming, and for anything CPU bound. For those you barely need the modern features, and fortunately, they're completely optional: if you don't like them, don't use them.

But if you do need a higher-level language for the less performance-critical parts of the project, I find it better to use another language and integrate it with the C++ library. Depending on the platform, such a higher-level language could be C#, Lua, Python, or anything else that works for you.

syngrog66 4 hours ago 0 replies      
I was once a C++ programmer but migrated first to Java, which I thought was better designed and more convenient, and then to Python when I wanted less verbosity and more freedom to choose between a procedural style and OO.

C++ may still be an ideal choice in some problem spaces but I think the number and size of them has shrunk as more and better alternate choices have appeared and ate away at the C++ share.

DrBazza 1 day ago 1 reply      
There are only two kinds of languages: the ones people complain about and the ones nobody uses. - Stroustrup.

C++30 might end up being what D is today.

shanwang 1 day ago 0 replies      
Such rants appear once every few months on HN, and this is one of the least convincing. Many problems he mentions are not "Modern C++" problems but problems C++ has had from the beginning, and some already have reasonable solutions, for example ccache + distcc for speeding up compilation.

The real problem with C++ is the standard committee, the design by committee approach for such a complex language is failing. If C++ is taken over by a company, it will be a much better language.

fsloth 1 day ago 2 replies      
This sounds like it's written from the point of view of implementing something in-house. I fail to see how FPGA programming will be relevant if one wants to distribute software to consumers (or am I technologically clueless...).
cpwright 1 day ago 0 replies      
I find the beginning and end of the article quite contradictory. Basically that C++ is too complicated; and oh by the way we should start programming FPGAs, which are much harder to get right.

I like modern C++, because I think it simplifies a lot of things (RAII for the win here). Templates let you engage in duck typing, but with (if you are careful) very performant results.

aspiringuser 1 day ago 1 reply      
20 year C++ programmer here. I work on multithreaded server code. Stopped using modern C++ features 5 years ago. I'd compare my use of C++ to be roughly equivalent to the use of C++ in the NodeJS project or the V8 project. I'm not a user of Boost.

I have to agree with the author of the article. It takes longer to train developers to write idiomatic modern C++ code, and compilation times explode. Compiler support for bleeding-edge C++ features is spotty at best, and it is harder to reason about the correctness of modern C++ code.

Philipp__ 1 day ago 0 replies      
While some pretty good points were stated in this post, I cannot help but feel the OP is a bit biased. Too narrow, so to say.

I feel totally opposite in terms of new Modern C++. I guess the thing is how, where and when you use it will define your opinion/experience.

dahart 1 day ago 2 replies      
> Today the "Modern Technologist" has to rely on a new set of languages: Verilog, VHDL

That was a complete surprise ending! :)

I like surprise endings, and he makes a lot of good points, whether or not I agree with them. But, I totally wasn't expecting "I'm done with C++ because: hardware." I was expecting because web or because awesome new high performance functional scripting language <X>.

A lot of what he's talking about there will still run compiled software though... FPGA programming and C++ aren't exactly mutually exclusive, right?

stormbrew 1 day ago 0 replies      
One of the biggest users (some would say abusers) of template metaprogramming I know works on HFT software. He trades extremely long compile times for performance at runtime and finds that C++ allows him to do this and maintain a decent architecture (through what amounts to compile-time polymorphism as well as RAII).

For him, it's actually the older features of C++ that have no use. He doesn't use deep class inheritance and never touches virtual functions, for example.

thinkpad20 1 day ago 0 replies      
> After 1970 with the introduction of Martin-Lf's Intuitionistic Type Theory, an inbreed of abstract math and computer science, a period of intense research on new type languages as Agda and Epigram started. This ended up forming the basic support layer for functional programming paradigms. All these theories are taught at college level and hailed as "the next new thing", with vast resources dedicated to them.

This seems pretty dubious. Dependently typed languages and other projects embracing advanced type theory are still the realm of niche enthusiasts. While some of the more academic colleges might teach them in one or two courses, the vast majority of the education a CS student receives is taught in traditional imperative languages. If "vast resources" have been devoted to Agda and Epigram, then I'm not sure what kind of language should be used to describe the resources devoted to C, C++, Java, etc. Also, as the author mentions, Intuitionistic Type Theory has been around since the '70s, in fact since the same year that C was introduced. Certainly it hasn't been taking the CS world by storm since its inception, as he seems to claim.

Beyond that, the author's argument seems to be a bit incoherent. He critiques the readability of Modern C++, but C++ is notoriously hard to understand, including or especially prior to the development of C++11. It's never going to be an easy language to read except to seasoned developers. If anything, modern C++11 seems to provide abstractions that increase readability and safety. He critiques the performance of modern C++, but then he ends up recommending that people ditch C++ entirely and learn VHDL/verilog instead. Not even vanilla C++ is fast enough for him, then why criticize modern C++ on the grounds of performance?

cm3 1 day ago 1 reply      
I recently had to switch a project to -std=c++11 because a header I include now uses C++11 features. This change alone made compilation at least two or three times slower. The new safety and convenience features are nice, but compile times seem to be out of focus and getting slower every year. I don't know how I feel about g++ 6.1 defaulting to -std=gnu++14.
ausjke 1 day ago 0 replies      
Just started to relearn C++ and Qt for cross-platform GUI programs. C++ is not easy, but its performance is still unbeatable, and in certain use cases, e.g. games, performance-critical video apps, or GPU/OpenCL work, C++ still seems to be the sole candidate.
bitL 1 day ago 0 replies      
I agree with the author; I still long for the not-overly-complicated C++ of the 00s, in which I could write a super-fast 3D rendering engine without much bloat. I find it appalling that C++ went from a poster child of imperative programming to implementing monads in its libraries (mind you, monads are used to "simulate" imperative programming in functional programming). Something went wrong there...
jcbeard 1 day ago 0 replies      
I have a few problems with this article.

> structure leads to complex code that eventually brings down the most desired characteristic of a source code: easiness of understanding

If done well, the structure of things like variadic templates make libraries easier to use, and make coding faster (granted, code bloat can be an issue with N different function signatures).

>C++ today is like Fortran: it reached its limits

Not quite. Fortran died because, well, object-oriented programming came out and lots of people liked it. And C was always more popular regardless, so C-like C++ was the obvious next choice. There is a lot of cruft in any new library, so some things aren't as performant as if you wrote them in, say, assembly, which is what the author seems to suggest. Yes, if I built bare-metal iostream-like functionality it would be more performant (ha, used the word :) ). People know iostream isn't that performant. Could it be better? Perhaps. Is it safe? Yes! If you want perf, use the C interface directly. Is that safe to use? Probably not for the general careless user.

>To handle the type of speed that is being delivered in droves by the technology companies, C++ cannot be used anymore because it is inherently serial, even in massively multithreaded systems like GPUs.

Well, yes, but so is just about every language. People are trained to write sequentially (left to right, top to bottom), with many exceptions... but nonetheless, sequentially. There are very few languages that do multithreading natively. There are lots of additions/libraries for C++ that enable very nice ways to express parallelism, both within the standard (std::thread) and outside of it (raftlib.io, hpx (https://github.com/STEllAR-GROUP/hpx), kokkos (https://github.com/kokkos), etc.). There are lots, and some are quite easy to use. C++ is inherently serial, but there is no better way to write. It is fairly easy to pull out "parallel" pieces of code to execute, and even easier if the programmer gets quick feedback (like the icc loop profiler, etc.) on things like ambiguous references and loop bounds that can be fixed quickly.

Interesting read, but don't agree at all.

hackerweb 1 day ago 0 replies      
How are Verilog and VHDL a "new set of languages"? That set has been around 30 years, almost as long as C with classes.
sickbeard 1 day ago 0 replies      
His argument about simplicity resonates with me. Sure, you can learn variadic templates and all that fancy stuff, but in practice, when you are working on production software in any company where more than one person uses the code base, it just pays in heaps to write the simplest, easiest-to-understand code; meaning all that nice fancy stuff is almost never used.
progman 1 day ago 0 replies      
The problem with modern C++ is that it wants to be everything. Now this behemoth is collapsing under its own weight.

People who are not forced to use C++ should consider other languages which are way cleaner and even more performant. Code written in Ada and Nim for instance is much easier to maintain.

Nano2rad 18 hours ago 0 replies      
Functional language programs have to run interpreted. If compiled, they will be too bloated.
koyote 1 day ago 1 reply      
Am I the only one being redirected to a linkedin sign up screen?
afsafafaf 1 day ago 1 reply      
Wonder if they tried IncrediBuild to reduce their compile time? They are right that C++ - while faster than ever before - takes much longer to compile than many other languages.
sitkack 1 day ago 0 replies      
> "that is where the unicorns are born: by people that can see on both sides of the fence"
blux 1 day ago 1 reply      
Anybody got an idea to which video series of Chandler Carruth he is referring to?
je42 1 day ago 0 replies      
Actually, the Author wants GO.
frozenport 1 day ago 0 replies      
Being an expert FPGA programmer is easy, the problem is that small things take a really, really long time.
known 1 day ago 0 replies      
Kernel is my new home;
known 1 day ago 0 replies      
Me too :)
ensiferum 1 day ago 2 replies      
It just sounds like someone who couldn't handle C++ whining and making a bunch of blanket statements without really having any proper understanding.

I agree that some of the features such as lambdas can lead to hard-to-track bugs (lifetime issues) and difficult-to-follow code when abused. When used well, though, they can lead to simple, elegant and straightforward code (anyone who tried to use the STL algorithms before lambdas knows what a pita it was most of the time).

Bottom line, if your code base is a mess don't blame the tool. Blame the programmers.

Stoned drivers are safer than drunk ones, new federal data show washingtonpost.com
251 points by pkaeding  3 days ago   266 comments top 36
JPKab 3 days ago 11 replies      
As a full supporter of cannabis legalization, I'm also a big supporter of a technology that can instantly detect whether someone's reflexes are impaired by cannabis.

Notice that I'm focusing on whether or not they are impaired, as opposed to the amount in their bloodstream. The point being that people that use cannabis regularly for medical reasons don't seem to be impaired by it at all, while newer or more occasional users are certainly impaired.

It would be fascinating to see what kind of technologies could be used to achieve this that folks on HN are aware of.

kdamken 3 days ago 14 replies      
Do not read this and think it's okay to drive high. It's not. Do not drive high. Call an uber or something.

I worry that people read studies like this and suddenly justify going out and driving while impaired. Driving is an incredibly dangerous activity on its own. Doing it when you're stoned is a stupid and risky move, and comparing it to how much more dangerous drunk driving is doesn't make it less stupid.

If you've never smoked pot, you don't understand how it feels and affects your thinking/reactions/perception of time and reality. If you smoke pot everyday, it may affect you less due to tolerance, but it's still affecting you, much like a functional alcoholic.

I have nothing against usage and I strongly encourage legalization, but I don't think it's okay to pretend that it's safe to get high and go driving.

beat 3 days ago 7 replies      
Alcohol has two synergetic problems for drivers... it reduces reflexes and increases confidence, simultaneously. Less capable, and more aggressive. No wonder it's so awful for drivers.

Marijuana doesn't significantly impair reflexes (as generations of musicians have shown). As for judgment, it tends to make users more cautious, not more aggressive. They're not trying to get around that car in front of them to blow through the used-to-be-yellow light - they're trying to make sure they remember where they're going...

oarsinsync 3 days ago 1 reply      
> "At the current time, specific drug concentration levels cannot be reliably equated with a specific degree of driver impairment"

Massively leading title given the statement actually issued, and their own remarks later on:

> There's plenty of evidence showing that marijuana use impairs key driving skills[0]. If you get really stoned and then get behind the wheel, you're asking for trouble.


iconjack 3 days ago 4 replies      
From the comments here, it seems like people either didn't read the article, or just don't believe the study, because it pretty much says it is ok to smoke and drive.

 For marijuana, and for a number of other legal and illegal drugs including antidepressants, painkillers, stimulants and the like, there is no statistically significant change in the risk of a crash associated with using that drug prior to driving.

mywittyname 3 days ago 2 replies      
I can't argue with this data, but I have to wonder how much tolerance/individuality plays a role in this. I know that I personally could not drive high on THC; my propensity to zone out would almost certainly result in an accident. But I (used to) have friends who drove regularly during and after partaking, and I was completely confident in their abilities.

Personally, I think prohibitionists/law enforcement are looking for an easy money-grab and are hunting for justification of their desires rather than responding to an actual harmful issue that society is facing.

hristov 3 days ago 1 reply      
This means nothing. The big difference between alcohol and other drugs is that there is a relatively straightforward correlation between the amount of alcohol in the bloodstream and the current effect of alcohol on the brain.

Somebody testing positive for marijuana may not be stoned at all. It could be they were stoned last week and tested positive.

Furthermore, alcohol testing is much easier and is the only type of testing really done by police in the field. Police will try to get you to admit being impaired by other drugs if they stop you, but they do not have anything like the breathalyzer to test for other drugs.

Current data comparing driving impairment of various drugs and alcohol is almost entirely an artifact of the difference in testing methods.

Symmetry 3 days ago 1 reply      
I remember an article on this in New Scientist back in the 90s that went into a lot more depth. A study showed that while alcohol impaired multiple aspects of driving skill marijuana mostly impaired just time judgement. And stoned drivers tended to overestimate their level of impairment and slow down while drunk drivers tended to underestimate how impaired they were. They actually found that drivers who were moderately drunk and stoned were less likely to crash than drivers who were just moderately drunk because they slowed down even though they were more impaired.
dimino 3 days ago 0 replies      
I'm going to upgrade my view of this article to linkbait title. The title should be, "THC found to not correlate with impairment in the same way that blood alcohol concentration does".

From the article:

> The study's findings underscore an important point: that the measurable presence of THC (marijuana's primary active ingredient) in a person's system doesn't correlate with impairment in the same way that blood alcohol concentration does.

The story here is that there is no "THC impairment" test like there is a blood alcohol test, not that driving while stoned is safer.

curiousgeorgio 3 days ago 3 replies      
You can distort the data all you want, or in this case, put it in an absurd context to try and prove a point, but the fact remains - the legalization of marijuana is not without negative consequences.

I'm surprised no one here has mentioned AAA's findings (legalization followed by a 2x increase in fatal marijuana-related crashes): http://newsroom.aaa.com/2016/05/fatal-road-crashes-involving...

EDIT: Link to the study [PDF]: http://publicaffairsresources.aaa.biz/wp-content/uploads/201...

binarymax 3 days ago 5 replies      
I'm on vacation in Colorado right now, and you can point out the stoned drivers. They tend to drive slow and passive, they fear the merge, and spend lots of time stopped at stop signs making sure they really are stopped and not going to crash. The impression I have is that their judgement and reactions are definitely impaired, but in a stark contrast to the invincibility and carelessness of alcohol induced drivers.
vlunkr 3 days ago 0 replies      
That chart has sections for both "Any Legal Drug" and "Any Illegal Drug", and both bars are barely visible. Are they claiming that all drugs, legal or not, don't affect driving?? Because that would be an incredibly stupid claim.
breatheoften 3 days ago 0 replies      
Are there any ways to directly measure impairment rather than using chemical trace detection as a proxy for the effect?

Maybe a driver could take a short impairment test on their cellphone or in-car computer prior to starting their drive -- if they pass the test then they would have an argument against impairment should they subsequently be involved in an accident... If they fail the test and drive anyway, then there's an even stronger argument that they made an irresponsible decision for which they must take responsibility.

samsonradu 3 days ago 0 replies      
I was watching a documentary the other day about life in a Delhi prison and I was thinking to myself if there's the slightest possibility for me to ever end up in prison for one reason or another, considering of course that I don't have the slightest intention of committing a felony. Then I realized that a driving mistake has the best odds of causing such a thing. It's crazy how many don't take such a high-responsibility task seriously, texting, drinking, smoking while driving.
green_lunch 3 days ago 0 replies      
It's studies like this that have caused people in my city to support legally driving while high. I've seen so many people that somehow think there is no effect on driving skills.

I'm all for legalization, but I will not support the potential of endangering other people.

skuunk1 3 days ago 0 replies      
The title seems misleading. People with marijuana in their bloodstream != stoned drivers. Marijuana use is detectable for a long time past the point of impairment and the data only reflects that they HAVE used marijuana vs being stoned at the point of driving.

Maybe instead of breathalysers and drug detection kits we should have standardised attention/reaction tests to determine if someone is safe to drive (you could probably even run them on a tablet). This would also weed out (pun intended) tired drivers.

zymhan 3 days ago 0 replies      
This is from February of 2015. Nothing about this data is new anymore.
brickmort 3 days ago 0 replies      
> Stoned drivers are less dangerous than drunk ones
jacquesm 3 days ago 0 replies      
The only valid comparison is with a driver that is sober.
slipperyp 3 days ago 0 replies      
KIRO TV in Seattle did a small test of impairment on a closed course with participants using varying amounts of marijuana a few years ago around the time Washington voters were choosing to legalize pot and found similar results:


exodust 3 days ago 0 replies      
"Safer than drunk" is a poor choice of words, irresponsible even.

If you drive within an hour or two or three of smoking sticky buds, the risk is not reflexes or reaction time, it's getting distracted by your own thoughts, or zoning out. That can easily translate into driving through a red light. The mind wanders when stoned, which is the opposite of what you want when operating machinery.

_dark_matter_ 3 days ago 1 reply      
I wonder if, as public perception changes on the issue, stoned drivers will be in even fewer accidents as their own perception changes - i.e. "I'm high on cannabis, but that doesn't impair driving, so my own driving isn't impaired", and thus they drive better. Compared to before: "I'm high, high drivers are bad drivers, so my own driving is impaired".
bunkydoo 3 days ago 0 replies      
I'm a bit curious whether the "any illegal drug" column includes LSD or other hallucinogens. Part of me thinks most people have the common sense not to drive while on such a substance; part of me thinks there would be no way to gather that data.
WelcomeToHeaven 3 days ago 0 replies      
Was federal data really needed to determine stoned drivers are safer than drunk ones? I thought it would be known that drunk driving is more dangerous than stoned driving.
musgrove 3 days ago 0 replies      
I think a lot of people could have told them that. Including, most likely, some of the researchers themselves as well as a few presidents, especially our current one, and most Kennedys.
RichardCA 3 days ago 0 replies      
Carl Sagan had some things to say about this topic.


mjhm2539 3 days ago 0 replies      
"Neighbor beats his wife less often than other neighbor, new federal data shows"
chipotle2 3 days ago 0 replies      
Maybe they get in fewer accidents because they are driving like 8 miles per hour.
cphoover 3 days ago 0 replies      
Confirming what everyone already knew.

Also... Driving impaired in anyway is a big mistake.

return0 3 days ago 0 replies      
Both should only be allowed to drive self-driving cars.
kalehrishi 3 days ago 0 replies      
They must be smoking good stuff when they decided to publish this article :D
elcapitan 3 days ago 0 replies      
On the other hand, studies have shown that red wine is good for the heart and chocolate as well. So the best of all worlds is to get stoned, drink a lot of red wine and then drive while eating chocolate. Why is there no study yet showing that?
dang 3 days ago 1 reply      
Please don't be personally abusive, even when a comment is annoying.

We detached this subthread from https://news.ycombinator.com/item?id=11706682 and marked it off-topic.

riggins 3 days ago 4 replies      
But what about their propensity to drive really slow?
draw_down 3 days ago 0 replies      
If anything is true about how this country acts with cannabis, it's that facts and data are pretty much 100% ignored.
ck2 3 days ago 0 replies      
I've had two friends killed by drunk drivers in two different decades and I have to run on the side of the road

If you are on anything, legal or illegal that influences you, I hope you get a nice long prison sentence for driving impaired.

If you see someone step out of a bar or whatnot that is either drunk or high and goes to drive, please call the police and possibly save someone's life - you don't have to call 911, call the anonymous number for your city.

       cached 20 May 2016 02:11:01 GMT