Hacker News with inline top comments - 19 Mar 2016 (Best)
1
Lee Sedol Beats AlphaGo in Game 4 gogameguru.com
1389 points by jswt001  5 days ago   448 comments top 65
1
mikeyouse 5 days ago 9 replies      
Relevant tweets from Demis:

 "Lee Sedol is playing brilliantly! #AlphaGo thought it was doing well, but got confused on move 87. We are in trouble now..."
 "Mistake was on move 79, but #AlphaGo only came to that realisation on around move 87"
 "When I say 'thought' and 'realisation' I just mean the output of #AlphaGo value net. It was around 70% at move 79 and then dived on move 87"
 "Lee Sedol wins game 4!!! Congratulations! He was too good for us today and pressured #AlphaGo into a mistake that it couldn't recover from"
From: https://twitter.com/demishassabis

2
argonaut 5 days ago 6 replies      
If it's true that AlphaGo started making a series of bad moves after its mistake on move 79, this might tie into a classic problem with agents trained using reinforcement learning: after making an initial mistake (whether by accident, due to noise, etc.), the agent gets taken into a state it's not familiar with, so it makes another mistake, digging an even deeper hole for itself. The mistakes then continue to compound. This is one of the biggest challenges for RL agents in the real, physical world, where you have noise and imperfect information to confront.

Of course, a plausible alternate explanation is that AlphaGo felt like it needed to make risky moves to catch up.
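
To make the compounding-error dynamic concrete, here is a minimal toy simulation (a hypothetical corridor world, nothing from AlphaGo itself): once a single slip takes the agent off its training distribution, its policy there is close to random, so the chance of finishing a long episode cleanly collapses as the horizon grows.

    import random

    # Toy sketch, not AlphaGo: on familiar states the agent errs with small
    # probability eps; once off-distribution its policy is near-random, so a
    # single early mistake tends to snowball over a long horizon.
    def failure_rate(T, eps=0.02, recover=0.1, trials=20000):
        failures = 0
        for _ in range(trials):
            off = False
            for _ in range(T):
                if off:
                    if random.random() < recover:
                        off = False      # rare recovery to a familiar state
                    else:
                        failures += 1    # compounding mistakes end the episode
                        break
                elif random.random() < eps:
                    off = True           # first slip leaves the training distribution
        return failures / trials

    for T in (10, 50, 200):
        print(f"horizon {T}: failure rate {failure_rate(T):.2f}")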

3
dannysu 5 days ago 3 replies      
In the post-game press conference I think Lee Sedol said something like "Before the matches I was thinking the result would be 5-0 or 4-1 in my favor, but then I lost 3 straight... I would not exchange this win for anything in the world."

Demis Hassabis said of Lee Sedol: "Incredible fighting spirit after 3 defeats"

I can definitely relate to what Lee Sedol might be feeling. Very happy for both sides: the fact that people designed algorithms that can beat top pros, and the human strength displayed by Lee Sedol.

Congrats to all!

4
fhe 5 days ago 4 replies      
My friends and I (many of us are enthusiastic Go lovers/players) have been following all of the games closely. AlphaGo's mid game today was really strange. Many experts have praised Lee's move 78 as a "divinely inspired" move. While it was a complex setup, in terms of the number of searches I can't see it being any more complex than the games before. Indeed, because it was very much a local fight, the number of possible moves was rather limited. As Lee said in the post-game conference, it was the only move that made any sense at all, as any other move would quickly prove to be fatal after half a dozen or so exchanges.

Of course, what's obvious to a human might not be so at all to a computer. And this is the interesting point that I hope the DeepMind researchers will shed some light on for all of us, after they dig out what was going on inside AlphaGo at the time. We'd also love to learn why AlphaGo seemed to go off the rails after this initial stumble and made a string of indecipherable moves thereafter.

Congrats to Lee and the DeepMind team! It was an exciting and, I hope, informative match for both sides.

As a final note: I started following the match thinking I was watching a competition of intelligence (loosely defined) between man and machine. What I ended up witnessing was incredible human drama: Lee bearing immense pressure, being hit hard repeatedly while the world watched, sinking to the lowest of lows, and soaring back up to win one game for the human race. Just an incredible up and down in the course of a week. Many of my friends were crying as the computer resigned.

5
jballanc 5 days ago 7 replies      
So AlphaGo is just a bot after all...

Toward the end AlphaGo was making moves that even I (as a double-digit kyu player) could recognize as really bad. However, one of the commentators observed that each time it did, the move forced a highly predictable response from Lee Sedol. From the point of view of a Go player, they were nonsensical because they only removed points from the board and didn't advance AlphaGo's position at all. From the point of view of a programmer, on the other hand, considering that predicting how your opponent will move has got to be one of the most challenging aspects of a Go algorithm, making a move that easily narrows and deepens the search tree makes complete sense.
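
For intuition on that last point, a rough back-of-envelope (the budget and branching numbers below are made up for illustration): with a fixed search budget, reachable depth scales as log(budget)/log(branching), so a forcing move that cuts the opponent's sensible replies from hundreds to a handful lets the same search read far deeper.

    import math

    # Illustrative only: assumed node budget and branching factors.
    budget = 10**9                   # nodes the search can afford to visit
    for b in (250, 50, 2):           # open position, narrowed fight, forced exchange
        depth = math.log(budget) / math.log(b)
        print(f"branching {b:3d}: ~{depth:.1f} plies within budget")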

6
keypusher 5 days ago 3 replies      
The crucial play here seems to have been Lee Sedol's "tesuji" at White 78. From what I understand, this term in Go means something like "clever play": sneaking up on your opponent with something they did not see coming. DeepMind's CEO confirmed that the machine actually missed the implications of this move, as the calculated win percentage did not shift until later: https://twitter.com/demishassabis/status/708928006400581632

Another interesting thing I noticed while catching the endgame is that AlphaGo actually used up almost all of its time. In professional Go, once each player uses their original (2 hour?) time block, they have 1 minute left for each move. Lee Sedol had gone into "overtime" in some of the earlier games, and here as well, but previously AlphaGo still had time left from its original 2 hours. In this game, it came quite close to using overtime before resigning, which it does when the calculated win percentage falls below a certain threshold.

7
mizzao 5 days ago 6 replies      
Another way to look at this is just how efficient the human brain is for the same amount of computation.

On one hand, we have racks of servers (1920 CPUs and 280 GPUs) [1] using megawatts (gigawatts?) of power, and on the other hand we have a person eating food and using about 100W of power (when physically at rest), of which about 20W is used by the brain.

[1] http://www.economist.com/news/science-and-technology/2169454...
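
For scale, a rough back-of-envelope using assumed per-device wattages (illustrative figures, not official ones) suggests hundreds of kilowatts rather than megawatts, which still dwarfs the brain's ~20 W budget:

    # Assumed figures: ~100 W per CPU, ~250 W per GPU (illustrative only).
    cpus, gpus = 1920, 280
    total_w = cpus * 100 + gpus * 250
    print(f"{total_w / 1000:.0f} kW total")        # ~262 kW
    print(f"{total_w / 20:,.0f}x a ~20 W brain")   # ~13,100x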

8
jonbarker 5 days ago 1 reply      
AlphaGo's weakness was stated in the press conference inadvertently: it considers only the opponent moves in the future which it deems to be the most profitable for the opponent. This leaves it with glaring blind spots when it has not prepared for lines which are surprising to it. Lee Sedol has now learned to exploit this fact in a mere 4 games, whereas the NN requires millions of games to train on in order to alter its playing style. So Lee only needs to find surprising and strong moves (no small feat but also the strong suit of Lee's playing style generally).
9
minimaxir 5 days ago 4 replies      
There were a few jokes made during the round about how AlphaGo resigns. Turns out it's just a popup window! http://i.imgur.com/WKWMHLv.png
10
Houshalter 5 days ago 2 replies      
We were discussing the probability that Sedol would win this game. Everyone, including me, bet 90% that no human would ever win again, let alone this specific game: http://predictionbook.com/predictions/177592

I tried to estimate it mathematically, using a uniform prior across possible win rates and updating the probability of different win rates with Bayes' rule. You can do that with Laplace's law of succession. I got a 20% chance that Sedol would win this game.

However, a uniform prior doesn't seem right. Eliezer Yudkowsky often says that AI is likely to be much better than humans, or much worse than humans; the probability of it landing at exactly the same skill level is pretty low. That argument seems right, but I wasn't sure how to model it formally, and so 90% "felt" right. Clearly I was overconfident.

So for the next game, if we use Laplace's law again, we get a 33% chance that Sedol will win. That's not factoring in other information, like Sedol now being familiar with AlphaGo's strategies and improving his own strategies against it. So there is some chance he is now evenly matched with AlphaGo!
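
For reference, the arithmetic behind those two numbers (a minimal sketch of Laplace's rule, which the paragraphs above are applying):

    # Laplace's rule of succession: with s successes in n trials and a uniform
    # prior over the win rate, the predicted chance of the next success is
    # (s + 1) / (n + 2).
    def laplace(s, n):
        return (s + 1) / (n + 2)

    print(laplace(0, 3))  # Sedol winning game 4 after 0 wins in 3: 1/5 = 20%
    print(laplace(1, 4))  # Sedol winning game 5 after 1 win in 4: 2/6 = 33%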

I look forward to many future AI-human games. Hopefully humans will be able to learn from them, and perhaps even learn their weaknesses and how to exploit them.

Depending on how deterministic AlphaGo is, you could perhaps even play the same sequence of moves and win again. That would really embarrass the Google team. I hear they froze AlphaGo's weights to prevent it from developing new bugs after testing.

11
adnzzzzZ 5 days ago 3 replies      
According to the commentary of both streams I was watching, after losing an important exchange in the middle (apparently move 79: https://twitter.com/demishassabis/status/708928006400581632), it seems AlphaGo sort of bugged out and started making wrong moves on an already-dead group on the right side of the board. After that it kept repeating similar mistakes until it resigned many moves later. But the game was already won for Lee Sedol after that middle exchange. It was really interesting seeing everyone's reactions to AlphaGo's bad moves, though.
12
versteegen 5 days ago 0 replies      
I found this comment on that thread quite insightful: https://gogameguru.com/alphago-4/#comment-13410

Edit: here's another great one, on MCTS: https://gogameguru.com/alphago-4/#comment-13479

13
sethbannon 5 days ago 0 replies      
I wouldn't be surprised if, in a month, Lee Sedol was able to beat AlphaGo in another match. This is what happened in chess. The best computers were able to beat the best humans, until the best humans learned how to play anti-computer chess. This bought them a year or so more, until computers finally dominated for good.
14
emcq 5 days ago 3 replies      
That was really cool! It seemed that after the brilliant play in the middle, the most probable winning moves required Lee Sedol to make impossibly bad mistakes for a professional, a prior that AlphaGo doesn't incorporate. I've heard the training data was mostly amateur games, so perhaps the value/policy networks were overfit? Or maybe greedily picking the highest probability, common with tree search approaches, is just suboptimal?
15
magoghm 5 days ago 2 replies      
Right now I don't know if I'm more impressed by AlphaGo's artificial intelligence or its artificial stupidity.

Lee Sedol won because he played extremely well. But when AlphaGo was already losing it made some very bad moves. One of them was so bad that it's the kind of mistake you would only expect from someone who's starting to learn how to play Go.

16
hasenj 5 days ago 1 reply      
The game seemed to be going in AlphaGo's favour halfway through. Black (AG) had secured a large area at the top that seemed nearly impossible to invade.

It was amazing to see how Lee Sedol found the right moves to make the invasion work.

This makes me think that if the time for the match were three hours instead of two, maybe a professional player would have enough time to read the board deeply enough to find the right moves.

18
herrvogel- 5 days ago 2 replies      
Am I right in assuming that if they played another game (AlphaGo black and Lee Sedol white), Lee Sedol could pressure AlphaGo into making the same mistake again?
19
kronion 5 days ago 3 replies      
After AlphaGo won the first three games, I wondered not if the computer had reached and surpassed human mastery, but how many orders of magnitude better it was. Given today's result, it may be only one order, or even less. Perhaps the best human players are relatively close to the maximum skill level for Go, and the pros of the future will not be categorically better than Lee Sedol is today.
20
Bytes 5 days ago 2 replies      
I was not expecting Lee Sedol to come back and win a game after his first three losses. AlphaGo seemed to be struggling at the end of the match.
21
Angostura 5 days ago 1 reply      
Bizarre. I felt a palpable sense of relief when I read this. Silly meat-brain that I am.
22
pmontra 5 days ago 0 replies      
GoGameGuru just published a commentary of the game with some extra insight https://gogameguru.com/lee-sedol-defeats-alphago-masterful-c...

The author thinks that Lee Sedol was able "to force an all or nothing battle where AlphaGo's accurate negotiating skills were largely irrelevant."

[...]

"Once White 78 was on the board, Blacks territory at the top collapsed in value."

[...]

"This was when things got weird. From 87 to 101 AlphaGo made a series of very bad moves."

"Weve talked about AlphaGos bad moves in the discussion of previous games, but this was not the same."

"In previous games, AlphaGo played bad (slack) moves when it was already ahead. Human observers criticized these moves because there seemed to be no reason to play slackly, but AlphaGo had already calculated that these moves would lead to a safe win."

Which, I might add, is something that human players also do: simplify the game and get home quickly with a win. We usually don't give up as much as AlphaGo did (pride?), but it's not that different.

"The bad moves AlphaGo played in game four were not at all like that. They were simply bad, and they ruined AlphaGos chances of recovering."

"Theyre the kind of moves played by someone who forgets that their opponent also gets to respond with a move. Moves that trample over possibilities and damage ones own position achieving less than nothing."

And those moves unfortunately resemble what beginners play when they stubbornly cling to the hope of winning, because they don't realize the game is lost, or because they haven't yet played enough games to stop expecting the opponent to make impossible mistakes. At pro level those mistakes are more than impossible.

Somebody asked an interesting question during the press conference about the effect of those kinds of mistakes in the real world. You can hear it at https://youtu.be/yCALyQRN3hw?t=5h56m15s It takes a couple of minutes because of the translation overhead.

23
awwducks 5 days ago 0 replies      
Lee Sedol definitely did not look like he was in top form there. I would say (as an amateur) his play in Game 2 was far better. It was the funky clamp position that perhaps forced AlphaGo to start falling apart this game. [0]

I wonder if Lee Sedol can find a way to replicate that in Game 5.

[0]: https://twitter.com/demishassabis/status/708928006400581632

24
creamyhorror 5 days ago 1 reply      
Here's the post-game conference livestream:

https://www.youtube.com/watch?v=yCALyQRN3hw

At the end, Lee asked to play black in the last match, and the DeepMind guys agreed. He feels that AlphaGo is stronger as white, so he views it as more worthwhile to play as black and beat AlphaGo.

Conference over, see you all tomorrow.

25
rubiquity 5 days ago 0 replies      
This is a great day for humans. Glad to see all those years of human research finally pay off.
26
asdfologist 5 days ago 1 reply      
On a tangential note, apparently AlphaGo has been added to http://www.goratings.org/, though its current rating of 3533 looks off. Shouldn't it be much higher?
27
kibaekr 5 days ago 2 replies      
So where can we see this "move 78" that everyone is talking about, without having to go through the entire match counting?
28
h43k3r 5 days ago 0 replies      
The post-match conference analysis with Lee Sedol and the CEO of DeepMind about the different aspects of the game is beautiful to watch. There seems to be a sense of sincerity, rather than greed to win, from each side.
29
overmille 5 days ago 0 replies      
Now that we have two data points for interpolation, expectations are down to near the best human competency in Go using distributed computation. Also, from move 79 to 87 the machine wasn't able to detect the weak position, which shows its weakness. Now Lee can try an aggressive strategy, creating multiple hot points of attack to defeat his opponent. The human player is showing the power of intelligence.
30
jacinda 5 days ago 0 replies      
<joke>AlphaGo let Lee Sedol win to lull us all into a false sense of security. The robot apocalypse is well underway.</joke>
31
kazinator 3 days ago 0 replies      
Also:

Lee Sedol doesn't have RAM that can be crammed with faithfully recalled gigabytes of information, and that allows exhaustive yet precise searching of vast information spaces. The amount of short-term information Sedol can remember perfectly is very small by comparison, and doing so requires a lot of concentration and effort.

Secondly, the faculty with which Lee Sedol plays Go wasn't designed for the exclusive task of playing Go. Without having to load a different program, Sedol's brain can do many other things well.

32
yoavm 5 days ago 0 replies      
okay human race, let's sit back and enjoy our last moments of glory!
33
hendryau 4 days ago 0 replies      
Has anyone heard the crazy theory that AlphaGo bugged out because of daylight saving time (the cutover happened mid-game)? Anyone know the exact time at which AlphaGo made its first wonky move?
34
zkhalique 5 days ago 0 replies      
Wow! Incredible! Now we know that they have a chance against each other. I would say this was a very major point... otherwise we wouldn't know whether AlphaGo's powers had progressed to the point where no one could ever beat it. Now I take what Ke Jie said much more seriously: http://www.telegraph.co.uk/news/worldnews/asia/china/1219091...
35
conanbatt 5 days ago 1 reply      
This game is a great example for the people who said that AlphaGo wasn't playing mistakes when it lowered the margin from a better position, because it only looks at winning probability.

AlphaGo made a mistake and realized it was behind, then crumbled, because all moves are "mistakes" (they all lead to a loss), so any of them is as good as any other.

I'm very surprised and glad to see humans still have something against AlphaGo, but ultimately these kinds of errors might disappear if AlphaGo trains 6 more months. It made a tactical mistake, not a theoretical one.

36
conceit 4 days ago 0 replies      
I just noticed a pun in the name: All Phago, devourer of worlds. Especially funny as capturing a stone could be imagined as swallowing it.
37
yulunli 5 days ago 1 reply      
AlphaGo obviously made mistakes in game 4 under pressure from LSD's brilliant play. I'd like to know whether the "dumb moves" were caused by the lack of pro data or by some more fundamental flaw in the algorithm/methodology. AlphaGo was trained on millions of amateur games, but if Google/DeepMind builds a website where people (including pro players) can play against AlphaGo, it would be interesting to see who improves faster.
38
hyperpape 5 days ago 0 replies      
It's worth mentioning that while 79 is where Black goes bad, not everyone is sure that 78 actually works for White (http://lifein19x19.com/forum/viewtopic.php?f=15&t=12826). I'm sure we'll eventually get a more complete analysis.
39
yk 5 days ago 1 reply      
Apparently AlphaGo made two rather stupid moves on the sidelines, judging from the commentary. Incidentally, that is the kind of edge case one would expect machine learning against itself to be bad at learning, since there is a possibility that AlphaGo just tries to avoid such situations. It will be interesting to see if top players are able to exploit such weaknesses once AlphaGo is better understood by high-level Go players.
40
GolDDranks 5 days ago 1 reply      
https://gogameguru.com/lee-sedol-defeats-alphago-masterful-c...

> This was when things got weird. From 87 to 101 AlphaGo made a series of very bad moves.

It seems to me, that these bad moves were a direct result of AlphaGo's min-maxing tree search.

According to @demishassabis' tweet, it had the "realisation" that it had misestimated the board situation at move 87. After that, it made a series of bad moves, but it seems to me that those moves were played precisely because it couldn't come up with any better strategy: the min-max algorithm used when traversing the play tree expects that your opponent responds as well as he possibly can, so the moves were optimal in that sense.

But if you are the underdog, it doesn't suffice to play the "best" moves, because the best moves might be conservative. With that playing style, the only way you can come back is to wait for your opponent to "make a mistake", that is, to stray from the best series of moves you are able to find, and then capitalize on that.

I don't think AlphaGo has the concept of betting on the opportunity of the opponent making mistakes. It always just tries to find the "best play in game" with its neural networks and tree search in terms of maximising the probability of winning. If it doesn't find any moves that would raise the probability, it picks one that will lower it as little as possible. That's why it picks uninteresting sente moves without any strategy. It just postpones the inevitable.

If you're expecting the opponent to play the best move you can think of, expecting mistakes is simply not part of the scheme. In this situation, it would actually be profitable to exchange some "best-of-class" moves for moves that aren't as excellent, but that are confusing, hard to read, and make the game longer and more convoluted. Note that this totally DOESN'T work if the opponent is better at reading than you, on average; it will make the situation worse. But I think AlphaGo is better at reading than Lee Sedol, so it would work here. The point is to "stir" the game up, so you can unlock yourself from your suboptimal position and let your better-on-average reading skills work for you.

It seems to me that the way skilful humans play involves another evaluation function in addition to the "value" of a move: how confusing, "disturbing" or "stirring up" a move is, considering the opponent's skill. Basically, that's the thing you'd need in order to skilfully assess your chances to perform an OVERPLAY. And an overplay may be the only way to recover when you are in a losing position.
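
A toy numerical sketch of that idea (the positions and probabilities below are invented for illustration; this is not AlphaGo's actual evaluation): a pure minimax-style value, which assumes a perfect reply, prefers the simple losing line, while a value that assumes the opponent errs with some small probability can prefer the "confusing" line instead.

    # Each list holds our estimated win probability after each possible
    # opponent reply to one of our candidate moves.
    def minimax_value(replies):
        return min(replies)          # opponent always finds the reply worst for us

    def mistake_aware_value(replies, eps=0.15):
        best_for_them = min(replies)
        others = [r for r in replies if r != best_for_them] or [best_for_them]
        # with probability eps the opponent plays an average non-best reply
        return (1 - eps) * best_for_them + eps * sum(others) / len(others)

    safe      = [0.30, 0.32]         # simple position: every reply leaves us ~30%
    confusing = [0.25, 0.60, 0.70]   # complex position: bad if read out, good otherwise

    for value in (minimax_value, mistake_aware_value):
        print(value.__name__, round(value(safe), 3), round(value(confusing), 3))

Under minimax the safe line looks better (0.30 vs 0.25); with even a 15% chance of an opponent error, the confusing line pulls ahead (about 0.31 vs 0.30).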

41
esturk 5 days ago 2 replies      
LSD may be the only human to ever win against AlphaGo.
42
techdragon 5 days ago 3 replies      
I was hoping to see how AlphaGo would play in overtime. Now I'm curious: does it know how to play in overtime? Can the system evaluate how much time it can give itself to 'think' about each move, or does that fall into halting-problem territory, so it was programmed to evaluate its probability of winning given the 'fixed' time it had left?
43
samstave 5 days ago 0 replies      
So I am completely ignorant of the game of Go. I mean, I've heard about it my whole life but never bothered to understand it.

But after watching the summary video of AlphaGo's win... I'm fascinated.

I'm sure there are thousands of resources that can teach me the rules, but HN: can you point me to a resource you recommend to get up to speed?

44
jonbarker 5 days ago 1 reply      
Would it not be beneficial to the DeepMind team to open at least the non-distributed version to the public, to allow for training on more players? I was surprised to learn that the training set was strong amateur internet play; why not train on the database of historical pro games?
45
ctstover 5 days ago 0 replies      
As a human, I'm pulling for the human. As a computer programmer, I'm pulling for the human. As a romantic, I'm pulling for the human. As a fan of science fiction, I'm pulling for the human. To me it will matter even if he can pull off a 3-2 loss rather than a 4-1 loss.
46
ljk 5 days ago 0 replies      
Does this mean Lee found AlphaGo's weakness, and AlphaGo wasn't playing at an out-of-reach level?
47
spatulan 5 days ago 1 reply      
I wonder what the chances are of a cosmic ray or some stray radiation causing AlphaGo to have problems. It's quite a rare event, but when you have 1920 CPUs and 280 GPUs, it might up the probability enough to be something you have to worry about.
48
devanti 5 days ago 0 replies      
I wonder if Lee Sedol were to start as white again, and follow the exact same starting sequences, would AlphaGo's algorithms follow the exact same moves as it did before?
49
piyush_soni 5 days ago 0 replies      
I am super excited about the progress AI has made in AlphaGo, but a part of me feels kind of relieved that humans won at least one match. :) Sure, it won't last for long.
50
uzyn 5 days ago 1 reply      
It seems Lee Sedol fares better in the late game and endgame than AlphaGo. Makes one wonder whether Lee might have won the earlier games had he pushed on to the late-game stages.
51
agumonkey 5 days ago 0 replies      
Way to go, humans. (I felt that AlphaGo was unbeatable and a milestone in computing overthrowing organic brains... I gave in to the buzz a bit prematurely.)
52
_snydly 5 days ago 4 replies      
Was it AlphaGo losing the game, or Lee Sedol winning it?
53
partycoder 5 days ago 0 replies      
Monte Carlo bots behave weirdly when losing.
54
makoz 5 days ago 0 replies      
Some pretty questionable moves from AlphaGo in that game, but I'm glad LSD managed to close it out.
55
codecamper 5 days ago 0 replies      
Wow that is awesome news. Very happy to read this this morning. It's a good day to be human.
56
eslaught 5 days ago 1 reply      
Is there a place I can go to quickly flip through all the board states from the game?
57
atrudeau 5 days ago 2 replies      
Move 78 gives us hope in the war against the machines.

78 could come to symbolize humanity.

What a special moment.

58
another-hack 5 days ago 0 replies      
Humans strike back! :P
59
vc98mvco 5 days ago 1 reply      
I hope it won't turn out they let him win.
60
pelf 5 days ago 0 replies      
Now THIS is news!
61
Dowwie 5 days ago 0 replies      
But, did he pull a Kirk vs Kobayashi Maru? :) (yes, I went there)
62
repulsive 5 days ago 0 replies      
A negativist, paranoid skeptic could say that it would be a good move for the team to intentionally make AlphaGo lose a single battle at the moment it has already won the war...
63
conanbatt 5 days ago 0 replies      
Maybe AlphaGo understood it had won the 5-game series, so it's reading that it can lose the last 2 games and still win, and hence plays suboptimally :P
64
gcatalfamo 5 days ago 1 reply      
I believe that after winning 3 out of 5, the AlphaGo team started experimenting with variables, now that they could relax. That will in turn be even more helpful for future AlphaGo versions than the previous 3 wins.
65
antonioalegria 5 days ago 3 replies      
Don't want to sound all Conspiracy Theory, but somehow this feels planned... It plays into DeepMind's hands not to have the machine completely trouncing the human. It's less scary and keeps people engaged further into the future.

Also seems in line with the way Demis was "rooting" for the human this time: they already won, so now they focus on PR.

2
Google Puts Boston Dynamics Up for Sale in Robotics Retreat bloomberg.com
814 points by doener  1 day ago   378 comments top 48
1
aresant 1 day ago 7 replies      
Robotics, particularly without the gov't pig trough, needs a visionary and a long game.

Andy Rubin was a visionary and he bailed.

He left Apple to found Danger, which was one of the "genesis" products that led to the smartphone era. (1)

He made the next huge leap with Android and almost failed until Google showed up to help him fully execute his vision under their stewardship.

After Android became a very mature business in its own right, what to do with the visionary founder?

The article states that "Page is interested in robots" - can you imagine if Larry Page came to you and said "Hey how about you literally build robots all day. You could be Tony Stark and here's $100m to get you started. We'll change the world!"

It's a hard offer to turn down. Rubin accepted and tried to recapture the magic pursuing somebody else's vision. But it's damn tough to be a founder/visionary under somebody else's thumb, especially when you're set for life financially.

That's a story that never works out, but is played out again and again in technical acquisitions as big organizations attempt to find a place for founders.

(1) https://en.wikipedia.org/wiki/Danger_(company)

(2) https://en.wikipedia.org/wiki/List_of_mergers_and_acquisitio...

2
temuze 1 day ago 10 replies      
This is surprising to me. It seems that many prominent Googlers are very optimistic about robotics.

For example, in the FAQ of Jeff Dean's recent talk in Seoul, he mentioned how Deep Learning has a lot of potential to reinvent the field of robotics. Also, Demis Hassabis recently tweeted about progress in learning 3-D environments. I'd be surprised if Google wasn't looking into general purpose robotics...

Perhaps Google is disappointed in their robotics acquisitions and wants to start from scratch? It seems that they are farther along on the software front than anyone at this point. I wonder what they'll do about their hardware/power divisions...

(Also, it kind of seems like Tesla and Google are on a collision course here. Tesla is ahead in power/hardware and is developing a top-tier AI team for self-driving cars. Elon also seems very interested in robotics + AI. Google seems to be working from the opposite end.)

3
itg 1 day ago 9 replies      
"Theres excitement from the tech press, but were also starting to see some negative threads about it being terrifying, ready to take humans jobs, wrote Courtney Hohne, a director of communications at Google and the spokeswoman for Google X.Hohne went on to ask her colleagues to distance X from this video, and wrote, we dont want to trigger a whole separate media cycle about where BD really is at Google.Were not going to comment on this video because theres really not a lot we can add, and we dont want to answer most of the Qs it triggers, she wrote."

I found this to be disappointing. More concerned about their brand image than trying to push robotics research forward.

4
Futurebot 1 day ago 7 replies      
I really hope that the last paragraph is wrong. Getting out of that area because they're afraid of neo-luddite backlash is ridiculous. Instead, they should be using their considerable power to push the much-lauded "Silicon Valley Democracy" (http://www.vox.com/2016/2/19/11057836/silicon-valley-democra...). Things like:

- Guaranteed income, or at least a stopgap like the old welfare system, but with far less means-testing and much more generosity

- Universal, free higher education

- Free, universal health care

- Smart, flexible regulatory apparatus

- A complete rethought system of unions (a la Sweden)

- Massive push (or even buildout) of dense urban housing developments. Make the modern "company town" an explicit goal if you must, then expand it to regions across the country

I really wonder if the Nordics have a leg up on us here; they're already 3/4 of the way towards this ideal, both in terms of policy and a cultural understanding of the benefits of a truly progressive taxation system/public goods and services. Would a Danish or Finnish robotics company bail out just because of fear of backlash, or would they say "society already has you covered, people"?

5
Isamu 1 day ago 5 replies      
The graph of govt funding (almost all defense dept) of Boston Dynamics can be seen here:

https://www.usaspending.gov/transparency/Pages/RecipientProf...

Peaked in FY 2012, went to nothing (negative?) after Google X bought it at the end of 2013. Not sure if that is more because Google X wanted a real product, or that the defense dept didn't want to continue the relationship, or what.

The Marine Corps passed on BigDog as "too loud" at the end of 2015; that killed a potential customer for that product.

6
Rezo 1 day ago 2 replies      
"Googles public-relations team expressed discomfort that Alphabet would be associated with a push into humanoid robotics".

Observe the Innovator's Dilemma in effect: a successful company is putting too much emphasis on customers' current needs, and fails to adopt new technology or business models that will meet their unstated or future needs.

When realized, humanoid robotics will make the entire car industry (self-driving or not) seem like a quaint little side business.

7
colordrops 19 hours ago 2 replies      
Everyone saw the recent Atlas videos. Even with the QR code shortcuts, the demo was nothing short of miraculous. No one is even close to what Boston Dynamics is doing with legged locomotion, which is an insanely valuable technology. There's no way Google would just let this company go because of monetization or PR reasons. Something else is happening behind the scenes that we aren't yet aware of.
8
gene-h 1 day ago 0 replies      
This is not very surprising. From Boston Dynamics' recently released robots and their proposed direction for the future, it was pretty clear they were not working with the other robotics divisions.

Almost all of the robots Boston Dynamics has made are hydraulically actuated. In fact, pretty much all the robots Marc Raibert made when he was at MIT's leg lab were hydraulic. They are pretty dead set on this approach[0].

This is in contrast with Google's other divisions, Meka Robotics and SCHAFT, which use electric actuation. Heck, SCHAFT's whole shtick is that they were able to make stronger, more powerful electric actuators. Not to mention a big part of Meka's tech is their electric series-elastic actuators.

Boston Dynamics' message that their next robot would be hydraulic too, but 3D printed, was a pretty strong indication they were not going to work with them. It might even show that Boston Dynamics is more of a hardware company, focusing on building robot systems, rather than a software company focusing on walking algorithms.

This battle over hydraulics vs electric actuators has been fought before in the robotics industry and hydraulics lost[1]. While hydraulics were stronger, electric actuators were more reliable. More reliability means said robot generates more value before it breaks down.

[0] http://www.3ders.org/articles/20150818-google-partially-3d-p...
[1] http://www.roboticsbusinessreview.com/article/the_first_indu...

9
crb002 1 day ago 1 reply      
Hoping John Deere buys it. https://m.youtube.com/watch?v=lYh54Qdh_5g

The only big iron on farms should be combines. Smaller bots for planting, weeding, and spraying are the future. Bosch is getting too far ahead. http://spectrum.ieee.org/automaton/robotics/industrial-robot...

10
bunkydoo 1 day ago 2 replies      
Not to knock "Google X" but they seem to be extremely wishy washy when it comes to picking a direction. (Glass, robotics, curing aging) they really don't seem to have the "skate to where the puck is going to be" mentality at least from the outside looking in. Apple is struggling here too, but only very recently has this come to the attention of media and shareholders.
11
hitekker 1 day ago 0 replies      
If this article is to be believed, shortsightedness and bottom-line thinking are contradicting what Alphabet is supposed to be.

>Alphabet is about businesses prospering through strong leaders and independence... Sergey and I are seriously in the business of starting new things. Alphabet will also include our X lab, which incubates new efforts like Wing, our drone delivery effort. We are also stoked about growing our investment arms, Ventures and Capital, as part of this new structure.[1]

From the Alphabet website, to this article:

>Aaron Edsinger, director of robotics at Google in San Francisco, said that he had been trying to work with Boston Dynamics to create a low-cost electric quadruped robot

>In the meeting, Rosenberg said, we as a startup of our size cannot spend 30-plus percent of our resources on things that take ten years," and that "theres some time frame that we need to be generating an amount of revenue that covers expenses and (that) needs to be a few years.

Focusing on building revenue is good for established companies, sometimes even for company divisions. It is not always useful for startups (imagine if FB had bootstrapped), which, in my understanding, is what Alphabet was trying to incubate.

> Theres excitement from the tech press, but were also starting to see some negative threads about it being terrifying, ready to take humans jobs, wrote Courtney Hohne, a director of communications at Google and the spokeswoman for Google X.Hohne went on to ask her colleagues to distance X from this video, and wrote, we dont want to trigger a whole separate media cycle about where BD really is at Google.

Ignorance aside, you can glean more of the poor distinction between Alphabet and Google here. X [2] is supposed to be a separate company from Google, under Alphabet, yet it is referred to as Google X here. Even Courtney Hohne, a director of communications, lists herself as working at Google, not at X or at Alphabet.

This confusion between the two hints at larger problems, which bodes poorly for Alphabet's lofty aspirations.

[1] http://abc.xyz
[2] https://en.wikipedia.org/wiki/X_(company)

12
rwhitman 11 hours ago 0 replies      
They mentioned this in the article, but I'd say it has a lot to do with image.

Boston Dynamics seems fixated on posting videos of people abusing their robots to demonstrate their ability to regain balance, but the general public finds them pretty disturbing, and they are easy fodder for late-night comedy jokes about robot uprisings.

Not really the kind of visual you want to put in people's minds when your company is simultaneously developing AI that was broadcast besting a human at the world's hardest board game.

No short term ROI + product milestones in the form of creepy videos = Google's black sheep

13
ChuckMcM 1 day ago 0 replies      
Interesting comments at the end about corp comm not wanting to be too close to Boston Dynamics. A self-driving car is "cool"; a robotic driver that can get into a car and drive it away is "scary" [1]. Coupled with the stories about progress in AI, I can see why it might give someone the jitters. It doesn't help that the connection is still there, if only unacknowledged.

I've been following robotics since the 80's, and one of those areas that seems to get started and then vanish time and again is robot security guards. Certainly an ambulatory robot like some of the ones BD built, with AI to investigate disturbances around a facility, would be a potentially revenue-generating product. The optics, though, leave a bit to be desired.

[1] This was one of the tasks of the DARPA robotics challenge.

14
dbcurtis 1 day ago 0 replies      
BD has done a lot of work on DARPA grants, and is very good at specmanship. If the DARPA contract says "must run continuously for 10 hours", then you can bet that the MTBF of all the critical components is slightly north of 10 hours. BD is stuck in an academic research mindset, and hasn't really internalized the mindset necessary to create something sufficiently robust to be a product. I got to tour the maintenance garage at the DRC Finals; pretty much every Atlas and every Atlas spare part in the world was in the building. The Atlas looked like a delicate "lab queen" contraption next to the CHIMP robot used by Tartan Rescue.
15
Analemma_ 1 day ago 4 replies      
I have to assume the timing of this, coming so soon after the Pentagon bailed on BD, is no coincidence. Does Google figure that the division cannot turn a profit without that sweet spigot of defense dollars?
16
Animats 1 day ago 1 reply      
I was afraid this would happen.

Boston Dynamics did good work, but it was all DoD funded at a very high price point. It cost about $125 million and ten years to get to BigDog. That's all R&D; there's no marketing cost in that. DARPA is a patient customer. Google, from the parent article, is not.

BigDog is a great achievement, but it was developed to DoD specs, and it's just too big, heavy, and noisy. The militarized version, the Legged Squad Support System, was bigger and stronger, but the USMC decided it just wasn't useful militarily. Atlas is really an evolved BigDog that stands upright, with similar hydraulic actuators. A hydraulic humanoid is just too bulky.

BD has some problems that aren't well known. BD's CEO, Marc Raibert, is around 67, and due for retirement. He and his girlfriend owned the company, so they're the only ones who benefited from the Google buy. I doubt that the employees got anything. Also, the brains behind BigDog was Dr. Martin Buehler, who previously had an ambulatory robotics lab at McGill, and whose group built the first good running robot quadruped. (Using my anti-slip algorithm, I found from reading a thesis that cites my work.) He was the chief engineer on BigDog, and quit right after BigDog was publicly demoed.[1] He's now at Disney.[2]

Raibert seemed to like hydraulic systems; his name is on patents for BigDog's rather clever hydraulic actuators. But that approach seems to be too heavy for a humanoid robot. Atlas weighs 330 pounds. Schaft, a University of Tokyo spinoff which Google also bought, uses water-cooled electric motors, like Tesla, to get the power to weight ratio needed. You need huge torque only a small fraction of the time, so overloading motors is fine if you can keep them cool. I'd expected that Google would get Boston Dynamics and Schaft to work together, and from that would come a new, lighter humanoid with good balance. But the Bloomberg article said that BD didn't play well with Google's other robotics companies. BD is near Boston, Schaft is near Tokyo, and Google never tried to get them under one roof.

Whatever happened to Schaft, anyway? They built one very nice humanoid robot before Google bought them, and haven't been heard from since. Google wouldn't let them enter the DARPA Robotics Challenge.

Computationally, BigDog/Atlas are not that compute intensive. The balance and locomotion algorithms for BigDog ran on a Pentium 4 running QNX, with the servovalve control loop at 1KHz and the balance control loop at 100Hz. Google's expertise isn't in that kind of hard real time work. You need that stuff down at the bottom to keep from falling down. BD didn't do much work at the higher levels of control; they were mostly building teleoperators with, in the later versions, automatic foot placement.

From the article: "In a private all-hands meeting around that time, Astro Teller, the head of Google X, told Replicant employees that if robotics aren't the practical solution to problems that Google was trying to solve, they would be reassigned to work on other things." (Probably related to maximizing ad revenue.) That's a great way to lose your robotics expertise, for which you overpaid.

I used to work in the legged locomotion area. But I could never see a path to a profitable product. Toys were at too low a price point (even Sony gave up), and a legged working robot for maintenance tasks was a long way off. We're getting closer now; a useful robot that costs about as much as a car seems well within reach on the hardware side.

[1] http://www.robotics.org/content-detail.cfm/Industrial-Roboti...
[2] https://www.linkedin.com/in/martinbuehler

17
afokken 1 day ago 1 reply      
What if Google is already using AI to make business decisions and AlphaCEO has determined that BD wasn't a good investment? :)
18
Simon321 14 hours ago 1 reply      
>Theres excitement from the tech press, but were also starting to see some negative threads about it being terrifying, ready to take humans jobs, wrote Courtney Hohne, a director of communications at Google and the spokeswoman for Google X.

>Hohne went on to ask her colleagues to distance X from this video and wrote, we dont want to trigger a whole separate media cycle about where BD really is at Google.

>Were not going to comment on this video because theres really not a lot we can add, and we dont want to answer most of the Qs it triggers, she wrote.

AI fear-mongering is already doing damage. What a disappointment...

19
sharemywin 1 day ago 4 replies      
They need to talk to Disney. They could build a Tomorrowland, but with real robots.
20
maxerickson 1 day ago 1 reply      
I wonder if they are keeping any of the people or IP.
21
psbp 1 day ago 2 replies      
Depending on how much it sells for, this might be the first case of the Alphabet strategy actually working.

Otherwise, BD might have just embarrassingly died inside of Google.

22
ww520 22 hours ago 0 replies      
This sounds like turf fighting gone bad. The immediate revenue generation is just an excuse; Google has many other research projects that won't be profitable for a long time, like self-driving cars.
23
iamgopal 1 day ago 1 reply      
I have been a robotics engineer for quite some time now, and I can say that adding intelligence to existing hardware is quite easy: a smart refrigerator that closes automatically after some time, or a door that sends you a notification when the kids are home. But building a robot that can paint a complex building is next to impossible. Not that it can't be done, but doing it in an economically rewarding way just isn't possible, because of the relatively large number of technologies involved for a relatively low return on investment.
24
mchahn 1 day ago 2 replies      
Was the push towards humanoid robots a PR gimmick? I would think the ideal robot would look very different than us and not have the limitations that our bodies have. Airplanes don't flap their wings.
25
bravura 1 day ago 1 reply      
"Executives at Google parent Alphabet Inc., absorbed with making sure all the various companies under its corporate umbrella have plans to generate real revenue, concluded that Boston Dynamics isn't likely to produce a marketable product in the next few years and have put the unit up for sale."

Serious question, but how is DeepMind going to generate real revenue?

26
bitL 1 day ago 0 replies      
Google obviously has the need to be adored; if somebody contradicts what they say with good reason (what can you know - you are just a founder of Boston Dynamics, not a mighty Googler!), they get rid of them. Seriously, Google, you were once great; now your search is unusable and I can't see anything worthwhile coming from you :-/ If PR dictates your focus, what is Ray Kurzweil going to do there - develop a new VA synth instead of the singularity? Do we need to travel to Osaka to see real humanoids from Hiroshi Ishiguro and get a feel of the future? Does Japan have to be 20+ years ahead of the US again? Are we going to stop humanity's progress because some retarded internal politicians want to keep their power, or some shortsighted capitalists want to keep milking their cows? We have the possibility to improve the human condition tremendously with the robotic technology at hand, and we are going to be stupid about it and fall back to "business as usual" idiocy? Is this, Larry, why Tesla inspired you so much when you were still young and idealistic? Even Intel seems to be 5 years ahead of you when it comes to releasing consumer humanoid robotics... [sorry for the rant]
27
louprado 1 day ago 1 reply      
Can anyone comment on the target use case for bi/quadruped robots? Industrial operations have smooth, flat, predictable flooring where wheels are just fine. It's also too expensive for a consumer-grade household assistant (but this would be the killer app).

This seems targeted for disaster/rescue ops and military applications. Perhaps Google doesn't want to enter those markets.

28
SemiTom 1 day ago 0 replies      
I wonder if this impacts the robotics organization they had on California Ave in Palo Alto. I saw them testing out some robotic dogs.
29
tim333 23 hours ago 3 replies      
Boston Dynamics always seemed an odd fit for Google, given they were making robots for the military and Google has its "don't be evil" motto. I'm not sure how you bring those two together.
30
drewda 1 day ago 0 replies      
31
ausjke 1 day ago 0 replies      
Maybe more of a culture conflict between the east and west coasts? Sort of like MIT vs Stanford; they do behave differently.

Otherwise I would think Google can buffer money&time for pure R&D with no quarterly revenue pressure in certain fields, e.g. robotics in this case.

32
1qaz2wsx3edc 1 day ago 0 replies      
I for one approve of this pre-emptive strike against robot-kind and a Skynet-like future. Especially after seeing the abuse we've put countless robots through. I mean, it's probably a good idea to have robots decentralized from Google's AI.

:P

33
riazrizvi 1 day ago 1 reply      
It's interesting that this is a top story, as we can see from its popularity on this site. Yet on http://reddit.com/r/google it is nowhere to be found...
34
njharman 1 day ago 0 replies      
Hope some new tech billionaire buys it, just 'cause robots are cool and they want them to advance. I would! I really want to ride a BigDog around my neighborhood / to work.
35
elpasi 1 day ago 0 replies      
This is going to be a hard blow for all the people determined to believe the conspiracy theories about Google trying to take over the world...
36
cm2187 1 day ago 1 reply      
I don't know if the future of robotics is a vicious terminator.

I see it more as a mix of a maid, a cook, a butler and a handyman....

A domestic robot that will clean the house, walk the dog, take delivery of parcels, cook like a chef, repaint the walls while you are on holiday, iron your clothes, change the bulbs, ensure there is always enough toilet paper, refill your beer while you watch the match on TV...

37
ocschwar 1 day ago 0 replies      
Now I can go work for Boston Dynamics and be evil!
38
wpietri 1 day ago 0 replies      
My sympathies to all the Boston Dynamics employees. This sort of "we love you"/"we hate you" whipsaw has got to be hard on them.
39
partycoder 1 day ago 0 replies      
I hope this doesn't include the self driving cars.
40
100ideas 1 day ago 0 replies      
Are they keeping Bot&Dolly?
41
golergka 1 day ago 1 reply      
Am I stupid to think about Xerox PARC or Bell labs? Seems like a company that will not produce immediate revenue in near future, but at the same time create technologies that will influence the world for dozens of years.
42
newobj 1 day ago 0 replies      
Guess I won't be getting a bipedal robot as Google I/O swag this year.... or will I?
43
dschiptsov 19 hours ago 0 replies      
The story for robotics, it seems, is similar to classic AI - our models are too primitive and technology too crude.

Nature has its reasons to make everything from tissues of specialized cells. This is the way to achieve precision, elasticity and adaptability.

44
yeukhon 1 day ago 0 replies      
If someone built a real Iron Man, could he be arrested for building a military-grade weapon? I mean, I would love to become a real Tony Stark and build an armored AI robot of my own (perhaps minus the destructive parts of the robot...).
45
known 1 day ago 0 replies      
I bet Google's self-driving car will be next.
46
samstave 1 day ago 4 replies      
One of my personal failures was that I interviewed at Danger years ago while deathly ill with the flu. I wanted to work there so badly that I went ahead with the interview regardless, and did extremely poorly, as I had a fever of 103 or so and was hardly able to answer questions...

I wish I had rescheduled that interview, as I believe I would have gotten the job had I not looked like I was about to expire.

47
xyzzy4 1 day ago 2 replies      
Yet another company destroyed by Google.
48
gjvc 1 day ago 1 reply      
"Don't be evil." is clearly still true!
3
A Government Error Just Revealed Snowden Was the Target in the Lavabit Case wired.com
769 points by runesoerensen  1 day ago   162 comments top 18
1
zik 1 day ago 10 replies      
So an American government agency destroyed an American business merely as collateral damage of trying to persecute an unrelated guy who'd revealed their wrongdoing. And they won't face any consequences for doing this. Something is very wrong with this.
2
jdavis703 1 day ago 1 reply      
This is the same kind of government we're expected to trust with private keys and source code from companies like Apple and Google. Of course, there's been a long history of government not knowing how to secure IT infrastructure, including a data leak that exposed the identities of its own undercover spies: http://money.cnn.com/2015/09/30/technology/china-opm-hack-us....
3
jacquesm 1 day ago 0 replies      
If the mountain of papers is high enough, someone will mess up. The funny thing is that this is symmetrical: the same kind of math that underlies "three felonies a day" applies to court filings like these. The more opportunities to mess up, the bigger the chance that someone will.

That said, I don't think there is anybody out there who is shocked by this confirmation. It was, as far as I'm concerned, a certainty; the timing would have been too much of a coincidence.

4
caf 1 day ago 5 replies      
Why is Wired telling me I'm running an adblocker when I try to view this article? I'm not...

(edit) Ahh - it turns out that Firefox's built-in "tracking protection" feature triggers their ad-blocker-blocker.

5
peterkelly 1 day ago 1 reply      
I wonder if the government employee who screwed up the redaction will face the same penalties of jail time that Ladar Levison was threatened with.
6
ipsin 1 day ago 1 reply      
Does this increase the chance of the order being vacated?

What things can Levison still not talk about, aside from the identity of the target?

7
fixxer 1 day ago 0 replies      
Beyond the case itself, doesn't this sort of ineptitude illustrate exactly why we should not want the government holding the keys?
8
twobuy 10 hours ago 0 replies      
This is important information, but man, I really applaud the writer for stretching the story "they forgot to redact his email" and repackaging it into 10-15 paragraphs.
9
hartator 1 day ago 0 replies      
It was an open secret that Lavabit was targeted because of Snowden's account there, but it's a really good thing to know for sure, just for the sake of transparency.

Even if, in this case, it was unintentional transparency.

10
aranw 1 day ago 18 replies      
Couldn't even read the article due to Wired's anti-adblock banner.
11
oceanswave 13 hours ago 0 replies      
Just shows that the FBI can't keep their own secrets, not to speak of keeping anyone else's.
12
duncan_bayne 1 day ago 3 replies      
It's worth noting that the same bunch of people responsible for this mistake are those who want the public to believe they're sufficiently responsible and trustworthy to:

a) Safely store and analyse the results of mass public surveillance.

b) Hold 'master' keys to encrypted systems.

Of course no-one in the know seriously believes either claim, but this is a great counter-example to put to the general public.

13
subverso 14 hours ago 1 reply      
Hello, I'm from Brazil. This is the first time I've read about this case. I'd like to know how the government managed to shut down Lavabit. What were the arguments?
14
mnw21cam 12 hours ago 0 replies      
Web site broken - it says I have an ad blocker when I don't.
15
fit2rule 17 hours ago 0 replies      
The US Government just demonstrated exactly why we do not let technology decisions be made by government bureaucrats: mistakes get made.

Today, it's just an unredacted email address. Tomorrow, it could be the keys to the backdoor that the government wants to impose on the world.

17
tptacek 1 day ago 8 replies      
Wasn't this already known?
18
strathmeyer 23 hours ago 0 replies      
No Wired I'm not paying $1 to read your article.
4
AlphaGo Beats Lee Sedol in Final Game gogameguru.com
700 points by doppp  3 days ago   289 comments top 35
1
johnloeber 3 days ago 9 replies      
This was probably the closest game in the series. Livestream: https://www.youtube.com/watch?v=mzpW10DPHeQ

A few months back, the expert consensus was that we were many years away from an AI playing Go at the 9-dan level. Now it seems that we've already surpassed that point. What this underscores, if anything, is the accelerating pace of technological growth, for better or for worse.

In game four, we saw Lee Sedol make a brilliant play, and AlphaGo make a critical mistake (typical of Monte Carlo-trained algorithms) following it. There's no doubt that with further refinement we'll soon see AI play Go at a level well beyond human: games one through three already featured extraordinarily strong (and innovative) play on the part of AlphaGo.

Previous Discussions:

Game 4: https://news.ycombinator.com/item?id=11276798

Game 3: https://news.ycombinator.com/item?id=11271816

Game 2: https://news.ycombinator.com/item?id=11257928

Game 1: https://news.ycombinator.com/item?id=11250871

2
tunesmith 3 days ago 7 replies      
By the way, for those who want to learn by themselves, there are a lot of ways to play Go against a computer in a way that is friendly for beginners.

My rough journey so far - on a Mac, but much of this can be done on Linux - I started out playing 9x9 games against Gnugo, giving myself as much handicap as possible (without it resigning immediately), and then removing stones as I improve. I got to the point where I could sometimes beat 9x9 when I started with two extra stones, and then I started with 19x19.

Took me a while to win 19x19 with 9 stones, but then I won by learning a bit more about extending on hane. Then you can improve from there.

After that point, you can also switch to fuego or pachi, which are stronger by default. The end result is that it really is possible to learn a ton just by playing against software, tracking your ability throughout, and picking programs with different strength and handicap levels.

I've also enjoyed using GoGui to pit two computer programs against each other and watch how they play with various handicaps.

Then there's all the puzzles - goproblems.com, smartgo, etc. Finally, there are plenty of ebooks you can buy through smartgo books.

This doesn't get into playing against humans on the various servers, but there's plenty of information about that online.

3
skarist 3 days ago 1 reply      
Great game and amazing series/match. This last one was absolutely nail-biting. My hat is off to the AlphaGo team and to Mr. Lee Sedol. Sedol showed incredible fighting spirit and stamina. Just imagine sitting through a 5-hour game like that last one, with full concentration the whole time. And seeing the expression of exhaustion and disappointment on Sedol's face after the last moves and his resignation... phew. I bet he came into this last game rather confident, after beating AlphaGo in the fourth, figuring he had found a weakness. And he seemed to have a rather good start, securing decent territory in the lower right corner.

We can all marvel at the machine/software the DeepMind team has built, but I still feel that the real marvel is the human brain. Will we learn anything from this series about how it functions and evaluates game positions in strategic games? The classic problem/mystery is how extremely good the human brain is at pruning game trees: whole branches are thrown out in split seconds and probably never explored. On a watt-for-watt comparison there is no question about whose "hardware" is superior: Lee Sedol's brain. But I guess the DeepMind team and the community will take plenty of lessons from this, and in a few years Lee Sedol's phone will beat him 100% of the time. At least I wouldn't be willing to bet against it, even though we are hitting the ceiling of Moore's law.
4
awwducks 3 days ago 3 replies      
My rough summary of the match, informed by the various commentators and random news stories.

Game 1: Lee Sedol does not know what to expect. He plays testing moves early and gets punished, losing the game decisively.

Game 2: Lee Sedol calms down and plays as if he is playing a strong opponent. He plays strong moves waiting for AlphaGo to make a mistake. AlphaGo responds calmly keeping a lead throughout the game.

Game 3: Lee Sedol plans a strategy to attack white from the start, but fails. He valiantly plays to the end, creating an interesting position after the game was decided deep in AlphaGo's territory.

Game 4: Lee Sedol focuses on territory early on, deciding to replicate his late game invasion from the previous game, but on a larger scale earlier in the game. He wins this game with a brilliant play at move 78.

Game 5: The prevailing opinion ahead of the game was that AlphaGo was weak at attacking groups. Lee Sedol crafted an excellent early game to try to exploit that weakness.

Tweet from Hassabis midgame [0]:

 #AlphaGo made a bad mistake early in the game (it didn't know a known tesuji) but now it is trying hard to claw it back... nail-biting.
After a back-and-forth late middlegame, Myungwan Kim 9p felt there were many missed chances that caused Lee Sedol to ultimately lose the game, resigning in the late endgame while behind by a few points.

Ultimately, this match was a momentous occasion for both the AI and the go community. My big curiosity is how much more AlphaGo can improve. Did Lee Sedol find fundamental weaknesses that will continue to crop up regardless of how many CPUs you throw at it? How would AlphaGo fare against opponents with different styles? Perhaps Park Jungwhan, a player with a stronger opening game. Or perhaps Ke Jie, the top ranked player in the world [1], given that they'd have access to the game records of Lee Sedol?

I also wonder if the quick succession of these games on an almost back-to-back game schedule played a role in Lee Sedol's loss.

Myungwan Kim felt that if Lee Sedol were to play AlphaGo once more, the game would be a coin flip: AlphaGo is likely stronger, but it would not have been able to fix its weakness between games.

[0]: https://twitter.com/demishassabis/status/709635140020871168

[1]: http://www.goratings.org/

5
vermontdevil 3 days ago 2 replies      
Ken Jennings just welcomed Lee Sedol to the "Human Loser Club"

http://www.slate.com/articles/technology/technology/2016/03/...

Pretty good article here.

6
malanj 3 days ago 6 replies      
After the first 3 games I thought that AlphaGo was far beyond human level, but it's a harder call to make now. It seems very unlikely that an AI would come very close to exactly matching a human; one would expect it to be much stronger or much weaker.

Perhaps humans are closer to the "Perfect Game" than we think? http://hikago.wikia.com/wiki/Hand_of_God The top players estimate they would need a four-stone advantage to beat a perfect player.

7
krig 3 days ago 2 replies      
Really interesting and close match, it was great listening to the expert player analyse the game and having the final score be uncertain until very late in the game.

I found the discussion around weaknesses in the Monte Carlo tree search algorithm interesting. It sounds like the expert's opinion is that there are some inherent weaknesses in how MCTS tends to play moves against theoretical moves from the opponent that don't make sense; i.e., AlphaGo sees a potential win that would only happen if the human player made very bad moves. It's fascinating that the seeming weakness in AlphaGo would come from the algorithmic part of the AI and not the neural net. Could it be that as the neural net becomes stronger and stronger at the game, eventually the algorithmic part would become less useful to it? If that's the case, it really feels like this could be the path to truly general AI.
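The rollout issue described above can be seen in even the simplest Monte Carlo player: moves in playouts are sampled uniformly at random, so a candidate move can be credited with wins that depend on the simulated opponent blundering. A minimal sketch on a toy game (one-pile Nim; this is not AlphaGo's actual search, and all names here are illustrative):

 import random

 # Toy game: one-pile Nim. Players alternate taking 1-3 stones;
 # whoever takes the last stone wins.
 def legal_moves(stones):
     return [m for m in (1, 2, 3) if m <= stones]

 def random_playout(stones):
     # True if the player to move wins when BOTH sides play uniformly
     # at random -- the "opponent might blunder" assumption.
     mover_wins = True
     while True:
         stones -= random.choice(legal_moves(stones))
         if stones == 0:
             return mover_wins
         mover_wins = not mover_wins

 def monte_carlo_move(stones, playouts=2000):
     # Score each legal move by the fraction of random playouts won after it.
     best, best_rate = None, -1.0
     for m in legal_moves(stones):
         rest = stones - m
         if rest == 0:
             rate = 1.0  # taking the last stone wins outright
         else:
             # after our move it is the opponent's turn; we win when they lose
             rate = sum(not random_playout(rest) for _ in range(playouts)) / playouts
         if rate > best_rate:
             best, best_rate = m, rate
     return best, best_rate

 print(monte_carlo_move(10))  # perfect play leaves a multiple of 4, i.e. take 2

Because the evaluation averages over random opponent replies, a move can score well even when its wins rely on the opponent cooperating, which is exactly the failure mode discussed above.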

8
wowzer 3 days ago 2 replies      
While what the AlphaGo team has accomplished is nothing short of amazing, I'm not sure everyone's thinking about this in the right context. While playing, there's a "super computer" behind the scenes with these specs: 1,920 CPUs and 280 GPUs [0]. Then consider all the machines used to train this neural net. I'd say Sedol's brain is pretty freaking powerful. Also, with that much computing power I would expect AlphaGo to win, given the right team and the right approach to solving the problem. It would be very interesting to change the rules and limit the processing power of the computer playing a human.

[0] https://en.wikipedia.org/wiki/AlphaGo

9
Wildgoose 3 days ago 2 replies      
Four components:

Learning (viewing millions of professional game moves).

Experience (playing different versions of itself).

Intuition (the ability to accurately estimate the value of a board).

Imagination (evaluating a series of "what if?" scenarios using Monte Carlo Tree Search).

I think the significant thing about AlphaGo is that apart from some hand-crafting in the Monte Carlo Tree Search routines, this is all general purpose programming.

It may only be baby-steps, but it does feel like a genuine step towards true (general) AI.
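One way to see how those four pieces combine: AlphaGo's tree search selects moves with a PUCT-style rule in which the policy net's prior and the value estimate both appear. A minimal sketch of that scoring rule, following the formula reported in DeepMind's paper (the variable names are illustrative):

 import math

 def puct_score(value_estimate, prior, parent_visits, child_visits, c_puct=1.0):
     # value_estimate: average of value-net / rollout results ("intuition")
     # prior: policy-net probability for this move ("learning" + "experience")
     # The exploration bonus decays as a move gets visited, so the tree
     # search ("imagination") is steered toward moves the networks favor.
     exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
     return value_estimate + exploration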

10
nichochar 3 days ago 1 reply      
Guys, this is fantastic, but let's not forget what "shows how capable the human brain actually is":

1) The human brain invented Go to begin with

2) The long and celebrated history of Go

3) The human brain made DeepMind

4) The human brain finds value and beauty in all of this, which no machine would
11
typon 3 days ago 1 reply      
Does anyone else find it funny that the Game 4 in which Lee Sedol won got the most upvotes on Hacker News? We're still firmly with team human it seems :P
12
tunesmith 3 days ago 0 replies      
AlphaGo was strong enough to survive a mistake (not knowing a known tesuji) and still claw back to win by a couple of points. I wonder what that means in terms of handicap; maybe it is a stone stronger than Sedol?
13
trott 3 days ago 4 replies      
I tried to learn Go a decade ago. After spending some time on it, I came to the conclusion that it's just not an enjoyable game for me. Here's why:

As you can see in this match, games are often won and lost by just a few points (1% of the whole territory). So you have to count territory precisely not only at the end but throughout the game, and this isn't easy to do in your head.

Maybe if you are an autistic accountant, that's fun, but not for me. If I have to play a strategic board game, it will be good old chess. And now that computers are finally beating people at both, there is no longer any need to look at Go as some kind of mythical last refuge of humanity.

14
zubspace 3 days ago 4 replies      
Google improved the outcome by putting in large amounts of processing power. What happens if humans do the same?

Instead of just Lee Sedol, how about putting the top 10 Go players in a room against AlphaGo? Would the chance of winning increase?

Maybe we'd find out that 3 top Go players vs. the AI is the optimal arrangement, and adding more humans decreases the odds of winning the match?

This would lead to the following question: why does AI improve as we add more processing power, while adding more human brainpower decreases their overall power?

Maybe we'd find out that 3 good developers working on a project is optimal, and more decrease the chance of success?

15
goquestion 3 days ago 1 reply      
What's needed to use programs like AlphaGo to enhance human enjoyment of Go (and other games like chess, where I have more experience)? I'm more interested in this than in the "man vs. machine" narrative.

Ideally we could take AlphaGo and produce an algo that can smoothly vary its playing proficiency as a human opponent increases in skill. The problem I've seen in chess computers is that setting them to "amateur" results in 3-4 grandmaster-perfect moves followed by a colossal blunder to let the human opponent catch up.

Ideally you could use a computer opponent as an always-available, continuously adapting challenger to train hard against all the time.

16
conanbatt 3 days ago 0 replies      
One interesting thing happened to me: I got to the game before knowing which side was which color or the result, and I could tell which one was the computer, an exercise I hadn't tried with the previous games.
17
dopamean 3 days ago 0 replies      
I've gone from thinking "it will be impressive if AlphaGo wins a game in this series" to "wow, it's pretty impressive that Sedol took a game off AlphaGo." Craziness.
18
brador 3 days ago 3 replies      
I question how much of its success is down to the AI understanding the game and outplaying the opponent vs. simple culled brute force, especially when they can throw Google-level computing power at it, and they have mentioned using heat maps and looking at move sets.

It could be argued that it's only AI when it understands the game rules and plays to them without iterating random choices until it finds a hit. Machine learning would be between the two, but still not what many would consider true AI.

19
ioncube 3 days ago 1 reply      
Anybody knows the music playing in the background?

EDIT: That's what Shazam was able to recognize:

Hit Me! - Dreamliner - https://soundcloud.com/hit-me-music-production/dreamliner

20
mikhail-g-kan 3 days ago 0 replies      
Yann LeCun commented on AlphaGo with regard to general AI:

https://www.facebook.com/yann.lecun/posts/10153426023477143

In short: we are at least one big step away from creating human-level AI.

21
yangtheman 3 days ago 0 replies      
Would it be conceivable that a similar type of AI could deploy and manage unmanned military vehicles, e.g. unmanned drones and tanks, and monitor battle progress (assuming that the other side is managed by humans)? It wouldn't necessarily be turn-based, but it would constantly evaluate its moves against a changing environment outside its control to reach its objective. I think such a future is conceivable and scary at the same time.
22
thatsadude 3 days ago 1 reply      
The game was amazingly close, though.
23
tetraodonpuffer 3 days ago 0 replies      
Note that, for whoever is interested, it seems reddit will do an AMA with all the various pro commentators this Saturday; check r/baduk for more information.

https://www.reddit.com/r/baduk/comments/4ai8e8/what_do_i_nee...

24
xefer 3 days ago 1 reply      
I'm curious, if these machines can consistently beat 9-dan players like this, is there talk of creating a 10-dan level?
25
partycoder 3 days ago 0 replies      
It was very close though. Lee Sedol resigned in the yose (the endgame) after noticing he was behind by no more than a moku (one point).
26
eatbitseveryday 3 days ago 0 replies      
I wonder what factors of game play create different advantages. For example, if per-turn time limits are below some threshold, whether humans would be at an advantage. It would certainly make for an interesting game.
27
haffi112 3 days ago 0 replies      
Next challenge: Can AlphaGo beat 10 or 100 humans playing together against it?
28
marcell 3 days ago 1 reply      
I've heard that in Chess, there are specific strategies that are effective against computers, but not against humans. Is this the case, and is it possible to do anything similar in Go?
29
anocendi 3 days ago 0 replies      
When it comes to competitive gaming, Koreans were gods, and now AI has come to top them.

Before we realize it, we will hear about Google's StarCraft 2/3 bot beating the Korean world champion.

30
atrudeau 2 days ago 0 replies      
Has anyone found the 15 minute summary for match 5? They had them for the first 4 matches. Really great stuff.
31
ep103 3 days ago 0 replies      
Does anyone have any suggested ways to learn the algorithmic techniques AlphaGo uses? I've heard Monte Carlo tree search and neural nets both mentioned.
32
pervycreeper 3 days ago 3 replies      
Could anyone suggest a good introduction to the rules and basic strategies of this game?
33
guilhas 3 days ago 0 replies      
Shouldn't Lee have had the opportunity to train for a year against this AI before the competition?

Training (Human vs Human) is not the same as (Human vs Algorithm) or (Algorithm vs Algorithm).

34
nickjj 3 days ago 3 replies      
Is it just me or is it really lame that Google isn't going to pay Lee?

From the article:

> Lee was competing for a $1 million prize put up by Google, but DeepMind's victory means it will be donated to charity.

So Lee provides Google knowledge that only he is capable of providing due to his extreme skill in the game and Google won't even pay him?

35
vbezhenar 3 days ago 0 replies      
What I find terrifying is that we are comparing the best players of the entire human population to a machine. Even if the machine is only barely on the same level, honestly, 99.9% of people probably wouldn't stand a chance against it, no matter how hard they tried. Those professional players are the best of the best.

Now compare that level of AI with an average person. The game of Go might not be directly applicable to our lives, but it's only a demonstration. And it's as replicable as copy'n'paste; compare that to the amount of time, money and effort required to grow and train a human.

That future where not only drivers and factory workers are replaced by robots, but anyone who's not doing extremely intellectual work, is getting closer. A factory robot is not that cheap; it requires manipulators and repairs. But cheap office work does not require anything physical; it's replicable extremely fast and will cost very little. It's an exciting and terrifying future. It's not going to sit well with the current capitalist economic model.

5
Encryption, Privacy Are Larger Issues Than Fighting Terrorism npr.org
641 points by Osiris30  3 days ago   176 comments top 17
1
jacquesm 3 days ago 5 replies      
> No, David. If I were in the job now, I would have simply told the FBI to call Fort Meade, the headquarters of the National Security Agency, and NSA would have solved this problem for them. They're not as interested in solving the problem as they are in getting a legal precedent.

That's quite the quote, especially given his history of employment.

The weirdest thing about this whole cell-phone saga to me is that the perps are dead, did not appear to be part of some organized group, and that very little more could be done to them than has been done already based on evidence found on the phone.

Then there is the bit that a lot of the information that is on the phone is also already in the log files of the carriers. It's as if that phone somehow magically is going to yield an entirely new class of information that may not even exist in the first place.

To me it has been evident from day one that this is not about this phone or the data that's on it, but just about the legal precedent. Getting that in black and white from the former head of counterterrorism is quite an indictment of his successors.

2
jccc 3 days ago 6 replies      
Before you comment, please consider whether you'd prefer to vent your frustration in online message boards with like-minded people, or spend that potential energy in other ways:

"What you can do about it:

-- You can contact the Obama White House online to comment on strong encryption.

https://www.whitehouse.gov/webform/share-your-thoughts-onstr...

-- You can contact your state Senators and Representatives via the contact information supplied by ContactingTheCongress.org.

http://www.contactingthecongress.org/

-- You can specifically contact Senators Richard Burr (R-NC) and Dianne Feinstein (D-CA) to express concerns about their bill intended to force companies to weaken or work around encryption under court orders.

http://www.contactingthecongress.org/cgi-bin/newseek.cgi?sit...

http://www.contactingthecongress.org/cgi-bin/newseek.cgi?sit...

http://appleinsider.com/articles/16/03/09/proposed-senate-bi...

Express yourself with the honesty and clarity that the government's charm offensive is lacking."

http://appleinsider.com/articles/16/03/14/take-a-stand-again...

3
Gratsby 3 days ago 10 replies      
>PRESIDENT BARACK OBAMA: If, technologically, it is possible to make an impenetrable device or system where the encryption is so strong that there's no key - there's no door at all - then how do we apprehend the child pornographer? How do we solve or disrupt a terrorist plot?

It's so disappointing to me to hear a quote like that from the President.

4
clarkmoody 3 days ago 1 reply      
The larger issue, by far, is whether we are a free people.

From the article:

 CLARKE: No, the point I'm trying to make is there are limits. And what this is is a case where the federal government, using a 1789 law, is trying to compel speech. And courts have ruled in the past, appropriately, that the government cannot compel speech. What the FBI and the Justice Department are trying to do is to make code writers at Apple - to make them write code that they do not want to write that will make their systems less secure.
If the FBI gets its way in this case, forcing Apple employees to perform a service for the government, then it sets the precedent for the government to compel anyone to do anything the government wants. When you are forced to work for someone against your will, this is called slavery.

Of course the FBI used a terrorist attack to try and get what it's always wanted, and it will abuse the unlock power in the future if it gets it now, but judges could easily cite this case as a defense for the government to compel other action from the people.

Clarke makes it sound like there is court precedent against this compulsion, but that would be overturned if the FBI wins.

Indeed, encryption and privacy are very important, but our very liberty is more important.

5
emodendroket 3 days ago 3 replies      
It seems clear to me that if all the money we spent on fighting terrorism since 9/11 were instead spent on, say, reducing traffic fatalities, it would have saved a lot more people.
6
sugarfactory 3 days ago 0 replies      
I think Apple and all the other tech companies that support it are moving exactly as the FBI (or whoever controls the FBI) expected or wanted.

What was revealed a few years ago was the fact that big tech companies had betrayed people's trust. So quite naturally they should attempt to regain that trust. If the majority of people stop trusting tech companies and start using end-to-end encryption, the use of encryption stops working as a signifier of a higher likelihood that the user is doing something wrong. Thus it's crucial to keep ordinary people away from encryption, and to achieve this, it's important to make people trust big tech companies again.

In my opinion, this is what the writer of the plot of the dispute between the FBI and Apple thinks.

7
exabrial 3 days ago 0 replies      
"You dont need a gun"

"You don't need encryption"

It's not the bill of needs. I was born with these rights. This is the danger of eroding the constitution, the arguments can be used against whatever issue you want. If we want it changed, do it the right way and pass an amendment. But please, protect the integrity of the most important document we have.

8
kiba 3 days ago 2 replies      
Sometimes I wonder if the FBI and other security agencies have lost perspective, or if they know something that we don't.

Time and time again, their arguments are not particularly persuasive.

I don't doubt the existence of terrorists, but it seems that they are more boogeymen than actual threats.

And when it comes down to it, the power of terrorists is to inspire fear rather than to kill people. They can change us because we felt the need to change.

9
themartorana 3 days ago 0 replies      
"You know, we could, at the far extreme to make the FBI's job easier, put ankle bracelets on everybody so that we'd know where everybody was all the time. That's a ridiculous example, but my point is encryption and privacy are larger issues than fighting terrorism."

OK, so replace "ankle bracelets" with "GPS/cell-triangulated device" and it's a ridiculous example because... what, things that are already real aren't really "examples"?

10
ck2 3 days ago 1 reply      
Other governments are definitely going to force manufacturers to make their phones unlockable, or else ban their sale in that country.

China, Russia, and Saudi Arabia all forced Blackberry to turn over its encryption keys long ago.

US politicians should set an example and say we are NOT going to be like China, Russia, and other repressive regimes, and that when people's lives are literally on their phones, they have a reasonable right to privacy and protection from search and seizure (you know, like in our constitution, but ignored every day).

11
ccvannorman 3 days ago 1 reply      
I am surprised that a search for "math" only turned up one result in this thread, about car accidents vs terrorist victims.

Isn't it true that encryption legislation or policy is sort of irrelevant next to the very clear math that says encryption will always be ahead of decryption? Even in a (hopefully avoidable) dystopia where encryption is illegal, would that really stop technology companies from continuing to do what they've always done?

John Oliver has a great segment[1] where he notes that the majority of cheap, available encryption applications aren't even US-based, and so it becomes nigh-impossible for our (or any) government to stop any pedestrian from encrypting.

[1] https://www.youtube.com/watch?v=zsjZ2r9Ygzw

12
kordless 3 days ago 0 replies      
Encryption and privacy are what make this reality work. You think you are you. I think I am me. This reality's ability to keep those separate is a privacy feature. From a Buddhist's perspective our understanding (Dharma) is that we are, on some level, all the same entity.

One of the early sutras put it this way:

> "Discrimination is consciousness. Nondiscrimination is wisdom. Clinging to consciousness will bring disgrace but clinging to wisdom will bring purity. Disgrace leads to birth and death but purity leads to Nirvana."

Encryption gives the means by which we can enable privacy between ourselves, or what we think of as self. If we enable complete privacy from all others, we drop into a self-world. If we disable privacy, and join all the others disabling privacy, we drop into an isolated type of Nirvana, with the implication everything becomes quite boring. I have compared this in the past to the observed push and pull of public and private cloud business models.

One solution may come via virtual realities where we can arrive at consensus in a fair and measured way without centralized control. It is my belief that immutable data structures backed by encryption, such as a blockchain, are the path out of this mess.

Here's Alan Watts talking about this: https://www.youtube.com/watch?v=lBOcFwUzIIQ

13
shpx 3 days ago 1 reply      
>We could put ankle bracelets on everybody so we'd know where everyone was all the time.

How does everyone carrying phones not already make this the case?

14
xrorre 3 days ago 1 reply      
The Apple situation annoys me because it's no longer about the web. It's about breaking crypto on a device which is vendor-locked. The same thing as breaking homegrown crypto, or DVD crypto; easy and trivial. The fact that Apple doesn't use ephemeral keys and can't simply throw away the key in the event of an incident is worrisome enough.

Real crypto needs to be more compartmented than that. A bank is not secure because of the massive door - it's safe because it would take a thief weeks to empty every safety deposit box.

It's also made even safer when the key is (more or less) thrown away for periods of time and nobody can get it. Even with manual over-ride. Literally somebody could be dying inside the safe and nobody could save them.

In properly implemented crypto nobody should hear you scream.

15
bicknergseng 3 days ago 1 reply      
I just had a thought: what happens if Apple complies with the order (say they lose the legal battle or something), but individual employees refuse to build the software? I think the jury is still out on whether or not Apple, a corporation, can be compelled to do this, but what if they can't find anyone to do it?

Just thinking it _should_ be much harder to compel individuals to do something like this than it is to compel a corporation.

16
username3 3 days ago 0 replies      
Gun rights are larger issues than fighting mass shootings.
17
JustSomeNobody 3 days ago 1 reply      
Edit:Posted to wrong article. My apologies.
6
A previously unnoticed property of prime numbers quantamagazine.org
788 points by tcoppi  4 days ago   233 comments top 38
1
dantillberg 4 days ago 11 replies      
I almost overlooked this article because I got turned off by the opening description in base 10, as there is a lot of math trivia out there that is specific to base 10 which holds little general significance.

But a little further down, the article discusses how this was discovered originally in base 3, and I think it's much simpler to understand in that context, since all primes except 3 (aka 10 base 3) end in just either 1 or 2:

"Looking at prime numbers written in base 3 in which roughly half the primes end in 1 and half end in 2 he found that among primes smaller than 1,000, a prime ending in 1 is more than twice as likely to be followed by a prime ending in 2 than by another prime ending in 1."

2
mjs 4 days ago 14 replies      
"If Alice tosses a coin until she sees a head followed by a tail, and Bob tosses a coin until he sees two heads in a row, then on average, Alice will require four tosses while Bob will require six tosses (try this at home!), even though head-tail and head-head have an equal chance of appearing after two coin tosses."

How does this work?
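One way to see it: while waiting for head-head, an unlucky tail wipes out all your progress, whereas while waiting for head-tail, an extra head costs you nothing. A quick simulation (a minimal sketch) confirms the averages of 4 and 6:

 import random

 def tosses_until(pattern):
     # flip a fair coin until the sequence of results ends with `pattern`
     seq = ""
     while not seq.endswith(pattern):
         seq += random.choice("HT")
     return len(seq)

 trials = 100000
 print(sum(tosses_until("HT") for _ in range(trials)) / trials)  # ~4.0
 print(sum(tosses_until("HH") for _ in range(trials)) / trials)  # ~6.0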

3
crnt2 4 days ago 2 replies      
The results are particularly striking in base 11 - looking at primes below 100 million, only 4.3% of primes ending in 2 are followed by another prime ending in 2 (compared to the 9.1% you would naively expect) with similar numbers for other pairs.

A prime ending in 2 (in base 11) is also unlikely to be followed by a prime ending in 5, 7 or 9, whereas it is particularly likely to be followed by a prime ending in 4 or 8.

It would be interesting to know what structure there is (if any) in this NxN "transition matrix" for various bases.

 1: ( 1, 4.3%) ( 2, 13.0%) ( 3, 14.3%) ( 4, 7.7%) ( 5, 11.5%) ( 6, 6.3%) ( 7, 18.0%) ( 8, 9.0%) ( 9, 10.7%) (10, 5.2%)
 2: ( 1, 10.0%) ( 2, 3.7%) ( 3, 11.3%) ( 4, 14.1%) ( 5, 7.5%) ( 6, 12.1%) ( 7, 5.3%) ( 8, 17.5%) ( 9, 7.8%) (10, 10.7%)
 3: ( 1, 6.1%) ( 2, 10.3%) ( 3, 3.7%) ( 4, 12.5%) ( 5, 14.0%) ( 6, 9.2%) ( 7, 12.1%) ( 8, 5.6%) ( 9, 17.5%) (10, 9.0%)
 4: ( 1, 11.1%) ( 2, 6.1%) ( 3, 9.9%) ( 4, 4.1%) ( 5, 11.5%) ( 6, 14.5%) ( 7, 7.7%) ( 8, 12.0%) ( 9, 5.3%) (10, 18.0%)
 5: ( 1, 9.6%) ( 2, 12.7%) ( 3, 6.3%) ( 4, 11.5%) ( 5, 4.0%) ( 6, 13.6%) ( 7, 14.5%) ( 8, 9.2%) ( 9, 12.1%) (10, 6.4%)
 6: ( 1, 17.9%) ( 2, 8.5%) ( 3, 10.6%) ( 4, 5.0%) ( 5, 9.6%) ( 6, 4.0%) ( 7, 11.4%) ( 8, 14.0%) ( 9, 7.5%) (10, 11.5%)
 7: ( 1, 6.0%) ( 2, 19.1%) ( 3, 8.8%) ( 4, 11.1%) ( 5, 5.1%) ( 6, 11.6%) ( 7, 4.1%) ( 8, 12.5%) ( 9, 14.1%) (10, 7.7%)
 8: ( 1, 12.0%) ( 2, 5.5%) ( 3, 17.5%) ( 4, 8.8%) ( 5, 10.6%) ( 6, 6.3%) ( 7, 9.9%) ( 8, 3.7%) ( 9, 11.3%) (10, 14.3%)
 9: ( 1, 8.8%) ( 2, 12.4%) ( 3, 5.5%) ( 4, 19.1%) ( 5, 8.6%) ( 6, 12.7%) ( 7, 6.0%) ( 8, 10.3%) ( 9, 3.7%) (10, 13.0%)
 10: ( 1, 14.3%) ( 2, 8.8%) ( 3, 12.0%) ( 4, 6.0%) ( 5, 17.8%) ( 6, 9.6%) ( 7, 11.1%) ( 8, 6.1%) ( 9, 10.0%) (10, 4.3%)

4
crnt2 4 days ago 0 replies      
Here is my attempt to work through the math and figure out how "surprising" this result is.

Clearly, we should expect that for small primes (< 100e6) it is less likely that a prime ending in K (in base B) will be followed by another prime ending in K - because for that to happen, none of the B-1 numbers in between can be prime.

A (very naive) model of the distribution of primes says that every number n has probability p(n) = 1/log(n) of being prime. Assume that a number n ends with a k in base b. Define p = 1/log(n). Then the probability that the next prime ends in k+j is, roughly,

 q(j) = p * (1-p)^(j-1) * sum_{i=0}^{infinity} (1-p)^(i*b)
      = p * (1-p)^(j-1) / (1 - (1-p)^b)
In this formula, j takes values 1 to b (where j = b represents another prime ending in k).

For n ~ 1,000,000 and working in base b = 10, under this model we would expect around 6.97% of primes ending in k to be followed by another prime ending in k, whereas we would expect 13.7% to be followed by a prime ending in k+1 (it is apparent how naive the model is, since in fact we never see a prime ending in k followed by a prime ending in k+1, except for 2,3). It would not be hard to extend the model to rule out even numbers, or multiples of 3 and 5, but I have not done this.

Around n ~ 10^60 the distribution starts to look more equal, as the primes are "spread out" enough that you expect to have long sequences of non-primes between the primes, which blurs out the distribution to be roughly constant.

I think this is what the article is getting at when it quotes James Maynard as saying "It's the rate at which they even out which is surprising to me". With a naive model of 'randomness' in the primes, you expect to see this phenomenon at low numbers (less than 10^60) and for it to slowly disappear at higher numbers. And indeed, you do see that, but the rate at which the phenomenon disappears is much slower than the random model predicts.

I think that is why it is surprising.
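For anyone who wants to check those percentages, the formula above evaluates directly; a small sketch:

 import math

 def q(j, n, b):
     # naive model: each integer near n is independently "prime"
     # with probability p = 1/log(n)
     p = 1 / math.log(n)
     return p * (1 - p) ** (j - 1) / (1 - (1 - p) ** b)

 n, b = 1000000, 10
 print(q(b, n, b))  # next prime ends in the same digit: ~0.0697
 print(q(1, n, b))  # next prime ends one digit later: ~0.137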

5
c3534l 4 days ago 2 replies      
> This conspiracy among prime numbers seems, at first glance, to violate a longstanding assumption in number theory: that prime numbers behave much like random numbers.

I don't think this is true at all. Take a look at the famous Ulam Spiral: http://scienceblogs.com/goodmath/wp-content/blogs.dir/476/fi...

You can see that while prime numbers are difficult to predict, they're anything but random. I'm not sure why the article is claiming that mathematicians used to think the primes were evenly distributed, which is complete and utter nonsense.

6
valine 4 days ago 4 replies      
Can anyone say what the security implications of this are? Intuitively, it would seem the less 'random' primes appear to be, the easier it would be to factor the composite of two prime numbers.
7
dr_zoidberg 4 days ago 7 replies      
For those willing to try this over toy code, I did a (horrible, horrible, I'm terribly ashamed of it) quick Python snippet to check it out:

 def primer():
     p = 3
     while True:
         is_prime = True
         for x in xrange(2, p):
             if p % x == 0:
                 is_prime = False
                 break
         if is_prime:
             yield p
         p += 2

 give_prime = primer()
 primes = [1, 2]  # had to separate this into 2 lines because Python
 primes.extend([give_prime.next() for x in xrange(9998)])  # so we get 10,000 primes

 primes_dict = {}
 for i in xrange(len(primes) - 1):
     p0 = str(primes[i])[-1]
     p1 = str(primes[i + 1])[-1]
     key = "".join([p0, "-", p1])
     try:
         primes_dict[key] += 1
     except:
         primes_dict[key] = 1

 # let's delete the 4 outliers from the beginning
 del(primes_dict["1-2"])
 del(primes_dict["2-3"])
 del(primes_dict["3-5"])
 del(primes_dict["5-7"])
So long story short, my results over 10,000 primes:

 In [57]: primes_dict
 Out[57]:
 {'1-1': 365,
  '1-3': 833,
  '1-7': 889,
  '1-9': 397,
  '3-1': 529,
  '3-3': 324,
  '3-7': 754,
  '3-9': 906,
  '7-1': 655,
  '7-3': 722,
  '7-7': 323,
  '7-9': 808,
  '9-1': 935,
  '9-3': 635,
  '9-7': 541,
  '9-9': 379}
And you can clearly see that the tendency to avoid the same last digit is starting to show, though those that end in 1 are still not showing it completely. I tried with 100,000 primes but the (horrible) algorithm kinda got stuck, so I settled for 10,000 to make this a "quick test".

Before you go, please believe me, I'm sorry for primer() and give_prime. I'll try to never do those kinds of things again.

Edit: I've edited this like 5 times already over little typos and bad transcription mistakes I did all over the place. Should work now.
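For anyone who wants to push the same experiment past 10,000 primes, a sieve avoids the slow trial division; a minimal Python 3 sketch using only the standard library:

 from collections import Counter

 def primes_up_to(limit):
     # Sieve of Eratosthenes
     sieve = bytearray([1]) * (limit + 1)
     sieve[0] = sieve[1] = 0
     for i in range(2, int(limit ** 0.5) + 1):
         if sieve[i]:
             sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
     return [i for i in range(limit + 1) if sieve[i]]

 primes = primes_up_to(1300000)[:100000]  # the 100,000th prime is 1,299,709
 pairs = Counter((str(a)[-1], str(b)[-1]) for a, b in zip(primes, primes[1:]))
 for (a, b), count in sorted(pairs.items()):
     print(a + "-" + b + ":", count)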

8
grandalf 4 days ago 0 replies      
Primes seem to me to be more of an information theoretic concept than a number concept.

Primes are the simplest way to encode specific kinds of graphs in a form that unambiguously encodes all sub-graphs.

If you try to come up with a bit-representation that is equivalently rich it becomes difficult to think of one that is as simple yet preserves the semantics of the factorization tree.

So I guess my point is that the factorization tree of numbers is the fundamental concept, and it's information-theoretic. Primes happen to be an encoding of that fundamental concept into integers, but if we found an equivalently rich representation using a different encoding, we might understand primes better. I doubt that the quirks of the encoding have anything to do with the fundamental concept, however.

9
Houshalter 4 days ago 0 replies      
I once was really interested in finding patterns in prime numbers. I got a long csv file of prime numbers from the internet. I used symbolic regression on it, to try to predict the next prime in the list.

Symbolic regression basically uses genetic algorithms to fit mathematical expressions to data. The program I was using, Eureqa, tries to find the simplest expressions that fit, with only a handful of elements, to prevent overfitting and to give a human-understandable model.

Anyway this actually worked. Far from perfectly of course, but it was able to get much better than random predictions. It was definitely finding some pattern.

Unfortunately I used up Eureqa's free trial forever ago, and I'm not going to pay thousands of dollars for a subscription. But I am now thinking of writing my own software to do this, and then running it on a dataset of mathematical sequences like the primes.

10
Jabbles 4 days ago 3 replies      
I'm shocked at how simple a pattern was previously unknown.

https://play.golang.org/p/ajn-wMo_3V

11
personjerry 4 days ago 0 replies      
Wrote some code to compare random numbers to the primes for this property. To generate the random numbers, I apply the Prime Number theorem as a probability to determine if we want to select it, and then compare the stats to that of the actual primes. https://gist.github.com/personjerry/c58483daaf372acbe1fa

 cumulative:
 1 to 1: 30768 rand, 28289 prime
 1 to 3: 53573 rand, 51569 prime
 1 to 7: 44306 rand, 53263 prime
 1 to 9: 36968 rand, 32816 prime
 ratios:
 1 to 1: 0.18578027352594872 rand, 0.17048036302934247 prime
 1 to 3: 0.323479153458322 rand, 0.3107745710721539 prime
 1 to 7: 0.26752407692539926 rand, 0.3209832647330011 prime
 1 to 9: 0.22321649609032998 rand, 0.19776180116550257 prime
 cumulative:
 3 to 1: 37015 rand, 38455 prime
 3 to 3: 31015 rand, 25900 prime
 3 to 7: 53377 rand, 48596 prime
 3 to 9: 44594 rand, 53082 prime
 ratios:
 3 to 1: 0.22298058445431052 rand, 0.23161058343823215 prime
 3 to 3: 0.18683622387816942 rand, 0.15599308571187656 prime
 3 to 7: 0.3215462557454473 rand, 0.2926888028283534 prime
 3 to 9: 0.2686369359220728 rand, 0.3197075280215379 prime
 cumulative:
 7 to 1: 44412 rand, 42590 prime
 7 to 3: 36923 rand, 45728 prime
 7 to 7: 30588 rand, 25886 prime
 7 to 9: 53404 rand, 51800 prime
 ratios:
 7 to 1: 0.26863125805222376 rand, 0.25656008288956894 prime
 7 to 3: 0.2233331518747694 rand, 0.275463241849594 prime
 7 to 7: 0.18501515179008873 rand, 0.15593600154213152 prime
 7 to 9: 0.3230204382829181 rand, 0.3120406737187056 prime
 cumulative:
 9 to 1: 53453 rand, 56602 prime
 9 to 3: 44489 rand, 42837 prime
 9 to 7: 37022 rand, 38259 prime
 9 to 9: 30902 rand, 28144 prime
 ratios:
 9 to 1: 0.322266166664657 rand, 0.3413007561413876 prime
 9 to 3: 0.2682225410873838 rand, 0.2583000687401261 prime
 9 to 7: 0.22320427332907286 rand, 0.23069548124118136 prime
 9 to 9: 0.18630701891888632 rand, 0.1697036938773049 prime
Unless I'm doing something wrong, it honestly doesn't seem like the actual prime numbers have statistics that deviate from random numbers with a prime-like distribution. Hence it looks to me like just the result of a) looking at the "next" number, which naturally favors the digits right after it, and b) the probability of a given number being prime (prime number theorem).

12
Kenji 4 days ago 0 replies      
Soundararajan showed his findings to postdoctoral researcher Lemke Oliver, who was shocked. He immediately wrote a program that searched much farther out along the number line through the first 400 billion primes.

This is how modern computers have revolutionized even the most theoretical fields, like number theory. Remarkable. I love it!

13
JoeAltmaier 4 days ago 2 replies      
It's supposed to be true in every base. But of course in binary it's not true: every prime in binary ends in a 1, and it's followed by another prime that ends in a 1.
14
Terr_ 4 days ago 1 reply      
> This conspiracy among prime numbers seems, at first glance, to violate a longstanding assumption in number theory: that prime numbers behave much like random numbers.

I wonder if this is really an artifact like Benford's Law, which also involves first-digit-frequency (in any base) and also involves certain kinds of "random" numbers.

To recycle a past comment:

> If you have a random starting value (X) multiplied by a second random factor (Y), most of the time the result will start with a one.

> You're basically throwing darts at logarithmic graph paper! The area covered by squares which "start with 1" is larger than the area covered by square which "start with 9".
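A quick simulation of the darts-on-log-paper picture (a sketch; with two uniform factors, 1 comes out as the most common leading digit, though short of a majority):

 import random
 from collections import Counter

 def leading_digit(x):
     # first significant digit of a positive number
     while x < 1:
         x *= 10
     while x >= 10:
         x /= 10
     return int(x)

 # (1 - random()) lies in (0, 1], which avoids an exact zero
 products = ((1 - random.random()) * (1 - random.random()) for _ in range(100000))
 counts = Counter(leading_digit(p) for p in products)
 for d in range(1, 10):
     print(d, counts[d])  # digit 1 is the most common (~24%), digit 9 the rarest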

15
taf2 4 days ago 1 reply      
Does this have any ramifications for security? I vaguely understand that we rely on prime numbers to create secrets that are hard to guess... so does this in some way make them easier to guess?
16
silveira 4 days ago 0 replies      
I created a ulam spiral visualization for this article using JavaScript and HTML5 canvas. The demo and source code are at http://silveiraneto.net/2016/03/14/the-prime-conspiracy-visu...
17
ms013 4 days ago 1 reply      
For those who have Mathematica and want to experiment with this, here's a quick function to generate the data:

 f[n_, base_] := Module[
   {m, d, dpairs},
   d = Table[Last[IntegerDigits[Prime[i], base]], {i, 1, n}];
   dpairs = Table[{d[[i]], d[[i + 1]]}, {i, 1, Length[d] - 1}];
   Map[#[[1]] -> #[[2]] &, Tally[dpairs]]
 ]
For the first n primes in a given base, it returns the mapping {i,j} -> count for all pairings of digit i followed by digit j. E.g., for the first million base-5 primes:

 {2, 3} -> 68596
 {3, 0} -> 1,
 {0, 2} -> 1,
 {2, 1} -> 64230
 {1, 3} -> 77475
 {3, 2} -> 72827
 {2, 4} -> 77586
 {4, 3} -> 64371
 {3, 4} -> 79358
 {4, 1} -> 84596
 {1, 2} -> 79453
 {4, 2} -> 58130
 {4, 4} -> 42843
 {1, 1} -> 42853
 {3, 3} -> 39668
 {2, 2} -> 39603
 {1, 4} -> 50153
 {3, 1} -> 58255

18
arghbleargh 4 days ago 0 replies      
It should be noted that from the original paper, the asymptotic formula that Oliver and Soundararajan conjecture still says that each possibility for the last digits of consecutive primes should occur about the same number of times in the limit. It's just that the amount by which the frequencies vary is more than you would expect from the most naive model of primes as being "random".
19
hellofunk 4 days ago 1 reply      
>If Alice tosses a coin until she sees a head followed by a tail, and Bob tosses a coin until he sees two heads in a row, then on average, Alice will require four tosses while Bob will require six tosses (try this at home!), even though head-tail and head-head have an equal chance of appearing after two coin tosses.

Now that is particularly interesting to think about.

20
wallacoloo 3 days ago 0 replies      
As a non-mathematician, this is a pretty neat read. I was distracted by the personification of the numbers though (they have 'likes' and 'preferences', which in my day-to-day vocab are concepts applicable only to things that possess the ability to think). Is this common in mathematical writing, or is this paper an abnormality in that sense?

(I don't mean to nitpick - I'm genuinely curious. I recall seeing the same thing in high-school chemistry, but never in physics, for example, and I'm curious if entire fields see this effect or if it's a product only of the audience being written to).

21
kordless 4 days ago 0 replies      
I just spent 5 minutes looking for a chart showing the distribution of reserved commands in Python. Didn't find much.

A while back, I read something about different number bases' ability to help find additional primes. The base itself was prime, so maybe 7 or 13. Can't find the article ATM. I hypothesized that prime numbers are "code" provided by this universe to allow us to access other data stored in other primes. Quines of a sort, if you will. One way to invalidate this hypothesis would be to do a mean distribution of basic operators in a simple programing language and compare it to what we are seeing in primes.

22
jamieb007 4 days ago 2 replies      
"If Alice tosses a coin until she sees a head followed by a tail, and Bob tosses a coin until he sees two heads in a row, then on average, Alice will require four tosses while Bob will require six tosses (try this at home!), even though head-tail and head-head have an equal chance of appearing after two coin tosses."

Counter-intuitive at first but makes sense - the outcomes as a whole converge towards the average (50% heads, 50% tails). Nonetheless, it shows that each toss is related to the others. One can expect that primes are even more related - or at least to the primes that came before.

23
aaronchall 3 days ago 0 replies      
This phenomenon feels trivial - Think of 3 - X11, X13, X17, X19, X21, X23, X27, X29, X31, X33, X37, X39 - how many of these pseudonumbers will be divisible by 3? I count 4 where X is 0, 1, and 2, one time each for numbers ending in 1, 3, 7, and 9.

Just based on this knowledge, I know that a prime number is guaranteed not to be immediately followed by another one with the same ending 1 time in 2.

I'm not sure these fellows have found anything particularly interesting, but if so, and I have missed something, kudos to them.

24
mjevans 4 days ago 0 replies      
It would be interesting to know how well this holds up over different scales.

Does a prediction based on base 3 hold up better over primes under 100 than under 1,000, and under 1,000 than under 10,000?

Does the predictive power of a given ending sequence scale over a predictable range, depending on the base in which the primes are viewed?

Just thinking about what might be happening, I would imagine that the answer is yes, but that a lot of crunching would be needed to graph and deduce a relationship to an actual predictive property statement.

25
CarolineW 4 days ago 0 replies      
26
jeffdavis 4 days ago 1 reply      
I am surprised this took so long to discover -- wouldn't this be one of the first things to examine when looking for non-randomness?
27
undoware 4 days ago 0 replies      
So, the million dollar question is: how does this affect my security and privacy? Does this pattern mean encryption based on the assumption of the inherent randomness of primes is now less secure? E.g. is there now less entropy in a given set of primes?

I have a premonition of Quite a Bit of Trouble coming down the pipe.

28
baby 4 days ago 0 replies      
> Looking at prime numbers written in base 3

I HAD THE EXACT SAME IDEA. But I would probably have reached no conclusion.

29
lohankin 4 days ago 0 replies      
Possible generalization (example):

23 = (7)*3 + (2);

7 = (2)*3 + (1);

2 = (0)*3 + (2);

0 = (0)*3 + (0);

Take only the remainders and form a vector a = (2 1 2 0). What can be said about the components of this vector for the next prime after p? E.g., do the i-th components repel, like the first ones do?

30
girkyturkey 4 days ago 0 replies      
This is absolutely incredible. This is why mathematics is so amazing: something so small can be missed for centuries. It's all about how you look at things!
31
caf 4 days ago 0 replies      
So I wonder if a similar pattern is observable for Prime_i and Prime_i+n with some n > 1?
32
callesgg 4 days ago 0 replies      
It is not quite clear to me how, but to me it seems like it has to do with the fact that we use a number system that has a base.
33
porcodio 4 days ago 0 replies      
Interesting
34
CarolineW 4 days ago 4 replies      
That's bizarre - I tried to submit this four hours ago and was told it was a duplicate. I searched, and couldn't find the original submission to upvote it, and now it's submitted again, after my submission was declined.

I don't understand.

But it's a great result, so I've upvoted it, despite being confused.

35
mikek 4 days ago 2 replies      
> Quanta Magazine, NUMBER THEORY: "Mathematicians Discover Prime Conspiracy." A previously unnoticed property of prime numbers seems to violate a longstanding assumption about how they behave.

By Erica Klarreich, March 13, 2016

Two mathematicians have uncovered a simple, previously unnoticed property of prime numbers, those numbers that are divisible only by 1 and themselves. Prime numbers, it seems, have decided preferences about the final digits of the primes that immediately follow them.

Among the first billion prime numbers, for instance, a prime ending in 9 is almost 65 percent more likely to be followed by a prime ending in 1 than another prime ending in 9. In a paper posted online today, Kannan Soundararajan and Robert Lemke Oliver of Stanford University present both numerical and theoretical evidence that prime numbers repel other would-be primes that end in the same digit, and have varied predilections for being followed by primes ending in the other possible final digits.

"We've been studying primes for a long time, and no one spotted this before," said Andrew Granville, a number theorist at the University of Montreal and University College London. "It's crazy."

The discovery is the exact opposite of what most mathematicians would have predicted, said Ken Ono, a number theorist at Emory University in Atlanta. When he first heard the news, he said, "I was floored. I thought, 'For sure, your program's not working.'"

This conspiracy among prime numbers seems, at first glance, to violate a longstanding assumption in number theory: that prime numbers behave much like random numbers. Most mathematicians would have assumed, Granville and Ono agreed, that a prime should have an equal chance of being followed by a prime ending in 1, 3, 7 or 9 (the four possible endings for all prime numbers except 2 and 5).

"I can't believe anyone in the world would have guessed this," Granville said. Even after having seen Lemke Oliver and Soundararajan's analysis of their phenomenon, he said, it still seems like "a strange thing."

Yet the pair's work doesn't upend the notion that primes behave randomly so much as point to how subtle their particular mix of randomness and order is. "Can we redefine what 'random' means in this context so that once again, [this phenomenon] looks like it might be random?" Soundararajan said. "That's what we think we've done."

Prime Preferences

Soundararajan was drawn to study consecutive primes after hearing a lecture at Stanford by the mathematician Tadashi Tokieda, of the University of Cambridge, in which he mentioned a counterintuitive property of coin-tossing: If Alice tosses a coin until she sees a head followed by a tail, and Bob tosses a coin until he sees two heads in a row, then on average, Alice will require four tosses while Bob will require six tosses (try this at home!), even though head-tail and head-head have an equal chance of appearing after two coin tosses.

Can someone explain this?

36
sparrish 4 days ago 1 reply      
While fascinating, I fail to see how this qualifies as a "conspiracy". Are there definitions of "Conspiracy" in mathematics that I'm unaware of?
37
tdsamardzhiev 4 days ago 0 replies      
I've thought about that when I was a little kid. True story!
38
learnstats2 4 days ago 3 replies      
Perhaps I have missed something, but the introductory example seems to follow from simple probability and therefore I do not find it mathematically remarkable.

Say there is a fixed and equal probability that each number ending in 9 or 1 is prime. I could go along with that assumption, although the fact that primes get less likely as you go higher is potentially relevant.

What the authors consider here is starting with a prime ending in 9, so the next potential prime ends in 1. If only because 1 is the next number to be checked, a 1-prime is more likely to appear next than a 9-prime. The probability can be calculated, depending on your assumptions, as a geometric sequence. In any case, P(next prime is 1) > P(next prime is 9).

"Most mathematicians would have assumed, Granville and Ono agreed, that a [known] prime should have an equal chance of being followed by a prime ending in 1, 3, 7 or 9" So - I'm a definite nope on that.

This result appears to be exactly what I would have assumed was the case.

7
I made my own clear plastic tooth aligners and they worked amosdudley.com
934 points by dezork  5 days ago   133 comments top 38
1
rl3 5 days ago 4 replies      
Not to be a downer, but was any thought given to the safety of the plastic(s) used?

This is something that's in your mouth a lot and constantly exposed to saliva.

The Dimension 1200es mentioned doesn't appear to be specific to medical applications.[0] The product page lists the only compatible thermoplastic being ABSplus-P430. The MSDS for that basically says the stuff is dangerous in molten form, and beyond that there's very little data.[1] The same company makes "Dental and Bio-Compatible" materials for use with their other products, and these appear to have considerably more safety data.[2]

>The aligner steps have been printed, in addition to a riser that I added in order to make sure the vacuum forming plastic (sourced from ebay) ...

As another commenter pointed out, the vacuum forming plastic is probably the primary concern because the 3D printer was just used to create the molds. The specific type of vacuum plastic isn't mentioned.

Regardless, very neat project.

[0] http://www.stratasys.com/3d-printers/design-series/dimension...

[1] http://www.stratasys.com/~/media/Main/Files/SDS/P430_ABS_M30...

[2] http://www.stratasys.com/materials/material-safety-data-shee...

2
jeffchuber 5 days ago 1 reply      
Awesome work!

The animation definitely seems the most difficult (and subjective), but also the most cool! Body hacking via computed geometry!

Invisalign (Align Technology) uses almost the same workflow. Market cap: $5.89B.

If you could move the workflow over to something based on WebGL / three.js, you could make this accessible to dentists in developing countries. Could be an awesome open source project.

I think "allowing" it to be used in the US would open you up to too much liability though :(

3
loocsinus 5 days ago 1 reply      
It is smart that you designed the retainers based on the maximum tolerance of tooth movement, quoting from a textbook. I suggest you get an X-ray to make sure no root resorption has occurred. Also, for those who want to imitate this: measure the length of the teeth and compare it with the arch length, to make sure the teeth can actually "fit" into the arch. I am a dental student.
4
percept 5 days ago 2 replies      
Now that is awesome--those things aren't cheap.

I'm going to send this to my dentist (who's cool enough to appreciate it).

5
forgotpasswd3x 5 days ago 1 reply      
This is really amazing, man. It's honestly the first 3D printing application I've seen that I can see quickly improving thousands of lives. Just to think of all the people who right now can't afford this procedure, that soon will be able to... it's just really wonderful.
6
valine 5 days ago 0 replies      
He scans his teeth, animates how he wants them to move in blender, and then 3D prints each frame. That is absolutely brilliant.
7
daveguy 5 days ago 1 reply      
It looks like the author took into account the safety of the plastic in creating these, which is a good thing. Maybe more so than dentists. You know "silver" fillings, aka dental amalgam? They are 50% mercury by weight and are still being used. Supposedly safe because it is inhalation of mercury that is poisonous. Removal of those fillings with a drill can be dangerous. When some guy told me about this and talked about it being the next asbestos/mesothelioma, I was thinking "sure! That sounds like conspiracy crap!" Then I looked it up on the FDA site like he suggested:

http://www.fda.gov/MedicalDevices/ProductsandMedicalProcedur...

Anti-vaxxers are idiots and it is obvious that vaccines don't cause autism (original study was a fraud). The health benefit of vaccines is as undeniable as the lack of correlation to autism.

That said, dental amalgam is a chunk of mercury in your mouth. The FDA says it is safe for people over 6 years old, but I personally will stay away from it for any future dental work.

8
minsight 5 days ago 1 reply      
This is just amazing. I was waiting for how it might go horribly wrong, but the guy's mouth looks great.
9
wslh 5 days ago 1 reply      
There is an important issue missing from the article (beyond the warning notice): occlusion. Modifying the dental structure requires a whole functional analysis that goes beyond the teeth.

Anyway, the future is promising, and the issues could be solved by taking all the factors into account.

10
rashkov 5 days ago 1 reply      
I came across an article here on HN about mail-order Invisalign companies at a fraction of the price. I'm about halfway through and very happy with the progress so far. Just thought I'd give a heads-up if anyone is interested.
11
CodeWriter23 5 days ago 1 reply      
The work he did with the impressions, to me, suggests he has experience as / knows someone who is a dental technician. If he didn't, wow, he independently figured out some of their key techniques.

My grandfather used to make dentures, and that casting in the 4th photo looks exactly like the impressions my grandfather would make. They also used these hinges so they could mate the upper to the lower, so they could adjust any collisions that occurred while opening and closing the mouth.

12
hamburglar 5 days ago 0 replies      
Having recently done invisalign, I think this is brilliant, but I would have had a really hard time sticking with it through the pain. I would worry too much that I was doing damage. My case was quite a bit more severe, however, so maybe it's less of a big deal if the movements are minor.
13
teekert 5 days ago 2 replies      
This also seems to have whitened his teeth at the same time ;), typical "before, after".

But on a serious note: I had braces, and after they were removed a wire was placed behind my teeth to keep them in place. It didn't stick to one of my ceramic teeth, which I had from an accident in my youth. The wire was removed, and after some months my front two teeth were as far apart as ever. OK, the overbite didn't return, but things will move back at least to some degree over time.

As mentioned before, I myself would never just put any plastic material in my mouth, with all the bad things known about plasticisers, BPA/BPS, etc.

14
racecar789 5 days ago 0 replies      
Another option: have a dentist bond composite material to the couple of teeth that are out of alignment.

Had two teeth done for under $500 10 years ago.

It's a stopgap until braces are an option financially.

15
zump 5 days ago 0 replies      
Now THIS is a hack.
16
KRuchan 4 days ago 0 replies      
Kudos to him for doing this, but looking at the before and after pictures, I am slightly concerned that he has introduced an overbite [1] in his jaws :(

[1] https://en.wikipedia.org/wiki/Overbite

17
vaadu 5 days ago 1 reply      
How soon before the FDA says this is illegal or the medical industrial complex lobbies congress to make this illegal?
18
hellofunk 5 days ago 2 replies      
This is cool but I can't say I agree with actually doing it. Just because you can do something doesn't mean you should, particularly in matters of health. If you don't have the requisite experience and knowledge and training, it seems risky to go about something like this on your own.
19
stefanix 4 days ago 0 replies      
Made my own as well when I was interning at an orthodontist.

There is not really much tech required. You can simply cut apart the gypsum model tooth by tooth, align it perfectly with wax, and add space for your gums. Finally, create a mold and use medical-grade silicone to make the tooth straightener.

Silicone also allows for more movement and gives you control of upper and lower teeth in relation to each other.

While this is not rocket science, there are considerations about jaw alignment that would be difficult for the amateur to get right the first time around in all but simple misalignments.

20
Tepix 5 days ago 0 replies      
I love this project - well done, and the result speaks for itself! It's unfortunate that you were forced to go this somewhat dangerous route due to money. In some countries dental care like that would be paid for by the health insurance.
21
ck2 5 days ago 0 replies      
This is definitely for the brave, not me.

Not sure what I would do if we didn't have a dental school.

When I go there I am always surprised to find people who actually have insurance who still go there despite all the hassle.

22
yogipatel 5 days ago 0 replies      
I'm not trying to downplay how much the hacker/geek in me loves this, however, as a former* dental student, I would highly suggest not trying to pull this off on your own.

First, teeth and their movement are more complicated than they might first seem. You have to think about the entire masticatory apparatus, for example:

There's more root than crown; how does the root move in relation to the tooth? Root resorption is a common problem in orthodontic treatment.

Is there / will there be enough bone surrounding the tooth to support the intended movement?

How will the patient's occlusion (how the teeth fit together) be affected? Part of the Invisalign process is to take a bite registration that shows the upper and lower teeth in relation to each other. This is important, and ignoring it can potentially lead to other complications:

- stress fractures

- supraeruption of the opposing tooth

- TMJ pain

Does the patient display any parafunctional habits that will affect the new tooth positions? For example, do they grind, clench, or have abnormal chewing patterns?

Many Invisalign techniques require the placement of anchors, holds, and various other structures attached to the teeth themselves. They allow for more complex movement than the insert itself would be able to provide.

Adjustments are often required mid-treatment. Not everybody's anatomy and biology is exactly the same, so you have to adjust accordingly.

Now, does every general dentist take this into account 100% of the time? No, but they're at least trained to recognize these situations and compensate for them.

That said, many simple patients don't require any more thought than the OP put in. It's a good thing he looked in a textbook and realized that there's a limit to how much you should try to move a tooth at each step before you're likely to run into problems. And if you do run into problems, do you think a professional is going to come anywhere near your case?

A few issues I have with his technique:

Unless he poured his stone model immediately after taking the impression, it's likely there was a decent loss in accuracy. Alginate is very dimensionally precise, but only for about 30 minutes. The material that most dentists use, PVS, is dimensionally stable for much, much longer (not to mention digital impressions).

Vertical resolution of the 3D print does matter. You might be moving teeth in only two dimensions, but you're applying the movement over three dimensions.

Again, I think it is awesome that someone gave this a shot, and did a fairly good job as well. I'm all for driving the cost of these types of treatments down, as well as promoting a more hacky/open approach to various treatments. Just know there's more than meets the eye.

* I decided to go back to tech; there's too little collaboration in dentistry for me to make a career out of it.

23
syberspace 4 days ago 0 replies      
Slightly off topic: how is DIY tooth alignment going to affect criminal investigations? On all those crime shows on TV (CSI, Navy CIS, ...) they use dental records to identify otherwise unidentifiable bodies. Is this method even used in real life, and how would they find any records of your teeth if you fixed them yourself?
24
semerda 5 days ago 0 replies      
Wow, this is awesome! Thank you for sharing. Retainers post-Invisalign cost between $400 and $900 for one set - a total ripoff. This looks like a far cheaper alternative.
25
scep12 5 days ago 0 replies      
Awesome stuff Amos! It's always nice to see creativity and persistence rewarded with successful results. I really enjoy reading these types of posts on HN.
26
squizzel 3 days ago 0 replies      
Is it me or did it whiten your teeth? I noticed a big difference in upper plaque between the before and after pictures.
27
muniri 5 days ago 0 replies      
This is awesome! Definitely not the safest thing to do, but I'm glad that they worked.
28
justinclift 5 days ago 0 replies      
Cool, that's an idea I'd had in the back of my head for some time too. Good to see someone's gone ahead and done it, and proven the concept. :D
29
vram22 5 days ago 4 replies      
Interesting article. Waterpik is a related product (as in, for teeth and gums) that a dentist recommended. Anyone have experience using it - pros, cons?
30
burgessaccount 5 days ago 0 replies      
This is awesome! Thanks for the detailed description.
31
mentos 5 days ago 1 reply      
Are you considering starting a business out of this?
32
pcurve 5 days ago 0 replies      
this is pretty amazing and daring.

I guess this would work better with those with gaps or very mildly crowded teeth.

Often crowded teeth result in pulling teeth to make room.

33
z3t4 4 days ago 0 replies      
Considering the opportunity cost of the 100+ hours that probably went into this, it would be cheaper to go to a dentist.

He might be able to come up with a better or cheaper method than the current industry standard, though ...

34
darksim905 3 days ago 0 replies      
Very very awesome job :-)
35
hardtime 4 days ago 0 replies      
Nice work. I have braces so...
36
transfire 5 days ago 3 replies      
Can you chew food with the aligner on?
37
peleroberts 5 days ago 0 replies      
Direct leak into your gums..
38
brbsix 5 days ago 0 replies      
Orthodontics is a field known for its protectionism. It'd be pretty foolish, but I wouldn't be surprised if you received a cease and desist.
8
Dropboxs Exodus from the Amazon Cloud wired.com
665 points by moviuro  4 days ago   238 comments top 31
1
jamwt 4 days ago 6 replies      
I want to point out one other thing about this project, if only to assuage my guilt. And this community of hackers and entrepreneurs seems as good a place as any to clear the air.

There is a necessary abridgement that happens in media like this, wherein a few individuals in lead roles act as a vignette for the entire effort. In particular, on the software team, James and I are highlighted here, and it would be easy to assume we did this whole thing ourselves. Especially given very generous phrasing at times like "Turner rebuilt Magic Pocket in an entirely different programming language".

The full picture is that James and I were very fortunate to work with a team of more than a dozen amazing engineers on this project, who led the designs and implementations of key parts of it, and who stayed just as late at the office as we did during crunch time. In particular, the contributions of the SREs didn't make it into the article. And managing more than half a million disks without an incredible SRE team is basically impossible.

2
riobard 4 days ago 4 replies      
> Measuring only one-and-a-half feet by three-and-a-half feet by six inches, each Diskotech box holds as much as a petabyte of data

This number is very interesting. Basically Diskotech stores 1PB in 18" × 6" × 42" = 4,536 cubic inch volume, which is 10% bigger than standard 7U (17" × 12.2" × 19.8" = 4,107 cubic inch).

124 days ago Dropbox Storage Engineer jamwt posted here (https://news.ycombinator.com/item?id=10541052) stating that Dropbox is "packing up to an order of magnitude more storage into 4U" compared to Backblaze Storage Pod 5.0, which is 180TB in 4U (assuming it's the deeper 4U at 19" × 7" × 26.4" = 3,511 cubic inch). Many doubted that what jamwt claimed was physically possible, but doing the math reveals that Dropbox is basically packing 793TB in 4U if we naively scale linearly (of course it's not that simple in practice). Not exactly an order of magnitude more, but still.

Put another way, Diskotech is about 30% bigger in volume than Storage Pod 5.0 but with 470% more storage capacity.

That was indeed some amazing engineering.
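
For anyone who wants to reproduce the arithmetic, a quick Python sketch (assuming 1PB = 1024TB, which is what makes the 793TB figure come out):

    # enclosure volumes in cubic inches
    diskotech = 18 * 6 * 42        # 4536, holds ~1PB
    seven_u   = 17 * 12.2 * 19.8   # ~4107, standard 7U
    pod_4u    = 19 * 7 * 26.4      # ~3511, the deeper 4U chassis

    print(diskotech / seven_u)        # ~1.10, i.e. 10% bigger than 7U
    print(1024 * pod_4u / diskotech)  # ~793 TB in 4U at Diskotech density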

3
jamwt 4 days ago 11 replies      
Hi HN! A couple of us from the Magic Pocket software team are around to answer questions if anyone has some.
4
barkingcat 4 days ago 6 replies      
The word "cloud" loses all meaning in this article.

"The irony is that in fleeing the cloud, Dropbox is showing why the cloud is so powerful. It too is building infrastructure so that others dont have to. It too is, well, a cloud company."

Wait ... so using AWS is "cloud", having your own servers is "cloud" too. Everything is cloudy!

5
echelon 4 days ago 7 replies      
Unrelated to the content of the article--I've never seen the oft talked about "anti-adblocker interstitial" before. I was surprised to find that the website blocks viewing of the article entirely because I'm running Adblock.

It's an interesting subject. I'll never click on or be persuaded by ads, so they're not really gaining anything by showing me ads. I'm essentially worthless traffic to them no matter how they cut it.

I do understand the problem they're trying to solve, and I would like to pay for content that I find worthwhile. A convenient microtransaction protocol where I _don't have to sign in or interact in any way_ would be nice. (I don't want to waste time interacting with their bespoke account system / sign-on, even if it is "frictionless".)

6
asendra 4 days ago 2 replies      
So, just for fun. The video quotes "over 1PB" of storage per box.

I count 6 columns of 15 rows/drives. Might be 7 columns even, there's some panels that aren't fully open on the video.

So, 90-105 drives. I'm guessing they are using 10TB drives, although maybe they can get bigger unannounced drives? Roughly, the math seems to check out.

Quite impressive. Guess the Backblaze guys need a Storage Pod 6.0 soon :P (I know, I know, different requirements/constraints)

7
dikaiosune 4 days ago 0 replies      
steveklabnik also pointed out on the Rust subreddit that there's a Dropbox blog post covering a few more technical items:

https://blogs.dropbox.com/tech/2016/03/magic-pocket-infrastr...

8
myth_drannon 4 days ago 1 reply      
Now that Golang is making headway at Dropbox, I guess the Python codebase will diminish in importance... I wonder how Guido van Rossum feels about it. First he was at Google, but they created Go, and he left for Dropbox, which was a large Python shop; now they are moving to Go too.
9
hakcermani 4 days ago 1 reply      
It's an amazing, remarkable feat indeed to move all those bits live to another location! But it seems contrary to what Netflix did (move everything to AWS! - http://www.eweek.com/cloud/netflix-now-running-its-movies-ex...). I guess they have their specific usage and reasons?
10
turingbook 3 days ago 0 replies      
I hated the writing style of the Wired guys. The blog post from Dropbox is much clearer: https://blogs.dropbox.com/tech/2016/03/magic-pocket-infrastr...
11
matt_wulfeck 4 days ago 0 replies      
The open-source alternatives to S3 have a long way to go, especially since AWS S3 is so simple and reliable. I ran OpenStack Swift in production once upon a time, and I remember the entire system coming down because the rsyslog hostname did not resolve. Ouch. I'll let somebody else work out those bugs.
12
steveklabnik 4 days ago 0 replies      
A short mention of their use of Rust appears!
13
cj0 2 days ago 0 replies      
Some requests for the Dropbox Diskotech tech blog series:

- For which datacenter cooling system are the Diskotech storage pods designed? Hot aisle/cold aisle?

- How evenly can the disks inside the Diskotech be cooled? With this question I mean to ask: how good is it at keeping all drives cool? For example, in the theoretical case where all disks in the Diskotech are dissipating identical heat, how much variation is there in drive temperature at roughly a 25°C intake temperature?

- How much power is consumed for cooling (even though it will be fractionally low compared to what the disks consume)?

- Is there any drive-idle optimization done, or is the expectation that no disk will ever reach its idle (non-rotating) state?

- Is there any optimization done inside a Diskotech to reduce the (inrush) current surge when connecting the machine to power?
14
Spooky23 4 days ago 1 reply      
I'm a recent Dropbox Pro subscriber -- I've been a user for many years.

Really glad to see what's been going on from behind the scenes, and I'm looking forward to seeing what the future will bring to the front end of Dropbox.

Thanks guys!

15
virtuallynathan 3 days ago 1 reply      
It looks like the servers they make use of are purchasable, part of the Dell DSS series - the DSS7000: http://downloads.dell.com/manuals/common/dss%207000_dss%2075...

90x 3.5in disks, 2x compute nodes with 2x E5-2600v3 CPUs.

16
iskander 4 days ago 1 reply      
Q: Did the Dropbox devs working on the second version of Magic Pocket encounter any language stability issues with Rust? Did updates to the core language ever break existing code?
17
bane 4 days ago 0 replies      
From a business sense, this is a great move. Amazon's cloud storage offerings are pretty expensive; even with various deduping strategies in place, Dropbox needed both to store massive amounts of data and to move massive amounts of traffic in and out. If Dropbox has reached the point where it can staff the extra overhead of doing it itself, and figure out how to spread the hardware cost out well, this will give it more pricing flexibility in the near future.
18
the_watcher 4 days ago 0 replies      
I worked at a company that moved off AWS, and "changing a tire on a moving car" is exactly how the devops team described it.
19
dmitrifedorov 3 days ago 0 replies      
I'm done with Wired: I cancelled my subscription a couple of months ago (after many years), and now they demand I log in to read their website because I use an ad blocker. Done!
20
photonwins 4 days ago 1 reply      
I am curious to know: with so many disks densely packed in a 4U configuration, I am guessing there is definitely increased heat generation, not to mention vibration. How do you handle these? Also, does it have any effect on MTBF?
21
bluedino 4 days ago 1 reply      
>> Transferring four petabytes of data, it turned out, took about a day.

46GB/s, is my math right?
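
A quick sanity check in Python (assuming decimal petabytes and a 24-hour transfer window):

    # 4 PB in one day, expressed in GB/s (decimal units)
    print(4e15 / 86400 / 1e9)   # ~46.3, so yes, roughly 46GB/s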

22
jessegreathouse 4 days ago 1 reply      
Is there a link without the anti-adblock crap?
23
ldom66 3 days ago 0 replies      
Unrelated to the article, but these portraits are really beautiful! Props to the photographer and studio.
24
ccannon 3 days ago 0 replies      
I applaud your efforts at answering all these questions and providing very detailed responses.
25
max_ 3 days ago 1 reply      
What's the "brand new programming language" the writer meant?
26
ignoramous 4 days ago 5 replies      
Key points:

1. Dropbox moved from AWS to its own datacenters after 8 months of rigorous testing. They didn't exactly build an S3 clone, but something tailored to their needs, which they named Magic Pocket.

2. Dropbox still uses AWS for its European customers.

3. Dropbox hired a bunch of engineers from Facebook to build its own hardware heavily customised for data-storage and IOPS (naturally) viz. Diskotech. Some 8 Diskotech servers can store everything that humanity has ever written down.

4. Dropbox rewrote Magic Pocket in Golang, and then rewrote it again in Rust, to fit on their custom built machines.

5. No word on perf improvements, cost savings, stability, total number of servers, amount of data stored, or how the data was moved. (Edit: Dropbox has a blog post up: https://blogs.dropbox.com/tech/2016/03/magic-pocket-infrastr... )

6. Reminds people of Zynga... They did the same, and when the business plummeted, they went back to AWS.

7. Not a political move (in response to AWS' WorkDocs or CloudDrive), but purely an engineering one: Google and then Facebook succeeded by building their own data centers.

27
JackPoach 3 days ago 0 replies      
I wonder how capital intensive the move is. Isn't Dropbox overall still unprofitable as a company?
28
planetjones 4 days ago 9 replies      
Oh dear, I didn't realise Dropbox had invested all of that time and money moving into their own data centre. From my perspective the future of Dropbox looks bleak. Mass storage with Amazon is much cheaper [edit: from a consumer perspective]. I know Dropbox has superior software that works (as opposed to the poor apps by Amazon and Google), but I imagine a lot of people are like me, i.e. store most of their stuff at the cheapest location and use Dropbox just for docs they want to sync on multiple devices. Total income from consumers like me equals 0.

Dropbox also had that email app, didn't they, which they recently announced was closing down - Mailbox, if I recall correctly.

Can anyone convince me that this move by Dropbox isn't going to end very badly for them?

EDIT: downvoters - is HN unable to have a debate about whether this was a smart move or not?

29
razster 4 days ago 0 replies      
My reason for leaving Dropbox was their choice of board director. They were great, but since then I have found equal if not better services.
30
known 3 days ago 0 replies      
TL;DR: Amazon's cloud is expensive.
31
tschellenbach 3 days ago 1 reply      
Why would you reinvent a piece of technology offered by various cloud providers? It doesn't make any sense; what a waste of engineering resources (fun exercise though). They should have been able to reach some middle ground with AWS about pricing.

I really doubt their in-house storage backend will be cheaper compared to the lowest rate they can get from one of the cloud providers.

9
Adam unity3d.com
651 points by nikolay  3 days ago   131 comments top 29
1
valine 2 days ago 6 replies      
Much of what makes this appealing has nothing to do with the render engine. What I saw was some really excellent character animation / motion capture. The rendering itself wasn't particularly jaw-dropping. And that's not a critique of Unity; rather, it's a critique of their chosen subject. Metal, walls, and artificial objects in general are all very easy to render convincingly. Show me some trees, grass, translucency, or volumetrics and then I'll be impressed.
2
BinaryIdiot 2 days ago 3 replies      
Honestly the most impressive part to me was being able to convey a story of "human somehow put into a machine" pretty much only through physical acting. That's not something you see every day in video games.
3
bd 3 days ago 1 reply      
Here it is running real-time in Unity editor:

https://www.youtube.com/watch?v=eN3PsU_iA80&t=37m50s

It's a standalone PC project using DX11.

Cloth and cable physics simulation is pre-baked; lighting is dynamic.

4
emehrkay 2 days ago 4 replies      
I want to watch this movie/or someone play this game for a few hours. I can assume that Adam is a human who was put into a robot for some reason. There is some potential there, like Chappie re-imagined.
5
jitl 3 days ago 2 replies      
This reminded me of the pod scene from The Matrix (1999): https://www.youtube.com/watch?v=0WCcX0KQ9V0

Very similar tubes removal process :)

6
kmfrk 2 days ago 4 replies      
There have been a couple of games with atrocious performance on PS4, including Broforce and Firewatch - and Broforce is a 2D sidescroller(!)

I don't know whether Unity is innately bad, or whether frameworks in general just tend to enable bad code.

Would love to hear more from people who know more; right now I associate Unity with people starting out in games rather than a platform people continue to use after they hone their skills.

7
tlrobinson 3 days ago 3 replies      
> Rendered in real time with Unity

...on what hardware?

8
pcurve 3 days ago 0 replies      
love the ending... probably all going back to their cubes to write themselves some React JS code..
9
supercoder 2 days ago 2 replies      
Damn, I thought this was running in browser until I read the comments.
10
bcheung 3 days ago 1 reply      
Realtime with what hardware?
11
kmfrk 2 days ago 0 replies      
If you like the philosophy of this, I heartily recommend you check out the first season of Ghost in the Shell: Stand Alone Complex. The second season is so-so, but the first season is amazing and jam-packed with cyberpunk and philosophical challenges like this one.

They're also doing a (whitewashed) live-action version of some amalgamation of the movie and TV show, so you might as well watch the original (1.0) movie and anime before Hollywood ruins them for you.

12
pcurve 3 days ago 1 reply      
The best part was the real-time orchestral soundtrack.
13
kristofferR 2 days ago 0 replies      
Have Unity fixed the supposed big threading issues causing bad performance with v5.4?

https://youtu.be/HnVOi9wrZVU?t=188

14
BogusIKnow 2 days ago 0 replies      
The second part with the humans, especially, looked very real.
15
zyb09 2 days ago 2 replies      
Why not use the Unity Webplayer?
16
coolnow 2 days ago 1 reply      
Anyone else reminded of the Metal Gear Solid series? The cutscenes (especially in MGSV) had the exact same shaky cam and overall feel as this clip.
17
bunkydoo 2 days ago 0 replies      
This is pretty cool. I'd be really interested to play around with Unity to build webpages for VR or something of the sort.
18
grogenaut 2 days ago 1 reply      
I was confused for a bit, but it's a YouTube video, not running in-engine... you can understand how I'd be confused, as Unity has a web player.

Edit: Apparently they don't really have the web player anymore. Still confusing. Was actually hoping that for once I'd be proved wrong about WebGL / Emscripten.

19
Geee 2 days ago 0 replies      
Great idea for Unity to have their own demo team producing stuff like this to push the technology further.
20
keyle 2 days ago 0 replies      
Great demo by the Unity folks. Hopefully small studios can harness that power just as well as the people who make the engine. They usually know how to cheat it best.
21
daveheq 2 days ago 0 replies      
Great graphics do not make great games.
22
tibbon 3 days ago 2 replies      
This is... too good. This is seriously all in browser?
23
imperialdrive 3 days ago 1 reply      
woah, what is this??

nvm i scrolled down lol

24
edem 2 days ago 1 reply      
If it is rendered in real time, why do I see a YouTube logo in the bottom right corner?
25
lifeisstillgood 2 days ago 1 reply      
Wait, what? It says rendered in realtime. You mean there was an actor with dots on him/her, and that film was playing on a monitor next to them so the director could see the cables popping off?

Surely not

26
linhchi 2 days ago 1 reply      
How can a robot breathe?

Why does he try to take off the "mask"? How can he identify that it's not a part of his "body"?

27
intrasight 2 days ago 4 replies      
Moore's Law would predict that realtime rendering is inevitable. Now let's hope something interesting comes of it.
28
xyproto 2 days ago 0 replies      
This isn't realtime rendering, like demoscene demos. It's just faster rendering of a video. I think the webpage is misleading.
29
davidw 2 days ago 2 replies      
I'm tired of projects using 'Unity' as a name.

There's this, there was an Apache Unity thing that was a Java implementation. There's an Ubuntu thing. Who knows how many others.

10
More code review tools github.com
618 points by Oompa  3 days ago   145 comments top 23
1
js2 3 days ago 15 replies      
It would be nice if GitHub supported a Gerrit-inspired code-review process, where instead of having to choose between:

1) piling new commits onto the existing branch/PR, or

2) force-pushing and completely losing the old commits on the server

You could instead push into a "magic" ref-spec and the server would retain the original commits, but also the rewritten commits, such that a PR can have multiple revisions. This is somewhat hard to explain unless you're familiar with Gerrit and its "patch set" workflow.

Why? Because my preference is to rewrite history in order to address code review comments, such that what is finally merged leaves a clean history. But it is also valuable during the code review process to be able to look across the revisions to make sure everything has been properly addressed.

The only way to do both, today, is to create a new branch and PR for each "revision". i.e. new-feature, new-feature-rev1, new-feature-rev2. Then close the original PR and reference it from the new PR. A bit tedious.
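
For those who haven't used Gerrit, the push side of that workflow looks roughly like this (a sketch of Gerrit's convention, not something GitHub supports today):

    # address review comments by rewriting the same commit locally
    git commit --amend
    # push to Gerrit's "magic" ref; the server records this as a new
    # patch set on the same change, keeping earlier patch sets around
    # (Gerrit ties revisions together via a Change-Id commit trailer)
    git push origin HEAD:refs/for/master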

2
willchen 3 days ago 6 replies      
Does anybody else feel like GitHub has released more features in the last month than in the previous 6 months? I'm not sure if it's just a coincidence with all the attention they've gotten on HN, but these improvements are much appreciated!
3
numbsafari 3 days ago 7 replies      
I'd love it if there was a way for me to queue my comments before submitting. I often keep my comments in a separate textedit/nv window and then go back to put them in since I want to keep track of questions that arise as I'm reading, but I don't want to pepper someone with comments that would be resolved 200 lines later in the diff.
4
danpalmer 3 days ago 3 replies      
This is a great step in the right direction. I use GitHub for code review every day, and it has historically been very poorly designed for thorough reviews. These changes look great, I just hope that we get some sort of checking-off of review points, and accept/reject functionality.
5
willchen 3 days ago 0 replies      
I've noticed some open source projects (particularly Angular 2) follow this convention where they never actually "merge" the PR, but rather rebase it into their main branch and have a message in the commit to close the PR.

The advantages seem to be that you get a cleaner git history, and you can keep all the "in-between" / WIP commits that tell the story. Is this done through automated tooling, or is someone manually rebasing it into master? It seems to be a really useful practice, so I'm curious how people do it. It would be great if GitHub could offer this natively, as I think many power Git users appreciate the benefits of rebasing over merging.
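
One common way to do it by hand, sketched with a hypothetical PR number (whatever release tooling a given project layers on top isn't covered here):

    # GitHub exposes every pull request as a read-only ref
    git fetch origin pull/1234/head:pr-1234
    # replay the PR's commits on top of master, cleaning up as desired
    git checkout pr-1234
    git rebase master
    # fast-forward master to the rebased commits and push
    git checkout master
    git merge --ff-only pr-1234
    git push origin master
    # a commit message containing "Closes #1234" makes GitHub close the
    # PR automatically once the commits land on the default branch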

6
guelo 3 days ago 1 reply      
Sometimes when I submit a PR and I get a lot of feedback I get lost making sure I've addressed every comment. My #1 feature request would be being able to mark comment threads as resolved. The outdated comment feature sometimes works for this use case but mostly it doesn't.
7
alexwebb2 3 days ago 1 reply      
Did they kill commit-level comments?

I can no longer make comments on a given commit - the entry box at the bottom is gone. Looks like you have to scroll all the way back up to the top, switch to the "Conversation" tab, and then make a PR-level comment instead of a commit-level comment.

I hope I'm missing something here, because as it stands now it's a big step backwards.

8
jakub_g 3 days ago 0 replies      
I'll plug my work as always when talking about the topic:

I wrote a small script for Chrome/Firefox that I found useful for reviewing big PRs on GitHub. It gives you the ability to expand/collapse files and mark files OK/fail in order to come back to them later.

It works with mouse and keyboard (though I've noticed there are some issues with tab order after GitHub upgraded their code, and small UI glitches; I'll try to have a look at them soon).

It's a hack on top of GitHub, so it needs maintenance every couple of months, but overall it does its job well IMO.

I don't have much time to hack on it anymore, but community contribs are very welcome. I wrote some potential ideas for improvements as GH issues in the repo. AMA if you're interested.

[1] https://github.com/jakub-g/gh-code-review-assistant

9
eridius 3 days ago 0 replies      
These are some nice changes, though there's still plenty of things I want GitHub to make better about code reviewing.

One of these changes, and one that annoys me every single day, is when I get an email about a comment, the email doesn't include any context (e.g. it should include the previous comments on that line and probably the hunk as well). And when I click "View on GitHub" to see it in context, if the commit is now on an outdated diff, I get taken to a page that doesn't show the comment at all. It takes me to the Files view, but the Files view doesn't show outdated comments. If the comment is on an outdated diff then it really should take me to the Conversation view with the comment in question expanded.

10
pizza 3 days ago 0 replies      
If, next to every file, GitHub also listed its filesize, I would know exactly which file to begin looking at when I encountered a new repo (to a good approximation -- in any case, I would be able to intuit better decisions with a combination of filenames, the information in hypothetical README.md's, and filesizes than with READMEs and filenames alone).

Maybe this doesn't scale or something? It's something I've felt necessary for a while though. Maybe their user testing has concluded otherwise?

11
kdazzle 2 days ago 1 reply      
My biggest pet-peeve with GH code review is that line-comments are automatically folded whenever that line of code is changed. So if the changes to that line didn't relate to your CR or if they didn't actually fix anything, then your comment will pretty much be lost to time.

Plus, it would be nice to see the discussion around a particular line without having to go through the entire PR and unfolding each conversation to see if it's the right one.

12
netcraft 3 days ago 1 reply      
I wish it were possible to put comments directly on a line of code in any commit / repo outside of a PR. Sometimes someone wants me to review their code that they aren't submitting anywhere.
13
forgotpwtomain 3 days ago 0 replies      
I can't say I'm a fan. If I select a single commit, instead of giving me a list with a single commit check-marked, it gives me only that single commit and an option to 'show all commits'. So to change the commit you are viewing requires clicking 'show all commits' (waiting for them to load) and then selecting the other commit you want to view (which should just be a check box).

Also, it totally baffles me that these actions all require server-side requests. It's really a lot easier to go to the old commits tab and ctrl-click to open a tab for each commit than to use this new feature.

At one point Github offered the best and cleanest UI of all the alternatives - but I doubt this will be the case for long at this rate.

14
hwangmoretime 3 days ago 3 replies      
Code review is something all of us do, but all of us do it differently. Anyone know of any nice frameworks, articles, or blog posts for code review? I'm particularly interested in the case where knowledge transfer is a high priority in the code review.

My current side project integrates feedback theory [1], to provide scaffolds and other cues to help and remind reviewers to give high quality feedback. Thus, my interest.

1: https://scholar.google.com/scholar?q=%22The+Effects+of+Feedb...

15
xkarga00 3 days ago 1 reply      
Dear Github, can you please bring the search bar back (w/o the need to log in)?
16
Negative1 3 days ago 1 reply      
Nice additions. Can't help but feel at some point you'll be able to edit and compile code directly on Github (with some sort of compiler/CL backend). Has Github ever discussed integrating Atom directly into the site?
17
stormbrew 2 days ago 0 replies      
Only related to review in a somewhat indirect way, but I really wish the commit view on github (as well as other git tools) had an equivalent of --left-only command line option to git-log. It's incredibly useful for viewing a high level of the history of a repo that uses merge bubbles, and lack of tooling around doing just that seems like that main reason people do things like squash their branches before merging to master (which is where I think it connects back to review).
18
artursapek 3 days ago 0 replies      
Awesome to see GitHub stepping to the plate with these updates. IMO the biggest thing missing in the PR's diff view is a git blame column beside the changes, so if someone is writing good commits I can glance through the entire diff and see which changes were introduced together and why.

Just a feature request, and I know how annoying those can be on the receiving end. Great updates either way.

19
known 2 days ago 0 replies      
20
pearlsteinj 3 days ago 1 reply      
I've been pleasantly surprised at how fast Github has been moving since that critical open letter came out. For a giant company like Github they've been releasing developer tools very quickly.
21
debacle 3 days ago 0 replies      
I'd like block-level comments and maybe even an in-commit commit function (the ability to commit to a PR branch while in the PR). That's all I really care about.
22
Jemaclus 3 days ago 1 reply      
I think the biggest thing I want is pagination for extremely large commits. Any ideas if that's gonna happen?
23
fiatjaf 3 days ago 0 replies      
No matter how much they do, people will still complain.
11
Should All Research Papers Be Free? nytimes.com
640 points by mirimir  5 days ago   306 comments top 43
1
kriro 4 days ago 7 replies      
If it is funded by government in any way (public university, research project), I think it is borderline defrauding the taxpayer that research funded by tax money is not free by default. Since close to all research is government funded in some way, shape or form... my answer would be yes in the general case.

I think the long term answer is decentralized publishing. Publish everything you do on a university or private website and let others decide if it's good or not when they want to cite it, instead of a peer review that is set in stone. I think people reading papers and deciding if they want to cite you are smart enough to figure out if it's good research or not. The peer review process is overrated (and quite often suffers from insider networks). If you decentralize publishing, you can also have other researchers upvote a paper to basically approve of the academic standards in the paper. I also think the static nature of papers is a problem. I'd much rather cite a specific version of the paper. I'm thinking about git and pull requests, along the lines of "want to cite, fixed layout" or "new research disproves this" etc.

2
robertwalsh0 4 days ago 6 replies      
Full disclosure: I'm a founder of a company called Scholastica that provides software that helps journals peer-review and publish open-access content online. One of our journal clients, Discrete Analysis, is linked to in the NYT article.

It is incredibly obvious that journal content shouldn't cost as much as it does.

- Scholars write the content for free

- Scholars do the peer-review for free

- All the legacy publishers do is take the content and paywall PDF files

Can you believe it? Paywalling. PDFs. For billions.

Of course the publishers say they create immense value by typesetting said PDFs, but as technologists, we can clearly see that this is bunk.

There's a comment in this thread that mentions the manual work involved in taking Word files and getting them into PDFs, XML, etc. While that is an issue, which you could consider a technology problem, it definitely doesn't justify the incredible cost of journal content that has been created and peer-reviewed at no cost. Keep in mind that journal prices have risen much faster than the consumer price index since the 80s (1).

The future is very clear, academics do the work as they've always done and share the content with the public at a very low cost via the internet.

PS. If you want a peek into how the publishers see the whole Sci-Hub kerfuffle, check out this post from one of their industry blogs - the comment section is a doozy: http://scholarlykitchen.sspnet.org/2016/03/02/sci-hub-and-th...

1. https://cdn1.vox-cdn.com/thumbor/jtj2dzMfklULQipRZt_3xaLoFxU...

3
payne92 5 days ago 4 replies      
I feel especially strongly that papers that result from taxpayer-funded research should be free.
4
reuven 4 days ago 2 replies      
When I finished my PhD at Northwestern, part of the university's procedure involved going to the ProQuest Web site. ProQuest is a journal and dissertation publishing company.

They asked if I wanted my dissertation to be available, free of charge, to anyone interested in reading it.

Clicking on "yes, I want to make it available for free" would cost me something like $800.

Clicking on "no, I'll let you charge people to see it" would cost me nothing.

Having just finished, and having gone into debt to do so, it shouldn't come as a surprise that I wasn't rushing to pay even more. So now, if people want to see my dissertation, they have to pay -- or be part of an institution that pays an annual fee to ProQuest. (BTW, e-mail me if you want a copy.)

My guess is that it's similar with other journals. And while professors have more than PhD students, they have limited enough research funds that they'll hold their nose, save the money, and keep things behind a paywall.

Which is totally outrageous. It's about time that this change, and I'm happy to see what looks like the beginning of the end on this front.

5
imglorp 5 days ago 3 replies      
Some things, like dissemination of knowledge, are truly in the interest of all humanity. It seems criminal that a few hundred people at the publishing houses should benefit at the expense of billions' welfare.
6
stegosaurus 5 days ago 3 replies      
All everything 'should' be free. At least, that which is not scarce.

The correct question to ask is 'can' all research papers be free - does the world continue to spin, will research still happen, will we still progress, if they are free?

The only reason we even have this debate to begin with is because the producers of this information require scarce/controlled resources in order to survive.

7
davnn 4 days ago 0 replies      
I think Elbakyan should do everything she can to make Sci-Hub easily replaceable. Once it's hosted in multiple places, it will be much harder to shut down.

Maybe completely free research papers are not the future but there should be a Spotify for research papers that is affordable for everyone. I hope that Elbakyan will reach her goal and ultimately change the whole industry.

8
platform 5 days ago 0 replies      
Taxpayer funded research must be free to read.

Also, research that has been at least partially tax-funded, resulting in a publication, must not be usable as a necessary ingredient for a commercial patent.

That is, a patent can include this type of research, but it cannot be a 'necessity' for the patent to be viable. Or, if the particular research is necessary for a given patent to be viable, the patent must grant no-fee, no-commercial-strings-attached use.

This allows a corporation to establish patents as a means to protect itself, while allowing the tax-funded research to be used by others without commercial strings attached.

9
jammycakes 5 days ago 0 replies      
Something I'd like to see here: results published in research papers aggregated and released as open data.

There must be a lot of interesting meta-analyses that aren't getting done because the necessary data is locked away behind paywalls, and usually not in an easily machine-readable format into the bargain.

10
tomahunt 4 days ago 2 replies      
There must be thousands of people who could use free access to research papers: PhDs and Masters now in industry trying to apply the state of the art, engineers who have worked their way into a subject, concerned citizens who want to read the source material.

I am a PhD who'd love to be working in industry, but I'm shit scared that once I leave the gates of the university I'll simply lose touch with the state of the art because the papers will no longer be accessible.

11
bloaf 4 days ago 0 replies      
Yes. They should.

It is in the best interests of humanity to make the knowledge obtained through research available to anyone looking for that knowledge. There is a clear consensus among scientists that the current publishing model is at best inexpedient and at worst hostile to that end.

Most people are asking what good the current publishing model provides, but I think to answer that question we need to ask: "compared to what?" It seems clear to me that the current model is better than having no publishing mechanism at all, but I doubt that anyone seriously thinks that the "none" model is the only alternative.

I think that if we sat down today and thought up a new publishing model from scratch, we would be able to outdo the status quo on just about every "good" people have mentioned here, as well as provide features that the current model is incapable of. I think it is highly likely that we could make a system that ran on donated resources alone.

Some things we might want/have in a "from scratch" model:

1. Direct access to data-as-in-a-database instead of data-as-a-graph-in-a-PDF

2. Blockchain-based reputation system for scientists

3. P2P storage and sharing of scientific data

4. Tiers of scientific information, e.g. an informal forum-of-science, semi-formal wiki-of-science, and formal publications

5. Automated peer review process

6. A better and more consistent authoring tool for scientists

12
denzil_correa 5 days ago 0 replies      
> The real people to blame are the leaders of the scientific community - Nobel scientists, heads of institutions, the presidents of universities - who are in a position to change things but have never faced up to this problem in part because they are beneficiaries of the system, said Dr. Eisen. University presidents love to tout how important their scientists are because they publish in these journals.

For me, this is the crux of the problem. People who are in a position to change things should push for it.

13
ycmbntrthrwaway 5 days ago 1 reply      
The main problem with tax-funded research and grants is that money is given in return for citations in journals with high "impact factor". As a result, publishers of those journals are indirectly supported by the state. Instead, the government or funding organizations should review the results of the work for themselves, but they are unable to do it, because they usually don't understand a thing about the research subject.
14
arbre 5 days ago 4 replies      
Can someone explain to me why the researchers themselves don't publish their work for free? The article says they are not paid for the articles, so I don't see why they couldn't do that.
15
cft 4 days ago 1 reply      
Publishing used to cost money when it required physical printing/distribution/storage of journals. Now all of this is basically free, but they still charge. Most theoretical physicists, for example, only care about "publishing" in the arXiv (all free, open source). The traditional publishing is ridiculous.
16
naveen99 23 hours ago 0 replies      
I plan to submit my next paper to peerj https://en.m.wikipedia.org/wiki/PeerJ
17
return0 5 days ago 0 replies      
I hope this publicity doesn't lead to a swift shutdown of Sci-Hub. She provides us with a great service that helps many researchers work faster. We should also commend her for stirring the most lively debate about an anachronistic and dumb publishing system.
18
mrdrozdov 5 days ago 1 reply      
This isn't the right question. The question is, "Who should be profiting from research papers?" The Journal performs quality control for the sake of consistency and prestige, but the papers and their reviews are put together by researchers, commonly at great cost for marginal personal gain. The article's hero doesn't really care. She needs to read papers, and needs other people to be able to read them, so she built sci-hub (demo: https://sci-hub.io/10.1038/nature16990).
19
sekou 3 days ago 0 replies      
Providing more open access to existing research information is just as important as empowering people to share and distribute the findings of their research in formats that both machines and people can understand. I believe we have already produced large amounts of data in wildly different fields of study that can potentially be used with the help of machines (and the diverse perspectives of many humans) to solve problems for which we currently don't have answers.

It looks like the Fair Access to Science and Technology Research (FASTR) bill linked in the article would be a step in the right direction for US citizens. I wonder how other forces (like Sci-Hub) will affect the direction of things to come.

20
catnaroek 4 days ago 2 replies      
What follows is just my very uninformed opinion. I'm not a scientist myself, but my interest in CS and math has made me an avid reader of scientific papers and books - whenever they're publicly available, that is.

What publishing houses do is exploit the rules of the social games that scientists themselves willingly play. When the importance of an academic work is judged by the names of its authors, or by the name of the journal in which it is published, or by the presence of fashionable keywords in its title or in the abstract, scientists are giving publishing houses the very rope with which they will be hanged. So, while the behavior of publishing houses is certainly antisocial and most abominable, it is only made possible by the very scientific community that condemns it.

Is there any fundamental reason why scientists can't always submit their papers to the arXiv, and let the web of citations determine their relative importance?

21
justncase80 4 days ago 0 replies      
I was thinking a good startup company may be an open publication and peer review site. Something where users are non-anonymous and they are weighted by their accomplishments irl. Submissions and peer reviews would be open to anyone but weighted heavily by ranking, which is affected by irl achievements and cumulative quality contributions to the site. Like a combination of stack overflow and wikipedia maybe.

Money would be made by donation (ala wikipedia) and paper submission fees. Perhaps organizational level membership fees, such as universities, etc.

Just an idea I haven't had time to work on.

22
yeukhon 5 days ago 1 reply      
I am still willing to pay for a high-quality printed version of research journals, but the online access I think we should simply give away, because research knowledge should belong in the public domain when you choose to publish the knowledge with a research journal. You are not publishing a paper within your 4x4 walled intranet.

But I get it. There is a business cost behind running a journal / magazine (although not all reputable ones charge fees!). So here is the radical question: why the fuck do we need 100+ CS-related journal publishers out there? All we need is one.

23
alkonaut 4 days ago 0 replies      
Let's rephrase the question: should public research funded with public money have results available to those who paid for it?

This isn't Elsevier's fault, or the researchers' fault, or the universities' fault; it's the fault of whoever distributes public money to research without having proper criteria for what is expected in return.

24
n2dasun 2 days ago 0 replies      
I found it chuckleworthy that this was posted on the NYT site, which is known for paywalled articles.
25
tn13 4 days ago 0 replies      
All state-funded research must be in the public domain. Everywhere else, the one who funds the research must decide what to do with it.
26
erikpukinskis 4 days ago 0 replies      
A better question to me is "Should there be fields where distributing your work for free will harm your academic career?"
27
ajuc 4 days ago 0 replies      
Science funded by taxes should be free, obviously.
28
guico 4 days ago 0 replies      
I wonder, in the end what's really holding open-access publishing back? What can we do, as technologists, to help fix it?
29
jeffdavis 4 days ago 1 reply      
Nothing is "free", the only question is: "who pays, and how does that change the incentives and results?".
30
pmarreck 4 days ago 0 replies      
How is something that is, in essence, "truth determination and dissemination," not free?
31
dschiptsov 4 days ago 0 replies      
At least they will get wider and less biased and institution-conditioned reviews.
32
leed25d 4 days ago 0 replies      
Research funded by taxpayer dollars should be free to those who paid for it.
33
jimjimjim 5 days ago 5 replies      
unpopular opinion ahead: no, and probably not even for taxpayer funded research.

Can you demand a lift in a garbage truck? Or in a tank? Both of these things are provided by local or central government. Why not? Because it distracts from the job that they are there for. The same can be said for research (and source code). It takes time, effort and money to publish and peer review research. If journals can't make money providing access to the research, who is going to pay for it?

Also, there is currently a lot of BAD research out there. Domain experts don't have time to review all of it. Journals with prestigious names act as filters and as a sort of priority queue for where you should look first.

34
baby 4 days ago 0 replies      
You don't want me to read your paper? Charge for it.
35
sandra_saltlake 4 days ago 0 replies      
required open access publication
36
adultSwim 4 days ago 0 replies      
Yes
37
kombucha2 4 days ago 0 replies      
Yes
38
Chinjut 5 days ago 1 reply      
Yes.

("Betteridge's law of headlines" fails)

39
x5n1 5 days ago 1 reply      
What benefit do the publishers provide to anyone? Why do the publishers deserve billions of dollars?
40
lacker 5 days ago 0 replies      
Why does it cost $3000 to publish an article?? You can put it on Medium for free.
41
arek_ 5 days ago 1 reply      
Who will produce meaningful research for free?
42
julie1 4 days ago 0 replies      
Should Leibniz's, Kepler's, Einstein's, Newton's, Fleming's, and Jenner's papers have been free?

Oh, they were. And it is thanks to Fleming that people all over the world stopped dying in atrocious ways from gangrene.

Should citizens be able to access state-of-the-art research about how to prevent vascular disease, about dietetics, chemical pollution, the effects of fracking, and urbanism, to help them make good choices when voting? I do think so.

Because no vote exists without enlightened choices.

Choosing without relevant information on future choices that will impact everyone requires citizens who can form a constructed opinion. And experts are doing a poor job of being right (look at the financial regulations made by experts and the long-lasting crisis since 2000).

So yes, papers should be free to support the exercise of democracy.

43
ikeboy 5 days ago 1 reply      
>The largest companies, like Elsevier, Taylor & Francis, Springer and Wiley, typically have profit margins of over 30 percent, which they say is justified because they are curators of research, selecting only the most worthy papers for publication.

> But that financial model requires authors to pay a processing charge that can run anywhere from $1,500 to $3,000 per article so the publisher can recoup its costs.

These two facts seem to point strongly to the publishers' being in the right. 30% is not a high number. If they were to lower their prices by 30%, running completely as nonprofits (or whatever number would break even), do you think people's complaints about difficulty of access would go away? If not, your complaining is not about their profit.

And you can't seriously expect them to eat a >$1000 loss on every paper.

Either we need a single party to fund upfront, like the government, or we need some other way to pay for it.

12
Neural Networks Demystified lumiverse.io
684 points by maxlambert  4 days ago   62 comments top 25
1
yxlx 4 days ago 4 replies      
I am very interested in this subject but I was unable to finish watching these videos. The background music is incredibly distracting. I have a name for that kind of music: I call it Silicon Valley Music, because it is the kind of music used in a lot of startup product videos. The narrator's voice style is also pretty much the same as the one used in those. I wanted to like these videos, but I did not feel that they had any value whatsoever. With all of these kinds of videos, I feel like the producers want the audience to feel bliss or learning, but they never actually deliver on that, so instead it seems insincere and underhanded.
2
rayalez 4 days ago 7 replies      
Hey, everyone! I'm the founder of lumiverse.io, it's pretty incredible to see our website on the front page of HN!

I want lumiverse to become an awesome community where people can discover and discuss great educational videos.

We've launched only recently, the site is still in active development, I'm improving it every day. If you have any feedback - please let me know =)

(Also feel free to contact me at raymestalez@gmail.com)

3
max_ 4 days ago 1 reply      
Every beginner in neural networks should probably start with this and follow with Karpathy's http://karpathy.github.io/neuralnets and maybe later on http://neuralnetworksanddeeplearning.com/
4
elsherbini 4 days ago 1 reply      
These are also available on Stephen Welch's youtube channel[1]. He uploaded the last one in this series in January 2015.

[1]: https://www.youtube.com/playlist?list=PLiaHhY2iBX9hdHaRr6b7X...

5
pjdorrell 1 day ago 0 replies      
Plan A: The material is difficult, so present it very slowly and carefully.

Plan B: To avoid boredom, present the material as quickly and densely as possible, with lots of constantly changing detail, with simultaneous visual, audio and even some light background music.

These videos are a bit like "Hitchhiker's Guide to the Galaxy" meets "Khan Academy".

Some people will like them, and some won't. I like them.

6
0vermorrow 4 days ago 0 replies      
My initial feedback would be that the music is way too loud and that I really like that you used Python to show how to construct the bits and pieces.
7
protomyth 4 days ago 0 replies      
Nice. One suggestion: please ditch the music, it's distracting and doesn't play well with the speaker's voice.
8
jordigh 4 days ago 1 reply      
Why have neural networks become synonymous with backprop networks? Is it because those are the most successful? What happened to bidirectional associative memory and Kohonen maps? Does anyone take the biological inspiration of neural networks seriously anymore?
9
cjcenizal 4 days ago 0 replies      
This is really great, but like a few other people, I found the music incredibly distracting. At first I thought I had left my Spotify on!
10
V-2 4 days ago 2 replies      
At the risk of not sounding very clever, I have to admit it's way too fast for me
11
Achshar 4 days ago 1 reply      
Man, the second video gets steep, quickly.
12
therobot24 4 days ago 1 reply      
no one can ever claim that there is a shortage of tutorials with regard to neural nets (and deep learning)
13
rdlecler1 4 days ago 1 reply      
To demystify NNs further we need to stop graphically representing spurious interactions. If you can perturb or remove a link w_ij between any two neurons i & j without affecting the output, then that interaction is spurious and shouldn't be included in the graphical network representation. Doing this iteratively, you can start to better appreciate that neural networks are computational circuits that use threshold functions instead of logic gates.
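A minimal sketch of that pruning idea, assuming a toy numpy weight matrix; the 0.1 magnitude cutoff below is an arbitrary stand-in for a real perturbation test:

  import numpy as np

  # Toy 4-neuron weight matrix: w[i, j] is the link w_ij from neuron i to j.
  w = np.array([[0.0,  0.9, -0.02, 0.4],
                [0.3,  0.0,  0.7,  0.01],
                [0.05, 0.6,  0.0, -0.8],
                [0.2, -0.03, 0.5,  0.0]])

  # Treat a link as spurious if it is too weak to matter; a plain magnitude
  # threshold stands in for "perturb or remove it and check the output".
  pruned = np.where(np.abs(w) < 0.1, 0.0, w)
  print(np.count_nonzero(w), "links before,", np.count_nonzero(pruned), "after")  # 12 before, 8 after

  # Each surviving neuron then acts as a threshold gate over its inputs.
  def fires(inputs, weights, bias=0.0):
      return np.dot(inputs, weights) + bias > 0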
14
masthead 2 days ago 0 replies      
People who are having a difficult time concentrating can clone the git repo, which has IPython/Jupyter notebooks.

Link to the Anaconda installation (by Continuum): https://www.continuum.io/downloads

Link to the Git repo: https://github.com/stephencwelch/Neural-Networks-Demystified

15
pawelwentpawel 4 days ago 0 replies      
It's a great short introduction, really worth looking through the code examples that they have on github too - https://github.com/stephencwelch/Neural-Networks-Demystified

There is certainly no shortage of new tutorials bubbling up on neural nets. One of my favorites - https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearni...

16
JD557 4 days ago 1 reply      
In case the author of the videos is here, do you have plans to add some videos about Convolutional NNs and Recurrent NNs?

I know a lot of developers like myself that know about traditional NNs, but are not familiar with those two.

17
txprog 4 days ago 0 replies      
I liked them! I've already done the first 3; the music didn't bother me, nor did the voice (I'm not a native English speaker at all). As for the concepts explained, it's true that some equations are a little out of reach for someone with less mathematical background. But I understand the concepts so far, and they are well explained.

I'm not sure I would be able to just write (or decide on) the equations for the neural network myself. On what criteria do you decide whether this will work better or not? What is your key to making a decision?

18
rjcrystal 3 days ago 0 replies      
I saw some of the videos and I agree with the suggestions provided here, but I really like the simple way of explaining things and the less mathematical, more programmatic approach. It'd be awesome if you could build some neural networks with CUDA or a library like TensorFlow or Theano. Good luck.
19
birdwatcher9 4 days ago 0 replies      
get rid of the music and find a narrator who uses less sibilance
20
coherentpony 4 days ago 0 replies      
Cool. I have a word of advice, though: the 'scalar product' of two vectors is a scalar, not a vector. In the back-propagation video you misspoke here.

Otherwise, good job.
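For anyone unsure of the terminology, a two-line numpy check makes the point:

  import numpy as np

  a = np.array([1.0, 2.0, 3.0])
  b = np.array([4.0, 5.0, 6.0])
  s = np.dot(a, b)   # scalar product: 1*4 + 2*5 + 3*6
  print(s, s.shape)  # 32.0 () -- a single number with empty shape, not a vector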

21
TheAwesomeA 4 days ago 0 replies      
Great videos!

Does somebody know a source for a nice data analytics/machine learning taxonomy or something similar (grouped by the class of problems the different methods solve)?

22
BinaryIdiot 4 days ago 0 replies      
This looks really interesting. Thanks! I'll save these to start watching later.
23
alexjv89 3 days ago 0 replies      
Awesome video! Crisp and to the point.
24
dubmax123 3 days ago 0 replies      
great content, most annoying music ever!
25
sandra_saltlake 3 days ago 0 replies      
Less mathematical, more programmatic approach.
13
Handful of Biologists Went Rogue and Published Directly to Internet nytimes.com
490 points by srikar  3 days ago   165 comments top 28
1
dekhn 3 days ago 6 replies      
When I first found out about the web in the early 90's, it was "obvious" that the role of the web was to expand scientific publishing. I expected that everybody would publish latex files (raw source, not PDFs) in computationally accessible formats, with raw data in easily parseable, semantic forms.

That didn't really happen as expected. In my own chosen field (biology) it happened much more slowly than I hoped - physics (with arxiv) was far better. However, just getting PDFs on biorxiv is only a small part of the long game. I did not appreciate the huge effect that publishing in high-profile journals has on one's career trajectory, and how large a role that would play in slowing the transition to free publication and post-publication review.

The long game is to enable the vast existing resources, and the new resources, to be parsed semantically by artificial intelligence algorithms. We've already passed the point where individuals can understand the full literature in their chosen field and so eventually we will need AI just to make progress.

2
blakesterz 3 days ago 2 replies      
I was working in academia back in 2002 and I remember talking about this crazy open access thing, and blogs, and wikis, with the folks on the tenure committee then. I remember thinking how fast this tenure/publishing thing would change in the next few years. And here it is, 13 years later, and there's a headline about a HANDFUL of biologists going ROGUE and daring to publish a PREPRINT!? I know there's been quite a bit of progress, but I'm still surprised at just how little things have changed.
3
nickbauman 3 days ago 0 replies      
Uh ... isn't this exactly what Tim Berners-Lee intended scientists to do when he created the world wide web? It's like he handed a machine gun to us cavemen scientists 25 years ago and we've been collectively clubbing him in the head with it ever since.
4
reporter 3 days ago 1 reply      
I just uploaded three articles in the last two weeks to bioRxiv. The papers were previously just sitting in review. I have already received several emails thanking me and informing me that my work is influencing the manuscripts others are writing - more citations. Overall it has been an extremely positive experience for me. I don't really see any downsides. So excited for the revolution.
5
slizard 3 days ago 0 replies      
While this is nice in that it sets an example (and provides publicity to bioRxiv), it's not a couple of Nobel laureates posting one out of the dozen(s) of papers they publish per year that will make the real difference.

The system is aged and inefficient (some would even argue it's rotten) and IMO comprehensive changes are needed. Just as racial or gender discrimination can't be addressed without changing the social rules people live by, the current academic system, which is rather elitist, non-inclusive, discriminatory, and often more biased and less fair than many think, needs to change substantially.

Such change will be aided by important people setting examples (even if they often go back to their old ways). However, more substantial change is needed on multiple levels, most importantly: academic leaders and funding agencies (run by the former) need to stop looking at who's who and how many Nature/Science/insert-your-fancy-journal papers the person has. For instance, the culture of applying for grant money with work that's half done to maximize one's chances needs to stop, and so should the over-emphasis on impressive and positive results.

Additionally, publishers exploiting everyone need to die out, and as long as these researchers "go rogue" with a single paper (rather than, for instance, committing to publish 100% preprints and >75% open access), not much will change.

6
jackcosgrove 3 days ago 2 replies      
I know the "wisdom of crowds" is passe, but the continued success (all things considered) of Wikipedia and open source software really makes me question the value of quality gatekeepers. I know I'm biased because I work in software and the costs of mediocrity in this industry are less than in others, but I think we could speed up innovation and discovery if we opened up science and made it more publicly accessible and collaborative. At some point the gatekeepers are just protecting their turf and hold back progress.
7
hwang89 3 days ago 4 replies      
What's the best way to support and reward these researchers? Something we can do in the next five minutes while they have the reader's attention.
8
timrpeterson 3 days ago 1 reply      
Nobel prize winning scientists can go rogue. Until these same rogues hire incoming professors based on their bioRxiv papers, this is a small advance.

This whole thing needs to start at the level of the funding agency, namely the NIH. Publishing in a good journal is a prerequisite to getting a grant. Try getting an R01 on a bioRxiv paper. Not gonna happen.

9
chrisamiller 3 days ago 1 reply      
This article is a little bit breathless. In the academic circles that I run in (genomics, computational biology, cancer), bioRxiv is not "going rogue". It's becoming pretty common, and will continue to increase in popularity as the FUD surrounding preprints and high-impact journals begins to dissipate, i.e. "Nature won't accept my paper if it's on bioRxiv!" (Yes, they will.)
10
cs702 3 days ago 1 reply      
When a mainstream publication like the New York Times has a positive article about Nobel-prize-winning scientists bypassing the choke-hold of established journals by directly publishing preprints online, you know it's the beginning of the end for the old, bureaucratic way of publishing scientific research.

Awesome.

11
pnathan 3 days ago 3 replies      
I talk with a doctoral candidate in Chemistry regularly about the different paper cultures. It's amazing how different the disciplines are... her account of Chemistry is that (I synthesize) they are extremely locked down and research is very much aimed toward getting patents granted. Knowledge sharing with the broader chemistry community does not appear to be a key goal.

It was a huge shock to me, coming from CS: knowledge sharing has such a high value in our community.

12
mirimir 3 days ago 0 replies      
Even cooler, I think, are "working papers". In my arguably limited experience, this seems to be popular mostly in economics. As I understand it, authors are soliciting comment from peers, and thinking becomes long-term collaborative. It's a conversation, not a paper. Maybe scientific research can become even more open and collaborative, using the GitHub model or whatever.
13
devy 3 days ago 1 reply      
Is it just me, or does this NYT title reflect a negative view of releasing research results as preprints by using the phrase "went rogue"?

Speeding up knowledge sharing to solve more problems more quickly is a good thing IMO. As the same article points out, physicists have been releasing research results as preprints since the 1990s.

14
ylem 2 days ago 0 replies      
I'm a physicist and we routinely publish on the arxiv. I hope the chemists are next (a number of chemistry journals ban preprints)!!!
15
yeukhon 2 days ago 0 replies      
Totally off topic, but here is an anecdote I heard from a professor when he was explaining the review process. One reviewer argued to change

 Figure 5. The statistics ....
to

 Figure (5) The statistics ....
because the reviewer liked the () format better, although (IMO) the new format is so ugly.

Another reviewer saw the comment and a really nasty debate ensued. The paper was published with the original format nonetheless.

16
larakerns 3 days ago 0 replies      
This needs to happen in more scientific fields, like a field-specific consensus publishing platform. Everyone agrees to publish their research to benefit everyone else.
17
return0 3 days ago 1 reply      
That is great (or, to be honest, it's way past time for this; it shouldn't be news). But we need to go beyond that. The PDF format is a relic. We need a platform where scientists can directly edit their articles. Figures should be replaced by interactive visualizations where possible. This would solve the problem of data availability and allow other researchers to have direct access to the data shown in a plot.
18
octatoan 2 days ago 0 replies      
There's a massive project going on in math using GitHub to write an open-source algebraic geometry textbook.

http://stacks.math.columbia.edu

19
batbomb 3 days ago 4 replies      
Why not just use arxiv.org?
20
kusmi 3 days ago 0 replies      
So it was put online without peer review? Papers can always be submitted by the author to PMC using NIHMS if the journal doesn't do it. However the paper must go through a journal because they arbitrate the peer review.
21
zem 3 days ago 3 replies      
can someone explain this bit:

> If university libraries drop their costly journal subscriptions in favor of free preprints, journals may well withdraw permission to use them

withdraw permission to do what exactly, and enforced how?

22
p4wnc6 3 days ago 1 reply      
This isn't that innovative. Creationists have been doing this for years.
23
tevlon 3 days ago 2 replies      
Does somebody know why we are still using PDFs for papers? I know a lot of people who are trying to parse PDF files, and it is an awful process.

If somebody is looking for an idea for a new venture, this is a problem yet to be solved!

24
danieltillett 3 days ago 1 reply      
If I were an evil journal editor I would use the metrics on bioRxiv to accept or reject papers. This would make it easy to predict the future impact factor and help you game the impact factor of your journal.
25
the_watcher 3 days ago 0 replies      
The concerns over "peer review" seem ridiculous to me. The peers who would review it would still have access, and it would open it up to exponentially more people.
26
cmurf 3 days ago 0 replies      
What about ODF? The Document Foundation has been separate from the content creation applications for a couple of years now.
27
z3t4 2 days ago 0 replies      
What's stopping them from writing in HTML with some standard CSS?
28
dfraser992 3 days ago 0 replies      
Information wants to be free, BITCH... But seriously, given the increasing "media savviness" of subsequent generations (from Baby Boomers who grew up with TV, to Gen Y for whom the Internet is a given), the general ability across the spectrum of humanity to synthesize disparate information sources and filter them, compare and contrast, and decide what is 'truthy' vs actually true is increasing. Given all the information scientists have to process... what if machine learning were applied to this problem? The role of traditional gatekeepers is breaking down. I see this in the publishing industry - lots of content, most of the self-published books are awful, but books like "Wool" are able to rise to the top.

At least I hope humanity is getting more sophisticated. What is the median age of Trump supporters, and the one-sigma std dev? That would be an interesting statistic.

14
Open Sourcers Race to Build Better Versions of Slack wired.com
494 points by butterfinger  2 days ago   376 comments top 37
1
scrollaway 2 days ago 17 replies      
A hundred different chat systems mentioned here. None of them compatible with one another. Well, I guess a couple of them have IRC gateways, but you have to actually set those up.

Gee, wouldn't it be nice if you could pick and choose your UI? Pick and choose your "integrations", your "plugins", your client, etc without having to lose your entire userbase, history, contacts?

Some sort of open protocol.

Urgh, I've been working on this lately and the entire field is depressing. Between one side that doesn't understand the legitimate need for open protocols, and another side that doesn't understand why IRC isn't the end-all-be-all of group chat (and why UX matters), it's just people talking past each other.

Every new attempt at making "the slack killer" makes this problem worse, because it comes with its own protocol. Its own users. Etc.

PS: If somebody has free time to work on an open source multi-protocol group chat gateway, email me, I have something started.
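For the curious, the core of such a gateway is small. A rough sketch in Python, where the backend objects and their methods are entirely hypothetical placeholders for real protocol adapters:

  from collections import namedtuple

  # One normalized message shape, whatever the wire protocol was.
  Message = namedtuple("Message", "network channel author text")

  class Gateway:
      def __init__(self):
          self.backends = []  # e.g. hypothetical IRC, XMPP, Slack adapters

      def register(self, backend):
          self.backends.append(backend)
          backend.on_message = self.relay  # adapter calls this per inbound message

      def relay(self, msg):
          # Fan the message out to every network except the one it came from.
          for backend in self.backends:
              if backend.name != msg.network:
                  backend.send(msg.channel,
                               "<%s@%s> %s" % (msg.author, msg.network, msg.text))

The hard part isn't this loop; it's mapping identities, history and channel semantics across protocols that disagree about all three.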

2
navait 2 days ago 5 replies      
For an article talking about a race, Wired seems to have neglected a crucial technology: IRCv3[1].

I'm not very fond of IRCv2, and have moved away from IRC where I previously used it. But IRC is popular, has a lot of clients, and v3 has a lot of promise. Why didn't the author bother to mention it?

[1]: http://ircv3.net/

3
matthewmacleod 2 days ago 1 reply      
It's good to see a bit of variety; we all know that monoculture ends up being a bad thing, and there are obvious downsides to applications hosted by third parties (though of course in some cases it's not a problem!). Of course, there's no real network effect with applications like this either (beyond integrations support), so there's a relatively low bar to switching when compared to something like Facebook.

I do hope to see some work on better user interfaces. Our team recently switched to Slack from Hipchat - many features are better, but the desktop UI is really unpleasantly sluggish. It seems unreasonable for a chat application to stutter and freeze on even small amounts of data. The trend for wrapping web UIs in desktop apps is not a good one, in my experience. Hopefully open equivalents will encourage the development of native clients.

4
andrey_utkin 2 days ago 1 reply      
What about reusing and polishing XMPP and the existing decent software for it? What about donating money to the people who have developed it all these years? No, we'll spread FUD about XMPP or just ignore it, but we'll proceed to use and abuse its legacy behind closed doors.

http://risovach.ru/upload/2016/03/mem/novyy--shablon_1085837...

5
parfe 2 days ago 1 reply      
In my org we started using slack and the people who talk the most say the least.

Important conversations already took place on our internal jabber server. People with nothing to say never connected, but now with animated gifs and embedded youtube they use slack for entertainment.

Never thought I'd see Eternal September in a professional environment.

6
wslh 2 days ago 0 replies      
I think the Achilles heel of almost every open source project is [end-]user experience. The first time I used Slack I was sure it would be trivial for a small team of developers to replicate the software (not the business), but a short time later I realized there are a lot of UX details that a normal open source team will never prioritize (e.g. conversationally educating the user about new features).

Probably the weakest point of Slack is its price. A similar open source SaaS offering will not be free either, so Slack has margin to compete.

7
whitegrape 2 days ago 1 reply      
It's been built: http://matrix.org/
8
abecedarius 2 days ago 0 replies      
No mention of Zulip? It's the first to come to mind for me if you ask for an open source Slack competitor.
9
giancarlostoro 2 days ago 5 replies      
One thing that nobody's mentioned yet is encryption. A Slack alternative that's end-to-end encrypted (at least for private messaging between users) and has encryption for chatrooms would be worth my time.
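The primitive for that already exists in libraries like PyNaCl; a minimal sketch of the end-to-end step such an alternative could build on (key distribution and group chat are the actual hard parts and are omitted here):

  from nacl.public import PrivateKey, Box

  alice = PrivateKey.generate()
  bob = PrivateKey.generate()

  # Each side needs only its own private key and the peer's public key;
  # a server relaying the ciphertext never sees the plaintext.
  ciphertext = Box(alice, bob.public_key).encrypt(b"meet at 10")
  print(Box(bob, alice.public_key).decrypt(ciphertext))  # b'meet at 10'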
10
tambourine_man 2 days ago 2 replies      
We need open protocols.

Then let a thousand clients with different paradigms bloom

11
maaaats 2 days ago 0 replies      
I work in consultancy, and the client I currently work for has us working on-premise using air-gapped computers. No internet, only intranet, which meant finding new ways to communicate. So far Mattermost works well. We have it integrated with Jenkins, Stash etc. and do the typical "chat-ops" things.
12
fweespee_ch 2 days ago 1 reply      
Hopefully Mattermost and Rocket.Chat both build mature, robust HA systems since their features [honestly] are on par with Slack. Then move into SaaS hosting of their service in VM clusters across ~3 regions. :)

Really, the only thing that stops me from wanting to use these services at $DayJob is the fact I have enough stuff I have to maintain. I'd really rather just pay $99/month for a 10-20 person team.

13
mikerichards 2 days ago 1 reply      
Regarding IRC... I started using Gitter a few months ago and have used various IRC clients for over 20 years.

I was blown away by how much better the UX was in Gitter compared to IRC. The history, the notifications, the markup, the UI... you name it.

And Gitter is a pretty crappy client. If there was a nicely implemented client, it would be even more impressive.

14
altitudinous 1 day ago 0 replies      
This reads like that old XKCD where someone tries to build a standard that unifies all the existing 14 incompatible standards but it just becomes the 15th incompatible standard itself - https://xkcd.com/927/

Getting a critical mass of people to use a chat system is the defining factor of success, not the technical genius or features of the system.

Slack is successful because it IS one of those standards that a critical mass of people use and is modern.

There are plenty of systems that are far superior. However they are failures because they didn't achieve the critical mass of users.

15
ex3ndr 2 days ago 1 reply      
If someone wants performance, simplicity and encryption, take a look at our project: https://actor.im
16
alex_duf 1 day ago 0 replies      
I know what we need: an open protocol so that people can chat easily, whichever company you work for, wherever you are... We should call it IRC or XMPP.

:)

Seriously I know XMPP is pretty un-popular, but deep inside I kind of wish it was the default protocol everybody used to communicate...

17
cookiecaper 2 days ago 2 replies      
This is one of those applications that is always being rewritten somewhere. JWZ's law may even be relevant here: "Every program attempts to expand until it can read mail". It seems there is always a large group of people out there working on messengers.

The question really is what made Slack special. Why is it the Slack revolution and not the HipChat revolution? What about the veneer has allowed it to become a pervasive trend, expanding into non-tech companies and being hailed as a permanent replacement for most email? This is the only interesting question around Slack, since, as numerous other commenters have pointed out, it doesn't really bring anything to the table that the messaging space hasn't seen a good number of times before.

18
supergeek133 2 days ago 1 reply      
Oh man, this entire field is madness... especially when you're trying to get ahold of someone:

I'll try Slack... let me check Skype... no maybe they're on HipChat... SMS... oh hell I'll just call them!

19
trengrj 1 day ago 0 replies      
If we are going to go with the name Open Sourcers why not make it Open Sourcerers.
20
ursus_bonum 1 day ago 0 replies      
I don't know why people think Slack is any good to begin with. It's fine at first but the more people use it the more unwieldy it gets.

Email's pretty crappy but at least with email I get small, contained threads of conversation that can be searched and organized easily. Slack is like having a few giant never-ending email threads. It's horrible.

And there's still no voice.

21
kin 2 days ago 0 replies      
I dunno, I love being able to easily switch between different teams. Plus, they've just introduced audio, and video is next.

Everyone is talking about open protocols, but when you're client-agnostic you still have to store/manage data somehow, so surely Slack, with its integration-friendly mentality, could go in that direction?

22
cybette 2 days ago 2 replies      
What about Telegram? It's partially open source (clients only for now) and provides end-to-end encryption.
23
benevol 2 days ago 0 replies      
Has anyone compared Rocket.Chat VS Mattermost?

What are the significant differences (advantages/disadvantages)?

24
jessegreathouse 1 day ago 0 replies      
I wish we could just hurry up and come to the ultimate conclusion that we always had everything we needed with IRC. I can't understand why we're still in this jungle of high-level applications replicating a low-level protocol.

I get that IRC never had a particular client that galvanized the protocol as much as the many iterations of proprietary real-time chat that have come and gone over the past 10 years, but honestly the problem isn't the protocol; the problem is that no one was making clients meant for the mainstream.

25
hammerandtongs 2 days ago 0 replies      
matrix.org with the vector.im web and Android clients has been pretty great so far.

Matrix has solved the right engineering problem first which is the persistence of the chats across all of your clients.

26
tribaal 2 days ago 11 replies      
If only there was a well supported, standard, open and interoperable text messaging protocol (with multiple implementations) that companies could host themselves...

Oh wait, IRC.

27
amelius 2 days ago 0 replies      
What I conclude from this is that open-source developers are all hungry for good specs (for a killer application).

Therefore, wouldn't it be nice if there was a website where product designers (not software developers) could post their product designs, so that the developers have something to work from?

28
55acdda48ab5 2 days ago 1 reply      
Between phone conferencing, email, and shared desktop video conferencing, I don't get what more is needed in professional contexts.

I don't even get IRC or IM. "Chat" is noise and distraction. Write proper emails or arrange phone calls.

What exactly are people using slack for? I don't get it.

29
pcocko 2 days ago 0 replies      
We started to use Slack in our company but it wasn't adopted by most people. The desktop version requires too many resources, and that was unacceptable. We went back to Skype. Has anyone ever had the same experience?
30
it33 2 days ago 0 replies      
Connect IRC with Mattermost: https://github.com/42wim/matterbridge

You can also talk to Mattermost team on #matterbridge on Freenode.

31
shmerl 2 days ago 0 replies      
So, are any of those projects proposing something better than XMPP as an IETF standard, or maybe someone is proposing to improve XMPP further? Without interoperability it will be just +1 to the sea of incompatible solutions.
32
ShaneBonich 2 days ago 3 replies      
It's already here - Bitrix24. Light years ahead of Slack. Free for unlimited users without silly restrictions on search and other crap. And GASP it comes with actual work tools beyond IM.
33
ausjke 2 days ago 0 replies      
Could not read this on wired.com because my router has adblock enabled. Wired says if you don't let us show ads, you can't read on. How did they detect that?
34
beyti 2 days ago 2 replies      
adblock blocker on wired, so upset
35
DeepYogurt 2 days ago 0 replies      
IRC comes to mind.
36
_pmf_ 1 day ago 0 replies      
Too bad Google insisted on dead-birthing Wave by throttling access via beta invitations.

It generated incredible hype, but a lot of people (including me) could not use it due to this bullshit; maybe they can acquire Slack and reengineer it using the much more mature Wave foundation.

37
luckydude 2 days ago 3 replies      
This is why I sort of hate open source. Slack is doing a great job and their prices are right. So why is it OK to destroy that? With stuff that is going to be not as good?

I'll get downvoted to hell, but come on. It's cheap, it's good, so why not let the guy make a living? And for the record, I don't even know who the guy (or woman) is.

I just hate the "it's cool, let's rip it off" attitude. Because the ripoff is rarely better than the original.

15
AlphaGo beats Lee Sedol 3-0 [video] youtube.com
565 points by Fede_V  6 days ago   407 comments top 50
1
Radim 6 days ago 4 replies      
In a recent interview [1], Hassabis (DeepMind founder) said they'd try training AlphaGo from scratch next, so it learns from first principles, without the bootstrapping step of "learn from a database of human games", which introduces human prejudice.

As a Go player, I'm really excited to see what kind of play will come from that!

[1] http://www.theverge.com/2016/3/10/11192774/demis-hassabis-in...

2
bronz 6 days ago 2 replies      
Once again, I am so glad that I caught this on the live-stream, because it will be in the history books. The implications of these games are absolutely tremendous. Consider Go: it is a game of sophisticated intuition. We have arguably created something that beats the human brain in its own arena, although the brain and AlphaGo do not use the same underlying mechanisms. And this is the supervised model. Once unsupervised learning begins to blossom, we will witness something as significant as the emergence of life itself.
3
awwducks 6 days ago 5 replies      
Perhaps the last big question was whether AlphaGo could play ko positions. AlphaGo played quite well in that ko fight and, furthermore, even played away from the ko fight, allowing Lee Sedol to play twice in the area.

I definitely did not expect that.

Major credit to Lee Sedol for toughing that out and playing as long as he did. It was dramatic to watch as he played a bunch of his moves with only 1 or 2 seconds left on the clock.

4
pushrax 6 days ago 4 replies      
It's important to remember that this is an accomplishment of humanity, not a defeat. By constructing this AI, we are simply creating another tool for advancing our state of being.

(or something like that)

5
Eliezer 6 days ago 19 replies      
My (long) commentary here:

https://www.facebook.com/yudkowsky/posts/10154018209759228

Sample:

At this point it seems likely that Sedol is actually far outclassed by a superhuman player. The suspicion is that since AlphaGo plays purely for probability of long-term victory rather than playing for points, the fight against Sedol generates boards that can falsely appear to a human to be balanced even as Sedol's probability of victory diminishes. The 8p and 9p pros who analyzed games 1 and 2 and thought the flow of a seemingly Sedol-favoring game 'eventually' shifted to AlphaGo later, may simply have failed to read the board's true state. The reality may be a slow, steady diminishment of Sedol's win probability as the game goes on and Sedol makes subtly imperfect moves that humans think result in even-looking boards...

The case of AlphaGo is a helpful concrete illustration of these concepts [from AI alignment theory]...

Edge instantiation. Extremely optimized strategies often look to us like 'weird' edges of the possibility space, and may throw away what we think of as 'typical' features of a solution. In many different kinds of optimization problem, the maximizing solution will lie at a vertex of the possibility space (a corner, an edge-case). In the case of AlphaGo, an extremely optimized strategy seems to have thrown away the 'typical' production of a visible point lead that characterizes human play...

6
wnkrshm 6 days ago 0 replies      
While he may not be number one in the Go rankings afaik, Lee Sedol will be the name in the history books: Deep Blue against Garry Kasparov, AlphaGo against Lee Sedol. Lots of respect to Sedol for toughing it out.
7
Yuioup 6 days ago 1 reply      
I really like the moments when AlphaGo would play a move and the commentators would look stunned and go silent for 1-2 seconds. "That was an unexpected move", they would say.
8
flyingbutter 6 days ago 2 replies      
The Chinese 9 dan player Ke Jie basically said the game was lost after around 40 mins or so. He still thinks that he has a 60% chance of winning against AlphaGo (down from 100% on day one). But I doubt Google will bother to go to China and challenge him.
9
kybernetikos 6 days ago 2 replies      
Go was the last perfect information game I knew where the best humans outperformed the best computers. Anyone know any others? Are all perfect information games lost at this point? Can we design one to keep us winning?
10
jamornh 6 days ago 0 replies      
Based on all the commentaries, it seems that Lee Sedol was really not ahead at any point during the game... and I think everybody has their answer regarding whether AlphaGo can perform in a ko fight. That's a yes.
11
bainsfather 6 days ago 0 replies      
It is interesting how fast this has happened compared to chess.

In 1978 chess IM David Levy won a 6-game match series 4.5-1.5 - he was better than the machine, but the machine gave him a good game (the game he lost was when he tried to take it on in a tactical game, where the machine proved stronger). It took until 1996/7 for computers to match and surpass the human world champion.

I'd say the difference was that for chess, the algorithm was known (minimax + alpha-beta search) and it was computing power that was lacking - we had to wait for Moore's law to do its work. For go, the algorithm (MCTS + good neural nets + reinforcement learning) was lacking, but the computing power was already available.
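The chess half of that story is compact enough to sketch: generic minimax with alpha-beta pruning, assuming the caller supplies game-specific moves/apply_move/evaluate functions:

  def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
      ms = moves(state)
      if depth == 0 or not ms:
          return evaluate(state)  # static evaluation at the search horizon
      if maximizing:
          value = float("-inf")
          for m in ms:
              value = max(value, alphabeta(apply_move(state, m), depth - 1, alpha,
                                           beta, False, moves, apply_move, evaluate))
              alpha = max(alpha, value)
              if alpha >= beta:
                  break  # beta cutoff: the opponent will never allow this line
          return value
      value = float("inf")
      for m in ms:
          value = min(value, alphabeta(apply_move(state, m), depth - 1, alpha,
                                       beta, True, moves, apply_move, evaluate))
          beta = min(beta, value)
          if beta <= alpha:
              break  # alpha cutoff
      return value

The algorithm was essentially settled; Moore's law supplied the depth. Go's ~250-move branching factor defeats this kind of search, which is why the missing piece there was algorithmic rather than raw speed.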

12
partycoder 6 days ago 2 replies      
Some professionals labeled some of AlphaGo's moves as suboptimal or slow. In reality, AlphaGo doesn't try to maximize its score, only its probability of winning.
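The distinction is easy to state in code. A toy chooser with made-up numbers, just to show that the two orderings can disagree:

  candidates = [
      {"move": "A", "expected_margin": 8.5, "win_prob": 0.71},  # big but risky
      {"move": "B", "expected_margin": 1.5, "win_prob": 0.83},  # "slow" but safe
  ]
  by_score = max(candidates, key=lambda c: c["expected_margin"])["move"]
  by_winrate = max(candidates, key=lambda c: c["win_prob"])["move"]
  print(by_score, by_winrate)  # A B -- a human may call B slow; AlphaGo-style play picks B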
13
atrudeau 6 days ago 1 reply      
It would be nice if AlphaGo emitted its estimated probability of winning every time a move is made. I wonder what this curve looks like. I would imagine mistakes by the human opponent would produce nice little jumps in the curve. If the commentary is correct, we would expect very high probability 40-60 minutes into the game. Perhaps something crushing, like 99.9%.
14
niuzeta 6 days ago 3 replies      
Impressive work by the Google research team. I'm both impressed and scared.

This is our Deep Blue moment, folks. History has been made.

15
skarist 6 days ago 2 replies      
We are indeed witnessing and living a historic moment. It is difficult not to feel awestruck. Likewise, it is difficult not to feel awestruck at how a wet 1.5 kg clump of carbon-based material (e.g. Lee Sedol's brain) can achieve such a level of mastery of a board game that it takes an insane amount of computing power to beat it. So, finally we do have a measure of the computing power required to play Go at the professional level. And it is immense; to apply a very crude approximation based on Moore's law, it requires about 4096 times more computing power to play Go at the professional level than it does to play chess. Ok, this approximation may be a bit crude :)

But maybe this is all just human prejudice... i.e. what this really goes to show is that in the final analysis all board games we humans have invented and played are "trivial", i.e. they are all just like tic-tac-toe, just with varying degrees of complexity.
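For what it's worth, the 4096 figure is just the Moore's-law doubling count between Deep Blue (1997) and AlphaGo (2015), assuming one doubling every 18 months:

  doublings = (2015 - 1997) / 1.5  # 18 years at 1.5 years per doubling = 12
  print(2 ** doublings)            # 4096.0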

16
jonah 6 days ago 0 replies      
Cho Hyeyeon 9p's commentary on the American Go Association YouTube channel: https://www.youtube.com/watch?v=CkyVB4Nm9ac
17
dwaltrip 6 days ago 0 replies      
AlphaGo won solidly by all accounts. This is an incredible moment. We are now in the post-humanity go era.

The one solace was that Lee Sedol got his ko =) however, AlphaGo was up to the task and handled it well.

18
seanwilson 6 days ago 4 replies      
Super interesting to watch this unfold. So what game should AI tackle next? I've heard imperfect information games are harder for AI...would the AlphaGo approach not work well for these?
19
partycoder 6 days ago 2 replies      
I don't think Ke Jie would win against AlphaGo either.
20
hasenj 6 days ago 1 reply      
Seems to be playing at super-human levels.

I'm nowhere near a strong player, but it seems like AlphaGo is far ahead of Lee Sedol.

21
dkopi 6 days ago 0 replies      
One can only hope that in the final battle between the remaining humans and the robots, it won't be a game of Go that decides the fate of humanity.
22
starshadowx2 6 days ago 1 reply      
I'm very interested to see what the Google DeepMind team applies themselves to in the future.
23
awwducks 6 days ago 1 reply      
I am really curious about the reviews from An Youngil 8p and Myungwan Kim 9p. The commentary by Redmond always tends to leave something to be desired.
24
hyperion2010 6 days ago 2 replies      
I really want to see how a team of humans would do against AlphaGo with a 3 or 4 hour time limit.
25
dynjo 6 days ago 1 reply      
How long before AlphaGo also participates in the post-match press conference I wonder...
26
yeukhon 6 days ago 0 replies      
I wonder what would happen if you put the top 10 players in a room to team up and play against AlphaGo. They would be allowed to make one move within a 1-hour period and could only play up to 8 hours a day. I wonder what the outcome would be.

Anyway, I think AlphaGo is a great training companion. I think Lee felt he was learning.

Finally, I also feel that while experience is crucial, the older generation gets flushed out by the younger generation every decade. I wonder if age really plays a role in championships - though AlphaGo could be considered a 1000-year-old "human", given it has played thousands of games already.

27
gandalfu 6 days ago 0 replies      
It's a matter of language.

Our model of representation of Go fails at expressing the strategies AlphaGo is showing; we are communicating on the board in different languages, so no wonder everyone looking at the games is stumped by the machine's "moves".

Our brains lack the capacity to implement such algorithms (to understand such languages), but we can still create them. In the future we might see engine A play against engine B and enjoy the matches.

No one is surprised by a machine doing a better job with integer programming/operational research/numerical solutions etc.

28
mikhail-g-kan 6 days ago 0 replies      
Interestingly, I feel proud of the AI, even though the humans lost. It's progress toward our end as the classic human species.
29
zhouyisu 6 days ago 2 replies      
Next move: how about beating human at online dating?
30
asmyers 6 days ago 1 reply      
Does anyone know if the AlphaGo team is saving the win-probability estimates that AlphaGo assigned to its moves?

It would be fascinating to see how early AlphaGo assigned a very high probability to its winning. It would also be interesting to see if there were particular moves which changed this assignment a lot. For instance, are there moves that Lee Sedol made for which AlphaGo's win probability is very different immediately before and after?

31
xuesj 6 days ago 1 reply      
This is a milestone for AI in history. The ability of AlphaGo is amazing and far beyond human.
32
eemax 6 days ago 1 reply      
What happens if you give the human player a handicap? I wonder if the games are really as close as the commentators say, or if it's just a quirk of the MCTS algorithm.
33
asdfologist 6 days ago 0 replies      
Ke Jie's gonna be next.

http://www.shanghaidaily.com/national/AlphaGo-cant-beat-me-s...

"Demis Hassabis, Google DeepMind's CEO, has expressed the willingness to pick Ke as AlphaGo's next target."

34
jcyw 6 days ago 0 replies      
We had Gödel on the limitations of logic and Turing on the limitations of computation. I think AI will only change what humans call intelligence. We used to call people who could mentally calculate large numbers geniuses. A lot of that will have to be re-defined.
35
Huhty 6 days ago 0 replies      
Full video will be available here shortly:

https://www.youtube.com/playlist?list=PLqYmG7hTraZA7v9Hpbps0...

(It also includes the videos of the first 2 matches)

36
theroof 6 days ago 2 replies      
Is anyone else asking themselves when they'll be able to play against this level of AI on their mobile phone? Or formulated differently: when will an "AlphaGo" (or equivalent) app appear in the play/app store?

In 2 years? In 1 year? In 3 months?

37
arek_ 6 days ago 0 replies      
I was using machine learning in computer chess some time ago. My commentary: http://arekpaterek.blogspot.com/2016/03/my-thoughts-on-alpha...
38
yodsanklai 6 days ago 0 replies      
All this excitement makes me want to learn a little bit about those algorithms. I don't know anything about neural networks (though I did implement a chess game a while ago). Would it be difficult to implement a similar algorithm for a simpler game?
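Not difficult at all for a trivial game. A sketch of the Monte-Carlo half of the recipe (no neural networks): flat random playouts for tic-tac-toe, scoring each legal move by its playout win rate:

  import random

  LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

  def winner(board):
      for a, b, c in LINES:
          if board[a] and board[a] == board[b] == board[c]:
              return board[a]
      return None

  def playout(board, player):
      # Finish the game with uniformly random moves; return 'X', 'O' or None.
      board = board[:]
      while True:
          w = winner(board)
          empty = [i for i, v in enumerate(board) if v is None]
          if w or not empty:
              return w
          board[random.choice(empty)] = player
          player = "O" if player == "X" else "X"

  def best_move(board, player, n=200):
      # Flat Monte-Carlo: score each legal move by wins over n random playouts.
      def score(move):
          b = board[:]
          b[move] = player
          opponent = "O" if player == "X" else "X"
          return sum(playout(b, opponent) == player for _ in range(n))
      return max((i for i, v in enumerate(board) if v is None), key=score)

  print(best_move([None] * 9, "X"))  # usually 4, the centre

Real MCTS adds a search tree and a smarter selection rule (e.g. UCB) on top of the playouts, and AlphaGo further replaces random playouts with policy/value networks.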
39
mzitelli 6 days ago 0 replies      
Congratulations to AlphaGo team, curious to see if Lee Sedol will be able to defeat it in the next matches.
40
cpeterso 6 days ago 0 replies      
Does AlphaGo run in the cloud or is it a machine onsite at the match? I wonder how small AlphaGo could be scaled down and still beat Lee Sedol. How competitive would AlphaGo be running on an iPhone? :)
41
eternalban 6 days ago 0 replies      
My impression is that Sedol was psychologically defeated at 1-0. Computational machines don't crack under pressure - at most they get exhausted.
42
ganwar 6 days ago 1 reply      
Incredible news. We have all heard the positive coverage and how tremendous this is. What I find interesting is that nobody is talking about the potential of AlphaGo as a war-strategizing AI.

If you provide terrain (elevation etc.) information, AlphaGo could be used to corner opponents into an area surrounded by mountains while it sits on the mountains. We all know what happens after that.

Don't want to kill the party, but I am completely surprised by the lack of chatter in this direction.

43
pmyjavec 6 days ago 0 replies      
If one allowed AlphaGo to train forever, what would happen? Would it constantly just tie against itself?
44
oliebol 6 days ago 0 replies      
Watching this felt like watching a funeral where the commentary was the eulogy.
45
awwducks 6 days ago 0 replies      
I guess the next question on my mind is how AlphaGo might fare in a blitz game.
46
ptbello 6 days ago 1 reply      
Does anyone have insight into how a game between two AlphaGos would play out?
47
Queribus 5 days ago 0 replies      
Was I in a "prophetic mode" yesterday? ;)))
48
tim333 6 days ago 2 replies      
Kind of a shame the tournament isn't closer.
49
Queribus 6 days ago 0 replies      
Strictly speaking, "just" because AlphaGo finally won, that doesn't mean it was right when claiming to be ahead already.
50
scrollaway 6 days ago 1 reply      
If, as you believe, this post was "assigned" points (which only HN staff can theoretically do), what do you believe you will achieve by flagging it?
16
Google nabs Apple as a cloud customer businessinsider.com
466 points by rajathagasthya  2 days ago   210 comments top 28
1
doxcf434 1 day ago 3 replies      
We've been doing tests in GCE in the 60-80k core range:

What we like:

- slightly lower latency to end users in USA and Europe than AWS

- faster image builds and deployment times than AWS

- fast machines; live migration blackouts are getting better too

- per-minute billing (after 10 mins), and lower rates for continued use vs. AWS RIs, where you need to figure out your usage up front

- projects make it easy to track costs w/o having to write scripts to tag everything like in AWS; the downside is that project discovery is hard since there's no master account

What we don't like:

- basic lack of maturity; AWS is far ahead here, e.g. we've had 100s of VMs get rebooted w/o explanation, the op log UI forces you to page through results, log search is slow enough to be unusable, billing costs don't match our records for the number of core hours and they simply can't explain them, quota limit increases take nearly a week, support takes close to an hour to get on the phone and they make you hunt down a PIN to call them

- until you buy premium support (aka a TAM), they limit the number of people who can open support cases, which caused us terrible friction since it's so unexpected, esp. when it's their bugs you're trying to report and they could mature by fixing them

2
phoboslab 2 days ago 14 replies      
Can someone explain to me why traffic is still so damn expensive with every cloud provider?

A while back we managed a site that would serve ~700 TB/mo and paid about $2,000 for the servers in total (SQL, web servers and caches, including traffic). At Google's $0.08/GB pricing we would've ended up with a whopping $56,000 for the traffic alone. How's that justifiable?
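For reference, the arithmetic behind that figure, using decimal terabytes:

  tb_per_month = 700
  price_per_gb = 0.08
  print(tb_per_month * 1000 * price_per_gb)  # 56000.0 dollars/mo for egress alone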

3
markgavalda 2 days ago 3 replies      
We are consolidating all of our cloud services at Google Cloud and couldn't be happier. We had north of a thousand virtual machines scattered across ~6 2nd- and 3rd-tier providers, and switching to gcloud has been a game changer for us.
4
rdl 2 days ago 3 replies      
They've been a heavy Azure user too. Probably more than AWS.

I'm glad there's now at least 2 and probably 3 competitors for public cloud infrastructure. So many things were at risk, including adoption of public cloud in general, when it was a sole-source monopoly from Amazon (OpenStack/Rackspace/etc. was basically stillborn, and VPSes aren't the same thing, nor was VMware ever really credible for public cloud).

Neither GC nor Azure is as comprehensive as AWS, but between them at least one is usually a viable alternative for any given deal.

5
pori 2 days ago 3 replies      
Can someone provide a little context for this exodus from AWS to Google Cloud? I understand in Dropbox's case that they (questionably) need their own infrastructure for cost savings. But then there's Apple and Spotify suddenly changing over. What's the advantage?

I have a fear that this trend among large companies is going to trickle down to smaller ones and independent devs. Considering these "Cloud Wars", I can see stories like this continuing with different providers. Ultimately, a scenario could occur where one year one provider is king, and the next, everyone decides they need to migrate to the next big thing. That would be irritating for us contractors. We would have to learn new interfaces and APIs at the same rate as JS frameworks.

6
imperialdrive 2 days ago 1 reply      
I'm so sick of EC2's rogue 'underlying hardware issues' and EBS volumes dropping dead... The AWS Console status will say everything is 'Okay' even when there are major problems - it's a joke... I wonder to myself, is it because I recently migrated over (December '15) and they are starting to buckle? Really a bad experience. At this rate I'll be looking at Google next month, or going back to colo (25 servers, 100TB) - not much, but still worth doing right.
7
dantiberian 2 days ago 2 replies      
Would be interesting to know what kind of discounts Apple got on this. It's a massive PR win for Google, the kind I expect they could give $100m for. Apple is also notorious for getting a very sharp price from their suppliers, so the combination suggests there were some steep discounts.
8
fidget 2 days ago 2 replies      
My guess is that it's pretty much just BigQuery. No one else seems to be able to compete, and that's a big deal. The companies moving their analytics stacks to BQ and thus GCP probably make up the majority (in terms of revenue) of customers for GCP
9
dzhiurgis 2 days ago 6 replies      
So it makes sense for Dropbox to build its own infra, but it doesn't for Apple.

Also wondering why Apple isn't hosting exclusively with IBM; they seem to have the best geographical coverage.

10
kozukumi 2 days ago 2 replies      
God damn Diane Greene hit it out of the park with this one! Amazing work getting Apple to migrate so much away from Amazon.
11
conradev 2 days ago 0 replies      
They've been using Google Cloud Storage for blob storage of iMessage attachments for a little while now. They seem to use a combination of Amazon S3 and GCS (just watching connections coming out of the app on OS X).
12
nodesocket 2 days ago 2 replies      
It is reported that Apple accounts for 9% of Amazon's AWS revenue. If that is true, this move by AAPL is a serious dent in the financials of AMZN.
13
mahyarm 2 days ago 1 reply      
If you run Little Snitch on your Mac and have your photos sync with Apple, you'd have noticed the photos agent going to Google for quite a while now. Maybe it was a trial?

I say this is why iCloud is about 2x the price of other cloud providers: because they don't run it themselves and want a profit margin.

14
karlshea 2 days ago 2 replies      
I'd be super interested to know what their backend looks like (at least the new stuff, not WebObjects), I wish they were as open as Facebook with regard to tech.

Unfortunately that's probably a wish that will forever be unfulfilled.

15
jessegreathouse 2 days ago 0 replies      
I can't see this as anything but a good thing for us lowly consumers. Competition in the marketplace is a great thing.
16
hans 2 days ago 1 reply      
Does anyone enjoy working at AWS? Maybe the 'Zon will have to up its game to compete, but they're so mired in employee-thrashing it seems unlikely. Is it getting better there or worse? This news seems to put that to the test.
17
Implicated 2 days ago 1 reply      
This seems to be good for everyone but Amazon; can anyone offer some insight otherwise?
18
iqonik 2 days ago 1 reply      
Side note, but I'm impressed the article didn't try and put a positive spin on it given Jeff Bezos' interest in Business Insider.
19
Xcelerate 2 days ago 1 reply      
Does anyone know if GCE offers discounts or grants to graduate students doing research?
20
kloud_ops 2 days ago 0 replies      
I expect they want a multi-cloud presence for HA now that there is good tooling to support that such as Spinnaker ( http://spinnaker.io/ )
21
lobo_tuerto 2 days ago 7 replies      
"It's been only four months since Google convinced enterprise queen Diane Greene to lead its fledgling cloud-computing business, but she's already scored a second huge coup for Google"

Who was the first?

22
tn13 1 day ago 0 replies      
I don't think someone at Apple looked at Amazon's pricing table and Google's pricing table and simply decided to move to Google.

Very likely the sales teams of Azure, Amazon, and Google did the mating dance for a few months, sharing their future plans etc. Very probably the government's stand on encryption was one of the things discussed.

Some people must have played golf together and eventually made a decision. Also, very likely Apple will be well invested in all three of these players and will remain so for a long time.

23
pinkskip 1 day ago 0 replies      
I love aws fanboy
24
anacleto 2 days ago 1 reply      
This should be read as: "In exchange for keeping Android crappy, Apple to reward Google on his Cloud efforts."

(being downvoted? little sense of humor)

25
obulpathi 2 days ago 1 reply      
This move will be a "GAME CHANGER" for the Cloud industry.
26
serge2k 2 days ago 0 replies      
Have the google PR guys been working a lot of OT lately?
27
phragg 1 day ago 1 reply      
Apple vs FBI in Encryption Lawsuit.

Pentagon grabs former Google CEO Eric Schmidt to head technology board.

Google nabs Apple as cloud customer.

i put on my robe and tinfoil hat

28
ocdtrekkie 2 days ago 1 reply      
So, it will be nearly impossible to buy a phone in the United States that isn't designed to send your data to a Google datacenter?
17
Honda's $20k Civic LX now offers self-driving capability for highway use wsj.com
452 points by davidst  4 days ago   280 comments top 32
1
ben1040 4 days ago 13 replies      
This looks like the "Sensing" feature that Honda has implemented on some of their other vehicles. I just bought a 2016 Accord that does the same thing -- there's a camera mounted on the windshield, another camera in the front grille, and a radar sensor on the front bumper.

Calling it "self-driving" is kind of a misnomer and I think the article kind of blows it out of proportion.

It will track the car in front of you and keep a safe following distance, keeping either the maximum cruise control speed you've set, or whatever speed the vehicle ahead of you is driving, whichever is slower. It will accelerate or brake accordingly. It will also attempt to stay in the lane by using the onboard cameras for tracking the lane markings.

The lane keeping assist is not nearly as autonomous as the article makes it out to be. It does not like to work on sharper curves on the freeway, for one -- the system will disengage and tell you to steer manually. It still wants you to keep your hands on the wheel. It must be looking for very very subtle movements on the wheel, because the system will yell at you if you take your hands off the wheel for longer than 10-15 seconds.

All in all it's a pretty cool feature for longer road trips (keeping in your lane can just get kind of tiring, even with cruise control) but it's not the sort of autonomous driving that the article here paints it out to be.
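The follow-speed rule described above boils down to a small control step. A simplified sketch, assuming the radar supplies the gap and the lead vehicle's speed (the function and parameter names are invented for illustration):

  def cruise_target_speed(set_speed, lead_speed, gap_m, min_gap_m=40.0):
      # Never exceed the driver's set speed or the lead vehicle's speed...
      target = set_speed if lead_speed is None else min(set_speed, lead_speed)
      # ...and back off proportionally when the following gap gets too small.
      if lead_speed is not None and gap_m < min_gap_m:
          target = min(target, lead_speed * gap_m / min_gap_m)
      return max(target, 0.0)

  print(cruise_target_speed(70, lead_speed=60, gap_m=30))  # 45.0, slowing to reopen the gap

The real system layers smoothing, sensor fusion and failure handling on top, but the core arbitration is this simple "slower of the two, minus a gap penalty" rule.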

2
kazinator 4 days ago 4 replies      
Americans could rather use a robot highway driving instructor.

"Consider moving over to the right line; you're driving at the speed limit, and a speeding car is approaching; you may confirm this in the rear-view mirror."

"I have detected that you came to a full stop at the end of a generous freeway entrance ramp in light traffic. Suggested future action: look over the shoulder as early as possible and match the speed of the traffic."

"Suddenly exiting out of the left lane is dangerous. Please know which exit you're supposed to take, watch for its approach early, and change lanes ahead of time. If you miss an exit, do not make a sudden, dangerous action. Look for an alternate route or U-turn starting at the next exit."

3
Someone1234 4 days ago 3 replies      
Just for comparison, you can get a Subaru Legacy Premium with Eyesight for $25,835 (or a Crosstrek for thereabouts, and an Outback for $2-3K more), since at least summer 2015. So nothing Honda are doing here is revolutionary technologically, they're just bringing the same technology to a new market ($5K cheaper), which is still something.

I highly recommend that if you invest in THIS technology that you go all in and get the blind spot detection and rear cross traffic alerts. I have had a Subaru with Eyesight for over six months now, and I don't regret buying it and definitely like the blind spot/cross traffic alerts, they're legitimately useful day to day.

I will say one downside of this system is what I call "alert fatigue": particularly lane drift warnings, ice warnings, traffic-pulled-away warnings, etc. You can disable many of these, but it would be my only whine.

I have had automatic braking pre-warn me a handful of times, but I have not had it activate yet, except when I pulled into the garage and the dangling tennis ball confused it, and even then it only slowed me slightly. You get a yellow warning, then red, then braking, and most of the warnings are legitimate; I am just ahead of them.

Lane keep assist and distance based cruise control are like crack. It feels like you just get on the freeway, push a few buttons, and the car almost drives itself.

4
mikeash 4 days ago 5 replies      
The article really wants to compare with electric vehicles, for some reason. The subheading says "they are being snapped up faster than electrified vehicles." This is repeated in the article, which says the relevant options packages "are being snapped up at a far higher rate than electrified vehicles." Discussing the Q50's technology package and how many buyers opt for it, it says "Thats three times as many people who pay extra to buy a hybrid-electric version."

What's the deal? Is this just because Tesla happens to be the one with the best system at the moment? It doesn't make any sense to me, and really distracts from the article's main thrust.

5
TrevorJ 4 days ago 9 replies      
The real problem with self-driving cars is the car-to-human handoff. Over the long term it's incredibly unlikely that humans will be any good at all at remaining aware and 'up to speed' on the current situation in the event that the car needs to give control back to the driver due to road conditions, hardware failure, or a sudden situation that it cannot contend with.
6
stcredzero 4 days ago 5 replies      
In the early 2000's, I was hanging out sometimes in western North Carolina, and there was this young woman who was in the habit of getting together with some friends and driving around the clock to get to Colorado and back on short trips, instead of flying. I'm wondering if this technology isn't going to be used for such purposes.
7
CoffeeDregs 4 days ago 1 reply      
Conversations about self-driving have focused on zero defect rate in-city self-driving vehicles, but a lot of these technologies are reaching Pareto-efficient levels of usefulness. I don't need my car to drive the first and last miles; I'd be perfectly happy if it just drove on the highway.

And why do humans drive long haul trucks for anything other than the first and last miles? Trucks should drive themselves between depots at the edges of metros and then humans can drive them into the city. https://www.mercedes-benz.com/en/mercedes-benz/innovation/th... "Self steering"... How long before that moniker is replaced by "Self driving".

And why are humans delivering packages? They might not be for long: http://www.reuters.com/article/us-starship-delivery-robot-id...

It's going to shock our economy once industries begin constraining roles to the level at which robots can be "good enough". After figuring out how to manage them, we'll start to see robots deployed in force.

As parents, my wife and I are talking about this a lot as we think about how to raise our kids (and we are emphatically hands-off parents)...

8
bliti 4 days ago 1 reply      
The elephant in the room is this one:

When will highways be upgraded/updated for self driving? I don't mean V2I (vehicle to infrastructure) capabilities, but properly painted and maintained lane lines, reflectors, and signage. The infrastructure is just not there. You can't simply rely on a car's sensing abilities for self driving.

9
raz32dust 4 days ago 0 replies      
Am I missing something? Why is this top news? There are several cars out there with adaptive cruise control. In fact, I think most mainstream cars offer it as an option now. It's pretty impressive but calling it "self-driving" is hyperbole.

Subaru's Eyesight is technically even more impressive considering it does image processing to detect vehicles, whereas most of these systems are based on radar. Although I don't see the point, because radar is more reliable IMO. Unless you use some features which only a camera can provide (stopping at traffic lights?), which Subaru doesn't yet do. From whatever research I did before buying a car, Volvo's system (uses both radar and camera) seems to be one of the best overall, along with Mercedes, with Subaru being a close third.

10
ipsin 4 days ago 0 replies      
I'm curious about whether this is going to be a net win or lose for safety.

If users treat these automated cruise systems as a "magic self-driving device" when it can potentially make mistakes or hand back control when it's confused, drivers are going to die.

If people "really want to look at their cell phones", and they take this as the tool that lets them pay no attention to the road, it better be up to that task.

11
Negative1 4 days ago 0 replies      
The title is a bit misleading. I have a 2016 Civic Touring that has the Sensing package. It's effectively a sensor package (cameras in the top-middle of the windshield and below the passenger-side headlight) integrated with the steering and powertrain systems, plus very basic logic.

When you turn it on the car basically tries to stay in your lane by looking at the lines on the road. It actually tells you when it can and cannot see the lines. When it detects you going outside of the lane (without using your signal) it takes control of steering and corrects for you. You can also set it to stay within some distance of the car in front of you and it will control your speed. Supposedly, it will also auto-brake if you are in danger of collision but I haven't had a chance to test that yet (and hope not to have to).

The whole thing is more like a driver assistance system and if you take your hands off the wheel for more than 15 seconds or so a bunch of alerts start going off and the system disengages. After using it for a few months I think this is probably a good idea. There are quite a few places in the SF Bay area that have worn out and faded lines on the road and once the system loses sight of the lane markers it just stops working. Not a great moment to have your hands off the wheel or your eyes closed. ;-)
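
A rough sketch of how that supervision behaves (the ~15-second timeout is just my observation; the rest is guesswork, not Honda's actual logic):

    # Toy lane-keep-assist supervisor: steer only while lane lines are
    # visible, and disengage after ~15 s of hands-off driving.
    HANDS_OFF_LIMIT_S = 15.0   # assumed timeout, per the behavior described

    def lane_keep_step(lanes_visible, hands_on_wheel, hands_off_s, dt):
        hands_off_s = 0.0 if hands_on_wheel else hands_off_s + dt
        if not lanes_visible:
            return "disengaged: lost lane markings", hands_off_s
        if hands_off_s > HANDS_OFF_LIMIT_S:
            return "disengaged: hands-off timeout", hands_off_s
        return "steering", hands_off_s

    state, t = "steering", 0.0
    for _ in range(200):                # 20 s at 10 Hz with hands off
        state, t = lane_keep_step(True, False, t, 0.1)
    print(state)                        # disengaged: hands-off timeout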

For the price it's incredible that Honda offers something like this. Subaru offers something similar, but the next best thing is buying a Tesla for much more. I treat it sort of as insurance on long trips. If I start dozing off or am distracted the system keeps me in check, but it is not reliable enough to be truly autonomous. So yeah, it can sort of drive autonomously.

As a preview of the future it gives me hope and it's possible this may be the last car I actually buy (when cars drive themselves it could very well become a service industry).

12
bliti 4 days ago 0 replies      
This is an expansion on cruise control capabilities and not self-driving. It's a step up, of course. But not what the title may make you think.
13
ekpyrotic 3 days ago 0 replies      
This is an interesting proposition for Honda, but really not that new. Even the price is not /that/ new.

In fact, this is technology that has been sold at around the same price point in the industrial transportation sector -- think of logistics and lorries -- for quite a while. This is where the majority of the innovation is taking place.

For example, just this week it has been revealed that the UK Govt will likely announce funding tomorrow for driverless truck convoys in the North of England. What's the price differential between these intelligent trucks and regular trucks? $0.

In fact, so much innovation is taking place in the industrial sector that just last week Toyota announced that it has hired the FULL workforce of Jaybridge Robotics, a firm that specializes in autonomous industrial vehicles, mainly in the agricultural sector.

If you want to understand the tailwinds in the sector, follow the b2b and industrial segment of the market. Technologies and trends are already starting to filter down.

--

I also want to plug my email newsletter Driverless Weekly (http://driverlessweekly.com) while I'm at it. It's a once-weekly summary of the top news stories in the autonomous vehicles sector.

14
bengoodger 4 days ago 0 replies      
I've owned three cars with automatic cruise control for the past decade. This isn't exactly new technology, perhaps only at this price point.

The first car I had with this, an '06 Infiniti, was only able to slow to a crawl, not a full stop. So while it was useful on the highway it was useless in stop-and-start traffic.

The second car, a '11 BMW, added "Stop & Go" to the formula. Great? Not quite. What would happen is that the car would come to a full stop, and then a timer would run, and if the car didn't start moving again within 10 seconds the cruise control would shut off, and to resume you'd have to push the pedal. This was especially maddening when stop & start traffic is inconsistent and the stops last 10.5 seconds. Basically the idea of being able to set & forget was completely undermined by this and driving with the feature on was more stressful than driving with it off and just doing everything manually. A complete bust. I don't know why it does this but I can see it being some combination of the product team needing to ship the feature in the state it was in, and legal requirements.
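
As a sketch, the resume logic seems to boil down to this (the 10-second window is from my experience; everything else is a guess):

    # Toy model of the "Stop & Go" limitation described above: after a full
    # stop, cruise only auto-resumes if traffic moves within a fixed window.
    RESUME_WINDOW_S = 10.0   # assumed, per the behavior described

    def stopped_cruise(seconds_stopped, lead_car_moving):
        if not lead_car_moving:
            return "holding"
        if seconds_stopped <= RESUME_WINDOW_S:
            return "auto-resume"
        return "cruise off: driver must press the pedal"

    print(stopped_cruise(10.5, True))   # the maddening 10.5-second case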

The problem with both of these implementations is that they promised to alleviate some of the issues with past "auto-drive" features (and you should consider Cruise Control to be the very first auto drive feature), but introduce their own. If you want the user experience of set & forget, you need very predictable conditions if you want any of these mass market systems to work for you, and unfortunately that's just not the way the roads are.

I think I have the feature in my latest car too, but I've given up and decided to enjoy manually driving, and just wait for fully autonomous vehicles.

15
usrusr 4 days ago 0 replies      
How do we as drivers keep up with the rising levels of automation? It's challenging but doable for owners, but imagine jumping from a plain old 1990s car into a flashy new rental with all the bells and whistles... With a bit of exaggeration, one might make a case for individual type rating, like airliner pilots need to have.

Before we reach fully autonomous driving, we might see an age of widespread "car illiteracy", with more and more people who have a driver's license, but who have completely lost touch with the state of the art in car UI concepts. With not enough time on automatic transmissions, people here in Europe occasionally even struggle with something as simple as park/neutral/reverse/drive (don't ask me where I got that).

16
dsmithatx 4 days ago 0 replies      
Sounds like these cars are far from self-driving. Just some added safety features that resemble self-driving but are dangerous to use without a foot near the brake and hands on the wheel.

I think this sentence pretty much says it all: "For instance, some owners have posted videos of hands- and foot-free driving on YouTube and the car inevitably makes a mistake."

It sounds like these features are going to end up being abused and probably causing serious fatalities. As we have seen, people want to text and even watch movies while driving. These new features will allow reckless drivers to pay less attention to driving and more attention to how many Instagram followers they have.

I'll be sure to pay more attention to people driving Honda civics when I'm on the highway.

17
yalogin 4 days ago 0 replies      
This looks like the same functionality the Tesla Model S has. One of the big draws for the new Model S cars. So much so that people are getting rid of their older Teslas because they did not have the autopilot.

I thought it was only a matter of time before autopilot became commonplace, but it looks like it's happening sooner than I thought. The good thing is this has nothing to do with the car being electric or not. But the main thing is, if Tesla thought it was going to be a differentiator in terms of calling their cars "luxury", it's going to be a problem for them. Given that the internals of the Model S itself are not particularly luxurious, they need to think about it.

18
girkyturkey 3 days ago 0 replies      
I have yet to drive a "self-driving" or even "self-monitoring" car, and the thought of doing so terrifies me. I know technology is good and it can do great things, but the idea of it helping me drive is not one I enjoy. What happens if, however unlikely, someone were to hack my car? They could potentially crash my car and leave without a trace. I think we really need to take a step back and ask if the benefits outweigh the potential costs/risks.
19
jtouri 4 days ago 2 replies      
There was a startup offering a $10,000 third-party option package that made cars self-driving. I wonder how it is doing now that all these car companies are creating their own self-driving options.
20
winter_blue 4 days ago 0 replies      
> The Obama administration has proposed spending $4 billion to accelerate autonomous-car technology during the next decade.

Hmm, what are they spending it on? There's a lot of money being spent on developing this technology already by multiple private companies. I assume it's for something else...

IMO government research money should go into stuff that private industry is unwilling to fund, like pure math, theoretical physics/CS, and other things that have very long-term yield/results timelines.

21
encoderer 4 days ago 0 replies      
It might take a while before self driving is widely available, but self stopping is here today. It's now standard equipment in all new Mercedes -- including cars selling for about $30k. I have an entry-level Mercedes and it includes blind spot radar, lane tracking, and collision avoidance that will stop your car automatically if you're distracted or incapacitated. These features are available widely from most automakers now.
22
sweetbabyjesus 3 days ago 0 replies      
Exciting title, disappointing marketing piece for Honda. WSJ has sunk to Forbes levels of promoted content.
23
nashashmi 4 days ago 0 replies      
The conversation on this HN thread makes me wonder about the future generation who will never learn how to drive.

And then I feel sorry for the generation in between who will be confused with automatic driving happening only sometimes.

Such interrupted learning, or even worse, never having learned, has the power to change culture for the worse.

24
a_imho 3 days ago 0 replies      
The self-driving car is a concept I have a hard time appreciating. It sounds cool, and poses enormous technological and legislative obstacles to overcome, yet I can't figure out the fundamental problem it will solve, compared to the attention it gets. For any use case I can imagine (minus the cool factor) we either have more efficient solutions already, or there are better alternatives to investigate. Plus I figure most people still like to drive.
25
sandra_saltlake 3 days ago 0 replies      
I'm impressed, I'm much more relaxed than before - and that means I'm actually able to just look ahead and think about the road, not about what I'm doing.
26
pnut 4 days ago 2 replies      
The impending dawn of the autonomous car era is the reason why I am not marching on Washington for rail investment.

This is really one time where cynical cheapskate politics may hasten rapid, positive societal change.

27
johnchristopher 4 days ago 0 replies      
Curiously I'd rather pay a premium for a car that can park itself flawlessly. Or drive itself around town while I am busy doing things in said town.
28
spike021 4 days ago 0 replies      
If it's a much cheaper car than the alternatives, would the self-driving capability be implemented with lower-quality hardware and/or software?
29
collyw 4 days ago 2 replies      
Is it legal to let these cars drive themselves?
30
embro 4 days ago 0 replies      
Sadly there is no such thing as an electric Civic. I wish my $30K Nissan Leaf had it.
31
Animats 4 days ago 1 reply      
From the article: "as long as lane markings remain visible and another vehicle is in front of the car." That's more like platooning, which was demoed two decades ago in Demo 97 [1], than self-driving. Are there more details about how this works, especially about autopilot disconnect and user takeover? Tesla's system is known to have trouble with offramps.[2] (Tesla customers are very forgiving. "It's a beta", one says in their forum.)

Honda's follow-the-leader system avoids some of Tesla's problems. Radar systems for not rear-ending the car ahead are already pretty good, and many are already on the road. Lane following by lane markings isn't that reliable, but restricting it to following the car ahead handles traffic jams nicely while locking out most of the hard cases.
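
A sketch of that kind of conservative gating (condition names invented; not Honda's actual implementation):

    # Toy engagement gating: only steer when both lane markings and a
    # lead vehicle are tracked; otherwise fall back to simpler modes.
    def autopilot_mode(lane_markings_ok, lead_vehicle_tracked, radar_ok):
        if radar_ok and lead_vehicle_tracked and lane_markings_ok:
            return "follow-the-leader"      # traffic-jam case; hard cases locked out
        if radar_ok:
            return "adaptive-cruise-only"   # keep distance, no steering
        return "manual"

    print(autopilot_mode(True, True, True))    # follow-the-leader
    print(autopilot_mode(True, False, True))   # adaptive-cruise-only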

The deployment of self-driving systems which are much dumber than Google's is worrisome. I've written before of the "deadly valley" of automated driving. This is another "deadly valley" system. The deadly valley begins where the driver can take their hands off the wheel and tune out. It ends where the automated system can correctly handle more situations than a human driver.

Google is trying hard to get to the far side of the deadly valley. That's good. Look at the problems they're having. Their only known fender-bender in autonomous mode was when the vehicle was trying to deal with a drain blocked with sandbags and very slowly maneuvered around it, to be hit by a bus, while in a wide lane at a right turn, because the AI misjudged the likely behavior of the bus driver. Google's dealing with the hard edge cases. Cruise, on the other hand, ran into a parked car in San Francisco when the autopilot lost lane tracking, veered left, overcorrected right, and the driver took back control too late. That's a more basic failure.

It also shows the problem with semi-autonomous systems. Expecting the driver to compensate for failures of the automation will not work.

Volvo Car Group President and CEO Håkan Samuelsson says that the company will accept full liability whenever one of its cars is in autonomous mode.[3] He has it right. This needs to be a requirement before the "move fast and break things" crowd gets on the road.

[1] http://www.fhwa.dot.gov/publications/research/.../pavements/...
[2] https://forums.teslamotors.com/de_DE/forum/forums/careful-wh...
[3] http://fortune.com/2015/10/07/volvo-liability-self-driving-c...

32
ams6110 4 days ago 0 replies      
No thanks. Just something else to go wrong.
18
Study: Immigrants Founded 51% of U.S. Billion-Dollar Startups wsj.com
445 points by mavelikara  1 day ago   218 comments top 26
1
dalke 21 hours ago 3 replies      
In a slightly older thread concerning this topic, at https://news.ycombinator.com/item?id=11306290 I made some objections to this analysis.

I think there's a fundamental problem in how to interpret this analysis. It says that a company is founded by an immigrant if there is at least one immigrant founder. This means that if 1 of 10 founders is foreign-born, then it is classified as immigrant-founded. An aggregate calculation like this is very likely to give a number which is higher than the more important number: the over- or under-representation of immigrant founders relative to the immigrant population.

For example, if every company has 10 founders, and every company has 1 immigrant founder, then by the given definition, 100% of the companies would be founded by an immigrant. However, only 10% of the founder population would be an immigrant. As immigrants make up about 13% of the US population, this would mean that immigrants would be proportionally less likely to be founders of $1B valuated startups than non-immigrants.
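
To make the toy example concrete, in Python:

    # Toy version of the paragraph above: 10 companies, 10 founders each,
    # exactly 1 immigrant founder per company.
    companies = [["immigrant"] + ["native"] * 9 for _ in range(10)]

    company_rate = sum("immigrant" in c for c in companies) / len(companies)
    founders = [f for c in companies for f in c]
    founder_rate = founders.count("immigrant") / len(founders)

    print(company_rate)   # 1.0 -> "100% of companies founded by immigrants"
    print(founder_rate)   # 0.1 -> immigrants are only 10% of founders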

Curiously, and I believe significantly, the study does not give this population information, and it makes it difficult to determine that ratio.

I did not find an explicit count of the number of immigrant founders. Table 4 has a list of all of the names, and Table 3 has a count of immigrants from a given country, so it's possible to figure this out. I am perturbed that the first contains 60 people and the second 61. Perhaps I have miscounted, but I believe there is an error in the report. (I double-checked the report by searching for "60" or "61", but found no explicit total of the number of immigrant founders.)

I did not find an explicit count of the total number of founders, which means I have to compute that myself from the list of companies. I did not find an explicit list of companies, which means I have to go to the original WSJ data source.

Which I did, though my count was off by one from the report's count. It would have been much better if the report included the explicit list of companies and founder counts.

I took the current WSJ list of 106 companies and picked a few in the top, middle, and bottom range. (Statistical sampling would have been better, I know.) I found founder counts of 65 of them before tedium kicked in. I found 161 founders.

This gives an estimated total founder size of 161/65x87 = 215.5, and implies that about 60/215.5 = 28% of founders are immigrants.

This number, while twice as large as expected from the general immigrant population of the US, is also around half of the eye-catching 51%.

The next step would be to do a sensitivity analysis to give an idea of error bars. I do not know anything beyond basic statistics, but will point out that 60 is a very small number compared to the number of foreign visa, and hardly representative.

Speaking of statistics, while 28% is twice as large as 13%, there's likely also a form of p-hacking, or "garden of forking paths" going on. With enough sampling, you will be able to find very unrepresentative subgroups in your data. Why was this population of $1B startups chosen? Do the results change with $2B? Do they change for companies that go public?

For a more specific example, last year there was an analysis going around which pointed out that "Most high tech companies are founded by First/2nd gen immigrants". This is definitely in the same vein, though with different measures. See https://news.ycombinator.com/item?id=9085970 for my observation that this is almost exactly what you would expect given the immigrant population in the US.

Since I can point to two reports related to the topic, how many more negative and thus unpublished correlations are there?

Finally, the report includes the suggestion that diversity among the founders helps improve the success of the company. If that were the case, then should we not exclude companies where all the founders are from the same foreign country, as in Nutanix, where all three founders are from India?

2
staunch 1 day ago 10 replies      
These are not the downtrodden masses. Immigrant does not mean poor. These are wealthy upper class people coming to the most logical place to create a successful technology company, because they can afford to do so. Poor people are usually stuck wherever they are.

Investors in Silicon Valley care a lot about status and pedigree. A foreigner that can afford elite credentials is far more likely to get funding than a poor local.

And of course H1B visas drive down wages. This is basic supply and demand.

3
pandaman 1 day ago 4 replies      
The article is advocating for increasing H1-B caps. I wonder what H1-B (a non-immigrant visa, which does not allow running your own business) has to do with immigrants founding billion-dollar startups, other than them using it to acquire cheap labor?
4
tinco 23 hours ago 6 replies      
I'm not a U.S. citizen, and obviously one day I hope to be one of those U.S. billion-dollar startup founders, and I would appreciate a visa if it would help me achieve that, but there's something shaky with the premise here.

Why does the U.S. need to depend on the ~80,000-odd first-order immigrants per year to generate that amount of wealth? Couldn't the U.S. somehow draw those ambitious people out of its current population?

What if people could 'emigrate' from Detroit to California? They'd basically be refugees from a low-opportunity environment as well.

Or is it not the fact that they came from a low-opportunity environment, but rather that they came, like Bill Gates, from the tender love and care of rich/successful parents, and being born abroad was just a small hurdle?

5
sonabinu 23 hours ago 0 replies      
Most CS students want to come to the U.S. from around the world because the CS Rock Stars are here! Where else are you able to go to a meet up and strike up a conversation with the person who wrote popular software you admire? How often do you get to do that in another country? The immigrants want to start companies here because there are success stories here - especially immigrant success stories. I love the CS classes here ... I love the passion that students show, I love the side projects people are willing to invest free time in ...
6
Dr_tldr 1 day ago 3 replies      
It's no secret that the continuing economic dominance of the US is not due to a post-WW2 windfall or the unique capabilities of Americans, but more to the ability of the US to attract and retain the very best, and then subsequently give them unique opportunities.

Few nativists are opposed to the best and the brightest (and the already-wealthy) coming to the US in very, very small numbers, but what they don't realize is that it's pretty much impossible to tell in advance who's going to be a winning lottery ticket.

7
mchahn 1 day ago 3 replies      
So those immigrants are taking all that VC money away from me!

(Kidding, kidding, kidding, just a Trump joke).

8
thetruthseeker1 6 hours ago 0 replies      
No matter how you feel about the whole legal immigration thing, if it is true that immigrants, who form significantly less than 51% of the working population, found 51% of the startups, that is an interesting insight statistically speaking (this is not normal, but good).

Maybe there needs to be a distinction between the kind of immigrants who go on to found startups vs the ones who come to take low-paying jobs. Maybe create a different visa category for each. Encourage one, discourage the other (by limiting it), but that doesn't mean that every person who comes under the 1st category starts a company or everybody who comes under the second category doesn't start a company.

I see it as math, maximize one function and minimize the other.

9
convexfunction 23 hours ago 0 replies      
1. Many want it to be true that it's an objectively good idea to allow more immigration, or for there to be a broad consensus that more immigration is objectively good, or for there to be more ammunition to conduct moral posturing upon the outgroup when the outgroup has negative valence about immigration, or maybe two or all three. (I don't particularly disagree, immigration is fine.)

2. One great way to accomplish all of those is to create an association between "immigrant" and "success" or "innovation" in people's minds.

3. People gloss over the difference between P(A|B) and P(B|A) all the time. If you can show that P(B|A) is surprisingly high, and there's sufficient motivation to do so, people will instead think something much more like "A and B are associated" than "A often implies B though not at all necessarily the converse".

4. So, just show that {thing you want people to have positive valence about} is overrepresented in {tiny set of things with positive valence}. This will probably be easy, since there are lots of tiny sets of things with positive valence.

This strategy is far more used in the negative case -- "people who do bad thing Y are often from group X, don't concern yourself with the base rate of bad thing Y among all members of group X" -- but either way it's very effective and very insidious.
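
To put rough numbers on point 3 (the base rate below is invented purely for illustration):

    # P(B|A) vs P(A|B) with made-up numbers: A = "founded a $1B startup",
    # B = "is an immigrant".
    p_b = 0.13           # immigrant share of the population (rough US figure)
    p_a = 1e-6           # base rate of founding a $1B startup (invented)
    p_b_given_a = 0.51   # the headline statistic

    # Bayes: P(A|B) = P(B|A) * P(A) / P(B)
    p_a_given_b = p_b_given_a * p_a / p_b
    print(p_a_given_b)   # ~3.9e-06: still vanishingly rare either way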

10
frandroid 22 hours ago 2 replies      
> These 44 companies, the study says, are collectively valued at $168 billion and create an average of roughly 760 jobs per company in the U.S.

$168B across 44 companies is roughly $3.8B each; divided by 760 jobs per company, that's about $5M of value created per job.

If there's no error in the numbers, that means that employees at these startups are tremendously underpaid, in spite of probably making above-market salaries.
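
Spelled out:

    # The arithmetic behind the value-per-job figure above.
    total_valuation = 168e9   # $168B across the 44 companies
    jobs = 44 * 760           # companies x average jobs per company
    print(f"${total_valuation / jobs / 1e6:.1f}M of valuation per job")  # ~$5.0M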

11
eva1984 22 hours ago 1 reply      
Not on a serious note... this title could be rephrased as 'With no immigrants flooding in, Americans would found 100% of U.S. billion-dollar startups'
12
tim333 1 day ago 1 reply      
They could do with a startup founder visa really. Maybe allow people to hang out and launch companies for a year or 2 and allow conversion to a long term visa if they employ a few Americans. Bit like a relaxed version of the E2 visa.

Edit - I read to the bottom of the article and see they tried to bring something like that in with the "EB-JOBS Act of 2015" which didn't get through the political system.

13
ekianjo 23 hours ago 0 replies      
There aren't many billion-dollar startups out there (44, as said in the article), so the apparently precise percentage (51%) is slightly misleading (the base size is small, so not so representative). If they had looked at, let's say, $500M startups, the actual figure would have been more telling.
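
A back-of-envelope check on how noisy a 51% proportion measured on 44 companies is (simple normal approximation):

    # Standard error of a proportion p measured on a sample of n companies.
    p, n = 0.51, 44
    se = (p * (1 - p) / n) ** 0.5
    print(f"51% +/- {1.96 * se:.0%} (95% CI)")   # roughly +/- 15 points

So anything from the mid-30s to the mid-60s is consistent with this data, which is exactly the base-size problem.
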
14
flurben 11 hours ago 0 replies      
The study found that 51 percent of the country's $1 billion startup companies had at least one immigrant founder.

To say that "immigrants founded 51% of U.S. billion-dollar startups" seems a bit misleading to me.

15
jcuga 12 hours ago 1 reply      
What good is a billion dollar startup if it pays little US tax and provides only one or two hundred high-paying jobs? A $100 million company typically provides just as many high-paying jobs and may pay more US tax than a lot of these tech companies.

Also, a lot of these tech companies that are "shaking up" industries are profitable at the expense of other more traditional businesses (For example: AirBnB taking business from Hotels, Uber from taxi services, etc).

So bringing in more Billion dollar unicorn startups may not be the best thing for a country.

16
nickpsecurity 22 hours ago 1 reply      
What was the class of the immigrant? Where did they go to school? And were those that funded them immigrants?

Then the big picture emerges.

17
Kenji 1 day ago 4 replies      
According to the study, Mr. Bansal couldn't leave his job to start a new company because it was unclear if he'd be able to keep his H-1B status.

For 7 years. What a lock-in. And during that time, your employer can probably abuse you and freeze your salary, knowing exactly that your stay depends on that visa and you have a hard time leaving.

Another point on immigrants. The US does it right: It causes brain-drain in other countries and actually takes in skilled workers. I can guarantee you that the kind of mass immigration central Europe is experiencing right now won't lead to an array of startups.

18
vpkaihla 19 hours ago 0 replies      
49% of US billion-dollar startups were founded by native Americans?
19
nativedude 23 hours ago 3 replies      
We're talking these United States, right?! So unless you're talking 'bout Native Americans, then 99%+ of all companies since 1776 have been created by immigrants. Think about it, folks!
20
falsestprophet 23 hours ago 3 replies      
"These 44 companies, the study says, are collectively valued at $168 billion and create an average of roughly 760 jobs per company."

Firstly, that's about 33,000 jobs, or about 0.02% of the US workforce, so hardly the backbone of the economy.

Secondly, the H1B program, the subject of the article, is not an immigration program but a guest worker program. Generally, H1B workers leave the country and don't go on to start companies because the visa is limited to six years.

21
whybroke 20 hours ago 0 replies      
tldr; WSJ wants to raise the H1B cap and cherry-picks/misrepresents data to justify that.
22
Nutmog 18 hours ago 0 replies      
The arbitrary criteria for deciding what companies to include show they could have just searched for the combination (valued at over $1B at 2016-01-01 and not publicly traded) that gives the most favorable result. Why $1B? Why not $10B, $100M, or $1 million? Why 2016? Why not 2015 or any time in the past decade or two? Why not listed companies? Is that so they can call them "startups", or is it because $1B+ listed companies are mostly founded by American-born people?

I couldn't find any mention of their methodology for selecting criteria in the linked report. Without that, the only conclusion seems to be "water is wet". This doesn't look like science. It looks like politics and is inherently dishonest.

23
known 19 hours ago 0 replies      
Never let your inferiors do you a favor. It'll be extremely costly.
24
hackaflocka 20 hours ago 0 replies      
Pretty sure white immigrants like Elon Musk are not who Donald Trump and his followers are talking about.
25
chetanahuja 21 hours ago 1 reply      
Clicked this expecting to see much breast-beating about H1B and "driving down wages". Top comment has this:

"And of course H1B visas drive down wages"

Wasn't disappointed. Never change hacker news.

26
SCAQTony 20 hours ago 0 replies      
Immigrants are awesome and unfortunately exploited, but don't you think orderly immigration is far more effective than having borders Zerg-rushed?
19
FingerIO: Using Active Sonar for Fine-Grained Finger Tracking washington.edu
609 points by jonbaer  14 hours ago   114 comments top 37
1
parkaboy 9 hours ago 2 replies      
Regarding everyone's latency concerns, as someone who has done low-latency audio processing on Android -- in their defense I'd bet almost anything the demo is meant to only demonstrate the math behind this. Depending on the platform (Android cough), low latency audio processing can be almost a dark art itself. And hey look, they're doing this on Android.

My guess is that they decided to release the demo earlier instead of spending days/weeks getting up to speed with low-latency audio processing in the Android JNI.

It's an academic demo/press release. Not a software release for production/market.

2
magic_man 13 hours ago 4 replies      
I work with sonar and the physical positioning of the sensors is important in trying to get useful results. Why is it these academic types don't release the apks or software? Just publications and maybe a video.
3
jstapels 13 hours ago 4 replies      
I wonder how accurate it really is. The demo video didn't match up with the movements at all and the on-screen drawings looked like prerecorded video that they were trying to sync to.

It's a neat idea, but without a dedicated component or an extremely high-speed RTOS, you're not going to come close to the level of accuracy that's really needed to do the math and still allow interaction.

I don't mean to rain on the parade, but I just don't think they really have anything usable.

4
k_bx 12 hours ago 4 replies      
We need to immediately improve PIN-code protection for cash withdrawals at ATMs. The problem has been there for a while, but man, it gets easier and easier.
5
IanCal 13 hours ago 0 replies      
Really interesting.

Reminds me of SOLI (which is radar rather than sonar): https://www.youtube.com/watch?v=0QNiZfSsPc0

Is there a way of trying this out? I know it'd only be demo line drawing applications but it'd still be interesting to try.

6
sarreph 13 hours ago 0 replies      
I love it when people find ways of using existing hardware with software innovation to make new interactions such as this!
7
wyldfire 13 hours ago 1 reply      
I'm looking forward to the first theremin app.
8
adrianN 13 hours ago 3 replies      
I wonder how much power all the processing draws. Judging from the slow movements and the delayed update on the screens in this video, it's pretty heavy on the processor.
9
crudbug 12 hours ago 0 replies      
Similar to SixthSense [1] work from Media Lab.

[1] https://www.media.mit.edu/research/highlights/sixthsense-wea...

10
faded242 13 hours ago 1 reply      
Uhh.. not the best name choice in my opinion.
11
4684499 13 hours ago 1 reply      
This is much like Project Soli. I started dreading devices like that. Sonars everywhere.
12
verbatim 11 hours ago 1 reply      
I can understand the "I", but what's the "O" part of this?
13
szczys 9 hours ago 0 replies      
Latency looks like a real issue in this demo. If it can be improved this could be important, but think about how impatient you are if your smartphone doesn't respond to your touch immediately. Users have been trained to be irritated by laggy interfaces.
14
aub3bhat 10 hours ago 0 replies      
This is not the same as FingerIO, since it does not use sophisticated signal processing, but it's still interesting. Make sure that you remove your earphones before using it.

https://danielrapp.github.io/doppler/
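
Roughly what the linked demo does, as a sketch with a simulated microphone signal instead of real audio:

    # Doppler gesture detection sketch: play a steady ~20 kHz tone, then
    # look for energy smeared to one side of it in the mic spectrum
    # (a hand moving toward the phone shifts the echo up in frequency).
    import numpy as np

    fs, tone = 44100, 20000.0
    t = np.arange(fs) / fs              # 1 second of samples
    # Simulated mic input: the tone plus a slightly up-shifted echo.
    mic = np.sin(2*np.pi*tone*t) + 0.2*np.sin(2*np.pi*(tone + 60)*t)

    spectrum = np.abs(np.fft.rfft(mic))
    freqs = np.fft.rfftfreq(len(mic), 1/fs)
    band = (freqs > tone + 20) & (freqs < tone + 200)   # upper sideband
    print("motion toward mic:", spectrum[band].max() > 0.1 * spectrum.max())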

15
djsumdog 8 hours ago 0 replies      
I have a feeling this is also really dependent on the hardware. The demos were probably designed around the specific brand of watch and cellphone, since they'd need to know the exact distances between the microphones/speakers.

It's a really cool concept. I wish they'd open source what they have, or at least have plans to open source it. However if this came about via University funding, they'll probably claim IP on it. If it was a student's own fellowship, he/she/they might decide to create a start-up out of it.

16
fitzwatermellow 13 hours ago 0 replies      
This is cool. Thinking about smartphones as sensors opens up so many possibilities, even if their capabilities aren't nearly as accurate as dedicated devices. Wondering if the sonar information can be combined with images from the camera to create a close-range depth camera?
17
xtf 13 hours ago 2 replies      
Cats will love it. That's why the first ultrasound remote never became standard.
18
halotrope 13 hours ago 1 reply      
This looks interesting. Can you try it somewhere?
19
debarshri 13 hours ago 2 replies      
How about security? How does it prevent other users from controlling your device?
20
TTPrograms 8 hours ago 0 replies      
How many microphones do cell phones typically have? I guess I assumed one, though background noise cancelling would certainly be improved by having more. For this kind of positioning it seems the minimum needed would be 3 - and the Android SDK can access those audio streams separately unprocessed? Pretty neat.
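
For what it's worth, the core ranging math needs only one mic; position comes from intersecting ranges across mics. A sketch with a simulated echo (FingerIO itself uses OFDM symbols rather than a chirp; this is just the time-of-flight idea):

    # Echo-based ranging sketch: cross-correlate the transmitted sweep
    # with the mic recording; the lag of the echo peak gives distance.
    import numpy as np

    fs, c = 48000, 343.0                        # sample rate; speed of sound, m/s
    t = np.arange(0, 0.005, 1/fs)
    chirp = np.sin(2*np.pi*(18000 + 2e5*t)*t)   # 18 -> 20 kHz sweep, 5 ms

    delay = int(2 * 0.30 / c * fs)              # round trip for a finger 30 cm away
    rx = np.zeros(2048)
    rx[delay:delay + len(chirp)] += 0.1 * chirp # attenuated, delayed echo

    corr = np.correlate(rx, chirp, mode="valid")
    lag = int(np.argmax(np.abs(corr)))
    print(f"estimated range: {lag / fs * c / 2 * 100:.1f} cm")   # ~30 cm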
21
JamesBaxter 13 hours ago 0 replies      
I wonder how well it would work in a noisy environment?
22
labithiotis 13 hours ago 1 reply      
Isn't it too early for April fools ?
23
eltronix 2 hours ago 0 replies      
It might as well be alchemy
24
sehugg 10 hours ago 0 replies      
I was thinking of something along these lines for a proximity sensor / motion detector application where you don't need very much accuracy.
25
memonkey 10 hours ago 0 replies      
What happens when there are multiple devices around each other emitting the signals? Great proof of concept.
26
yread 13 hours ago 0 replies      
I wonder if you can just run it on any smartphone or you need to configure the positions of microphones beforehand
27
dandare 11 hours ago 0 replies      
I don't get it, how do they track specifically the tip of a finger? Or are they not?
28
k__ 9 hours ago 0 replies      
Ultrasound tracking is always problematic, because of all the noise.

I have the feeling that every few years someone has the idea again to use ultrasound for something; it starts promising, but then the accuracy and lag don't go away and dogs and cats go wild.

29
pizza 2 hours ago 0 replies      
sonar keylogger enabled
30
supergirl 13 hours ago 0 replies      
Cool idea, but the tracking will never be good enough to be practical.

If people really want this type of interaction, then phones will start to incorporate specialized hardware for it.

31
exotiik 10 hours ago 0 replies      
For a second I thought it was April 1st.
32
melling 12 hours ago 0 replies      
There has been a lot of recent work with gesture based computing: Intel Real Sense, Google's Soli, Myo, Leap Motion

https://github.com/melling/ErgonomicNotes/blob/master/README...

Leap Motion made huge improvements a few weeks ago with their Orion SDK:

http://venturebeat.com/2016/03/04/leap-motions-hyper-accurat...

We must be close to actually getting something basic for our desktops.

33
xuan 13 hours ago 0 replies      
very interesting!
34
leosteve78 11 hours ago 0 replies      
It's impressive!
35
basicallydan 13 hours ago 0 replies      
Very cool, good job :)
36
jbverschoor 9 hours ago 0 replies      
Fake video.....
37
nly 10 hours ago 3 replies      
My intuition tells me this just doesn't hold water with respect to information theory, i.e. the number of bits of useful information about a finger you can pull from a microphone. Putting aside human digits, has anyone even demonstrated that you can reliably detect an eighteen-wheeler rig moving toward a phone with this technique? And what about the range of the speaker? Complete nonsense.
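
One way to put a number on that intuition:

    # Classical sonar range resolution is c / (2 * bandwidth).
    c = 343.0            # speed of sound in air, m/s
    bandwidth = 4000.0   # roughly an 18-22 kHz band, about what phone audio allows
    print(f"{c / (2 * bandwidth) * 100:.1f} cm")   # ~4.3 cm per raw echo bin

So sub-centimeter claims have to come from interpolating the echo peak rather than from raw resolution; whether that survives real-world noise is exactly the open question.
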
20
Facebook is the new Excel alexmuir.com
540 points by AlexMuir  1 day ago   245 comments top 36
1
jasode 1 day ago 25 replies      
I didn't perceive the author's comments to be discouraging but if anyone else is feeling bummed out by the Facebook juggernaut, keep in mind what happened to IBM, Microsoft, and Google.

When IBM built the original PC, they let Microsoft keep the rights to the operating system software. In hindsight, it was a massive miscalculation as Microsoft's "software-without-the-hardware" business went on to earn more profits than IBM's hardware. IBM's later weak attempt with OS/2 to beat Windows failed.

Microsoft's then CTO Nathan Myhrvold (who Bill Gates considered one of the smartest guys on the planet) was on a phone call with Larry and Sergei and yelled into the phone that "Search is NOT A PRODUCT!!!". (Probably their MSN hubris speaking.) Microsoft's later attempt with Bing has hardly dented Google's search marketshare.

Google didn't notice the importance of social networking before Facebook did. Google's later attempt with Google+ Circles failed to trigger a mass exodus from Facebook.

Facebook.... <story is yet to be written> but the same thing will happen to them. Somebody will come up with something that blindsides them. Or it might not even sneak up on them. Mark Z himself may look at the early-stage product and conclude that it's not anything special.

There are plenty of untapped business ideas that Facebook will misjudge and dismiss, leaving others to win that market.

But to the author's point, it will suck if you create a product that paints a bullseye and it lines up in Facebook's crosshairs. Microsoft couldn't beat Intuit Quicken with MS Money but they did crush Netscape with IE. They lose some, they win some.

2
olivermarks 1 day ago 6 replies      
Speaking as someone who does restore and modify old cars I can say as a FB user that there are really no groups with any in depth practical help.

FB tends to be a 'look at the car I just bought' or 'look at the project I just finished' or 'the event I was just at' ego space. The vertical, model specific forums continue to be the place to go if you are diving into detail via searches of existing threads and people helping each other in response to specific questions and more importantly problem solving.

IMO FB tends to be the place to share the superficial, and to reduce sophisticated ideas to ephemera. A significant challenge on the internet generally, and particularly on FB, is uninformed advice - YouTube, for example, is full of life-endangering parts how-to videos.

FB is more of a casual after hours clubby bar atmosphere rather than a serious forum in my experience, and i wouldn't go there for advice.

3
Johnie 1 day ago 4 replies      
Reddit is likewise another competitor.

I've always looked at many of the subreddits as a potential for different products or a channel to access customers of that interest.

For example:

 * r/RedditGetsDrawn -- Marketplace for artists and users * r/SomebodyMakeThis -- Request for product * r/DataIsBeautiful -- Collection of people passionate about data visualization * r/IAMA -- Ask Me Anything -- many other community sites have copied this concept.

4
overgard 1 day ago 2 replies      
With the Excel metaphor, I wonder if a better comparison might be craigslist. That's more of a utilitarian swiss army knife that's generally overlooked, but which a lot of social services are indirectly competing with.
5
exolymph 1 day ago 3 replies      
I think this is a serious overestimation of how much people use Facebook as a piece of professional software. And no serious business is going to have a Facebook page without a website -- your competitor there is self-hosted WordPress, not Facebook.
6
harigov 1 day ago 1 reply      
The idea that partially-structured or unstructured Facebook could be a _serious_ competitor for a website that enforces some structure and provides services that can only work with that structure sounds like a bit of a stretch to me. It seems to me that the direction we are moving is towards more structured, purpose-oriented websites that provide better services.
7
api_or_ipa 1 day ago 1 reply      
Excellent article Alex! I'm actually rather amazed that this connection wasn't more readily apparent to me. Nonetheless, I think you didn't address the distinction between B2C and B2B business models. Excel is both a B2B and B2C product, whereas Facebook almost exclusively serves just B2C (who among us solicits business deals using Facebook?).

If your SaaS is targeting B2B then Facebook is a non-issue. It goes almost without saying that Facebook has no teeth to eat your lunch.

8
minimaxir 1 day ago 3 replies      
Excel is ubiquitous because, as noted, it is an incredibly powerful tool. Facebook is ubiquitous because of network effects.

It's not a straight analogy. Specialized use cases can coexist with Facebook, otherwise half the Social-Network-for-X or Facebook-but-not-Facebook startups wouldn't exist. Productivity applications in contrast have an uphill battle and tend to stray away from anything even remotely resembling a spreadsheet UI.

9
Mikeb85 1 day ago 1 reply      
So Facebook is the new hammer used to smash everything that isn't a nail? Or is it the new software no one likes but everyone uses?
10
vermontdevil 1 day ago 2 replies      
NextDoor is a competitor I think.

It's quite popular in my neighborhood and works pretty good as a way to share issues, things for sale, lost pets, etc.

Doubt FB can replicate this unless they set up the verification by postcard process.

11
bikamonki 1 day ago 1 reply      
"Facebook.... <story is yet to be written> but the same thing will happen to them."I believe the story is already out there: FB saw the threats coming and just bought them. Whatsapp is a particularly clear example, I have about a dozen groups of friends, family, hobby, work, etc. On these groups is where we share pictures, memes, links; we plan meetings, meals, outings; we invite and are invited to events, etc. In fact: I already closed my FB account b/c I no longer need/use it. Then there's the fact that soon Whatsapp will be open for business, chat bots will come. Ordering food? Selling a house? Transferring money? Whatsapp is the new social net and FB already bought it.

So the story to be written is who or what will take over Whatsapp.

12
zeveb 1 day ago 0 replies      
He's not wrong, but I hate it. Facebook pages are walled gardens, and I certainly don't want to have to use the Facebook client to fetch them.

This continued destruction of the open Web must end.

13
bitL 1 day ago 0 replies      
Facebook is unbelievably limited for any signs of creativity; even MySpace was like a generation ahead compared to what Facebook is nowadays. Most creatives/content producers use it as a glorified Twitter; women use it for bragging about status/observing the status of others; and everybody else seems to have relegated it to the most banal things (hi family!). If you pay for ads, some strange Italian or Asian company does all the clicks on your ads, so unless you have a sure way to attract people with herd mentality to your ads, B2C is useless as well.
14
karmajunkie 1 day ago 0 replies      
Interesting premise. Excel is my number one indicator for what could be a startup: if you find a complex spreadsheet that's an important part of someone making money, you have a good shot at doing it better. Businesses don't use Excel because it's good at solving their problem. They use it because they don't know how to do it with anything else.
15
DanitaBaires 1 day ago 1 reply      
For me at least this has been so since around 2008. I was working on a startup whose product was a "social network for travellers". Every single feature we came up with, Facebook implemented and reimplemented faster and better. Likes, pages, reviews, photo albums, messenger, etc. You name it. At one point we would think "why bother"; we couldn't compete with that with a team of 3 developers.
16
thanatropism 13 hours ago 0 replies      
Here's the comparison with Excel: it has eliminated thousands upon thousands of low-end coding jobs. But now knowing anything else carries a huge premium.

It used to be the case that economists who wanted to do any quantitative analysis at all had to know Fortran 77 - and punch cards in batch mode. Now people in my field who know some Matlab are already seen as wizards.

17
torus 1 day ago 1 reply      
> People are increasingly using Pages, Groups and Albums for all kinds of things that would previously have justified a whole startup

This is a snarky comment, but if a company can be replaced by a Facebook page then it isn't adding much value.

18
Animats 23 hours ago 0 replies      
Trying to organize anything, even a meeting, over Facebook is a huge pain. Facebook is not a collaboration system. It's just not designed for that.
19
tangled_zans 1 day ago 4 replies      
Obvious question, but which one of you here actually uses facebook groups and what for?

I'm sure I'm a member of a whole bunch, but aside from using them to coordinate society activities back at uni, I haven't really used them in years. I always found the interface to be ugly as hell, and if I want to join a large thriving community on topic X then I just go on /r/topicX.

20
wslh 23 hours ago 0 replies      
I think Facebook is the new Internet [for many people]. The worst thing about this is not the competition with other companies but the competition with core Internet protocols and its meaning. All this while they have an embedded search engine that sucks!
21
ausjke 1 day ago 0 replies      
Really? I'm not a facebooker per se.

However, I do use Nextdoor, which I check once or twice a day for community-related news/events/etc.

22
elcapitan 11 hours ago 0 replies      
Twitter is the new Powerpoint.
23
savanaly 1 day ago 0 replies      
A very astute and logical analogy that hadn't occurred to me. Thanks for the nice, concise write up.
24
emilecantin 1 day ago 0 replies      
That's been true for a few years now. We've faced it when trying to launch a ridesharing startup (before Uber was a thing); everyone was already organized around Facebook groups.
25
jeeyoungk 1 day ago 0 replies      
"Facebook is the Facebook for X" is way I summarize it.
26
p4wnc6 1 day ago 0 replies      
I successfully avoided ever working in Excel when I worked in quant finance, and I also do not have a Facebook account. So... for me anyway, the title checks out.
27
sleepychu 1 day ago 0 replies      
This was really hard to read because it scaled to my 27" 1440p screen and that's really wide for so little text.
28
hackaflocka 1 day ago 0 replies      
Facebook's real identity concept works well when communicating with one's family and friends.

However, most of everything else, people will slowly come to the realization that it's not a good idea to use real identities.

Reviewing a restaurant or small business? Probably not a good idea to leave a lifelong trail of clues about oneself.

29
vtlynch 1 day ago 0 replies      
This article is bad. The "threshold of utility" that you have to cross is VERY low. If you previously could justify a startup that competed with the very minimal features of many of these Facebook products, then that says more about the startup's market than anything else.
30
hudell 1 day ago 0 replies      
And at the end of the post, a Twitter handle.
31
ebbv 1 day ago 1 reply      
I mean this is true, but it's always been true of the internet.

 Pre-web: Usenet
 Early web: Bulletin Boards
 Mid-Late 90s: Yahoo
 Early 00s: Myspace/Livejournal
 Now: Facebook

So, yeah, it's not so much Facebook is the new Excel as it's just the current "place where the average person is going to post things/look for things."

32
jgalt212 23 hours ago 0 replies      
I dunno. My perception of facebook users in the US is it's mostly suburban moms. So I guess if you target that demo, then FB is your Excel.

Away from that demo, I think you can still build a service for a community.

33
jorgecurio 1 day ago 3 replies      
how many of you use fb? I removed all social media 2 years ago
34
known 1 day ago 1 reply      
Can we run pivot tables in Google Docs?
35
lukasb 1 day ago 0 replies      
dude! rss feed!
36
nsgi 1 day ago 1 reply      
> You can guarantee that Facebook has already signed up 99% of your potential userbase.

That's only true if your userbase is relatively young. Facebook is far from ubiquitous among middle-aged people who still have smartphones, etc.

21
Apple Encryption Engineers, If Ordered to Unlock iPhone, Might Resist nytimes.com
452 points by IBM  1 day ago   335 comments top 31
1
kstenerud 1 day ago 17 replies      
Taking this further down the rabbit hole:

Suppose that only about 5 people can do what the FBI wants done. Suppose all 5 refuse, to the point of quitting Apple. Does the FBI now compel them to return to Apple and write the software or go to jail?

And what if one of those engineers says that he doesn't actually know how to do it; Apple only thought he could, but he actually can't. Now we get into territory of proving competency and capability.

2
naaaaak 1 day ago 1 reply      
No matter what happens in this case, to the individual engineers or to Apple, the problem runs much deeper:

- Government power and rights > individual power and rights.
- Mass surveillance of their own people.
- Constitution consistently ignored.
- Civil liberties viewed as an annoyance.
- Militarized police force.
- Secret court systems that "OK" any government action.
- Mainstream media little more than an arm of government propaganda.
- Whistleblowers treated like criminals.
- Indefinite detention laws ready to be used for any reason.
- Can justify any action in the name of "national security".
- Political class rules all.

We have a word for this type of government but but no one is talking about it yet. Whatever the outcome to Apple, a government like this will try again and find other ways to do what they want.

3
studentrob 1 day ago 8 replies      
I honestly feel that engineers need not fall on their swords over this. The decision is up to them, of course. But ultimately, shareholders would expect someone to comply with the court order should the DOJ win. Note that Yahoo was threatened with daily fines of $250,000 for failing to comply in a FISA court case in 2008, and we only just learned this in 2014 [1].

I don't think we would live in a forced back door world for too long. After another 2, 4, or 8 years, we will eventually realize that giving the government a back door to the iPhone did not give it a back door to the myriad of other encrypted communications tools out there. Terrorists will find other ways to hide their communications.

I really don't want to see Apple lose this case, or any sort of anti-encryption bill. I also wouldn't want to see someone throw away working at Apple over it. Apple can maintain its integrity by complying with the law as it has publicly stated. Engineers can remain true to an employer they respect knowing said employer did everything they could to resist the government. There aren't many great employers out there like this. Don't take it for granted.

That's just my 2c.

[1] http://www.theguardian.com/world/2014/sep/11/yahoo-nsa-lawsu...

4
cpt1138 23 hours ago 0 replies      
Back in the days of the PalmV I was aghast at the terrible "technique" they used to store the user password to unlock the device. I was young and very stupid, but I pushed through proper, for that time, handling of the password.

With a court order, LE asked me to unlock a device; I was able to do it, did it, and they sent me a letter of thanks, which I still might have somewhere. I remember being happy to help; it was a drug case, and drugs are bad mmmmkay.

In thinking about it I'm embarrassed at my younger self, but also cognizant that anyone familiar with the art could break it. It was a terrible, reversible scheme. After I pushed through the change to store the password I was confident that it could not be reversed and that it was "safe" and that I could no longer break it.

If they had suggested removing the other safeguards, e.g. allowing any number of tries, that would be this Apple situation, and I really hope my younger self would have had the sense to plead "ignorance," refuse, or whatever, because my principles have not changed that much, and I am 100% on Apple's side on this issue.

5
hysan 22 hours ago 1 reply      
Question, does Apple employ any engineers that are not US citizens? Or are telecommute workers living in other countries? If so and they were one of the key engineers, what would happen if they refused? What type of international laws would come into play here?
6
moioci 1 day ago 6 replies      
Interesting parallel with Lavabit pointed out at the end of the article. Would the DOJ be willing to risk shutting down the biggest and most profitable corporation on earth over this?
7
joshka 23 hours ago 1 reply      
Even putting the engineers in the position to have to make a choice to resist means that you damage their career prospects whichever way they choose. To be known as 'that guy' who {supported a corrupt government/supported terrorism} polarizes future job choices. There's no real upside. As someone asked to do this, I'd be asking for all future earnings up front from the FBI, so the economic worth of this is really 8 or 9 figures.
8
staunch 22 hours ago 1 reply      
It may be time for that Hippocratic oath for engineers idea to become a reality. I'd take it and live by it.
9
rbobby 11 hours ago 0 replies      
Meh. The FBI has already suggested that it might request the code signing certificate and the full source tree. With those in hand, I'd expect the NSA programmers could get this done in a few months.
10
neugier 3 hours ago 0 replies      
How do we know this isn't all fake? That the FBI doesn't already have access to iPhones, and just wants people to feel safe (from them)?
11
bunkydoo 14 hours ago 0 replies      
I'm amazed to think that in only a short number of years the work that has been done at Apple R&D in the US might have to go offshore because of our own government...
12
marricks 1 day ago 2 replies      
I wonder if Apple would finance their legal fees if they resisted, or would that be considered some sort of encouragement?

It might very well help public appeal if there was a person resisting the government compared to a large corporation.

Then again, if it gets to the point of the government ordering Apple to break its security, it seems like Apple has already lost the case.

13
hollander 17 hours ago 1 reply      
Guess what happens if the FBI wins this, and Apple is forced to comply, and actually does decrypt this one phone, and probably later on hundreds or thousands of other iPhone 5 phones. Guess what: Apple will make the system so secure that this can never happen again.

This whole discussion has led me to reconsider the much too expensive iPhones, and my next phone might very well be an iPhone 6 or newer.

14
superuser2 1 day ago 1 reply      
If Apple continues to resist, the FBI will simply take the source code and signing keys and hand them over to some contractor to do the work. Is that better? Apple's source code and signing keys in the FBI's hands?
15
ekianjo 21 hours ago 1 reply      
> "It's an independent culture and a rebellious one," said Jean-Louis Gassée, a venture capitalist who was once an engineering manager at Apple. "If the government tries to compel testimony or action from these engineers, good luck with that."

Funny to see Jean-Louis's name out of the blue again. He was the creator of BeOS back in the day.

16
schwarze_pest 12 hours ago 0 replies      
Independent of the outcome of this case, maybe it is time for Apple to leave its current jurisdiction. I hear Iceland is lovely this time of the year.
17
nxzero 1 day ago 0 replies      
It would be funny if a bug was introduced that, once it became known, turned out to be a patch to the backdoor instead of an exploit.
18
spinlock 10 hours ago 0 replies      
I'd love to see the people behind healthcare.gov tell Apple employees that they were moving too slowly.
19
simonh 17 hours ago 1 reply      
Surely the moral responsibility of a manager ordering such work done is just as great as that of an Engineer carrying it out. So why is the debate entirely about the Engineers refusing to do the work, and nobody is talking about managers refusing to give the order?

I'm not saying that anyone should refuse, I think that's a foolish idea and as has been pointed out the Government has many tools and sanctions available it can use to compel compliance. I just find the current debate somewhat blinkered.

20
muddi900 23 hours ago 0 replies      
What sort of decryption task would even be needed? Supposing Apple can update the phone with a signed, backdoored update, the DOJ order never asked for decryption, only a way to brute-force the phone.
21
yarou 1 day ago 3 replies      
This is very much a case of civil disobedience. Non-violent struggle is a surprisingly effective tactic.

As Gandhi states:

>You can chain me, you can torture me, you can even destroy this body, but you will never imprison my mind.

22
bicknergseng 1 day ago 0 replies      
I actually just asked this question in another thread 2 days ago. Really interested in how it would play out from a legal standpoint.
23
spectrum1234 1 day ago 0 replies      
Apple's poor org structure is a blessing in this case:

Apple said in court filings last month that it would take from six to 10 engineers up to a month to meet the government's demands. However, because Apple is so compartmentalized, the challenge of building what the company described as GovtOS would be substantially complicated if key employees refused to do the work.

24
throwaway7798 1 day ago 0 replies      
What exactly does Apple need to crack the iPhone? A bunch of signing keys?
25
awinter-py 1 day ago 1 reply      
the EPIC quote comparing backdoorization for a security dev to euthanasia for a doctor is weirdly confusing; it flips the script on personal freedoms.
26
mrmondo 17 hours ago 0 replies      
and I for one would stand by them for doing so.
27
harryh 1 day ago 3 replies      
Son, go to your room!

But I don't want to go to my room!

Son, go to your room!

Mommy, what would happen if on the way to my room I ran into a pack of wild dogs in the hallway blocking my path? Would I still have to go to my room?

28
tacos 1 day ago 0 replies      
"Apple Encryption Engineers, If Ordered to Unlock iPhone, Might Resist"

Um... no. Perhaps until they get a whiff of a professional, um, "motivator" in the guise of an FBI agent or carefully-chosen warden. Some of you guys crack under the pressure of solving a C++ warning. The guy who upvotes every "Ten things about being an Introvert" post at HN will last precisely ten seconds when presented with that reality.

I admire a good hunger strike every now and then but this case has been mismanaged by both sides. Slippery slopes and domino theories but really -- you're gonna rot in jail versus coughing up a pin code to protect the privacy of a dead terrorist? This could have been narrowed, should have been narrowed, and an anonymous post card with four digits on it could end the standoff. And that's the way it's always been done. Apple seems ignorant of this reality and they are going to pay a dear price for their position -- even before they incur the cost of forcing employees into an ethical rat trap.

29
intrasight 21 hours ago 0 replies      
You all understand, right, that Apple is going to lose this fight. I'm sure the smart players, including Apple, are already planning for the eventuality. There is no right to have closed-source software.
30
freewizard 1 day ago 0 replies      
Dear FBI, why bother Apple? Just hire Chinese or Korean engineers[1] to crack it! They know more backdoors than Apple does.

[1] http://blog.trendmicro.com/pwn2own-day-1-recap/

31
iamleppert 1 day ago 6 replies      
I bet none of these engineers have spent a single night in jail. All it would take would be for a judge to send a single one to jail for the weekend, and they'd be happy to quickly bang out whatever the government wanted as soon as possible on Monday morning.

The sentiment is nice, but I doubt the government is worried in the slightest. The government is all-powerful and can do whatever it wants, lest we forget.

22
Tcpdump is amazing jvns.ca
549 points by ingve  1 day ago   101 comments top 38
1
mettamage 1 day ago 3 replies      
Tcpdump is amazing. The quickness you get with it over Wireshark is awesome. I love that it is a command-line tool instead of a GUI tool, since I needed to analyze TCP packets for quick debugging purposes.

But for the people who are new(ish) to tcpdump: have you heard of libnet and libpcap? You can basically build your own tcpdump! :D

I was amazed at the speed packets fire when you program it yourself in C.

See:
https://github.com/the-tcpdump-group/libpcap <-- CAPturing packets
https://github.com/sam-github/libnet <-- sending packets

Libnet tutorial that I used religiously:
https://repolinux.wordpress.com/2011/09/18/libnet-1-1-tutori...

2
DyslexicAtheist 1 day ago 4 replies      
>> Also if you understand how to reason about the overhead of using tcpdump ("below 2 MB/s is always ok"?), I would REALLY REALLY LOVE TO KNOW. Please tell me.

I think when Dick Sites from Google says he has a rule of "stay below 1% of overhead" when analyzing traffic on datacenter nodes, he wants you to select the right tool for the job. In the context of tcpdump, you can run it with so many options that make it very powerful. But that power is dangerous in the hands of a novice user. A simple error in how you run it (maybe missing filters or too wide an address space) can cause you to shoot well above the theoretical 1% limit. But that's not the tool's fault IMO.

Anyway this seems more theoretical, because in practice I'd prefer a hardware-based network tap, and analyze that without creating any risk to the live traffic.

3
jlgaddis 1 day ago 1 reply      
As a network engineer, tcpdump and friends are on my list of most frequently used applications.

If you're not a network engineer, you'd be amazed at how many times we get issues escalated to us exclaiming that "it's the firewall" and demanding that we fix it.

It's usually not the firewall, though, and unfortunately it falls on us to prove that that's the case. It's not always easy but, luckily, tcpdump and friends allow me to show that and I can punt the issues back to where they came from.

(Several years ago, I was able to prove it was a customer's on-premise firewall -- managed by them -- and not our firewall based upon the packet timestamps and this little thing known as "the speed of light".)

4
majke 1 day ago 2 replies      
It's a pity the article doesn't touch on BPF. BPF is the magical bit in tcpdump.

https://blog.cloudflare.com/bpf-the-forgotten-bytecode/
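
If you want to peek at that bytecode, tcpdump itself can print the compiled packet-matching code for any filter expression - a quick illustration (the filter here is arbitrary):

 tcpdump -d 'tcp port 80'

-d prints the BPF instructions in a readable assembler-like form, and -dd emits the same program as a C array you can embed in your own code.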

5
raulk 1 day ago 1 reply      
Actually, you can decode a stream as HTTP traffic in Wireshark. It assembles related TCP packets and puts together the conversation from the viewpoint of the application layer.
6
kubov 1 day ago 1 reply      
You can use netsh on Windows without needing to install anything external (besides the capture file viewer, Microsoft's Message Analyzer, but that can be run on your workstation rather than on the servers).

 netsh trace start capture=yes IPv4.Address=10.2.0.1
 netsh trace stop
The last command will output the path to an .etl file containing the captured packets.

https://technet.microsoft.com/en-us/library/dd878517(v=ws.10...

https://isc.sans.edu/forums/diary/No+Wireshark+No+TCPDump+No...

7
zaro 1 day ago 2 replies      
I read the title and think "Ok, let's see what tcpdump has that wireshark doesn't" and what do I find inside?

Article about wireshark :)

8
pi-rat 1 day ago 1 reply      
I really love ngrep[1] for debugging network traffic (for smaller problems).

Give it a filter (BPF) and the pattern you're looking for, and off you go:

 $ ngrep -W byline -d en0 "INVITE" port sip and host sip.phone.tld

[1]: http://ngrep.sourceforge.net/usage.html

9
pjc50 1 day ago 3 replies      
Wireshark is great, but it's fundamentally an interactive tool; you fire it up when you have a problem to look into and close it down afterwards. Running wireshark/tcpdump all the time seems like it would generate too much data. I'm not sure how well Wireshark handles dumps larger than available RAM, either.

The author mentions the pcap filter language, but one of the misfeatures of Wireshark is that it has a different filter language for the GUI filter box.
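
For instance (an illustrative sketch, not an exhaustive comparison), matching port-80 traffic has to be spelled differently in each language:

 tcp port 80      <-- pcap capture filter (tcpdump, and Wireshark's capture options)
 tcp.port == 80   <-- Wireshark display filter (the GUI filter box)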

10
dorfsmay 1 day ago 2 replies      
Some Linux distros (Debian family) separate tshark from wireshark, which is great because that means you can install tshark without X-Windows etc... I wish the other predominant distro family (Fedora) did the same.

I have not used tcpdump a lot, but I believe there is nothing it can do that tshark cannot.

Tshark can capture only the portion of traffic you want via filters, which can reduce the size of your pcap files considerably, and possibly have less of an impact on performance.

When using tshark, make sure you capture the traffic to a file too, so you can go back to look at something that happened x seconds or minutes ago.
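
A minimal sketch of both points at once (interface and filter are illustrative); this writes only DNS traffic to a file you can revisit later:

 tshark -i eth0 -f "port 53" -w dns.pcap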

I understand the security concerns, but for me, being able to debug with tshark (or an equivalent tool) is a very good reason for not using HTTPS internally.

11
fisheuler 1 day ago 1 reply      
Tcpdump uses the BPF syntax to filter packets. In the current Linux kernel, BPF has been implemented and extended as a kernel virtual machine; when combined with the perf module, it can be used to collect trace info from the system. See https://lwn.net/Articles/599755/
12
magoon 1 day ago 1 reply      
One of my favs, a tcpdump filter for HTTP including request headers:

sudo tcpdump -s 0 -A 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420'

This is useful for troubleshooting outbound requests that your backends are making. I've had the interesting logic explained to me but can't remember the details.
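
If I have the logic right (reconstructing from memory, so treat this as a sketch): tcp[12:1] reads the byte holding the TCP data-offset nibble; masking with 0xf0 and shifting right by 2 converts the header length from 32-bit words to bytes, i.e. the offset where the payload starts. The filter then compares the first 4 payload bytes against 0x47455420, which is ASCII for "GET ". The same trick should match other methods, e.g.:

 sudo tcpdump -s 0 -A 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x504F5354'

where 0x504F5354 is ASCII for "POST".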

13
ccozan 1 day ago 1 reply      
Aren't tcpdump and wireshark essentially pretty interfaces to libpcap? Also linking the home page of tcpdump/libpcap [1]; check its documentation about packet capturing.

[1] http://www.tcpdump.org/

14
theptip 1 day ago 0 replies      
One thing that you have to watch out for with tcpdump in the field (and especially in production) is that it doesn't rotate its capture files by default, so you'll eventually end up with very large capture files which aren't loadable in Wireshark (which struggles to load capture files of size ~= available RAM).

See http://superuser.com/questions/904786/tcpdump-rotate-capture... for an example of the required syntax.

Note that tcpdump's default behaviour here is better than Wireshark's; last I checked, Wireshark just crashes when your capture file exceeds the available RAM. Again, you can enable file rotation, but many don't realize this until they have been bitten by it while attempting an overnight capture of a production issue...
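
For reference, a bounded ring-buffer capture looks something like this (interface and sizes are illustrative):

 tcpdump -i eth0 -w ring.pcap -C 100 -W 10

-C rotates the file when it reaches roughly 100 MB and -W caps the ring at 10 files, so an unattended overnight capture can't fill the disk.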

15
annnnd 1 day ago 1 reply      
I started really using tcpdump when I discovered the "-X" switch. It displays all the traffic as it happens in hexadecimal, so you can then use other standard unix tools (grep, less, redirection...) to check the traffic. For inspection, recording is still better, but nothing beats "tcpdump -X" when you just want to know if the packets are arriving at some port.
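
A sketch of the pattern (interface and port are placeholders):

 sudo tcpdump -n -X -i eth0 port 8080 | less

The ASCII column on the right-hand side of -X output is what makes grep useful here: you can pick out payload strings without opening a GUI at all.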
16
dTal 1 day ago 0 replies      
tcpflow is another useful tool. It's similar to tcpdump, but reassembles connections by sequence number. I've found it easier to use for application-layer analysis, where you care more about the data being sent than what literally appears on the wire.
17
mclovinit 1 day ago 0 replies      
I have used tcpdump in the past to capture traffic when I had physical access and the ability to implement a hardware tap, analyzing packets after feeding them in with tcpreplay from another network's pcap file (IIRC). The point was to configure an IDS like Snorby using rules I derived from the packets I analyzed in Wireshark (from the resulting pcap file).

However, I haven't seen a need to use tcpdump in a while, since my problem domains have been quite different; my focus back then was primarily network monitoring. Usually, performance problems where I have worked have been easy enough to identify at a higher layer (e.g. n+1 select issues with SQL).

18
qz_ 1 day ago 1 reply      
Julia Evans' blog is pretty great overall.
19
hunterjrj 1 day ago 0 replies      
Just yesterday tcpdump saved me an indeterminate amount of time working with a Horribly Problematic vendor's support team by allowing me to sniff the loopback interface to pinpoint which of two (closed-source) components was introducing a sizeable processing delay in our real-time network event management system.

I am grateful for the tool.

20
0xxon 1 day ago 0 replies      
If you are interested in tcpdump and use it for debugging, you might also be interested in the Bro network monitoring system (http://bro.org).

It gives you very deep visibility into the supported protocols, dumps easy-to-parse log files by default (see e.g. https://www.bro.org/sphinx-git/httpmonitor/index.html for HTTP information) - and it is fully scriptable.

(Disclaimer: I am involved with the project.)

21
xiaq 1 day ago 0 replies      
You can capture with tcpdump while analyzing it with wireshark at the same time (https://wiki.wireshark.org/CaptureSetup/Pipes#Remote_Capture).
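
The remote variant is a one-liner too; a sketch under the assumption of ssh access (host and interface are placeholders, and excluding port 22 keeps your own session out of the capture):

 ssh user@host 'tcpdump -i eth0 -U -w - not port 22' | wireshark -k -i -

-U packet-buffers the output and -w - writes raw pcap to stdout, while wireshark -k -i - starts capturing immediately from stdin.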
22
danielrm26 1 day ago 0 replies      
Here's my primer for anyone interested:

https://danielmiessler.com/study/tcpdump/

23
nanodano 1 day ago 0 replies      
Tcpdump and wireshark are built on top of libpcap. If you are interested in learning how to write programs in C or Go using libpcap, check out these posts.

http://www.devdungeon.com/content/using-libpcap-c

http://www.devdungeon.com/content/packet-capture-injection-a...

24
NetStrikeForce 1 day ago 0 replies      
We use tcpdump all the time to capture traffic that we later analyse with Wireshark, so I'm a bit surprised by some comments trying to pit the two against each other.
25
foota 1 day ago 0 replies      
This person's blog in general seems pretty great. This was posted a while back from there: https://news.ycombinator.com/item?id=8167546
26
goshx 1 day ago 1 reply      
I apologize if someone already mentioned it, but you can capture packets from Wireshark directly as well, without the need for tcpdump or tshark. However, this is only useful if you want to capture packets on the machine you are running Wireshark, obviously.
27
carlosnunez 1 day ago 0 replies      
tcpdump is fantastic; I often use it in lieu of Wireshark if I can. It's also a bit faster, which kind-of doesn't matter for me since I usually have it output the trace to a file and then use less to go through it.

windump is the windows version of tcpdump: https://www.winpcap.org/windump/

I haven't used it yet but it's libpcap based so I can't imagine it being too different. It has to be at least 2000x better than the piece of shit Microsoft Network Monitor (it's like Wireshark, except so much worse...oh, and it doesn't do promiscuous mode)

28
samuel 1 day ago 0 replies      
With tshark it used to be possible to capture packets using display filters, which was pretty handy in lots of situations. When privilege separation was implemented, the feature was lost (bug 2234).

I understand the importance of privilege separation, but I miss the feature.

29
amelius 1 day ago 1 reply      
The problem is that encryption (https everywhere) spoils its usefulness for the most part.
30
mchahn 1 day ago 0 replies      
I love that the PCAP format has become the de facto standard for analyzer tools to use. I just saw an HN article about a new analyzer based on PCAP.
31
doggydogs94 1 day ago 0 replies      
If you are on Windows, I highly recommend starting with Netmon (Network Monitor) instead. It is 100 times easier to use than tcpdump and Wireshark.
32
epicmellon 1 day ago 1 reply      
It's also great combined with something like curl: you can upload the captures to CloudShark and skip using Wireshark entirely.
33
danieljp 1 day ago 0 replies      
Didn't use -n in any of the examples...please...
34
ausjke 1 day ago 0 replies      
love this, I always thought tshark was just tcpdump for the CLI; did not realize it knows more protocols than tcpdump. Learned something new today. Thanks!
35
abledon 1 day ago 1 reply      
particular advantages over wireshark?
36
saharo 1 day ago 0 replies      
it's really amazing
37
vassilevsky 1 day ago 0 replies      
Tcpdump for president!
38
baghali 1 day ago 0 replies      
For a moment I read Tcpdump as Trump
23
Amazon Echo, home alone with NPR on, got confused and hijacked a thermostat qz.com
398 points by potshot  7 days ago   144 comments top 24
1
bdhe 7 days ago 3 replies      
This reminds me of one of my favorite quotes from Douglas Adams in the Hitchhiker's Guide to the Galaxy. A man not just ahead of his time, but humorous about it too.

> The machine was rather difficult to operate. For years radios had been operated by means of pressing buttons and turning dials; then as the technology became more sophisticated the controls were made touch-sensitive - you merely had to brush the panels with your fingers; now all you had to do was wave your hand in the general direction of the components and hope. It saved a lot of muscular expenditure of course, but meant that you had to sit infuriatingly still if you wanted to keep listening to the same program.

2
imglorp 7 days ago 15 replies      
Wow, this is a new DDOS attack vector. Get an ad on broadcast radio saying stuff like "alexa, order more milk", or "okay google, send a text to xxxxx".
3
eddieroger 7 days ago 2 replies      
What's really great about this is that it's a joke on the future that's been predicted so many times already, my favorite of which being the last vignette on Disney's Carousel of Progress. The future family is talking about points in a video game, and the oven hears it and turns the temperature way up, ruining another family Christmas dinner - the joke being that this convenience was finally going to make Dad able to not ruin dinner.
4
userbinator 6 days ago 0 replies      
Somewhat related story: me and some coworkers were talking in a room where someone had a Windows 10 laptop being used to present some data. We were talking as usual when the laptop suddenly decides to open a browser to a Bing search with what looked like a few (badly) voice-recognised words of our conversation. That was a rather awkward moment, given that we were discussing some extremely confidential information, and not helped by the "did someone say 'Hey Cortana'?" the laptop's owner promptly blurted out. If I remember correctly, none of us said anything that sounded remotely like that phrase, yet it activated.

It's now company policy that built-in microphones have to be disabled, and only external ones are allowed to be used when necessary.

5
brebla 7 days ago 1 reply      
Am I reading this correctly? Amazon essentially built a better integrated version of "The Clapper" https://www.youtube.com/watch?v=Ny8-G8EoWOw
6
mmanfrin 7 days ago 5 replies      
I think they need to pick a different name. 'Alexa' is very easy to trigger with other names, and reliably activates when I am watching any show with a character named 'Alex', 'Alexy', etc.

One side effect I've noticed is that they seem to have tried to account for it, which has made the Echo less responsive to actual requests; a few times I've stood in front of it yelling 'ALEXA' trying to get it to stop and it does not respond.

7
minimaxir 7 days ago 1 reply      
Interestingly, the same thing happened about 2 years ago with the Xbox One: http://www.slate.com/blogs/future_tense/2014/06/13/kinect_vo...
8
scott_s 7 days ago 2 replies      
This happens to me with Siri and podcasts - I listen to podcasts in my car, through my iPhone. Occasionally what people say will sound close enough to "Hey, Siri" that it stops the podcast and answers whatever question it could extract from the talking following what it thought was "Hey, Siri".

It's repeatable, too. One time it happened right as I was parking, on an episode of This American Life. (Or Serial. Or Planet Money. Yeah, yeah, I listen to a lot of NPR shows.) So I kept rewinding back over that part, and it kept triggering Siri.

10
chucksmash 7 days ago 0 replies      
Sometimes when you try to recognize speech you wreck a nice beach.
11
tlrobinson 7 days ago 1 reply      
I, for one, am looking forward to the day Alexa, Siri, Cortana, and Google Now can hold full conversations with each other.
12
mrbill 7 days ago 0 replies      
I had the wake-word on mine set to "Amazon" and then made the mistake of watching an online training video for AWS....

Had to stop it and change the wake word back to "Alexa".

13
dredmorbius 7 days ago 1 reply      
I see a tremendous future in direct-to-voice-response advertising. Particularly for purchase-capable systems.
14
sxates 6 days ago 0 replies      
I had something similar happen watching Battlestar Galactica on my Xbox and Kinect a few years back.

The show went through the opening sequence, then announced "Previously on Battlestar Galactica" at which point the xbox rewound back to the beginning of the show.

15
zanok 7 days ago 0 replies      
It reminds me of the Toyota radio ad that would place iOS into airplane mode.

https://news.ycombinator.com/item?id=9869797

16
beedogs 6 days ago 1 reply      
I guess I must be from the wrong generation, because none of these voice-activated products make any sense to me whatsoever. I really just can't see the point.
17
joeblau 7 days ago 0 replies      
I had a pretty funny story a few months ago. I was watching San Andreas and there is one part where Paul Giamatti (Dr. Lawrence Hayes) yells "ALEXI..." and sure enough the Amazon Echo turns on. I had to stop the movie and turn the Echo off because it subsequently tried to process everything the movie was saying after the trigger word.
18
jkot 7 days ago 1 reply      
That is a serious security issue; many apps and webpages have permission to use the speaker.
19
grogenaut 6 days ago 0 replies      
I was on a PS4 launch title. We seriously considered writing things like "Xbox Off" into the script. Also that "Alexa buy me a motorcycle" commercial supposedly triggers it all the time.
20
yorwba 7 days ago 1 reply      
For most voice control applications, trigger words are enough to reliably detect owner intent, but it seems Echo needs a better mechanism. Maybe adding cameras and looking for eye contact would work?
21
nialv7 7 days ago 1 reply      
I don't understand why would anyone think having a remote control system without any form of encryption or authentication is a good idea.
22
MikeTLive 7 days ago 0 replies      
Listening to XM radio, they frequently have station identification announcements.

"Siri us xm..."

With the iPhone plugged in to charge while driving to work, hilarity ensues as it cuts out the audio to speak about whatever it thinks was asked.

24
ljk 7 days ago 0 replies      
Wow 30 Rock predicted the future!
24
Work for only 3 hours a day, but everyday plumshell.com
508 points by NonUmemoto  5 days ago   141 comments top 27
1
err4nt 5 days ago 4 replies      
I have experienced a similar thing while freelancing in design and web development. I used to work 16 hours some days and fewer hours others, but then sometimes I would need to work and found it hard to kick into gear.

I think creativity is like a well, and when you do creative work it's like drawing that water out. If you use too much water one day, the well runs dry. You have to wait for the groundwater to fill it up again.

Not only did I begin viewing creativity as a limited resource I create and have access to over time, but I noticed that some activities, like reading about science, listening to music, and walking around town actually increase the rate at which the well fills up.

So now I have made it a daily habit of doing things that inspire me, and I also draw from the well daily like the author said - but I'm more careful not to creatively 'overdo it' and leave myself unable to be creative the next day.

Viewing it this way has helped a lot, for all the same benefits the author listed. I'm in a rhythm where I don't feel I need a break on the weekend; I just still have energy.

2
JacobAldridge 5 days ago 5 replies      
If I told you that every car needed 8 gallons of gas to drive 100 miles, you'd point out I was wrong - so many different makes and models, not to mention variables from tire pressure to driving style.

Yet for the potentially even more complex range that is different people, it amazes me that so much of the advice is didactic - we all need 8 hours sleep, 8 glasses of water, and 8 hours of work with breaks is optimal.

The closest I get to advice is 'learn your body and what works for you'. Thanks to the OP for sharing what works for him.

3
jiblylabs 5 days ago 3 replies      
As a freelancer, I understand where some of the comments "As a freelancer this won't work" are coming from. However, the last year I've flipped my freelancing model where I offer a more productized service with a clearly defined scope and set price. Instead of doing design work for $XXX/h, I'll deliver A,B,C within Timeframe Y, for Price $XXXX. With clearly defined services, I've actually been working for the last 12 months using a similar model, usually constraining myself to 4h/day with weekends off. My productivity + revenue have increased dramatically. Productizing your service makes it easier to market and generate leads, while it gives you the flexibility to work the way you want and actually free up time. Awesome post OP!
4
wilblack 5 days ago 3 replies      
I started contract work last fall. I set my rate assuming a 25-hour work week. At first I tried working ~4 hrs/day, every day. I quickly realized this did not work for me. Working every day, even just a little, is not sustainable for me. I have a family and they are still on the 9-to-5 schedule, so working even a few hours on weekends cut into my family time, which is important to me. So now I force myself to take at least one weekend day off with no programming. This is hard because I love to program. Also, I have a hard cutoff time during weekdays at about 5:30pm when my wife and kid get home. I usually feel like I want to keep working, but that forces me to stop (at least until my daughter goes to bed). So now I work 5 or 6 days a week but seldom exceed 6 hours/day. Most days are closer to 4 hrs. It's great at this pace because I usually always feel like I want to keep programming, so I don't get burnt out. And if I do have an off day, I just don't work.

The problem I am running into now is what do I do with my spare time? All my hobbies are computer based (video games and Raspberry Pi projects) but I am trying to minimize my screen time in my off hours. This will get better in the spring and summer as the weather gets better but during winter on the Oregon Coast going outside is hit or miss.

And I hear you about not being able to go to bed until I solve a problem I am stuck on, that drives me crazy.

5
susam 5 days ago 3 replies      
I agree with this article mostly, although 3 hours a day might be too little to make good progress with work for some people.

This article reminded me of my previous workplace (about 7 years ago), where my manager discouraged engineers from working more than 6 hours a day. He recommended 4 hours of work per day and requested that we not exceed 6 hours of work per day. He believed working fewer hours a day would lead to higher code quality, fewer mistakes and more robust software.

He never went around the floor ensuring that engineers did not exceed 6 hours of work a day, and some engineers did exceed it; however, in my opinion, his team was the most motivated team on the floor.

6
shin_lao 5 days ago 1 reply      
3 hours a day is just not enough for everyone.

For some projects it's perfectly fine, but some tasks can only be done if you focus on them for a large amount of time, working obsessively until you reach a milestone.

The greatest work I have ever done was always done when I retreated like a monk for several weeks, cutting myself off from the whole world and working almost non-stop on the task until I made a significant breakthrough.

Then I go back to the living and share the fruits of my work, and of course, take a well-deserved rest for several days.

The trap most people fall into is that they confuse being active with working.

7
dkopi 5 days ago 1 reply      
I'm pretty sure this has worked for the author, and it will work for a lot of other people as well, but a lot of the benefits raised can still be achieved when working more than 3 hours a day.

A few points are raised in the post:

1. If you only work 3 hours, you're less tempted to go on twitter/facebook/hacker news.

True - but that's really a question of discipline, work environment and how excited you are about what you're working on. It's perfectly possible to perform for 10 hours straight without distractions, just make sure to take an occasional break for physical health.

2. Better prioritization.

Treating your time as a scarce resource helps focus on the core features. But your time is a scarce resource even if you work 12 hours a day. Programmers are in short supply. They cost a lot. And the time you're spending on building your own apps could have been spent freelancing, working on someone else's apps. Always stick a dollar figure on your working hours, even if you're working on your own projects. You should always prioritize your tasks, and always consider paying for something that might save you development time (better computer, better IDE, SaaS solutions, etc).

3. Taking a long break can help you solve a problem you're stuck on.

Personally, I find that taking a short walk, rubber duck debugging or just changing to a different task for a while does the same. If I'm stuck on something, I don't need to stop working on it until tomorrow. I just need an hour or two away from it.

8
andretti1977 5 days ago 2 replies      
I agree with the author, with some exceptions: when you are working as a contractor or freelancer on someone else's project, maybe 3h/day is not acceptable. When you've got externally imposed deadlines, 3h/day may not be sufficient.

But I agree that working less than 8h/day could really be more productive. I also liked the "less stuck for coding" topic, as "...it is sometimes hard to go to bed without solving some unknown issues, and you don't want to stop coding in the middle of it..." - so maybe forcing yourself to stop could be a solution.

Anyway, I would really like to work 4 or 5 hours a day while keeping holidays and weekends free from work, and I think this can only be achieved if you can pay your living with products of your own, such as your apps, and not by freelancing (I am a freelancer and I know it!).

But I enjoyed the idea behind the article and I will try to achieve it one day.

9
rmsaksida 5 days ago 4 replies      
I mostly agree with the author, but I don't see the point of stopping yourself when you're "in the zone". Why lose the flexibility?

What works for me is having a baseline of 3 or 4 hours of daily work, and not imposing any hard limits when I want or need to do extra hours. This works out great, because I have no excuses not to do the boring routine work as it's just a few hours, but I also have the liberty of doing obsessive 10h sessions when I'm trying to solve a tough problem or when I'm working on something fun.

10
jacquesm 5 days ago 1 reply      
There is a much better alternative: work really hard for 2 to 3 months per year and then take the rest of the year off. If you're doing high value consulting you can easily do this. You may have to forego some luxury but that's a very small price to pay for the freedom you get in return.
11
jjoe 5 days ago 0 replies      
It reads like someone who isn't doing much real-time support. This works great for projects that haven't been unveiled, or even ones that require little ongoing maintenance, like a game. But if I worked 3 hours a day, my clients would crucify me.

Sadly, it isn't always possible.

12
maxxxxx 5 days ago 1 reply      
When I was freelancing there were a lot of days when I didn't do much but then there were days when I got into the flow and worked 2 or 3 days almost straight. Most of the time this ended up at around 40 hours/ week on average but in spurts. This was probably the best work environment I have ever been in.

What I hate about the corporate workplace is that it doesn't accept any kind of rhythm but treats you like a machine that performs exactly the same at all times. Nature is built around seasons and so are humans. They are not machines.

I would much prefer to have a time sheet where I can do my 40 hours whenever I feel like it.

13
joeguilmette 5 days ago 0 replies      
I work on a remote team and I am only accountable for my output. I end up working 15-25hrs a week. Sometimes more if something is on fire.

I usually work 7 days a week, but invariably a couple days a week I only work an hour, checking email and replying to people.

The work I do is of better quality, I'm happier, and I easily could work at this pace until the day I die.

14
LiweiZ 5 days ago 0 replies      
I work 4-5 hours every day, but every day on my own project. I wish I could have more time for work, since most of the rest of my time is allocated to housework and taking care of two little ones. I guess the key is to control your work pace. When a sprint is needed and you are ready for it, two weeks with 90-100 hours in each week would not be a bad idea. Just like running: listen to your body, pick your pace and keep going towards your goal.
15
a-saleh 5 days ago 0 replies      
Nice!

I actually had a similar routine while at school, but it was 6 hours a day total: 3 hours in the evening, usually just before I went to sleep (might be 19-22, or 21-24), and 3 hours in the morning, when I woke up and worked before leaving for lectures.

I started doing this because I realized that I am no longer capable of pulling all-nighters. And it worked surprisingly well :-)

16
spajus 5 days ago 2 replies      
How do you pull this off when you are paid by the hour?
17
shanwang 4 days ago 0 replies      
I'm about to quit my day job and work on my own projects. I planned to maintain a 9-6 working style by going to a library with wifi. Reading this post, I'm now thinking maybe I can experiment with different work routines and see which one is more productive for me.
18
amelius 5 days ago 1 reply      
> Making money on the App Store is really tough, and people don't care how many hours I spend on my apps. They only care if it is useful or not. This is a completely result-oriented world, but personally, I like it.

I would guess that, if the OP had a competitor, then the OP would be easily forced out of the market if that competitor worked 4 hours a day :)

19
Shorrock 5 days ago 0 replies      
One size certainly does not fit all; however, my one takeaway is that there is a huge benefit to paying close attention to what works best for you and optimizing your life around that. When you focus on productivity and happiness (often the two are linked), ignoring, when possible, schedules dictated to you, your quality of life will improve.
20
TensionBuoy 5 days ago 2 replies      
3 hours is not enough time to get anything done. I'm self employed. I go 12 hours straight before I realize I should probably eat something. I love what I'm doing so I'm drawn to it all day, every day. At the end of the day I've hardly made a dent in my project though. 3 hours is just getting warmed up.
21
1123581321 5 days ago 0 replies      
I read an essay several years ago that suggested working three focused hours a day. But, it suggested slowly increasing the hours worked while keeping the same level of focus, and doing restorative activities in the remaining time. The idea was that this would "triple" productivity.
22
abledon 5 days ago 0 replies      
This is so true of people who give 100% every moment they work but can't work long hours without feeling drained, compared to someone who goes at 50% and can manage the 40hr work week. I wish this approach would become more recognized.
23
mrottenkolber 5 days ago 0 replies      
What about work 11 hours a week and be happy? Works for me, and I am a freelancer.

Edit: I usually do three blocks of three hours each and one two hour block each week. I find three hours perfect to tackle a problem, and a good chunk to be able to reflect upon afterwards.

24
JoeAltmaier 5 days ago 0 replies      
"Work for only 3 hours a day, but every day".

'everyday' is an adjective

25
jamesjyu 5 days ago 0 replies      
Work hard. Not too much. Focus on what's important.
26
xg15 5 days ago 1 reply      
So no going out for drinks where you might have a hangover the next day?
27
logicallee 5 days ago 7 replies      
Historically, working 24 hours a day (I include sleep because after a certain number of hours you even dream of code or your business) for 1 year typically accomplishes more than working 3 hours per day for 8 years. Or 1.5 hours per day for 16 years. There is just some kind of economy of scale.

---------

EDIT: I got downvoted. Come up with whatever standard of productivity you want (ANY standard that you want) and adduce a single human who, in 16 years of 90 minutes per day, accomplished more than a counter-example I can find of someone in the same field doing in 1 year. 1 year of 24 hours a day strictly dominates 16 years of 90 minutes per day, and you cannot find a single counterexample in any field from any era of humanity. Go ahead and try.

oh and by the way, in 1905 Einstein published 1 paper on the photoelectric effect, for which he won his only Nobel prize, 1 paper on Brownian motion which convinced the only leading anti-atomic theorist of the existence of atoms, 1 paper on a little subject that "reconciles Maxwell's equations for electricity and magnetism with the laws of mechanics by introducing major changes to mechanics close to the speed of light. This later became known as Einstein's special theory of relativity" and 1 paper on mass-energy equivalence, which might have remained obscure if he hadn't worked it into a catchy little ditty referring to an "mc". You might have heard of it? E = mc^2? Well a hundred and ten years later all the physicists are still laying their beats on top.

https://en.wikipedia.org/wiki/Annus_Mirabilis_papers

Your turn. Point to someone who did as much in 16 years by working just 90 minutes per day.

Closer to our own field, Instagram was sold for $1 billion about a year after its founding date, to Facebook. Point out anyone who built $1 billion in value over 16 years working just 90 minutes per day.

25
VPN Comparison Chart docs.google.com
499 points by prawn  3 days ago   191 comments top 23
1
deanclatworthy 2 days ago 6 replies      
PIA changed their business model at the turn of the year to not support the circumvention of geo-restrictions [1]. Given this was a core selling point prior to then, I'd say it's pretty clear they have succumbed to the legal problems associated with it and can no longer be trusted. BBC iPlayer has now been broken for months.

[1] https://support.privateinternetaccess.com/Knowledgebase/Arti...

2
gilrain 2 days ago 5 replies      
In addition to its excellent scorecard here, I can report that I've been extremely happy with IVPN. Very easy to deal with, even for detailed, technical support requests. I got an immediate response from an engineer which addressed my complaint in detail (poor port forwarding setup), and even gave me a timeline for when they were going to fix it. And they did fix it! The port forwarding is great, now.

Also, since this does matter a lot: I have a 100 Mbps connection, and I get between 50-80 Mbps through almost all of their servers, barring understandably slow countries like Hong Kong.

Oh, also, they have multihop, and you select your own entry and exit server from among their pool.

I have no relationship with them, just a satisfied customer, relieved to have found a reliable, consumer VPN after many attempts.

3
SXX 2 days ago 8 replies      
Looks like so much work was put into this list, but I still wonder why on Earth anyone would use a 3rd-party service, especially one based in a weird jurisdiction, for anything other than torrent downloads?

Likely every service with questionable legal status (e.g. all that state there is no logging going on) analyses all traffic for its own needs and is clearly going to steal everything it can. Even TOR exit nodes are more secure, since you at least know they can't be trusted by default.

What advantage is there over your own server, which is unlikely to be monitored by default and still dirt cheap?

4
marklawrutgers 3 days ago 2 replies      
Here's the link so that the top and left tabs stay while you scroll: https://docs.google.com/spreadsheets/d/1FJTvWT5RHFSYuEoFVpAe...

I guess something about htmlview?sle=true breaks that.

5
gmac 2 days ago 1 reply      
Another option is to run your own. I guess it's swings and roundabouts concerning privacy, traceability and so on. I have a script to automate setup of an IKEv2 server on Ubuntu, which seems nice for a balance of security and the availability of built-in clients. I was inspired to set this up by the awful proposed Investigatory Powers Bill going through in the UK.

https://github.com/jawj/IKEv2-setup

6
dguido 3 days ago 5 replies      
This is pretty useless. Put them all under the category of "centralized one-hop VPN." Each of these is a sitting duck for surveillance, law enforcement, hackers, and more! It doesn't even matter who runs it, each one is an attractive enough target for someone to learn how to subvert. And then what? You'll never find out all your data is being scooped up or potentially modified.

If you want to protect your network communications, run your own endpoint. Projects like Streisand and Tinfoil's OpenVPN setup scripts let you stand up and tear down VPN endpoints instantly (just remember to ditch Tor from Streisand, see why here: https://news.ycombinator.com/item?id=10735529).

https://github.com/jlund/streisand

https://www.tinfoilsecurity.com/vpn/new

I would be truly interested if someone developed Ansible scripts that setup an OpenIKED server (http://www.openiked.org/) on your choice of cloud providers, and spit out the configuration instructions for your mobile phone. iOS 9 and OS X 10.11 support IKEv2 out of the box now: https://www.ietf.org/mail-archive/web/ipsec/current/msg09931...

7
kmfrk 2 days ago 3 replies      
Hmm, Cloak is not there. I really like the people behind it, and they truly care about privacy and security. The iOS app is sorta wonky and turns on and off when it shouldn't, though.
8
hackuser 2 days ago 2 replies      
Does anyone know of a hosted VPN service that provides a firewall too?

It seems like the only effective way to control outbound traffic from my Android phone. These solutions don't work effectively:

* Detect and block each outbound connection manually: There are endless holes to close and always new ones; that is playing whack-a-mole.

* Software firewall on phone: The firewall would need to operate on a low enough level to block everything. That is a challenge for all software firewalls, and from my sense of Android's outbound data 'features', that seems especially difficult.

* Hardware firewall of my own: Because my phone is mobile, it's not always connecting through the same hardware. I could create a VPN back to my personal firewall, but then either I must share all my data with my ISP or I must create a 2nd VPN connection from my firewall to a hosted VPN service, which seems like too much latency and complexity.

I can't be the only one who wants this ... ?

9
daheza 2 days ago 2 replies      
I have always wondered when I see charts like this that add a column for bitcoin accepted. I would much rather pay with a prepaid gift card / visa which can be purchased with cash. If I even wanted to pay with bitcoin I have no idea how I could get a balance and remain anonymous.
10
SXX 2 days ago 0 replies      
If anyone related to the list is here, here is my suggestion for improvement: add the year each service started to operate. It may also be worth adding whether there is a real company behind the service, if the country of jurisdiction provides a way to check it exists.

Many clearly wouldn't want to pay for longer periods if a service was created a few months ago and doesn't have a real company behind it.

11
Johnwbh 2 days ago 1 reply      
Hello from China. Lantern https://getlantern.org/ is a fairly reliable free option made by a non-profit. It's slower than the paid options here, but works well enough for gmail, facebook, etc.

Also, the more people outside China who have it the better, so if you wouldn't mind installing it, that would be great.

12
dheera 2 days ago 1 reply      
"Accessibility from China" might be a good column to add.
13
autarch 2 days ago 2 replies      
This would be a lot more useful if the header rows and name column were frozen. Once I scroll I lose all track of what each column means or what row I'm on.
14
revanx_ 2 days ago 0 replies      
I use AirVPN, fairly satisfied; the only thing that bothers me is that they affiliate with ipleak.net, which is a website that checks your connection for DNS leaks among other things. That's great by itself, but it's more of a honeypot at this point because the website heavily relies on Google scripts, so if you happen to have a Google normal/evercookie installed in your browser, you are instantly identified no matter what VPN you use.
15
q4 2 days ago 3 replies      
Thanks for the meticulous efforts, whoever worked on it.

However, I especially didn't understand why some of the values under Privacy > Traffic / DNS Traffic say "NO" but are still in green, or why some of the other values under the same Privacy column, like "connection", say "Yes" but are in red.

Can anyone explain what those mean? Also, does an empty value there mean "no data available" or something else?

16
peeters 2 days ago 1 reply      
Google Docs lets you lock rows so that they always show up. It'd be really helpful if that were done for the headers.
17
Swannie 2 days ago 0 replies      
Could you change the title to "Privacy VPN Service Comparison Chart"?

VPN in of itself is meaningless. This could mean anything from corporate VPN products, through to AWS site to site VPN services.

18
zyxley 2 days ago 2 replies      
One thing I'm curious about is whether there are any VPN services out there that will do virtual LAN functionality, a la Hamachi, but with the same focus on privacy mentioned in the threads here.
19
pteredactyl 2 days ago 0 replies      
We use NordVPN running on an OpenWRT box. Very happy so far.
20
wodenokoto 2 days ago 0 replies      
You really should publish and share the published version of a Google doc, unless users need to edit the shared version.
21
a_c 2 days ago 1 reply      
What's the meaning of the cell colors? Some red cells say "yes" while some say "no"?
22
hathym 2 days ago 0 replies      
it's best to run your own on a cheap VPS; check this out: https://github.com/kylemanna/docker-openvpn
23
frenchie4111 2 days ago 1 reply      
How would you feel if a review site was using affiliate links only in reviews of VPN companies that they really liked, and were glad to promote?
26
A man overrides his camera's firmware to bring rare pictures of North Korea back m1key.me
448 points by jorge-d  4 days ago   172 comments top 37
1
Laforet 3 days ago 8 replies      
The pictures are not bad, but the captions are incredibly cringeworthy and condescending. Of course he would not bother to tell us how to disable the delete in the firmware. Seriously, thousands of people have gone on these state-sponsored package tours in North Korea, and we see the same set of trains, roads, hotels and attractions so often that they have really got boring.

One photography project I did find interesting is the set below, commissioned by Getty Images. The photographers they hired found a loophole in their visa conditions, managed to enter NK from Russia and reached Pyongyang on trains rarely used by tourists. They were able to interact more with the locals, since the border guards as well as people en route had not been "coached" to speak to foreigners, and the whole thing came out feeling much more genuine than these tourist flicks.

Selected pictures featured in Daily Mail: http://www.dailymail.co.uk/news/article-3210256/Fascinating-...

Photographer's Portfolio: http://www.gettyimages.co.uk/search/events/573232783?exclude...

Photographer's written account of their travel (in Chinese), plus a few candid shots that Getty refused to buy:https://www.zhihu.com/question/19972643/answer/81163727

2
easong 3 days ago 7 replies      
I went to North Korea in January as an American (incidentally on the same tour as the kid who is currently being detained there and in the same group). As mentioned in the article, Americans can't go on the train, so I can't speak to his experiences specifically, but I have a few reactions to this.

1. He probably didn't need to modify his camera firmware - I took pictures of basically whatever I wanted and they didn't look or care. The only exception to this was the (singular) grocery store, which I thought was hilarious because it was actually pretty nice. (With surprisingly good food, too! I'm still snacking on some of the candies I bought there.) I went on a helicopter ride and took a bunch of pictures of anti-aircraft guns and the like hoping that they would make me delete them and I would have a story to tell, but no such luck.

2. There's a big difference between the different tour companies. Some of them really sold the dystopian totalitarian state experience, with extremely strict guides who yelled at people on the tour and checked everybody's cameras and so on. The people on those tours seemed to have signed up for that experience and were, I think, happy to receive it. My group was drunk off our collective asses literally 24/7, our guides were making dick jokes, a guy wandered off on his own on New Years Eve and didn't get back to the hotel until ~3am after getting in a fistfight with a cab driver, and it was generally a party. It seems to me like the author of this article went in looking for a dystopia to photograph, and the tour company gave him one.

3. North Korea is poor as hell, obviously has a horrible government, etc. If you've been to other extremely poor parts of the globe it's obvious that it's a poor country trying to pretend to be a rich one. The successful example of neighboring South Korea makes their failure to provide for their citizens even sadder. But I would guess total quality of life is comparable or better in the DPRK than many other places I've been. (Somalia, nasty bits of Bangladesh, etc)

4. The citizens and possibly government are (not totally unreasonably) terrified that the US is going to invade and crush them like ants at any given moment. I think this drives a lot of their government's malinvestments.

3
lingben 3 days ago 4 replies      
NK is really a surreal place; no matter how much you learn about it, something new will still shock you

for example, I watched a documentary about a Western eye doctor who went there pro bono to do cataract surgeries

the thing was, when he gave people their sight back they didn't thank him, they went up to a picture of 'Dear Leader' and wept and prostrated themselves in a cathartic show of gratitude

4
jonah 3 days ago 1 reply      
A couple rail fans booked a train from Vienna to Pyongyang in 2008.

Their travelogue is fascinating, although it can take some effort to wade through the train-schedule minutiae and whatnot if you're not into that; well worth it for the uncensored images.

"The forbidden railway: Vienna - Pyongyang - - - 36 hours in North Korea without a guide..."

http://vienna-pyongyang.blogspot.com/

EDIT: Direct link to skip the European and Russian part of the trip and go directly to crossing the border:

http://vienna-pyongyang.blogspot.com/2008/09/irkutsk-skovoro...

5
sveiss 3 days ago 0 replies      
The photographer shared this set on Reddit around a month ago and included some additional commentary in the thread.

https://www.reddit.com/r/pics/comments/46ahkv/illegal_photo_...

It was, and is, a fascinating series of photos.

6
hohohmm 3 days ago 1 reply      
Pictures are okay if not just generic NK photos, definitely not rare, and the captions, wtf?

"At night, the elderly Chinese dance in the streets in unison avoiding any displays of individuality."What is this? To avoid any display of individuality??? I just marvel at this wishful thinking. They do that in unison because it's fun to do activities with other people sharing the same interest. If i'm playing Starcraft with a bunch of friends, am I avoiding display of individuality? Why can't the biased eyes just state the obvious that the elderly are just enjoying themselves.

Honestly I've seen way better NK photos and way better wishful NK journalism. Why is this even on Hacker News?

7
x1798DE 3 days ago 3 replies      
What a weird title for this. He doesn't even say that that actually happened, just that you can override your firmware to make the delete button not work.
9
Blackthorn 3 days ago 1 reply      
In case anyone was wondering some of the ways rural Chinese were dealing with the aborted-their-baby-girls problem...

> If you're caught escaping by the Chinese, they send you back if you're a man. But the captured women are referred to as "pigs", and sold to Chinese men

10
eulji 3 days ago 1 reply      
WOW. What ignorance.

1) NK's ICBM program is much more advanced than SK's
2) The young generation is openly doubting the regime
3) There's a lot of smuggling of SK's soap operas and music
4) Their nuclear research is top-notch considering the circumstances
5) All other parties involved (US, China, Japan, South Korea, Russia) are all to be blamed for the suffering of 25 million people living in an artificial skansen of communism.
6) Praising the great leader is one way to protect yourself and your family when the "democratic, cool westerners" are not going to help you.

And the guy is openly praising China for being much more advanced? NK's regime is scary, but the one in China is even scarier. They are pragmatic criminals who have embraced the art of trade.

US and SK should have invaded and liberated the NK long time ago. They are as much part of the problem as the NK's leadership is.

11
jorge-d 3 days ago 0 replies      
A lot of people expressed concern about the title. I would just like to say that it comes from a misunderstanding of the original article I read the story from [1], as I explained in another comment [2].

I've discovered plenty of North Korea-related stories on HN that people already cover in the comments section [3] or in other submissions [4] [5].

I thought that given these two points this story would be quite interesting here on HN. As for the title, knowing now that it's not exactly true, I'll let the mods choose a more appropriate one if needed.

[1] https://twitter.com/Dimitri_Jorge/status/709705048150949888

[2] https://news.ycombinator.com/item?id=11288584

[3] https://news.ycombinator.com/item?id=11287297

[4] https://news.ycombinator.com/item?id=5091962

[5] https://news.ycombinator.com/item?id=2541189

12
ZoeZoeBee 3 days ago 3 replies      
It is amazing what a generation of brainwashing and isolation can do to the perspective of a nation. Amazing to hear a quarter of the population is mentally deficient due to malnutrition; no wonder they believe the world is in awe of their accomplishments. I chuckled when I read they believe payments from other nations (international aid) are their spoils from war.

Topographically, what a beautiful country. I hope some day the North Koreans are afforded an opportunity at freedom.

13
hellofunk 3 days ago 0 replies      
Most interesting comments by the photographer:

>"Anyone who composes a work that has not been assigned to the writer through this chain of command is by definition guilty of treason. All written works in North Korea must be initiated in response to a specific request from the Workers Party."

>"After the Korean War, North Korea was economically a more attractive destination than South Korea, and many people, including 100,000 ethnic Koreans from Japan, were welcomed into North Korea."

>"in North Korea you only travel big distances by bus or train, when you get permission."

>"This was one of the most strange moments - when we finally arrived in Pyongyang. Through the courtains of the compartment window, we looked at a surreal scene that appeared like something out of a theatre in its perfection and artifice. Elegant men, beautiful women, walking in a simulated hurry, travellers without a reason (ours was the only train that day), all to impress us and so that the station doesn't look empty."

14
sakopov 3 days ago 0 replies      
North Korea looks like the Soviet Union of the 1950s. It's just amazing how stuck in time this place is.
15
hnfmr 3 days ago 1 reply      
This looks very much like the late 1970s and early 1980s in rural areas of China. For me they are reminiscent of my childhood, when rural areas were still not affected by air/land/water pollution.
16
ivanb 3 days ago 0 replies      
Nice. Here is what I see if I just consider the pictures: very clean cities and countryside, disciplined and healthy people living their simple lives. People work and serve their country. Everyone is poor but equally so. No apparent inequality. People get education and perform arts. Of all poor places on Earth it doesn't look like it is the worst.

So this is how it looks to me if I put aside the usual "North Korea is evil" media context.

17
nether 3 days ago 1 reply      
When my cousin visited the DPRK, she said that all the paper there was like tissue, and that anything made there of plastic would shatter when dropped. Really limited manufacturing abilities.
18
rdl 3 days ago 0 replies      
I wonder if the North Koreans who get killed for letting a device slip by them which then is used to mock their security will be told exactly why they're being tortured/murdered, or if it will just happen. :(
19
b123400 2 days ago 0 replies      
I went to NK a few months ago. While they do check cameras on the train, they didn't check anything at all when I left the country via the airport. They call this "internationalization". I felt a little bit stupid for encrypting all the photos I took.
20
yequalsx 3 days ago 5 replies      
Visiting a country with concentration camps is morally reprehensible. His visit helps to finance and prop up the deplorable North Korean regime.
21
ReedJessen 3 days ago 0 replies      
This is the craziest thing I have read in recent memory: "Anyone who composes a work that has not been assigned to the writer through this chain of command is by definition guilty of treason. All written works in North Korea must be initiated in response to a specific request from the Workers Party."
22
necessity 3 days ago 1 reply      
Beautiful colors. If you hadn't mentioned "firmware" I'd have guessed some professional-grade film was used.
23
swang 3 days ago 0 replies      
Wait, NK women are sold off to the Chinese if they're caught in China? Is this well known?
24
ksrm 3 days ago 0 replies      
Also check out this person's many photos from all over North Korea: https://www.flickr.com/photos/kernbeisser/
25
forgetsusername 3 days ago 0 replies      
I was mostly struck by a "civilized" backdrop (used loosely) with nary a billboard or piece of litter in view. If I didn't know better, I'd say it looked quite nice there. But I know better.
26
dominotw 3 days ago 0 replies      
I am on my way to getting my US citizenship in a couple of years. I've been dreaming about visiting North Korea almost every day for the past 3 years.

Would visiting NK cause me any trouble getting my US citizenship?

27
l33tbro 3 days ago 1 reply      
Admittedly an aside, but why the author's little jibe about Sigma lenses being amateur? My 18-35mm outperforms Zeiss and Canon glass that is 3 times its price.
28
jecjec 3 days ago 1 reply      
I can't do it right now but I plead to those of you with the resources and skill to spin up mirrors of this website to do so. I expect imminent problems.
29
tiatia 3 days ago 0 replies      
30
sriram_iyengar 3 days ago 2 replies      
all i see is:

- less population
- no traffic
- clear air and very little pollution
- farm and live easy
- walk and cycle and stay fit
- a great childhood of enjoying the countryside

Overall, they are 117th in life expectancy, at 65+ years. My greatest and largest democracy offers just about the same.

I don't get any of this living in the greatest democracy's top urban city!

31
lfam 3 days ago 0 replies      
People are carrying heavy things around the city. They don't even have carts or wagons with wheels.
32
sedeki 3 days ago 0 replies      
I am still amazed that NK actually tells people to look like busy travellers at the train station.
33
a_c 3 days ago 0 replies      
Next time North Korea will be confiscating the camera instead...
34
jecjec 3 days ago 1 reply      
OP produces possibly the greatest work of North Korean photography;

Comments: "This isn't that big of a deal. I've seen better."

35
jaseemabid 3 days ago 0 replies      
1984?
36
topspin 3 days ago 1 reply      
True paradise. No commercial billboards. Few cars. Obesity cured. Low energy use. Cooperative domestic politics. Free healthcare.

And all it takes is a couple gulags. Brilliant.

37
dang 3 days ago 1 reply      
This comment and far too many others you've posted are an abuse of HN, not because of the positions you take but because HN isn't a forum for screaming your politics at other people.

Since you've ignored our requests not to do this, I'm banning your account. If you don't want it to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future.

27
An Experimental Autism Treatment Cost Me My Marriage well.blogs.nytimes.com
431 points by salgernon  8 hours ago   254 comments top 25
1
kauffj 7 hours ago 3 replies      
I work with John's son, Jack, on LBRY (http://lbry.io). The whole Robison family is full of people with interesting stories:

- Jack went to trial as a teenager, facing 60 years (!) in prison for chemistry experiments (http://www.masslive.com/localbuzz/index.ssf/2009/06/actionre...)

- John showcasing a guitar that Jack's mother and Jack built for KISS (https://www.youtube.com/watch?v=dXZi4UZjiiI&t=10)

- John's brother is Augusten Burroughs (https://en.wikipedia.org/wiki/Augusten_Burroughs)

I pointed Jack to this thread. I believe he went through the same treatment as John at one point, if people have questions.

2
graeme 7 hours ago 14 replies      
Is the way he describes intensely feeling others emotions normal?

When I was younger, I was awful at reading people. Very shy with others as a result, because I was missing most of the data.

I eventually decided to learn how to read body language. I did some training to recognize expressions, focussed on one skill at a time, and viewed every conversation as practice. I improved to the point that people comment that I'm surprisingly good at reading them.

But the emotions don't hit me the way this author describes. I just....see them.

Granted, I've also practiced stoicism and mindfulness, which explicitly train you not to worry about things like someone insulting you (or hearing a comment that might be construed as insulting).

But I've wondered if something is going on. When I was younger, before learning to read people, I read descriptions of Asperger's and it sounded much like me. Now when I read them they sound not very much like me, because a significant component of those symptom descriptions involves poor social skills.

Thoughts?

3
mchahn 7 hours ago 20 replies      
A functionally autistic woman, Temple Grandin, wrote a fascinating book on autism. She offered a simple test for autism. Think of a church steeple (stop and do that).

If you thought of a real steeple you had actually seen then you probably tend towards autism. If you thought of an abstract non-existing steeple then you don't tend towards it.

I was at a gathering of employees in my company. There were about a dozen random people sitting around a table. I tested the whole group at once. Every single programmer answered with a real steeple and every non-programmer thought abstract.

I know this doesn't represent a real study and chance was involved. But it matches something else she said. Functional autistics with jobs are predominantly programmers. She quoted a number, like 70%, but I don't remember for sure.

I, a programmer, personally prefer human interaction on the web. Meeting in real life, not so much.

4
munificent 7 hours ago 2 replies      
> Later, people at work told me they'd liked me better the way I was before.

Whenever you make a large change in yourself, you are going to alienate people in your life. This doesn't say anything about whether the change is good or bad.

The set of people currently in your life is highly biased towards people who like you the way you are. If they didn't, they wouldn't be in your life.

The more interesting question is after you make a change and get a new set of people, how do those people compare to your old set?

5
theoh 7 hours ago 4 replies      
The extreme sense of feeling the emotions of others that is described in the article seems like something stronger than normal for typical people, but perhaps it's just relative to his previous baseline of little insight into the emotions of others.

Some research opposes the deficits of autism to the excesses of schizophrenia. Not sure it's totally relevant to this item, but seeing emotional meanings where they don't exist is a very schizotypal (positive schizotypy) phenomenon:

http://blogs.scientificamerican.com/beautiful-minds/how-is-c...

6
mfoy_ 8 hours ago 2 replies      
It's hard to imagine what it must be like to go from feeling no emotions to feeling them all, like the colourblind glasses.

It's relatively easy to imagine seeing the world in black and white and then having the colour switch flipped. I can't imagine the same for emotion... what a wild ride it must have been. It must have been so painful at first, especially when he realized that some of his "funny friends" had really been making fun of him...

7
franciscop 7 hours ago 5 replies      
Maybe a more apt title would be "An experimental autism treatment gave me my son back".
8
scott_s 8 hours ago 2 replies      
I'm surprised there was no protocol where he went to regular therapy to help him process the new emotions, and his new ability to read emotional cues in others. I suspect therapy may have helped with his understanding that he gained years later, which is that your perception of someone's emotions is not always correct.
9
RangerScience 8 hours ago 2 replies      
I'm "spectrum" enough to have an opposite reaction to various drugs - Ritalin being the important one, here - and my experience is... similar, in some ways.

I probably didn't notice other people's emotions a lot when I was younger, to the effect that now that I'm older, and do notice them, I frequently don't have any idea what to do with that understanding.

10
orik 8 hours ago 9 replies      
It's too bad John felt he needed to seek treatment to 'cure' his autism.

I was diagnosed when I was 17, and I get to interact with a lot of other students on the spectrum every day at my school. There are a few students that believe autism is something to overcome, and that if they try hard enough perhaps one day they won't be 'autistic'. Most of us are comfortable with the fact that we're different.

11
rusabd 33 minutes ago 0 replies      
Reminds me of this sci-fi story: https://en.wikipedia.org/wiki/Flowers_for_Algernon

"The operation is a success, and within the next three months Charlie's IQ reaches 185. However, as his intelligence, education, and understanding of the world increase, his relationships with people deteriorate"

12
billhendricksjr 8 hours ago 4 replies      
Reminds me of Flowers for Algernon, my grandfather's favorite book.
13
opendomain 8 hours ago 6 replies      
Please note that the results are not reproducible.

This effectively puts a current through the subject's brain and can be dangerous if not administered by a doctor.

14
epx 7 hours ago 0 replies      
I had a slightly similar experience while taking antidepressants. The anxiety went completely away. That made me drive more recklessly (my parents and my wife began to dread taking a lift with me) and I contemplated getting a mistress. Of course, I did compensate for that, once I took note of the effects. But actually a certain level of anxiety is a good tool, and I went off the medication as soon as possible.

I talked about my suspected Asperger syndrome with the psychiatrist, and she said "I could medicate you for that, but are you really sure you want to change?".

15
bung 8 hours ago 1 reply      
> It took me five years to find a new balance and stability. In that time, my sense that I could see into people's souls faded.

I wondered how that would go; we all go through years of social interaction and have to build walls. Nowadays, I don't know if my walls have become too thick or if I never had the same emotional range as others in the first place.

16
nxzero 8 hours ago 2 replies      
It's unclear how it made sense to provide such a potentially life-changing treatment to someone who was likely more than halfway through their adult life. Is it common to perform such a potentially profound treatment when there's a very real chance of it having a negative impact?
17
foobarbecue 5 hours ago 0 replies      
"Seeing emotion didnt make my life happy. It scared me, as the fear I felt in others took hold in me, too." Reminded me of an excellent death metal track: https://www.youtube.com/watch?v=tUVPIknZ9ao

Lyrics include:

The thing that scares me most / Is the fear I see in others / And the thing that really frightens me to the core / Is when I see that fear in you

18
vehementi 7 hours ago 2 replies      
> but instead she said matter-of-factly, "You won't need me anymore."

What? There was no follow-up on this. Why would anyone say that - how does this even make sense at a basic level? Was there no follow-up question from him?

19
leemailll 8 hours ago 0 replies      
Is what he described different from the popular view that autistic people have problems because they are incapable of handling the emotions they sense from others, so they choose to avoid social interaction?
20
callesgg 7 hours ago 0 replies      
I often think about how strong the emotions that other people feel are.

Situations can be turned upside down when trying to think from the perspective of a person with stronger emotional responses.

21
hughperkins 7 hours ago 1 reply      
I think the premise for the story - that applying magnetism to the brain could cure autism tomorrow - seems suspect, and I don't really believe it.
22
barney54 8 hours ago 1 reply      
This experiment is similar to what happens in the novel The Speed of Dark by Elizabeth Moon. http://www.amazon.com/The-Speed-Dark-Elizabeth-Moon/dp/15012...
23
StanislavPetrov 6 hours ago 0 replies      
Not to be insensitive, but what did he really expect? Experimenting with your brain in order to change your perceptions and thought processes is bound to lead to major disruptions in your life.
24
cookiecaper 6 hours ago 1 reply      
The author's life was radically changed by a procedure that runs a magnet over his brain. It appears to have induced a severe bout of depression and anxiety that negatively impacted his performance at work and cost him 2 marriages. He seems to believe that this "treatment" worked, though the empirical evidence would hardly suggest that. Does he believe that he was reading emotions instead of descending into anxious paranoia just because the doctors told him the first thing is what would happen? The unfortunate thing here is that doctors have probably recorded his case as a success.

I've found that the emotions we read out of people are often exaggerated from their true thoughts. It's easy to feel like there's some harsh judgment occurring when, in fact, there isn't. Someone should've told the author this, and not to take his new "emotional superpower" too seriously.

25
flagelate 5 hours ago 0 replies      
I read the entire article. It's better to be emotionless... at least sometimes.
28
Experimental support for WebAssembly in V8 v8project.blogspot.com
393 points by AndrewDucker  3 days ago   262 comments top 35
2
titzer 3 days ago 0 replies      
Standard caveats apply here. It's experimental: a couple of bugfixes are already in flight and we're working hard on improving the startup time.

Overall, a huge thanks to our friends at Mozilla, Microsoft, and Apple, with whom we've worked closely over the past year to make this a reality.

3
xigency 3 days ago 3 replies      
Finally, a reason to go into JavaScript development with a background in compilers! I can see the need for cross-compiling tools and certain special skills for performant web development in the future, especially since even with complete adoption of WebAssembly this will still create a split between browsers that support it and browsers that don't.

From glossing over the wasm spec, it seems it's designed for C++ compilation, which is totally impractical in the environment I develop in. It sounds like a TypeScript-to-WebAssembly compiler isn't on Microsoft's radar. In that case, it might be time to work on some tools for analyzing syntax and start annotating types, because compiling from ECMAScript to WebAssembly would be a great boon in production.

4
fizzbatter 3 days ago 4 replies      
Anyone know how difficult Rust -> WebAssembly will be, compared to C++ -> WebAssembly? I'm hoping to switch some of my Go code to Rust, as i imagine Go -> WebAssembly will be much further out _(GC and all)_, so i'm hoping Rust is much sooner in the pipeline.
5
gregwtmtno 3 days ago 6 replies      
I see the comments here are overwhelmingly positive, so I will keep an open mind. That said, someone please correct me if I'm wrong, but this runs binary code loaded over the internet in your browser? As a free-software advocate, this concerns me.
6
sp332 3 days ago 3 replies      
Are there any plans to run WebAssembly outside of a browser? Would it be difficult (once the binary format is nailed down) to distribute a standalone program?
7
iLoch 3 days ago 2 replies      
Hmm, I wonder whether, and to what, people will switch when they're not forced to use JavaScript. Now any language could become a server-side + client-side language like JS is now.

Will I have the ability to write a library in one language and use it in another as long as they both compile to wasm?

8
spriggan3 3 days ago 3 replies      
So who's going to be first to recreate an entire Flash runtime for WASM? ;)
9
Touche 3 days ago 1 reply      
So when is Google going to deprecate Native Client?
10
dccoolgai 3 days ago 1 reply      
For all the complaining about it, I think I will actually be a little sad to see Javascript go. It wasn't perfect, but it had a powerful unifying effect on the Web Community when we could write, share and learn about our code in a higher-level scripting language that we could all read. (Even if there were some surprising little quirks that took a while to get used to.) Despite the stated intention from WASM - there is just something fundamental about ASTs - no matter how well you pretty-print them - that doesn't allow sharing and shared learning the same way. As a run-of-the-mill guy who makes my bread maintaining websites as much as greenfield-developing them, I feel like I'm in for a bit of a rough ride trying to decipher and debug binary/AST code compiled from Perl (or whatever). Guess I'll have to suck it up and learn to do it, though.
11
systems 3 days ago 10 replies      
does this mean i won't have to learn javascript, css, d3, svg, typescript, jquery, etc ... to make websites and web apps
12
amelius 3 days ago 1 reply      
Will WebAssembly allow us to create custom video codecs? Or is there still a bottleneck somewhere that makes this unrealistic?
13
megaman821 3 days ago 2 replies      
Would there be any benefit to compiling dynamic languages with GCs (like Python or Ruby) to WASM? It is cool that it's technically possible, but this looks like a target for C, C++, and Rust code.
14
legutierr 3 days ago 2 replies      
Does anyone who is familiar with WebAssembly development know what kinds of things Dynamic Linking (mentioned among the future features) will enable? For instance, would it permit the creation of optimized WebAssembly libraries written in C++ that could be called by a JavaScript program (say, for things like image processing)? Would it eventually permit library sharing between, say, a Python runtime compiled to WebAssembly and JavaScript in the browser?

Because that would be pretty cool.

15
moron4hire 3 days ago 0 replies      
This is going to be huge for the WebVR community. The demo actually ran great on my janky laptop, whereas the ASM.js version at a lower resolution ran at the considerably crappy level I had come to expect of Unity HTML5-built apps on an Intel integrated GPU. Faster = more efficient, also, so equivalent demos should burn less battery power on my smartphone as well, which means less heat build up, which means a longer time before the GPU throttles down.
16
formula1 3 days ago 3 replies      
More importantly, will rust compile to wasm one day?
17
iagooar 3 days ago 1 reply      
When I read news like this, I think: this is the cool stuff you should be doing, not maintaining a lousy e-commerce shop...
18
crudbug 3 days ago 1 reply      
Giant Leap for Web Platform.

How is the mapping from wasm to asm done, at all?

Are they directly mapping from the web world to the native world in a sandbox?

19
batmansmk 3 days ago 0 replies      
It is wonderful news. The Firefox version is ready too.
20
tracker1 3 days ago 0 replies      
I can't help but think that this could be a huge win for binary modules in node.js, as it could finally allow for cross-platform, prebuilt binary modules without the tight coupling and build dependencies that are currently needed.
21
Illniyar 3 days ago 2 replies      
This is moving fast.

Does anyone know if there's a WebAssembly-to-JS compiler to support legacy browsers?

22
tadlan 3 days ago 1 reply      
Question please: even though there is a stated desire to support the Python VM etc., won't this be hampered by the need for the client to download the large VM, versus native code that can be distributed by itself?
23
hutzlibu 3 days ago 2 replies      
This is great news! And a nice working demo... And a general question about wasm - since TypeScript can be strictly typed, would it benefit much if it were compiled to wasm?
24
BinaryIdiot 3 days ago 1 reply      
Oh man I'm very excited about this! Great job all around!

Web Assembly should make code more portable and faster on the web. No more transpiling to JavaScript (which honestly is a hack; going from one language to another language which is then compiled). If you want to use CoffeeScript, even though I dislike it, you'll be able to compile directly to WebAssembly instead.

I can't wait until this is fully baked with DOM access, etc.
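For a concrete picture of that pipeline, here is a minimal TypeScript sketch of loading and calling a compiled module from the JavaScript side. Note the assumptions: it uses the WebAssembly JS API in the shape it later stabilized to (the experimental build discussed in this thread still exposed an earlier interface), and add.wasm is a hypothetical module exporting an add(a, b) function.

  // Fetch, compile, and instantiate a wasm module, then grab an export.
  // Assumes the standardized WebAssembly JS API; add.wasm is hypothetical.
  async function loadAdd(url: string) {
    const bytes = await (await fetch(url)).arrayBuffer();
    const { instance } = await WebAssembly.instantiate(bytes, {});
    return instance.exports.add as (a: number, b: number) => number;
  }

  loadAdd("add.wasm").then(add => console.log(add(2, 3))); // prints 5

The point is that the compiled module stays opaque; JS only sees the functions it chooses to export.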

25
xhrpost 3 days ago 2 replies      
I know programmers sometimes write directly in assembly for certain performance enhancements and certain hardware access. Is there a foreseeable reason someone would program directly in WebAssembly though? I realize that's not the reason this technology exists, just curious.
26
ilaksh 3 days ago 0 replies      
Nim, which is objectively the best programming language, is perfect for Web Assembly.
27
z3t4 3 days ago 1 reply      
It might sound like a stupid question, but is it faster than vanilla JavaScript!?
28
spriggan3 3 days ago 1 reply      
Great news, this is the beginning of the "Javascript Less" movement.
29
noiv 3 days ago 1 reply      
That three major browser teams implement and publish a new standard/technology at the same time is exciting. OTOH I can't believe we're starting over again with assembler (a machine language).
30
bobajeff 3 days ago 1 reply      
I wonder if Google has any plans on using C++ in any of their web applications now.

Might help them improve Google Docs a lot if they could utilize some mature C++ libraries from a desktop word processor.

31
shurcooL 3 days ago 1 reply      
This is really exciting, because it's going on a great path and it's a matter of time before all this is available out of the box!

Is anyone working on a Go to WebAssembly compiler yet?

32
kin 3 days ago 1 reply      
Is there any news for WebAssembly on mobile browsers?
33
neals 3 days ago 1 reply      
Is this in a state where I can go and try making experiments in webassembly?
34
randyrand 3 days ago 2 replies      
Anyone gonna write an LLVM backend for this? =D mouth waters
35
anthk 3 days ago 6 replies      
The day WebAssembly is pushed into mainstream I'd use a web browser with no support for it at all.
29
The 451 status code is now supported github.com
389 points by cujanovic  18 hours ago   74 comments top 11
1
schlowmo 11 hours ago 1 reply      
Besides the reference to Fahrenheit 451, the reference to Life of Brian in the example from the RFC made my day:

https://tools.ietf.org/html/rfc7725#section-3

"Unavailable For Legal Reasons

This request may not be serviced in the Roman Province of Judea due to the Lex Julia Majestatis, which disallows access to resources hosted on servers deemed to be operated by the People's Front of Judea."

2
dcw303 17 hours ago 1 reply      
This could be really useful. If this was done by other big content sites (Youtube for example) then a search bot could build up an index of banned resources. A repository of burned books.
3
chippy 13 hours ago 1 reply      
"Responses using this status code SHOULD include an explanation, in the response body, of the details of the legal demand: the party making it, the applicable legislation or regulation, and what classes of person and resource it applies to."

So in the article's example, GitHub should really include who is making the DMCA request in the response.
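A rough sketch of such a response from a Node server, in TypeScript: RFC 7725 also suggests a Link header with rel="blocked-by" to identify the entity implementing the block. The URL and notice text below are made up for illustration.

  import * as http from "http";

  // Sketch of an RFC 7725-style 451 response. The rel="blocked-by" Link
  // header names the blocking entity; URL and wording are hypothetical.
  http.createServer((_req, res) => {
    res.writeHead(451, {
      "Content-Type": "text/plain",
      "Link": '<https://example.com/legal/block-notice>; rel="blocked-by"',
    });
    res.end("Unavailable for legal reasons: removed in response to a DMCA notice.");
  }).listen(8080);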

4
nothis 11 hours ago 1 reply      
5
TuringTest 10 hours ago 0 replies      
I like the Fahrenheit 451 reference. Is that intentional, or does destiny have a sense of humor?

Edit: Wikipedia knows it all, as always. [1]

[1] https://en.wikipedia.org/wiki/HTTP_451

6
edent 16 hours ago 1 reply      
Original discussion on HN from 4 years ago https://news.ycombinator.com/item?id=4099751
7
jasonjei 7 hours ago 1 reply      
If I understand NSLs correctly, their existence cannot be disclosed without a government waiver? So in the case a repo needs to be taken down due to an NSL, what does GH do? 404? 401? 451? Returning 451 in response to an NSL would definitely violate NSL requirements?
8
sanqui 15 hours ago 2 replies      
I was under the impression that the 451 status code should be used for requests blocked by proxies, where the original content is technically still available at the source but blocked for some reason. Probably got the wrong idea.
9
apalmer 10 hours ago 5 replies      
Help Me Understand:

I am a government that is censoring content. I don't like explicitly saying I am 'censoring' the internet, so I instruct my infrastructure not to use status code 451, and I instruct my nation's infrastructure to reject or rewrite all responses with a 451 status code to 404.

What stops me?

10
ape4 11 hours ago 0 replies      
This means webcrawlers/bots can now compile stats.
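For instance (a hedged TypeScript sketch, assuming a fetch-capable runtime; the output format is illustrative and error handling is omitted):

  // Tally which URLs return 451 and record the advertised blocking party.
  async function tally451(urls: string[]): Promise<string[]> {
    const blocked: string[] = [];
    for (const url of urls) {
      const res = await fetch(url);
      if (res.status === 451) {
        blocked.push(url);
        console.log(url, "-> 451, blocked-by:", res.headers.get("link"));
      }
    }
    return blocked;
  }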
11
lolidaisuki 15 hours ago 7 replies      
I actually saw the status code 451 in the wild.

The first time I saw it was in December, and then again in January, both on the same site. The site that was blocked was archive.is.

This block was targeted at Finland, and none of the different Internet connections I tried could get to the site: my home connection, cellular, and my school network. It's a shame that anyone even thought of censoring such a useful tool for history and other legitimate uses. I wrote a piece about it for a Finnish newspaper, and a few weeks after that the block was gone. I suspect that the newspaper contacted archive.is and it was removed so they don't get bad publicity.

It was kind of ironic that I had to subvert the archive.is censorship to read an archived version of a thread discussing web censorship in Sweden.

I think this error code is a bad idea as it legitimizes censorship.

30
Dells Skylake XPS 13, Precision workstations now come with Ubuntu preinstalled arstechnica.com
344 points by bpierre  6 days ago   234 comments top 43
1
ThePhysicist 6 days ago 11 replies      
I think it's really impressive what Dell has put together here. As my old Thinkpad T430 is nearing its fourth anniversary, I have been looking for an upgrade for a while and compared different options with a focus on lightweight, powerful laptops with good Linux support. So far the XPS 13 seems way more attractive than the new Lenovo Skylake laptops (e.g. the 460s), which have lower-resolution displays (some models still start with a 1366 x 768 (!) display, which is just ridiculous in 2016), less and slower RAM, smaller hard drives and - as far as I can tell from the specs - less battery life compared to the XPS 13, but are actually 300 - 400 $ more expensive, even when choosing the three-year guarantee for the XPS. The only thing I don't like about the XPS is Dell's guarantee, which is "send in", meaning that I probably won't see my laptop for a few weeks if it has to be repaired, whereas Lenovo will send a service technician to me who will usually be able to repair the laptop immediately (I have already had to make use of this service twice, once to exchange a noisy fan and once to replace a broken display bezel).

I guess I'll wait for Apple to reveal the new MB Pro line before making a decision, but it seems that for the first time in 10 years my next laptop will not be a Lenovo/IBM.

2
jaseemabid 6 days ago 2 replies      
> They come with Windows by default, but you can pick Ubuntu instead and shave about $100 off the price.

How awesome!

3
jpalomaki 6 days ago 2 replies      
I was surprised to notice that the XPS 15 comes also with quad core CPUs and supports 32GB max memory. Interesting option for those looking for desktop level performance in reasonably sized package.
4
boothead 6 days ago 2 replies      
Any info on when this (or the XPS 15 with linux) will be available in the UK? I just had a look on Dell's website, and as expected it's still a shower of shit WRT finding what you want.

I bought one of the first or 2nd gen XPS 13s and loved it. However the experience of buying from Dell was awful and customer service was so intractable as to be useless too.

5
latch 6 days ago 2 replies      
I've oft wondered if these would sell better without the Dell branding. Put a nondescript logo on the back (no word), remove all "Dell".

This really annoyed me years ago when I spent a small fortune on a beautiful TV that had "COMPANY" in white letters on the otherwise perfect dark bezel.

6
siscia 6 days ago 9 replies      
I am a little scared by the touch screen; I have never had one and I don't think I need it...

Anyway, the extra complexity that comes with it doesn't make me comfortable...

Any experiences so far ?

7
kylec 6 days ago 1 reply      
Apple better hurry up with Skylake MacBooks, these look very tempting.
8
tholford 6 days ago 1 reply      
Bought a used first or second gen Developer Edition XPS13 last year, installed Mint 17.2 on it and have been very pleasantly surprised. Pretty much just as functional as my old MBP for half the price :)
9
forgotpwtomain 6 days ago 1 reply      
I have a new XPS 15 running the 4.4 kernel - Skylake is very buggy, as is the Broadcom wireless firmware.

Also slight physical tremors can cause complete system crashes. I would stay away from it.

10
davidw 6 days ago 3 replies      
I've been very happy with the various XPS 13 systems. This one looks even better. Most likely my next computer.
11
arca_vorago 6 days ago 0 replies      
I have ordered a few midline desktops from Dell for testing their Ubuntu setup. In the end I wiped and installed my own, and the EULA that pops up on first boot was fucking ridiculous. I mean, I know they like to push the boundaries for self-protection, and I understand things like wanting to keep any issues in their jurisdiction and stuff like that, but clauses in the EULA stated you waived all rights including constitutional ones (yes, the word constitutional was used in the actual EULA), agreed to forfeit any trial by jury or any other legal procedure except private arbitration in Dell's jurisdiction, and some other stuff that really bothered me to see as the first thing that popped up on first boot.

Lots of it is obviously totally unenforceable and wouldn't stand in court, but they put it in there anyway just because they can get away with it.

Does no one do reasonable eulas/tos?

12
krob 4 days ago 0 replies      
If you're not looking for an ultrabook, I've got a Dell Precision 7510. The i7-6820HQ supports up to 64GB of RAM in the laptop (2 RAM slots above the keyboard, 2 below), plus 1 M.2 PCIe NVMe slot and 1 SATA3. I have a Samsung 950 Pro NVMe 512GB SSD and a 2TB Samsung 850 Evo. I don't believe the NVMe works w/o AHCI booting. My experience with Linux on this laptop was bar none one of the best. I did have to install Ubuntu 15.10, but everything worked without a hitch. This laptop also worked with Optimus graphics chip switching (Quadro 4GB DDR4 M2002 chip). Battery is really impressive. Monitor is 4K matte. It's probably the best laptop I've ever owned. Since I purchased it, it now comes with USB Type-C w/ Thunderbolt 40Gbit support, so you can get a really nice fancy docking port for it. Also, I have a 2014 MacBook Pro fully loaded, and this is only 1lb heavier than that was. You can also get a Xeon chip on this platform. http://www.dell.com/us/business/p/precision-m7510-workstatio...
13
nickpsecurity 6 days ago 0 replies      
History repeats: the Dell Inspiron that just crapped out on me (somewhat) after years of use was their first Linux model. It also had Ubuntu by default. Great laptop. Interesting enough, after all the updates, I'm having trouble finding something that works out of the box that's not Ubuntu. It's running Fedora fine right now but software management is totally different from my Debian-based experience. Might ditch it. ;)
14
giovannibajo1 6 days ago 1 reply      
The previous models didn't support DFS on 5GHz WiFi, making them unable to work in high-density WiFi environments. Actually, what's worse is that they randomly lose the connection on a DFS-enabled AP (when the channel moves into one of the DFS-reserved ones, they can't access it). So you basically have to force them onto 2.4GHz or disable DFS on the APs.

This applied to both the Broadcom and Intel wifi. Any chance these models are better in this regard?

15
Mikeb85 6 days ago 0 replies      
It's nice to see Dell beginning to actually adopt Linux and Ubuntu. I always kind of figured part of the strategy of going private was to be able to move away from the status quo of being just another Windows vendor... By offering choice and eliminating lock-in, they can go after techie types and serious users who otherwise would have probably just bought a ThinkPad or MBP.
16
mistat 6 days ago 0 replies      
Are these available in Australia yet? I can only ever find reference to the US store
17
davidy123 6 days ago 0 replies      
This is great, but Thinkpads have always had good Linux support. I have a friend who bought the previous XPS 13 Ubuntu edition and it had all kinds of problems which are only being worked out now, problems that aren't present on most Thinkpads.

I got the X1 Yoga one month after it came out, installed an alpha version of Ubuntu 16.04 on it and everything just works, including the touch screen.

18
manaskarekar 6 days ago 2 replies      
I've had great luck with Dells for Linux support. Lubuntu on Latitudes has run flawlessly over the years.

It is unfortunate that Dell chose to use small arrow keys and at the same time overload the arrow keys with the 'Home-End-PgUp-PgDown'. Hard to believe this layout was chosen for their Latitude and Precision lines too.

19
Ruud-v-A 6 days ago 0 replies      
I've been running Arch Linux on the non-developer edition XPS 15, and I've experienced very few problems. Occasionally the touchpad does not work, and sometimes headphone audio is silent. Other than that, everything works like a charm, even the Broadcom WiFi adapter.
20
jgalt212 5 days ago 0 replies      
I have a new (purchased Dec '15) XPS 15. And despite having dual-booted about 10-15 different PCs and laptops (mostly Dell and HP), I have thus far had zero success getting Ubuntu on my new box. I suspect it has to do with the two internal hard drives, but I've sort of given up at this point (I bricked the first box, and Dell sent me a new one) and relegated this otherwise very nice laptop to the accounting department to run QuickBooks and Office.
21
otar 6 days ago 2 replies      
One tiny detail that bothers me is that there's a Windows logo on the keyboard. It could be a Tux or Ubuntu logo.

A Tux penguin sticker solved the problem on my XPS 13, but it would be nice to see it come that way out of the box.

22
yitchelle 6 days ago 1 reply      
Can anyone share their experience with this compared to the MacBook Air?
23
_RPM 6 days ago 0 replies      
I seriously can't tell if this article being here is an advertisement. Is it possible the site owners have been paid to have this post here?
24
Timshel 6 days ago 2 replies      
Looked really good until they had to botch something: let's put HDMI 1.4 and no DisplayPort; it's not like we're selling a 4K screen...
25
dblooman 6 days ago 0 replies      
Is there a 13 or 15 inch laptop without a number pad that supports Ubuntu, for less than 500, that uses a Core i5 Skylake CPU?
26
pascalo 6 days ago 1 reply      
For all the people dealing with shitty Dell customer support on the phone, try using their @dellcares Twitter account. I had a broken screen glass and later a faulty fan on my 2014 XPS 13, and they sent a technician around each time, all via Twitter. Much less painful than hanging on the phone. Excellent customer support.
27
rcthompson 6 days ago 0 replies      
The placement of the webcam in the lower left corner is truly idiotic.
28
ciokan 6 days ago 1 reply      
Just got the XPS 15 (9550) yesterday, which had Windows on it. Installed Ubuntu 16.04 beta and it works very well. I had huge problems trying to install any lower version of Ubuntu & variants.
29
tiatia 6 days ago 0 replies      
I use an XPS 13 with Kubuntu.

I have no experience with preinstalled Linux, but similar to Android, I would be afraid of preinstalled crapware. Just remove Windows and do a clean install.

30
moonbug 6 days ago 1 reply      
At the other end of the spectrum, the 15" Inspiron 3552 that comes with Ubuntu, which I'm typing this on, is quite the best 200 dollar laptop you can get.
31
jkot 6 days ago 3 replies      
Price is not that great. The 16GB RAM version is more expensive than the Windows edition at my local shop (Prague). At least it has Intel WiFi.
32
nivertech 6 days ago 0 replies      
I'm waiting for the Lenovo ThinkPad X1 Carbon 4th gen with a Skylake CPU. Anybody know if it's already available?
33
natch 6 days ago 2 replies      
Does this have a magsafe-type connector for the power cord?

I did look for it in the review but maybe I missed it.

34
modzu 6 days ago 0 replies      
looks great on paper, but the 2015 XPS 13 had some serious issues, like a useless webcam and trackpad...

it's the laptop that flipped me to Mac. won't go back.

35
Vlaix 6 days ago 0 replies      
Now make a laptop with a keyboard and touchpad that justifies me stopping frankensteining my old machines to keep them alive.

That chiclet keyboard and phone-sized pad nonsense is very limiting.

36
intrasight 6 days ago 0 replies      
Is it less expensive than one with Windows?
37
Raed667 6 days ago 0 replies      
The only thing holding me back is the CPU.
38
bliti 6 days ago 0 replies      
How much does this thing cost?
39
cttet 4 days ago 0 replies      
The keyboard though.
40
tunichtgut 6 days ago 1 reply      
Hell, it's about time!
41
andreaso 6 days ago 4 replies      
Does it really matter that much which distro it ships with? As long as the laptop ships with any distro preinstalled, the hardware tends to be properly supported by the Linux kernel, allowing you to feel safe about installing any other (up-to-date) distro.
42
akerro 6 days ago 0 replies      
Why is this news? I have bought two laptops before, and both of them came with Linux: one Asus, one Dell.
43
xkiwi 6 days ago 2 replies      
If anyone needs or has to install Windows 7 on Dell-brand laptops for any reason, I highly recommend you wait until you confirm it can be done.

I have the Dell XPS/Precision 11 and 13; the problem is that Windows 7 has difficulty booting from UEFI, and you will get stuck because AHCI is not supported by these Dells' BIOS.

       cached 19 March 2016 02:11:01 GMT