hacker news with inline top comments    14 Mar 2016
Lee Sedol Beats AlphaGo in Game 4 gogameguru.com
1209 points by jswt001  19 hours ago   403 comments top 62
mikeyouse 19 hours ago 9 replies      
Relevant tweets from Demis:

 "Lee Sedol is playing brilliantly! #AlphaGo thought it was doing well, but got confused on move 87. We are in trouble now..."

 "Mistake was on move 79, but #AlphaGo only came to that realisation on around move 87"

 "When I say 'thought' and 'realisation' I just mean the output of #AlphaGo value net. It was around 70% at move 79 and then dived on move 87"

 "Lee Sedol wins game 4!!! Congratulations! He was too good for us today and pressured #AlphaGo into a mistake that it couldn't recover from"
From: https://twitter.com/demishassabis

argonaut 19 hours ago 6 replies      
If it's true that AlphaGo started making a series of bad moves after its mistake on move 79, this might tie into a classic problem with agents trained using reinforcement learning, which is that after making an initial mistake (whether by accident or due to noise, etc.), the agent gets taken into a state it's not familiar with, so it makes another mistake, digging an even deeper hole for itself - the mistakes then continue to compound. This is one of the biggest challenges with RL agents in the real, physical world, where you have noise and imperfect information to confront.

Of course, a plausible alternate explanation is that AlphaGo felt like it needed to make risky moves to catch up.
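The compounding-error dynamic described above can be illustrated with a toy simulation (every number here is invented for illustration; this is not a model of AlphaGo):

```python
import random

random.seed(0)

# Toy model: an agent has a small per-move error rate in familiar states,
# but a much higher one once a single slip pushes it off-distribution.
def play(base_err=0.01, off_dist_err=0.2, moves=100):
    mistakes, off_distribution = 0, False
    for _ in range(moves):
        err = off_dist_err if off_distribution else base_err
        if random.random() < err:
            mistakes += 1
            off_distribution = True  # one mistake leads to unfamiliar states
    return mistakes

# Average mistakes per game ends up several times higher than the ~1
# you'd expect if errors stayed independent at the base rate.
print(sum(play() for _ in range(1000)) / 1000)
```

One early slip inflates the expected number of later mistakes, which is exactly the "digging an even deeper hole" effect.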

dannysu 18 hours ago 2 replies      
In the post-game press conference I think Lee Sedol said something like "Before the matches I was thinking the result would be 5-0 or 4-1 in my favor, but then I lost 3 straight... I would not exchange this win for anything in the world."

Demis Hassabis said of Lee Sedol: "Incredible fighting spirit after 3 defeats"

I can definitely relate to what Lee Sedol might be feeling. Very happy for both sides: the fact that people designed the algorithms to beat top pros, and the human strength displayed by Lee Sedol.

Congrats to all!

fhe 15 hours ago 3 replies      
My friends and I (many of us enthusiastic Go lovers/players) have been following all of the games closely. AlphaGo's mid game today was really strange. Many experts have praised Lee's move 78 as a "divinely inspired" move. While it was a complex setup, in terms of the number of searches I can't see it being any more complex than the games before. Indeed, because it was very much a local fight, the number of possible moves was rather limited. As Lee said in the post-game conference, it was the only move that made any sense at all, as any other move would quickly prove to be fatal after half a dozen or so exchanges.

Of course, what's obvious to a human might not be so at all to a computer. And this is the interesting point that I hope the DeepMind researchers will shed some light on for all of us after they dig into what was going on inside AlphaGo at the time. We'd also love to learn why AlphaGo seemed to go off the rails after this initial stumble and made a string of indecipherable moves thereafter.

Congrats to Lee and the DeepMind team! It was an exciting and, I hope, informative match for both sides.

As a final note: I started following the match thinking I was watching a competition of intelligence (loosely defined) between man and machine. What I ended up witnessing was incredible human drama: Lee bearing incredible pressure, being hit hard repeatedly while the world watched, sinking to the lowest of lows, and soaring back up to win one game for the human race. Just an incredible up and down in the course of a week. Many of my friends were crying as the computer resigned.

jballanc 19 hours ago 6 replies      
So AlphaGo is just a bot after all...

Toward the end AlphaGo was making moves that even I (as a double-digit kyu player) could recognize as really bad. However, one of the commentators observed that each time it did, the move forced a highly predictable response from Lee Sedol. From the point of view of a Go player, they were nonsensical because they only removed points from the board and didn't advance AlphaGo's position at all. From the point of view of a programmer, on the other hand, considering that predicting how your opponent will move has got to be one of the most challenging aspects of a Go algorithm, making a move that easily narrows and deepens the search tree makes complete sense.

jonbarker 9 hours ago 0 replies      
AlphaGo's weakness was stated in the press conference inadvertently: it considers only the opponent moves in the future which it deems to be the most profitable for the opponent. This leaves it with glaring blind spots when it has not prepared for lines which are surprising to it. Lee Sedol has now learned to exploit this fact in a mere 4 games, whereas the NN requires millions of games to train on in order to alter its playing style. So Lee only needs to find surprising and strong moves (no small feat but also the strong suit of Lee's playing style generally).
keypusher 19 hours ago 3 replies      
The crucial play here seems to have been Lee Sedol's "tesuji" at White 78. From what I understand, the term in Go means something like "clever play": sneaking up on your opponent with something they did not see coming. DeepMind's CEO confirmed that the machine actually missed the implications of this move, as the calculated win percentage did not shift until later: https://twitter.com/demishassabis/status/708928006400581632

Another interesting thing I noticed while catching the endgame is that AlphaGo actually used up almost all of its time. In professional Go, once each player uses their original (2 hour?) time block, they have 1 minute left for each move. Lee Sedol had gone into "overtime" in some of the earlier games, and here as well, but previously AlphaGo still had time left from its original 2 hours. In this game, it came quite close to using overtime before resigning, which it does when the calculated win percentage falls below a certain threshold.

Houshalter 8 hours ago 2 replies      
We were discussing the probability that Sedol would win this game. Everyone, including me, bet 90% that no human would ever win again, let alone this specific game: http://predictionbook.com/predictions/177592

I tried to estimate it mathematically, using a uniform distribution across possible win rates and then updating the probability of each win rate with Bayes' rule. You can do that with Laplace's law of succession. I got a 20% chance that Sedol would win this game.

However, a uniform prior doesn't seem right. Eliezer Yudkowsky often says that AI is likely to be much better than humans, or much worse than humans; the probability of it landing at exactly the same skill level is pretty low. That argument seems right, but I wasn't sure how to model it formally, and so 90% "felt" right. Clearly I was overconfident.

So for the next game, if we use Laplace's law again, we get a 33% chance that Sedol will win. That's not factoring in other information, like Sedol now being familiar with AlphaGo's strategies and improving his own strategies against it. So there is some chance he is now evenly matched with AlphaGo!
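The Laplace figures above are easy to reproduce; a quick sketch (Python just for the arithmetic):

```python
# Laplace's law of succession: with a uniform prior on the win rate,
# the posterior probability of a win after s wins in n games is (s+1)/(n+2).
def laplace(successes, trials):
    return (successes + 1) / (trials + 2)

print(laplace(0, 3))  # 0.2 -> the 20% estimate before game 4 (0 wins in 3)
print(laplace(1, 4))  # 0.3333333333333333 -> the 33% estimate for game 5
```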

I look forward to many future AI-human games. Hopefully humans will be able to learn from them, and perhaps even learn their weaknesses and how to exploit them.

Depending how deterministic they are, you could perhaps even play the same sequence of moves and win again. That would really embarrass the Google team. I hear they froze AlphaGo's weights to prevent it from developing new bugs after testing.

mizzao 14 hours ago 6 replies      
Another way to look at this is just how efficient the human brain is at this kind of computation.

On one hand, we have racks of servers (1920 CPUs and 280 GPUs) [1] using megawatts (gigawatts?) of power, and on the other hand we have a person eating food and using about 100W of power (when physically at rest), of which about 20W is used by the brain.

[1] http://www.economist.com/news/science-and-technology/2169454...
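For what it's worth, a back-of-envelope check (the per-device wattages below are assumptions, not published figures) puts the hardware closer to hundreds of kilowatts than megawatts, still almost four orders of magnitude above the brain's budget:

```python
# Assumed typical draws -- rough guesses, not DeepMind's numbers.
cpu_watts, gpu_watts = 100, 300
cpus, gpus = 1920, 280
brain_watts = 20

total_watts = cpus * cpu_watts + gpus * gpu_watts
print(total_watts / 1000)         # 276.0 (kW)
print(total_watts / brain_watts)  # 13800.0 (times the brain's ~20 W)
```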

sethbannon 8 hours ago 0 replies      
I wouldn't be surprised if, in a month, Lee Sedol was able to beat AlphaGo in another match. This is what happened in chess. The best computers were able to beat the best humans, until the best humans learned how to play anti-computer chess. This bought them a year or so more, until computers finally dominated for good.
minimaxir 19 hours ago 4 replies      
There were a few jokes made during the round about how AlphaGo resigns. Turns out it's just a popup window! http://i.imgur.com/WKWMHLv.png
versteegen 18 hours ago 0 replies      
I found this comment on that thread quite insightful: https://gogameguru.com/alphago-4/#comment-13410

Edit: here's another great one on MCTS: https://gogameguru.com/alphago-4/#comment-13479

adnzzzzZ 19 hours ago 3 replies      
According to the commentary of both streams I was watching, after losing an important exchange in the middle (apparently move 79: https://twitter.com/demishassabis/status/708928006400581632) it seems AlphaGo sort of bugged out and started making wrong moves on an already-dead group on the right side of the board. It kept repeating similar mistakes until it resigned many moves later. But the game was already won for Lee Sedol after that middle exchange. It was really interesting seeing everyone's reactions to AlphaGo's bad moves, though.
emcq 19 hours ago 3 replies      
That was really cool! It seemed after the brilliant play in the middle the most probable moves for winning required Lee Sedol to make impossibly bad mistakes for a professional, which would be a prior that AlphaGo doesn't incorporate. I've heard the training data was mostly amateur games so perhaps the value/policy networks were overfit? Or maybe greedily picking the highest probability, common with tree search approaches, is just suboptimal?
magoghm 18 hours ago 2 replies      
Right now I don't know if I'm more impressed by AlphaGo's artificial intelligence or its artificial stupidity.

Lee Sedol won because he played extremely well. But when AlphaGo was already losing it made some very bad moves. One of them was so bad that it's the kind of mistake you would only expect from someone who's starting to learn how to play Go.

hasenj 19 hours ago 1 reply      
The game seemed to be going in AlphaGo's favour halfway through. Black (AG) had secured a large area at the top that seemed nearly impossible to invade.

It was amazing to see how Lee Sedol found the right moves to make the invasion work.

This makes me think that if the time for the match were three hours instead of two, maybe a professional player would have enough time to read the board deeply enough to find the right moves.

herrvogel- 19 hours ago 2 replies      
Am I right in assuming that if they played another game (AlphaGo black and Lee Sedol white), Lee Sedol could pressure AlphaGo into making the same mistake again?
kronion 19 hours ago 3 replies      
After AlphaGo won the first three games, I wondered not whether the computer had reached and surpassed human mastery, but how many orders of magnitude better it was. Given today's result, it may be only one order, or even less. Perhaps the best human players are relatively close to the maximum skill level for Go, and the pros of the future will not be categorically better than Lee Sedol is today.
Bytes 19 hours ago 2 replies      
I was not expecting Lee Sedol to come back and win a game after his first three losses. AlphaGo seemed to be struggling at the end of the match.
Angostura 16 hours ago 1 reply      
Bizarre. I felt a palpable sense of relief when I read this. Silly meat-brain that I am.
rubiquity 14 hours ago 0 replies      
This is a great day for humans. Glad to see all those years of human research finally pay off.
pmontra 9 hours ago 0 replies      
GoGameGuru just published a commentary of the game with some extra insight https://gogameguru.com/lee-sedol-defeats-alphago-masterful-c...

The author thinks that Lee Sedol was able "to force an all or nothing battle where AlphaGo's accurate negotiating skills were largely irrelevant."

"Once White 78 was on the board, Black's territory at the top collapsed in value."

"This was when things got weird. From 87 to 101 AlphaGo made a series of very bad moves."

"Weve talked about AlphaGos bad moves in the discussion of previous games, but this was not the same."

"In previous games, AlphaGo played bad (slack) moves when it was already ahead. Human observers criticized these moves because there seemed to be no reason to play slackly, but AlphaGo had already calculated that these moves would lead to a safe win."

Which, I would add, is something that human players also do: simplify the game and get home quickly with a win. We usually don't give up as much as AlphaGo does (pride?), but still, it's not that different.

"The bad moves AlphaGo played in game four were not at all like that. They were simply bad, and they ruined AlphaGos chances of recovering."

"Theyre the kind of moves played by someone who forgets that their opponent also gets to respond with a move. Moves that trample over possibilities and damage ones own position achieving less than nothing."

And those moves unfortunately resemble what beginners play when they stubbornly cling to the hope of winning, either because they don't realize the game is lost or because they haven't played enough games to stop expecting the opponent to make impossible mistakes. At pro level, such mistakes simply don't happen.

Somebody asked an interesting question during the press conference about the effect of those kinds of mistakes in the real world. You can hear it at https://youtu.be/yCALyQRN3hw?t=5h56m15s It's a couple of minutes long because of the translation overhead.

awwducks 19 hours ago 0 replies      
Lee Sedol definitely did not look like he was in top form there. I would say (as an amateur) his play in Game 2 was far better. It was the funky clamp position that perhaps forced AlphaGo to start falling apart this game. [0]

I wonder if Lee Sedol can find a way to replicate that in Game 5.

[0]: https://twitter.com/demishassabis/status/708928006400581632

creamyhorror 18 hours ago 1 reply      
Here's the post-game conference livestream:


At the end, Lee asked to play black in the last match, and the DeepMind guys agreed. He feels that AlphaGo is stronger as white, so he views it as more worthwhile to play as black and beat AlphaGo.

Conference over, see you all tomorrow.

jacinda 12 hours ago 0 replies      
<joke>AlphaGo let Lee Sedol win to lull us all into a false sense of security. The robot apocalypse is well underway.</joke>
hyperpape 6 hours ago 0 replies      
It's worth mentioning that while 79 is where Black goes bad, not everyone is sure that 78 actually works for White (http://lifein19x19.com/forum/viewtopic.php?f=15&t=12826). I'm sure we'll eventually get a more complete analysis.
h43k3r 18 hours ago 0 replies      
The post-match conference analysis with Lee Sedol and the CEO of DeepMind about the different aspects of the game is beautiful to watch. There seems to be a sense of sincerity rather than a greed to win from each side.
devanti 4 hours ago 0 replies      
I wonder if Lee Sedol were to start as white again, and follow the exact same starting sequences, would AlphaGo's algorithms follow the exact same moves as it did before?
asdfologist 11 hours ago 1 reply      
On a tangential note, apparently AlphaGo has been added to http://www.goratings.org/, though its current rating of 3533 looks off. Shouldn't it be much higher?
zkhalique 10 hours ago 0 replies      
Wow! Incredible! Now we know that they have a chance against each other. I would say that this was a very major point... otherwise we wouldn't know whether AlphaGo's powers have progressed to the point where no one can ever beat it. Now I take what Ke Je said much more seriously: http://www.telegraph.co.uk/news/worldnews/asia/china/1219091...
GolDDranks 8 hours ago 1 reply      

> This was when things got weird. From 87 to 101 AlphaGo made a series of very bad moves.

It seems to me that these bad moves were a direct result of AlphaGo's minimax-style tree search.

According to @demishassabis' tweet, it had the "realisation" that it had misestimated the board situation at move 87. After that, it made a series of bad moves, but it seems to me that those moves were played precisely because it couldn't come up with any better strategy: the minimax approach to traversing the game tree assumes that your opponent responds as well as they possibly can, so the moves were optimal in that sense.

But if you are an underdog, it doesn't suffice to play the "best" moves, because the best moves might be conservative. With that playing style, the only way you can make a comeback is to wait for your opponent to "make a mistake", that is, to stray from the best series of moves you are able to find, and then capitalize on that.

I don't think AlphaGo has the concept of betting on the opportunity of the opponent making mistakes. It always just tries to find the "best play in game" with its neural networks and tree search in terms of maximising the probability of winning. If it doesn't find any moves that would raise the probability, it picks one that will lower it as little as possible. That's why it picks uninteresting sente moves without any strategy. It just postpones the inevitable.

If you're expecting the opponent to play the best move you can think of, expecting mistakes is simply not part of the scheme. In this situation, it would actually be profitable to exchange some "best-of-class" moves for moves that aren't as excellent, but that are confusing, hard to read, and make the game longer and more convoluted. Note that this totally DOESN'T work if the opponent is better at reading than you, on average; it will make the situation worse. But I think that AlphaGo is better at reading than Lee Sedol, so it would work here. The point is to "stir" the game up, so you can unlock yourself from your suboptimal position and let your better-on-average reading skills work for you.

It seems to me that the way skilful humans play involves another evaluation function in addition to the "value" of a move: how confusing, "disturbing" or "stirring up" a move is, considering the opponent's skill. Basically, that's the thing you'd need to skilfully assess your chances of performing an OVERPLAY. An overplay may be the only way to recover if you are in a losing situation.
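The "opponent always replies with their best move" property is easiest to see in plain minimax over a toy tree (illustrative only; AlphaGo actually uses Monte Carlo tree search guided by neural networks, not plain minimax):

```python
# Toy minimax over a hand-built tree of win-probability estimates.
# The key property discussed above: the opponent is always assumed to
# reply with their best move, so "hoping for a mistake" never enters.

def minimax(node, maximizing):
    if isinstance(node, float):  # leaf: our estimated win probability
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A losing position: one branch holds a big payoff (0.9), but it is only
# reachable through an opponent blunder, so minimax never counts on it.
tree = [[0.9, 0.2],    # opponent answers with 0.2, never the 0.9 line
        [0.3, 0.25]]   # opponent answers with 0.25
print(minimax(tree, True))  # 0.25
```

The 0.9 "opponent blunder" line is invisible to the search; betting on it would require a different evaluation, which is the overplay idea above.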

kibaekr 18 hours ago 1 reply      
So where can we see this "move 78" that everyone is talking about, without having to go through the entire match counting?
jonbarker 9 hours ago 1 reply      
Would it not be beneficial to the DeepMind team to open at least the non-distributed version to the public, to allow for training on more players? I was surprised to learn that the training set was strong amateur internet play; why not train on the database of historical pro games?
conanbatt 14 hours ago 1 reply      
This game is a great example for the people who said that AlphaGo's moves when it had a better position weren't mistakes but merely lowered the winning margin, because it only looks at winning probability.

AlphaGo made a mistake and realized it was behind, and then crumbled, because all moves are "mistakes" (they all lead to a loss), so any of them is as good as any other.

I'm very surprised and glad to see humans still have something against AlphaGo, but ultimately these kinds of errors might disappear if AlphaGo trains for 6 more months. It made a tactical mistake, not a theoretical one.

yoavm 19 hours ago 0 replies      
okay human race, let's sit back and enjoy our last moments of glory!
ctstover 10 hours ago 0 replies      
As a human, I'm pulling for the human. As a computer programmer, I'm pulling for the human. As a romantic, I'm pulling for the human. As a fan of science fiction, I'm pulling for the human. To me it will matter even if he only manages a 3-2 loss instead of a 4-1 loss.
overmille 15 hours ago 0 replies      
Now that we have two data points to interpolate from, expectations are down to near-best-human competency in Go, even using distributed computation. Also, from move 79 to 87 the machine wasn't able to detect its weak position, which shows a weakness. Now Lee can try an aggressive strategy, creating multiple hot points of attack to defeat his opponent. The human player is showing the power of intelligence.
yulunli 18 hours ago 1 reply      
AlphaGo obviously made mistakes in game 4 under the pressure of LSD's brilliant play. I'd like to know if the "dumb moves" were caused by the lack of pro data or by some more fundamental flaw in the algorithm/methodology. AlphaGo was trained on millions of amateur games, but if Google/DeepMind builds a website where people (including pro players) can play against AlphaGo, it would be interesting to see who improves faster.
atrudeau 5 hours ago 1 reply      
Move 78 gives us hope in the war against the machines.

78 could come to symbolize humanity.

What a special moment.

yk 18 hours ago 1 reply      
Apparently AlphaGo made two rather stupid moves on the sidelines, judging from the commentary. Incidentally, that is the kind of edge case one would expect machine learning through self-play to be bad at, since there is a possibility that AlphaGo simply learns to avoid such situations. It will be interesting to see if top players are able to exploit such weaknesses once AlphaGo is better understood by high-level Go players.
eslaught 7 hours ago 1 reply      
Is there a place I can go to quickly flip through all the board states from the game?
ljk 19 hours ago 0 replies      
Does this mean Lee found AlphaGo's weakness, and AlphaGo wasn't playing at an out-of-reach level?
agumonkey 13 hours ago 0 replies      
Way to go, humans. (I felt that AlphaGo was unbeatable and a milestone in computing overthrowing organic brains... I gave in to the buzz a bit prematurely.)
spatulan 19 hours ago 1 reply      
I wonder what the chances are of a cosmic ray or some stray radiation causing AlphaGo to have problems. It's quite a rare event, but when you have 1920 CPUs and 280 GPUs, it might up the probability enough to be something you have to worry about.
piyush_soni 15 hours ago 0 replies      
I am super excited about the progress AI has made in AlphaGo, but a part of me feels kind of relieved that humans won at least one match. :) Sure, it won't last for long.
uzyn 17 hours ago 1 reply      
It seems Lee Sedol fares better in the late game and endgame than AlphaGo. Makes one wonder if Lee might have won the earlier games had he pushed on until the late game stages.
_snydly 19 hours ago 4 replies      
Was it AlphaGo losing the game, or Lee Sedol winning it?
esturk 19 hours ago 2 replies      
LSD may be the only human to ever win against AlphaGo.
codecamper 15 hours ago 0 replies      
Wow that is awesome news. Very happy to read this this morning. It's a good day to be human.
partycoder 18 hours ago 0 replies      
Monte Carlo bots behave weirdly when losing.
makoz 19 hours ago 0 replies      
Some pretty questionable moves from AlphaGo in that game, but I'm glad LSD managed to close it out.
pelf 9 hours ago 0 replies      
Now THIS is news!
vc98mvco 19 hours ago 1 reply      
I hope it won't turn out they let him win.
another-hack 11 hours ago 0 replies      
Humans strike back! :P
samstave 8 hours ago 0 replies      
So I am completely ignorant of the game of Go. I mean, I've heard about it my whole life but never bothered to understand it.

But after watching the summary video of AlphaGo's win... I'm fascinated.

I'm sure there are thousands of resources that can teach me the rules, but HN: can you point me to a resource you recommend to get up to speed?

Dowwie 16 hours ago 0 replies      
But, did he pull a Kirk vs Kobayashi Maru? :) (yes, I went there)
techdragon 19 hours ago 3 replies      
I was hoping to see how AlphaGo would play in overtime. Now I'm curious: does it know how to play in overtime? Can the system evaluate how much time it can give itself to 'think' about each move, or does that fall into halting-problem territory, and it was simply programmed to evaluate its probability of winning given the 'fixed' time it had left?
repulsive 18 hours ago 0 replies      
A negativist, paranoid skeptic could say that it would be a good move for the team to intentionally make AlphaGo lose a single battle at the moment it has already won the war...
conanbatt 15 hours ago 0 replies      
Maybe AlphaGo understood that it has won the 5-game series, so it reads that it can lose the last 2 games and still win, and hence plays suboptimally :P
gcatalfamo 17 hours ago 1 reply      
I believe that after winning 3 out of 5, the AlphaGo team started experimenting with variables now that they can relax. Which will in turn be even more helpful for future AlphaGo versions than the previous 3 wins.
antonioalegria 18 hours ago 3 replies      
Don't want to sound all conspiracy-theory, but somehow this feels planned... It plays into DeepMind's hands not to have the machine completely trouncing the human. It's less scary and keeps people engaged further into the future.

It also seems in line with the way Demis was "rooting" for the human this time: they already won, so now they focus on PR.

AlphaGo beats Lee Sedol again in match 2 of 5 gogameguru.com
940 points by pzs  3 days ago   553 comments top 65
fhe 3 days ago 24 replies      
As someone who studied AI in college and is a reasonably good amateur player, I have been following the matches between Lee and AlphaGo.

AlphaGo plays some unusual moves that go clearly against what any classically trained Go player would do. Moves that simply don't fit into the current theories of Go playing, and the world's top players are struggling to explain the purpose/strategy behind them.

I've been giving it some thought. When I was learning to play Go as a teenager in China, I followed a fairly standard, classical learning path. First I learned the rules, then progressively the more abstract theories and tactics. Many of these theories, as I see them now, draw analogies from the physical world, and are used as tools to hide the underlying complexity (chunking) and enable the players to think at a higher level.

For example, we're taught to consider connected stones as one unit, and to give this unit attributes like dead, alive, strong, weak, or projecting influence into the surrounding areas. In other words, much like a standalone army unit.

These abstractions all make a lot of sense, feel natural, and certainly help game play -- no player can consider the dozens (sometimes over 100) of stones all as individuals and come up with a coherent game plan. Chunking is such a natural and useful way of thinking.

But watching AlphaGo, I am not sure that's how it thinks of the game. Maybe it simply doesn't do chunking at all, or maybe it does chunking its own way, not influenced by the physical world as we humans invariably do. AlphaGo's moves are sometimes strange, and couldn't be explained by the way humans chunk the game.

It's both exciting and eerie. It's like another intelligent species opening up a new way of looking at the world (at least for this very specific domain), and much to our surprise, it's a new way that's more powerful than ours.

Cookingboy 3 days ago 19 replies      
Someone somewhere asked why a lot of people in the Go community are taking this in a somewhat hard way. Here is my hypothesis:

Go, unlike Chess, has a deep mythos attached to it. Throughout the history of many Asian countries it has been seen as the ultimate abstract strategy game, one that deeply relies on players' intuition, personality, and worldview. The best players are not described as "smart"; they are described as "wise". I think there is even an ancient story about an entire diplomatic exchange being brokered over a single Go game.

Throughout history, Go has become more than just a board game, it has become a medium where the sagacious ones use to reflect their world views, discuss their philosophy, and communicate their beliefs.

So instead of a logic game, it's almost seen and treated as an art form. And now an AI without emotion, philosophy or personality comes in, brushes all of that aside, and turns Go into a simple game of mathematics. It's a little hard for some people to accept.

Now imagine the winning author of the next Hugo Award turns out to be an AI, how unsettling would that be.

davelondon 3 days ago 5 replies      
Let's compare Go and Chess. We all know that Go is more complex than Chess, but how much more?

There's 10^50 atoms in the planet Earth. That's a lot.

Let's put a chess board in each of them. We'll count each possible permutation of each of the chess boards as a separate position. That's a lot, right? There are 10^50 atoms and 10^40 positions per chess board, so that gives us 10^90 total positions.

That's a lot of positions, but we're not quite there yet.

What we do now is we shrink this planet Earth full of chess board atoms down to the size of an atom itself, and make a whole universe out of these atoms.

So each atom in the universe is a planet Earth, and each atom in this planet Earth is a separate chess board. There's 10^80 atoms in the universe, and 10^90 positions in each of these atoms.

That makes 10^170 positions in total, which is the same as a single Go board.

Chess positions: 10^40 (https://en.wikipedia.org/wiki/Shannon_number)
Go positions: 10^170 (https://en.wikipedia.org/wiki/Go_and_mathematics)
Atoms in the universe: 10^80 (https://en.wikipedia.org/wiki/Observable_universe#Matter_con...)
Atoms in the world: 10^50 (http://education.jlab.org/qa/mathatom_05.html)
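The exponent bookkeeping above checks out; Python's arbitrary-precision integers make it trivial to verify:

```python
# Order-of-magnitude figures from the sources listed above.
chess_positions = 10**40     # Shannon's estimate
atoms_in_earth = 10**50
atoms_in_universe = 10**80
go_positions = 10**170

earth_of_boards = atoms_in_earth * chess_positions        # 10**90
universe_of_earths = atoms_in_universe * earth_of_boards  # 10**170
print(universe_of_earths == go_positions)  # True
```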

mixedmath 3 days ago 4 replies      
This game was largely played extremely well by both sides. There were a few peculiar-seeming moves made by AlphaGo that the commentator found very atypical. These moves ended up playing a very important role in the endgame.

I should also say that it's somewhat clear that Sedol made one suboptimal move, and AlphaGo capitalized on it. Interestingly, the English commentator made the same mistake as he was predicting lines of play. This involved play in the center of the board, in a very complicated position. Prior to this set of moves, the game was almost a tie. Afterwards, it was very heavily in AlphaGo's favor.

skc 3 days ago 5 replies      
I find it very interesting that to a layperson, the idea of a computer being able to beat a human at a logic game is pretty much expected and uninteresting.

You try and share this story with a non-technical person and they will likely say "Well, duh... it's a computer".

rurban 3 days ago 4 replies      
What I really liked about these games so far, and Michael Redmond's commentary, is that AlphaGo not only beat Lee Sedol twice, but also Redmond. He plays the same style as Sedol: he constantly predicts Sedol's moves, and he is as surprised and makes the same miscalculations as Sedol. He really needs some time to work out when he has made a mistake, the same mistake Sedol was eventually making. This is high-class commentary. Even though they have no computer screen to clear the board after exploring variations, he remembers all the stones and immediately clears his own moves. Amazing. I'm not sure a better device would actually help.
IvyMike 3 days ago 2 replies      
I sense a change in the announcer's attitude towards AlphaGo. Yesterday there were a few strange moves from AlphaGo that were called mistakes; today, similar moves were called "interesting".
bradley_long 3 days ago 2 replies      
Humans can become tired, emotional, and nervous. However, a computer/software would not have these problems.

Especially for Lee, the whole world is looking at him. An "ordinary" human like me won't be able to make the right decisions under this pressure.

A great respect to Lee and the Developers of AlphaGo. Good Game!

mark_l_watson 3 days ago 0 replies      
I wonder how this will affect future human play. About 30 years ago my brother and I started playing a simple African stone game, Kala. We each won about half the games until I coded up a brute-force search to play against. Given a game tree searched to the end, the program made the weirdest-looking opening move when playing first. I started making that move and forever after won.
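A search of that shape is only a few lines of code. Here is a toy sketch, using Nim (take 1 to 3 stones, last stone wins) as a stand-in for Kala, whose rules take longer to encode; the exhaustive search to the end of the tree finds the "weird" but winning opening move:

```python
from functools import lru_cache

# Toy illustration of an exhaustive game-tree search, on Nim
# (take 1-3 stones, whoever takes the last stone wins) rather
# than Kala, which has a much larger state space.
@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win."""
    if stones == 0:
        return False  # previous player took the last stone and won
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """An opening move found by searching the tree to the end."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return 1  # lost position: any move will do

opening = best_move(21)
```

For Nim the search rediscovers the classic rule: always leave your opponent a multiple of 4 stones.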

The situation with Go is different. (I wrote the Go program Honninbo Warrior in the 1970s, so I am a Go player and used to be a Go programmer.) Still, I bet the AlphaGo, and future versions, will strongly impact human play.

Maybe it was my imagination, but Lee Sedol sometimes looked happy and interested even late in the two games when he knew he was losing.

brianchu 3 days ago 3 replies      
I'm totally uninformed about Go, but by now it seems that unless you're clearly in the lead by the end of the midgame, AlphaGo is going to win, simply because at that point its Monte Carlo tree search is going to out-compute you in examining all the tactical variations in the endgame. Lee Sedol really seemed to be under a lot of time pressure at the end.

EDIT: clarified to what I originally meant: "end of midgame"

jknz 3 days ago 4 replies      
The next person to beat AlphaGo may not be a top Go player.

In particular, I'm wondering if a computer scientist with access to the AlphaGo source code and all the weights of the network could trick AlphaGo in order to win games automatically (cf. the papers showing that a neural net can be tricked into classifying a plane as any other class).

If a human with knowledge of the source code and the weights can do this, it is scary. Imagine a similar algorithm runs your car. An attacker who knows the source code and the weights might trick the algorithm into sending your car into a wall!
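The papers in question describe gradient-based attacks such as the fast gradient sign method: knowing the weights lets an attacker compute exactly which direction to nudge the input. A heavily simplified illustration, on a hand-rolled logistic classifier with invented weights (nothing like AlphaGo's actual networks):

```python
import math

# Fast-gradient-sign sketch: with the weights in hand, compute the
# loss gradient with respect to the INPUT and step along its sign.
# The classifier and all numbers here are made up for illustration.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that x belongs to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Perturb x by eps per feature in the direction that raises
    the cross-entropy loss for true label y (0 or 1)."""
    p = predict(w, b, x)
    # d(cross-entropy)/dx_i = (p - y) * w_i
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

w, b = [2.0, -3.0, 1.5], 0.1
x = [0.5, -0.4, 0.3]           # confidently classified as class 1
x_adv = fgsm(w, b, x, 1, 0.8)  # the same input, nudged to flip the label
```

The unsettling part is that the attack needs no trial and error: the weights tell the attacker the worst-case direction directly.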

pushrax 3 days ago 7 replies      
If AlphaGo wins all 5 games, what do you think DeepMind will do with it? My intuition is that they won't continue development, and will instead focus on other applications.

Great game btw, a pleasure to watch.

bronz 3 days ago 4 replies      
Who was the professional Go commentator? He was consistently predicting the moves of both Sedol and AlphaGo. I was extremely impressed.
oneeyedpigeon 3 days ago 2 replies      
What I find fascinating - and I guess this really highlights that I have no idea whatsoever how AlphaGo works - is that at the start of game 2, AlphaGo plays P4, then Lee Sedol plays D16. To a layman, this looks like it would be a very, very common opening. Moreover, it's symmetrical - I'm not sure how that affects things, but my naive intuition is that it makes the game state less complex.

Nonetheless, AlphaGo takes a minute and a half to play its next move. Can anyone explain what on earth is going on during those 90 seconds?

sams99 3 days ago 0 replies      
The thing I find amazing about this is how soon it has happened. We were all expecting it to happen eventually, but if you had asked anyone who played Go and followed the computer Go scene, say a year ago, they would have said it was "10 years out". AlphaGo is one incredible feat of engineering.
pmontra 3 days ago 1 reply      
Does anybody know how many CPUs and GPUs they're using this time? It was 1200+ and 700+ in October against Fan Hui. It would be interesting to know whether AlphaGo became better only because of the extra learning or also because of extra hardware. I googled for it and didn't find anything, but I could have missed the right source.
typon 3 days ago 1 reply      
I've had this thought watching this play out over the past few months. You have this deeply mystical, zen-like game of ancient China which represents the philosophy of the East and it's pitted against this pure capitalist, cold and calculating (literally) machine.

You can hold out for a few thousand years, but eventually the uncontrollable and amoral technological imperative will catch on and crush you.

It's kind of poetic and sad. It feels like technology will render everything un-sacred eventually.

pavpanchekha 3 days ago 1 reply      
What's even more exciting is that there weren't direct mistakes by Lee Sedol in this game, like there were in Game 1. So does that mean that AlphaGo is just playing on a level beyond him?
ljk 3 days ago 0 replies      
Wow you're fast!

good to know they'll play all 5 games no matter what the result is though

People seem to think Lee knew he lost and was just playing to learn more. Hope he learned enough to take the overlord down in the next three games

studentrob 3 days ago 0 replies      
That was entertaining and I don't even really know the game. Props to Google for making this available live on a solid feed.

I wonder if Lee Sedol will have an interest in studying deep learning after this =)

0x777 3 days ago 5 replies      
Lee Sedol seemed to be doing well before he went into extra time (as far as I could follow from the commentators). How is it ensured that this is a fair game given the time constraints? I'm guessing adding more computing power to the AlphaGo program should definitely help it in this regard.
Jach 3 days ago 2 replies      
I may be glad no one took my bet offer of me paying $19 if AlphaGo won 3/5 vs them paying $1 otherwise... I had a prediction at 90% confidence that nothing would show up before the end of this year that would be capable of beating the top players (though since I first heard about MCTS's success the idea of coupling it with deep learning seemed obvious, so I had an unfortunately non-recorded prediction that if a company ever bothered to devote about 8-12 months of research and manpower into combining those two algorithms with a very custom supercomputer or tons of GPUs then they would have something that could beat the best), then AlphaGo was announced. But the top pros weren't too impressed with its defeat of Fan Hui, and Ke Jie estimated something like "less than 5%" chance of it beating Sedol so I updated to 5% for this match of it winning 3/5...

Tonight's game was beautiful. Last night's was a fighting game way too high level for me to really grasp (I have no idea how to play like that, all those straight and thin groups would make me nervous). I'm expecting Sedol to win Friday since I imagine he's going to have a great study session today, but I'm no longer confident he'll win the last two.. Still rooting for him though. :) (I also want to see AlphaGo play Ke Jie (ed: sounds like from the other submission on Ke's thoughts that may happen if Sedol is soundly defeated), and for kicks play Fan Hui again and see whether it now crushes weaker pros or is strangely biased to adopt a style just slightly stronger than who it's facing.)

pkaye 3 days ago 2 replies      
Let's say AlphaGo can beat all the best human Go players. What, then, will be the next, harder game for computers to compete at against humans and win?
kerkeslager 3 days ago 1 reply      
As a programmer and a go player, I knew this day would come, but I'm a bit disappointed that this is how it happened, for two reasons:

1. As the game of go progresses, the number of reasonable moves decreases, so that as the game progresses, players on average play closer and closer to optimally. By the end of the game, even weak amateurs can calculate the optimal move. Logically, I would guess that stronger players are able to play optimally earlier than weak ones. Lee Sedol is known for his strong middle and endgame, often falling behind early on and making it up late in the game. He is so strong at this that he has driven an entire generation of go players to developing very strong endgame. But AlphaGo, running Monte Carlo simulations, almost certainly can brute force the game earlier than Lee Sedol can. Lee Sedol is playing AlphaGo on its own turf. A player known for their opening prowess, such as Kobayashi Koichi in his heyday, might have had an advantage that Lee Sedol doesn't. (Note: I'm not strong enough to analyze Lee Sedol or Kobayashi Koichi's play styles; I'm repeating what I've heard from professionals.)

2. I hoped that when an AI beat a pro at go, it would be with a more adaptive algorithm, one not specifically designed to play go. If my understanding of AlphaGo is correct, it's basically just Monte Carlo: the advances made were primarily in improving the scoring function to be more accurate earlier, and the tree pruning function, both of which are go-specific. It's not really a new way of thinking about go (at least, since Monte Carlo was first applied to go). It's just an old way optimized. The AI can't, for example, explain its moves, or apply what it learned from learning go to another game. It's certainly a milestone in Go AI, and I don't want to downplay what an achievement this is for the AlphaGo developers, but I also don't think this is the progress toward a more generalized AI that I hoped would be the first to beat a professional.
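For readers unfamiliar with the term: the "Monte Carlo" core being referred to is evaluating a position by playing many random games to the end and counting wins. A minimal sketch, on a trivially small take-away game rather than Go, just to show the shape of the idea:

```python
import random

# Bare-bones Monte Carlo position evaluation: score a state by
# random playouts to the end. The game (take 1-3 stones, last
# stone wins) is a toy stand-in so the sketch stays self-contained.
def playout(stones, to_move):
    """Play uniformly random moves to the end; return the winner (0 or 1)."""
    player = to_move
    while True:
        stones -= random.choice([t for t in (1, 2, 3) if t <= stones])
        if stones == 0:
            return player          # took the last stone: wins
        player = 1 - player

def mc_value(stones, to_move, n=2000):
    """Estimated win probability for `to_move` from this position."""
    wins = sum(playout(stones, to_move) == to_move for _ in range(n))
    return wins / n
```

Real MCTS adds a tree on top of this (selecting which branch to roll out via statistics like UCB), and AlphaGo's advance, as the comment says, was replacing the random playouts and naive move selection with learned networks.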

astrofinch 3 days ago 1 reply      
So given that this victory seems to be happening a decade or so before experts predicted, how likely are we to see similar acceleration in reaching other AI milestones? (Especially given that AlphaGo is using the same algorithm that won the Atari games, so it has the potential to be very general in its application)
skarist 3 days ago 1 reply      
I predict 5-0 for DeepMind. Lee now has broken self-confidence to battle (crucial for a human player), something that will not and cannot trouble the DeepMind team.
Dawny33 3 days ago 0 replies      
Wow! Monte Carlo tree search really coming into play in this match.

Especially when AlphaGo capitalized on just one suboptimal move of Lee Sedol's.

EGreg 3 days ago 0 replies      
Someone here said an interesting thing. Perhaps the next AI challenge would be to see whether AI running on weaker machines can beat AI of yesterday on stronger machines. And this test can be automated to find even better algorithms. Like can Rybka running on an iPhone today beat Fritz running on a distributed supercomputer? Or thinking for 2 seconds rather than 2 minutes, on the same computer?

There is something unnerving about a computer that can answer in 0.01 seconds and still have the move be better than anything a human would come up with in an hour. At that point a robot playing simultaneous bullet chess would wipe the floor with a row of grandmasters, beating them all without exception.

drjesusphd 3 days ago 1 reply      
So is this it then as far as games go? Does anyone know of any efforts to develop a more "human-friendly" complete information game than go?
spdy 3 days ago 0 replies      
Must be amazing seeing how the program you helped to create beat the best player in one of the most complex games on this planet.

This is a milestone in modern informatics.

bronz 3 days ago 0 replies      
I am so glad that I got to see this live. These matches will be historic.
dvcrn 3 days ago 0 replies      
I think this is really fascinating but also scary. Imagine you are the best in the world in something. That is your thing and no one else can do it better than you.

Then suddenly a computer comes along and takes that title from you. But it takes it in such a way that you are never in your life able to re-take it because of how the AI works.

A game will likely just be the first field. My girlfriend works in translation and interpretation, another area already in the crosshairs of neural networks. AIs will step by step become more efficient than people, and that is terrifying.

TheArcane 3 days ago 0 replies      
I wonder how long until AI starts writing bestselling novels.
axelfreeman 3 days ago 1 reply      
I don't get the mystery of this. This algorithm is complex, sure! But deep learning is very fast training/repetition of a game (or some other goal) while saving the good or bad results. Predict the opponent's moves. Find good positions/patterns. Or did I miss something here?
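The "repeat and save the good or bad results" loop can be sketched concretely. This is a tabular self-play value update on a toy counting game (take 1 or 2 stones, last stone wins), with no neural net and an invented learning rate; AlphaGo's real training replaces the table with deep networks:

```python
import random

random.seed(0)   # deterministic toy run

V = {}           # state -> estimated win chance for the player to move
ALPHA = 0.1      # learning rate (arbitrary for this sketch)

def episode(start=10):
    """One game of random self-play, then back up who won."""
    state, trajectory = start, []
    while state > 0:
        trajectory.append(state)                    # state where a move was made
        state -= random.choice([m for m in (1, 2) if m <= state])
    # whoever moved last took the final stone and won; movers
    # alternate, so walk the trajectory backwards flipping the result
    result = 1.0
    for s in reversed(trajectory):
        old = V.get(s, 0.5)
        V[s] = old + ALPHA * (result - old)         # "save the result"
        result = 1.0 - result

for _ in range(5000):
    episode()
```

After enough repetitions the table reflects the game: state 1 is a certain win for the mover, while state 3 (a losing position here) scores low.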

Jerry2 3 days ago 1 reply      
Does anyone know what DeepMind's software stack looks like? Just based on past work of some of the people working there, I'm guessing most of the code is C++ with some Lua. Anyone know for sure?
dynjo 3 days ago 1 reply      
"By the 4th game, AlphaGo apparently became self-aware and the fate of mankind was sealed..."
blacktulip 3 days ago 2 replies      
Has anyone noticed the lack of ko[1] in the games? In all 7 public games (5 with Fan and 2 with Lee) there hasn't been a single ko. This is unusual. If we still don't see ko fights in the following 3 games... I would suspect that AlphaGo isn't able to handle ko well enough yet, and that Google asked Lee and Fan not to initiate ko fights.

[1] https://en.wikipedia.org/wiki/Ko_fight

danielrm26 3 days ago 0 replies      
What I find fascinating about this is that the system was programmed by people who were presumably not as good at Go as Lee Sedol.

So if the first comment in this thread (about how it's a completely non-human approach) is true, it's really interesting that humans can enable computers to come up with non-human ways of solving complex problems.

Seems like a big part of this story, if I'm not being completely dumb here.

grouma 3 days ago 0 replies      
Exciting match with top notch commentary. I'm rooting for a sweep of the series.
toolslive 3 days ago 1 reply      
These matches are not really fair: the AI team can "prepare" by examining the human's previous games, finding weaknesses, and so on, while the human doesn't really have anything to guide his or her preparation.
blahblah12 3 days ago 0 replies      

Slack channel for discussion if anyone's interested. We're using it for commentary while the games go on. Was created by AGA people.

vancan1ty 3 days ago 0 replies      
Did Lee Sedol have access to a dataset of AlphaGo games in preparation for this match series? I wonder if it would help him to study the computer's moves and strategies in other matches.
jonbarker 3 days ago 2 replies      
AI enthusiast and amateur player here: Michael Redmond made a great point yesterday. If the algorithm is only interested in maximizing the probability of a win, ignoring the margin of victory, shouldn't there be some override for the weak moves it plays when its lead is sufficient? AlphaGo played some weak moves in yesterday's endgame when it perceived it was sufficiently ahead. A truly intelligent opponent would play strong moves even when sufficiently ahead, no?
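The tension Redmond points at is easy to state concretely: a move chosen to maximize win probability and a move chosen to maximize expected margin can disagree. A two-move toy with entirely invented numbers:

```python
# Toy illustration of the objective mismatch: each candidate move
# carries a (win_probability, expected_margin) pair. The values
# are made up and stand in for what a search would estimate.
moves = {
    "solid": (0.98, 1.5),    # near-certain win by a small margin
    "sharp": (0.90, 12.0),   # slightly riskier, much bigger margin
}

best_by_winprob = max(moves, key=lambda m: moves[m][0])
best_by_margin  = max(moves, key=lambda m: moves[m][1])
```

An agent optimizing only win probability happily plays the "weak-looking" solid move; to a human counting points, that looks like a mistake even though it is exactly what the objective asks for.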
dineshp2 3 days ago 0 replies      
Most people, other than researchers and hackers, really did not understand what AI was capable of doing. The very idea of AI seemed too abstract to comprehend (I consider myself guilty).

But AlphaGo showed us what AI is really capable of doing in an eerie sort of way and I think interest in AI will soon become mainstream which is a good thing for the development of AI.

Now it's at least easier to comprehend the context of all those doomsday warnings about AI destroying humanity which I never took seriously.

mhagiwara 3 days ago 0 replies      
I always wanted to learn to play Go, and one of the reasons was that it was the only game where computers hadn't defeated humans. Well, that is no longer the case, and I've kind of lost the motivation to learn it.

I wonder what other interesting games (intellectual sports) there are where computers have yet to defeat humans, and which you would be interested in learning?

i_don_t_know 3 days ago 2 replies      
Is there a complete recording of the commentary? They had one for game 1. The current live stream only goes back two hours and doesn't include the beginning of the game.

I'm looking at the DeepMind channel on YouTube: https://www.youtube.com/channel/UCP7jMXSY2xbc3KCAE0MHQ-A

jasonjei 3 days ago 0 replies      
Isn't it kind of interesting that Google is taking the lead on these projects? It reminds me of when IBM took on the gusto of developing chess AI when they had strong technical superiority. It's almost as if Google is taking the mantle from IBM to develop these renaissance projects.
hutzlibu 3 days ago 2 replies      
Does anyone know of a site/video, where I can just see the game moves without commentary and thinking pauses?
naveen99 3 days ago 0 replies      
Well, the nice thing about Go is the handicap system. I wonder how many handicap stones the human champion needs to beat AlphaGo, and we can watch that number increase over time. I wonder if chess could use a handicap system to keep things interesting.
karussell 3 days ago 0 replies      
I haven't read the whole thread, but was it possible for Lee Sedol to play against the final AlphaGo beforehand? Although AlphaGo seems to be a huge achievement, I find the lack of prior practice games a bit unfair, since AlphaGo was probably able to study lots of Sedol's games beforehand.
chm 3 days ago 1 reply      
This will be buried by now but:

What happens if the Go master tries to deceive the opponent? As in, purposefully playing a counter-intuitive position, or even "trying to lose"? Will the AI's responses be confused, since it is expecting rational moves from its opponent?

github-cat 3 days ago 0 replies      
Should we be worried about the win of AlphaGo? http://www.pixelstech.net/topic/141-Should-we-be-worried-abo...
myohan 3 days ago 0 replies      
I would like to see another experiment where Lee is aided by a computer and plays against AlphaGo, and see who wins... some believe that human intuition working with a mediocre computer is much more powerful than a supercomputer by itself.
vedaprodarte 3 days ago 2 replies      
A question: should we take this as "a computer beating a human" or "developers beating a Go player"? I had this discussion with my friends and we hold opposite opinions.
conanbatt 3 days ago 2 replies      
What is interesting to me is that the computer makes clear mistakes when it's in the lead. Since it may find equal winning chances among several different scoring results, it often picks a weaker one.

This has a powerful consequence: we have not seen AlphaGo pushed to the limit; it narrows the margin as if it were playing a teaching game.

Lee Sedol, I think, came to this conclusion, and the only human strategy left is to take a lead big enough to maintain for the rest of the game. That might be the last strategy left to show whether the computer is already unbeatable, because it will be pushed to its limits to win a game, and it might still overcome humans.

bennyg 3 days ago 0 replies      
I wonder how "smart" the AI can become once Lee Sedol starts pattern matching and playing against its moves better.
pgodzin 3 days ago 3 replies      
Is it best of 5 or are they definitely playing 5 matches?
andreyk 3 days ago 0 replies      
Very impressive. Since there is a ton of hype about this and many media stories (at least NYTimes, with no citation at all) saying that this came 'a decade early', I think it's worth looking over Yann LeCun's retrospective on research in this area (https://www.facebook.com/yann.lecun/posts/10153340479982143). Clearly he was saying all this to preface the results of Facebook's research in comparison to Google's, but I still think it is a very good overview of the history and shows the ideas did not come about suddenly. Quoting a few key things, since the whole thing is very long:

"The idea of using ConvNet for Go playing goes back a long time. Back in 1994, Nicol Schraudolph and his collaborators published a paper at NIPS that combined ConvNets with reinforcement learning to play Go. But the techniques weren't as well understood as they are now, and the computers of the time limited the size and complexity of the ConvNet that could be trained. More recently Chris Maddison, a PhD student at the University of Toronto, published a paper with researchers at Google and DeepMind at ICLR 2015 showing that a large ConvNet trained with a database of recorded games could do a pretty good job at predicting moves. The work published at ICML from Amos Storkey's group at University of Edinburgh also shows similar results. Many researchers started to believe that perhaps deep learning and ConvNets could really make an impact on computer Go.


Clearly, the quality of the tactics could be improved by combining a ConvNet with the kind of tree search methods that had made the success of the best current Go bots. Over the last 5 years, computer Go made a lot of progress through Monte Carlo Tree Search. MCTS is a kind of randomized version of the tree search methods that are used in computer chess programs. MCTS was first proposed by a team of French researchers from INRIA. It was soon picked up by many of the best computer Go teams and quickly became the standard method around which the top Go bots were built. But building an MCTS-based Go bots requires quite a bit of input from expert Go players. That's where deep learning comes in.


A good next step is to combine ConvNets and MCTS with reinforcement learning, as pioneered by Nicol Schraudolph's work. The advantage of using reinforcement learning is that the machine can train itself by playing many games against copies of itself. This idea goes back to Gerry Tesauro's NeuroGammon, a computer backgammon player that combined neural nets and reinforcement learning that beat the backgammon world champion in the early 1990s. We know that several teams across the world are actively working on such systems. Ours is still in development.


This is an exciting time to be working on AI."

awl130 3 days ago 0 replies      
do you think lee sedol should change his goal from trying to win all remaining three games to winning just one? in other words, sacrifice the next two games to learn about alphago and then try to win the final game.
Tistel 3 days ago 0 replies      
does anyone know anything about the implementation (language etc)?
lottin 3 days ago 0 replies      
From what I gather, if you have a powerful enough computer, you can solve any game simply by applying game theory, as long as you can assign a numerical value to the possible outcomes.
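This is backward induction: with numeric values on every leaf of the full game tree, alternating max and min levels yields the game's value. A tiny sketch with a made-up tree (the catch, of course, is that for Go the tree is astronomically large):

```python
# Backward induction over a complete game tree. Internal nodes are
# lists of children; leaves are numeric outcomes. The first player
# maximizes, the second minimizes, alternating by depth.
def solve(tree, maximizing=True):
    if not isinstance(tree, list):      # leaf: numeric outcome
        return tree
    values = [solve(child, not maximizing) for child in tree]
    return max(values) if maximizing else min(values)

# An invented two-ply game: the maximizer picks a branch, then the
# minimizer picks within it (a bare number is an immediate outcome).
game = [[3, [5, 1]], [4, [0, 9]]]
value = solve(game)
```

Here the maximizer's best branch is the second one, whose worst case is 4; that guaranteed value is exactly what "solving" a game means.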
w8rbt 3 days ago 0 replies      
Would it be possible to play in a random/unpredictable fashion and win a game of go? If so, that may be one approach to beating the computer.
LaFolle 3 days ago 0 replies      
This is superb awesome!!!

In future, it will be interesting to see AlphaGo playing against itself!

openaccount 3 days ago 1 reply      
Meanwhile, 'Google Translate' translates texts terribly. Why don't they work on important tasks?
eruditely 3 days ago 0 replies      
Oh come on, Lee Sedol, we believe in you man. You might crack under pressure, it's cool. Bring it home for us meatbags, will you? HK-47 why T_T.
I stayed in a hotel with Android lightswitches and it was as bad as you'd think mjg59.dreamwidth.org
920 points by pjc50  2 days ago   307 comments top 53
ChuckMcM 2 days ago 9 replies      
So sad. I find the mechanics here really challenging to overcome. The hotel management no doubt wants "really cool tech" for their hotel to show they are up to date etc. And they send out an RFQ which someone bids on, really cheaply. Knowing that by only doing the things the hotel asks for, they can throw something together quickly and cheaply for a big payday.

This is exactly the mechanism that gets people in trouble going to China for manufacturing. They say "I want you to build widgets" and they get a good price quote, and say "Wow, this is awesome!" because they have in their mind that "making things in China is cheap", but in reality it's that if you cut a lot of corners you can make things really cheap, and since the contract doesn't say you can't cut corners, it is all "perfectly" legal. But the manufacturer knows what the buyer doesn't, and exploits that information asymmetry to make money at the buyer's expense without the buyer having any true recourse.

The hotel in question could have said in the RFQ, "System will be impervious to network traffic snooping and at no time will systems or a guest supplied computer be able to access the controls in another room."

Had they said that, the price quotes would have gone up, and had the system the author describes been delivered anyway, the hotel could have recovered the cost of installing it from the vendor. But the hotel didn't even know they needed to ask for that, since they no doubt assumed, "nobody would make something that shoddy, would they?"

I learned about this when I saw one of the rules in a NetApp hardware contract that said "Manufacturer will install all components shown on the schematic on the final units in their designated locations." That seemed really odd. I learned that before that clause had been part of the standard contract, there had been a manufacturer who decided unilaterally that half of the noise suppression capacitors in the schematic were "unneeded." Units from that manufacturer started failing in odd ways in the lab.

stestagg 2 days ago 2 replies      
When I stayed there, it was just as soul destroying to use these things as you might imagine.

The implementation felt like they'd asked a VB6 dabbler to implement it in Java, then stuck it in the cheapest 600MHz tablet they could find.

The UI was purely a button grid with distorted graphics, and dodgy typography. Button presses took about 1/2 a second to respond, and every 5th press caused the app to crash (adding a good 30 s to the experience).

My room had 4 tablets* in, and all of them behaved exactly the same way.

* The idea of a tablet to control the room is neat if it can be moved around, like a remote control. But for security (and because they used Ethernet) they were all fixed down, making them far less useful than plain switches.

thoughtsimple 2 days ago 5 replies      
I'm amused by the use of Modbus. I worked on Modbus networking back in the 1980's at Modicon (a company that disappeared long ago that created the "standard"). Using a protocol invented before the internet to control devices on a semi-public network is insane.

The original Modbus was designed to communicate with factory devices controlled by logic controllers over serial and eventually over a custom token ring network. Modbus got moved to TCP at some point when I stopped paying attention. Modicon rejected TCP when I was there because the OSI model 7 layer network stack was going to be the next big thing.

eloff 2 days ago 4 replies      
Turning lights on at 3 a.m. is a nuisance. Knowing when lights go on and off can tell you when the people are not in their room - which could help if you wanted to break in and steal their stuff. Overall quite disconcerting how lax they are with security.
radarsat1 2 days ago 3 replies      
Maybe it's worse: If these are really off the shelf tablets, presumably the camera can be turned on remotely. Though I'm sure the hotel would have put a piece of black tape over it, right?
patcheudor 2 days ago 2 replies      
"Jesus Molina talked about doing this kind of thing a couple of years ago, so it's not some kind of one-off - instead, hotels are happily deploying systems with no meaningful security, and the outcome of sending a constant stream of "Set room lights to full" and "Open curtain" commands at 3AM seems fairly predictable."

Which takes us to this: "Any sufficiently advanced technology controlled by a miscreant is indistinguishable from a possessed object in a Stephen King novel."


binarymax 2 days ago 15 replies      
I feel like I'm missing out on a huge pile of money simply because when I have "Internet of Things" ideas, I can't get past the security obstacles and cancel them. If only I just didn't care (or didn't know) and implemented whatever the heck brought in money from oblivious customers.
Spooky23 2 days ago 0 replies      
Technology for technology's sake is a real shitshow and a big problem.

I was in my friend's Honda Pilot the other day, which has the new trendy big screen interface to replace the radio. I'm sure it is insecure junk, but more importantly it is a nightmare for humans.

I have a BS in CS, have developed some enterprise apps, run major complex tech programs successfully, and could program my dad's VCR in the early 80s. And... It took me nearly 10 minutes to figure out how to turn off the radio on the weird touchscreen.

To turn the radio on requires 4 clicks, and the key button is in the corner of the screen, where it is least responsive to touch. I would probably be safer driving with my knees and texting with two hands than controlling that radio.

westi 2 days ago 3 replies      
This is the unfortunate outcome of a bunch of factors.

OEMs moving to XXX over TCP protocols which have zero security by default and documenting this in the datasheets.

VAR installers switching to the newer products because CAT5 cable is cheaper and easier to pull than what they used to use.

The previous solution was just as insecure but harder to hack because you needed more specialised equipment.

I'm not sure how we are going to fix this without getting the OEM industry and the industry bodies behind xxx over TCP to understand that they need to bake a security model in.

baconizer 2 days ago 0 replies      
KNX is one of the most sophisticated and proven building-automation protocols, widely adopted in Europe.

If anyone is interested: cross-scan its default IP interface port (3671) against, say, a German telecom ISP's IP range (CSVs are available on the web) with an efficient scanning tool like masscan; challenge it with 0x0205 and look for 0x0206 in the response.

Thousands of homes, factories, and commercial buildings welcome you with real-time datagrams from all their switches/appliances/presence sensors/cams/... Bonus point: writable!
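For the curious, the probe described is a KNXnet/IP SEARCH_REQUEST. A hedged sketch of the frame (a 6-byte header plus an HPAI return address; the field layout follows the KNXnet/IP spec as I understand it, so verify before relying on it):

```python
import socket
import struct

# Build a KNXnet/IP SEARCH_REQUEST (service type 0x0205). A listening
# gateway replies with a SEARCH_RESPONSE (0x0206) to the address in
# the HPAI. 192.0.2.1 is a documentation address used as a placeholder.
def search_request(reply_ip, reply_port):
    hpai = struct.pack(">BB4sH",
                       8,                          # HPAI structure length
                       0x01,                       # host protocol: IPv4/UDP
                       socket.inet_aton(reply_ip), # where to send the response
                       reply_port)
    header = struct.pack(">BBHH",
                         0x06, 0x10,               # header length, version 1.0
                         0x0205,                   # SEARCH_REQUEST
                         6 + len(hpai))            # total frame length
    return header + hpai

frame = search_request("192.0.2.1", 3671)
# 14 bytes over UDP to port 3671 is all it takes to make a gateway
# announce itself, exactly as the scan described above exploits.
```

The asymmetry is the same as with Modbus above: discovery and control were designed for trusted wires, then put on routable IP networks unchanged.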

pbnjay 2 days ago 1 reply      
The title is implying that Android is the culprit here, and not just a horrible design and implementation.
tedsuo 2 days ago 1 reply      
All that internet, and the Android tablets are still just sitting on the wall where the light switches used to be. What's the cost in hardware and electricity to move from light switches to Android tablets for an entire hotel?
Kenji 2 days ago 1 reply      
People forget how reliable and secure things like hard-wired physical light switches and other natural interfaces (like paper books, etc.) are. There seems to be a vast ignorance about topics like reliability, interfaces, usability and design. Just because it's digital, it doesn't mean it's better - I'd argue for the contrary in many cases. I don't want to upgrade the firmware of lightbulbs, I just don't. Despite my affinity for them, I have more than enough computers surrounding me in my everyday life.
nickysielicki 2 days ago 3 replies      
I feel like the only thing that can fix this type of mentality is a line of products targeted towards annoying nerdy 13 year old boys-- the type of boy that a lot of us were. We need to make it easy for them to abuse security lapses in IoT products. When I was in middle school, I brought a universal remote to class and turned on the television set. Yeah, I know, I was a badass. But these kids will do much more.

The problem is that when a software engineer goes to the front desk of a hotel and complains about the security of the brand new Android-Powered Hi-Tech system that they just put in, the person working the desk thinks, "Haha wow! That nerd was a real Sheldon Cooper, like on the television!" and they don't care at all. If you live in a bubble where programming and computer work is black magic, well then of course it is completely inevitable that someone so nerdy and so smart would be able to hack everything on the planet. So they don't really think there's anything to be done.

When it's a group of annoying little 15 year olds that sneak out in the middle of the night to wake up all of your guests, it's a lot bigger of a deal.

mataug 2 days ago 1 reply      
This whole IoT craze is turning into a nightmare. People are building all kinds of devices in complete ignorance of security.
abledon 2 days ago 0 replies      
I liked how the article gave the commands used to set up the correct networking config with the bridge.

Can anyone recommend a good reference/tutorial for learning basic network-fu in Unix?
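For reference, a transparent bridge of the kind the article describes can be assembled with the standard iproute2 tools. This is a minimal sketch (it needs root, and the interface names eth0/eth1 are assumptions), not the article's exact commands:

```shell
# Build a transparent bridge between the wall jack (eth0) and the device
# under test (eth1); all traffic between them becomes visible on br0.
ip link add name br0 type bridge
ip link set eth0 master br0
ip link set eth1 master br0
ip link set eth0 up
ip link set eth1 up
ip link set br0 up

# Observe the traffic crossing the bridge:
tcpdump -i br0 -n
```

Because the bridge operates at layer 2, neither endpoint needs an IP change to be observed.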

FireBeyond 2 days ago 0 replies      
Yeah, wow. Twelve years ago, I worked for a firm that built DVOD (digital video on demand) systems for hotels across Australia and UAE.

Even then, and with the limited 'damage' that could be done, each and every single room got its own VLAN. That was certainly a little ugly to manage at times, especially in a 1200 room hotel, but yes.

tezza 2 days ago 0 replies      
Isn't this just a modern day equivalent of Phone Phreaking ?

There used to be party lines in villages where the whole village could listen in to anyone's phone call.

Never mind the operator could also have a sticky beak.

Now if they can change your sound system to play Kanye West... that truly is a problem worth worrying about.

goodcanadian 2 days ago 2 replies      
This is why I don't understand the "Internet of Things." A light switch is a pretty effective solution to the problem; there seems little advantage to networking it. Ditto for a toaster, refrigerator, et cetera, et cetera.

Now get off my lawn!

marcoperaza 2 days ago 0 replies      
My favorite example of this is the evolution of volume controls in cars. These days you have all sorts of fancy and inferior alternatives that leave you wishing for a plain old-school volume knob. The worst are the purely virtual volume settings with up and down buttons on a touch screen. Or only a bit better, physical knobs that spin endlessly and just send up and down operations to a digital volume level. Reasons why the old school knob is better:

* It maintains its position across power cycles. It can even be adjusted when the car is off. So you can lower the volume knob before you turn the car on, instead of blasting loud rock into your grandmother's ears.

* It does not require you to look at a touch screen to find the volume buttons. Tactile feedback is enough. You can operate it while maintaining the other 99% of your attention on the road.

* It physically stops at the lowest and highest possible volumes. Again, no need to look at some display.

Even better would be a physical slider instead of a knob. That would let you feel out the exact position of the volume without looking. The downside would be the limited space on a car stereo dashboard. But please, a touch screen is the worst and most dangerous interface while driving.

The same goes for radio presets. In a car with physical buttons for the presets, I can switch between my favorite stations without having to look. Try doing that with a touchscreen. How is this progress?

Maybe it's just a symptom of an industry that's often more about selling status symbols than selling functional products.

liveoneggs 2 days ago 3 replies      
Prepare to be arrested for various violations as punishment for pointing out these obvious and dangerous flaws.
JustSomeNobody 2 days ago 2 replies      
I don't see what this gains the hotel. You get an increase in complaints/requests about not being able to turn lights on/off, etc. Standard light switches are dirt cheap, last for years, and everyone from age 2 up knows how to use them.

Is this solely to look "fancy"? If so, then at least get the tech right otherwise you look incompetent.

TheOtherHobbes 2 days ago 0 replies      
Perhaps the hotel was getting ahead of UK gov requirements for network backdoors.
dheera 2 days ago 0 replies      
Great, now make a drone or self-driving robot to randomly run around the country messing with people's insecure lights. It'll be one huge party.
matthewmacleod 2 days ago 2 replies      
This is absolutely fucking preposterous.

The 'Internet of Things', or whatever you want to call it (controllable peripherals, ubiquitous connections, stuff like that), is a pretty cool concept. I want to be easily able to do things like ask 'when will my laundry be finished?', or have my central heating come on when I start heading home. Not because it's massively beneficial, but because it removes some minor annoyances.

The technology is there, and has been for a while. But the proliferation of mindless, unforgivable security flaws, pervasive surveillance, proprietary cloud-based networks, shitty software and bad UX generally: it's really mad. It makes it difficult to want to use any of these devices.

I'd love some kind of proper, non-half-baked-and-riddled-with-holes solution for home automation, but I reckon I'd probably have to build it myself.

ryandrake 2 days ago 1 reply      
The hotel's name and address were not mentioned. As long as people complain but fail to name-and-shame, these practices will continue.
ochoseis 1 day ago 0 replies      
I'm surprised no one's mentioned Brillo[0] or Weave[1] from Google. They're trying to solve this problem in an open, standardized way.

[0] Brillo - Embedded Android - https://developers.google.com/brillo/

[1] Weave - Communications - https://developers.google.com/weave/

Lanari 2 days ago 0 replies      
If you found that you could do the same thing on a classic installation, people would be like, "So what?" An electrician wouldn't even be excited to try it.

But since hacking is cool, we like this stuff.

The weird thing also is that using WiFi years ago basically meant giving away your data, back when SSL websites were so rare. And we didn't even care...

frogpelt 2 days ago 1 reply      
Every once in a while invention is the mother of necessity.

But do we really need our lights to do all kinds of funky things and be controlled from around the globe?

Don't we really just need our lights when we're in the room? And don't we just need them to be on/off or at most dimmable?

Help me here.

devishard 2 days ago 0 replies      
This is a great example of more technology making things worse. I don't mean badly done technology, I mean that even if this were working and secure, a light switch would be cheaper, more durable, easier to repair, and easier to use.
saint-loup 2 days ago 0 replies      
I work in a flagship building in France, known for its environmental compliance and automation features. It's quite nice, but there's a web app to toggle the lights, the blinds and whatnot. And guess what, the logins to other floors are trivial to guess.
ilvnvtoomuch 2 days ago 0 replies      
Last week's episode of Ask This Old House had something similar. They swapped a normal front door lock/handle with a Bluetooth (or WiFi) controlled unit. The phone could be used with an optional WiFi extender. My head swirled with so many scenarios where things could go bad.

1) Leave the phone in the home, you'll never be able to get in!
2) Wireshark the WiFi
3) Hijack the signal

I'm sure the dark side is waiting for us all to adopt IoT in our homes. I prefer my mechanical locks, thank you.

Kristine1975 2 days ago 1 reply      
And here I thought the video game Gunpoint (where you rewire switches, lights and doors among other things) was unrealistic.

Although I'm a bit disappointed mjg59 didn't play Blinkenlights with the rooms on his floor.

pyabo 2 days ago 0 replies      
In a competition, it would be possible to wake up your opponent. I mean, if you are going to play a tennis or football match the next day, you could harass your opponent's sleep to gain an advantage.
zmmmmm 1 day ago 0 replies      
In the hotel's defence, I'm sure he could also go to conventional hotels, chop a hole in the wall, and start messing with the wiring to achieve approximately similar "security breaches". The broken implementation is more concerning to me than the security aspects.
bencollier49 2 days ago 1 reply      
Why on earth are they using Modbus? Is there already some sort of industry standard (on Modbus) for remotely controlling hotel peripherals?
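For context on why Modbus is so easy to abuse: the TCP framing is simple enough to build by hand, and the protocol has no authentication at all. Below is a sketch of a "Write Single Coil" request (function code 0x05) per the Modbus/TCP framing; the coil address and unit ID here are illustrative, not the hotel's actual mapping.

```python
import struct

def write_single_coil(transaction_id: int, unit_id: int,
                      coil_addr: int, on: bool) -> bytes:
    """Build a Modbus/TCP 'Write Single Coil' request (function code 0x05)."""
    # PDU: function code, coil address, value (0xFF00 = on, 0x0000 = off)
    pdu = struct.pack(">BHH", 0x05, coil_addr, 0xFF00 if on else 0x0000)
    # MBAP header: transaction id, protocol id (always 0),
    # length (PDU bytes + 1 for unit id), unit id
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Sending it is a plain TCP write to port 502, e.g. with
# socket.create_connection(...).sendall(write_single_coil(1, 1, 0x0000, True))
```

Note the absence of any credential field anywhere in the frame: anyone who can reach TCP port 502 can flip coils.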
swasheck 2 days ago 0 replies      
> My coworker asks whether you can control the channels. Can you set all of your neighbours' TVs to pay-per-view while they're out?

Hahahahahahah! "Asking for a friend."

But really, folks are talking about the nuisance of waking people up in the middle of the night and that's true. However, controlling channels could be a more significant nuisance.

jameshart 2 days ago 0 replies      
I wonder if a more fruitful attack target than the lights in other rooms might be the android switches themselves. Even cheap commodity android tablets contain cameras or at least microphones. There's almost certainly a remote update interface on them of some sort.
jjp 2 days ago 0 replies      
Having stayed at a similar (same) hotel don't even get me started on the guest experience. It took longer to familiarise myself with all the controls in the room than a normal stay in the hotel. Also really appreciated the slight glow all the tablets gave off at night...well they did until they got covered with cushions and gaffer tape!


talles 2 days ago 0 replies      
Are physical light switches even a problem to be solved?

Seriously, I fail to see the ROI of such endeavor.

webXL 2 days ago 0 replies      
What's up with all the hotel hacking today? First towels [1], now light switches??

[1] https://news.ycombinator.com/item?id=11265849

justinclift 2 days ago 0 replies      
Don't suppose those tablets had any kind of microphone or image sensor/webcam built into them? If they're using cheap generic android tablets, they probably do.

Should be fairly simple to set up remote blackmail-material collection. :(

mortenjorck 2 days ago 1 reply      
There's nothing inherently wrong with a touchscreen, IoT light switch. But the main problem here, apart from using an insecure legacy protocol, is the use of a general-purpose OS like Android instead of an embedded OS.

It's not just this light switch: Android refrigerators, Android ovens, Android washing machines are all using a wildly inappropriate operating system for single-purpose devices. The problem is likely that it's a lot easier to develop for Android than for a proper embedded OS: it's faster, the commodity hardware is easy to procure, licensing fees are minimal to none, and it's easier to hire developers.

The first company to bring to market a more IoT-appropriate, yet accessible combination of operating system and SoC reference designs stands to become a massive player when IoT goes mass-market.

clapinton 2 days ago 0 replies      
Write a script to rhythmically open and close the curtains, as well as turn the lights on and off for the whole floor. Then call OK Go and tell them to bring a drone because you got their next clip idea.
thrillgore 2 days ago 1 reply      
Does anyone have a mirror? It looks like it's been taken down for me.
JJJollyjim 2 days ago 0 replies      
I wonder if, with MITM devices set up on a few consecutive floors, you could make massive pixelart animations on the outside of the building by turning lights on and off...
elif 2 days ago 0 replies      
Eh, it doesn't scare me, because they have the name, credit card, and exclusive control over the lock on the attacker's door. You'd have to commit so many counts of fraud and hacking to attempt getting away with it that the reward just doesn't seem there to me.
smegel 2 days ago 1 reply      
What has this got to do with Android?
justaaron 2 days ago 1 reply      
Why on earth are the two largest and highest-valued technology companies in human history repurposing mainframe multi-user operating systems from the 1970s (extended even further with sandboxed app containers!) for mobile phones!?

It smacks of deliberate incompetence to sell hardware.

IoT on top of this just smacks, again, of deliberate incompetence, to either sell hardware or raise the attack vector profile (the NSA loves you!)

Why on earth is there still no reasonable competition to either Android or iOS?

api 2 days ago 0 replies      
A lot of consumer IoT feels forced. It feels like I am supposed to want these things. What am I? A luddite? It's The Future(tm)! Of course I should want a less reliable, more expensive, shorter life span, more complex security/privacy nightmare in place of a completely reliable long-lived inexpensive device.

Complexity is costly in many, many ways. There is zero justification for adding it to anything unless the payoff is some multiple of the complexity cost being added. I just don't see it here.

For any new tech, I always ask "what super power will this give me?" For much of IoT I can't answer that question. There are a few nice-to-haves but nothing compelling, no must-haves or genuine wows. Then you add in all the unbelievably creepy security and privacy implications and any lukewarm interest goes away. I can't shake the obviously crazy idea that some of this stuff is being pushed because certain people (advertisers, intelligence agencies) want as many sensors out there watching us as possible. Imagine every light switch, thermostat, etc. with an Internet connection and then think about the meta-data correlation capabilities with mobile sensor and location data and other Internet traffic.

We're really talking about a total surveillance society where literally every single thing you do is stored in a database somewhere. Anyone able to correlate your phone's approximate location and/or your web browsing history with, say, light switch data really will know every single time you use the bathroom and for exactly how long.

Do you stop moving and kneel every day at the correct time? Then you're praying to Mecca: you're a Muslim. Do you leave the lights on late? That might say something about your personality profile. Do you work with the lights off? That says something else. Is there ambient sound but no light, and are a male and a female present? They might be having sex. Two men in the bedroom? Gay sex! And that's just the easy low-hanging fruit I can imagine. Throw some theory-agnostic deep learning at it and I can imagine unbelievably spooky stuff that makes this look tame.

But mostly I think the driver is tech industry wishful thinking. Everyone is looking for the next catapult capable of tossing unicorns to billion dollar valuations in 1-2 years.

Mobile has IMHO been a bit of a disappointment. It's been big but not quite as big as everyone predicted. It's failed to displace desktop or achieve "convergence," and the limitations of the UI and the walled garden model have kept "serious" apps off mobile platforms for the most part. The collapse of app stores as a commercial software sales platform with prices spiraling down to $0 and clutter making new apps un-discoverable has further destroyed any incentive to push the boundaries of the platform beyond a "portable dumb terminal."

It's also been an architectural disappointment. It was supposed to be a clean slate where we could escape some of the cruft and bloat of desktop, but we're doing iOS and Android around here and the development experience on both is as bad or worse than Windows, Linux/Qt/GTK, and the web. It's not the promised land by any stretch. We took a lot of bad ideas with us from desktop and then added walled gardens and more resource constraints. Woohoo!

So now everyone's hoping IoT will be the next unicorn flinger. I'm skeptical so far. The Blackberry and the iPhone had immediate killer apps: maps, portable chat/email, portable books, music, and movies, etc. Those are real benefits that are worth the cost and the downsides. They're "super powers." Where's the super power in an internet connected light switch?

wahsd 1 day ago 0 replies      
Sorry, fanboys, but anything Android is always "as bad as you'd think"
JimmaDaRustla 2 days ago 0 replies      
Correction, he stayed in a SHITTY hotel. Not much else to report here.
z3t4 2 days ago 0 replies      
Or you could just kick the door in - and turn the lights on. Then write "HACKED" on the wall with a spray painter. (sarcasm)
Trackers jacquesmattheij.com
866 points by henrik_w  2 days ago   209 comments top 29
Doctor_Fegg 2 days ago 22 replies      
Believe it or not, some stores sympathise with you. They might actually be run by people like you, people who read your story.

They still want to know how you proceed round the store, because that helps them optimise shelf layout, identify hard-to-find items, and so on. So yes, they might use the standard in-store CCTV to observe your journeys, and when they figure that you and people like you always have difficulty finding the eggs (seriously - why is it always so hard to find the eggs?), they'll move the eggs somewhere more prominent, so they can sell more eggs and you can buy what you came to buy.

But that's as far as it goes. They don't follow you out the store, let alone into your bedroom. They don't match anything with third-party data, let alone your mobile phone number. The store just wants to know where to put the eggs.

Unfortunately, your bouncers have simply been told to "hurt them if you have to, I've really had enough of it". So last time they came in, they smashed the CCTV cameras. The store-owner remonstrated with them a bit, but the whole debate around bouncers has become so polarised that there was really no point arguing.


And if this metaphor seems a little obscure, this is why it is irresponsible, populist and ultimately self-defeating for uBlock and chums to block self-hosted Piwik and other such internal analytics tools. Because some of us are trying to do the right thing and your bouncers are still beating us up.

S4M 2 days ago 5 replies      
Thank you Jacques for writing how something that would be completely unacceptable in the physical world is deemed perfectly fine online. It has always bothered me.

Take for example how the FBI wants automatic access to the data on all iPhones through a backdoor. Would it be considered OK if they asked lock makers to make their locks accept a master key, so they could enter anybody's house to further monitor people they suspect of being terrorists?

Of course that would cause an uproar, but the general public being so uneducated with technology, I guess they don't see how the two are related.

terryf 2 days ago 1 reply      
I find it interesting that for most people the problem with ads is the tracking part. For me it's the ads themselves - I don't like seeing them, because I don't really want to buy stuff and think that a large part of the first world's problems (obesity, depression) are caused in part by ads.

In the world of ads, I'm constantly reminded that I don't have the perfect body and that my blender does not look as good as the latest model - I really don't want that, because my blender works fine and looks ok.

So yeah, I block ads and I don't really see why I should feel bad about that, the non-tracking feature is a nice bonus.

So the web will go back to sites that either require payment to enter or are run by people who post stuff out of enthusiasm. Sounds like a nice place to me.

akerro 2 days ago 2 replies      
Did you start visiting shops and places you didn't like just to mislead them? So you have more and more of them tracking you until they run out of energy and money, because their targeting is just wrong?

karmacondon 2 days ago 5 replies      
Where this analogy breaks down is that the people sent to track you are invisible and can't be seen without the aid of special technology. So what you end up telling people is that they are being followed everywhere by invisible ghosts, whose only desire is to change which ads appear in their newspaper. And the reaction of most people is about what you'd expect.

If a store has a policy of "If you come into our store, we'll have employees follow you home" and you don't like that policy, then don't go to that store. It's that simple. It doesn't make sense to go into the store and have your goons beat up their employees. That might mean you can't go to the stores you want to go to, but that's how it goes. It seems as clear online as it does in the physical world.

(tldr without the analogy: The overwhelming majority of people don't care about being tracked online because there are no obvious ill effects. The problem with ad blockers is that it makes more sense to just avoid sites that show ads, but most people don't want to do this because it would exclude their favorite sites.)

cm2187 2 days ago 0 replies      
One can make a very long list of things that would look really really creepy in the physical world.

For instance, I can draw a little cat in my agenda to remind myself to call a particular friend that day. The police will tell me: "What? You have not written that in plain English? You must tell me what it means, and if you don't you will go to prison." (In the UK one can go to jail for refusing to decrypt one's own data.)

I go buy the Telegraph at my local newsstand and the guy will tell me: "Can I see your papers please?" "But I just want to buy a newspaper." "Yes, but I must report to the police every day who reads what; by the way, I must also know which pages you intend to read." (The UK is passing a law that would force all ISPs to record which websites their customers view.)

Etc etc

hoopsho 2 days ago 1 reply      
I got tired of seeing a drill show up on a bunch of sites after I just searched for it on Home Depot... I block ads on my workstation, but with tablets I just could not find an easy way to block trackers for my whole family.

So Metiix Blockade was born out of this frustration... Now I have "bouncers" protecting my whole network for every one of my devices.

I hate when a web page decides what ads and trackers it wants to pull down from the Internet. With Blockade, I have taken back control of that process and I get to dictate when and where I want to provide my information.

I love feeling like I have the real internet back. No more of these ads and trackers taking over every place I go.

wouterinho 2 days ago 4 replies      
It is funny how things change when you use the physical world metaphor. There was a campaign recently by the Dutch regulatory agency that made people aware of the implications of allowing permissions in "free" apps.

They made an (anecdotal) video by promising a free cup of coffee in exchange for your contact list on your phone:

https://www.youtube.com/watch?v=AYXM56YJWSo (in Dutch, unfortunately)

buro9 2 days ago 1 reply      
Really well put.

I've been operating browser separation (Google in Chrome, social in Chrome incognito, and everything else in a locked-down privacy mode only Firefox - all with uBlock) for a while, and also use anonymising VPNs for anything I really don't trust, and my own VPN with streisand and Dnsmasq (with a hosts very similar to https://github.com/StevenBlack/hosts/ ).

On my mobile every link I click in any app I open in Dolphin Zero (still on that DNS blocking VPN - which blocks all trackers in apps too), and I only keep apps I actually use and trust the publishers of on my device.

It feels like a chore (manually copying links from one browser to another depending on trust level), I wonder whether it's worth it sometimes... but then I occasionally get to see someone else's experience of the web and it's so incredibly and perniciously been invaded by advertisers that I am glad I do all of this.

It's become so bad that I even had to change my uBlock origin rules for my online bank ( https://banking.smile.co.uk/SmileWeb/start.do ) to block even first-party scripts... because they use Adobe, Omniture and Tealium tools to measure stuff and for A/B testing of their online banking features.

I now block absolutely everything and tell others to do so too, but unfortunately there is collateral damage.

The very sites I care about may not require advertising revenue, but do value tracking data that helps them spot errors, debug things, find out what screen resolutions they should cater for. Their analytics, client-side debugging, this is all now rendered useless to them.

PS: If you happen to work on Firefox for Android, please enable browser.privatebrowsing.autostart to be configured via about:config. I would love to default enable private browsing in a UA capable of running uBlock on my mobile.
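For anyone wanting to replicate the Dnsmasq-plus-hosts setup described above, a minimal sketch of the config; the listed domains are illustrative examples, not a vetted blocklist:

```conf
# /etc/dnsmasq.conf -- answer 0.0.0.0 for known tracker domains
# (and all their subdomains), so clients on the network can't reach them
address=/doubleclick.net/0.0.0.0
address=/google-analytics.com/0.0.0.0
address=/scorecardresearch.com/0.0.0.0

# Or pull in a curated hosts-format blocklist (e.g. StevenBlack/hosts,
# saved locally) in addition to /etc/hosts:
addn-hosts=/etc/blocked-hosts
```

Because this happens at the DNS layer, it covers every device that uses the network's resolver, including apps on tablets and phones where browser extensions can't run.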

raverbashing 2 days ago 6 replies      
I think that better than an ad-blocking solution would be to feed all those trackers with fake information

Oh, you want location data? Here it is: this morning I've been all over the planet. Want to know all the websites I'm visiting? Sure, here's a million of them.

The fact that they keep trying to sell you the thermometer after you no longer care kind of points out that they're being had, and I'm all for helping that happen.

tux3 2 days ago 1 reply      
It's funny how much more relatable the whole privacy debate becomes when transposed to the physical world.

Some will disagree, but I think the comparison was spot on.

austinjp 2 days ago 1 reply      
Add the part where one of the bouncers starts their own targeting company and does deals with some of the other targeters.
return0 2 days ago 0 replies      
The solution would be decentralization. Tracking is a real threat when we only have one search engine, two social networks, one retailer and a single ad network. The web has created global-scale monopolies faster than ever before, and it seems like the centralization of VC capital and IT talent is permanent. Tracking becomes less of a problem when they are unable to follow you everywhere.
harryf 2 days ago 0 replies      
This reminds me of the surveillance camera man - http://youtu.be/jzysxHGZCAU
brlewis 2 days ago 0 replies      
I just disabled third-party cookies on my phone and computer. I'll see if it causes me any problems.

I already have adblock plus on my computer.

elorant 2 days ago 0 replies      
Once upon a time I used to work for a multinational company that did retail audits. They had developed a program which adjusted the ads a selected group of people were watching on their TVs. Then they provided them with special debit cards and monitored the correlation between viewed ads and purchases of goods in supermarkets. All that around 1999. And that was just one of a multitude of technologies they used. They also had a technology where a camera traced face movement to identify which items on the shelves attracted more attention, by gender and age. I can't even fathom what they'll be using these days. Profiling is the holy grail of the marketing world. At least online we have the option of ad blocking. Offline we're helpless.
atirip 2 days ago 0 replies      
I have blocked as many trackers as I can. Having said that, I know why they exist. See, when John produces shampoo, he needs to sell it. The only way is to advertise, one way or another, because without advertising, without public knowledge, the shampoo does not sell. Now John wants to spend as little on ads as possible, which makes sense, since we, the customers, end up paying for that too. To spend less, John needs to show ads only to the core group of buyers, and for that he needs to know who you are. The tracker does that. Shampoo costs 5 bucks. How much are you willing to pay for it with advertising costs included? 6? 16? Really, are you ready to eat a 200% advertising markup? I have no idea, frankly.
ap22213 2 days ago 0 replies      
Reminds me of Surveillance Camera Man.

theshadowmonkey 2 days ago 1 reply      
What you probably missed is that if I am a big enough retailer, I can pay off your bouncers to still follow you from a distance and show up on one side of your newspaper, since I have more money than you; your bouncers will work for me while they still pretend they are protecting you. Just an example:


blfr 2 days ago 1 reply      
I'm all for blocking ads, tracking or non-tracking, analytics, etc. and wish swift bankruptcy on the propaganda-advertising industrial complex but this is silly. The analogy between physical world and the Internet is not valid or insightful, just like it isn't in case of piracy/stealing. Collecting info on what you read online is nothing like breaking into your house.
superuser2 2 days ago 1 reply      
In small towns and pre-industrialization, stores had a tracking regime that puts Silicon Valley to shame: the shopkeepers knew you.

They didn't need credit cards or scores because they could identify your store credit account by your face, and your creditworthiness by your family's reputation.

If you were buying something out of the ordinary, you better believe your parents/spouse/church/friends/entire town would hear about it from the shopkeeper, who knew them all as well as he knew you.

A juicy conversation on a party line telephone shared with neighbors, interesting metadata on the postal mail also handled by people who know you and your business, a sighting in public with someone not your spouse, a visitor at an odd time of night, a strange car in your driveway - all these things could quickly become a public affair.

Technology is not bringing us a particularly new invasion, but it is helping at least that side of the "tight-knit communities" of old scale to modern population size and density. I think this is a horrific development, and it's certainly quantitatively unprecedented, but not qualitatively.

jalk 2 days ago 1 reply      
Are there any publicly known examples of hackers who have used tracking data for malicious purposes? As nasty as it is to have a large amount of your web history stored in a profile, I don't see a clear path to crime (perhaps extortion). Or did I not get the burglar metaphor?
esaym 2 days ago 0 replies      
This might help getting rid of those that follow you: https://github.com/jakeogh/dnsgate
pjuu 2 days ago 0 replies      
WOW! This is a great way to put it. Well done.
t0mislav 2 days ago 1 reply      
I didn't quite understand the bouncers. Are those some tools for private browsing?
urbushey 2 days ago 1 reply      
What's the analogue for "clearing your cookies"?
shostack 2 days ago 1 reply      
Disclaimer: I do digital media and online marketing for a living.

I used to help lead the paid search group at a top search agency and had a real bird's-eye view of where things were moving in that role.

Everything is moving towards audiences. While keywords and search queries are signals that highlight intent, ultimately the audience piece is what the advertiser cares about--that's just one component of it. This is why FB, Google and everyone else under the sun wants companies to upload their CRM data, and then they use that for retargeting (1st party, or 1P data), or building lookalikes.

Then you have Adobe and other companies trying to get companies to sell this data on a marketplace as 2nd party (2P) audience data for retargeting.

There are also companies like LiveRamp and others that try to get companies with login data to provide cookie matches against hashed email addresses to keep cookies fresh and prevent them from just being deleted once and forever. I've been approached by these companies, and always turned them down because it just felt dirty.
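The hashed-email matching described above can be sketched roughly as follows; the exact normalization rules vary by vendor, so the strip-and-lowercase step here is an assumption:

```python
import hashlib

def normalized_email_hash(email: str) -> str:
    # Normalize first so "Foo@Example.COM" and "foo@example.com" produce the
    # same digest, then hash, so the raw address itself is never exchanged.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Both parties compute digests independently and compare digests, not
# addresses; a match links their records for the same person.
```

The privacy catch is that hashing is no real protection here: anyone holding the same email address computes the same digest, which is exactly what makes the match (and the tracking) work.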

That said, this thread seems to draw the usual crowd of everyone who hates anything related to advertising. I'm not going to try to change your opinions because I know that is not going to happen. However the reason all of this data gets shared is because it allows better targeting which leads to more relevant ads, which leads to more purchases.

Think about that for a second.

People are purchasing more when the content is more relevant to them. Nobody is holding a gun to their head making them take out their wallets and hit "Purchase." They are saying "this product/service is relevant to me and I want to buy it."

In that manner, advertising is helping people who want to purchase said thing. The issue comes in with the fact that because targeting isn't perfect (and I doubt anyone wants the level of tracking needed to make it so), and because a lot of advertising is building awareness (not simply retargeting and reminding you to buy something you initially displayed interest in), it becomes intrusive in a manner people dislike.

Unfortunately, because of the data available, there are still plenty of people who say "hmmm, I didn't know about this, but it seems interesting, I'll check it out" and then purchase. So from an advertiser's standpoint, looking at a spreadsheet of data, they see "this audience segment had a conversion rate of X and an ROI of Y" and they keep doing it if it is profitable, because that is what they are optimizing for.

I actually enjoyed Jacques' piece, and I do think there is some very questionable stuff going on in the ad space. The example of a random app tracking and selling data totally unrelated to said app is a great one. Companies are finding that they can monetize their data without visibly degrading the user experience by showing ads, and still get paid at a CPM rate for it, so expect to see more of that.

At the end of the day, I say all of this to highlight the fact that is often left out of pieces like this: things are the way they are now because it works. Advertisers wouldn't be doing it if it didn't work, which means consumers are voting with their wallets in large enough numbers to keep fueling this behavior. In Jacques' restaurant example, he was put off by the restaurant special promoted on his phone. I'd probably react the same way, because I've developed an aversion to the more invasive aspects of my industry and I'm overly sensitive to it now. But Joe Consumer? They see a relevant deal that will save them money, say "hmm, I like what they are offering, and it is a fair price, I guess that just made my decision easier", and they go eat at the restaurant. So for every Jacques who sees the ad and keeps walking, the restaurant gets, for the pittance it pays, enough Joes in the door to make it profitable, and it keeps doing it.

The positive feedback loop created by more targeting leading to higher profits means that it is working and we'll see more of it until the feedback loop is broken. Ad blockers are one avenue towards attempting to break it, and legislation is another. The question is whether pulling on those two levers will be enough to reduce the efficacy of the feedback loop to the point where advertisers stop doing this.

And a final note to those who might respond to my post: please note that I'm not trying to paint an overly rosy picture of what advertising does, or in any way defend its more overreaching aspects. I think people should own their data and be entitled to control how it is used. That is not the reality of the world we live in, though, so I'm simply making observations about how it impacts the various parties involved beyond just the protagonist of Jacques' story. I think there are "cleaner" ways of doing advertising that rely on a strong creative message, or viral ads that get shared because they are great content. But at the end of the day, the media person's job is to take that ad/content and get it in front of the audience they are targeting.

kp25 2 days ago 0 replies      
For the first few seconds, I thought you were talking about real people tracking you. Then I realized. Great writeup. So true.
9248 2 days ago 1 reply      
I find it infuriating how much traction this whole anti-ad/tracking war is getting.

People mention there's no choice anymore. Wrong! It's still there, just like it was 10 or 15 years ago. Stop sharing your personal information online and the whole tracking thing doesn't matter anymore.

This analogy seems completely flawed imho. Nobody can get inside my home, or force my door or any of that nonsense, unless I specifically allow them when they ask!

I fail to understand how all these trackers could read my browsing history without me installing <popular plugin> and allowing it access to my browser. Or how they would read my contact list from my Android phone, or the one from my Thunderbird. Through thin air?

Nobody took the choice from us, we just happened to open wide our front and back doors, and then complain that random people come in and look through our stuff.

Add Reactions to Pull Requests, Issues, and Comments github.com
733 points by WillAbides  3 days ago   226 comments top 56
mrharrison 3 days ago 17 replies      
The -1 or thumbs-down reaction, I think, is a mistake. Downvotes usually aren't that constructive, because most of the time they are used as retaliation against a specific user instead of as constructive criticism: if someone downvotes you, you tend to downvote them. At the least, downvoting should be a privilege, like it is on SO and HN. http://stackoverflow.com/help/privileges/vote-down
Sir_Cmpwn 3 days ago 1 reply      
A missing feature here is sorting issues by public support. An example is FontAwesome, which explicitly asks users to leave a +1 comment on issues they support. You can then get a pretty good idea of the most desired features by sorting the issues by most commented.
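Ranking issues by support is simple enough to sketch once a count per issue is available. The records below are invented for illustration; in practice they would come from an issue tracker's API.

```python
# Hypothetical issue records with a "+1" tally per issue.
issues = [
    {"title": "Dark mode", "plus_ones": 412},
    {"title": "Fix crash on save", "plus_ones": 97},
    {"title": "SVG icon support", "plus_ones": 850},
]

# Rank most-supported first, giving a rough read of community priorities.
by_support = sorted(issues, key=lambda i: i["plus_ones"], reverse=True)
print([i["title"] for i in by_support])
# → ['SVG icon support', 'Dark mode', 'Fix crash on save']
```

Sorting by comment count, as FontAwesome's workaround does, is the same idea with a noisier signal, since comments mix votes with discussion.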



Would also be nice to see these reactions on the issue list so you can get a feel for the issues at a glance without digging deep into each one.

bsimpson 3 days ago 1 reply      
Looks like Dear GitHub[1] is having a rather quick impact on the product; first templates[2], now this:

[1] https://github.com/dear-github/dear-github

[2] https://github.com/blog/2111-issue-and-pull-request-template...

southpolesteve 3 days ago 0 replies      
This makes me think Github doesn't get it. Emoji comments are often used because there is no better way to interact with an issue/PR. What we need are better issue management tools. Polls, voting, triage tools, etc. Instead we got an easier way to post emojis. Feels more like a "look we have reactions like slack!!!" gimmick rather than a properly designed response to user requests for features.
jkire 3 days ago 4 replies      
I wonder if this is too featureful. What is the difference between +1, heart, and hooray? Having just +1 and -1 is unambiguous and probably covers the vast majority of use cases? Perhaps not, but I'd be very interested to know the reasoning behind choosing between "unambiguous" and "expressive".
richerlariviere 3 days ago 0 replies      
I think it is definitely the end of the +1 era, folks! Thanks, GitHub, for listening to community feature requests. You should allow more icons, like Slack currently does.
chrismonsanto 3 days ago 0 replies      
Can you downvote replies in threads that are locked? Can collaborators delete these reactions like we would with comments?

> Have feedback on this post? Let know on Twitter.

Not everyone uses Twitter. It would be awesome to give feedback using the one account I'm guaranteed to have: a GitHub account. Otherwise I have to ask my question on HN...

city41 3 days ago 2 replies      
I think people will still write +1 comments because they won't notice this new feature, at least initially. It'd be nice if Github just converted "+1" comments into reactions.
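Detecting which comments are candidates for conversion is mostly pattern matching on vote-only bodies. A rough sketch follows; the set of patterns covered is an assumption for illustration, and real comments are messier ("+1!", "me too", etc.).

```python
import re

# Matches comment bodies that are nothing but a vote: "+1", "-1", the emoji
# shortcodes ":+1:"/":-1:", or the raw thumbs emoji, with optional whitespace.
VOTE_ONLY = re.compile(
    "^\\s*(\\+1|-1|:\\+1:|:-1:|\N{THUMBS UP SIGN}|\N{THUMBS DOWN SIGN})\\s*$"
)

def as_reaction(comment_body: str):
    """Return the reaction a vote-only comment maps to, or None if the
    comment carries any other content and should be left alone."""
    m = VOTE_ONLY.match(comment_body)
    if not m:
        return None
    token = m.group(1)
    return "+1" if token in ("+1", ":+1:", "\N{THUMBS UP SIGN}") else "-1"

print(as_reaction("  +1 "))          # → +1
print(as_reaction("This breaks X"))  # → None
```

The conservative matching is deliberate: converting a comment that also contains prose would silently delete content, so anything ambiguous is skipped.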
ma138 3 days ago 0 replies      
Awesome move by GitHub. ZenHub[1] will be phasing out our +1 button now that it's no longer needed. Feels good to focus! We're excited to use this reactions data as part of our reporting suite, please keep the improvements coming!

[1] https://www.zenhub.io/

nmstoker 2 days ago 0 replies      
My apologies if I missed a comment, but I'm surprised how few people support "giving it a go" and then deciding if it's good or bad. There's a multitude of user approaches, and lots of people are very touchy about their way of doing things (some extra so, as this relates to their profession). Whilst not wishing to appear a fanboy, clearly GitHub have put a decent amount of thought into this; they've got a fairly good track record and they're responsive. If it's poor, they'll reassess it.
bengotow 3 days ago 1 reply      
Finally. Let's just hope it doesn't email you when someone leaves a +1 reaction.
gsmethells 3 days ago 1 reply      
Wow! GitHub is being influenced by GitLab (who released this feature recently in GitLab 8.4).
choward 3 days ago 1 reply      
Am I missing something or is there still no way to +1 issues? All I see are ways to react to comments. Whenever I feel the urge to "+1" something it's for the issue, not a specific comment. Can someone explain how to add a reaction to an issue?
mrmondo 3 days ago 1 reply      
Ah yes, following in the steps of GitLab, which has had this for a while. The thumbs up/down and voting types are useful; everything else is a distraction IMO.
donretag 3 days ago 2 replies      
"So go ahead, :+1: or :tada: to your :heart:'s content."

Or please don't. Part of the problem with the +1s is that they add noise. How are reactions going to cut down on the noise? Telling people to go ahead and +1 an issue (increasing noise) is the opposite of what the "Dear GitHub" maintainers want.

Many projects do not use +1 or any other voting scheme to elicit priority from the general public. +1 comments and reactions provide little value. I have seen GitHub issues where people +1 already-closed issues because they do not bother reading.

lettergram 3 days ago 0 replies      
Getting dangerously close to that Facebook patent[1]...

[1] http://www.freepatentsonline.com/8918339.html

dpflan 3 days ago 7 replies      
I like the idea of adding more expressiveness: pictorially capturing sometimes-fleeting moments of emotion, or accurately representing an emotional state.

These are the following reactions:

  1. +1
  2. -1
  3. smile
  4. thinking_face
  5. heart
  6. tada
Do they capture the necessary expressiveness for the context? Facebook's reactions cover more emotions, but FB is trying to support reaction to anything that can be posted.

gjreda 3 days ago 1 reply      
This is a welcome addition. I've run into bugs in projects before and wanted to "+1" a thread, but it always felt like spamming the maintainers.

It'd be cool if they added a way to search through your list of reactions. This would let you effectively comment on an issue in an open-source project while simultaneously bookmarking it, so that you can go back and commit a fix when you have a free moment.

Mikushi 3 days ago 1 reply      
April 1st isn't here yet. I get the idea, but seriously, emojis...
looneysquash 3 days ago 0 replies      
I assume this is inspired by gitlab's similar feature?
ruffrey 3 days ago 2 replies      
Why is the thumbs up a white hand, and thumbs down is a yellow hand?!
silverwind 2 days ago 0 replies      
The implementation feels a bit rushed. Here are my suggestions:

- Don't allow a user to rate his own posts.

- Don't allow a user to issue contradicting votes like +1 and -1 at the same time.

- Use image emoji, like everywhere else on the site, for compatibility.
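The first two suggestions amount to simple invariants a reaction store could enforce. Here is a toy sketch; the data model is invented for illustration and is not GitHub's.

```python
class Reactions:
    """Toy reaction store enforcing two of the invariants suggested above:
    no rating your own post, and no contradictory +1/-1 from one user."""

    EXCLUSIVE = {"+1": "-1", "-1": "+1"}  # mutually exclusive pairs

    def __init__(self, post_author: str):
        self.post_author = post_author
        self.votes = {}  # user -> set of reactions

    def add(self, user: str, reaction: str) -> bool:
        if user == self.post_author:
            return False  # users may not rate their own posts
        current = self.votes.setdefault(user, set())
        if self.EXCLUSIVE.get(reaction) in current:
            return False  # +1 and -1 from the same user would contradict
        current.add(reaction)
        return True

r = Reactions(post_author="alice")
print(r.add("alice", "+1"))  # → False (own post)
print(r.add("bob", "+1"))    # → True
print(r.add("bob", "-1"))    # → False (contradicts bob's +1)
```

Rejecting the bad write at submission time, rather than cleaning up after the fact, is the cheap place to enforce rules like these.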

bhaumik 3 days ago 1 reply      
First* Slack, then Facebook, and now GitHub. Looks like reactions are replacing (or expanding on..) the unary like/upvote/heart expression for tech products.

*Or at least the first time I've seen them used as an important feature.

Animats 3 days ago 0 replies      
I'd keep downvotes, but lose the emoji.
jwilk 3 days ago 5 replies      
Um. What does :+1: mean when applied to an issue? "I like this bug"?!
rocky1138 3 days ago 0 replies      
Emojis are terrible, but they're better than "+1".
yuribit 3 days ago 0 replies      
Is there a way to sort by "reactions"? Otherwise I think this feature is useless. I would have preferred more detailed issues over ugly emojis.
Illniyar 2 days ago 1 reply      
While I applaud finally adding the vote-up (and even vote-down) features, this feature looks a bit overdesigned.

I don't really get how I should "love" an issue, or why "this issue makes me happy". Or the relevance of a "thinking face". The UI would be simpler with only one or two icons.

At least there's no "this issue makes me sad/angry" buttons.

knd775 3 days ago 0 replies      
Well, I guess this at least sorta solves the "+1" issues.
voaie 3 days ago 0 replies      
Well, I think a voting poll is more practical than manually counting the upvotes/downvotes on every comment. I don't know how often maintainers will come back to see how an issue is going and which comments are popular. Also, sorting comments is no fun because of duplicated content.
hiphopyo 3 days ago 0 replies      
Speaking of new features: what I'd like to see is the ability to remove items from my public profile / activities list. Often I make mistakes. Often I do stuff I don't want the public to see, and I'd rather not have to email GitHub support asking them to remove things manually all the time.
odbol_ 2 days ago 0 replies      
The great thing about adding emoji Reactions to all our social media posts is that now AIs can finally learn human emotion. I'm sure Google is already training theirs on how different keywords make people feel.
peterwaller 2 days ago 0 replies      
My emojis look weird and not in keeping with the style of emojis elsewhere on the site: http://imgur.com/rB9BdEr
wilg 3 days ago 0 replies      
Stoked about this feature! Can't wait until these are available in the API!
PeterStuer 2 days ago 1 reply      
Interesting cultural bias. Why do people not question unsubstantiated up-votes, yet feel down-voters shouldn't be let off the hook without a full argument?
thejameskyle 3 days ago 0 replies      
SnaKeZ 3 days ago 1 reply      
Could they convert the existing "+1" comments into reactions?
mkobit 3 days ago 0 replies      
I don't think these necessarily cover all of the responses that can be made, but I think it is a great start for getting simple feedback like this. Like other users mentioned, it would be awesome to be able to sort or perform some kind of action based on the quantity of reactions.

I wonder if they will allow for repository owners to select which reactions they will allow? I think that would help with the limited selection but still allow owners to select what they consider useful to them.

supernintendo 2 days ago 1 reply      
That's cool. But what I'd really like is the ability to star comments so I can revisit them later.
bigethan 2 days ago 0 replies      
It'd be nice if they added a little warning when submitting a comment that is only a '+1' or '-1'; otherwise discoverability is gonna take a while, maybe? Also an option for repository owners to convert those comments into reactions.
franciscop 3 days ago 0 replies      
Awesome! Just a nitpick: I think the action for adding a reaction should be near where the reactions are shown. So either move the reaction icon to the bottom left, or show the reactions on the right (where milestone, tags, etc. are).

This way you get better feedback.

purpleidea 3 days ago 0 replies      
Ironically, you can't respond with a :reaction: to their "reactions" post.
jmspring 2 days ago 0 replies      
I find the idea of adding social responses to github distasteful. It's my own opinion.

That said, it'd be interesting to see a breakdown by age and background in terms of supporting or not supporting this addition.

vaibhavkul 2 days ago 0 replies      
How about a stop (hand palm) [1] instead of -1?

[1] http://emojipedia.org/raised-hand/

jbrooksuk 2 days ago 0 replies      
A step in the right direction, nice!

One thing I don't like is that you're able to add multiple reactions to the same item; it sends mixed signals.

dj_doh 3 days ago 0 replies      
-1 or thumbs down reaction should be avoided.
jorgecurio 2 days ago 0 replies      
I can just imagine this being abused by the less socially aware on the autism spectrum. I definitely think it's wrong and toxic to add mechanisms that support non-constructive criticism or praise and lead to emoticon circle jerks.
maaarghk 3 days ago 1 reply      
auscompgeek 2 days ago 0 replies      
Kinda makes you wish you could :+1: the blog post.
edelans 2 days ago 0 replies      
Funny how I wanted to +1 this blog post =)
zuxfer 2 days ago 0 replies      
Also, it would be good to convert any comment or message that is just "+1" into a reaction.

That would clean up the existing issue threads.

SnaKeZ 3 days ago 0 replies      
End of "+1" era?
cpr 3 days ago 0 replies      
Nice to see them moving quickly on some major OSS community requests.
fiatjaf 3 days ago 0 replies      
GitHub: social coding
dkopi 3 days ago 0 replies      
The best way to +1 an issue is with a pull request.
TomasEkeli 3 days ago 0 replies      
Wow, Eric Elliott (@_ericelliott) just asked for this on twitter - and now it happened. He must be a witch.
I made my own clear plastic tooth aligners and they worked amosdudley.com
795 points by dezork  1 day ago   110 comments top 31
rl3 23 hours ago 4 replies      
Not to be a downer, but was any thought given to the safety of the plastic(s) used?

This is something that's in your mouth a lot and constantly exposed to saliva.

The Dimension 1200es mentioned doesn't appear to be specific to medical applications.[0] The product page lists the only compatible thermoplastic being ABSplus-P430. The MSDS for that basically says the stuff is dangerous in molten form, and beyond that there's very little data.[1] The same company makes "Dental and Bio-Compatible" materials for use with their other products, and these appear to have considerably more safety data.[2]

>The aligner steps have been printed, in addition to a riser that I added in order to make sure the vacuum forming plastic (sourced from ebay) ...

As another commenter pointed out, the vacuum forming plastic is probably the primary concern because the 3D printer was just used to create the molds. The specific type of vacuum plastic isn't mentioned.

Regardless, very neat project.

[0] http://www.stratasys.com/3d-printers/design-series/dimension...

[1] http://www.stratasys.com/~/media/Main/Files/SDS/P430_ABS_M30...

[2] http://www.stratasys.com/materials/material-safety-data-shee...

jeffchuber 23 hours ago 1 reply      
Awesome work!

The animation definitely seems the most difficult (and subjective), but also the most cool! Body hacking via computed geometry!

Invisalign (align technology) uses almost the same workflow. Market cap $5.89B.

If you could move the workflow over to something based on WebGL / three.js - you could make this accessible to dentists in developing countries. Could be an awesome open source project.

I think "allowing" it to be used in the US would open yourself up to too much liability though :(

loocsinus 21 hours ago 1 reply      
It is smart that you designed the retainers based on the maximum tolerance of tooth movement, citing a textbook. I suggest you take an X-ray to make sure no root resorption has occurred. Also, for those who want to imitate this: measure the length of the teeth and compare it with the arch length to make sure the teeth can actually "fit" into the arch. I am a dental student.
percept 1 day ago 2 replies      
Now that is awesome--those things aren't cheap.

I'm going to send this to my dentist (who's cool enough to appreciate it).

forgotpasswd3x 1 day ago 1 reply      
This is really amazing, man. It's honestly the first 3D printing application I've seen that I can see quickly improving thousands of lives. Just to think of all the people who right now can't afford this procedure, that soon will be able to... it's just really wonderful.
valine 23 hours ago 0 replies      
He scans his teeth, animates how he wants them to move in blender, and then 3D prints each frame. That is absolutely brilliant.
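The per-frame planning described here boils down to capping how far any tooth moves between consecutive aligners. A toy sketch of that step calculation follows; the 0.25 mm per-step ceiling is an illustrative number, not the textbook figure the author actually used.

```python
import math

MAX_MOVE_PER_STEP_MM = 0.25  # illustrative safety ceiling, not a clinical value

def plan_steps(total_moves_mm):
    """Given each tooth's total planned travel in mm, return how many
    aligner 'frames' are needed so no tooth exceeds the per-step ceiling."""
    worst = max(total_moves_mm)
    return math.ceil(worst / MAX_MOVE_PER_STEP_MM)

def positions(start_mm, end_mm, steps):
    """Linearly interpolate one tooth's position across the aligner series,
    the way keyframe animation interpolates between poses."""
    return [start_mm + (end_mm - start_mm) * i / steps for i in range(steps + 1)]

steps = plan_steps([1.1, 0.4, 0.7])  # worst tooth travels 1.1 mm
print(steps)                                       # → 5
print(round(positions(0.0, 1.1, steps)[1], 3))     # first increment: 0.22 mm
```

The tooth needing the most travel sets the frame count, and every other tooth then moves by comfortably less than the ceiling per step.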
minsight 1 day ago 1 reply      
This is just amazing. I was waiting for how it might go horribly wrong, but the guy's mouth looks great.
wslh 13 hours ago 1 reply      
There is an important issue missing in the article (beyond the warning notice): the occlusion. The modification of the dental structure requires a whole functional analysis that goes beyond the teeth.

Anyway, the future is promising, and these issues could be solved by taking all the factors into account.

daveguy 13 hours ago 0 replies      
It looks like the author took the safety of the plastic into account in creating these, which is a good thing. Maybe more so than dentists. You know "silver" fillings, aka dental amalgam? They are 50% mercury by weight and are still being used, supposedly safe because it is inhalation of mercury that is poisonous. Removal of those fillings with a drill can be dangerous. When some guy told me about this and said it could be the next asbestos/mesothelioma, I thought "sure! That sounds like conspiracy crap!" Then I looked it up on the FDA site like he suggested:


Anti-vaxxers are idiots and it is obvious that vaccines don't cause autism (original study was a fraud). The health benefit of vaccines is as undeniable as the lack of correlation to autism.

That said, dental amalgam is a chunk of mercury in your mouth. The FDA says it is safe for people over 6 years old, but I personally will stay away from it for any future dental work.

rashkov 1 day ago 1 reply      
I came across an article here on HN about mail-order Invisalign companies at a fraction of the price. I'm about half way through and very happy with the progress so far. Just thought I'd give a heads-up if anyone is interested
CodeWriter23 1 day ago 1 reply      
The work he did with the impressions, to me, suggests he has experience as / knows someone who is a dental technician. If he didn't, wow, he independently figured out some of their key techniques.

My grandfather used to make dentures, and that casting in the 4th photo looks exactly like the impressions my GF would make. They also used these hinges so they could mate the upper to the lower, so they could adjust any collisions that occurred while opening and closing the mouth.

teekert 21 hours ago 2 replies      
This also seems to have whitened his teeth at the same time ;), typical "before, after".

But on a serious note: I had braces, and after they were removed a wire was placed behind my teeth to keep them in place. It didn't stick to one of the ceramic teeth I had from an accident in my youth. The wire was removed, and after some months my front two teeth were as far apart as ever. OK, the overbite didn't return, but things will move back at least to some degree over time.

As mentioned before, I myself would never just put any plastic material in my mouth with all the bad things known about plasticisers, bpa/bps, etc.

hellofunk 15 hours ago 2 replies      
This is cool but I can't say I agree with actually doing it. Just because you can do something doesn't mean you should, particularly in matters of health. If you don't have the requisite experience and knowledge and training, it seems risky to go about something like this on your own.
racecar789 22 hours ago 0 replies      
Another option....have a dentist bind composite material to the couple teeth out of alignment.

Had two teeth done for under $500 10 years ago.

It's a stop gap until braces are an option financially.

zump 1 day ago 0 replies      
Now THIS is a hack.
vaadu 11 hours ago 0 replies      
How soon before the FDA says this is illegal or the medical industrial complex lobbies congress to make this illegal?
Tepix 11 hours ago 0 replies      
I love this project - well done, and the result speaks for itself! It's unfortunate that you were forced to go this somewhat dangerous route due to money. In some countries dental care like that would be paid for by the health insurance.
justinclift 10 hours ago 0 replies      
Cool, that's an idea I'd had in the back of my head for some time too. Good to see someone's gone ahead and done it, and proven the concept. :D
scep12 1 day ago 0 replies      
Awesome stuff Amos! It's always nice to see creativity and persistence rewarded with successful results. I really enjoy reading these types of posts on HN.
muniri 22 hours ago 0 replies      
This is awesome! Definitely not the safest thing to do, but I'm glad that they worked.
vram22 19 hours ago 3 replies      
Interesting article. Waterpik is a related product (as in, for teeth and gums) that a dentist recommended. Anyone have experience using it - pros, cons?
burgessaccount 1 day ago 0 replies      
This is awesome! Thanks for the detailed description.
mentos 1 day ago 1 reply      
Are you considering starting a business out of this?
pcurve 1 day ago 0 replies      
this is pretty amazing and daring.

I guess this would work better with those with gaps or very mildly crowded teeth.

Often crowded teeth result in pulling teeth to make room.

yogipatel 4 hours ago 0 replies      
I'm not trying to downplay how much the hacker/geek in me loves this; however, as a former* dental student, I would highly suggest not trying to pull this off on your own.

First, teeth and their movement are more complicated than they might first seem. You have to think about the entire masticatory apparatus, for example:

There's more root than crown; how does the root move in relation to the rest of the tooth? Root resorption is a common problem in orthodontic treatment.

Is there / will there be enough bone surrounding the tooth to support the intended movement?

How will the patient's occlusion (how the teeth fit together) be affected? Part of the Invisalign process is to take a bite registration that shows the upper and lower teeth in relation to each other. This is important, and ignoring it can potentially lead to other complications:

- stress fractures

- supraeruption of opposite tooth

- TMJ pain

Does the patient display any parafunctional habits that will affect the new tooth positions? For example, do they grind, clench, or have abnormal chewing patterns?

Many Invisalign techniques require the placement of anchors, holds, and various other structures attached to the teeth themselves. They allow for more complex movement than the insert itself would be able to provide.

Adjustments are often required mid-treatment. Not everybody's anatomy and biology are exactly the same, so you have to adjust accordingly.

Now, does every general dentist take this into account 100% of the time? No, but they're at least trained to recognize these situations and compensate for them.

That said, many simple patients don't require any more thought than the OP put in. It's a good thing he looked in a textbook and realized that there's a limit to how much you should try to move a tooth at each step before you're likely to run into problems. And if you do run into problems, do you think a professional is going to come anywhere near your case?

A few issues I have with his technique:

Unless he poured his stone model immediately after taking the impression, it's likely there was a decent loss in accuracy. Alginate is very dimensionally precise, but only for about 30 minutes. The material most dentists use, PVS, is dimensionally stable for much, much longer (not to mention digital impressions).

Vertical resolution of the 3D print does matter: you might be moving teeth in only two dimensions, but you're applying the aligner over three.

Again, I think it is awesome that someone gave this a shot, and did a fairly good job as well. I'm all for driving the cost of these types of treatments down, as well as promoting a more hacky/open approach to various treatments. Just know there's more than meets the eye.

* I decided to go back to tech; there's too little collaboration in dentistry for me to make a career out of it.

hamburglar 23 hours ago 0 replies      
Having recently done invisalign, I think this is brilliant, but I would have had a really hard time sticking with it through the pain. I would worry too much that I was doing damage. My case was quite a bit more severe, however, so maybe it's less of a big deal if the movements are minor.
semerda 19 hours ago 0 replies      
Wow, this is awesome! Thank you for sharing. Retainers post-Invisalign cost between $400 and $900 for one set - a total ripoff. This looks like a far cheaper alternative.
ck2 16 hours ago 0 replies      
This is definitely for the brave, not me.

Not sure what I would do if we didn't have a dental school.

When I go there I am always surprised to find people who actually have insurance but still go there despite all the hassle.

transfire 1 day ago 3 replies      
Can you chew food with the aligner on?
peleroberts 14 hours ago 0 replies      
Direct leak into your gums..
brbsix 15 hours ago 0 replies      
Orthodontics is a field known for its protectionism. It'd be pretty foolish but I wouldn't be surprised if you receive a cease and desist.
NSA data will soon routinely be used for domestic policing washingtonpost.com
486 points by Jarred  3 days ago   166 comments top 29
fweespee_ch 3 days ago 8 replies      

> Perhaps the most chilling quote of the Soviet era came from Lavrentiy Beria, Stalin's head of the secret police, who bragged, "Show me the man, and I will find you the crime." Surely that never could be the case in America; we're committed to the rule of law and have the fairest justice system in the world.

> This should make everyone fearful. Silverglate declares that federal prosecutors don't care about guilt or innocence. Instead, many subscribe to a "win at all costs" mentality, and there is little to stop them.

> The very expansiveness of federal law turns nearly everyone into lawbreakers. Like the poor Soviet citizen who, on average, broke about three laws a day, a typical American will unwittingly break federal law several times daily. Many go to prison for things that historically never have been seen as criminal.



> John Baker, a retired Louisiana State University law professor, made a similar comment to the Wall Street Journal: "There is no one in the United States over the age of 18 who cannot be indicted for some federal crime. That is not an exaggeration."


Do you even know what the 134 laws passed by the current Congress are? I know I don't, and you only have to fall afoul of one.

rubyfan 3 days ago 2 replies      
In related news, I heard on NPR today people are complaining that the TSA no-fly list didn't catch a man who was a wanted shooting suspect.

The insinuation is the no-fly list should be expanded to catch domestic criminals. You know, the no-fly list that you can't be removed from and don't have to have committed a crime to get yourself listed on. The no-fly list that is unconstitutional, yeah that one.


fiatmoney 3 days ago 2 replies      
Of course, since this is now available to local & federal cops & prosecutors, same as any other law enforcement database, any exculpatory information will naturally be discoverable by defense attorneys.



maerF0x0 3 days ago 0 replies      
> It'll be Black, Brown, poor, immigrant, Muslim, and dissident Americans: the same people who are always targeted by law enforcement for extra special attention.

.. Another Quote.

> First they came for the Socialists, and I did not speak out, because I was not a Socialist.

> Then they came for the Trade Unionists, and I did not speak out, because I was not a Trade Unionist.

> Then they came for the Jews, and I did not speak out, because I was not a Jew.

> Then they came for me, and there was no one left to speak for me.

I guess i'm on that list now.

matt_wulfeck 3 days ago 2 replies      
You know, when you get paid to find terrorists, and your job depends on finding terrorists, well then, everybody starts to look like a terrorist.
linkregister 3 days ago 2 replies      
I request the link to be changed to the original source; this article is an opinion piece. The original is here:


tehmillhouse 3 days ago 4 replies      
As someone who doesn't live in the US, I am awestruck by how big of a deal this seems to be for people.

Are you really trying to tell me that surveillance of ~the rest of the world~ is somehow less bad?

Someone1234 3 days ago 4 replies      
Rumour has it that the NSA already has threat-predictor algorithms and processes. How long until the NSA provides these to the FBI and you get arrested for pre-crimes or thought-crimes? How do you answer the charge that you were thinking about doing something bad?
danenania 3 days ago 2 replies      
With low level bureaucrats getting access to this data, isn't it inevitable that we'll start seeing massive security breaches on a routine basis?

I suppose we'll just have to start assuming that anything said or written or looked up online will eventually be accessible to anyone who's interested.

benevol 2 days ago 1 reply      
Sorry in advance if this is somehow not a "substantive" comment, but it really needs to be said very clearly:

This country is so f#cked now.

What are the remaining options?

1. Those who accept the new reality will be forced to behave like sheep.

2. Those who speak up will be silenced even before they will have organized some kind of serious movement (see: OWS + parallel construction).

3. Leave.

meric 3 days ago 2 replies      
Is anyone surprised?

Don't use electronic messaging to tell people you're holding cash, bring a cheque at least. The cops are going to find out and your cash is going to be confiscated.

dools 3 days ago 1 reply      
It's not enough to go after drug dealers, you have to go after their families as well.
kii9mplppmfh 3 days ago 0 replies      
The singularity is coming... It just turns out it's fascism.
datashovel 2 days ago 1 reply      
I would not be surprised if there are crowdfunding sites that help fund lawsuits or other legal matters.

Are there crowdfunding sites that help fund civil rights cases?

Edit: In other words, I'm beginning to think the only way to chip away at this sort of massive government overreach is to crowdfund the hell out of a ton of legitimate civil rights cases. Eventually maybe our government(s) (since I imagine US is not the only one to be concerned about) will get the hint.

Theodores 3 days ago 2 replies      
I think they are a few organisational steps away from something truly terrifying. That will be when the NSA just hands over the 'graph' dataset of all the drug dealers to the FBI/local police. As it stands, they have to have a clue, e.g. a name, before they can get particulars. So policing is still 'hampered', conveniently for those that use drugs. So long as their network does not arouse the suspicions of the police, there need not be any suspicions.

However, if the complete 'graph' is just handed over then even the most discreet of networks could be uprooted, all chains and links identified, geolocated too. This could be done. All it would take is a Donald Trump to take it up another level. He could get a lot of votes by promising to use NSA data to eradicate drugs from America once and for all, with no drug-dealer left behind...

Aside from the 'where next' aspects, as it is, I found this article to be quite shocking. So much for the 'land of the free'.

api 3 days ago 3 replies      
Meanwhile the leading Republican presidential nominee is a right-populist (a.k.a. fascist), and the Democratic presidential nominee was almost a left-populist (socialist).

We are literally one economic crisis or major terrorist attack away from some form of significantly more authoritarian if not outright totalitarian government. Whether it would be "left" or "right" is sort of up in the air, and might depend on which side is able to produce a more compelling demagogue at the right time. In any case if history is any guide it doesn't matter much. Totalitarianism is totalitarianism.

If that comes to pass, we're going to find out what "turn-key totalitarian state" means. The infrastructure is in place. The only barriers are legal and social/cultural.

beedogs 3 days ago 1 reply      
It's time to shut down the NSA for good.
dombili 3 days ago 0 replies      
LinuxBender 2 days ago 0 replies      
In my humble opinion, something that folks may want to think about is existing NSA data retention plus the statutes of limitations.

Now various agencies can trawl back through decades of data collection and see what people can still be prosecuted for, extorted with, etc.

walid 2 days ago 0 replies      
If that is true, then I think it is a violation of due process. If proven to have happened, many people could simply go free on technicalities of the law. Awkward for the FBI.
squozzer 3 days ago 0 replies      
To quote the great Gomer Pyle, "surprise, surprise, surprise!"
s_q_b 2 days ago 0 replies      
Public announcement of these programs is good. It opens the door to Constitutional scrutiny.

These actions were happening anyway. Aren't we better off now that they're acknowledged and can be challenged?

SwimAway 2 days ago 0 replies      
We need to stop this bullshit from happening. We the people.
s_q_b 2 days ago 0 replies      
More doors in, more doors out. Bad OPSEC.
anocendi 3 days ago 0 replies      
"What is your hue?"

The question will become relevant soon enough.

losteric 3 days ago 2 replies      
So how many candidates in the upcoming election are publicly against this kind of data mining?
cha5m 2 days ago 0 replies      
This is honestly the road to 1984.
bobby_9x 3 days ago 4 replies      
This sounds exactly like what is happening right now.

"All lives matter" gets you harassed and possibly fired.

AlphaGo beats Lee Sedol 3-0 [video] youtube.com
558 points by Fede_V  1 day ago   404 comments top 50
Radim 1 day ago 4 replies      
In a recent interview [1], Hassabis (DeepMind founder) said they'd try training AlphaGo from scratch next, so it learns from first principles, without the bootstrapping step of "learn from a database of human games", which introduces human prejudice.

As a Go player, I'm really excited to see what kind of play will come from that!

[1] http://www.theverge.com/2016/3/10/11192774/demis-hassabis-in...

bronz 1 day ago 2 replies      
Once again, I am so glad that I caught this on the live-stream because it will be in the history books. The implications of these games are absolutely tremendous. Consider Go: it is a game of sophisticated intuition. We have arguably created something that beats the human brain in its own arena, although the brain and AlphaGo do not use the same underlying mechanisms. And this is the supervised model. Once unsupervised learning begins to blossom we will witness something that is as significant as the emergence of life itself.
awwducks 1 day ago 5 replies      
Perhaps the last big question was whether AlphaGo could play ko positions. AlphaGo played quite well in that ko fight and furthermore, even played away from the ko fight allowing Lee Sedol to play twice in the area.

I definitely did not expect that.

Major credit to Lee Sedol for toughing that out and playing as long as he did. It was dramatic to watch as he played a bunch of his moves with only 1 or 2 seconds left on the clock.

pushrax 1 day ago 4 replies      
It's important to remember that this is an accomplishment of humanity, not a defeat. By constructing this AI, we are simply creating another tool for advancing our state of being.

(or something like that)

Eliezer 1 day ago 19 replies      
My (long) commentary here:



At this point it seems likely that Sedol is actually far outclassed by a superhuman player. The suspicion is that since AlphaGo plays purely for probability of long-term victory rather than playing for points, the fight against Sedol generates boards that can falsely appear to a human to be balanced even as Sedol's probability of victory diminishes. The 8p and 9p pros who analyzed games 1 and 2 and thought the flow of a seemingly Sedol-favoring game 'eventually' shifted to AlphaGo later, may simply have failed to read the board's true state. The reality may be a slow, steady diminishment of Sedol's win probability as the game goes on and Sedol makes subtly imperfect moves that humans think result in even-looking boards...

The case of AlphaGo is a helpful concrete illustration of these concepts [from AI alignment theory]...

Edge instantiation. Extremely optimized strategies often look to us like 'weird' edges of the possibility space, and may throw away what we think of as 'typical' features of a solution. In many different kinds of optimization problem, the maximizing solution will lie at a vertex of the possibility space (a corner, an edge-case). In the case of AlphaGo, an extremely optimized strategy seems to have thrown away the 'typical' production of a visible point lead that characterizes human play...

wnkrshm 1 day ago 0 replies      
While he may not be number one in the Go rankings afaik, Lee Sedol will be the name in the history books: Deep Blue against Garry Kasparov, AlphaGo against Lee Sedol. Lots of respect to Sedol for toughing it out.
Yuioup 1 day ago 1 reply      
I really like the moments when AlphaGo would play a move and the commentators would look stunned and go silent for 1-2 seconds. "That was an unexpected move", they would say.
kybernetikos 1 day ago 2 replies      
Go was the last perfect information game I knew where the best humans outperformed the best computers. Anyone know any others? Are all perfect information games lost at this point? Can we design one to keep us winning?
flyingbutter 1 day ago 2 replies      
The Chinese 9 dan player Ke Jie basically said the game was lost after around 40 minutes or so. He still thinks that he has a 60% chance of winning against AlphaGo (down from 100% on day one). But I doubt Google will bother to go to China and challenge him.
jamornh 1 day ago 0 replies      
Based on all the commentaries, it seems that Lee Sedol was really not ahead at any point during the game... and I think everybody has their answer regarding whether AlphaGo can perform in a ko fight. That's a yes.
bainsfather 1 day ago 0 replies      
It is interesting how fast this has happened compared to chess.

In 1978, chess IM David Levy won a six-game match 4.5-1.5 - he was better than the machine, but the machine gave him a good game (the game he lost was when he tried to take it on in a tactical game, where the machine proved stronger). It took until 1996/7 for computers to match and surpass the human world champion.

I'd say the difference was that for chess, the algorithm was known (minimax + alpha-beta search) and it was computing power that was lacking - we had to wait for Moore's law to do its work. For go, the algorithm (MCTS + good neural nets + reinforcement learning) was lacking, but the computing power was already available.
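For readers who haven't seen the chess-side algorithm mentioned above, it fits in a few lines. A toy negamax with alpha-beta pruning over an abstract game interface (a sketch, nothing engine-grade; the helper functions passed in are invented for the demo):

```python
import math

def alphabeta(state, depth, alpha, beta, evaluate, moves, apply_move):
    """Negamax with alpha-beta pruning: best achievable score for the
    side to move, skipping branches that cannot change the result."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    best = -math.inf
    for m in legal:
        # The opponent's best score is our worst, hence the negation.
        score = -alphabeta(apply_move(state, m), depth - 1,
                           -beta, -alpha, evaluate, moves, apply_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # opponent won't allow this line: prune
            break
    return best

# Tiny demo on an explicit game tree: tuples are internal nodes,
# ints are leaf values from the perspective of the side to move there.
tree = ((3, 5), (2, 9))
best = alphabeta(tree, 10, -math.inf, math.inf,
                 evaluate=lambda s: s,
                 moves=lambda s: list(range(len(s))) if isinstance(s, tuple) else [],
                 apply_move=lambda s, m: s[m])
print(best)  # 3
```

The pruning step is what Moore's law had to make fast enough for chess; the point of the comment above is that no amount of pruning like this was enough for Go.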

skarist 1 day ago 2 replies      
We are indeed witnessing and living a historic moment. It is difficult not to feel awestruck. Likewise, it is difficult not to feel awestruck at how a wet 1.5 kg clump of carbon-based material (e.g. Lee Sedol's brain) can achieve this level of mastery of a board game, such that it takes an insane amount of computing power to beat it. So, finally we do have a measure of the computing power required to play Go at the professional level. And it is immense; or, to apply a very crude approximation based on Moore's law, it requires about 4096 times more computing power to play Go at the professional level than it does to play chess. Ok, this approximation may be a bit crude :)

But maybe this is all just human prejudice... i.e. what this really goes to show is that, in the final analysis, all board games we humans have invented and played are "trivial", i.e. they are all just like tic-tac-toe, just with a varying degree of complexity.

partycoder 1 day ago 2 replies      
Some professionals labeled some AlphaGo moves as suboptimal or slow. In reality, AlphaGo doesn't try to maximize its score, only its probability of winning.
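That distinction is easy to make concrete. A toy sketch with made-up numbers, just to show how a score maximizer and a win-probability maximizer can disagree about the same candidate moves:

```python
# Hypothetical evaluations of two candidate moves (made-up numbers):
# expected final point margin vs. estimated probability of winning.
candidates = {
    "aggressive": {"margin": 12.0, "win_prob": 0.78},  # big lead, but risky
    "solid":      {"margin": 1.5,  "win_prob": 0.93},  # small lead, near-certain
}

# A score maximizer and a win-probability maximizer pick differently.
by_score = max(candidates, key=lambda m: candidates[m]["margin"])
by_win_prob = max(candidates, key=lambda m: candidates[m]["win_prob"])

print(by_score)     # aggressive
print(by_win_prob)  # solid - the AlphaGo-style choice
```

This is why its moves can look "slow" to commentators who are used to reading point margins: a 1.5-point win and a 12-point win are worth exactly the same to it.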
atrudeau 1 day ago 1 reply      
It would be nice if AlphaGo emitted its estimated probability of winning every time a move is made. I wonder what this curve looks like. I would imagine mistakes by the human opponent would give nice little jumps in the curve. If the commentary is correct, we would expect very high probability 40-60 minutes into the game. Perhaps something crushing, like 99.9%.
niuzeta 1 day ago 3 replies      
Impressive work by Google research team. I'm both impressed and scared.

This is our Deep Blue moment, folks. History is made.

jonah 1 day ago 0 replies      
Cho Hyeyeon 9p's commentary on the American Go Association YouTube channel: https://www.youtube.com/watch?v=CkyVB4Nm9ac
dwaltrip 1 day ago 0 replies      
AlphaGo won solidly by all accounts. This is an incredible moment. We are now in the post-human Go era.

The one solace was that Lee Sedol got his ko =) However, AlphaGo was up to the task and handled it well.

seanwilson 1 day ago 4 replies      
Super interesting to watch this unfold. So what game should AI tackle next? I've heard imperfect information games are harder for AI...would the AlphaGo approach not work well for these?
partycoder 1 day ago 2 replies      
I don't think Ke Jie would win against AlphaGo either.
hasenj 1 day ago 1 reply      
Seems to be playing at super-human levels.

I'm nowhere near a strong player, but it seems like AlphaGo is far ahead of Lee Sedol.

dkopi 1 day ago 0 replies      
One can only hope that in the final battle between the remaining humans and the robots, it won't be a game of Go that decides the fate of humanity.
starshadowx2 1 day ago 1 reply      
I'm very interested to see what the Google DeepMind team applies themselves to in the future.
awwducks 1 day ago 1 reply      
I am really curious about the reviews from An Youngil 8p and Myungwan Kim 9p. The commentary by Redmond always tends to leave something to be desired.
hyperion2010 1 day ago 2 replies      
I really want to see how a team of humans would do against AlphaGo with a 3 or 4 hour time limit.
dynjo 1 day ago 1 reply      
How long before AlphaGo also participates in the post-match press conference I wonder...
asmyers 1 day ago 1 reply      
Does anyone know if the AlphaGo team is saving the probability of winning assignments that AlphaGo gave its moves?

It would be fascinating to see how early AlphaGo assigned very high probability of it winning. It would also be interesting to see if there were particular moves which changed this assignment a lot. For instance, are there moves that Lee Sedol made for which the win probability is very different for the AlphaGo moves before and after?

gandalfu 1 day ago 0 replies      
It's a matter of language.

Our model of representing Go fails to express the strategies AlphaGo is showing; we are communicating on the board in different languages. No wonder everyone watching the games is stumped by the machine's "moves".

Our brains lack the capacity to implement such algorithms (to understand such languages), but we can still create them. We might see engine A play against engine B in the future and enjoy the matches.

No one is surprised by a machine doing a better job with integer programming/operational research/numerical solutions etc.

mikhail-g-kan 1 day ago 0 replies      
Interestingly, I feel proud of the AI, even though humans lost. It's progress toward our end as the classic human species.
zhouyisu 1 day ago 2 replies      
Next move: how about beating humans at online dating?
asdfologist 1 day ago 0 replies      
Ke Jie's gonna be next.


"Demis Hassabis, Google DeepMind's CEO, has expressed the willingness to pick Ke as AlphaGo's next target."

xuesj 1 day ago 1 reply      
This is a milestone in the history of AI. The ability of AlphaGo is amazing and far beyond human.
eemax 1 day ago 1 reply      
What happens if you give the human player a handicap? I wonder if the games are really as close as the commentators say, or if it's just a quirk of the MCTS algorithm.
jcyw 1 day ago 0 replies      
We had Gödel on the limitations of logic and Turing on the limitations of computation. I think AI will only change what humans call intelligence. We used to call people who can mentally calculate large numbers geniuses. Lots of that will have to be re-defined.
Huhty 1 day ago 0 replies      
Full video will be available here shortly:


(It also includes the videos of the first 2 matches)

theroof 1 day ago 2 replies      
Is anyone also asking themselves when they'll be able to play against this level of AI on their mobile phone? Or formulated differently: when will an "AlphaGo" (or equivalent) app appear in the play/app store?

In 2 years? In 1 year? In 3 months?

yodsanklai 1 day ago 0 replies      
All this excitement makes me want to learn a little about these algorithms. I don't know anything about neural networks (though I did implement a chess game a while ago). Would it be difficult to implement a similar algorithm for a simpler game?
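For anyone tempted by this, the tree-search half of AlphaGo is small enough to try on tic-tac-toe. A rough sketch of Monte Carlo tree search with plain UCB1 and uniformly random rollouts (no value or policy networks, so this is the pre-AlphaGo flavor of MCTS; the board is encoded as a 9-character string, an encoding invented for this demo):

```python
import math, random

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, c in enumerate(board) if c == "."]

def play(board, i, p):
    return board[:i] + p + board[i+1:]

class Node:
    def __init__(self, board, player):
        self.board, self.player = board, player   # player = side to move
        self.children, self.visits, self.wins = {}, 0, 0.0

def rollout(board, player):
    # Play uniformly random moves to the end; return the winning side or None.
    while winner(board) is None and moves(board):
        board = play(board, random.choice(moves(board)), player)
        player = "O" if player == "X" else "X"
    return winner(board)

def mcts(root, iters=3000):
    for _ in range(iters):
        node, path = root, [root]
        # 1. Selection: descend via UCB1 while the node is fully expanded.
        while node.children and len(node.children) == len(moves(node.board)):
            node = max(node.children.values(),
                       key=lambda c: c.wins / c.visits
                       + math.sqrt(2 * math.log(node.visits) / c.visits))
            path.append(node)
        # 2. Expansion: add one untried child, unless the game is over.
        untried = [m for m in moves(node.board) if m not in node.children]
        if untried and winner(node.board) is None:
            m = random.choice(untried)
            child = Node(play(node.board, m, node.player),
                         "O" if node.player == "X" else "X")
            node.children[m] = child
            path.append(child)
            node = child
        # 3. Simulation and 4. Backpropagation.
        w = rollout(node.board, node.player)
        for n in path:
            n.visits += 1
            # Credit a node when the side that moved INTO it won.
            if w is not None and w != n.player:
                n.wins += 1
    # Recommend the most-visited root move.
    return max(root.children, key=lambda m: root.children[m].visits)

random.seed(0)
# X (cells 0,1) can win immediately at cell 2; O (cells 3,4) threatens cell 5.
print(mcts(Node("XX.OO....", "X")))  # plays the winning move (cell 2)
```

AlphaGo's innovation was replacing the random rollouts and the move prior with deep networks; the skeleton above is the part that existed in Go programs for a decade before.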
arek_ 1 day ago 0 replies      
I was using machine learning in computer chess some time ago. My commentary: http://arekpaterek.blogspot.com/2016/03/my-thoughts-on-alpha...
cpeterso 1 day ago 0 replies      
Does AlphaGo run in the cloud or is it a machine onsite at the match? I wonder how small AlphaGo could be scaled down and still beat Lee Sedol. How competitive would AlphaGo be running on an iPhone? :)
mzitelli 1 day ago 0 replies      
Congratulations to AlphaGo team, curious to see if Lee Sedol will be able to defeat it in the next matches.
Queribus 16 hours ago 0 replies      
Was I in a "prophetic mode" yesterday? ;)))
ganwar 1 day ago 1 reply      
Incredible news. We have all heard the positive coverage and how tremendous it is. What I find interesting is that nobody is talking about the potential of AlphaGo as a war-strategizing AI.

If you provide terrain(elevation etc.) information, AlphaGo can be used to corner opponents into an area surrounded by mountains where AlphaGo is sitting on the mountains. We all know what happens after that.

Don't want to kill the party, but I am completely surprised by the lack of chatter in this direction.

eternalban 1 day ago 0 replies      
My impression is that Sedol was psychologically defeated at 1-0. Computational machines don't crack under pressure - at most they get exhausted.
pmyjavec 1 day ago 0 replies      
If one allowed AlphaGo to train forever, what would happen? Would it constantly just tie against itself?
oliebol 1 day ago 0 replies      
Watching this felt like watching a funeral where the commentary was the eulogy.
ptbello 1 day ago 1 reply      
Does anyone have insights on how a game between two AlphaGos would play out?
awwducks 1 day ago 0 replies      
I guess the next question on my mind is how AlphaGo might fare in a blitz game.
tim333 1 day ago 2 replies      
Kind of a shame the tournament isn't closer.
Queribus 1 day ago 0 replies      
Strictly speaking, "just" because AlphaGo finally won doesn't mean it was right when it claimed to be ahead earlier.
yeukhon 1 day ago 0 replies      
I wonder what the outcome would be if you put the top 10 players in a room to team up and play against AlphaGo, allowed to make one move within a 1-hour period and to play only up to 8 hours a day.

Anyway, I think AlphaGo is a great training companion. I think Lee felt he was learning.

Finally, I also feel that while experience is crucial, the older generation gets flushed out by the younger generation every decade. I wonder if age really plays a role in championships - not that AlphaGo isn't effectively a 1000-year-old "human", given it has played thousands of games already.

scrollaway 1 day ago 1 reply      
If, as you believe, this post was "assigned" points (which only HN staff can theoretically do), what do you believe you will achieve by flagging it?
Factorio a game where you can automate basically anything factorio.com
565 points by staticelf  2 days ago   112 comments top 34
hobs 2 days ago 0 replies      
I have been playing this for a few years and just started again with the release of the steam version.

If you ever enjoyed Tekkit (minecraft) or the more automated part of Dwarf Fortress, you will like this game a lot.

My favorite part is that basically any action or building (up to the far late game) usually pushes you to build it by hand once or twice, and then automate the process forever.

It is extremely satisfying to build your first solar panel array while fighting for every resource, and then a few hours later have your 5000 strong robot army assemble a blueprint of that same array in 5 seconds while you watch.

The game also has a really great mechanic where you are constantly unbalanced for what resource you need to build up, but only because you decided to expand and create, so that your plans always digress into other plans and other problems to solve.

Also, the multiplayer is great and works really well, though it has a weird setting where you can set your own latency, and it seems like the host and clients are in lockstep, not allowed to skew (so a laggy client is a problem).

patio11 2 days ago 4 replies      
This game is fantastic, and ate ~20 hours of a business trip earlier in its development lifecycle.

I have one warning to you which I am not aware of seeing elsewhere for it: something about the color palette gives me severe eye strain, to the point of "physically painful to play", in a way I have never experienced before or since.

There exist several other games on Steam these days with the same core build-a-(semi-)autonomous-factory mechanic. My favorite of those that I've played so far is Big Pharma. It's substantially less advanced in terms of factory mechanics than Factorio [+], but the strategy is very, very deep, much deeper than you'd expect from looking at it. (For spoilers on that score, see my Steam review, which is the topmost one on the page. Capsule non-spoiler summary: best $20 I spent last year.)

[ + ] A fairly key skill for Factorio, which is present in Big Pharma but not relevant except at the highest levels of play, is timing the production of multiple subcomponents (which might happen in different quantities, at different rates, at variable distance from where they are consumed) such that one's production line never starves, blocks, or overproduces. It's Toyota Factory Simulator 2016 in this respect. When you get it right you get visual feedback (absence of congestion on your production line as e.g. coal stacks up because your furnaces aren't burning it because they're blocking on insufficient supplies of ore because you have insufficient electrical capacity because...). It feels like you're playing a symphony of borglike capitalist efficiency.
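That starve/block balance from the footnote can be sketched as a toy discrete-time simulation (the production periods and buffer size are made-up numbers, not Factorio's actual rates):

```python
def simulate(steps, miner_period, furnace_period, buffer_cap):
    """Toy two-stage line: a miner feeds a bounded ore buffer, a furnace
    smelts from it. Count the ticks where the furnace starves (buffer
    empty) and where the miner blocks (buffer full)."""
    buffer = starved = blocked = smelted = 0
    for t in range(steps):
        if t % miner_period == 0:        # miner outputs one ore
            if buffer < buffer_cap:
                buffer += 1
            else:
                blocked += 1             # belt backed up, miner stalls
        if t % furnace_period == 0:      # furnace wants one ore
            if buffer > 0:
                buffer -= 1
                smelted += 1
            else:
                starved += 1             # furnace idles
    return starved, blocked, smelted

# Miner faster than furnace: the buffer fills and the miner blocks.
print(simulate(1000, miner_period=2, furnace_period=3, buffer_cap=10))
# Furnace faster than miner: the furnace starves instead.
print(simulate(1000, miner_period=3, furnace_period=2, buffer_cap=10))
```

Matching the two periods (and sizing the buffer to absorb jitter) is exactly the balancing act the footnote describes, just with one producer and one consumer instead of a whole base.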

rl3 2 days ago 2 replies      
I have been avoiding this game like the plague due to its potential life-ending, productivity-destroying effects (at least for some people).

This is one of those games where it starts innocently enough, a few hundred hours of gameplay accrue, and the next thing you know you're wondering where the last two months went.

Part of the reason for this is related to the decline of the SimCity franchise over the years. Cities: Skylines was pretty, but it didn't really hit the same notes in terms of gameplay. The last real SimCity game was in 2004, so it's been well over a decade of waiting for the next truly addictive builder/simulation fix to arrive. Based on the number of people I know who have sunk triple-digit hours into Factorio, it's the closest we've come to a real SimCity game since.

That said, it seems like a fantastic game and I look forward to playing it some day.

hbt 2 days ago 1 reply      
This is an amazing game, especially in multiplayer.

When playing with others, it's comparable to programming with others. It's about communication, people will try to optimize stuff that already works, people will argue about how to build stuff or when to refactor stuff. Sometimes they will build something in a weird way and argue why this is the best way etc.

It can be a nightmare with the wrong people or a lot of fun with intelligent people who can control their emotions.

Also, the community is amazing and there are tons of mods.

KirinDave 2 days ago 1 reply      
If you like this game, you might also like Modded Minecraft. Many technical mods strongly associate with developers who love to tinker on absurdly complex physical systems. I've found the process to be a nice change of pace in the evenings.

You might also like our Minecraft Modpack, Resonant Rise. You can grab it on the ATLauncher (just search for ATLauncher). It's designed around complex and interesting engineering challenges.

There's a similarly REALLY cool Minecraft mod on the scene for Minecraft 1.8 called "Psi" that you can try (available here: http://psi.vazkii.us/). It lets you use a visual dataflow language and trigonometry to create "magic spells" that are very technical in nature. It's a very fun exercise, and it's neat to write a program (with almost NO flow control!) that does things like dig a tunnel or build a bridge or throw zombies skyward, all by magic.

Twirrim 2 days ago 1 reply      
The whole game is of stunning quality for what is still "alpha", a label that in games frequently denotes "so buggy it's a miracle if the game runs for 10 minutes without crashing and burning".

It just works, and it's smooth as silk. It seems like they've concentrated heavily on the game engine and now are focussing on content, but there is plenty of content already to play with.

JackuB 2 days ago 0 replies      
I love the irony that you have to fight bugs.

PROTIP: Try multiplayer. Unexpectedly, my girlfriend loved it and we had a lot of fun there.

Also, they are hiring: http://www.factorio.com/jobs

Paul_S 2 days ago 6 replies      
It's a fun game, but please, at some point you have to release and stop pretending it's alpha to avoid any responsibility. It is now not unusual to buy an "alpha" release of a game, play it for a few years, shelve it and move on, and then notice 5 years later that the final release has shipped. I can't be the only person who finds this silly.
skriticos2 2 days ago 0 replies      
I picked this up a few days ago on Steam and have spent considerable time on it since then. It's really addictive for people who like automation. It has a nice pace, throwing in some interesting challenges, but generally lets you focus on those little details of how to optimize the structure of your base. Resources are generally bountiful, but they do run out eventually, so you are forced to think about acquiring new sources - and until then you have plenty of time to fill your conveyor belts.

By the way, thanks for supporting this on Linux! Wouldn't play it otherwise.

typeformer 2 days ago 0 replies      
I think it would be cool to be able to play the natives too and have to come up with super ingenious ways of dismantling these unholy factories of the invaders destroying your planet...
kentonv 2 days ago 5 replies      
This game is like crack for programmers.

This has been the most popular game at my LAN parties (http://kentonshouse.com) for the last year and a half. Here's a review I wrote in December 2014 that still applies (original at https://plus.google.com/+KentonVarda/posts/YHayo6sj42n):

My new favorite game is Factorio (http://factorio.com). It's like a cross between Minecraft, SimCity, and Civilization, and the result is massively better than any of them. The game is currently in "alpha", but I'm not sure why; it's far more polished and less buggy than many finished professional games I've played.

Overhead view. Like Minecraft, you start out punching trees for wood to craft a pickaxe with which you can then mine some ore to craft other things. But soon, you are building an automatic mining drill, then a conveyor belt to bring the ore to a smelting furnace, then robot arms to insert the ore into the furnace and take the smelted bars out, then more conveyor belts to bring those to other places where they can be used. Eventually you can build power plants, labs to research new technologies, walls and turrets to defend against attackers, oil refineries, robot delivery drones, trains, and more.

The game is incredibly addictive (especially for programmers?). But what really impresses me is how the game illustrates the complexity of the real world. Factorio is a lesson in how logistics trump tactics and strategy ("strategy is for amateurs, logistics are for professionals"), and in how to build a complex system for changing requirements. The lessons are broadly applicable to the real world.

It's fairly easy to analogize Factorio to city planning. In your first game, you will quickly discover that the city you built for the early game is all wrong for the late game -- and then you realize: every real-life big city is a horrible mess and this is exactly why.

I also find myself comparing Factorio to software, especially distributed systems and networks. I find myself constantly using phrases like "buffer", "flow control", "back pressure", "throughput", "refactor", "under-utilized", etc.

One transition I find particularly interesting: around the middle of the game, you research the ability to build "logistics drones", which are basically like Amazon's quadcopter delivery drones. They can transport materials from point to point around your base -- you set up "request" points and "supply" points, and the drones pick up whatever items land in the supply points and bring them directly to whichever requester is requesting that item.

Up until this point, you mostly use conveyor belts for this task. When you first get logistics drones, you think "These are WAY more expensive than conveyor belts and have much lower throughput. Why would I ever want them?" But you quickly realize that the advantage of drones is that they are rapidly reconfigurable. Once your base is entirely drone-based, you can switch factories to build different items on a whim -- no need to re-route any conveyor belts. This gets more and more important in the late game as the number of different types of things you are building -- all with different input ingredients -- increases, and maintaining a spaghetti of conveyors becomes infeasible. This is tricky to grasp until you do it.

For a while, of course, you'll have part of your base running on drones while another part is still based on conveyors. It's like using Google Flights in your browser to search for airline tickets, while on the back end it is integrating with 60's-era mainframe-based flight scheduling software.

I can't help but imagine that conveyor belts and logistics drones represent two different programming languages (or, maybe, programming language paradigms). Choosing your programming language based on how easy it is to do something simple is totally wrong. The true measure of a good language is how it handles massive complexity and -- more importantly -- reconfiguration over time.

Another thought: In 10-20 years, when we have everything delivered to our houses via drones and self-driving taxis populating every major street, will we be able to just get rid of small residential side-roads? You won't need to drive a car up to your house anymore: it's easy enough to walk a couple blocks to the nearest major street and hop in a cab, or better yet to a train station. You don't need to carry cargo since it's delivered by drones. Delivery trucks: also replaced by drones. Will we suddenly be able to reclaim a ton of inner-city space? What will we do with it?

In any case, thanks to +Michael Powell and +Brian Swetland for introducing me to this game!

PS. Factorio is multiplayer! We've been having a lot of fun with it at LAN parties, and I just completed a coop game with +Jade Q Wang, who is also addicted. We tend to forget to do things like eat or sleep when we're playing.

imaginenore 2 days ago 2 replies      
The only downside to Factorio is its graphic design. It all looks like a grey-brown mess. They need to allow 3rd-party skins like Minecraft does.
tobr 2 days ago 1 reply      
That trailer was surprisingly well made.
jomendoz 2 days ago 1 reply      
It seems to be a great game! Sadly, I think it encourages the same resource exploitation that we, the human race, have been inflicting on Earth as a whole.

I think a very good extension would be to provide new technology for sustainable energy generation, and even further the ability to harness the planet as a super living thing (as Asimov imagined in his Foundation series).

This would provide an even more constrained and challenging environment: maintaining the planet's equilibrium on one hand and progressing on the other.

Maybe those aren't really valid concerns at the beginning of a game, but later on it becomes important, as massive industrialization reaches ecological relevance.

nickpinkston 1 day ago 0 replies      
If anyone is interested in building an automated factory that can make anything IRL, we'd love to talk with you:


Or hit me up: nick@plethora.com (Founder/CEO)

zokier 2 days ago 5 replies      
The thing that bothered me in Factorio when I played it some time back was that once you hit the end of the tech tree, which is pretty easy (or at least was back then), there wasn't really anything else to do, nothing more for your highly tuned factories to produce. That said, I really liked the part about building your factories; I just would like to have more reason to keep on building. Maybe competitive multiplayer spices things up, but I don't feel like that is where "the scene" is.
frenchie4111 2 days ago 1 reply      
I am going to have job candidates play this game with me
bingeboy 2 days ago 1 reply      
I just started playing and find peace in automation.
TimJRobinson 2 days ago 0 replies      
I'm a huge fan of the Anno series (Anno 2070 multiplayer is amazing) because of how much fun it is to try and maintain supply chains and production lines of 50+ different resources. Have spent over 400 hours playing it according to steam.

This looks right up my alley, downloading it now.

graeme 2 days ago 1 reply      
Looks fun. Is it potentially productive leisure? I run a business, and have automated portions of it with processes. These run very well. Others I've found harder to automate.

I'm wondering if playing a game like this can help train a habit of automation. Thoughts?

r3bl 2 days ago 1 reply      
Can someone explain to me the difference between version 0.11.22 and version 0.12.26? The demos for both versions are available on the demo page, and I can't figure out why such an old version still seems to be available as a demo.
LoSboccacc 2 days ago 0 replies      
Loved this game and have been on board since the Indiegogo campaign.

I've made two mods that suit the engineering mindset well:

Static difficulty - stops the time-based increase of enemy difficulty: https://forums.factorio.com/viewtopic.php?f=87&t=6433

Endless resources - makes deposits endless (with diminishing returns) so all your railroads don't suddenly vanish: https://forums.factorio.com/viewtopic.php?f=94&t=3130

check them out :)

StavrosK 2 days ago 0 replies      
Sidenote, I love how both Factorio and Big Pharma accept bitcoin for purchases. It's by far my preferred payment method, and I'll buy one of the two just to support them for that.
fixermark 2 days ago 0 replies      
True story: I remembered I had this installed on my home machine while I was on a business trip.

I... may have consumed copious amounts of hotel wifi playing it over VPN one night. ;)

proactivesvcs 1 day ago 0 replies      
In case you're still on the fence, take a look at the Steam reviews: http://store.steampowered.com/app/427520/#app_reviews_hash
syncsynchalt 2 days ago 0 replies      
I really can't recommend this game enough.
Pamar 2 days ago 2 replies      
Question: would this (or Big Pharma, or some other similar game) work for children? Starting at what age?
bronz 2 days ago 3 replies      
Wow, this really sucks. I had the exact same concept in mind for my next game. Like, during the trailer when he shows the crashed ship, my jaw dropped. Almost every aspect of the game is the same as what I had in mind. I had the exact same idea and I was really excited about it. Oh well.
ccallebs 2 days ago 0 replies      
Another +1 for the game as well. As I get older, it takes a lot more for a game to put me into "binge mode". This game did and then some. If you enjoy solving puzzles / architecting software / survival games, you will probably dig Factorio.
gbersac 2 days ago 0 replies      
I am at the 42 school (a French programming school) and everyone is playing it (myself included). This is a trendy game, especially for programmers (those who love problem solving).
yread 2 days ago 0 replies      
This game is awesome. It was already great in 2013 when I supported them, and it keeps getting better; the developers listen to the users and really work with them.
kawsper 2 days ago 0 replies      
I bought this game one Friday and have now clocked 46 hours in it. It is a fantastic game, and it can really keep you up at night if you aren't careful.
thisisandyok 2 days ago 2 replies      
Wow, I really want to play this. I enjoyed the Kelfigs games on Xbox, but was disappointed that they were so limited.
techman9 2 days ago 0 replies      
As distinct from https://factor.io/ LOL.
State of the Art JavaScript in 2016 medium.com
533 points by achairapart  2 days ago   288 comments top 55
fleshweasel 2 days ago 5 replies      
I strongly disagree about TypeScript-- I think it's a huge boon to productivity. TypeScript has union types, i.e. "number | string", which are similar to algebraic data types. TypeScript also has optional interface members and function parameters, marked by putting ? at the end of the name, i.e. "foo?: number".
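As a minimal sketch of those two features (union types and `?` optional members), assuming a recent TypeScript compiler; the function and interface names are invented for the example:

```typescript
// A union type: the argument may be a number or a string.
function formatId(id: number | string): string {
  if (typeof id === "number") {
    // Narrowing: in this branch the compiler knows `id` is a number.
    return id.toFixed(0);
  }
  return id.toUpperCase();
}

// Optional members and parameters are marked with `?`.
interface Point {
  x: number;
  y: number;
  label?: string; // may be omitted entirely
}

function describe(p: Point, prefix?: string): string {
  const head = prefix ? prefix : "point";
  return p.label
    ? `${head} (${p.x}, ${p.y}) [${p.label}]`
    : `${head} (${p.x}, ${p.y})`;
}

console.log(formatId(42));             // "42"
console.log(describe({ x: 1, y: 2 })); // "point (1, 2)"
```

Leaving out `label` or `prefix` compiles cleanly, while passing, say, a boolean for `id` is a compile-time error.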

Static types allow for much, much better tooling, particularly autocomplete and the ability to check whether your code is valid on some basic levels. I consider avoiding it to be a big waste of time. I've had a good experience getting the definitions files going for the libraries I use.

I also disagree with the statement that TypeScript is making JavaScript "like C# or Java". TypeScript lets you opt out of type checking all you want with minimal difficulty. It also will by default always emit JavaScript even if it detects type errors.

andybak 1 day ago 6 replies      
Disappointing that after months of moaning about the paralysis of choice, few of the comments are positive about a genuine and fairly defensible attempt to cut through that.

He proposes a fairly simple stack (and for the sake of argument he assumes your needs are beyond the 'static html and a touch of jQuery' stage). He spends time explaining them and makes a fairly good attempt to avoid the overly-new or overly-complex.

We've had all the obvious reactions:

1. This isn't my stack of choice

2. React is flawed

3. Don't use frameworks at all

4. I hate dynamic typing

5. Javascript is broken beyond repair

6. It will have changed by next week

All of these are valid discussions to have but they get wheeled out every time and - maybe with the exception of point 1 - they are only indirectly related to the topic of this post.

So every js discussion becomes a meta-discussion. Same thing with Google posts ('oh they'll close that down next week'), ORMs ('they suck'), Python ('Python 3 was a mistake') etc.

HN comments need an on-topic vs off-topic filter. Or a "yes, we already know that" filter...

(The irony of the above statements when this is also a meta-post is duly noted)

My own feeling is that everyone should avoid jumping on complex frameworks until they are really needed. jQuery, Pjax or intercooler.js can take you a long way and save a lot of headaches. But when you do need a proper MVC-like framework then this article is a valuable guide of the sort that people have been asking for for months.

pkrumins 2 days ago 7 replies      
Ok, I've had enough. I'm making a prediction that the entire JavaScript ecosystem will collapse.

This just doesn't make any sense. None of this is nice. It's all ugly and complicated. There's no beauty to these tools. There are no fundamental tools either. Everything is evolving too quickly. You've a choice of 25 frameworks, libraries, tools that change every day and break between versions. The complexity is growing and the ecosystem is getting extremely fragmented.

Instead of just writing your application, you're in despair trying to choose the right tools, panicking about not being able to understand them, and then you spend weeks learning them. You end up writing an application that glitches, and you've no idea how to fix it because you depend on a dozen different tools. The worst part is that you don't even need them. You've been tricked by peer pressure into using them.

When you've got a complex, fragmented ecosystem, and developers are stressed because they can't understand and learn tools quickly enough, the only logical conclusion is that it will collapse; only a few technologies will survive and get mass adoption, and everything else will be forgotten.

Lazare 2 days ago 3 replies      
I agree with 90% of this. React, Redux, ESLint with airbnb config, npm, webpack, lodash, ramda, fetch, css modules...absolutely.

I disagree with the breezy assertion that types don't matter, and the offhand dismissal of TypeScript. And saying that "TypeScript tries too hard to make JavaScript like C# or Java" reveals, in my view, a fundamental failure to understand what TypeScript does.

I also think the author is a bit too strongly in favour of `mocha`; I don't think `ava` should have been so easily dismissed, and I've recently run across a pretty nice framework called `painless`. And even if you do use `mocha`, I find `expect` to be a better assertion library than `chai`. I think a better answer here might be "use whatever works for you, so long as it's not Jest". (The shoutout to `enzyme` was on point though; great library if you need to test React components.)

treenyc 2 days ago 3 replies      
Why not mithril? https://lhorie.github.io/mithril/

Seems to be way faster, and easier to learn than any of those other framework/libs.

For example, "How is Mithril different from React?" Source: https://lhorie.github.io/mithril/comparison.html

"The most visible difference between React and Mithril is that React's JSX syntax does not run natively in the browser, whereas Mithril's uncompiled templates do. Both can be compiled, but React's compiled code still has function calls for each virtual DOM element; Mithril templates compile into static Javascript data structures.

Another difference is that Mithril, being an MVC framework, rather than a templating engine, provides an auto-redrawing system that is aware of network asynchrony and that can render views efficiently without cluttering application code with redraw calls, and without letting the developer unintentionally bleed out of the MVC pattern.

Note also that, despite having a bigger scope, Mithril has a smaller file size than React."
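The "templates compile into static JavaScript data structures" claim can be sketched with a toy hyperscript helper. This is an invented stand-in for illustration, not Mithril's actual `m()` implementation:

```typescript
// A virtual DOM node as plain data: once built, it's just nested objects.
interface VNode {
  tag: string;
  attrs: { [key: string]: string };
  children: (VNode | string)[];
}

// Toy hyperscript function in the spirit of Mithril's m().
function m(tag: string, attrs: { [key: string]: string },
           ...children: (VNode | string)[]): VNode {
  return { tag, attrs, children };
}

// The "template" evaluates to a static structure with no residual calls,
// which a renderer can then diff cheaply.
const view = m("ul", { class: "nav" },
  m("li", {}, "Home"),
  m("li", {}, "About"));

console.log(view.tag);             // ul
console.log(view.children.length); // 2
```

The contrast being drawn in the quote is with compiled JSX, which still leaves one function call per virtual DOM element at runtime.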

interlocutor 2 days ago 7 replies      
The problem with React is its patent rider. React.js comes with a BSD license, but has a patent rider that gives you a license to React's patents. This sounds like a good thing, right? But this rider has a "strong retaliation clause" which says that if you make any sort of patent claim against Facebook this patent license automatically terminates. Which means Facebook can now sue you for patent infringement for using React. You may think this is no worse than not having a patent rider at all. But that's not the case. If there is no patent rider then there is an implicit grant which cannot be revoked.

If you work for a software company and your company has patents then keep in mind that by using React you are giving Facebook a free license to your entire patent portfolio.

More info on weak vs strong retaliation clauses: http://www.rosenlaw.com/lj9.htm

sklivvz1971 1 day ago 1 reply      
Every post that lists a set of tools is eventually going to be completely wrong.

1) This is the author's favorite setup in 2016. With all due respect, what is the lasting value of this information?

2) There is no "best" architecture. It depends on what problem one is solving. The author does not specify that, making their conclusions likely completely wrong in most cases. Yet, the language they use is in absolute terms.

3) 99% of JavaScript simply enhances static HTML pages. Yes, even in 2016. You probably don't even need jQuery, although it's likely to be available already.

4) It curiously evangelizes the "latest and greatest" frameworks, thus incurring novelty bias -- new frameworks have fewer apparent problems because many side effects become apparent years later, once the codebase is mature.

lsdafjklsd 2 days ago 2 replies      
At work we've turned an ember-cli ember app into a react redux app in place using immutable.js and ramda for everything. It's been a huge boon.

PS: Ramda eats lo-dash and its imperative API for lunch. It's for power users: everything curried, higher levels of abstraction. Pick it up and learn it; it'll make you a better programmer.
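To illustrate the "everything curried" point, here is a hand-rolled two-argument curry; a toy sketch only, since Ramda's real `R.curry` handles any arity plus placeholder tricks:

```typescript
// Minimal curry for binary functions, in the spirit of Ramda's auto-currying.
function curry2<A, B, R>(fn: (a: A, b: B) => R): (a: A) => (b: B) => R {
  return (a: A) => (b: B) => fn(a, b);
}

const add = curry2((a: number, b: number) => a + b);
const add10 = add(10); // partially applied: waits for the second argument

console.log(add10(32));            // 42
console.log([1, 2, 3].map(add10)); // [ 11, 12, 13 ]
```

Partial application like `[1, 2, 3].map(add10)` is the style win being claimed over lodash's mostly data-first, uncurried API.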

Next stop: ClojureScript. Om Next is a library where you can get a feel for a Falcor + Relay stack in like 70 lines of code, all without the specific tech bloat. David Nolen is a UI prophet; just follow him.

Kequc 2 days ago 2 replies      
A huge amount of this stuff is bloat, to me. It seems like this is a collection of libraries that largely were picked out of the problem that is javascript bloat today.

Use typescript and just write it to do whatever you want it to do. You don't need React, or any of this stuff. Chai is good for testing. But as far as deploying a production application, there should be literally no dependencies. You don't need them.

There is nothing today that typescript or even just raw javascript can't do perfectly fine on its own.

tobltobs 2 days ago 2 replies      
Measuring time intervals in years for everything JS-related is too imprecise. "State of the Art in 3/2016", or at least a quarterly label, would make more sense.
georgefrick 1 day ago 9 replies      
I'm not a fan of these discussions at all. We're all supposed to have been using X framework at X time period. In the enterprise we can't just keep rewriting all the god damn applications.

For us contractors, we have to answer to clients we had 2 years ago about why their app is in Backbone.

I mean, damn; we have to build software here and we aren't all Facebook. You might get warm and fuzzies from constantly starting over and feeling like you've chosen the right framework, but it's immature.

Oh, we got it right this time! React is a paradigm shift! We've quickly forgotten we were saying this with Angular bindings. Oh your model based stuff is crap, this has TWO WAY BINDING, it's a paradigm shift!

Now, I'm using Angular. I could recite the Backbone source code, we had a few small libraries and we built huge apps and they worked (and they were built with Grunt and it worked fine, but hey, move it all to Gulp! now! Paradigm shift!). In this case I was expecting it. I waited six months and Webpack came along.

We're going to go ahead and build our app in Angular 1.x with TypeScript and Webpack and test it with Jasmine.

This article is NOT correct. This hasn't been 'decided', there is no clear winner. You can't simply list the features of something as "amazing" and "where it's at". You are arguing finality here and your main data point is "coolness factor". It's not correct, it's not objective, and it isn't high-quality, long-term, well-thought-out software development practice.

Mc_Big_G 2 days ago 5 replies      
Ember barely gets mentioned and includes each "piece of modern web applications" OUT OF THE BOX with a great support community. I love not making all of those decisions and doing all the integration BS work. Ember feels like the worst kept secret.
marknadal 2 days ago 3 replies      
We all have our opinions and I would say take this article with a huge grain of salt (as well as my comment). State of the art javascript should still be considered a browser + editor + files. There is no need to overcomplicate things, and unfortunately Webpack and Babel do. It is terrifying to see javascript turn into the new Java - build steps and compile steps and configuration and everything you don't need except for distraction from getting real work done.

Build your app first. Then when you are unhappy, see if these tools make your life easier. But play with them first, don't make upfront commitments.

kenOfYugen 1 day ago 2 replies      
> Avoid CoffeeScript. Most of its better features are now in ES6, a standard.

This argument against CoffeeScript isn't very objective. One of CoffeeScript's best features is the minimalistic and expressive syntax.

"CoffeeScript (#6) appears dramatically more expressive than JavaScript (#51), in fact among the best of all languages."[1]

"CoffeeScript is #1 for consistency, with an IQR spread of only 23 LOC/commit compared to even #4 Clojure at 51 LOC/commit. By the time we've gotten to #8 Groovy, we've dropped to an IQR of 68 LOC/commit. In other words, CoffeeScript is incredibly consistent across domains and developers in its expressiveness."[1]

Using the author's train of thought I could state:"Avoid Bluebird[2]. Most of its better features are now in ES6 promises, a standard."

Yes promises are in the ES6 standard, but that's not the best feature of bluebird. There were and are many promise based libraries, but bluebird was built for unmatched performance. One will use it if performance matters.

Even today it's faster than the native implementations of promises [3].

> Tooling (such as CoffeeLint) is very weak.

Maybe because, as it turns out, in CoffeeScript you don't need a lot of tooling. Why would that be a bad thing?

> Electron is the foundation of the great Atom editor and can be used to make your own applications.

Atom is written in CoffeeScript [4].

[1] http://redmonk.com/dberkholz/2013/03/25/programming-language...

[2] https://github.com/petkaantonov/bluebird

[3] http://programmers.stackexchange.com/questions/278778/why-ar...

[4] https://github.com/atom/atom

STRML 2 days ago 3 replies      
This article echoes my experience this year and last, moving from a Backbone app into React + Fluxxor, and eventually into ES6 with Babel + Flow + Webpack.

It's a huge pain to configure and understand all this tooling, but man is it nice once it's all working. It definitely gives me hope for the web as an application platform.

Re: Flow, it's good but nowhere near as developed as TypeScript, but it's getting better. 0.22 of Flow (released just a week ago) brought massive speed improvements that bring it decently on par with any other linter. I found I could finally re-enable it in Sublime Text after this release. It catches all kinds of things linters won't, and the start-up cost is relatively low. On the other hand, TS has the benefit of years of type files available for almost any library; don't underestimate how great that is.

Using React has been great as well. It's what I wish Backbone views had been from the beginning.

We certainly aren't wanting for choice. Between the dozens of Flux implementations, React alternatives like Mithril, interesting languages like ClojureScript (with Om if you want to keep using React) or Elm, multiple typing systems, and even WASM on the horizon - web development is an exciting field. It's also overwhelming, and I say that as someone who keeps his head in it over 12 hours a day.


Re: CSS, the story still feels incomplete. I want a way to write a library that will effortlessly import its own styles, so consumers can simply import the component and go to town. Most solutions are severely limited, slow, or both, mostly because they rely on inline styles. Nothing's wrong with inline styles, until you want to do :hover, :active, pseudo elements or the like.

See the React material-ui project for a very real example of how this can go wrong - note how every component has a dozen or more extension points for styles. I built a project with this recently and it was intensely frustrating that I couldn't simply add some CSS to the app to fix styles across the board - I needed to keep a giant file of extensions and be sure to apply them every time I used a component. And, of course, component authors can't possibly anticipate every possible use, so some of my selectors were ugly "div > div > ul > li:first-child" monstrosities.

CSJS (https://github.com/rtsao/csjs) is one of the few solutions I like. I would be very happy to see it, or something like it, go mainstream.

tonyle 2 days ago 0 replies      
Mainly used Angular and Node. Someone was telling me about Ember.js; I wondered why I would use Broccoli over Browserify. 6 months later: was comparing Browserify vs Webpack. 1 month later:

Go to terminal

ember new app

ember server

start writing code.

alextgordon 2 days ago 6 replies      
> Unless you're working on a legacy application or have 3rd party libraries that depend on it, there's no reason to include it. That means you need to replace $.ajax.

> I like to keep it simple and just use fetch. It's promise based, it's built into Firefox and Chrome, and it Just Works (tm). For other browsers, you'll need to include a polyfill.

I laughed. Is this satire? There's no reason to have X, therefore you need Y and Z to replace X. It just works.

hokkos 2 days ago 0 replies      
I use all those tools daily, but this article feels patronising to me; I would feel uncomfortable saying that whatever I use is the state of the art. Also, the author is totally ignorant and bigoted about TypeScript.
achairapart 2 days ago 2 replies      
Can we all agree that State of the Art JavaScript in 2016 is a mess?
increment_i 2 days ago 3 replies      
Whenever these kinds of JS commentaries pop up on HN nowadays, I notice two distinct camps of commenters emerge: Those who are obviously steeped in the JS scene and are delighted to argue the finer, more sophisticated points of it, and those who take their best look, step back, and criticize this convoluted house of cards - and of course proceed to be downvoted into the Stone Age as the latest casualty of HN censorship.

My only observation of note I guess is the sheer marvel that all of this is nearly entirely supported by the Open Source community, and the amount of engineering effort being poured into the JS ecosystem at this point in time is nothing short of astounding. I can't wait to see where it goes in the next few years.

Niksko 2 days ago 2 replies      
I'm in the process of learning a JS stack at the moment, and I've come to almost an identical conclusion to this article about the packages and tools to use.

One difference though is that I've read Relay and GraphQL will eventually win out over Redux. Thoughts?

drumttocs8 2 days ago 2 replies      
Meteor isn't even being mentioned anymore in most of these posts, but it really does reduce the need for so much tooling. Plus, it can use React as the view layer.
oever 15 hours ago 0 replies      
To me, the best compromise between using regular JavaScript and having type safety is to use Closure Compiler. With Closure Compiler, you can write a 'header file' which defines the functions of the libraries that you use. Then you can write all your own code with type annotations and even check that you have 100% type coverage (--jscomp_error reportUnknownTypes).

Anywhere in your code where you are using identifiers, there should be a method to check for typos. That holds for css classes and qnames too.
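The annotation style that comment describes looks roughly like this; a sketch only, since real Closure Compiler checking also involves externs files and compiler flags such as the `--jscomp_error reportUnknownTypes` mentioned above:

```typescript
/**
 * Closure-style JSDoc type annotations: the compiler reads these comments
 * and reports type errors at build time. (In real Closure usage the JSDoc
 * alone carries the types over plain JavaScript; the TypeScript annotations
 * below just keep this sketch self-checking.)
 * @param {string} cssClass
 * @param {number} count
 * @return {string}
 */
function repeatClass(cssClass: string, count: number): string {
  return new Array(count).fill(cssClass).join(" ");
}

console.log(repeatClass("active", 3)); // "active active active"
```

With the annotations in place, calling `repeatClass(3, "active")` would be flagged at compile time rather than failing silently at runtime.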

keithwhor 2 days ago 1 reply      
I'm a little bit bummed that the author seems to think there's "No API solution." Nodal [1] was released in early January to a great response, and I'd hope that more people are paying attention. Our most recent announcement was that we're focusing on being API-first [2]. We also have out-of-the-box GraphQL support [3].

[1] http://nodaljs.com/

[2] https://medium.com/@keithwhor/realtime-doesn-t-belong-everyw...

[3] http://graphql.nodaljs.com/

reedlaw 2 days ago 0 replies      
I agree that React, Webpack, et al. are great improvements compared to their predecessors. But do I really want to start an app in 2016 knowing that there may be even better tools in 2017? Most likely I would use these but what ails me is the thought that I might have to learn yet another tool and migrate. Why does the JavaScript ecosystem move so quickly? I get that browser technology is exciting and constantly improving. But why can't we settle on some well designed concepts and keep them well maintained? I hope React is it because I like it but it really depends on Facebook's commitment.
jastanton 2 days ago 1 reply      
Wow, this is nearly our team's exact stack. One difference would be superagent instead of fetch. I really like fetch, especially for react-native, but superagent is just so simple and easy to use.
vmware505 1 day ago 1 reply      
Ember.js has already proved itself the best framework out there nowadays. Corporate ready, production ready, future proof. I'm really looking forward to TypeScript support landing in the framework soon, so it will be a concrete, solid environment. The whole addon ecosystem, with Ember Data and Ember CLI, is just top-notch.

You can check with this tutorial how easy it is to write a complex application with Ember.js: http://yoember.com

calebgilbert 2 days ago 0 replies      
In a development world that is already rife with planned obsolescence, the Javascript world is the shining jewel. Reminds me of trying to construct the killer Yu Gi Oh deck. By the time you get all the cool cards, paying a small fortune for them of course, a new ban list comes out, new decks come out, and you have to start all over again. Is the game improved through all of this? Maybe it is, maybe it isn't, but I would say that this question becomes beside the point for many players...
dkarapetyan 1 day ago 0 replies      
Sigh. Types are very important for the front-end. Have you ever tried to validate the data going to the backend? Well then you're using types except you've implemented your own bug-ridden and gimped version of typechecking that already exists in flow and typescript. So just annotate your code. Both typescript and flow work with mixed code bases where some parts are annotated and some parts are not and there is clear benefit to even a half annotated code base. Typescript even has support for JSX.

Not to mention that any front-end code will invariably have a domain model and interactions that are much better expressed with typescript than vanilla modern javascript. The refactoring and basic validations you get by even a few annotation is just too good to pass up. I no longer write any javascript that doesn't pass through the typescript compiler/typechecker.
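A small sketch of that point: the domain model is stated once as a type, and the compiler does the shape-checking that would otherwise be hand-rolled. The names here are invented for the example:

```typescript
// Annotated domain model: the payload shape is declared once.
interface User {
  id: number;
  email: string;
  nickname?: string; // optional, like much real-world form data
}

// Passing anything that doesn't match User is a compile-time error,
// replacing a chunk of hand-written runtime validation.
function toPayload(u: User): string {
  return JSON.stringify({ id: u.id, email: u.email.toLowerCase() });
}

console.log(toPayload({ id: 7, email: "Foo@Example.com" }));
// {"id":7,"email":"foo@example.com"}
```

This also works in the mixed code bases the comment mentions: un-annotated modules keep compiling, and each added annotation tightens the checks incrementally.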

eranation 2 days ago 0 replies      
> The learning curve is very flat

So if the graph's x axis is time and y axis is the amount of stuff you learn, then a flat learning curve means that you gain very little added knowledge as time goes by. And a steep learning curve means that you learn a lot in a small amount of time.

It has always confused me why it's backwards.

acron0 2 days ago 1 reply      
Not even a mention of ClojureScript? :(
doczoidberg 1 day ago 0 replies      
Do I bet on the wrong horse if I use Angular 2 instead of React+other tools?

I don't get the whole React love. I always learnt not to mix view and business logic; doesn't React do exactly that?

z3t4 2 days ago 2 replies      
I'm making web apps/clients in the browser, servers using node.js and desktop "native" apps using nw.js. All in vanilla JavaScript, and I love it. If you need additional functionality, there's always a module for it (npmjs.com)

The interesting part is maybe that most apps look like this:

 <html><body><canvas id="canvas"></canvas></body></html>
And the rest is JavaScript!

It does seem a bit stupid to load the browser just for the canvas element though, so if someone know a better solution, please post!

kayoone 1 day ago 2 replies      
Because of all this chaos, I heavily lean towards Angular 2 and TypeScript. It's a good common ground for larger apps, imo, and has good best practices.
bbcbasic 1 day ago 1 reply      
I have found using Elm a real joy, albeit for hobby projects. I do hope pure functional languages become state of the art by 2026.
neovive 1 day ago 1 reply      
Just wanted to mention VueJS as an option on the front-end for those looking for a simple JS library with components.
pdog 1 day ago 0 replies      
A summary, for convenience, of the state-of-the-art libraries for front end development in Q1 2016:

- Core: React

- State container: Redux

- Language: ES6 with Babel

- Linting: ESLint

- Style guide (ES6): Airbnb JavaScript Style Guide() {

- Dependency management: NPM, CommonJS, and ES6 modules

- Build tool: Webpack

- Testing: Mocha and Chai

- Utilities: Lodash

- Fetching: Use whatwg-fetch or isomorphic-fetch rather than jQuery

- Styling: Sass, PostCSS, Autoprefixer, CSS Modules

ellius 2 days ago 1 reply      
Can anyone tell me where to start on all of this as a self-taught beginner? I feel like I follow tutorials well and can get basic apps up and running from them. I've even built some basic tools in use by a handful of people at my job with Node and Meteor. But the minute I try to dive into React and some of these more professional tools in the ecosystem, I find myself totally lost.
mhd 1 day ago 0 replies      
JavaScript for medium-size SPAs, that is. For just enhanced content, this still seems overkill, and for true desktop-replacement apps there are still a lot of gaps (unless you ditch everything and settle on ExtJS).
feiss 1 day ago 0 replies      
I feel dumb for not using React. Damn, I even feel I'm living in sin.
kmfrk 2 days ago 4 replies      
Why is eslint considered better than jshint+jscs? jscs also supports auto-fixing, which is pretty dope.
n0us 2 days ago 0 replies      
I would like to add that I've had an excellent experience with the Radium library for styles. Sass was a little better than CSS because you sort of get variables but being able to define styles in JS has fixed pretty much all of the remaining problems.
jgalt212 5 hours ago 0 replies      
What's the current thinking on React? Is it primarily indicated for SPAs? Or should one use it throughout a web app (even for pages that don't qualify as their own SPA)?
choward 2 days ago 1 reply      
No mention of SystemJS/JSPM?
hoodoof 2 days ago 4 replies      
I find lots of modules that won't build with Webpack and a very solid chunk of my available development time is spent trying to diagnose and fix problems caused by modules that won't build under Webpack. I recommend against it.
k__ 1 day ago 0 replies      
I would use something like Cycle.js with LiveScript if I were starting something new.

The whole function-composition approach has a clean feeling, and LS removes a lot of cruft.

cel1ne 1 day ago 0 replies      
Regarding CSS I recommend tachyons [0] over css-modules.

[0] http://tachyons.io/

melvinmt 1 day ago 3 replies      
Is there a boilerplate repo somewhere with all these frameworks and tools combined? Would love to dig more into this but configuring them all together would take me days.
sandra_saltlake 1 day ago 0 replies      
It's not so bad. You just pick the most popular thing or use the recommendations in articles like this one.
ausjke 2 days ago 0 replies      
I tried to like TypeScript but have so far failed; I just don't feel it, though everyone else seems to be applauding it.
raspasov 2 days ago 0 replies      
Here's a short story.

I used to do a little bit of JavaScript. Initially it was easy. Functions! Easy enough. Vars! Cool. I get all of that. I wrote some code, worked. Wrote some more and it started not working as expected. I was curious why. I started reading into it and discovered interesting concepts like "hoisting".

Then a friend of mine told me about Clojure/ClojureScript. I never understood hoisting.

arxpoetica 1 day ago 1 reply      
No mention of WebSockets...
b34r 2 days ago 0 replies      
I wrote an app generator (actively maintained) that actually follows many of these recommendations if anyone is interested: https://www.npmjs.com/package/generator-enigma
ilostmykeys 1 day ago 1 reply      
tomc1985 2 days ago 1 reply      
Who are all these people that need all these tools to write stuff in js? Go back to coder school!

Is this what it means to look for a job in the startup scene in 2016? It seems overwhelmingly likely that you'll be dropped into some unholy, Frankenstein-esque work of "art".

While JS has a few pain points that should be addressed by tooling, I'd draw the line at React, jQuery, and a bespoke, well-kept utility library.

Amazon Echo, home alone with NPR on, got confused and hijacked a thermostat qz.com
392 points by potshot  2 days ago   144 comments top 24
bdhe 2 days ago 3 replies      
This reminds me of one of my favorite quotes from Douglas Adams in the Hitchhiker's Guide to the Galaxy. A man not just ahead of his time, but humorous about it too.

> The machine was rather difficult to operate. For years radios had been operated by means of pressing buttons and turning dials; then as the technology became more sophisticated the controls were made touch-sensitive - you merely had to brush the panels with your fingers; now all you had to do was wave your hand in the general direction of the components and hope. It saved a lot of muscular expenditure of course, but meant that you had to sit infuriatingly still if you wanted to keep listening to the same program.

imglorp 2 days ago 15 replies      
Wow, this is a new DDOS attack vector. Get an ad on broadcast radio saying stuff like "alexa, order more milk", or "okay google, send a text to xxxxx".
eddieroger 2 days ago 2 replies      
What's really great about this is that it's a joke on the future that's been predicted many times already, my favorite instance being the last vignette in Disney's Carousel of Progress. The future family is talking about points in a video game, and the oven hears it and turns the temperature way up, ruining another family Christmas dinner - the joke being that this convenience was finally going to keep Dad from ruining dinner.
userbinator 2 days ago 0 replies      
Somewhat related story: some coworkers and I were talking in a room where someone had a Windows 10 laptop being used to present some data. We were talking as usual when the laptop suddenly decided to open a browser to a Bing search with what looked like a few (badly) voice-recognised words of our conversation. That was a rather awkward moment, given that we were discussing some extremely confidential information, and it wasn't helped by the "did someone say 'Hey Cortana'?" the laptop's owner promptly blurted out. If I remember correctly, none of us said anything that sounded remotely like that phrase, yet it activated.

It's now company policy that built-in microphones have to be disabled, and only external ones are allowed to be used when necessary.

brebla 2 days ago 1 reply      
Am I reading this correctly? Amazon essentially built a better integrated version of "The Clapper" https://www.youtube.com/watch?v=Ny8-G8EoWOw
mmanfrin 2 days ago 5 replies      
I think they need to pick a different name. 'Alexa' is very easy to trigger with other names, and reliably activates when I am watching any show with a character named 'Alex', 'Alexy', etc.

One side effect I've noticed is that they seem to have tried to account for it, which has made the Echo less responsive to actual requests; a few times I've stood in front of it yelling 'ALEXA' trying to get it to stop and it does not respond.

minimaxir 2 days ago 1 reply      
Interestingly, the same thing happened about 2 years ago with the Xbox One: http://www.slate.com/blogs/future_tense/2014/06/13/kinect_vo...
scott_s 2 days ago 2 replies      
This happens to me with Siri and podcasts - I listen to podcasts in my car, through my iPhone. Occasionally what people say will sound close enough to "Hey, Siri" that it stops the podcast and answers whatever question it could extract from the talking that followed what it thought was "Hey, Siri".

It's repeatable, too. One time it happened right as I was parking, on an episode of This American Life. (Or Serial. Or Planet Money. Yeah, yeah, I listen to a lot of NPR shows.) So I kept rewinding back over that part, and it kept triggering Siri.

chucksmash 2 days ago 0 replies      
Sometimes when you try to recognize speech you wreck a nice beach.
tlrobinson 2 days ago 1 reply      
I, for one, am looking forward to the day Alexa, Siri, Cortana, and Google Now can hold full conversations with each other.
mrbill 2 days ago 0 replies      
I had the wake-word on mine set to "Amazon" and then made the mistake of watching an online training video for AWS....

Had to stop it and change the wake word back to "Alexa".

sxates 1 day ago 0 replies      
I had something similar happen watching Battlestar Galactica on my Xbox and Kinect a few years back.

The show went through the opening sequence, then announced "Previously on Battlestar Galactica", at which point the Xbox rewound back to the beginning of the show.

dredmorbius 2 days ago 1 reply      
I see a tremendous future in direct-to-voice-response advertising. Particularly for purchase-capable systems.
zanok 2 days ago 0 replies      
It reminds me of the Toyota radio ad that would place iOS into airplane mode.


beedogs 2 days ago 1 reply      
I guess I must be from the wrong generation, because none of these voice-activated products make any sense to me whatsoever. I really just can't see the point.
joeblau 2 days ago 0 replies      
I had a pretty funny story a few months ago. I was watching San Andreas, and there is one part where Paul Giamatti (Dr. Lawrence Hayes) yells "ALEXI..." and sure enough the Amazon Echo turned on. I had to stop the movie and turn the Echo off because it subsequently tried to process everything the movie was saying after the trigger word.
grogenaut 1 day ago 0 replies      
I was on a PS4 launch title. We seriously considered writing things like "Xbox Off" into the script. Also that "Alexa buy me a motorcycle" commercial supposedly triggers it all the time.
jkot 2 days ago 1 reply      
That is a serious security issue; many apps and webpages have permission to use the speaker.
yorwba 2 days ago 1 reply      
For most voice control applications, trigger words are enough to reliably detect owner intent, but it seems Echo needs a better mechanism. Maybe adding cameras and looking for eye contact would work?
nialv7 2 days ago 1 reply      
I don't understand why anyone would think having a remote control system without any form of encryption or authentication is a good idea.
MikeTLive 2 days ago 0 replies      
Listening to XM radio, they frequently have station identification announcements:

"Siri us xm..."

With the iPhone plugged in to charge while driving to work, hilarity ensues as it cuts out the audio to answer whatever it thinks was asked.

sandra_saltlake 1 day ago 0 replies      
That is a serious security issue
ljk 2 days ago 0 replies      
Wow 30 Rock predicted the future!
Hard Tech is Back samaltman.com
446 points by firloop  2 days ago   283 comments top 48
dlrush 2 days ago 5 replies      
Every company doing hard tech should be considering federal funding for early-stage R&D. The US gov't has consistently provided more early-stage capital than VC for several decades through the SBIR program (about $2.5B each year). Capital through the program is NON-DILUTIVE, Phase 1 grants can be $225K, Phase 2 grants up to $1.5MM, and you likely have government and private-sector customers at the end of Phase 2.

Steve Blank's Innovation Corps (I-Corps) initiative is now working with the NIH, NSF, and Defense to help companies develop product-market fit once basic funding has been achieved as well.

Disclaimer: My company GrantIQ.com helps companies find federal funding opportunities & helps large companies identify breakthrough technologies being developed through these programs.

jackcosgrove 2 days ago 5 replies      
I worked in industrial automation for a few years and most of the software there is terribly outdated. It's definitely ripe for disruption. There are "firm tech" business ideas that involve building better sensing and control software for hard tech industries. Given the low cost of sensors and networking these days it's not prohibitively expensive.

I think making a hard tech product for consumers is still very hard, and those who succeed should be commended. People wanting to dip their toes in the water might consider things like building a factory management portal like Splunk that aggregates sensor data, or something that involves sensing and reporting rather than actuation. Actuation can be dangerous and therefore legally risky.

saosebastiao 2 days ago 4 replies      
Well written and mostly agreeable but I've got a lowbrow dismissal incoming: I disagree that Cruise could be considered hard tech. The hard tech in that space was done by universities competing for DARPA awards, and Google after that. Cruise took inaccessible hard tech and made it more accessible.

And that isn't an insult to Cruise. The problem space is still difficult, and they created more value tackling that than the vast majority of startups do on much easier problems. I happen to think that the opportunities for making the cutting edge more accessible vastly outweigh the opportunities for cutting new edges, and nobody should be discouraged from doing it because it isn't as glamorous as the research on the ground floor.

mhb_eng 2 days ago 2 replies      
As a mechanical engineer, I am excited to see more of a focus on hard tech content here! Unfortunately, there isn't really a HackerNews for EEs or MEs (though if someone wants to build it, I'd be grateful). So it can be tough to find startups working on these sorts of hard problems that are looking for all flavors of engineer (except civil, they stink and nothing they make moves /s). Personally, I work in ocean robotics and defense, which I think is very cool, but mostly handled by large defense contractors and military research departments.
6stringmerc 2 days ago 4 replies      
As an amateur inventor with a serious backlog of "hard tech" type inventions (e.g. actual products, not software or 'platforms'), I've done my best to inquire and reach out to several different potential funding sources, all of which have been extremely quick to pass. The basic reasoning is "this doesn't fit with our preferred verticals", and I'm left to figure out what that meant, other than "we only speak app" as a self-imagined consolation.

This article is not exactly reassuring, because the noted interests - AI, biotech, and energy - are all extremely crowded fields, more than likely competing with large institutions and research organizations, and, let's be honest, probably involving a TON of regulatory hurdles if you plan to do business in the US.

For context, one of my primary inventions is a mobility/utility device that would have residential, commercial, and industrial applications. By design it can be applied to a variety of uses. I've done basic patent research and the pathway looks extremely good, quite tempting for me to just go ahead and file ASAP. It's hard tech. It's not glamorous, but it's a huge market opportunity prima facie.

I've got a co-worker pal who finally got his beverage and branding invention patented, and now he's marketing it to local schools and catalogs; if early indications are correct, he may have a genuinely lucrative future with it. In my view, he's a more likely investor target than anybody out on the West Coast or in a VC room. I base this perspective on personal experience and on how this post sounds promising yet concludes with a short list of massive goals. Might as well wrap it up with "We just want to invest in a better mousetrap", considering the scope. Cottage industry isn't runaway freight-train profit creation, I get that, but I'd also counter that if one wants to get into the next frothy bubble of biotech, then good luck with that.

falcolas 2 days ago 2 replies      
So, was the Cruise acquisition for the talent or the tech? How many other "Hard Tech" startups have had successful exits, when compared to how many have failed? How does this ratio compare to software startups?
minimaxir 2 days ago 8 replies      
I'm all for taking Medium thought pieces on the fundraising environment down a peg, but the problem with Hard Tech is that it requires a nontrivial amount of venture capital. You can't be ramen-profitable when your startup requires purchasing cars for testing or renting giant computing clusters for training AI models.

In relative contrast, it is trivial and risk-free to make a lean MVP for a generic Uber-for-X and send in a YC application.

manx 1 day ago 1 reply      
Hard Problem: Current internet discussion technology does not allow millions of users to participate in a discussion without losing the overview. In other words: current discussion protocols do not scale.

A technology providing that could help make our politics more democratic. It could also lower the need for communication hierarchies like in big companies.

I'm doing research in this area and have built several prototypes. I strongly believe we can achieve the goal, but doing it with only one other researcher on the team makes the process very slow. We are two computer scientists with master's degrees and need a bigger team.

Contact me if you are interested.

amelius 2 days ago 2 replies      
Now if we all just stop focusing on how to make better CSS, we might actually create some new hard tech.
pilingual 2 days ago 0 replies      
Congratulations to Cruise!

Cruise is Kyle Vogt's third acquired company. It's still ambiguous as to whether your first startup should be "hard tech" or not. I think in the case of self-driving cars it makes sense, because automobiles are a trillion-dollar market and the anxiety to stay competitive with the rapidly changing landscape is intense.

Plenty of room in this market yet. What is the biggest competitive advantage and who has it? Data & Tesla. Tesla is the only company with the appropriate data to make self-driving a reality faster than anyone else (tell me when Google starts collecting data in Norway). Data collection equipment is relatively inexpensive. Maybe figure out how to get people to attach sonar or lidar or tap into their car's computer to access that data. What else is a problem, even for Tesla? How about predicting pedestrian behavior and making eye contact -- or identifying if a pedestrian is disabled and sending a signal to those pedestrians that it is OK to walk.

At the end of the day, the market being targeted carries huge weight in the likelihood of a successful outcome.

Incidentally, if you want to get into YC: https://twitter.com/hunterwalk/status/708304772055441408

tclmeelmo 2 days ago 3 replies      
What is meant by "Hard Tech"? Hardware technology, nontrivial problems, or something else?
Animats 2 days ago 1 reply      
But is it compatible with YC? "Hard tech" usually means years of work and millions of dollars up front. YC is oriented towards sub-$100K fundings and a few months to "demo day".
n00b101 2 days ago 0 replies      
Building hard core technology products is very risky and challenging for entrepreneurs. Complicating matters further, recent financial market volatility and sentiments being expressed in "Medium thought pieces about when the stock market is going to crash" are amping up the fear and uncertainty faced by newcomers.

It is therefore extremely encouraging and important to hear this message clearly and directly from Sam, that this is indeed a lucrative area worth pursuing and that there are investors who have the risk appetite to continue to fund such ventures.

p4wnc6 2 days ago 3 replies      
> "A popular criticism of Silicon Valley, usually levied by people not building anything at all themselves, is that no one is working on or funding hard technology."

Wow. Passive aggressive much?

freyr 2 days ago 2 replies      
> A popular criticism of Silicon Valley, usually levied by people not building anything at all themselves, is that no one is working on or funding hard technology.

I'm not critical of Silicon Valley. Silicon Valley includes many companies working on hard problems, or investing money into long-term moonshot programs.

The criticism is levied towards, for example, the social media giants that pull in top engineers to work on social media problems exclusively. The criticism is also directed at Silicon Valley VCs, who lure smart young people to work on semi-trivial problems because it's the quickest path to profit.

It's not necessarily fair criticism. VCs have an incentive, first and foremost, to fund successful businesses. If their surest path to success in today's economy involves building semi-trivial apps, that's what they'll pursue. The same can be said for the finance industry, where the most successful players are employing our nation's top mathematicians and scientists to extract money from public markets using high-frequency trading. We can't expect them to self-regulate. But how can we incentivize smart people to work on something less lucrative?

davidw 2 days ago 1 reply      
It was always kind of frustrating to see the amount of brain power and tech going into the place I worked at in Italy, www.CenterVue.com, compared with typical "web" startups. Those guys have computer vision, mechanical and electronic engineering, and lots of software of various kinds. They're doing pretty well, but not nearly as well as plenty of way more "trivial" startups. I put trivial in quotes because it's never easy, but... you'd like to hope there's some correlation between effort and reward.
te_chris 2 days ago 0 replies      
This is the rationale of EF (said as someone who was in it). They've recognised that IP is VERY valuable again and are capitalising on it in a massive way.
cryoshon 2 days ago 2 replies      
Here's a several billion dollar hard tech idea that would benefit many, many, many people: cheap/fast/efficient water desalination and purification. It's a hard problem. Current solutions aren't cutting it.

Unlock water from the oceans, and a lot of people can be helped. I'm ready to start this company, just need a few dozen engineers/chemists and a boatload of funding.

emcq 2 days ago 0 replies      
This acquisition fits exactly with GM's plans, outlined by its $500 million investment in Lyft, to create self-driving car infrastructure [0].

It seems like companies who build products that have clear and significant business value with excellent market timing are successful. Perhaps you can extrapolate to saying hard tech is now a hot field but this is the first successful exit that comes to mind recently, rather than private investors putting more money on the table.

[0] http://www.reuters.com/article/us-gm-lyft-investment-idUSKBN...

tgb29 2 days ago 0 replies      
Good post. Congrats to Cruise for the acquisition and to YC for generating a return on their investment. There are definitely opportunities in AI - narrow applications using machine learning - and biotechnology right now.
mathattack 2 days ago 0 replies      
My impression on Appification vs Hard Tech is the length of the development cycle.

The combination of a few trends (Predominately smart phones, social networks & data analytics) enabled a whole new wave of technical possibilities. It was relatively quick to monetize this via App development, so that was the first wave to take advantage. Hard Tech can take longer (though from Cruise's point of view, not always that much longer) so these companies are only coming to fruition now.

When I look at AlphaGo's success, I think we are at a new dawn of amazing things from Hard Tech.

conanbatt 2 days ago 0 replies      
I remember meeting one of the founders 2 years ago in a party and congratulating him for doing a startup with truly innovative goals.

Amazing to believe 2 years later they got bought for a billion dollars.

samstave 2 days ago 1 reply      
If I don't know anything about developing AI specifically, but have an idea of how AI can help an industry, how could I go about giving that hard idea merit/velocity/validity?
tonylemesmer 2 days ago 1 reply      
Hardware is difficult. Investors are wary of difficult problems. They will invest if the problem is difficult and they think they are speaking to a team of people who can deliver the technology to market no matter what. Lots of good inventors are terrible implementers.

If the invention is good but the team seems incapable that would be a reason for non-investment.

Also - "hard tech" seems like the wrong thing to be aiming for. There are lots of hard problems to solve with simple tech and disrupting markets. Are these not valuable targets?

DrNuke 2 days ago 0 replies      
It seems to me that nuclear (Gen IV), space (propulsion, moon bases) and AI (virtual prototyping) may speed up innovation in a combined manner. Elon Musk and Sam Altman in the US are at the top of the game here and may counter stiff competition from China. On the other hand, my team of middle-aged nuclear engineers & related network from Italy can't do that much but would gladly help, if needed.
rdlecler1 1 day ago 1 reply      
I'm the founder of AgFunder. We've worked with a bunch of agtech/hard tech companies, and it's been very, very frustrating taking them to VCs because they don't fit in their box. We had one company called SWIIM, which we called the Airbnb for water: it was creating a market for water and in doing so helping to fix a major issue of properly valuing water - enormous market, great unit economics, but there was hardware and deployment involved, and it didn't look anything like what VCs normally invest in. Another company we have on now, Autonomous Tractor Company (ATC), is basically Cruise Automation for tractors... There are 500,000 used tractors out there that they can turn into true driverless tractors (not just GPS assist). VCs can't get their heads around the fact that they don't actually have to build tractors and that this is actually very capital efficient. Kudos to YC, which gives great signalling to many misunderstood companies.
kornork 2 days ago 1 reply      
Is there a list somewhere of these kind of companies?
nxzero 2 days ago 1 reply      
Congrats on the exit, but the idea that GM will be around for the next wave of innovation is what it is. To me, hard tech is about building the future, not finding an off-ramp. In fact, this may be one of the many factors killing real change: the desire to make for the hills when a giant offers to take you.
tim333 1 day ago 1 reply      
>From the Summer 2014 batch, 3 of the 4 companies who have raised the most money since graduating YC are hard tech companies.

So I guess that's

Cruise $18.8m

Ginkgo Bioworks $54.12M (engineers new organisms) and

Helion Energy $12.11M (fusion) ?

ecesena 1 day ago 0 replies      
Can someone expand on the definition of hard tech? With possibly examples of what is or isn't considered hard tech?
lowglow 2 days ago 0 replies      
We're working on "Hard Tech" over at Playa. If you're interested in AI/Cybernetics/ML/Ambient Intelligence come check out the platform we're building out. Http://getplaya.com/
kposehn 2 days ago 2 replies      
> Leave the Medium thought pieces about when the stock market is going to crash and the effect it's going to have on the fundraising environment to other people - it's boring, and history will forget those people anyway.

Blunt. True, but blunt.

Hats off to Sam.

Hydraulix989 2 days ago 4 replies      
So yeah, Sam is trying to recruit people into starting these kinds of companies.

Hard tech has always been "back" (when did it go away, Elon Musk?), but it is quite the pivot from the YC of yesteryear that capitalized on things like better UX for AJAX calendars, web hosting, and drag-and-drop file storage.

The really hard tech though really evokes that initial visceral reaction of "too risky" for investors, especially if there's no real indicators in the form of traction or an MVP ("well just wait a sec there, professor, the problem is hard, so we haven't solved it yet").

As an investor, I certainly wouldn't feel comfortable shoveling stacks over to some guys who told me they were going to build true AI with a decade-long outlook. Yikes!

It's also a tougher proposition for founders. You're basically betting 5-10 years of your life on a problem that you don't even know you can solve with no revenue/exit strategy in sight. Meanwhile, that guaranteed salary at Google sure is looking more and more appealing.

I would almost recommend graduate school for these types of people looking to leave their mark on the world in solving a really hard problem where any real contribution only inches the world closer to solving it.

The struggle is real for all actors here.

For founders, the trick is finding that sweet spot where a problem sounds hard on paper (such as self-driving cars or VR headsets, wooo!), but actually is feasible using current technology (e.g. stick some lenses in a piece of cardboard), but due to timing or market forces or whatever, nobody is currently paying attention to it yet.

Then at least, you can execute just like an AJAX calendar app would and obtain the same outcome (to vastly understate the challenges involved!).

tryitnow 2 days ago 1 reply      
What does he mean by "hard tech"?

And would most engineers consider Cruise really hard tech? I don't know (I mean that honestly, not passive-aggressively).

qaq 2 days ago 1 reply      
Any pointers on who might be a good VC outfit for SaaS type startup that actually needs to run their own clusters as opposed to throwing everything onto AWS?
jjtheblunt 2 days ago 0 replies      
It comes across as a grotesque arrogance to assert, with no facts cited, that people expressing opinions are usually not building anything themselves. Citations if you please.
david927 2 days ago 4 replies      
> A popular criticism of Silicon Valley, usually levied by people not building anything at all themselves, is that no one is working on or funding hard technology.

Don't pretend that Silicon Valley is not superficial in many regards. Yes, some of those silly, fluff apps have gone on to make money and that's your metric so you defend it.

But for many people, Silicon Valley embodies a different principle, like the one found at Xerox PARC, of people trying to make the future better and not just drive a nicer car.

tardo99 2 days ago 0 replies      
Really, really bad deal for GM.
untilHellbanned 1 day ago 0 replies      
Cruise is an N=1; it's ironic to make this proclamation from a single data point. And Cruise's founder had a massively successful exit history. I wouldn't say this validates YC's recent bets on hard tech. People who do hard tech for a living (myself included) are more measured.

Sama sounds like Donald Trump with the "I told you so". Unlike real estate, science can't be bullied to success. This makes me increasingly bearish on YC's future.

al2o3cr 2 days ago 1 reply      
"Leave the Medium thought pieces about when the stock market is going to crash and the effect it's going to have on the fundraising environment to other people - it's boring, and history will forget those people anyway."

Sam channels Baghdad Bob for a minute there - "there are no American tanks in Baghdad. Especially not the one that just rolled by on-camera"

balls187 2 days ago 2 replies      
> A popular criticism of Silicon Valley, usually levied by people not building anything at all themselves, is that no one is working on or funding hard technology.

Lost all interest in saltman's point when I read this line.

It's a valid criticism, regardless if the people who are making it are "builders" or not.

It was a turn off, because it makes the author seem bitter about (what I think is) a valid criticism.

gist 2 days ago 4 replies      
> A popular criticism of Silicon Valley, usually levied by people not building anything at all themselves

Why can or should criticism only come from [1] people that are building things themselves? What about the press (as only one example) or people who write books? What about people teaching in colleges? What about people leaving comments on HN or any other forum?

Criticism only valid if coming from someone building something themselves? Don't agree with that. [2]

[1] Which to me by the choice of words is what is meant by this statement.

[2] If that is the case companies should never solicit any feedback from their customers about their product or their business model.

profeta 2 days ago 0 replies      
Press release at the top of the front page?

I thought Y Combinator self-posts were not "votable".

This is hardly hard tech, since the same thing is being done in a garage with off-the-shelf parts: http://www.bloomberg.com/features/2015-george-hotz-self-driv...

FussyZeus 2 days ago 1 reply      
> A popular criticism of Silicon Valley, usually levied by people not building anything at all themselves,

I have to take issue here. I don't need to be a "builder" to see that what a given startup is building is either worthless or an attempt at solving a non-problem.

I'm not a helicopter pilot, but if I see a helicopter stuck in a tree, I don't need to be one to know that the guy screwed up.

vegabook 2 days ago 0 replies      
Sounds like a desperate scramble to rebalance the portfolio which, reputationally, is far too long of last decade's, now-stalling, strategies. Problem is the brand is not 'hard tech', it's irredeemably soft tech. The brand is consumer biz disruption and social. YC is a short (if only it were listed) just like the rest of the web stack, for reasons of acute oversupply. It is woefully ill positioned on hard technologies. Kudos for trying, though! Heroic effort.
MicroBerto 2 days ago 0 replies      
More technology for sending dick pics?

Jeez, can we please move on and do something that actually benefits this world with our talents?

msoad 2 days ago 0 replies      
ZOOX[1] is another self driving car startup that is up for acquisition.

[1] http://zoox.co/

graycat 2 days ago 2 replies      
"Hard tech"? Okay. I have such a project.

Big ass problem: Found one, where the first good solution is a must-have for nearly everyone in the world with Internet access.

"Hard tech" solution: A bunch of applied math, with advanced prerequisites, some original, for a unique, really good, fun to use, interactive and addictive, by far the best in the world solution -- Did that.

"Hard"? Silicon Valley has more hen's teeth than entrepreneurs who could understand the theorems and proofs of my math even if I explained it to them. Why? They didn't take the prerequisite pure/applied math courses in grad school. Neither did more than a tiny fraction of computer science profs.

Code: 80,000 lines of typing, running, in alpha test -- did that.

"We hope to hear from you."

You did, and you ignored it.

Using my HN UID, look up my submission in your records. If you are interested now, then let me know.

Please scan my towel jerrygamblin.com
405 points by jgrahamc  2 days ago   103 comments top 18
roel_v 2 days ago 15 replies      
RFID is one of those things that seem cool until you start playing with it, and then it turns out that the only uses for it are really boring. Like, you can't use it to locate stuff because the range is too short; you can't rely on stuff (food, or clothes, or whatever) to have chips, and it's a major time sink if you have to start tagging stuff yourself; it's cumbersome to install readers everywhere; and you can't rely on half-lives measured in decades instead of months, etc.

I used to have an rfid chip implanted in my hand. I had all these ideas of what I was going to do with it - log on to my computer by putting an rfid reader in my keyboard, build a magic 'touch the wall and music starts playing' thing etc. All of them turned out to be very boring and useless in practice. Logging into my machine - meh, turns out that it takes longer for monitors to come back on than to type the password. Building (useful) user interfaces based on rfid is very hard and doesn't add anything over a regular proximity sensor.

The only idea I have left is that in my current house, I have a place where I could put an RFID reader so that if I got a chip implanted in my gluteus maximus (my butt, essentially) I could activate an automatic door opener by bumping into it when I have my hands full. I just can't motivate myself to set this up only to discover, once again, that it's a dud in real life.

jgrahamc 2 days ago 1 reply      
I posted this in part because I love the idea and because today is the anniversary of the birth of Douglas Adams.

Edit: said death meant birth.

helper 2 days ago 3 replies      
This story doesn't quite make sense. I believe that his RSA pass was clonable and that his towel also had an RFID chip embedded in it. What I find hard to believe is that the towel had a writable RFID tag.

My main experience with RFID is cloning tags onto T5557 chips, and I don't think I've ever come across a writable tag in the wild. It doesn't seem to make economic sense to spend an extra penny or two on every towel to put in a tag you are never going to change.

Animats 2 days ago 1 reply      
OK, the RSA conference failed here. They're supposed to be a security conference, yet they didn't use an RFID tag that's challenge/encrypt/response, so you can't clone it by passive listening. RSA itself used to make such things.

That tag is about the right level of security for towel inventory. The big win in this is managing outsourced laundry costs. Knowing how many items went to the laundry, and how many came back, rather than just counting linen carts, matters a lot. ABS Laundry Solutions overview (with ominous music) [1]

[1] http://www.abslbs.com/
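The reconciliation Animats describes is simple to sketch: with per-item tag reads you can diff outgoing and incoming carts instead of just counting them. A minimal illustration in Python (the tag IDs and item types here are made up for the example):

```python
from collections import Counter

def reconcile(sent_tags, returned_tags):
    """Compare RFID tag reads on outgoing vs. incoming laundry carts.

    sent_tags: list of (tag_id, item_type) read on the way out.
    returned_tags: iterable of tag_ids scanned on the way back in.
    Returns a Counter of items that went out but never came back,
    keyed by item type.
    """
    missing = Counter()
    returned = set(returned_tags)
    for tag_id, item_type in sent_tags:
        if tag_id not in returned:
            missing[item_type] += 1
    return missing

# Hypothetical reads: four items out, three scanned back in.
sent = [("a1", "towel"), ("a2", "towel"), ("b1", "sheet"), ("b2", "sheet")]
returned = ["a1", "b1", "b2"]

print(reconcile(sent, returned))  # Counter({'towel': 1})
```

The point is that the per-item identity, not the tag's security, is what makes outsourced-laundry accounting work.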

anarchitect 2 days ago 1 reply      
Reminds me of how someone embedded the chip from their Oyster card (London travel card) in a magic wand :)
lucb1e 2 days ago 2 replies      
Wait, towels have RFID tags?! Never heard of that before.
gargravarr 2 days ago 0 replies      
There's a frood who really knows where his towel is.
bsenftner 2 days ago 0 replies      
I was a CTO strategy consultant for an RFID MIST startup back in 2008 - that is, RFID chips on documents, ID badges, and any item of value within an organization. The MIST is a series of wireless sensors and a network placed throughout the organization, enabling 3D location of any RFID-marked object within range of the MIST network. The system could track the who, when, and where of any person or item of interest simply by following the RFID through the series of sensors. However, there was zero plan for securing the RFID signals or the MIST network itself; they were planning on security by obscurity. I made more and more noise that security was their entire mission, and that by not having any themselves they were setting themselves up for massive hacking. Thankfully, their main inventor was more interested in surfing than working, and their investor got deported for tax fraud.
marvel_boy 2 days ago 2 replies      
Interesting. Where can I find more information about reading RFID chips?
RoseO 1 day ago 0 replies      
Disney has a pretty neat use for RFID that was featured in a previous HN post. https://news.ycombinator.com/item?id=9177105
HillaryBriss 2 days ago 0 replies      
This article succeeds with just a few words and well chosen pictures.
spants 2 days ago 0 replies      
The Proxmark is a great tool... there is a nice forum here: http://proxmark.org/forum
rtpg 2 days ago 0 replies      
RFID is definitely one of the cooler things that exist on the practical/futuristic feeling matrix. Really wish I had some need for one
collyw 2 days ago 1 reply      
Could this be done using a phone with NFC? That's pretty much RFID tech, isn't it?
bpicolo 2 days ago 0 replies      
I'm kind of wondering what sort of towel hacks can be done now to truly create a hitchhiker's kind of towel
poppingtonic 2 days ago 0 replies      
Oh, what a hoopy frood.
apaz 2 days ago 0 replies      
Is proxmark3 still the device to get?
Dells Skylake XPS 13, Precision workstations now come with Ubuntu preinstalled arstechnica.com
334 points by bpierre  1 day ago   225 comments top 42
ThePhysicist 1 day ago 11 replies      
I think it's really impressive what Dell has put together here. As my old Thinkpad T430 is nearing its fourth anniversary I have been looking for an upgrade for a while and compared different options with a focus on lightweight, powerful laptops with good Linux support. And so far the XPS 13 seems way more attractive than the new Lenovo Skylake laptops (e.g. the 460s), which have lower resolution displays (some models still start with a 1366 x 768 (!) display, which is just ridiculous in 2016), less and slower RAM, smaller hard drives and - as far as I can tell from the specs - less battery life compared to the XPS 13, but are actually $300 - 400 more expensive, even when choosing the three-year guarantee for the XPS. The only thing I don't like about the XPS is Dell's guarantee, which is "send in", meaning that I probably won't see my laptop for a few weeks if it has to be repaired, whereas Lenovo will send a service technician to me who will usually be able to repair the laptop immediately (I have already had to make use of this service twice, once to exchange a noisy fan and once to replace a broken display bezel).

I guess I'll wait for Apple to reveal the new MB Pro line before making a decision, but it seems that for the first time in 10 years my next laptop will not be a Lenovo/IBM.

jaseemabid 1 day ago 2 replies      
> They come with Windows by default, but you can pick Ubuntu instead and shave about $100 off the price.

How awesome!

jpalomaki 1 day ago 2 replies      
I was surprised to notice that the XPS 15 comes also with quad core CPUs and supports 32GB max memory. Interesting option for those looking for desktop level performance in reasonably sized package.
boothead 1 day ago 2 replies      
Any info on when this (or the XPS 15 with linux) will be available in the UK? I just had a look on Dell's website, and as expected it's still a shower of shit WRT finding what you want.

I bought one of the first or 2nd gen XPS 13s and loved it. However the experience of buying from Dell was awful and customer service was so intractable as to be useless too.

latch 1 day ago 2 replies      
I've oft wondered if these would sell better without the Dell branding. Put a nondescript logo on the back (no word), remove all "Dell".

This really annoyed me years ago when I spent a small fortune on a beautiful TV that had "COMPANY" in white letters on the otherwise perfect dark bezel.

siscia 1 day ago 9 replies      
I am a little scared by the touch screen, I have never had one and I don't think I need it...

Anyway, the extra complexity that comes with it doesn't make me comfortable...

Any experiences so far ?

kylec 1 day ago 1 reply      
Apple better hurry up with Skylake MacBooks, these look very tempting.
tholford 1 day ago 1 reply      
Bought a used first or second gen Developer Edition XPS13 last year, installed Mint 17.2 on it and have been very pleasantly surprised. Pretty much just as functional as my old MBP for half the price :)
forgotpwtomain 1 day ago 1 reply      
I have a new XPS 15 running the 4.4 Kernel - Skylake is very buggy as is the broadcom wireless firmware.

Also slight physical tremors can cause complete system crashes. I would stay away from it.

davidw 1 day ago 3 replies      
I've been very happy with the various XPS 13 systems. This one looks even better. Most likely my next computer.
cttet 1 hour ago 0 replies      
The keyboard though.
Mikeb85 1 day ago 0 replies      
It's nice to see Dell beginning to actually adopt Linux and Ubuntu. I always kind of figured part of the strategy of going private was to be able to move away from the status quo of being just another Windows vendor... By offering choice and eliminating lock-in, they can go after techie types and serious users who otherwise would have probably just bought a ThinkPad or MBP.
nickpsecurity 1 day ago 0 replies      
History repeats: the Dell Inspiron that just crapped out on me (somewhat) after years of use was their first Linux model. It also had Ubuntu by default. Great laptop. Interestingly enough, after all the updates, I'm having trouble finding something that works out of the box that's not Ubuntu. It's running Fedora fine right now, but software management is totally different from my Debian-based experience. Might ditch it. ;)
giovannibajo1 1 day ago 1 reply      
The previous models didn't support DFS on 5GHz wifi, making them unable to work in high-density wifi environments. Actually, what's worse is that they randomly lose connection on a DFS AP (when the channel hops to one of the DFS-reserved ones, which they can't access). So you basically have to force them onto 2.4GHz or disable DFS on the APs.

This applied to both the Broadcom and Intel wifi. Any chance these models are better in this regard?

mistat 1 day ago 0 replies      
Are these available in Australia yet? I can only ever find reference to the US store
davidy123 1 day ago 0 replies      
This is great, but Thinkpads have always had good Linux support. I have a friend who bought the previous XPS 13 Ubuntu edition and it had all kinds of problems which are only being worked out now, problems that aren't present on most Thinkpads.

I got the X1 Yoga one month after it came out, installed an alpha version of Ubuntu 16.04 on it and everything just works, including the touch screen.

manaskarekar 1 day ago 2 replies      
I've had great luck with Dells for Linux support. Lubuntu on Latitudes has run flawlessly over the years.

It is unfortunate that Dell chose to use small arrow keys and at the same time overload the arrow keys with the 'Home-End-PgUp-PgDown'. Hard to believe this layout was chosen for their Latitude and Precision lines too.

Ruud-v-A 1 day ago 0 replies      
I've been running Arch Linux on the non-developer edition XPS 15, and I've experienced very few problems. Occasionally the touchpad does not work, and sometimes headphone audio is silent. Other than that, everything works like a charm, even the Broadcom WiFi adapter.
_RPM 1 day ago 0 replies      
I seriously can't tell if this article being here is an advertisement. Is it possible the site owners have been paid to have this post here?
otar 1 day ago 2 replies      
One tiny detail that bothers me is that there's a Windows logo on the keyboard. It could be a Tux or Ubuntu logo instead.

A Tux penguin sticker solved the problem on my XPS 13, but it would be nice to see it come that way out of the box.

pascalo 1 day ago 1 reply      
For all the people dealing with shitty Dell customer support on the phone, try using their @dellcares Twitter account. I had broken screen glass and later a faulty fan on my 2014 XPS 13, and they sent around a technician each time, all via Twitter. Much less painful than hanging on the phone. Excellent customer support.
dblooman 1 day ago 0 replies      
Is there a 13 or 15 inch laptop without a number pad that supports Ubuntu for less than 500 that uses a Core i5 Skylake CPU?
yitchelle 1 day ago 1 reply      
Can anyone share their experience when compared to Macbook Air?
Timshel 1 day ago 2 replies      
Looked really good until they had to botch something: let's put HDMI 1.4 and no DisplayPort, it's not like we're selling a 4K screen ...
moonbug 1 day ago 1 reply      
At the other end of the spectrum, the 15" Inspiron 3552 that comes with Ubuntu, which I'm typing this on, is quite the best 200 dollar laptop you can get.
ciokan 1 day ago 1 reply      
Just got the XPS 15 (9550) yesterday, which had Windows on it. Installed Ubuntu 16.04 beta and it works very well. I had huge problems trying to install any older version of Ubuntu & variants.
nivertech 1 day ago 0 replies      
I'm waiting for the Lenovo Thinkpad X1 Carbon 4th gen with a Skylake CPU. Anybody know if it's already available?
jkot 1 day ago 3 replies      
Price is not that great. 16GB RAM version is more expensive than Windows edition at my local shop (Prague). At least it has Intel wifi.
rcthompson 1 day ago 0 replies      
The placement of the webcam in the lower left corner is truly idiotic.
natch 1 day ago 2 replies      
Does this have a magsafe-type connector for the power cord?

I did look for it in the review but maybe I missed it.

modzu 1 day ago 0 replies      
looks great on paper, but the 2015 xps13 had some serious issues like useless webcam and trackpad...

it's the laptop that flipped me to mac. wont go back.

intrasight 1 day ago 0 replies      
Is it less expensive than one with Windows?
Vlaix 1 day ago 0 replies      
Now make a laptop with a keyboard and touchpad that justifies me stopping frankensteining my old machines to keep them alive.

That chiclet keyboard and phone-sized pad nonsense is very limiting.

jgalt212 13 hours ago 0 replies      
I have a new (purchased Dec '15) XPS 15. And despite having dual-booted about 10-15 different PCs and laptops (mostly Dell and HP), I have thus far had zero success getting Ubuntu on my new box. I suspect it has to do with the two internal hard drives, but I've sort of given up at this point (I bricked the first box, and Dell sent me a new one) and relegated this otherwise very nice laptop to the accounting department to run Quickbooks and Office.
Raed667 1 day ago 0 replies      
The only thing holding me back is the CPU.
bliti 1 day ago 0 replies      
How much does this thing cost?
arca_vorago 1 day ago 0 replies      
I have ordered a few midline desktops from Dell to test their Ubuntu setup. In the end I wiped them and installed my own, and the EULA that pops up on first boot was fucking ridiculous. I mean, I know they like to push the boundaries for self-protection, and I understand things like wanting to keep any issues in their jurisdiction, but clauses in the EULA stated you waived all rights including constitutional ones (yes, the word constitutional was used in the actual EULA), agreed to forfeit any trial by jury or any other legal procedure except private arbitration in Dell's jurisdiction, and some other stuff that really bothered me to see as the first thing that popped up on first boot.

Lots of it is obviously totally unenforceable and wouldn't stand in court, but they put it in there anyway just because they can get away with it.

Does no one do reasonable EULAs/ToS?

tunichtgut 1 day ago 1 reply      
Hell, it's about time!
tiatia 1 day ago 0 replies      
I use an XPS 13 with Kubuntu.

I have no experience with preinstalled Linux, but similar to Android, I would be afraid of preinstalled crapware. Just remove Windows and do a clean install.

andreaso 1 day ago 4 replies      
Does it really matter that much what distro it ships with? As long as the laptop ships with some distro preinstalled, the hardware tends to be properly supported by the Linux kernel, allowing you to feel safe about installing any other (up-to-date) distro.
akerro 1 day ago 0 replies      
Why is this news? I've bought two laptops before that came with Linux, one Asus and one Dell.
xkiwi 1 day ago 2 replies      
If anyone needs or has to install Windows 7 on Dell brand laptops for any reason, I highly recommend you wait until you confirm it can be done.

I have the Dell XPS/Precision 11 and 13; the problem is that Windows 7 has difficulty booting from UEFI, and you will get stuck because AHCI is not supported by these Dells' BIOS.

Brendan Eich: WebAssembly is a game-changer infoworld.com
315 points by alex_hirner  3 days ago   301 comments top 32
jacquesm 3 days ago 25 replies      
Personally, I think this is terrible (and it really is a game-changer, only not the kind that I'd be happy about). The further we get away from the web as a content delivery vehicle, and the more it becomes a delivery mechanism for executables that only run as long as you are on a page, the more we will lose the things that made the web absolutely unique. For once the content was the important part, the reader was in control, universal accessibility was on the horizon, and the peer-to-peer nature of the internet had half a chance of making the web a permanent read/write medium.

It looks very much as if we're going to lose all of that to vertical data silos that will ship you half-an-app that you can't use without the associated service. We'll never really know what we lost.

It's sad that we don't seem to be able to have the one without losing the other, theoretically it should be possible to do that but for some reason the trend is definitely in the direction of a permanent eradication of the 'simple' web where pages rather than programs were the norm.

Feel free to call me a digital Luddite, I just don't think this is what we had in mind when we heralded the birth of the www.

kibwen 3 days ago 0 replies      
On the Rust side, we're working on integrating Emscripten support into the compiler so that we're ready for WebAssembly right out of the gate. Given that the initial release of WebAssembly won't support managed languages, Rust is one of the few languages that is capable of competing with C/C++ in this specific space for the near future. And of course it helps that WebAssembly, Emscripten, and Rust all have strong cross-pollination through Mozilla. :)

If anyone would like to get involved with helping us prepare, please see https://internals.rust-lang.org/t/need-help-with-emscripten-...

EDIT: See also asajeffrey's wasm repo for Rust-native WebAssembly support that will hopefully land in Servo someday: https://github.com/asajeffrey/wasm

s3th 3 days ago 1 reply      
As we get closer to having a WebAssembly demo ready in multiple browsers, the group has added a small little website on GitHub [0] that should provide a better overview of the project than browsing the disparate repos (design, spec, etc.).

Since the last time WebAssembly hit HN, we've made a lot of progress designing the binary encoding [1] for WebAssembly.

(Disclaimer: I'm on the V8 team.)

[0]: http://webassembly.github.io/
[1]: https://github.com/WebAssembly/design/blob/master/BinaryEnco...

machuidel 3 days ago 2 replies      
Since I started hearing about WebAssembly I cannot stop thinking about the possibilities. For example: NPM compiling C-dependencies together with ECMAScript/JavaScript into a single WebAssembly package that can then run inside the browser.

For people thinking this will close the web even more because the source will not be "human-readable": remember that today's JavaScript already gets minified, and is already a compilation target (via Emscripten), as well. The benefits I see compared to what we have now:

- Better sharing of code between different applications (desktop, mobile apps, server, web etc.)

- People can finally choose their own favorite language for web-development.

- Closer to the way it will be executed which will improve performance.

- Code compiled from different languages can work / link together.

Then for the UI part there are those common languages / vocabularies we can use to communicate with us humans: HTML, SVG, CSS etc.

I only hope this will improve the "running same code on client or server to render user-interface" situation as well.

rl3 3 days ago 1 reply      
Considering how critical SharedArrayBuffer is for achieving parallelism in WebAssembly, I'm hoping we see major browsers clean up their Worker API implementations, or even just comply with spec in the first place.

Right now things are a mess in Web Worker land, and have been for quite some time.

eggy 2 days ago 0 replies      
I still think there is a lot of room for static pages with links in the style that people seem to be prematurely waxing melancholy about when forecasting where WebAssembly _may_ lead the internet. I was always able to find sites of interest that didn't include Flash, Java applets, and company when I just wanted to read something. I find some of the scroll-hijacking and other javascript goodies on modern pages to either be a distraction, or non-functional on some devices. On the other hand, I am particularly happy about, and working with, Pollen in Racket, a creation by Matthew Butterick. Pollen is a language created with Racket for making digital books, books as code, and bringing some long-needed, real-world publishing aesthetics back to the web [1,2]. I may even buy a font of his to get going and support him at the same time!

[1] http://docs.racket-lang.org/pollen/
[2] http://practical.typography.com

stevenh 3 days ago 0 replies      
If anyone at infoworld.com reads these comments:

On the top of the page, there is a horizontal menu containing "App Dev Cloud Data Center Mobile ..."

When I position my cursor above this menu and then use the scroll wheel to begin scrolling down the page, once this menu becomes aligned with my cursor, the page immediately stops scrolling and the scroll wheel functionality is hijacked and used to scroll this menu horizontally instead.

It took a few seconds to realize what was happening. At first I thought the browser was lagging - why else would scrolling ever abruptly stop like that?

I closed the page without reading a single word.

petercooper 3 days ago 1 reply      
If you want to see Brendan's keynote from O'Reilly Fluent yesterday, a sample went up https://www.youtube.com/watch?v=9UYoKyuFXrM with the full one at https://www.oreilly.com/ideas/brendan-eich-javascript-fluent...
Executor 12 hours ago 0 replies      
I'm conflicted. On one hand I support open data/raw documents. But this prevents native-like, real-time applications. It also forces developers to work in JavaScript, which is a terrible language.

On the other hand we have lock-in ecosystems, closed silos, that are detrimental to the commons.

The only consolation I have is that if WebAssembly provides a bytecode instead of machine code then we still have the ability to perform reverse engineering.

In the end, we ALL have to do the hard task of informing every single person why Apple/FB/MS/Google are harmful to us and why we should boycott their programs/services.

no1youknowz 3 days ago 1 reply      
I think the web may split into two.

1) 'Simple' web pages will stick with jquery, react, angular, etc type code. Where you can still click view source and see what's going on. Where libs are pulled from CDNs etc.

2) 'Complex' SaaS web apps, where you need native functionality. This will be a huge bonus. I'm in this space. I would love to see my own application as a native app. The UI wins alone make it worth it!

gsmethells 3 days ago 1 reply      
To me, it's more about choice of programming language than performance. Though the latter is very important, I think the former is what will open up doors to making the browser a platform of choice (pun intended). Currently, it feels like JavaScript is the Comcast of the web. Everyone uses it, but that's only because there aren't any other options available to them.
hutzlibu 2 days ago 0 replies      
Sorry, but most of the discussion here is completely missing the point about WebAssembly.

It is just a technology to make things delivered through the web faster. And it is open. And no less secure than JS. So I think it's great.

Good technology does exactly what its creator wants. And if people don't like some of the things that get created with it, then that is not a problem of the technology itself.

So people can do good things or bad things with it. But on the web, we have the freedom to choose where we go.

And if we don't like ads, for example, we should be aware that website creators still want money for their work, so maybe we should focus on and support a different funding model. I like the pay-what-you-want or donation model the most; Wikipedia shows that this is possible at a large scale ...

mwilkison 3 days ago 1 reply      
Video of the talk?

EDIT: Here is the full-length one - https://www.oreilly.com/ideas/brendan-eich-javascript-fluent...

vruiz 3 days ago 4 replies      
I want to agree with him, I'd like to see a future where WebAssembly closes the gap between native apps and the web. For better or worse browsers are the new OSes, and I dream of a future where all vendors come up with the equivalent of a POSIX standard, where any web application can access all (or a wide common subset) of any device's capabilities, from the filesystem to native UI elements.
icedchai 3 days ago 1 reply      
WebAssembly... Wow, if we keep going, we'll re-invent what Sun achieved 20 years ago with Java. If only they hadn't f-ed it up...
nadam 2 days ago 0 replies      
A question to WebAssembly experts: How easy is it to use WebAssembly as a sandboxed embedded scripting mechanism in my own native (C++) application? I am writing a native real-time system (a distributed 3D engine for VR) in which I send scripts over the wire between machines, and I need to call an update() method of these sent scripts like 90 times a frame. I need complete sandboxing, because my trust model is that what is trusted on machine A may be absolutely untrusted on machine B: not only not letting the scripts call any functions other than what I explicitly allow, but I need a hard limit on their memory usage and execution time as well. Preferably they should execute in-process, so they can only reach memory I let them and be called from the thread I want. Currently I go with Lua, but to get really good performance I will need to research this topic more deeply later.
talles 3 days ago 1 reply      
Are those boxes in the picture Firefox OS phones?

Is this an old picture?

n00b101 3 days ago 1 reply      
What is the upgrade path for Emscripten users? I understand that LLVM will have WebAssembly backend, but how will OpenGL to WebGL translation work, for example?
bcoates 3 days ago 1 reply      
If you think WebAssembly (or asm.js) is a good idea, I would very much like you to do the thought experiment of what design decisions something like WebAssembly would have made 15 or 25 years ago, and what consequences those would have today.

Helpful research keywords: Itanium RISC Alpha WAP Power EPIC Java ARM Pentium4 X.25

spitfire 2 days ago 1 reply      
I wonder if along with these bytecode engines we'll get capability-grained control systems too. Somehow I doubt it though.

So in the future, when you visit a website they'll be able to, e.g., open windows, pop up unblockable modals, use WebGL, load bytecode spam/ads, etc. The end user's options will be to block everything, or live with it.

I do not like this bold new world we're entering.

xaduha 3 days ago 1 reply      
WebAssembly shouldn't be for the end users to use; it should be used for implementations of other languages so they can access the same APIs JavaScript can.

Add Lua to the browser, add Perl 6 to the browser, etc. There are plenty of decade old W3C specifications that never made it to the browser properly, like XSLT 2.0, XQuery 1.0, XForms, never mind the latest versions of the specs.

ak39 2 days ago 0 replies      
Is WebAssembly going to be host-URL-resource based (like current .js files are), or will it be used as part of some centralized global assembly cache (GAC) solution where assemblies are only usable from a CDN type of authority?
vbezhenar 3 days ago 4 replies      
What exactly will be better? One can compile a lot of languages to JavaScript today. JavaScript is fast enough and size doesn't really matter for most use cases. Is WebAssembly going to be much faster than JavaScript?
chrstphrhrt 3 days ago 2 replies      
Has anyone tried NativeScript? https://www.nativescript.org

Heard about it on a podcast recently, haven't had a chance to try.

Pxtl 3 days ago 0 replies      
If we keep this up, the web will be almost as good of an application framework as a '90s era desktop application. Yay, progress!
jimmcslim 3 days ago 0 replies      
I wish the browser vendors focused on CSS Grid module support as much as they did WebAssembly.
madsravn 1 day ago 0 replies      
This looks AWESOME
travisty 2 days ago 3 replies      
Thanks for that update that no one asked for.
icosta 3 days ago 0 replies      
WebAssembly = SWF with diff name. Come on!
ape4 3 days ago 1 reply      
The format of WebAssembly could be Java ByteCode.
opacityIsCool 3 days ago 1 reply      
Yeah, great. Transform everything into opaque binary blobs, as far as the eye can see. Wonderful.

Thanks for nothing.

twsted 3 days ago 0 replies      
I don't know. I am not sure yet. What do the HN folks think about this?
How Web Scraping Is Revealing Lobbying and Corruption in Peru scrapinghub.com
397 points by bezzi  4 days ago   72 comments top 11
kilotaras 3 days ago 0 replies      
I'm from Ukraine, and the biggest success in battling corruption comes from a system called Prozorro [1] ("transparently") for government tenders.

It started as a volunteer project, and some projections put savings at around 10% of the total budget after it becomes mandatory in April.

[1] https://github.com/openprocurement/

carlosp420 4 days ago 5 replies      
Hi there, I am the author of the blog post. I will be happy to answer any question.
ecthiender 4 days ago 2 replies      
Very interesting how tools like these can be so helpful for journalists, and for transparency in government functions generally.

Probably world changing, when considering that even semi-technical folks can cook up tools to dig into things like this.

I know this tool was built by a developer, but scrapinghub has a web UI for making scrapers.

xiphias 4 days ago 0 replies      
Can you draw a covisit graph of people? Who visited the building at the same times as somebody else. The strength of the connections could be visited_both^2 / ((visited_without_other_1 + 1) * (visited_without_other_2 + 1)).
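The covisit scoring above is a few lines of code once the scraped visit records are grouped. A hedged Python sketch of that exact formula, assuming (hypothetically) that the data can be bucketed into visit slots, each holding the set of visitor names seen in that slot:

```python
from itertools import combinations
from collections import defaultdict

def covisit_strengths(visits):
    """visits: dict mapping a visit slot (e.g. date + hour) -> set of names.

    Returns a dict mapping each visitor pair (a, b), a < b, to the score
    both**2 / ((only_a + 1) * (only_b + 1)) proposed in the comment.
    """
    together = defaultdict(int)   # times a pair appeared in the same slot
    total = defaultdict(int)      # times each person appeared at all
    for people in visits.values():
        for p in people:
            total[p] += 1
        for a, b in combinations(sorted(people), 2):
            together[(a, b)] += 1
    strengths = {}
    for (a, b), both in together.items():
        only_a = total[a] - both  # visits by a without b
        only_b = total[b] - both  # visits by b without a
        strengths[(a, b)] = both ** 2 / ((only_a + 1) * (only_b + 1))
    return strengths

# Hypothetical log: three time slots at a building entrance.
log = {
    "mon 10:00": {"lobbyist", "official"},
    "tue 14:00": {"lobbyist", "official"},
    "wed 09:00": {"lobbyist"},
}
print(covisit_strengths(log))
# lobbyist/official: 2**2 / ((1 + 1) * (0 + 1)) = 2.0
```

The +1 terms in the denominator keep the score finite for pairs who are never seen apart, which is why the formula rewards pairs who mostly visit together.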
jorgecurio 4 days ago 6 replies      
Really interesting use of data extraction....

For developers and managers out there, do you prefer to build your own in-house scrapers or use Scrapy or tools like Mozenda instead? What about import.io and kimono?

I'm asking because a lot of developers seem to be adamant against using web scraping tools they didn't develop themselves, which seems counterproductive because you are going into technical debt for an already-solved problem.

So developers, what is the perfect web scraping tool you envision?

And it's always a fine balance between people who want to scrape Linkedin to spam people, others looking to do good with the data they scrape, and website owners who get aggressive and threatening when they realize they are getting scraped.

It seems like web scraping is a really shitty business to be in and nobody really wants to pay for it.

alecco 4 days ago 1 reply      
In other countries, corrupt politicians found out a simple captcha per n items is good enough to defeat analysis.
danso 4 days ago 4 replies      
FWIW, if you live in the U.S., then you benefit from having such data in great quantity, though I don't think it's sliced-and-diced to near the potential that it has:

Lobbyists have to follow registration procedures, and their official interactions and contributions are posted to an official database that can be downloaded as bulk XML:


Could they lie? Sure, but in the basic analysis that I've done, they generally don't feel the need to...or rather, things that I would have thought that lobbyists/causes would hide, they don't. Perhaps the consequences of getting caught (e.g. in an investigation that discovers a coverup) far outweigh the annoyance of filing the proper paperwork...having it recorded in a XML database that few people take the time to parse is probably enough obscurity for most situations.

There's also the White House visitor database, which does have some outright omissions, but still contains valuable information if you know how to filter the columns:


But it's also a case (as it is with most data) where having some political knowledge is almost as important as being good at data-wrangling. For example, it's trivial to discover that Rahm Emanuel had few visitors despite his key role, so you'd have to be able to notice that and then take the extra step to find out his workaround:


And then there are the many bespoke systems and logs you can find if you do a little research. The FDA, for example, has a calendar of FDA officials' contacts with outside people...again, it might not contain everything but it's difficult enough to parse that being able to mine it (and having some domain knowledge) will still yield interesting insights: http://www.fda.gov/NewsEvents/MeetingsConferencesWorkshops/P...

There's also OIRA, which I haven't ever looked at but seems to have the same potential of finding underreported links if you have the patience to parse and text mine it: https://www.whitehouse.gov/omb/oira_0910_meetings/

And of course, there's just the good ol' FEC contributions database, which at least shows you individuals (and who they work for): https://github.com/datahoarder/fec_individual_donors

This is not to undermine what's described in the OP...but just to show how lucky you are if you're in the U.S. when it comes to dealing with official records. They don't contain everything perhaps but there's definitely enough (nevermind what you can obtain through FOIA by being the first person to ask for things) out there to explore influence and politics without as many technical hurdles.
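The bulk XML mentioned above can be mined with nothing more than the standard library. A minimal sketch follows; note the schema here is invented for illustration (element names like `Filing`, `Registrant`, and `Amount` are placeholders, not the real disclosure format):

```python
import xml.etree.ElementTree as ET

# A hypothetical fragment shaped like a lobbying-disclosure filing.
# The real bulk XML uses its own schema; names below are invented.
SAMPLE = """
<Filings>
  <Filing id="1" year="2015">
    <Registrant>Acme Lobbying LLC</Registrant>
    <Client>Example Corp</Client>
    <Amount>50000</Amount>
  </Filing>
  <Filing id="2" year="2015">
    <Registrant>Acme Lobbying LLC</Registrant>
    <Client>Widget Inc</Client>
    <Amount>120000</Amount>
  </Filing>
</Filings>
"""

def total_by_registrant(xml_text):
    """Sum reported amounts per registrant."""
    totals = {}
    root = ET.fromstring(xml_text)
    for filing in root.iter("Filing"):
        name = filing.findtext("Registrant")
        amount = int(filing.findtext("Amount", default="0"))
        totals[name] = totals.get(name, 0) + amount
    return totals

print(total_by_registrant(SAMPLE))  # {'Acme Lobbying LLC': 170000}
```

The real work is mostly in learning the actual schema and joining entities across files, but the parsing itself is this simple.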

dkarp 3 days ago 0 replies      
This is really impressive, even more so by the fact that it has already led to discoveries being made.

Web scraping is a really powerful tool for increasing transparency on the internet especially with how transient online data is.

My own project, Transparent[1], has similar goals.

[1] https://www.transparentmetric.com/

prawn 3 days ago 0 replies      
Peruvians, do you think this would cause a majority of meetings to be held outside of public office buildings or via secretive messaging systems?
Angostura 3 days ago 0 replies      
This is a fascinating project - if successful, I suspect the result will be that lobbying no longer takes place in government offices ("Shall we meet at that little place down the street?") or will be carried out over the phone.
dang 4 days ago 0 replies      
We've banned this account for repeatedly violating the HN guidelines.

We're happy to unban accounts when people give us reason to believe they will post only civil and substantive comments in the future. You're welcome to email hn@ycombinator.com if that's the case.

America's High School Graduates Look Like Other Countries' High School Dropouts wamc.org
295 points by tokenadult  3 days ago   357 comments top 40
gregatragenet3 3 days ago 5 replies      
Of course the US does poorly in this survey. The survey measures life skills. We've spent the last decades changing our schools from teaching life skills to pure college prep. A terrible disservice to young people. Here's some sample questions http://www.oecd.org/site/piaac/Education%20and%20Skills_onli... .
esaym 3 days ago 29 replies      
I don't expect the public school system to teach my children anything. Hence why they won't be attending. Most of the schools I attended growing up were completely useless. Filthy, full of drugs and gangs. Sadly I even listened to some of my teachers more than I did my own parents (indoctrination), which I now see led me down the wrong path.

I don't expect the public school to teach my kids any kind of career skills or path. Think they'll teach them Python or Swift? Even if they did, it'd be boring as heck and wouldn't happen until they're 16 years old. Conversely, I fully expect to start "outsourcing" my web dev work to my children when they are 13. My wife thinks it won't work. I think having them make $60 an hour while their friends make nothing will be a large deciding factor.... And by the time they're 16, I am expecting them to be taking classes at the local community college which is just a few miles from here.

tokenadult 3 days ago 0 replies      
Here is a link (hosted by the United States National Center for Educational Statistics) to the study report referenced in the news story submitted to open this thread, that is "Skills of U.S. Unemployed, Young, and Older Adults in Sharper Focus: Results From the Program for the International Assessment of Adult Competencies (PIAAC) 2012/2014: First Look". The report includes a summary of findings, including findings about numeracy and technology skills:

"In numeracy and problem solving in technology-rich environments, the United States performed below the PIAAC international average. In numeracy, the U.S. average score was 12 points lower than the PIAAC international average score (257 versus 269, see figure 1-B), and in problem solving in technology-rich environments, the U.S. average score was 9 points lower than the international average (274 versus 283, see figure 1-C). Compared with the international average distributions for these skills, the United States had

" a smaller percentage at the top (10 versus 12 percent at Level 4/5 in numeracy, and 5 versus 8 percent at Level 3 in problem solving in technology-rich environments, see figures 2-B and 2-C), and

" a larger percentage at the bottom (28 versus 19 percent in numeracy, and 64 versus 55 percent in problem solving in technology-rich environments at Level 1 and below)."


panglott 2 days ago 1 reply      
I always wish these reports would break out the average vs. the distribution. America, more than any other developed country, tolerates a high level of poverty. And poor kids just don't have the resources to do well in school, and bring the average down. The single largest predictor of how well kids will do in school is not any in-school factor, but rather the socioeconomic status of their parents.
nibs 2 days ago 2 replies      
This study does not surprise me. All four of my grandparents were educators for their entire careers. And as a result, my parents decided to unschool my brothers and me.

In trying to be as objective and self-aware as possible, it is clear to us that the homeschoolers/unschoolers we built a community with are vastly better prepared for adulthood, regardless of level of general intelligence. There is a reason the acceptance rate at Stanford is ~27% for homeschoolers vs. 5% for those who went to school. [1]

Regardless of the pursuit, it seems like our friends who went to school the whole time are stuck in this weird immature purgatory where they can't make decisions or stick to things.

For the most part, the unschoolers/homeschoolers are similar demographics, and from all different "walks of life", and yet invariably avoid this issue of accepting adulthood.

Our thesis was always that school spoon feeds you, and you have to learn the pain of learning independently to be successful and learn a real growth mindset. But who knows. It is a complicated issue.

[1]: http://pqdtopen.proquest.com/doc/1011320109.html?FMT=AI&pubn...

hackuser 3 days ago 5 replies      
When it comes to technology skills, the story gets worse. The U.S. came in last place right below Poland.

The study looked at basic technology tasks: things like using email, buying and returning items online, using a drop-down menu, naming a file on a computer or sending a text message.

Something to consider about your users.

eldavido 3 days ago 2 replies      
American academia is the envy of the world.

I haven't met a single ambitious person in the US who dreams of teaching in a high school.

It's kind of sad that secondary ed isn't high-prestige in today's American culture, but it's not.

InclinedPlane 3 days ago 0 replies      
This is why the college situation is so dire these days.

It used to be that a high school diploma meant something, and guaranteed a certain level of literacy and basic familiarity with mathematics and other useful basic skills. Now there are virtually no jobs one can get as only a high school graduate where the core requirement is literacy or basic math skills. The credential of a college education, or even just having spent some time in college, is the new high school diploma. Except whereas the public funds high school, it does not completely fund college.

And even though total government outlays to colleges have actually increased, admissions have increased even more, and spending on administrators has as well, so per-student costs not covered by taxpayers have gone through the roof.

> She offers a sample math problem from the test: You go to the store and there's a sale. Buy one, get the second half off. So if you buy two, how much do you pay?

> "High school-credentialed adults, they can't do this task on average," says Carr.

High school graduates can't do what is basically a middle school level math problem. It's no wonder employers don't want to hire them. As a society we spend hundreds of thousands of tax dollars on K-12 students as they pass through the school pipeline but when they graduate they aren't educated and they have very little to show for all that time, effort, and expenditure.

This is unquestionably a national tragedy that will haunt our country for decades to come. We've got a "lost generation" already with millennials who were vastly underemployed for several years after the big financial crash (which would be expected to have a lifetime impact on career and wealth development). And we're seeing that there's a new lost generation of young adults who have been ill served by the educational establishment.

Edit: even worse, HS education does a poor job of preparing students for college, leading to the double whammy of debt + dropping out of college without earning a degree.

camelNotation 2 days ago 3 replies      
The purpose of school is to equip students to teach themselves, not to spoon-feed them facts. For that reason, philosophy, rhetoric, logic, music, literature, and mathematics should be the priorities. Students who are well-versed in those areas WILL figure out the rest of it a million times faster than some kid who drilled on history, science, or other facts. Teach the kids how to study history, don't feed them names and dates. Teach the kids how to research, don't make them regurgitate someone else's results. Teach kids how to comprehend and analyze a text, don't make them read Homer and then regurgitate the events. Modern education is bullshit.
victorhugo31337 3 days ago 8 replies      
This is what happens when no child is left behind. How can EVERYONE graduate from HS without lowering the bar?
2bitencryption 3 days ago 0 replies      
This also demonstrates the drastic disparity of quality.

You can either be getting a world-class education in the best high schools, or apparently a terrible one. I'm glad I went to a good high school. I never knew how good my education was until my first year of college, in my first writing class. Yeesh.

rb808 2 days ago 1 reply      
I'm also a bit tired about hearing about how great Finland & Japan's schools are.

How many illegal immigrants do these countries have in their school system? How many minorities who don't speak the main language at home? How many kids in Japan/Finland don't eat properly because their families can't afford food? Of course the American students' results are going to be worse on average.

rb808 2 days ago 0 replies      
I thought the article was very positive:

> Americans who went to college and graduate school did well. They scored above their peers with similar degrees in other developed countries.

The problem is that the bottom half does badly. I think this is largely due to poverty, culture and low expectations rather than poorly resourced schools. Maybe it's also that it's so easy to get a job in America that the working classes really don't need to study hard.

ryanackley 2 days ago 2 replies      
As a parent of a school-age child, I hate studies like this. It's exactly the sort of thing that leads to mandated programs like common core.

Here's the thing, my son spends more time doing homework and studying for tests in elementary school than I ever did throughout my entire public education.

I have worked internationally. From my own anecdotal experience, I didn't see a difference in intelligence or ability to do the job between cultures.

bunkydoo 2 days ago 0 replies      
Well, one thing to keep in mind is that some American states alone have the same population as some of these entire countries being praised for having better education systems than the US. Sure, we can learn from some of the small nuances of their systems, but infrastructurally we are in a much different situation than most of these examples (Finland/Japan).
Cookingboy 3 days ago 6 replies      
And in this age of information and technology, many of these people are voting to decide the next leader of the only superpower left on Earth.

This may sound grossly elitist, but democracy can be super scary sometimes.

smartbit 2 days ago 1 reply      
Nothing on creativity. I wonder why?

"creativity is essential to successful civilizations" http://www.amazon.com/dp/1907794883


spike021 3 days ago 3 replies      
I won't go into much detail because I would wind up ranting and only be somewhat understandable..

However, as an American citizen who went to public schools as a child/teenager and am now finishing up at a public state university, I'm inclined to say that the education system here is a complete wash.

It works for some people who fit the one-size mold that the system here seems to target, but there are a large number of children/teenagers/college-aged students for whom it does not.

That isn't to say one set of students is any more gifted than the other. Just that the approach to education is deeply flawed. It works for the type of student it is set up for, and has little to no appreciation for other types, or even acknowledgment that they exist. The approach needs a serious revolution in order for our country to have a successful academic system.

source: I am from California and the state university I currently attend and am almost graduated from is in CA as well.

hourislate 2 days ago 0 replies      
The single largest factor in determining how well a student does in High School is poverty.

Poor kids don't have the support nor the means to excel. They typically have to worry about other things. Like earning money or working the farm or just trying to keep the family together anyway they can. While other kids have computers, you didn't even have a desk to do your homework. Other kids get dinner, you go hungry. When your mom was sick it meant staying home to take care of her.

The potential that is lost to poverty is no different than the potential lost to Wars. Millions of people who could have changed the world never get a chance.

kriro 2 days ago 0 replies      
"""things like using email, buying and returning items online, using a drop-down menu, naming a file on a computer or sending a text message."""

Way to measure stuff! How about using instant messaging, swiping and dragging stuff, taking pictures with a phone and sending them, and doing tasks with Siri/HG instead? We're talking about HS-level students, right? They seem to be measuring stuff for older folks :P

madengr 3 days ago 2 replies      
Not able to send a text message? That I don't believe.
Xeoncross 2 days ago 0 replies      
Has anyone ever studied the history of the "school system"? It was invented in Germany to help standardize away individualism and help promote "patriotism".


alanwatts 2 days ago 0 replies      
>when it comes to technology skills, we're dead last compared with other developed countries.

Maybe because the education model in the US (and elsewhere) relies heavily upon Ludditism.

Students' intelligence is measured on a single linear numeric scale based on paper-and-pencil examinations, and students are strictly limited to using computer technology (calculators) that was invented 40 years ago[1], even though they possess in their pockets computers that are millions (billions?) of times more powerful.

"Never memorize something that you can look up."


Jugurtha 2 days ago 0 replies      
There's a presentation by Stephen Krashen where he talks about the latest research he wrote about in his book, "The Power of Reading".[0]

He addresses the problem of education and the ways it is being done. One really important remark he made was that one of the biggest differences between children of well-off families and less fortunate ones is the availability of books: kids of comfortable families have access to more books from the time they are very young, unlike their peers. The more important remark is the follow-up: libraries tend to offset the impact of economic differences.

It's an interesting talk.

[0]: https://www.youtube.com/watch?v=DSW7gmvDLag

dclowd9901 1 day ago 0 replies      
I had to laugh when I got to this:

> She offers a sample math problem from the test: You go to the store and there's a sale. Buy one, get the second half off. So if you buy two, how much do you pay?

It doesn't even seem answerable. The closest I can come to answering it is "75%".
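For what it's worth, the intended arithmetic is straightforward once you read the sale as "the second item is half price" (a quick sketch under that reading, which is the conventional one in US retail):

```python
def price_for_two(unit_price):
    # "Buy one, get the second half off": full price plus half price.
    return unit_price + unit_price / 2

# Two $10 items cost $15 -- which is 75% of the $20 sticker total,
# so "75%" is a defensible answer if no unit price is given.
print(price_for_two(10))  # 15.0
```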

jorgecurio 3 days ago 3 replies      
yeah, just talking to elementary school kids from America vs. Korea was an eye-opening experience. I couldn't help but feel that the future is kinda bleak for American society, because so much of it is centered around a jock/warrior culture that places far more emphasis on physical education, whereas in South Korea the kid who studies and is intelligent is revered as the alpha male model.

Kinda explains why South Korea ranks #1 for the most innovative economy. There's no hazing of nerds, although bullying and suicide due to overstudying are a definite problem... it just follows the trend that American kids are seriously being left behind by other countries.

wmt 2 days ago 0 replies      
"Americans who went to college and graduate school did well. They scored above their peers with similar degrees in other developed countries.

For young adults with a high school diploma or less, things did not look so good. These Americans performed significantly worse than those in other countries with the same education level."

Doesn't that just mean that smart kids in America are more likely to get a higher education than in many other countries?

pinaceae 2 days ago 0 replies      

See the details on race/social rank and performance of high schoolers that went on to college vs the ones that stopped after hs to fully understand the data.

the US has layers, you have a first world country, a second world country and a third world country intermixed, skewing all these data sets. well-off whites perform as well in the US as everywhere else. Asians too. no need for homeschooling or other panic modes.

if you're poor, you're fucked. just like anywhere else.

bawana 2 days ago 0 replies      
This is news? We have been devaluing education for decades as we pay teachers less than a poverty-level wage (unless you are in that unnecessary branch, administration). Witness the populist appeal of Trump. Anyone with common sense (which our school system has been so effective in extinguishing) would recognize a demagogue even without the prior example of Hitler.

Solution: Khan academy. let them issue diplomas.

breanwangji 2 days ago 2 replies      
It should not be judged so easily.

However, I have to say that when I was in an MBA course in the US, I don't know why the teacher needed to teach the students how to solve linear equations in two unknowns. In China that is an elementary school question, below grade 5. I wonder how they graduated from high school and went on to study business.

Tehnix 2 days ago 2 replies      
As a non-native speaker, this sentence actually got me a bit unsure (perhaps because I stared at it too long):

"Buy one, get the second half off."

Does it mean that for the second unit you buy, you get half off? I don't know if this is a common way to phrase a discount like that in the US..

ommunist 2 days ago 0 replies      
But maintaining the brain drain from the third world and the UK negates this effect on the US economy.
vox_mollis 2 days ago 1 reply      
Not just any "other countries". PIAAC measures against other European and East Asian countries.

To put it politely, the US has much greater demographic diversity, with its associated implications for IQ and consequently, all associated g factors.

Geojim 2 days ago 0 replies      
How else are you going to educate kids for a Donald Trump world....
chatmasta 3 days ago 3 replies      
I'd be much more interested in a study that compares the top 1 percentile of graduates between countries.

Of course the bulk of US graduates are going to be unskilled... that's just how the bell curve works, and the US has the resources to support lots of unskilled laborers.

It's the top of the bell curve that matters. How do the smartest engineers and scientists in the US compare to their counterparts in other countries?

Eventually we'll have a basic income so if people don't want to learn, they won't have to.

droithomme 2 days ago 1 reply      
The test checked literacy, numeracy, and IT skills. US did fine on literacy, so-so on numeracy, and last on IT skills. Which is interesting given where most tech comes from.

Here's an example of the sort of task you had to complete (http://nces.ed.gov/pubs2016/2016039.pdf):

> Level 3: Meeting rooms (Item ID: U02) Difficulty score: 346. This task involves managing requests to reserve a meeting room on a particular date using a reservation system. Upon discovering that one of the reservation requests cannot be accommodated, the test-taker has to send an e-mail message declining the request. Successfully completing the task involves taking into account multiple constraints (e.g., the number of rooms available and existing reservations). Impasses exist, as the initial constraints generate a conflict (one of the demands for a room reservation cannot be satisfied). The impasse has to be resolved by initiating a new sub-goal, i.e., issuing a standard message to decline one of the requests. Two applications are present in the environment: an e-mail interface with a number of e-mails stored in an inbox containing the room reservation requests, and a web-based reservation tool that allows the user to assign rooms to meetings at certain times. The item requires the test-taker to [u]se information from a novel web application and several e-mail messages, establish and apply criteria to solve a scheduling problem where an impasse must be resolved, and communicate the outcome. The task involves multiple applications, a large number of steps, a built-in impasse, and the discovery and use of ad hoc commands in a novel environment. The test-taker has to establish a plan and monitor its implementation in order to minimize the number of conflicts. In addition, the test-taker has to transfer information from one application (e-mail) to another (the room-reservation tool).

Basically, users are given an intentionally badly designed user interface in which they receive no training, and a task that is impossible to accomplish within the obvious constraints of the interface, and are asked to accomplish a goal. It simulates the experience of being a low-paid customer service rep in the third world using crappy software, and sees if you can handle it or not. If one has common sense, intelligence, and a sense of valuing their own time, they will recognize this task as useless BS and refuse to cooperate further in the test.

tn13 2 days ago 1 reply      
Public schooling is trash in the USA. As an Asian parent I can tell you it is worse than worthless. That is what happens when you put the government in too much control.

Americans as a society have clearly traded good quality education for teachers' job security.

Stossel's Stupid in America : https://www.youtube.com/watch?v=Bx4pN-aiofw

pokolovsky 2 days ago 0 replies      
That's really funny.
avaku 2 days ago 0 replies      
On average
aaron695 3 days ago 0 replies      
Yet America has led and continues to lead the world.

Maybe we are barking up the wrong tree; maybe school isn't as important for a country as we take it to be.

Image Processing 101 recurse.com
364 points by abecedarius  3 days ago   41 comments top 11
yan 3 days ago 6 replies      
Slightly related, but I had a small epiphany when taking a class on DSP on Coursera. The kernel that is used to blur an image and that which is used to remove the treble/high frequencies from an audio sample are identical, except one is in 2 dimensions and the other is in 1. And this makes perfect sense! A low pass filter removes high frequencies, and sharp edges are high frequencies in the 2D plane.

TFA only mentions Gaussian blur, but a Gaussian blur is just a moving average, with "closer" pixels being valued higher, plus a smooth falloff. When you replace each value with an average of its neighborhood, you "soften" the transitions.
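A minimal sketch of that moving-average idea, using an unweighted box filter rather than a true Gaussian (same principle, simplest possible weights; a Gaussian just weights nearer pixels more with a smooth falloff):

```python
def box_blur(img, radius=1):
    """Replace each pixel with the unweighted mean of its neighborhood.

    img is a 2D list of grayscale values. This is the crudest low-pass
    filter; a Gaussian blur is the same loop with distance-based weights.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

# A sharp edge (a high frequency in the 2D plane) softens into a ramp.
step = [[0, 0, 255, 255]] * 4
print(box_blur(step)[1])  # [0.0, 85.0, 170.0, 255.0]
```

The printed row shows exactly the low-pass behavior described above: the hard 0-to-255 transition becomes a gradual one.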

leni536 2 days ago 1 reply      
I know that this is an introduction, but I wish there was a warning about the use of improper color spaces for different tasks (like using sRGB or any nonlinear colorspace for downscaling images, blurring or Phong shading). A warning about the existence of different colorspaces and their different use cases would be enough in an introductionary write up. It's still an issue in most of today's software [1].

Like, open this image in your browser or in your favorite image viewer and scale it down to 50%: http://www.4p8.com/eric.brasseur/gamma-1.0-or-2.2.png

[1] http://www.4p8.com/eric.brasseur/gamma.html
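A rough sketch of the fix being described: decode sRGB to linear light, average there, then re-encode. The two transfer functions below are the standard sRGB formulas; the helper names are my own:

```python
def srgb_to_linear(c):
    """Standard sRGB decode, c in 0..1."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Standard sRGB encode, c in 0..1."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def average_pixels(a, b):
    """Average two sRGB values in linear light, then re-encode."""
    lin = (srgb_to_linear(a) + srgb_to_linear(b)) / 2
    return linear_to_srgb(lin)

naive = (0.0 + 1.0) / 2             # 0.5 -- averaging encoded values: too dark
correct = average_pixels(0.0, 1.0)  # ~0.735 -- what a checkerboard should blur to
print(naive, correct)
```

This is why the linked checkerboard test image darkens when software downscales it naively: the math was done on gamma-encoded values.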

haffla 2 days ago 1 reply      
So OK, you can use cv2.COLOR_RGB2GRAY to get a grayscale image. But what does that teach you? In my image processing 101 course a couple of years ago we actually didn't use any libraries except for reading images and showing them (written by the teacher). Just Java. A picture would be a 2-dimensional array, with each pixel represented by an integer. So you just need a nested for loop and you can manipulate every pixel yourself, thus learning what really happens under the hood and how filters work.
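That nested-loop exercise is easy to sketch in Python too (used here for consistency with the cv2 examples in the thread). This version uses the common Rec. 601 luminance weights rather than whatever cv2 does internally; a plain mean also "works" but renders green too dark and blue too bright:

```python
def to_grayscale(img):
    """Luminance-weighted grayscale, one pixel at a time.

    img is a 2D list of (r, g, b) tuples. The Rec. 601 weights
    approximate how strongly the eye responds to each channel.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):          # the nested loop the comment describes:
        for x in range(w):      # visit and transform every pixel yourself
            r, g, b = img[y][x]
            out[y][x] = int(0.299 * r + 0.587 * g + 0.114 * b)
    return out

print(to_grayscale([[(255, 0, 0), (0, 255, 0), (0, 0, 255)]]))
# [[76, 149, 29]] -- pure green reads brightest, pure blue darkest
```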
joshvm 2 days ago 0 replies      
A good follow on from this is the Learning OpenCV book (O'Reilly), written by a couple of the lead developers. It goes into detail on the mathematics, but it's not heavy or verbose at all. I found it far more useful than a lot of introductory image processing books simply for its theoretical content.

Don't forget scikit-image and scipy.ndimage too.

dosshell 2 days ago 1 reply      
Strictly speaking image processing is image in -> image out. And image analysis is image in -> data out. The author gives the expression that everything is image processing. Not a big thing but it helps to know the difference if you want to take the correct course :)
utkarshsinha 2 days ago 1 reply      
Have you looked at http://aishack.in/ - it has a bunch of opencv tutorials and projects.
EvanPlaice 2 days ago 1 reply      
Instead of using built-in method calls that come with a library, why not look into the algorithms used to generate the different transforms?

I once worked on a UI where the users wanted to capture a screenshot of the current page.

Because color toner is more expensive they also wanted the option to print grey scale. I'm pretty terrible at working in 2D space but a quick Google search let me know that the conversion to grey scale involved averaging the RGB values for each pixel.

Unfortunately, the coloring of the UI was more dark than light, so the resulting greyscale image was still black-toner intensive. So we provided an additional option to invert black and white.

To make it work a second transform was applied to each pixel that reversed the pixel value from upper to lower bound (or vice versa depending on how you look at it).

The result was an output that trended toward white instead of black. The output looked surprisingly good and saved on toner so the users could print many screen captures without worrying about wasting resources.

For the business, it resulted in a cost and resource savings. For users, picking the resulting output provided better results that were easier to understand. From a development perspective, the implementation wasn't difficult at all to add. So, win-win-win.

What surprised me was how easy these transforms were to apply. It's a bit CPU intensive on high resolution images but it's not terribly difficult to come up with good results.

It would be awesome to see some more examples of algorithms used for image processing. So much material covers generic algorithms and data structures that come with the typical CS degree.

It would be much more interesting to see algorithms that can be used in practice. For example, how to scale images, implement blur, color correction, calculate HSL, etc...

Libraries are great but these concepts are simple enough that they don't require 'high science'.

The article mentions a curiosity related to how edge detection works. I'd assume that you select a color and exclude anything that falls outside a pre-determined or calculated threshold. For instance, take a color and do frequency analysis of colors above-below that value by a certain amount. Make multiple passes testing upper and lower bounds.

A full color image @ 24 bit (8R 8G 8B) will take a max of 24 passes and will likely have logarithmic runtime cost if implemented using a divide-and-conquer algorithm.

Things like blur and lossy compression sound a hell of a lot more interesting because they have to factor in adjacency.
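A rough sketch of the two per-pixel transforms described above, under the same assumptions the comment makes (the simple RGB mean for grayscale, then a reflection across the value range; function names are invented):

```python
def grayscale_and_invert(img):
    """Average RGB to gray, then flip toward white to save toner.

    img is a 2D list of (r, g, b) tuples; returns a 2D list of
    0-255 gray values trending light instead of dark.
    """
    out = []
    for row in img:
        new_row = []
        for r, g, b in row:
            gray = (r + g + b) // 3     # the simple mean the comment uses
            new_row.append(255 - gray)  # reflect upper bound to lower
        out.append(new_row)
    return out

# A near-black UI pixel becomes near-white on paper.
print(grayscale_and_invert([[(30, 30, 30)]]))  # [[225]]
```

Both steps are independent per-pixel maps, which is why they were cheap to bolt onto the screenshot feature; blur and compression are harder precisely because, as noted, they must factor in adjacency.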

jlubawy 2 days ago 1 reply      
I'm actually taking an image processing course right now, and at least one thing I didn't see in this article that I have found very useful is histogram equalization (OpenCV equalizeHist). It basically takes images with low contrast and increases the contrast. This is really useful for many applications, but one I've actually been able to use is increasing the legibility of scanned pencil-on-paper images.
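For readers curious what equalizeHist is doing, here is a library-free sketch of the same idea: map each gray level through the normalized cumulative histogram so output values spread over the full range. (Simplified; real implementations handle edge cases like a constant image, which this one doesn't.)

```python
def equalize(img, levels=256):
    """Histogram equalization for an 8-bit grayscale image (2D list)."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution of pixel counts.
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)  # first occupied bin
    n = len(flat)
    def remap(p):
        # Stretch so the darkest occupied level maps to 0, the last to 255.
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(p) for p in row] for row in img]

# Low-contrast input (values bunched in 100..102) stretches to 0..255.
print(equalize([[100, 101], [101, 102]]))  # [[0, 170], [170, 255]]
```

That stretching is exactly why it helps faint pencil scans: the narrow band of light-gray strokes gets spread across the whole dynamic range.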
yompers888 2 days ago 3 replies      
As a sort of abstract question, do readers here think of <Class X> 101 as meaning fundamentals of X, or basic techniques in X? Having taken image processing from both sides, I'd say that learning the principles was much more useful (and would have been more useful still if I'd had a proper background in linear algebra). This article is the equivalent of naming some tools and showing us where they fit.
Gepsens 2 days ago 0 replies      
My CS lab specialized in image processing, can confirm this is indeed the 101.
coin 3 days ago 2 replies      
-1 for explicitly disabling pinch zoom on mobile/tablet devices.
Work for only 3 hours a day, but everyday plumshell.com
378 points by NonUmemoto  18 hours ago   124 comments top 26
err4nt 13 hours ago 3 replies      
I have experienced a similar thing while freelancing in design and web development. I used to work 16 hours some days and fewer hours on others, but then sometimes when I needed to work I found it hard to kick into gear.

I think creativity is like a well, and when you do creative work its like drawing that water out. If you use too much water one day, the well runs dry. You have to wait for the goundwater to fill it up again.

Not only did I begin viewing creativity as a limited resource I create and have access to over time, but I noticed that some activities, like reading about science, listening to music, and walking around town actually increase the rate at which the well fills up.

So now I have made it a daily habit of doing things that inspire me, and I also daily draw from the well like the author said - but Im more careful not to creatively 'overdo it' and leave myself unable to be creative the next day.

Viewing it this way has helped a lot, for all the same benefits the author listed. Im in a rhythm where I dont feel I need a break on the weekend, I just still have energy.

JacobAldridge 16 hours ago 5 replies      
If I told you that every car needed 8 gallons of gas to drive 100 miles, you'd point out I was wrong - so many different makes and models, not to mention variables from tire pressure to driving style.

Yet for the potentially even more complex range that is different people, it amazes me that so much of the advice is didactic - we all need 8 hours sleep, 8 glasses of water, and 8 hours of work with breaks is optimal.

The closest I get to advice is 'learn your body and what works for you'. Thanks to the OP for sharing what works for him.

jiblylabs 14 hours ago 3 replies      
As a freelancer, I understand where some of the "as a freelancer this won't work" comments are coming from. However, over the last year I've flipped my freelancing model to offer a more productized service with a clearly defined scope and set price. Instead of doing design work for $XXX/h, I'll deliver A, B, C within Timeframe Y, for Price $XXXX. With clearly defined services, I've actually been working for the last 12 months using a similar model, usually constraining myself to 4h/day with weekends off. My productivity + revenue have increased dramatically. Productizing your service makes it easier to market and generate leads, while giving you the flexibility to work the way you want and actually free up time. Awesome post OP!
wilblack 10 hours ago 2 replies      
I started contract work last fall. I set my rate assuming a 25-hour work week. At first I tried working ~4 hrs/day, every day. I quickly realized this did not work for me. Working every day, even just a little, is not sustainable for me. I have a family and they are still on the 9-to-5 schedule, so working even a few hours on weekends cut into my family time, which is important to me. So now I force myself to take at least one weekend day off with no programming. This is hard because I love to program. I also have a hard cutoff time during weekdays at about 5:30pm, when my wife and kid get home. I usually feel like I want to keep working, but that forces me to stop (at least until my daughter goes to bed). So now I work 5 or 6 days a week but seldom exceed 6 hours/day. Most days are closer to 4 hrs. It's great at this pace because I almost always feel like I want to keep programming, so I don't get burnt out. And if I do have an off day, I just don't work.

The problem I am running into now is what do I do with my spare time? All my hobbies are computer based (video games and Raspberry Pi projects) but I am trying to minimize my screen time in my off hours. This will get better in the spring and summer as the weather gets better but during winter on the Oregon Coast going outside is hit or miss.

And I hear you about not being able to go to bed until I solve a problem I am stuck on, that drives me crazy.

susam 13 hours ago 3 replies      
I agree with this article mostly, although 3 hours a day might be too little to make good progress with work for some people.

This article reminded me of my previous workplace (about 7 years ago), where my manager discouraged engineers from working more than 6 hours a day. He recommended 4 hours of work per day and requested that we not exceed 6. He believed working fewer hours a day would lead to higher code quality, fewer mistakes and more robust software.

Although he never went around the floor ensuring that engineers did not exceed 6 hours of work a day, and some engineers did exceed it, his team was, in my opinion, the most motivated team on the floor.

shin_lao 16 hours ago 1 reply      
3 hours a day is just not enough for everyone.

For some projects it's perfectly fine, but some tasks can only be done if you focus on them for a large stretch of time, working obsessively until you reach a milestone.

The greatest work I have ever done was always done when I retreated like a monk for several weeks, cutting myself off from the whole world and working almost non-stop on the task until I made a significant breakthrough.

Then I go back to the living and share the fruits of my work, and of course take a well-deserved rest for several days.

The trap most people fall into is confusing being active with working.

andretti1977 15 hours ago 2 replies      
I agree with the author with some exceptions: when you are working as a contractor or freelancer for someone else's project maybe 3h/day is not acceptable. When you've got externally imposed deadlines 3h/day may not be sufficient.

But I agree that working less than 8h/day could really be more productive. I also liked the "less stuck for coding" point - "...it is sometimes hard to go to bed without solving some unknown issues, and you don't want to stop coding in the middle of it..." - so maybe forcing yourself to stop could be a solution.

Anyway, I would really like to work 4 or 5 hours a day while keeping holidays and weekends free from work, and I think this can only be achieved if you can pay your living with products of your own, such as your apps, and not by freelancing (I am a freelancer and I know it!).

But I enjoyed the idea behind the article and I will try to achieve it one day.

dkopi 16 hours ago 1 reply      
I'm pretty sure this has worked for the author, and it will work for a lot of other people as well, but a lot of the benefits raised can still be achieved when working more than 3 hours a day.

A few points are raised in the post:

1. If you only work 3 hours, you're less tempted to go on twitter/facebook/hacker news.

True - but that's really a question of discipline, work environment and how excited you are about what you're working on. It's perfectly possible to perform for 10 hours straight without distractions; just make sure to take an occasional break for physical health.

2. Better prioritization.

Treating your time as a scarce resource helps you focus on the core features. But your time is a scarce resource even if you work 12 hours a day. Programmers are in short supply. They cost a lot. And the time you're spending on building your own apps could have been spent freelancing on someone else's. Always stick a dollar figure on your working hours, even if you're working on your own projects. You should always prioritize your tasks, and always consider paying for something that might save you development time (a better computer, a better IDE, SaaS solutions, etc.).

3. Taking a long break can help you solve a problem you're stuck on.

Personally, I find that taking a short walk, rubber duck debugging or just changing to a different task for a while does the same. If I'm stuck on something, I don't need to stop working on it until tomorrow. I just need an hour or two away from it.

rmsaksida 15 hours ago 3 replies      
I mostly agree with the author, but I don't see the point of stopping yourself when you're "in the zone". Why lose the flexibility?

What works for me is having a baseline of 3 or 4 hours of daily work, and not imposing any hard limits when I want or need to do extra hours. This works out great, because I have no excuses not to do the boring routine work as it's just a few hours, but I also have the liberty of doing obsessive 10h sessions when I'm trying to solve a tough problem or when I'm working on something fun.

jacquesm 11 hours ago 1 reply      
There is a much better alternative: work really hard for 2 to 3 months per year and then take the rest of the year off. If you're doing high value consulting you can easily do this. You may have to forego some luxury but that's a very small price to pay for the freedom you get in return.
joeguilmette 2 hours ago 0 replies      
I work on a remote team and I am only accountable for my output. I end up working 15-25hrs a week. Sometimes more if something is on fire.

I usually work 7 days a week, but invariably a couple days a week I only work an hour, checking email and replying to people.

The work I do is of better quality, I'm happier, and I easily could work at this pace until the day I die.

jjoe 14 hours ago 0 replies      
It reads like someone who isn't doing much real-time support. This works great for projects that haven't been unveiled, or ones that require little ongoing maintenance, like a game. But if I worked 3 hours a day, my clients would crucify me.

Sadly, it isn't always possible.

maxxxxx 7 hours ago 1 reply      
When I was freelancing there were a lot of days when I didn't do much, but then there were days when I got into the flow and worked 2 or 3 days almost straight. Most of the time this averaged out to around 40 hours/week, but in spurts. This was probably the best work environment I have ever been in.

What I hate about the corporate workplace is that it doesn't accept any kind of rhythm but treats you like a machine that performs exactly the same at all times. Nature is built around seasons and so are humans. They are not machines.

I would much prefer to have a time sheet where I can do my 40 hours whenever I feel like it.

Shorrock 10 hours ago 0 replies      
One size certainly does not fit all; however, my one takeaway is that there is a huge benefit to paying close attention to what works best for you and optimizing your life around that. When you focus on productivity and happiness (the two are often linked) and ignore, when possible, schedules dictated to you, your quality of life will improve.
LiweiZ 13 hours ago 0 replies      
I work 4-5 hours every day, but every day on my own project. I wish I could have more time for work, since most of the remaining time is allocated to housework and taking care of two little ones. I guess the key is to control your work pace. When a sprint is needed and you are ready for it, two weeks at 90-100 hours each would not be a bad idea. Just like running: listen to your body, pick your pace and keep going towards your goal.
1123581321 10 hours ago 0 replies      
I read an essay several years ago that suggested working three focused hours a day. But, it suggested slowly increasing the hours worked while keeping the same level of focus, and doing restorative activities in the remaining time. The idea was that this would "triple" productivity.
a-saleh 5 hours ago 0 replies      

I actually had a similar routine while at school, but it was 6 hours a day total: 3 hours in the evening, usually just before I went to sleep (might be 19-22, or 21-24), and 3 hours in the morning after I woke up, before leaving for lectures.

I started doing this because I realized that I am no longer capable of pulling all-nighters. And it worked surprisingly well :-)

TensionBuoy 7 hours ago 1 reply      
3 hours is not enough time to get anything done. I'm self employed. I go 12 hours straight before I realize I should probably eat something. I love what I'm doing so I'm drawn to it all day, every day. At the end of the day I've hardly made a dent in my project though. 3 hours is just getting warmed up.
abledon 5 hours ago 0 replies      
This is so true for people who give 100% every moment they work but can't work long hours without feeling drained, compared to someone who goes at 50% and can manage the 40-hour work week. I wish this method would become more widely recognized.
spajus 15 hours ago 2 replies      
How to pull this through when you are paid by the hour?
JoeAltmaier 6 hours ago 0 replies      
"Work for only 3 hours a day, but every day".

'everyday' is an adjective

amelius 10 hours ago 1 reply      
> Making money on the App Store is really tough, and people don't care how many hours I spend on my apps. They only care if it is useful or not. This is a completely result oriented world, but personally, I like it.

I would guess that, if the OP had a competitor, then the OP would be easily forced out of the market if that competitor worked 4 hours a day :)

mrottenkolber 11 hours ago 0 replies      
What about work 11 hours a week and be happy? Works for me, and I am a freelancer.

Edit: I usually do three blocks of three hours each and one two hour block each week. I find three hours perfect to tackle a problem, and a good chunk to be able to reflect upon afterwards.

jamesjyu 11 hours ago 0 replies      
Work hard. Not too much. Focus on what's important.
xg15 14 hours ago 1 reply      
So no going out for drinks where you might have a hangover the next day?
logicallee 16 hours ago 7 replies      
Historically, working 24 hours a day (I include sleep because after a certain number of hours you even dream of code or your business) for 1 year typically accomplishes more than working 3 hours per day for 8 years. Or 1.5 hours per day for 16 years. There is just some kind of economy of scale.


EDIT: I got downvoted. Come up with whatever standard of productivity you want (ANY standard that you want) and adduce a single human who in 16 years times 90 minutes per day accomplished more than I can find a counter-example of someone doing in the same field in 1 year. 1 year of 24 hours a day strictly dominates 16 years of 90 minutes per day, and you cannot find a single counterexample in any field from any era of humanity. Go ahead and try.

oh and by the way, in 1905 Einstein published 1 paper on the photoelectric effect, for which he won his only Nobel prize, 1 paper on Brownian motion, which convinced the only leading anti-atomic theorist of the existence of atoms, 1 paper on a little subject that "reconciles Maxwell's equations for electricity and magnetism with the laws of mechanics by introducing major changes to mechanics close to the speed of light" - this later became known as Einstein's special theory of relativity - and 1 paper on mass-energy equivalence, which might have remained obscure if he hadn't worked it into a catchy little ditty referring to an "mc". You might have heard of it? E = mc^2? Well, a hundred and ten years later all the physicists are still laying their beats on top.


Your turn. Point to someone who did as much in 16 years by working just 90 minutes per day.

Closer to our own field, Instagram was sold for $1 billion about a year after its founding date, to Facebook. Point out anyone who built $1 billion in value over 16 years working just 90 minutes per day.

How OpenGL works: software renderer in 500 lines of code github.com
356 points by gregorymichael  2 days ago   36 comments top 16
sclangdon 2 days ago 1 reply      
This is great and so concise, but I'm surprised that the author didn't implement OpenGL's interface (obviously not all of it), if the goal is to show how OpenGL works.

Another good example of this is Trenki's software renderer for the GP2X[1], which implements a shader architecture if memory serves. I haven't looked at it for many years, but I remember it being a useful resource when learning this stuff.

Other useful resources are, of course, Michael Abrash's Graphics Programming Black Book[2] (despite its age, still a great read filled with useful information), and for a really deep dive into the graphics pipeline, ryg's (of Farbrausch fame) A Trip Through the Graphics Pipeline[3].

[1] http://www.trenki.net/content/view/18/38/

[2] https://github.com/jagregory/abrash-black-book

[3] https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-...

8bitpimp 2 days ago 1 reply      
I wrote a software implementation of OpenGL with the only goal of being able to play Quake 3 with it. I can vouch that it is an amazing learning experience. The tutorial here doesn't seem to approach the issue of performance, however, which really is another learning experience entirely.
exDM69 2 days ago 3 replies      
The title is a bit misleading, this is boilerplate code for a graphics programming course and it hasn't got much to do with OpenGL.

The articles describing the operation are much more interesting than the code itself.

It's just an inefficient triangle rasterizer. All it does is loop over the pixels in a rectangle covering a triangle, and for each pixel inside it calls a "shader" function. All the beef is in these 40 lines [0].

I don't know how they've done the texturing in all those pretty pictures (it's in the "shaders", not included here), but they don't calculate the partial derivatives required for correct, mipmapped texturing. Simple non-mipmapped perspective correct texture mapping can be computed in the shaders, with the usual caveats.

OpenGL is much more than a rasterizer, there's texturing, depth-stencil operations, blending, compute shaders and efficient memory management.

[0] https://github.com/ssloy/tinyrenderer/blob/master/our_gl.cpp...
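The bounding-box rasterizer described above can be sketched in a few lines. This is a hedged illustration, not the actual tinyrenderer code; the helper names are mine, and the inside test uses barycentric coordinates:

```python
# Minimal sketch of a bounding-box triangle rasterizer: loop over every
# pixel in the rectangle covering the triangle, and call a "shader"
# function for each pixel that lands inside.
def barycentric(a, b, c, p):
    # Barycentric weights (u, v, w) of point p w.r.t. triangle abc,
    # i.e. p = u*a + v*b + w*c with u + v + w = 1.
    (ax, ay), (bx, by), (cx, cy), (px, py) = a, b, c, p
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    if d == 0:  # degenerate (zero-area) triangle
        return None
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    return u, v, 1.0 - u - v

def rasterize(a, b, c, shader):
    xs, ys = zip(a, b, c)
    covered = []
    for y in range(min(ys), max(ys) + 1):        # loop over the bounding box...
        for x in range(min(xs), max(xs) + 1):
            bc = barycentric(a, b, c, (x, y))
            if bc and all(w >= 0 for w in bc):   # ...shade pixels inside
                shader(x, y, bc)
                covered.append((x, y))
    return covered

pixels = rasterize((0, 0), (4, 0), (0, 4), lambda x, y, bc: None)
print(len(pixels))  # → 15 covered pixels for this small right triangle
```

The barycentric weights are also what a real renderer would use to interpolate per-vertex attributes (color, UVs, depth) inside the "shader".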

edit: Someone in reddit pointed out that this is a translation of a Russian language course. The original Russian version looks to be a bit longer than the English translation (but I don't read Russian, so I can't tell if it is better): https://habrahabr.ru/post/249467/

fsloth 2 days ago 0 replies      
Wow, this is so elegant, short and sweet. It's the most beautiful code I've seen in a while - because it's pithy, to the point, and yet retains enough critical detail to be educationally valid. Thanks for sharing.
speps 2 days ago 0 replies      
Related but more efficient (includes a rasterizer) :


leni536 2 days ago 2 replies      
I remember once I needed to macro up a piece of software that needed an X server with OpenGL to run (really dirtily hack it up, with xmacro and stuff like that). I wish I could set up a headless X server with a dummy OpenGL renderer (which doesn't actually render anything), so it doesn't bottleneck on rendering that isn't used anyway. I guess it's even easier to write such an OpenGL implementation.

edit: Now I see it doesn't implement the OpenGL API though, the goals are obviously different.

stop1234 2 days ago 0 replies      
I just went through the lessons and all the code compiled and worked as expected. Nice and simple. As things should be.

Thank you for posting this online.

theoh 2 days ago 0 replies      
Stanford's past notes on polygon rasterization are interesting for historical background and algorithmic elegance: http://www.graphics.stanford.edu/courses/cs248-98-fall/Lectu...

Edit: Also this: https://graphics.stanford.edu/courses/cs248-08/scan/scan1.ht...

sklogic 2 days ago 0 replies      
jahnu 2 days ago 1 reply      

What's going on there with the template<> template<> ?

erichocean 2 days ago 0 replies      
If anyone has a link that shows the Vulkan equivalent (e.g. how pipeline states might be implemented, etc.) I would really appreciate it.

In particular, I'm very curious how tile-based deferred rendering would interact (positively!) with a Vulkan software rendering implementation by keeping all tile buffers for a render pass in the on-chip cache of a modern Intel CPU. It seems like Vulkan provides a better API for a software renderer than OpenGL for that reason, and I'd like to see that confirmed one way or the other.

riazrizvi 2 days ago 0 replies      
OpenGL is partly these widely available graphics algorithms. Thank you for explaining them so well here. But OpenGL also has a particular architecture that provides concurrency and extensibility, among other things. If we are talking about OpenGL specifically vs. other rendering engines, it would be good to explain that architecture to help folks understand why OpenGL is so widely used.
lbenes 2 days ago 3 replies      
Will this teach you how shader-based OpenGL works or the older fixed function pipeline rendering method?
valine 2 days ago 1 reply      
I would love a section on anti-aliasing. It seems to be the big thing missing.
latenightcoding 2 days ago 0 replies      
This looks amazing, I wish it was in C, but I will check it out.
Ontario announces that it will begin a basic income trial in 2016 sciencealert.com
282 points by evo_9  1 day ago   271 comments top 23
CydeWeys 1 day ago 18 replies      
Can someone explain to me why universal basic income seems to be more popular than negative income taxes these days? To use some examples I'm picking out of my hat:

1. Universal basic income: Everyone gets $10K per year.

2. Negative income tax: Everyone gets $15K per year, phased out linearly across an income of $60K (i.e. if you earn $0 you get $15K, if you earn $20K you get $10K, if you earn $40K you get $5K, and if you earn $60K then the negative income tax is fully phased out).

Why is 1 preferable to 2? Is it just that it's less susceptible to tax fraud? Note that the amount that an unemployed person gets in UBI is less, because the same amount of money is being distributed to more people, even millionaires.
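The two hypothetical schemes above can be sketched as a couple of one-liners (the dollar figures are the comment's own examples; the 25% implicit phase-out rate follows from $15K spread over $60K):

```python
def ubi(income):
    # Scheme 1: flat $10K per year, regardless of earnings.
    return 10_000

def nit(income):
    # Scheme 2: $15K phased out linearly over the first $60K earned,
    # i.e. an implicit clawback rate of 15_000 / 60_000 = 25%.
    return max(0.0, 15_000 - 0.25 * income)

for earned in (0, 20_000, 40_000, 60_000):
    print(earned, ubi(earned), nit(earned))
```

Under this framing, the comment's point is that the NIT pays the unemployed person more ($15K vs. $10K) for the same total outlay, because none of it goes to high earners.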

elcapitan 20 hours ago 7 replies      
While I can understand the benefits of basic income, it still bugs me that this might massively undermine democracy and turn out to be a point of no return for a society. Once every single voter has an incentive to vote for politicians that promise increases of their basic income, each election will turn into a competition on increasing that type of spending. It's basically a massive redistribution scheme. This is different from welfare systems, which only support a small number of people who actually need it.
thedevil 1 day ago 9 replies      
This basic income idea has been popping up a lot lately. And there's some merit to it, but there's some major problems too.

The biggest problem I see is giving people cash. Why not instead give each adult a housing voucher (just barely adequate for cheap housing) and each person $5/day in food stamps (or something similar), regardless of income?

Everyone gets a lower stress level this way. The risk of being homeless or hungry goes to near zero. It doesn't matter if you're unemployable, starting a business, or between jobs. You know you're going to be okay.

While giving people cash seems to have the same effect, it doesn't. At least here in the US, money problems are (mostly) a money-management problem. Many people use any cash they get to pay for whatever seems the most pressing at the moment - whether it's rent or a big screen TV. People buy the TV when they have money and rent isn't due yet, then have a little unexpected expense and can't pay rent. The stress level hurts them, hurts their families, causes increased expenses (e.g. payday loan).

My parents were like this, in six-figure-income years and in dead-broke years. It hurt us quite a bit. And "loaning" them money NEVER helped - they'd pay the mortgage today and then buy the TV when the next payday came in. And there's LOTS of people like this, which is why there's a payday loan on every corner.

Now, we've taken care of my mom by covering her housing and utilities directly. And her stress level is down a lot. It works great. This would have also helped me a lot when I was a student. And it means more startups - since all startups would be "ramen profitable" by default. And in the US, we could fund this with about a 10% tax (probably less if we took some funds out of SS, disability, section 8, etc.).

allengeorge 1 day ago 0 replies      
The linked budget announcement says simply that the government will begin consultations on how best to test and implement a Basic Income scheme in 2016, not that a trial will actually begin this year. And, given the so-so response toward Ontario's supplemental pension plan, I wouldn't take support for this scheme as a given.
d0m 13 hours ago 0 replies      
I'm from Montreal, where there's already income support for families not making enough money. This is great, but obviously some people abuse the system and use it to drink beer all day, so some Montrealers don't like the idea of using their taxes to give them money.

But one very important point that people miss is that giving basic income like that, even if abused, reduces crime. When you have no money and are desperate, you're more likely to start doing illegal things. Economically speaking, once you start thinking about the cost of more crime, you realize that it's a pretty good deal to give people a basic income.

Not everyone here agrees with what I just said - most people don't even know that - but I think that in itself is a great reason.

mladenkovacevic 1 day ago 2 replies      
I have nothing against basic income as long as there are incentives and opportunities to motivate people to do better for themselves either financially or creatively.

As an Ontario resident it'll be interesting to see if this goes into effect and what the long term results are.

Can we get another province to try a libertarian approach and we can compare notes in about 25 to 50 years?

nokya 1 day ago 2 replies      
We will be voting to implement the same mechanism in Switzerland in next June. Fear campaigns by right-wing parties are intensifying.
manishsharan 1 day ago 5 replies      
Given the fact that the Canadian economy isn't doing well and the government is running deficits, how are we going to pay for this ? What are we going to cut ? What taxes are we going to increase? Given the fact that value of CAD has dropped significantly which has led to increased food prices, how will this not hurt the working poor and the middle class ?
shurcooL 19 hours ago 1 reply      
I wonder how many people would use this as an opportunity to spend more time working on open source.
avz 1 day ago 3 replies      
The often raised argument in defense of UBI is the automation of production.

Well, work isn't just about producing goods. We work to solve problems. And there are plenty of problems and challenges that machines won't solve for us ranging from cancer and dementia to clean energy to global warming. Not to mention some nice-to-haves for the long-term like space colonization, life extension or nanotechnology.

Saying that humans should not need to work is like saying this is it. We're done here. This is the world we want.

Pxtl 1 day ago 0 replies      
A challenge with Ontario is that the province is already on hard financial times and the current government is politically beholden to the bureaucracy. The financial idea of mincome is that the government can shut down a bunch of over-managed social programs in favour of a unified simple payment direct to the residents...

...but there's not political will to shrink the government complexity and capture this savings, which means mincome in Ontario is the fiscal policy equivalent of this xkcd comic:


hippich 1 day ago 2 replies      
Simple question - in an ideal world where UBI replaces the welfare system, what happens to the person who takes their UBI check and spends it all in a casino, and the next day rent is due, health insurance is due, there's no food in the refrigerator, etc.?
refurb 21 hours ago 2 replies      
I wonder if "payday loan" companies would jump all over basic income? "Can't wait for your next month's basic income check? We'll get you CASH right now for only a small fee.* We're here to help you!"

*Annual interest rate of 1200%

vamur 1 day ago 3 replies      
Basic income should be done like in The Expanse (i.e., only for those with no/minimal income) and it should be dynamic - a percentage based on the current size of the economy. It should also be spent only on domestic products to stimulate the local economy. Otherwise, it will be another Ponzi scheme like the current pension systems.

Which would be a shame since it's a good idea and necessary due to technological advances.

shitgoose 23 hours ago 0 replies      
Being already over 300 billion CAD in debt, it only makes sense for Ontario to spend a bit more. It looks like the Ontario government has given up hope of repaying the debt, so who cares - a basic income more, a basic income less...
guylepage3 1 day ago 0 replies      
Definitely excited to see this trial take place.
foota 1 day ago 0 replies      
Looks like someone beat YC to it :)
ZoeZoeBee 1 day ago 0 replies      
The title of the article is completely misleading.


From the Budget:

>One area of research that will inform the path to comprehensive reform will be the evaluation of a Basic Income pilot. The pilot project will test a growing view at home and abroad that a basic income could build on the success of minimum wage policies and increases in child benefits by providing more consistent and predictable support in the context of today's dynamic labour market. The pilot would also test whether a basic income would provide a more efficient way of delivering income support, strengthen the attachment to the labour force, and achieve savings in other areas, such as health care and housing supports. The government will work with communities, researchers and other stakeholders in 2016 to determine how best to design and implement a Basic Income pilot.

I find it quite interesting that they are presupposing a universal basic income will strengthen attachment to the labor force instead of decreasing it. Human nature suggests otherwise.

marcoperaza 1 day ago 1 reply      
I think there's a real opportunity for a grand compromise between the left and right here. If a basic income/negative income tax is bundled with abolishing most or all other government handouts and retirement plans, small government minded people like myself could get behind it.

One thing I worry about is that this could cause massive inflation and a recession (stagflation) as people drop out of low-wage work in droves. What percent of society will decide to live on solely the basic income if it's high enough to pay for basic expenses? Work is virtuous and builds character. Idle hands are the devil's playthings.

And what would it do to our democracy if a huge portion of the population is living on someone else's dime and not even trying to join the workforce. Isn't it fair to call them children? What insight could they possibly have in the democratic process except to vote their own immediate monetary interests? I believe in universal suffrage, which is why this is a conundrum for me.

sageikosa 1 day ago 1 reply      
In social science, society is the guinea pig.
johnny_kinds 1 day ago 7 replies      
The issue with basic income is the long term. When more and more generations of people start to depend on it (I've seen it with welfare in my hometown), it becomes a crutch and stifles their future success.

Eventually, there won't be enough people giving back into the system and the whole thing will collapse. Before this happens, taxes will continue to be raised in people in lower income brackets.

The politicians love it though. It creates an instant voter base. Why would a person, receiving free money, vote for someone that will take it away?

Because of things like this, I wish we had laws in place requiring that all voters at least 1) work some sort of job (it doesn't matter what it is) and 2) show proof that they paid income taxes.

eliteraspberrie 1 day ago 7 replies      
For context, Ontario has no natural resources. We have always been an export economy. With globalization and automation, manufacturing is mostly gone, and it's creating social problems. We have a fundamentally different view of the role of government here, we believe government should promote quality of life and happiness. Yes it is socialism and we don't apologize.
ra1n85 1 day ago 6 replies      
Basic income seems to come from the right place from people that support it.

I do think it's misguided benevolence, though. I hope that largely removing adversity and creating dependence aren't viewed as trivial changes here. People need to be challenged. We need to consider the more subtle ramifications of this. Humans have always labored. Work is in our blood.

PyPy 5.0 Released morepypy.blogspot.com
283 points by mattip  3 days ago   61 comments top 10
fijal 3 days ago 3 replies      
Because everyone asked, I'm going to clear up a few things about PyPy3 support. Please keep the comments civil.

* We are not against working on Python 3 - it just happens that there is a lot of interest these days in things like numerics, warmup improvements and C extensions that we want to focus on.

* We essentially exhausted the Py3k pot. I personally think it delivered what it promised, despite falling short of the funding goals. It's crazy what level of expectations people have with crowdfunding - it's really difficult to find someone to deliver a big, multi-year project for 60k, even outside the States.

* We're closely watching py3k adoption - since we're always a few releases behind, we'll probably do a 3.5 after CPython 3.6 is out, but it all depends on good will of volunteers, who I have no control over.

* Money can easily change focus, but it would need to be a significant enough amount to actually commit to delivering a fast and compliant PyPy 3.5, not 5 or 10k

I hope this clears some things up; these opinions are my own and don't necessarily represent everybody in the PyPy project

EDIT: there is just over 8k USD left in the py3k pot. At $60 USD/h (official SFC rate) it's 146h. That's not enough to even fix the inefficiencies in the current version. We hope to use it to get to version 3.3
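fijal's back-of-the-envelope numbers can be sanity-checked in a couple of lines (the $8,760 pot balance is an assumption: it's the value consistent with "just over 8k" and the quoted 146h at the $60/h SFC rate):

```python
# Rough funded-hours arithmetic from the comment above.
# POT_USD is an assumption: $8,760 is the balance that reproduces
# the quoted 146h at the official $60/h SFC rate.
POT_USD = 8760
SFC_RATE_USD_PER_HOUR = 60

funded_hours = POT_USD / SFC_RATE_USD_PER_HOUR
print(round(funded_hours))      # -> 146

# For scale: weeks of one full-time developer at 40h/week
print(funded_hours / 40)        # -> 3.65
```

A few person-weeks of funding puts the "not enough to even fix the inefficiencies" remark in perspective.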

JelteF 3 days ago 2 replies      
I think it's a real shame that PyPy3 is not updated with the new releases of PyPy. It is still on 2.4.0.

I understand that the major sponsors of PyPy are interested in Python 2.7, but not updating PyPy3 for 1.5 years makes it seem like they have abandoned it.

AdamN 3 days ago 0 replies      
I'd like to use PyPy but all of my new projects are Python3.
wyldfire 3 days ago 0 replies      
Well, great job, team!

I get a free speedup for my non-numpy/scipy projects and that's flipping awesome. The 2.7 and 3.2 support is just fine for my needs. Your focus on the C API emulation seems totally appropriate to me.

If numpy and friends worked well on pypy, IMO there'd be little reason left to use CPython.

yahyaheee 3 days ago 2 replies      
Pypy is really cool, and I hope it becomes the default interpreter down the road. However, little py3 support makes me edge away from it for now. Hopefully, all pythons will merge in the near future
psandersen 3 days ago 1 reply      
Good to see PyPy progressing.

I mainly use Scikit Learn, theano, numpy and pandas; is PyPy able to work with the above, and likely to give any speedups at this stage?

aleksi 3 days ago 2 replies      
Why the major version change? Why 5.0 after 4.0.1?
Animats 3 days ago 1 reply      
What version of CPython does this match? The announcement doesn't say.
rafinha 3 days ago 0 replies      
no numpy, not interested ... will check again next version.
gaze 3 days ago 0 replies      
How is numpy doing? How about the sandbox?
If a thing is worth doing, it is worth doing badly chesterton.org
324 points by dang  3 days ago   102 comments top 24
oskarpearson 3 days ago 9 replies      
Is it just me, or have some of the comments here missed the point of this entirely? This isn't about MVP or Lean Product Dev.

It's about doing the things that are "worth doing". And about doing them yourself, instead of outsourcing them to someone else. Take responsibility for doing the things that are difficult but worth doing.

Things that people outsource:

Gym - people outsource their gym attendance to "the experts", personal trainers.

Their health - to "the professionals", be they doctors, vitamin salesmen, or chiropractors.

Music - to professional musicians.

Terr_ 3 days ago 3 replies      
For a bit more context, Chesterton is complaining about how the education system (1910) blindly processes girls as if they were facial-hair-impaired boys, and he digresses a bit into discussing creative/artistic play.

Here, I'll try to edit/snip/boil it down into something easier to read. Money-quote is at the very end. (Original text at http://www.online-literature.com/chesterton/wrong-with-the-w... )


All the educational reformers did was to ask what was being done to boys and then go and do it to girls [...] "Would you go back to the elegant early Victorian female, with ringlets and smelling-bottle, doing a little in water colors, dabbling a little in Italian, playing a little on the harp, writing in vulgar albums and painting on senseless screens? Do you prefer that?" To which I answer, "Emphatically, yes." [...]

There was a time when you and I and all of us were all very close to God; so that even now the color of a pebble (or a paint), the smell of a flower (or a firework), comes to our hearts with a kind of authority and certainty; as if they were fragments of a muddled message, or features of a forgotten face.

To pour that fiery simplicity upon the whole of life is the only real aim of education; [...] To smatter the tongues of men and angels, to dabble in the dreadful sciences, to juggle with pillars and pyramids and toss up the planets like balls, this is that inner audacity and indifference which the human soul, like a conjurer catching oranges, must keep up forever.

This is that insanely frivolous thing we call sanity. And the elegant female, drooping her ringlets over her water-colors, knew it and acted on it. She was juggling with frantic and flaming suns. She was maintaining the bold equilibrium of inferiorities which is the most mysterious of superiorities and perhaps the most unattainable. She was maintaining the prime truth of woman, the universal mother: that if a thing is worth doing, it is worth doing badly.

david-given 3 days ago 3 replies      
I've always preferred Douglas Adams' version: Some things you should care enough about to do badly.
wilshiredetroit 3 days ago 0 replies      
I guess this is the cycle:

1) "Some things you should care enough about to do badly." - Start as a hobby

2) If a thing is worth doing, it is worth doing badly. - You work on it some more but you are still mediocre at it

3) "If it is worth doing, it is worth overdoing." - You work at it, again and again and you have a ton of iterations

But you get tired and you question what you are working on; #4, #5 and #6 creep into your head

4) "If a thing is not worth doing at all, it's not worth doing well."

5) "If doing something isn't worth the effort, doing it well won't fix that."

6) "There is nothing so useless as doing efficiently that which should not be done at all."

I see 4, 5 and 6 a lot in "features". Techs spend too much time on features that no one really cares about. It's the same for crappy movies: lots of talented people work on really crappy projects, and 99% of the time it's not their own passion project. In today's work-world, we are forced to do great work on really vapid stuff.

I see 1, 2 and 3 in really passionate people, and what the world gets is iterations, variety and meaningful work. The world is better for it - scientists, entrepreneurs and artists do this. Many variations and angles of an idea. A lot of the time, the body of work becomes meaningful.

cafard 3 days ago 1 reply      
Years after encountering this in Chesterton, I read The Soul of a New Machine, and found Tom West quoted as saying "Not everything worth doing is worth doing well." I don't know whether he had read Chesterton or whether he simply made an obvious additional turn on a turn of phrase.
kazinator 2 days ago 0 replies      
If we stick in the word "even", it's clearer: worth doing even badly.

If something is worth doing, it may be worth doing even badly, rather than insisting it must be done well, resulting in paralyzing inaction.

RangerScience 2 days ago 0 replies      
Upon consideration, I think maybe it's clearer in the inverse:

If you have to worry about how well you do it, it's not worth it for you to do.

I think this applies equally well to the things you love doing - raising kids, making art - as it does to clearing blockers and doing-things-what-need-doing.

Note that the "have to" is a key part - most people will and/or should actually worry about how well they do - the difference is whether you're required to worry.

This definitely doesn't apply to all situations - it misses the entire field of "things you're good at" - but I found it decently insightful for my personal life.

arvinsim 3 days ago 1 reply      
An inspiration for amateurs everywhere. It is a useful idea at the start. But at some point, you will acquire an itch to hone your skills. By then, you probably don't need to worry about doing badly.

P.S. I love this guy's wit. I came across his works while researching Catholicism, and through some references from Neil Gaiman. But I would never have thought that this writer would end up on HN.

EC1 3 days ago 2 replies      
I got stuck starting a company with my friend and wasted months coding everything to absolute perfection with the latest bleeding-edge tech when we could have coded it in a week; then we missed our chance because a huge competitor swept in with lots of VC money and ate the market. So it goes.
msutherl 3 days ago 1 reply      
Before realizing that this was a Chesterton website, I took the word "Quotemeister" to indicate that this was a website called "The Quotemeister", wherein, I guessed, popular quotes were analyzed to get at their true meaning.

Does such a website exist?

tim333 3 days ago 0 replies      
As some context from Wikipedia:

Chesterton is often referred to as the "prince of paradox."...

"Whenever possible Chesterton made his points with popular sayings, proverbs, allegoriesfirst carefully turning them inside out."

jimkri 2 days ago 0 replies      
>The line, "if a thing is worth doing, it is worth doing badly," is not an excuse for poor efforts. It is perhaps an excuse for poor results. But our society is plagued by wanting good results with no effort (or rather, with someone else's efforts).

This relates to weight loss and the dependency on weight-loss drugs. I see ads all the time for new weight-loss supplements, or new workout machines, or fat-burning belts, stating that this will burn fat faster, with less effort. Society now thinks that in order to look fit you must take a fat-burning pill, or some crazy concoction, to actually lose weight. >We have left the things worth doing to others, on the poor excuse that others might be able to do them better. But in reality it just takes a healthy lifestyle to be fit, so if losing weight is worth doing, it is worth doing badly.

bovermyer 2 days ago 1 reply      
I think I prefer this quote without the context of the original author, given his attitudes and motivations in writing it.

Are there any quotes that you know of where you feel similarly? Where you have transplanted the quote into your own context, rather than the original context, and felt better about it as a result?

frandroid 3 days ago 0 replies      
"As for your second question, the Quotemeister generally tries to avoid explaining what Chesterton means. For two reasons. [...] Two, we think students should write their own class assignments rather than having us do it for them." Bwahaha
unabst 3 days ago 2 replies      
Steven Tyler of Aerosmith:

"If it is worth doing, it is worth overdoing."

... and you know he would say that.

tacon 2 days ago 0 replies      
I was thinking this must be the original source of a Zig Ziglar quote:

"Anything worth doing is worth doing poorly until you learn to do it well."

But the two quotes appear unrelated.

Kenji 3 days ago 1 reply      
Or another way of looking at this is the following. If you take on new, large projects, you're often worried about the problems you face, how things are going to work, if your skills, effort and knowledge are enough. More often than not, you will end up with something reasonable upon completion (if you have enough self-control to get that far) and even if you fail miserably, you learned valuable lessons. So it is not a bad thing to confidently step into new directions, even if there's a risk or certainty that it won't be better than bad.

Example: I never baked a cake (ok that's not a large project ;p). I will bake a cake. I know it is most likely not going to turn out well. The result of my work might even be garbage that I have to dispose of. I will bake a bad cake. But the next cake will be better, I learned something from my mistakes.

ctdonath 3 days ago 0 replies      
If a thing is worth doing, doing it badly is still worthwhile because what matters is that it is done - to whatever degree.

Were doing it badly not worth doing at all, then the thing is not worth doing - only doing it well is.

wilshiredetroit 3 days ago 1 reply      
What I find really interesting after mulling it over is that there's so much crap out there - being made well by really, really talented people. I wonder if there's an easy heuristic to figure out if something is worth doing. I think we've exhausted the whole "do what you are passionate about" and "solve problems" bit as a source for figuring out what to work on. Or have we?
cmdrfred 2 days ago 0 replies      
Brain surgery is one exception.
EpiMath 3 days ago 0 replies      
If it's not worth doing, it's not worth doing well.
nether 3 days ago 2 replies      
IOW: Perfect is the enemy of good.
applecore 3 days ago 2 replies      
Charlie Munger's quote is also insightful: "If a thing is not worth doing at all, it's not worth doing well."
vijayr 3 days ago 4 replies      
Doesn't it depend on what it is though? If my todo app breaks, no harm done other than a few irritated users. But if a dialysis machine breaks...
Re: Obama on Fetishizing Our Phones jonathanmh.com
335 points by jonathanmh  6 hours ago   265 comments top 28
tomlongson 5 hours ago 10 replies      
A key promise of Obama's campaign for the presidency was to run the most transparent government - however, the only person to really deliver on that promise was a whistleblower. Secret courts, secret domestic spying, and now calls for weakening the digital equivalent of the safe show that he either was not honest about transparency, or has radically changed his opinion since becoming POTUS.

Maybe it's that he decided to use his political clout to pick healthcare as his signature in American history, not wage war against the NSA, but either way it saddens me to have campaigned for someone who has empowered a surveillance state instead of fighting against it.

Liberty literally means "freedom from arbitrary or despotic government or control", and freedom in the information age means the liberty to communicate and store information. Anything to compromise that makes us all more vulnerable to control in all parts of our lives, not just those stored in zeros and ones. I believe America can be "Land of the free, home of the brave", but not without digital liberty.

eigenvector 4 hours ago 3 replies      
> So if your argument is strong encryption, no matter what, and we can and should, in fact, create black boxes, then that I think does not strike the kind of balance that we have lived with for 200, 300 years.

Mr. Obama, you are the one who upset the balance with secret, dragnet surveillance of nearly all communications. That's not the bargain the public has had with law enforcement for the last 300 years. Widespread, end-to-end encryption is simply the natural reaction to the arms race you started. We would have never come to this point if the government had kept surveillance within court-supervised bounds.

1024core 6 minutes ago 0 replies      
I hate it when leaders get all upset when the balance is shifted away from them, and yet are perfectly fine when the balance tilts in their favor.

FTA: "then that I think does not strike the kind of balance that we have lived with for 200, 300 years."

200 years ago, it was not possible to cast a dragnet and catch everyone who was doing Something Bad(tm). The government had to get a warrant to open mail; today, the NSA can sift through billions of messages (metadata, they say) in a second.

Even considering US Mail: you could not keep track of who was sending whom mail, at scale. But today, every letter that is mailed has its front and back scanned (for reading the address); but more importantly, these images are saved for future use.

All of this is possible thanks to technology. And when the balance was tilting in their favor, the Establishment was quite happy. But when the balance tilts the other way, suddenly they're crying like a spoiled child whose toys have been taken away.

You can't just throw tantrums when things don't go your way. If the technology permits E2E encryption, they'll just have to live with it and find other ways to catch criminals.

cromwellian 4 hours ago 10 replies      
Obama has a meta-point, however, that proponents of the absolutist position don't seem to want to face. Democracy relies on transparency. Many of the progressives who are rallying in support of an absolute right to privacy are some of the same people who constantly criticize Swiss bank accounts and Cayman Islands financial shenanigans.

But if companies were to implement the same sorts of impenetrable encryption, on every device, all the way down to the corporate desktop, in a way that not even the company executives themselves can read the email of their own employees, then lots of regulations the government applies to companies would be mooted.

Taken to the extreme, if all communication is digital and 100% impregnable, and people maintain good OpSec, then it will be hard to impossible to pursue lawsuits or regulatory investigations into malfeasance, because there'll be no paper trail.

The end result of going full tilt on crypto is cryptoanarchy. This was pretty much well argued in the 90s among the cypherpunks community. Most of the libertarians and Objectivists were salivating over how strong crypto protocols would end fiat currency, end taxation, end regulation, and so on.

So how far as a society are we willing to take this? Does it just extend to private data? Does it extend to transactions? To payments you make for things? To transfers of money? To business transactions? Will Democracy be able to audit nothing of the interactions of citizens or our institutions in the future?

You don't have to agree with Obama's position to see that cryptoanarchy and Democracy are on a collision course, and it makes sense to discuss the possibilities openly without just plugging your ears and taking an absolutist position that demonizes anyone who disagrees.

thom 5 hours ago 2 replies      
We're going to fight hard so our grandchildren have really secure email in a world where everything they do and every word they say is uploaded to the internet, transcribed and annotated in realtime by swarms of drones controlled by other kids that, only 50 years earlier, would have been at home doxxing people on Twitter.

Privacy will die, not because it's undesirable or a bad idea, it'll die like copyright and DRM - because it's technically and economically easy to defeat, and people will be motivated to do so. What's more, those people will be hard to catch - after all, the drones will be communicating over very strongly encrypted channels.

[Please refute - I genuinely have nightmares about this future]

downandout 4 hours ago 3 replies      
In the end, they are going to introduce laws that make it illegal to implement end-to-end encryption. This sucks, but it's going to happen. France is already moving to do it [1], and in the US, John McCain and others are also calling for similar laws [2]. They will all start off saying that it will be controlled carefully etc., as Obama keeps saying, and then it will be used with reckless abandon.

What this essentially means is a move to Android for criminals, terrorists, and anyone that wants privacy (since Android allows installation of apps that have not been approved by a gatekeeper bound by the laws of the countries it operates in, whereas iOS does not by default). Open source Android apps with strong encryption will be built in countries without such laws. All of this will likely be the downfall of a few lazy drug dealers that don't want to give up their iPhones, but since the cat is out of the bag and apps with end-to-end encryption already exist and will continue to be built, the governments making these moves will not actually catch any reasonably intelligent terrorists that install and use these apps. They will, however, gain exactly what they want: the ability to conduct surveillance on most people in the world whenever they want.

[1] http://fortune.com/2016/03/04/french-law-apple-iphone-encryp...

[2] http://www.digitaltrends.com/computing/senator-mccain-joins-...

tudorw 5 hours ago 4 replies      
One strong point that comes out is the reference to the transitory nature of governments: just because you trust yours now (!) does not mean you can trust future incarnations. Don't give this kind of power to an unknown.
thoughtsimple 4 hours ago 1 reply      
This is akin to the politicians that don't want to believe the science around global warming. They believe that if they just deny it, they will turn out to be correct. Obama is doing the same with "golden key" encryption. It is not that the experts are correct and know what they are talking about, it is just that they are disengaged and being stubborn.

"I'm the President of the United States of America and if I say that there must be math that gives me what I want. If you don't invent it, you are disengaged."

joelhaus 6 hours ago 1 reply      
Rather than rhetoric, it would be nice to see more HN arguments based on the strongest possible counterarguments.


plcancel 4 hours ago 1 reply      
A couple of other fun quotes:

"And what we realized was that we could potentially build a SWAT team, a world-class technology office inside of the government that was helping across agencies. Weve dubbed that the U.S. Digital Services."

Yes, that's a great analogy! Go with that!

"And this was a little embarrassing for me because I was the cool, early adaptor President."

Cooler and adepter!


SwimAway 5 hours ago 3 replies      
Obama, our manipulative word artisan of a president, yet again attempting to use strong language (i.e. "fetishizing") to polarize our view.
aldeluis 5 hours ago 2 replies      
This interview with Snowden was broadcast tonight in Spain. It ended just minutes ago.


a3n 3 hours ago 0 replies      
This is not a fetish. This is a conflict between a government that has gotten away from the "we serve the citizens" mentality and gone to "we'll do anything we want, routing around the constitution whenever we want, and use 'serving citizens' as the excuse," and the citizens, who should be able to specify, to any level of detail we want, exactly how those "servants" will serve us. It seems to be sliding away from us.
Mandatum 4 hours ago 0 replies      
If law was adopted to require backdoors - those who are privacy conscious would simply move their data to countries that allow for strong encryption as well as deniability.

It'd still pose an issue when people are accessing their data; however, depending on your setup this will be very hard to prove from a third party's perspective, given ample security precautions (i.e. using an offshore VPN all the time, data never fully accessed locally).

For larger tech companies, I'd assume setup of a new company structure offshore and sensitive data handling to be "outsourced" offshore too (ie parts of the EU, other OECD countries).

brudgers 4 hours ago 0 replies      
This has been the default position of the US government across administrations at least since the 1976 Arms Export Control Act. Nothing has changed except that "the terrorists" have replaced "the communists".

It doesn't matter who says it.

jonathanmh 5 hours ago 0 replies      
I honestly didn't expect this to be seen. /me is humbled and reading comments
wl 2 hours ago 0 replies      
All this talk about "warrant-proof spaces" presumes that they're something new, when in fact they aren't. In the past, conversations in private tended not to be reduced to tangible form and were lost to law enforcement unless they had the foresight and the warrant to bug the location. Now that so many of our communications are mediated by technology, they are by necessity reduced to a tangible form. Secure end-to-end crypto merely takes us back to the status quo ante.
exabrial 1 hour ago 0 replies      
"You don't need encryption"

"You don't need a gun"

Oddly enough both are classified as munitions.

kailuowang 5 hours ago 3 replies      
I would like to see more detailed logic in the original post - e.g. why specifically the parallel between the physical world and the digital one is flawed.
basicplus2 3 hours ago 0 replies      
Obama, like all recent US presidents, is just a puppet.
studentrob 3 hours ago 0 replies      
Let's get the facts straight about encryption and security. The DOJ needs our help, and we need theirs.

We need a grassroots movement here. I know we have the EFF and Apple and a slew of others. But we all need to be writing about this to have our voices heard.

I am in between projects and writing about this extensively online. Would anyone like to work together to organize facts and promote discussion in a concerted manner? The goal would be a) to make a concise message that is understandable by a non-techie, b) back it up with facts and primary sources, and c) seek out public figures who can share our message. I have a running summary of events here, which I will put in a GitHub repo [1]

Dear technologists:

The task of educating the public and our government on encryption may be even harder than you think. Everyone needs to understand the issues at stake in order to make up their own mind, and it could take years to educate the general public about encryption.

We can expect to continue seeing terrorist attacks in the news regardless of what laws Congress passes. This much we know, and this is, of course, out of our control. However, uninformed law enforcement will blame encryption and they will blame technologists for not allowing them to catch these attacks. Unless all law enforcement truly understands the technology, they will always blame citizens for fighting for their right to privacy.

Of course, we know this is about security vs. security, not security vs. privacy. Privacy is a secondary focus for many. But law enforcement believes our primary focus is privacy.

My primary concern is that law enforcement does not know how to keep us safe in a world where criminals can sometimes communicate with smartphones across the world in a way that cannot be monitored with a warrant. Regardless of whether Cyrus Vance, James Comey, Loretta Lynch or Obama truly understand this or if they are putting up a smoke screen, the fact is that law enforcement across the country trust them the most. Non-technologists will be more moved to understand the security and economic implications of forcing backdoors upon Americans and US phone manufacturers. For the most part, they are not going to see eye to eye with us on privacy concerns. Lindsey Graham has already changed his view. We can share facts with others and let them make up their own minds.

Some damage is already done. The fact that Vance and Comey have been fighting this for so long is going to make it difficult for them to go back and convince officers of the law that technologists were right, and they were wrong. Many officers will continue to feel snubbed by the tech community.

If we're to advance to the next level of our mutually trusting society, we must all understand encryption technology and its implications. To the extent that we do not all understand encryption, and the ease of which it can be used regardless of government mandates, we will continue infighting and not progress together.

The idea that technologists feel the issue is black and white or absolutist is absolutely incorrect :-). Math is black and white, but our public safety and security is not. It is a complex equation that must be balanced, and we have that focus just as President Obama does. The difference between us and the DOJ is we understand a few more pieces to the equation. I'm open to the idea there are pieces that technologists do not know about, and I encourage the administration to share these details with us. Until all the details are on the table, we won't be able to come up with a solution together. Let's focus on discussing and sharing the variables and their weights. Given information, people can make up their own minds.

If backdoor laws are passed, it's not the end of the world, but our industry will suffer while non-technologists struggle to understand why terrorist attacks continue to occur. It'll be another 4-8 years until we can dig ourselves out of that hole. Let's keep the great country we have and bring facts to the table for open discussion.

[1] https://www.reddit.com/r/SandersForPresident/comments/49otvu...

awqrre 3 hours ago 0 replies      
I used to think that Bush was bad and Obama was good... I'm really upset that I was wrong and that now Bush appears to be the better of the two.
shitgoose 5 hours ago 1 reply      
so they tell us that we have to arrange our private lives in such a way that it would be easier for them to investigate/persecute us, if sometime in the future they decide that we are guilty of breaking their laws.

[with great sadness]: how low have we fallen if we are seriously discussing this instead of grabbing pitchforks.

x5n1 5 hours ago 0 replies      
Obama, the technology, sir, is absolutist. You can have it one way, with privacy, or another, complete lack of it. That's how things are, and you are, with all due respect, stupid for arguing otherwise.
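As a toy illustration of why the math really is all-or-nothing, here is a stdlib-only one-time-pad sketch (deliberately minimal, not a production scheme): with the key, decryption is trivial; without it, the ciphertext is consistent with any plaintext of the same length.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"meet at noon"
key = secrets.token_bytes(len(message))   # truly random, used once

ciphertext = xor_bytes(message, key)
assert xor_bytes(ciphertext, key) == message  # with the key: trivial

# Without the key there is no "partial" access: any same-length
# plaintext corresponds to SOME key, so the ciphertext alone
# carries no information about which message was actually sent.
decoy = b"flee at dawn"
fake_key = xor_bytes(ciphertext, decoy)
assert xor_bytes(ciphertext, fake_key) == decoy
```

There is no knob here that lets a third party read "a little bit" of the traffic; either you hold the key material or you don't.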
Patronus_Charm 4 hours ago 1 reply      
Obama, always telling people what to do. It's just not a good look.
pasbesoin 5 hours ago 1 reply      
Fetish. Rubber hose "security". Coincidence?
ecma 5 hours ago 3 replies      
This is incredibly naive bordering on puerile. To suggest that POTUS' view on this is without nuance is to miss his point. POTUS went on to cite existing warrant mechanisms and their underlying principle:

"And we agree on that, because we recognize that just like all of our other rights ... that there are going to be some constraints we impose so we are safe, secure and can live in a civilized society."

I'm not suggesting that this means POTUS and the government have the right answers at the moment. Despite that, we can't ignore the important role law enforcement plays in society, the requirements in support of their role, and the complexities surrounding the right to privacy. We need people advocating for the right balance, not just getting each other frustrated.

OP, being from another country, may have worries beyond US law enforcement alone. This is one of the /many/ complexities in this space.

mozumder 6 hours ago 9 replies      
I think a lot of libertarians miss the fact that many of the communications-monitoring measures the administration is proposing include authorization by court order.

It's not like your communications are being monitored by random government employees at will for no reason. There's a specific safeguard here for personal privacy, and that's through a court order.

If the government is monitoring your communications, then there's a pretty damned good reason, as determined by a judge.

Sure, you might say that final safeguard is not enough, or is susceptible to corruption, but once you do that, you cease to be able to function in a society.

Judicial review is the "trust zone" that citizens are expected to have in society. If you don't trust judicial review, then there's no hope left for you to function in a normal society filled with other people. If you don't have such a "trust zone" in government, then you are basically forced to build your own army to protect you, since you don't trust government.

Since having your personal army is stupid, your best option is to make sure judicial review cannot be corrupted.

Fred Brooks retires dailytarheel.com
286 points by cperciva  3 days ago   72 comments top 21
ericboggs 3 days ago 5 replies      
I worked part-time in desktop support at Sitterson Hall, home of the UNC Computer Science program, when I was an undergrad in the late 90s / early 2000s. My team supported Windows, hardware, printers, etc. I distinctly remember closing help tickets for Prof Brooks (and Matt Cutts while he was a PhD student).

My fellow undergrad tech support doofuses and I knew that Prof Brooks was a god and thus walked on eggshells when we were around him...which we quickly learned was totally unnecessary. He was incredibly friendly, gracious, and encouraging. A true Tar Heel.

Congrats to Prof Brooks.

officialchicken 3 days ago 1 reply      
I think reading this book[1] is even more important than learning to use a keyboard (and mouse) with the intention of creating software or any complex system.

Knowing your limits is one thing, but understanding why/how they are being manipulated by outside forces (e.g. overestimating ability) is another. And how to counter those forces is also included in these pages.

Thanks for the sanity and well-managed project advice, Fred!

[1] http://www.amazon.com/Mythical-Man-Month-Software-Engineerin...

henrik_w 3 days ago 2 replies      
Most people associate "The Mythical Man-Month" with Brooks's law: adding people to a late project makes it later. For me, the best part of it is one page at the end of chapter one, entitled "The Joys of the Craft".

It is excellent on what makes programming so great: http://henrikwarne.com/2012/06/02/why-i-love-coding/

jgrahamc 3 days ago 15 replies      

 "Here's Fred Brooks, this giant. I mean, made IBM, adviser to presidents, all this stuff. And this lady is looking for directions, so he walks with her out to the street and down the street to show her where she needs to go," Bishop said.
Isn't it sad that this was deemed even worth reporting? Why assume someone like Fred Brooks wouldn't do that?

tarvaina 3 days ago 0 replies      
If UNC had hired 612 people instead of him, we would have had the job done in a month!
sizzzzlerz 3 days ago 1 reply      
The MMM was first published in 1975. I began working in the industry in 1978 and first read his book around then. Thirty-eight years later, we still try to fix late programs by adding people. Brooks wrote the seminal, magnificent book on project management but he's still a voice, crying in the wilderness.
zabouti 8 hours ago 0 replies      
Way back when the CS department was still in Old West, I remember seeing Dr. Brooks at different lectures and talks, always taking notes. I try to emulate his example but I'm nowhere as consistent as he was. At least in my old age I'm still trying to learn things, just as he is.
superdude264 3 days ago 0 replies      
Just last Spring, he lent me his copy of "What Color is Your Parachute" and invited me into his office to discuss two job offers I was contemplating. He did all this after he passed by the CS library and saw me looking for something.
PaulRobinson 3 days ago 1 reply      
He was in attendance at the Turing 100 conference a few years back - http://curation.cs.manchester.ac.uk/Turing100/www.turing100.... - which I was fortunate enough to attend.

It was full of great names. Roger Penrose, Donald Knuth, Gary Kasparov, Vint Cerf, Tony Hoare, etc.

Brooks was one of the speakers who seemed really interested in talking to delegates in coffee breaks and sharing stories. A lovely man, and his retirement is well deserved. He has shaped the industry more than any other attendee, even if others may have contributed more to the science, so to speak.

oneeyedpigeon 3 days ago 1 reply      
Macbook Air - check. Tie and cardigan - check. Is Fred Brooks one of the original hipsters? ;-)
ScottBurson 3 days ago 1 reply      
Brooks' famous essay "No Silver Bullet" is worth a (re-)read [0]. I still think AI and Automatic Programming will eventually change the face of software development in a bigger way than Brooks thinks possible; but I can't tell you when it will happen.

[0] http://worrydream.com/refs/Brooks-NoSilverBullet.pdf

tiernano 3 days ago 0 replies      
> Although Brooks officially retired in 2015, Jeffay said he is still active in the department. "He says 'I didn't retire. I just went off the payroll,'" Jeffay said.

I like that.

AKrumbach 3 days ago 2 replies      
I have, sitting at my elbow right now, a copy of Mythical Man-Month, as part of a mini-bookshelf of the ten books I found most influential in my career / wish to share with my co-workers.

[For the curious: http://i.imgur.com/CGv9PGc.jpg ]

cwingrav 3 days ago 0 replies      
I had an opportunity to spend a day with him and his VR research team a few years back. Very exciting. Insightful. I loved his contributions to Software Engineering, but few know how much he impacted Virtual Reality as well!
mathattack 3 days ago 0 replies      
The Mythical Man Month [0] hit me at a perfect time - I was working at a company that was obsessed with tracking everything in man-months without considering who was doing the work, or when they were added. The book gave me the academic support to back my intuition when I would push back on management.

Now on the other side, I take his advice in There is No Silver Bullet [1] very seriously. Improving software engineering is a slog, not a shiny buzzword.

My favorite quote [2] of his: "The most important single decision I ever made was to change the IBM 360 series from a 6-bit byte to an 8-bit byte, thereby enabling the use of lowercase letters. That change propagated everywhere."

I guess the one surprising thing for me was that he was still actively working. Even 15 years ago I thought of him as a grand dean from a past generation. Great to see him stay so vibrant for so long.

[0] https://en.wikipedia.org/wiki/The_Mythical_Man-Month

[1] https://en.wikipedia.org/wiki/No_Silver_Bullet

[2] https://en.wikipedia.org/wiki/Fred_Brooks

svec 3 days ago 0 replies      
I saw him speak on "A Personal History of Computers" last year and wrote about it here: http://chrissvec.com/fred-brooks-talk-a-personal-history-of-...

Brooks's overview: "I fell in love with computers at age 13, in 1944 when Aiken (architect) and IBM (engineers) unveiled the Harvard Mark I, the first American automatic computer. A half-generation behind the pioneers, I have known many of them. So this abbreviated history is personal in two senses: it is primarily about the people rather than the technology, and it disproportionally emphasizes the parts I know personally."

It was a great talk covering his whole career. A video of the same talk is here: http://www.heidelberg-laureate-forum.org/blog/video/lecture-...

tarheelredsox 3 days ago 0 replies      
Had Fred for Advanced Computer Architecture back in '89 when he was writing his book. Great class and a fantastic prof; loved the anecdotes and details on why certain decisions were made for various iconic computer systems. Oddly enough, my mother had him as an advisor when she was in grad school working on master's #2.
girkyturkey 3 days ago 0 replies      
Wow, this man is a true hero. What a humble, outstanding man who built all this from the ground up! Thank you, Fred, for everything you have done. It's great to see that fame does not change everyone.
crdoconnor 3 days ago 0 replies      
It's sad that his message never really filtered through. I've worked at more places that thought you could speed up project development by throwing developers at it than otherwise.
carlsborg 3 days ago 0 replies      
I will let you in on a secret. Brooks's best work isn't The Mythical Man-Month; it's a 2010 book called The Design of Design.
coverband 3 days ago 1 reply      
Best title I've seen on HN yet ;-)
A SimCity inspired city builder where you design an MMO RPG tigsource.com
382 points by doener  2 days ago   79 comments top 19
aaronjb 2 days ago 8 replies      
Hey everyone, dev here. Just got wind of this thread on twitter. Happy to answer any questions. If you're interested you can find out more here:





nyandaber 2 days ago 5 replies      
Reminds me of Dungeon Keeper, an old PC game where you have to build your own dungeon, with rooms for your monsters, a treasure room, a magic room to research spells, etc. Heroes try to invade your dungeon, and you also have to fight neighbouring dungeons. One of the cool features was that while monsters were controlled by an AI, you could take manual control of one, switching to a first-person view and playing like an FPS game.

So yeah, seeing this feels like Dungeon Keeper meets Minecraft. Which could be very interesting if executed properly, but it's not going to be easy.

rl3 2 days ago 1 reply      
This is one of those fairly simple, yet brilliant ideas that make you go "Why didn't I think of this?" Granted, tycoon-style game ideas are a dime a dozen, but few are this novel or meta.

Superhot is another recent game that also meets the criteria; time advances only with player movement. Dead simple core concept. I suppose Minecraft probably stands as the most famous example though (which is ironic considering this game's art style and game mechanics appear to have been heavily influenced by it).

PhasmaFelis 2 days ago 2 replies      
The concept reminds me a bit of Majesty: The Fantasy Kingdom Sim, in that you're loosely overseeing a bunch of AI adventurers running around and getting into trouble. Of course it's coming at the idea from a very different direction. Mostly it's just that Majesty holds up astoundingly well for a 15-year-old sim game, and I've been thinking about it lately. :)
egeozcan 2 days ago 2 replies      
I always dream about an RTS where one gets to be the DM (God, Gaia... whatever you call it) where computer players fight against each other. I love scripting maps to make current games (especially AoEII and C&C Generals) work like that. I'm hoping this is, in a way, something like that. I know an RPG is played in the map (city) you design, but still. I'll give it a try.
emehrkay 2 days ago 2 replies      
This is a random question, slightly off topic, but I was looking at the No Man's Sky videos and wondered how video game programming worked with regard to a server and interfacing with multiple users at once. Does the team design the game to run on the console and also a version to run on the server or is it two separate applications each with its own set of business rules?
ceejayoz 2 days ago 0 replies      
There's a subreddit now: https://www.reddit.com/r/mymmo
CaptSpify 2 days ago 1 reply      
reminds me somewhat of Towns: http://www.townsgame.com/

I think this shows a lot more promise though

Edit: As pointed out below, don't buy Towns unless you know you're buying an abandoned beta-stage game. Good idea for a game, but, IIRC, the dev got kickstarted, took the money, and left.

Vekz 2 days ago 0 replies      
This is awesome. It's very similar to one of my favorite games, Majesty: The Fantasy Kingdom Sim.


I've been craving a new game in this genre for a while. I love the 'idle' game play where the world evolves. Point me to the crowdfunding page!

rodionos 2 days ago 1 reply      
Can someone please build a new SimCity? I'm missing it badly and I'll pay anything. This was a game that had meaningful educational value (real estate development, microeconomics) coupled with entertainment. I didn't like the people part (The Sims) as much as the city planning.
alexc05 2 days ago 0 replies      
This is really cool! What engine are you building in?

Have you considered approaching Sony's indie-developer program?

As far as I can tell, on this generation they are being incredibly supportive of indie developers ... and I would 100% pay for an indie game like this on PS4.

doener 1 day ago 0 replies      
omnivore 2 days ago 0 replies      
Looking forward to seeing the progress. Was an active SC4er and really enjoyed Skylines and not really an MMO guy at all. I just love sandbox simulations and this seems like a super unique idea and love your approach. Keep up the great work!
herbst 1 day ago 0 replies      
This looks really nice, like a game I will waste hours upon hours on. Will it support Linux?
andremendes 2 days ago 0 replies      
An MMO tycoon/simulator, what a cool idea! I couldn't run the online version, but it looks impressive (guess my setup is missing the Unity plugin).
swozey 2 days ago 0 replies      
Does anyone remember idleRPG on IRC?
owenwil 2 days ago 0 replies      
Ugh, I really hope this is released publicly!
akamaozu 2 days ago 0 replies      
Interesting idea!

Never seen anything like this before.

twoquestions 2 days ago 0 replies      
I'll have to give this a shot later!
Graph Databases 101 cray.com
274 points by BooneJS  4 days ago   101 comments top 10
gtrubetskoy 3 days ago 2 replies      
I spent a lot of time figuring out how to deal with a large graph a couple of years ago. My conclusion: there will never be such a thing as a "graph database". There are many efforts in this area, someone here already mentioned SPARQL and RDF, you can google for "triple stores", etc. There are also large-scale graph processing tools on top of Hadoop, such as Giraph, or GraphX for Spark.

For the particular project we ended up using Redis and storing the graph as an adjacency list in a machine with 128GB of RAM.

The reason I don't think there will ever be a "graph database" is that there are so many different ways you can store a graph, and so many things you might want to do with one. It's trivial to build a "graph database" in a few lines of any programming language - graph traversal is (hopefully) taught in any decent CS course.

Also - the latest versions of PostgreSQL have all the features to support graph storage. It's ironic how PostgreSQL is becoming a SQL database that is gradually taking over the "NoSQL" problem space.
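The "few lines of any programming language" point is easy to demonstrate. Here is a minimal in-memory sketch (Python; the node names are made up for illustration) of the same shape the Redis setup described above holds - an adjacency list - plus a breadth-first traversal over it:

```python
from collections import deque

# A toy "graph database": each node maps to the list of its neighbors,
# i.e. an adjacency list kept in a plain dict.
graph = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": ["d"],
    "d": [],
}

def bfs(graph, start):
    """Return the nodes reachable from `start`, in breadth-first order."""
    seen = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order

print(bfs(graph, "a"))  # ['a', 'b', 'c', 'd']
```

In a Redis-backed variant, the dict lookup would become a set or list fetch per node; the traversal logic stays the same.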

valhalla 3 days ago 2 replies      
If anyone's curious about Network Science/Graph Theory in general here's a free online textbook used by a grad student friend of mine


valine 3 days ago 14 replies      
Question as someone new to graph databases: Are there any open source graph databases worth looking into?
AdamN 3 days ago 2 replies      
Everybody's focused on graph databases here, but let's talk about Cray! One of the most forward-thinking computer technology companies ever to exist is starting to get out there again. If they got a few hundred million dollars from an outside investor, they could do friggin' incredible things. They already do incredible things, but they're not out there in the way they so easily could be.
amirouche 3 days ago 0 replies      
I am a huge fan of graph-y stuff. I went through several iterations of a graph database written in Python, using files, bsddb, and right now wiredtiger. I also use Gremlin for querying. Have a look at the code: https://github.com/amirouche/ajgudb.

Also, I made a hypergraphdb, atom-centered instead of hyperedge-focused, in Scheme: https://github.com/amirouche/Culturia/blob/master/culturia/c....

Did you know that Gremlin is basically SRFI-41, i.e. a stream API, with a few graph-centric helpers?

edit: it's srfi 41, http://srfi.schemers.org/srfi-41/srfi-41.html

lobster_johnson 3 days ago 0 replies      
I've seen people using graph databases as a general-purpose backing store for webapps/microservices. What are people's opinions about this?

My feeling is that graph databases are not suitable/ready for, for lack of a better term, the kind of document-like entity relationship graphs we typically use in webapps. Typical data models don't represent data as vertices and edges, but as entities with relationships ("foreign keys" in RDBMS nomenclature) embedded in the entities themselves.

This coincidentally applies to the relational model, in its most pure, formal, normal form, but the web development community has long established conventions of ORMing their way around this. The thing is, you shouldn't need an ORM with a graph database.

SloopJon 3 days ago 0 replies      
The author's next post describes RDF and SPARQL in the context of the Cray Graph Engine:


TimPrice 3 days ago 0 replies      
1 - Would it be more efficient to store objects that contain their relations if you only do (simple) read operations? (e.g. a JSON database)

2 - Instead, do graph DB engines try to break through bottlenecks for big data and analytics scenarios?

thesz 3 days ago 4 replies      
It introduces a false dichotomy: "graph vs relational".

In fact, most (if not all) graph algorithms can be expressed using linear algebra (with specific addition and multiplication operations). And matrix multiplication is a select from two matrices joined on "where i = j", with aggregation over identical result coordinates.

The selection of multiplication and addition operations can account for different "data stored in links and nodes".

So there is no such dichotomy "graph vs relational".
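To illustrate the point, here is a small sketch (assuming NumPy; the graph itself is made up for the example) of reachability computed by repeated matrix multiplication over the boolean semiring, where "addition" is OR and "multiplication" is AND - emulated below with integer matmul and a > 0 threshold:

```python
import numpy as np

# Adjacency matrix of a 4-node directed graph: A[i, j] = 1 means edge i -> j.
# Edges here: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
], dtype=bool)

def reachability(adj):
    """Transitive closure via repeated boolean matrix squaring.

    Over the boolean semiring, (R @ R)[i, j] answers: is there a path
    i -> k -> j for some k?  Squaring a matrix that includes the identity
    doubles the path length covered each iteration.
    """
    n = adj.shape[0]
    reach = adj | np.eye(n, dtype=bool)  # every node reaches itself
    for _ in range(n):
        step = (reach.astype(int) @ reach.astype(int)) > 0
        if (step == reach).all():         # fixed point: no new paths found
            break
        reach = step
    return reach

closure = reachability(A)
print(closure.astype(int))
```

Swapping in other (+, *) pairs over the same loop gives other algorithms, e.g. (min, +) yields shortest path lengths.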

Xyik 2 days ago 0 replies      
One of the biggest challenges in databases is handling concurrency and sharding; I wish this had talked a bit more about how that changes between a graph database and a relational database.
Stop Using the Daylight Savings Time stopdst.com
285 points by pbkhrv  20 hours ago   145 comments top 29
wlesieutre 19 hours ago 4 replies      
Maybe it's just me, but the way these statistics are worded is setting off my skepticism alarms:

> Between 1986 and 1995, fatal traffic accidents rose 17% the Monday following the switch to Daylight Saving Time.

Accidents rose 17% that Monday? Does it mean 17% more than any other day, or just that the raw number for that Monday is 17% higher than it used to be? Because they worded it like the latter.

For all this page says, accidents were up 17% every day of the year over that decade.

EDIT: Wikipedia says total US traffic deaths were lower in 1995 than 1986, so I'll chalk this up as poor wording. https://en.wikipedia.org/wiki/List_of_motor_vehicle_deaths_i...

manigandham 19 hours ago 3 replies      
All these articles seem to keep gloss over the actual issue: the switch between standard time and DST twice a year.

People are fine with the time system, especially since timezones themselves are somewhat arbitrary in their regions. DST is actually more comfortable to live with by allowing for more daylight after work hours. It would be better to just switch to DST permanently and avoid the constant frustrating changes.

yes_or_gnome 19 hours ago 9 replies      
It's the switch that everyone hates. Instead of ending DST, can we agree to stop using Standard Time?
anexprogrammer 17 hours ago 3 replies      
On both sides of the Atlantic these sorts of articles crop up regularly. Move to summer time year round, and so forth.

What is always forgotten is latitude, and that we forget to learn from history and experience.

In Southern England, or California I doubt it's much more than an annoying relic of olden days. But I don't think anyone has true statistics on whether it is or not. Go north and it starts to matter and accident rates go up when you don't have DST.

The UK had an experiment of staying on summer time between 1968 and 1971, introducing British Standard Time. At the end of the period, the vote was to restore the old way, by a large cross party majority.

I believe at the start of the experiment it was generally thought it would confirm the sense of getting rid of summer time permanently. Switching clocks twice a year is annoying, after all.

ianbicking 10 hours ago 0 replies      
DST is about the compromise between two reasonable scheduling systems: in one you use a 24 hour day with a clear landmark (noon, when the sun is highest in the sky). In another model you use sunrise as a natural beginning to the day.

Sunrise is a bit complicated, and in winter it compresses the afternoon more than many people would like. Also hard to build the necessary clocks. So we simplify things and make a compromise between the two systems, and we get DST.

legulere 19 hours ago 4 replies      
As a European that's living further up north than most Americans: Sorry, but no!

I don't want to get up totally in the night in the winter. And in summer I want to be able to use the long evenings with the sun still up instead of getting up too early.

Except for one, the cited effects are all about the switch from winter time to summer time.

fsiefken 15 hours ago 1 reply      
I have lived by winter time for a few years now, under the motto "if you want to change the world, start with yourself". So I only have to shift my calendar for half the year. The main reason for me is that I can more easily read the time from the position of the sun in the sky (I don't use a watch). I've shifted my computer, tablet and phone to Mediterranean Tunis, as they haven't had DST there since 2009: http://www.timeanddate.com/news/time/tunisia-cancels-dst-200...
executesorder66 19 hours ago 3 replies      
Why can't they just leave the time alone, and change their working hours during summer and winter?
bgentry 19 hours ago 2 replies      
That's "Saving" time, not "Savings" time.
koolba 14 hours ago 4 replies      
While we're at it let's get rid of leap seconds.

... and switch the entire planet to a single timezone (UTC).

... and require everyone use the same text encoding (UTF-8).

... and pick a single format for separating fields in numeric fields (commas for 000s and dots for decimal points).

dheera 17 hours ago 0 replies      
I just do my entire calendar in UTC, and keep all my devices in UTC. No daylight savings. I pretty much refuse to use it. It also wreaks havoc on my logs and things.
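For what it's worth, the keep-everything-in-UTC approach is straightforward in code. A minimal Python sketch (the timestamp is an arbitrary example) that stores an instant in UTC and converts only at display time, assuming Python 3.9+ for `zoneinfo`:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Log and store in UTC: unambiguous, and monotonic across DST transitions.
event_utc = datetime(2016, 3, 13, 9, 30, tzinfo=timezone.utc)

# Convert only for display. On 2016-03-13, US DST began at 2:00 a.m. local,
# so by 9:30 UTC Chicago is already on CDT (UTC-5).
local = event_utc.astimezone(ZoneInfo("America/Chicago"))
print(event_utc.isoformat())  # 2016-03-13T09:30:00+00:00
print(local.isoformat())      # 2016-03-13T04:30:00-05:00
```

Doing the conversion only at the edges is exactly why UTC-based logs stay sortable through the spring-forward/fall-back gaps.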
toast0 6 hours ago 0 replies      
There's actually a bill in the California Legislature to present removal of DST to the voters[1], DST was imposed on the state by the voters in 1949, so it must similarly be removed by the voters.

[1] https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm...

kirian 15 hours ago 0 replies      
A recent article in the Washington Post (Wonkblog) makes the case for "Why daylight saving time isn't as terrible as people think". US-centric. The argument uses data on the number of days with "reasonable" sunrise and sunset times, based on latitude/longitude, with and without DST.


qjighap 13 hours ago 0 replies      
I live in a pocket region that doesn't have Daylight Saving. This year we added the town of Fort Nelson to our little time zone. Nobody seems to concretely remember why we started doing this, but it is generally agreed that we do it for business reasons. Much of our business is tied to our neighbors to the east in a different time zone, with the winter months typically being busier. Coordinating resources is much easier given this system. I thought I would throw a counterargument into the ring, although I would say it is an edge case.

Also, I am thrilled about news being at 11 again.

transfire 12 hours ago 0 replies      
Split the difference (30 mins) and be done with it. Please.
sugarfactory 12 hours ago 1 reply      
I think Daylight Saving Time is a bad idea in the same sense that (abusing) global variables is a bad idea in programming.

In programming, changing a global state in order to achieve something is almost always a bad practice because it affects everywhere and sometimes in unpredictable ways. Instead of abusing global states, we invented object-oriented programming, which I consider as a way to keep states locally (inside objects).

So if someone wants to save daylight, that should be achieved locally for example by changing school schedules.
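The analogy can be made concrete. A toy Python sketch (all names hypothetical) contrasting a global offset that shifts every consumer of the clock with a schedule object whose change stays local:

```python
# Global-state version: one mutation affects everything that reads the clock,
# which is essentially what shifting the whole time zone does.
GLOBAL_OFFSET_HOURS = 0

def global_school_start():
    return 8 + GLOBAL_OFFSET_HOURS  # every schedule moves when the global moves

# Local-state version: each institution owns and adjusts its own schedule.
class Schedule:
    def __init__(self, start_hour):
        self.start_hour = start_hour

    def shift(self, hours):
        self.start_hour += hours  # the change is contained in this object

school = Schedule(start_hour=8)
office = Schedule(start_hour=9)
school.shift(-1)          # the school starts earlier in summer...
print(school.start_hour)  # 7
print(office.start_hour)  # 9 (...without touching anyone else)
```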

mshenfield 9 hours ago 0 replies      
I was most interested by how they generated the tweets to my state and county representatives. The script is all in js/app.js and uses MaxMind's GeoIP lookup to get location information. It then uses the Sunlight Foundation's API with the location information to pull back the Twitter IDs for the reps. Cool stuff.
xirdstl 13 hours ago 0 replies      
I can't help but think some of these statistics are ridiculous fear mongering, particularly the heart attacks.

"Get rid of DST! If you get up an hour earlier than usual, you might die!"

weaksauce 14 hours ago 0 replies      
Let's switch to dst all the time. It's really nice to have an extra hour of light during spring and summer but we lose an hour when the days are shortest during winter. (Where I am in California)
mbfg 5 hours ago 0 replies      
Love DST personally; don't want to get up in the pitch black in the winter, but want long light in the evenings in summer. Perfect.
shmerl 19 hours ago 1 reply      
I agree, DST causes more problems than any benefits from it.
donatj 19 hours ago 2 replies      
I've always been curious where the federal government gets the power to define what time it is. If we simply ignored it, would there be fines, or does it not have teeth?
sanqui 16 hours ago 0 replies      
It's a shame the website seems to focus on the U.S. only. I would support it and link it if it were a worldwide effort.
beefsack 16 hours ago 3 replies      
It might seem crazy and impractical, but I feel that daylight savings is treating a symptom and removing time zones altogether would be treating the cause.
bitwize 17 hours ago 1 reply      
DST will never ever be abolished in this country. A major part of the reason why is evening sports games become more difficult to schedule.

In the USA, sports aren't just sports. They're more like sacraments, tentpole observances which help to shape the order of society.

paulddraper 13 hours ago 0 replies      
It's Daylight Saving, not Daylight Savings.
stevesun21 15 hours ago 0 replies      
As far as I know, China made this choice a long time ago.
minikites 14 hours ago 1 reply      
My favorite counterargument:


>If we stayed on Standard Time throughout the year, sunrise here in the Chicago area would be between 4:15 and 4:30 am from the middle of May through the middle of July. And if you check the times for civil twilight, which is when it's bright enough to see without artificial light, you'll find that that starts half an hour earlier. This is insane and a complete waste of sunlight.

>If, by the way, you think the solution is to stay on DST throughout the year, I can only tell you that we tried that back in the '70s and it didn't turn out well. Sunrise here in Chicago was after 8:00 am, which put school children out on the street at bus stops before dawn in the dead of winter.

DST is the only sensible option in my opinion.

enraged_camel 19 hours ago 2 replies      
This whole conversation got me thinking... Why not abolish time zones altogether and have the entire US be on the same time zone?
Why do we work so hard? 1843magazine.com
325 points by wslh  1 day ago   179 comments top 35
burgessaccount 1 day ago 8 replies      
This will sound moralistic, and I don't mean it that way, but I think this relates to our general lack of clear values as a society. With religion absent in many people's lives, not much has stepped in to take up the slack and say "what really matters is being a good person" or "what really matters is wisdom" or "what really matters is family". So we're left with the messages we ARE getting, which are from a combination of advertising (what really matters is having a lot of money/nice things), Hollywood (what really matters is being good at shit) and the news cycle (which focuses on careers and especially major careers). "Mom gets home from work and has a fun late afternoon with her kids, spending no money and getting nothing much done" isn't a story we ever see upheld as a great, positive way to spend our time. So we pursue the things we do see people praising and talking about - promotions, money, milestones, important-ness.
buf 1 day ago 15 replies      
I moved to SF in 2008 as a poor college grad. Since then, I've spent all of my time in startups. It's an unhealthy addiction and it's going to kill me.

I was the third engineer at Eventbrite, and I spent years working many extra hours. After 4 years, it felt like I worked 10 years.

I quit and moved to Europe to try to leave the startup scene, but a month later, I found myself the CTO of a startup in London. The addiction continued. Eventually I found myself back in SF. I'm on my 3rd CTO role now. We're about to raise a series B.

More than I'd like to admit, I want to stop this madness and just enjoy life. Hang out with my family. Perhaps move to Denver or Austin to maintain some semblance of tech life, but get out of the madness. I've been looking at houses in Denver for over a year. And it depresses me.

I know that it can't happen. I know that I'll be working like this until my health prohibits me.

lquist 1 day ago 2 replies      
"You are fettered," said Scrooge, trembling. "Tell me why?" "I wear the chain I forged in life," replied the Ghost. "I made it link by link, and yard by yard; I girded it on of my own free will, and of my own free will I wore it."


madengr 1 day ago 2 replies      
I enjoy designing RF/Microwave hardware and I am paid well to do it. I plan on doing it till I keel over. I have a state of the art lab at work, and a nice lab at home. I'm passionate about electronics and really wouldn't know what else to do.
prirun 3 hours ago 1 reply      
One thing that would help is if our governments, all governments, would stop spending money they don't have, and therefore stop devaluing currencies. When the Fed "buys" $4T in mortgages to relieve banks from holding any risk, where do they get all that money? They don't - they just "print" it, by clicking on a computer. And in the process, they devalue everyone's wealth.

Take a look at this chart: http://blogs.wsj.com/economics/2015/12/14/a-brief-history-of...

Yes, things are not quite as volatile since 1930, but also note that there is only inflation, whereas before, there was deflation to balance out the inflation. I know everyone says deflation is the devil. I'm not so sure about that.

yason 14 hours ago 0 replies      
Work is a good excuse around which to build your life. The excuse approaches perfection in occupations such as programming, which is getting paid for what would basically be your hobby. And, as any excuse, it is, as lovingly ever-favoured by your mind, easily used to avoid facing things in life, and edges in yourself. Staying busy is the opposite of having enough. It is doing versus being. When you do, you're trying to achieve something. When you be, you're liable to realise that you aren't missing anything, and that the keys to your life are found within yourself. People need both flavours, but work rarely offers the latter.
iokevins 1 day ago 1 reply      
Excellent article; at 3,860 words, it's a bit long, but recommended.

Here's a much shorter response, with more pathos:

"Oh. And if your reading this while sitting in some darkened studio or edit suite agonizing over whether housewife A should pick up the soap powder with her left hand or her right, do yourself a favour. Power down. Lock up and go home and kiss your wife and kids."

-- Linds Redding, "A Short Lesson in Perspective"

Read the whole thing:http://www.lindsredding.com/2012/03/11/a-overdue-lesson-in-p...

Animats 21 hours ago 2 replies      
Competition. It's not that we really have to work that hard to produce a product or service. It's that we have to work really hard to beat the other people trying to do the same thing.

Increased productivity doesn't help, because everybody is still running flat out to compete, at a higher level.

iopq 1 day ago 2 replies      
I wish I wanted to work a lot. I just kind of force myself to do it, even if it's something I'm excited about.
itsAllTrue 1 day ago 5 replies      
The only reason I work hard is to surround myself with people I respect.

When I find myself pushed into situations where irritating people have crept into the mix, and no one seems to be willing or able to do anything about that, I look for an exit.

Boss' nephews. Obnoxious assholes who constantly talk about getting laid. Bitchy careerist ladies who constantly demand bullshit, and seem to foment panic with every breath they can muster. Narcissistic retards dumb as a bag of hammers, but smug to the core about everything they do, which usually turns out to be sitting on their asses all day, looking up trivia about sports. Weirdos who can't seem to bathe themselves, even though they're like 40 years old?

I work hard to separate myself from these people.

jensen123 16 hours ago 1 reply      
What about getting laid? It's no secret that women tend to find rich men attractive. Or men with a high social status. I wonder how many men work hard because of this?
gedy 22 hours ago 0 replies      
> "Of all things, hard work has become a virtue instead of the curse it was always advertised to be by our remote ancestors... The necessity to work is a neurotic symptom. It is a crutch. It is an attempt to make oneself feel valuable even though there is no particular need for ones working."

C. B. Chisholm

bikamonki 1 day ago 0 replies      
It really depends. Most times you just need the money. Sometimes you have nothing better to do. Many times you just keep doing it because you are used to doing it. A few times it's your dream job and you just can't stop doing it. Work is more than economics; it's not a function of money. Work is action. My father is a 73-year-old 'retired' professional and scholar, now serving as a congressman, and already thinking about what to do next. Action is life and you should never stop it: work till you drop.

Another question would be: why do we keep working so hard for money when technology could already solve many of our needs?

bitmapbrother 1 day ago 1 reply      
Whenever I read topics like this I always think of Office Space.

Bob Porter: Looks like you've been missing a lot of work lately.

Peter Gibbons: I wouldn't say I've been missing it, Bob.

Tempest1981 1 day ago 1 reply      
Some people thrive on "solving puzzles", which is sort of what engineering is.

For better or worse, there are an infinite number of puzzles to solve.

esfandia 23 hours ago 1 reply      
This is what I've realized lately about job satisfaction:

Purpose, autonomy, work/life balance: pick 2

DrNuke 14 hours ago 0 replies      
Earth is a small rocky ball among billions in the void. We don't know why we are here and what happens next. Stick your head into a mission (work, religion, family, whatever you like) and don't think about that.
holri 21 hours ago 0 replies      
The answer can be found in this excellent analysis of modern economics by E.F. Schumacher from 1973:


stegosaurus 14 hours ago 0 replies      
I don't want to be a serf.

Working is the only way I know of for me to accumulate capital in order to become financially independent.

The more I do, the faster I accumulate capital, the more years of my life I'll spend able to do what I want to do.

It doesn't feel like a choice to me. The alternative is to live a mediocre life and always have one eye on my "responsibility" towards my capitalist overlords.

ACow_Adonis 1 day ago 2 replies      
There are several issues intertwined here. The easy ones are the market forces I experienced at the relative bottom of the pay scale and at the smaller end of the company scale: you're asked to work harder and longer hours because it makes more money/profit for your employer to get you to work as long as possible for as little as possible.

However, there are two more aspects I think I can contribute.

Firstly: some kind of social identity. I've worked in a fair number of fields and a fair number of jobs, so being attached to a job or thinking of "myself as a particular profession" seems quite alien. But a fair number of my colleagues saw/see themselves as possessing a particular identity, and work/professions defined that for them. We have a very powerful social indoctrination that you are your job: we have titles, little boxes for "profession" on forms, and many people have internalized the messages that "you are what you are employed as", that you need this external direction/identity to tell you what to do (I don't want to retire, what would I do with myself?!), and that their social identity is formed through their work/networks. It's a bit of a self-fulfilling prophecy, because as we move away from community-oriented networks, people's social networks do become defined by where they work. Even if you manage to get out of the rat race, you discover that your friends are still in it, so they don't have time to spend with you, and you can't identify with some of their everyday struggles if you aren't going through them also. I should also note that people who gained this identity through work took retrenchment and change the hardest psychologically, and it's easy upon reflection to understand why.

The second aspect, though, is this: generating the impression of work. I don't know whether it's base human psychology (I think there are good arguments from anthropology that it isn't) or a culture-bound phenomenon, but I believe two things: that most humans still have a fundamentally reptilian-brain/cargo-cult psychology that is pretty close to the Marxian concept of a labor theory of value, and that in modern large-scale professional life, metrics that can accurately tie a worker's or professional's inputs to outputs/profits aren't commonly available.

So there is a social/cultural aspect here: how do most people judge how much you're bringing to the workplace? If you're not working in a widget factory, most people fall back on a heavily weighted proxy: "how busy you look".

Would any CEO, politician, or professional in our culture ever justify taking their salary by saying everything was running smoothly, and that their job was to sit there, like a good Taoist-esque ruler, just facing the horizon and not interrupting? The very idea is absurd, even though we must admit, I think rationally, that in some situations at the very least, that may be the most reasonable course of action. No, instead we justify ourselves by "hours of work put in", because it seems to be a good cultural proxy, and I suspect because, even if it's a pretty bad one, it's at least a good cultural value for motivating the lower-downs into being good workers.

But it is, of course, on an intellectual level, obscene and ridiculous. And it results in the promotion practices and workplace culture that I've now experienced at a lot of firms and professional workplaces.

Fresh out of university (economics), I was under the belief that government was generally wasteful. And I worked there, and I saw that it was, and it is, and all was good :)

But I didn't know true waste until I worked for the larger private corporations. We'd hire 15 men, 10 consultants, 4 managers, and support staff to do in 2 years what I could probably do with a skilled team of 4 in my area in government in 12 months. Am I being a little bit hyperbolic? Maybe...

When I worked in government, the 2-3 staff would tell you something was bullshit, bitch, take a long lunch break, but get something done...maybe not everything, but something. They weren't salespeople.

In private-sector professional firms, people will just lie and say everything they do is productive and a success. It's a hustle, it's a sale. They'll come in to the office and, rather than eat with their families at home, they'll eat breakfast at work while still not doing anything. They'll go to conferences and say "how great we are!". They'll come up with as many jobs and tasks as they can, and the efficiency of what they do is totally irrelevant. They still play solitaire on their computers. They get tonnes of people to proofread documents n times with n meetings (before eventually switching back to the original version). They'll restructure before restructuring back. They'll fire. They'll hire. It doesn't matter, just do STUFF.

The philosophy is just spend all the money you have and get your staff to do STUFF, expand your empire as much as possible, make everyone work, be seen to work, take credit for everything good, disown everything bad.

To them, long hours weren't/aren't inefficient or a sign of intellectual failing; they're a sign of how awesome you are, and you come in early and stay back late not because you're doing anything (indeed, amongst the honest ones, there is a haunting realisation that your job, or at least the hours you're putting into it, maybe isn't actually producing anything, or might even be creating more work...), but because it's a culturally and structurally reinforcing meme.

I'm not saying that all this culture is universal amongst us, or our workplaces, or our societies. But it's there, and I think it's having a pretty powerful impact on our relationship with work, labor, and status...

mx4492 20 hours ago 1 reply      
Alexis de Tocqueville had some thoughts relevant to this: https://youtu.be/Rzr3AOtFA8o
ezequiel-garzon 20 hours ago 1 reply      
I'm curious: why did The Economist choose the name 1843 for its magazine? The About page doesn't say. Anybody know the significance of that year? Thanks.
agentgt 15 hours ago 0 replies      
It has been discussed many times why work makes us happy and the most compelling is flow: https://www.ted.com/talks/mihaly_csikszentmihalyi_on_flow?la...

It was also covered in the documentary "Happy".

I was actually expecting the article to discuss this in more detail, or at least add a citation (he cited Keynes and Marx), but instead it went on with one long personal anecdotal comparison after another.

I was also hoping the author would discuss the developing trend of people working from home and how that relates, but... nope.

IMO the article was too long. A fairly disappointing read.

yugai 18 hours ago 0 replies      
There is a lot of hard work being done that is personally rewarding (money and fame) and either meaningless or destructive in a broader sense (ecological destruction, social destruction, unethical activities, fraud). Lehman Brothers was very successful and greatly rewarded before the financial crash.
myth_drannon 13 hours ago 0 replies      
Read more Bukowski books; who knows, maybe you will slowly heal yourself of this addiction. Also a great read: How to Be Idle by Tom Hodgkinson.
binarycrusader 23 hours ago 0 replies      
It's easy to spend a few hours on work (especially in the computing field) and feel a sense of accomplishment; not so much with life.

Programming often feels like a series of little victories to me, and it's much harder to achieve that outside of work.

tim333 1 day ago 0 replies      
Good article. I've moved a bit from the five-year-old daughter's position to the "thinking about identity, community, purpose, the things that provide meaning and motivation" stuff. Still working on it.
wapapaloobop 22 hours ago 0 replies      
PG noted that getting rich is largely about running errands. In fact most of what society regards as 'work' is like this. What one needs is a hard problem to work on.
pasbesoin 8 hours ago 0 replies      
I worked hard -- or, long -- because I had a shit personal life and neighbors who made it miserable to be at home. And, I was taught early and thoroughly that there was nothing I could do about such things.

Let me tell you, it is a terrible way to live.

Working hard and smartly and with fun, which I occasionally got to do, was something different and immensely satisfying.

But if you are "working hard" because life sucks, get your life in order. The sooner the better, not just for you, but ultimately, for your career.

Anyone who says you can't, or that you have to "pay some sort of dues"? Fuck them.

As I overheard in the cafe, the other day -- my paraphrase may not be as snappy as the original: There's one choice where the outcome is 100% certain: Not choosing. Making no choice, taking no action, no chance.

The young-ish fellow was advising another young fellow on whether to ask a girl out.

As someone who's ended up spending his life alone -- and, is that "not by choice", or, per the above, precisely by choice. Let me tell you, there is no more important choice.

Family, friends, lovers, work and interests that matter (however, and, big or small). There is no more important choice. "Work hard" on those.

normalist 11 hours ago 1 reply      
Get Work Done Syndrome is truly a modern notion, entirely absent from the laissez-faire peasant farmers of yesteryear who tilled soil in return for a long glass of summer wine at the end and maybe a 20-minute smoke of fine tobacco. Modern notions of work involve McDramas taking place in every western household, where first-world problems really are a truly despotic problem indeed. Master-slave relationships engender this, and it leads to a domestic tilling of the soil in the fields of suburbia, working McJobs for McPay. The kind of real work, that of some spiritual understanding, or mastering the body, or doing shadow work of the mind ('dumping all your ailments on a plate for those to ponder'), is entirely absent in the west, where it is assumed that only those more credentialed shall offer answers, and none else. The East might have these problems, just not as severe, and I worry about the encroachment of Americanisation into the Eastern mindset and its rich tradition and approach to existential crises.
erikb 19 hours ago 1 reply      
I like the explanation, but I don't like the idea of telling everybody that things were different in the past. When most people were farmers, they also worked more than 12 hours a day, since you need to take care of such a big place. When most people were factory workers, they also worked more than 12 hours, because the boss thought it would make him more money if you worked more (and with simple, mechanical work that's actually true). The kind of work changed, as well as the reason. But we always worked too much.

And there is one reason he didn't mention: When you don't work so much, you need to figure out what to do with your time. You can't watch movies all day. Nobody can do that for a long time. So you need to think hard about other reasonable things to do. And thinking is painful and scary.

arca_vorago 1 day ago 0 replies      
Because in a poor economy the most job-desperate set ceilings for themselves they dare not touch. I know people who produce millions in profit a year on a 50-60k salary, where any bonus over 3k for Christmas is unheard of. There is a disparity of power between employee and employer. I personally think the problem is that people don't understand the correct process of negotiating a contract.

In essence, making people desperate makes robots that are handy, but the lack of reward incentive creates demonstrably worse work product.

vermooten 19 hours ago 0 replies      
I don't. I stopped after I realised that with my skills and experience I can get a job any time.
bronlund 15 hours ago 0 replies      
They want to keep us busy!
Turning two-bit doodles into fine artworks with deep neural networks github.com
324 points by coolvoltage  3 days ago   56 comments top 15
bd 3 days ago 2 replies      
These are really cool. Though if you were, like me, puzzled about how such complex and coherent features could come from those simple drawings/masks, have a look at the original paintings that were used as sources and compare them with the generated images:

Original #1:


Generated #1:


Original #2:


Generated #2:


So those newly generated images are structurally very similar to the original sources. The neural net seems to be good at "reshuffling" the sources. That's probably how things like reflections on the water got there, even though they're not present in the doodles.
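The "reshuffling" behaviour has a simple explanation in how style losses are typically defined in this family of methods (a minimal sketch with illustrative names, not code from the project): style is matched through Gram matrices of feature activations, and a Gram matrix is invariant to spatial rearrangement, so the optimizer is free to move source textures around as long as the overall statistics match.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of conv features shaped (channels, height*width)."""
    return features @ features.T / features.shape[1]

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 100))       # 8 channels, 100 spatial positions
shuffled = feats[:, rng.permutation(100)]   # same activations, reordered in space

# The Gram matrices are identical: style statistics discard spatial layout,
# which is why the net can "reshuffle" patches (e.g. move a water reflection)
# without changing the style loss at all.
assert np.allclose(gram_matrix(feats), gram_matrix(shuffled))
```

The doodle/mask then constrains *where* each region's statistics are matched, which is what keeps the reshuffled textures in sensible places.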

nuclai 3 days ago 2 replies      
(Author here.)

For details, the research paper is linked on the GitHub page: http://arxiv.org/abs/1603.01768

For a video and higher-level overview see my article from yesterday: http://nucl.ai/blog/neural-doodles/

Questions welcome!

pygy_ 3 days ago 4 replies      
I'd love/dread to see this kind of work (neural nets run in reverse mode) applied to voices and accents.

You could credibly put any words in the mouth of anyone.

beeswax 3 days ago 2 replies      
That's pretty cool. Might speed up asset creation for games by orders of magnitude: train with concept art and generate the variations via these networks; it adds consistency to the output and helps loosen the asset bottleneck / content treadmill, especially for smaller studios/individuals.
ogreveins 3 days ago 1 reply      
I played with something similar for a while, https://github.com/jcjohnson/neural-style

What I've found so far is that it takes a while to get good results: something that looks like its own creation instead of an overlap of pictures. There's no exact way to do this. If you modify existing artwork it works well enough, since the source is already somewhat divorced from reality, but photos are difficult. When it works it's amazing though.
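The "overlap of pictures" vs. "its own creation" tradeoff largely comes down to the weighting between the content term and the style term in the objective these tools optimize. A hedged sketch (function and parameter names here are illustrative, not the actual API of the neural-style repo):

```python
import numpy as np

def content_loss(generated, content):
    # Penalizes deviation from the content image's feature activations.
    return float(np.mean((generated - content) ** 2))

def style_loss(generated_gram, style_gram):
    # Penalizes deviation from the style image's Gram-matrix statistics.
    return float(np.mean((generated_gram - style_gram) ** 2))

def total_loss(generated, content, generated_gram, style_gram,
               alpha=1.0, beta=1e3):
    # High alpha/beta -> output hugs the content image ("overlap of pictures");
    # low alpha/beta -> freer recombination of the style's textures.
    return (alpha * content_loss(generated, content)
            + beta * style_loss(generated_gram, style_gram))
```

In practice this corresponds to tuning the content-weight/style-weight knobs these tools expose; there is indeed no exact recipe, which matches the trial-and-error experience described above.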

Angostura 3 days ago 0 replies      
Looked at the images and honestly thought that someone had posted an April fools joke a few weeks early. Amazing.
MichaelBurge 3 days ago 1 reply      
Very interesting! The thing that amazes me most about these neural network projects is how small the source usually is compared to what they're doing. Your doodle.py is only 453 lines.
amelius 3 days ago 1 reply      
What data has been used to train the neural network?
mkj 3 days ago 0 replies      
In coming years this will create a very strange reality combined with improving VR tech...
Dowwie 3 days ago 2 replies      
Have you run children's paintings through this yet?
wslh 3 days ago 1 reply      
Exciting! Where can we find image databases for this?
api 3 days ago 0 replies      
This project should be named Bob Ross.
intrasight 3 days ago 0 replies      
Now please combine this with TiltBrush
mhurron 3 days ago 0 replies      
Finally a way to draw things without learning how to draw. I'll be famous!
tjaad 3 days ago 2 replies      
Would this work with photos?
       cached 14 March 2016 04:11:01 GMT