hacker news with inline top comments    11 Mar 2016 Best
Responsive Pixel Art essenmitsosse.de
1568 points by bpierre  1 day ago   119 comments top 52
essenmitsosse 1 day ago 6 replies      
Hello, this is Marcus, the guy who did the resolution-independent illustrations. I just uploaded the current version of the site, so it will no longer freeze when the image gets too small. I also updated the Tantalos image.

I am currently a bit blown away by the reaction to this. I finished this over a year ago but didn't manage to make it public till now. I presented this at a meetup in Berlin yesterday and someone asked for an online version, which I posted to Twitter, and then things escalated quickly.

So because a lot of people are asking: I am currently redoing my completely outdated homepage. The new one will include an in-depth explanation of what is actually happening here, what the idea behind it is, and how I want to apply this to actual web design to make resolution-independent work easier and less restrictive for both designers and developers.

Until then I will try to answer some questions here

Scalable Greetings,
Marcus

fredley 1 day ago 4 replies      
This is incredible stuff. Zeus in particular is amazing. It all seems to be done with weighted objects and a clever way of rendering them, but it's indistinguishable from magic from where I'm sitting.
noonat 1 day ago 0 replies      
This is really cool! If you look at the source, it appears that the images themselves are defined as JS code, almost like a vector image. For example, Zeus: http://essenmitsosse.de/pixel/scripts/zeus.px

The author then has a renderer to turn these into pixel data. It seems to render them down to an actual pixel image on the fly.
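The scheme noonat describes can be sketched in miniature. Assuming the .px files amount to a declarative, resolution-independent shape description that gets rasterized at the current canvas size (the shapes, coordinate convention, and function names below are invented for illustration; the real renderer is JavaScript and far more sophisticated):

```python
def rasterize(shapes, width, height):
    """Render rectangles given in relative 0..1 coordinates onto a
    width x height character grid, recomputed for any target size."""
    grid = [[" " for _ in range(width)] for _ in range(height)]
    for x0, y0, x1, y1, ch in shapes:
        # Guarantee at least one row/column so tiny canvases still draw.
        for y in range(int(y0 * height), max(int(y0 * height) + 1, int(y1 * height))):
            for x in range(int(x0 * width), max(int(x0 * width) + 1, int(x1 * width))):
                grid[y][x] = ch
    return ["".join(row) for row in grid]

# Hypothetical "sky with a sun" image, defined once as data...
image = [
    (0.0, 0.0, 1.0, 1.0, "."),   # background
    (0.6, 0.1, 0.9, 0.4, "*"),   # sun
]

# ...then rendered at two different "window" sizes from the same definition.
small = rasterize(image, 8, 4)
large = rasterize(image, 16, 8)
```

The point is only that the pixels are computed on the fly from a description, rather than stored; the actual project adds constraints and alternate layouts on top of this idea.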

FilterSweep 1 day ago 1 reply      
The amazing part about the Tantalos slide was that its responsive implementation actually captured the ethos of the myth itself: no matter how far he reached, the fruit moved ever farther away. Outstanding work.
hakvroot 1 day ago 1 reply      
Besides moving your mouse around, zooming in and out also works beautifully (might put a bit of a strain on your system though).

Given the sheer amount of work for one piece, with The Three Graeae appearing to be around 1000 lines of code, I'm also quite amazed he managed to produce seven. Brilliant, and Art indeed.

dahart 1 day ago 0 replies      
So freaking awesome, this just rules!

My faves are 1- Zeus, and 2- Teiresias

Those two look like they took the most work, and are most technically challenging. The Man-Eagle-Bull-Snake morph especially.

Halfway random tangent, halfway related: I'm really looking forward to wider adoption of SVG 1.2, precisely because it adds absolute unit constraints in addition to the relative constraints, so you can do some of the same kind of thing in an SVG image and have authoring tools support it. Not the pixel art side of it, and nowhere near as crazy as this project, but it will still be really useful.

imurray 1 day ago 1 reply      
Reminds me of the following different project, for content-aware image resizing (retargeting):



onion2k 1 day ago 2 replies      
Awesome. I didn't really get it until I looked at the Brother picture, but that is brilliant.
kdamken 1 day ago 0 replies      
Are you some kind of wizard? This may be the coolest tech demo I've seen all year.
drcode 1 day ago 0 replies      
This is the kind of stuff I come to HN for! MORE OF THESE KINDS OF POSTS PLEASE!
onetwotree 1 day ago 0 replies      
I got a good chuckle out of the Sphinx one -- the guy is standing in front of it saying "look, a sphinx!", and then if you make it vertical he's all "oh shit! a sphinx!". At least that's how I saw it.

Pretty awesome!

TheOneTrueKyle 1 day ago 0 replies      
As others have stated, this is an amazing demo! I've played around in the past with designing responsive web comics with no real luck, but could you imagine creating a web comic using the techniques used in this demo?

The potential!

jeromeparadis 1 day ago 1 reply      
Would love to read some explanation on how this is made. Very impressive!
Patient0 1 day ago 2 replies      
Does this work on anyone else's phone? I just get a heavily pixelated virtually unrecognisable picture. (iPhone 6 - same for both Chrome and Safari)
njharman 1 day ago 0 replies      
I'm a jaded, cynical, cranky, old curmudgeon. Still I say that was pretty flippin awesome. Especially Zeus!
stared 7 hours ago 0 replies      
It reminds me of: http://xkcd.com/1037/ "Umwelt"
jraedisch 1 day ago 0 replies      
I kept resizing the window until I realized that there was a simpler way already implemented. Thought that was a bug first. Really nice!
adam12 1 day ago 1 reply      
It breaks if you position the mouse to the top or the left edge of the canvas.
Jordrok 1 day ago 0 replies      
Holy crap, very cool! At first glance I thought it was just a neat little resizing trick, but then Zeus morphed right before my eyes and I did a double take. The way it responds to your movement makes it really effective in a way that would be extremely hard (impossible?) to reproduce in a static image or even a video.
rhaps0dy 1 day ago 1 reply      
This is amazing!

I really like that in "Teiresias", if you make the canvas narrow enough, the man jumping (presumably Teiresias) changes position and gets a white beard instead of white long hair. Just to still give the impression that it's an old sage, in small vertical space.

silveira 1 day ago 1 reply      
That's amazing. What scaling algorithm does it use?
sbarre 1 day ago 0 replies      
So is this some kind of specialized constraint solver? I'm looking at the JS source for the various art and it seems to be defining relationships alongside the shapes and styles.
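If it is a constraint solver of sorts, the flavor of "relationships alongside the shapes" can be sketched like this (all names and proportions here are invented; this only illustrates defining one shape relative to another and resolving everything for a concrete canvas size):

```python
def solve(width, height):
    """Evaluate shape relationships for a concrete canvas size."""
    body = {
        "x": width // 2,             # horizontally centered
        "y": height // 2,
        "w": max(1, width // 4),     # size depends on canvas, never below 1px
        "h": max(1, height // 3),
    }
    # The head is defined *relative to* the body, not in absolute pixels.
    head = {
        "x": body["x"],
        "y": body["y"] - body["h"],       # sits on top of the body
        "w": max(1, body["w"] // 2),      # half the body's width
        "h": max(1, body["h"] // 2),
    }
    return body, head

# The same relationships produce sensible layouts at any resolution.
wide_body, wide_head = solve(200, 60)
tall_body, tall_head = solve(60, 200)
```

Because the relationships, not the coordinates, are the source of truth, resizing just means re-evaluating them.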
jak1192 1 day ago 0 replies      
This reminds me of those flip-books you made when you were younger (or of current age, whatever).
supernintendo 1 day ago 0 replies      
Very well done! Just a small QA note for the creator, it's possible to hit the "next" arrow until you reach a broken page [1].

[1] http://essenmitsosse.de/pixel/?showcase=true&slide=7

thedaemon 1 day ago 0 replies      
This is really awesome. At first I was going to write a comment about how this completely ruins pixel art by changing pixels that were hand-placed by the artist. But after viewing a few images I realized that the changes are hand-crafted too, not just compression. Bravo.
hammock 1 day ago 0 replies      
How do you get to the other images? Or is it broken on mobile (Android Chrome)?
z3t4 19 hours ago 0 replies      
Just like responsive web design, only those who "resize" will enjoy it. blink
jianyuan 1 day ago 0 replies      
I like how you can save the image as png! Pretty awesome stuff!
SerLava 1 day ago 2 replies      
>That's right, I'm tracking this side

Do you mean you're tracking visits that hit "view page source"? Does that work? I can't find any info about that on Google.

nom 1 day ago 0 replies      
That's how image resizing should behave. But I guess this will only happen when AIs take over ^^
elwell 1 day ago 1 reply      
Try hitting Cmd - a bunch of times.
tompetry 1 day ago 0 replies      
By the beard of Zeus! Bravo, loved this.
valine 1 day ago 1 reply      
I wonder if this could be extended beyond pixel art by rendering it with webgl.
uneewk 1 day ago 0 replies      
Some pretty awesome stuff!
sreejithr 20 hours ago 0 replies      
Only Zeus becomes a serpent in 16:9.
awqrre 1 day ago 1 reply      
There appears to be a resize bug... for example with The Sphinx, if you go wide and short, the guy is pointing left with his arms, but if you go tall and skinny, he is pointing up with his arms...
Kenji 1 day ago 1 reply      
I like this. A little bug has snuck into the code somewhere: If you move your mouse pointer directly into a corner, the script fails with "Uncaught TypeError: Cannot read property '0' of undefined". I presume it is a division by zero when the image height or width becomes zero.
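The crash described is consistent with an unguarded zero-size canvas: at the exact corner the derived width or height hits 0, and a later pixel-buffer lookup fails. A sketch of the guard that would prevent it (the real code is JavaScript; the function and variable names here are invented):

```python
def safe_dimensions(mouse_x, mouse_y, min_size=1):
    """Clamp the derived image size to at least one pixel per axis,
    so downstream indexing and width/height division never see zero."""
    return max(min_size, mouse_x), max(min_size, mouse_y)

corner = safe_dimensions(0, 0)       # the corner case that crashed
normal = safe_dimensions(300, 200)   # ordinary positions pass through
```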
ljk 1 day ago 0 replies      
pretty cool stuff! but is anyone else not able to resize the drawing after the cursor is moved out of the window when either the drawing width or height is at 1 pixel?
spdustin 1 day ago 1 reply      
Grab the lower right corner of the image frame and drag it.
Exuma 1 day ago 0 replies      
This is absolutely nuts... extraordinary.
cousin_it 1 day ago 0 replies      
Two-parameter morphing, cool :-)
Nadya 1 day ago 0 replies      
This is absolutely bloody amazing. The cleverness of the `Zeus` artwork was as entertaining as it was impressive.

I'd love to hear about the inspiration behind the project.

zaf 1 day ago 0 replies      
That's great work.
mirap 1 day ago 0 replies      
Zeus is the winner! ;)
alienbaby 1 day ago 0 replies      
A joke right?
RUG3Y 1 day ago 0 replies      
This is stellar.
mucker 1 day ago 0 replies      
I agree with the many comments here. This is stunningly good work, done with such a simple canvas. I'll be showing it to my kids (who enjoy Greek mythology) this evening.
lpbonenfant 1 day ago 0 replies      
this is incredibly impressive!
daodedickinson 1 day ago 0 replies      
Now we just need neural networks to do this in the style of Jan van Eyck.
lemiffe 1 day ago 0 replies      
Wow, epic!
0xADADA 1 day ago 1 reply      
Should've been tagged <NSFW>
konne88 1 day ago 2 replies      
Why do people find this interesting? Is there more to it than the insight that one can define a function that maps image-dimensions to an image?
AlphaGo beats the world champion Lee Sedol in first of five matches twitter.com
1067 points by atupem  1 day ago   548 comments top 55
sethbannon 1 day ago 7 replies      
I was at the 2003 match of Garry Kasparov vs Deep Junior -- the strongest chess player of all time vs what was at that point the strongest chess playing computer in history. Kasparov drew that match, but it was clear it was the last stand of homo sapiens in the man vs machine chess battle. Back then, people took solace in the game of Go. Many boldly and confidently predicted we wouldn't see a computer beat the Go world champion in our lifetimes.

Tonight, that happened. Google's DeepMind AlphaGo defeated the world Go champion Lee Sedol. An amazing testament to humanity's ability to continuously innovate at a continuously surprising pace. It's important to remember, this isn't really man vs machine, as we humans programmed the algorithms and built the computers they run on. It's really all just circuitous man vs man.

Excited for the next "impossible" things we'll see in our lifetimes.

dwaltrip 1 day ago 3 replies      
This is my generation's Garry Kasparov vs. Deep Blue. In many ways, it is more significant.

Several top commentators were saying how AlphaGo has improved noticeably since October. AlphaGo's victory tonight marks the moment that go is no longer a human dominated contest.

It was a very exciting game, with an incredible level of play. I really enjoyed watching it live with the expert commentary. I recommend the AGA YouTube channel for those who know how to play. They had a 9p commenting at a higher level than the DeepMind channel (which seemed geared towards those who are less familiar).

21 1 day ago 8 replies      
The thing that was supposed to take at least 10 years has happened. Only last month people were still saying that there was no way AlphaGo would beat the champion, that it would be crushed. Today everybody will say they saw it coming and that it was normal.

Yet people will still say that worrying about AI taking over is like worrying about overpopulation on Mars, and that this is a problem at least 50 years out.

cgearhart 1 day ago 1 reply      
I was really hoping to see a more technical discussion than what I found here in the comments. It's too bad that such a cool accomplishment gets reduced to arguments about the implications for an AI apocalypse and "moving the goalposts". This isn't strong AI, and it was at least believed to be possible (albeit incredibly difficult), but it is still a remarkable achievement.

To my mind, this is a really significant achievement not because a computer was able to beat a person at Go, but because the DeepMind team was able to show that deep learning could be used successfully on a complex task that requires more than an effective feature detector, and that it could be done without having all of the training data in advance. Learning how to search the board as part of the training is brilliant.

The next step is extending the technique to domains that are not easily searchable (fortunately for DeepMind, Google might know a thing or two about that), and to extend it to problems where the domain of optimal solutions is less continuous.

clickok 1 day ago 2 replies      
I posted in the earlier thread because this one wasn't up yet[1].

Some quick observations

1. AlphaGo underwent a substantial amount of improvement since October, apparently. The idea that it could go from mid-level professional to world class in a matter of months is kinda shocking. Once you find an approach that works, progress is fairly rapid.

2. I don't play Go, and so it was perhaps unsurprising that I didn't really appreciate the intricacies of the match, but even being familiar with deep reinforcement learning didn't help either. You can write a program that will crush humans at chess with tree-search + position evaluation in a weekend, and maybe build some intuition for how your agent "thinks" from that, plus maybe playing a few games. Can you get that same level of insight into how AlphaGo makes its decisions? Even evaluating the forward prop of the value network for a single move is likely to require a substantial amount of time if you did it by hand.

3. These sorts of results are amazing, but expect more of the same, more often, over the coming years. More people are getting into machine learning, better algorithms are being developed, and now that "deep learning research" constitutes a market segment for GPU manufacturers, the complexity of the networks we can implement and the datasets we can tackle will expand significantly.

4. It's still early in the series, but I can imagine it's an amazing feeling for David Silver of DeepMind.I read Hamid Maei's thesis from 2009 a while back, and some of the results presented mentioned Silver's implementation of the algorithms for use in Go[2]. Seven years between trying some things and seeing how well they work and beating one of the best human Go players. Surreal stuff.


1. https://news.ycombinator.com/reply?id=11251526&goto=item%3Fi...

2. https://webdocs.cs.ualberta.ca/~sutton/papers/maei-thesis-20... (pages 49-51 or so)

3. Since I'm linking papers, why not peruse the one in Nature that describes AlphaGo? http://www.nature.com/nature/journal/v529/n7587/full/nature1...
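The "tree-search + position evaluation in a weekend" recipe from point 2 is essentially minimax with a heuristic evaluation at the depth limit. A toy sketch (the "game" below is deliberately trivial, just to show the shape of the algorithm whose two hand-built pieces AlphaGo replaces with learned networks; real engines add alpha-beta pruning and move ordering):

```python
def minimax(state, depth, maximizing, moves_fn, apply_fn, eval_fn):
    """Exhaustive game-tree search with a leaf evaluation function."""
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return eval_fn(state)  # heuristic stands in for perfect play
    scores = (minimax(apply_fn(state, m), depth - 1, not maximizing,
                      moves_fn, apply_fn, eval_fn) for m in moves)
    return max(scores) if maximizing else min(scores)

# Toy "game": the state is a number, a move adds or subtracts 1,
# and the evaluation is the number itself.
moves_fn = lambda s: [+1, -1]
apply_fn = lambda s, m: s + m
eval_fn = lambda s: s

best = minimax(0, 3, True, moves_fn, apply_fn, eval_fn)
```

Hand-inspecting this tree is easy; hand-inspecting a forward pass through AlphaGo's value network is not, which is the interpretability gap point 2 is getting at.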

Aissen 1 day ago 1 reply      
Just for context, this is the first of a five-game match. Next one tomorrow at the same time! (6am CEST, 8pm PT).
rybosome 1 day ago 1 reply      
What an incredible moment - I'm so happy to have experienced this live. As noted in the Nature paper, the most incredible thing about this is that the AI was not built specifically to play Go as Deep Blue was. Vast quantities of labelled Go data were provided, but the architecture was very general and could be applied to other tasks. I absolutely cannot wait to see advancements in practical, applied AI that come from this research.
mark_l_watson 1 day ago 0 replies      
I just wrote a blog post about this. I was up until 1am this morning watching the game live. I became interested in AI in the 1970s, when the game of Go was considered a benchmark for AI systems. I wrote a commercial Go-playing program for the Apple II that did not play a very good game by human standards, but it did play legally and understood some common patterns. At about the same time I was fortunate enough to play both the women's world Go champion and the national champion of South Korea in exhibition games.

I am a Go enthusiast!

The game played last night was a real fight in three areas of the board and in Go local fights affect the global position. AlphaGo played really well and world champion (sort of) Lee Sedol resigned near the end of the game.

I used to work with Shane Legg, a cofounder of DeepMind. Congratulations to everyone involved.

tunesmith 1 day ago 3 replies      
I watched the commentary by Michael Redmond (9-dan professional), and he couldn't point out a single obvious mistake that Lee Sedol made the entire match. Just really high-quality play by AlphaGo.

Really amazing moment to see Lee Sedol resign by putting one of his opponent's stones on the board.

geebee 1 day ago 6 replies      
Terrific accomplishment.

Just a question to throw out there - does anyone feel like statements like this one "But the game [go] is far more complex than chess, and playing it requires a high level of feeling and intuition about an opponent's next moves."

seem to show a lack of understanding of both go and chess?

I understand there may be some cross-sports trash talking, but chess, played at a high level by humans, relies on these things as well. The more structured nature of chess means that it is (or at least was) more amenable to analysis by brute force computer algorithm, but no human evaluates and scores hundreds of millions of positions while playing chess or go.

Eh, the mainstream media is going to say this regardless, and I suppose it's unrealistic to expect them to draw a distinction between "complex for humans" and "amenable to brute-force computation", but statements like this always seem to show a remarkable lack of awareness of how people actually play these games (though I am not an especially skilled chess or go player).

cm2012 1 day ago 3 replies      
After Go, the next AI challenge they're looking at is Starcraft: https://twitter.com/deeplearning4j/status/706541229543071745
bencoder 1 day ago 2 replies      
I was really expecting Lee Sedol to win here. I'm very excited, and congratulations to the DeepMind team, but I'm a bit sad about the result, as a go player and as a human.
jonbaer 1 day ago 1 reply      
"AlphaGos Elo when it beat Fan Hui was 3140 using 1202 CPUs and 176 GPUs. Lee Sedol has an equivalent Elo to 3515 on the same scale (Elos on different scales aren't directly comparable). For each doubling of computer resources AlphaGo gains about 60 points of Elo."
narrator 1 day ago 5 replies      
The funny thing about AI at this scale is that we don't really know why the computer does what it does. It's more of an inductive extrapolation: we can verify that a technique works for a small problem, so we throw a whole bunch of GPU power and data at it, and it SHOULD work for a big problem. How it actually works is fuzzy, though, as there are just a couple of gigabytes of floats representing weights in neural networks. No human can look at that and say: "Oh! I see why it made that move." It's so much data that what the AI is doing becomes kind of nebulous.
tarvaina 1 day ago 1 reply      
codecamper 1 day ago 1 reply      
A human was beaten by some thousands of CPUs & GPUs. On a calorie level, the human is still more efficient.

On a time to learn these skills... going from zero (computer rolls off assembly line) to mastery, the computer wins.

Actually maybe the computer wins even on the caloric level, if you consider all the energy that was required to get the human to that point (and all the humans that didn't get to that point, but tried).

hrnnnnnn 1 day ago 2 replies      
We still have Arimaa. It's designed specifically to make it difficult for computers to play.


moonshinefe 1 day ago 5 replies      
Can someone explain why this is more impressive than a computer beating top chess players over a decade ago? I'm not very familiar with Go, and while there are far more points on a Go board, it seems less sophisticated than chess to me.

Maybe Go has way more moves possible and emergent strategies or something I'm not taking into account.

agentultra 1 day ago 4 replies      
Isn't this jumping the gun a bit? It's a 5-game match, and the first game was really, really close.
imh 1 day ago 0 replies      
I want to scratch my itch and play some go. I suck, and playing against other players online I get destroyed so quickly I feel like I'm ruining their fun. Where can I find a fun bot with variable difficulty?
kowdermeister 1 day ago 1 reply      
I'm truly amazed, but I'm not surprised or shocked. Once I knew that a previous master had been beaten, I knew it was just a matter of time before the #1 player was topped.

What would be shocking is to find out that a famous writer, musician or scientist is in fact, just an alias for an advanced AI system :) It needs a little trick, because people should be tricked into believing that there's a real person behind the name.

Oh wait, I just remembered that there's a (mediocre) movie made on the subject: S1m0ne ( http://www.imdb.com/title/tt0258153/ )

Are you saying it won't happen? Think of the guys saying the same of go :)

Radim 1 day ago 2 replies      
Beating humans in Go is, in itself, not all that exciting. Go bots have been beating strong humans for quite some time now (just not the very top humans).

There are other implications that make this AlphaGo progress super exciting though. Go captures strategic elements that go well beyond the microcosm of one nerdy board game.

That's the real reason Go has been around for >2,000 years, and why this AI progress is relevant, despite its limited "game domain".

I wrote about it here, from my perspective of an avid Go player & machine learning professional [1].

[1] http://rare-technologies.com/go_games_life/

ankurdhama 1 day ago 0 replies      
What this actually means is that "the approach" the AlphaGo team developed to "computationally" play Go, a computationally intractable problem, will be very useful for other computationally intractable problems. The media is going to go crazy without understanding what actually happened. If you are getting hysterical over this and thinking that robots are going to take over, then please try this: before the start of the game, add/remove/update any rules of the game, tell both players (the human and the computer) about the new rules at the start, and let's see who wins.
terryf 1 day ago 1 reply      
Extremely interesting news and kind of sad as a human being :)

I don't really know that much about AI, but hopefully some experts can tell me - how different are the networks that play go vs chess for example? Or recognise images vs play go?

What I mean is - if you train a network to play go and recognise images at the same time, will the current techniques of reinforcement learning/deep learning work or are the techniques not sufficient at the moment?

If that works, then it really does seem like a big step towards AGI.

devy 1 day ago 0 replies      
I had a feeling that AlphaGo would beat Lee Sedol yesterday after watching Fan Hui's interview [1].

According to Hui, the defeat came down to these things: state of mind, confidence, and human error. Psychology is a big part of the game; without the fear of being defeated, and almost never making mistakes the way humans do, machine intelligence beating humans at the highest level of competitive sports/games is inevitable. However, to truly master the game of Go (which in ancient Chinese society was more of a philosophy or art form than a competitive sport), there is still a long way to go.

There were a ton of details Hui couldn't speak of due to the non-disclosure agreement he signed with DeepMind, but that was the gist of the interview.

In the end, AlphaGo match is 'a win for humanity', as Eric Schmidt put it. [2]

[1] http://synchuman.baijia.baidu.com/article/344562 (in Chinese)

Google Translate: https://translate.google.com/translate?hl=en&sl=zh-CN&tl=en&...

[2] http://www.zdnet.com/article/alphago-match-a-win-for-humanit...

conanbatt 1 day ago 1 reply      
This shows not only the insane advances in computer AI, but an incredible advancement between the Fan Hui games and this one. I'm still going through the kifu to get a sense of how it could have improved so much in only 6 months.
pushrax 1 day ago 0 replies      
That sequence on the right side was excellent, I am so impressed with the level of play.
joe563323 12 hours ago 0 replies      
Learning from experience applies to both the program and the champion. Does this mean that if the champion keeps playing the machine, he has a chance of winning?
ccvannorman 1 day ago 0 replies      
reference: SGF file on OGS: https://online-go.com/demo/114161

To my untrained eye, AlphaGo was already way ahead by move 29 in tonight's match, with black having a weak group on the upper side, while black wasted a lot of moves on the right side as white kept pushing (Q13, Q12). White erased that area later, because those pushes were fourth-line for black and the area was too big to control. Black never had a chance to recover from this bad fight. After those reductions and the invasion on the right side, white came back to the 3-3 at C17, which felt like it solidified the win.

Some people are asking what the losing move was for Lee Sedol. I wanted to joke and say "the first one...", but maybe R8 was too conservative, being away from the urgent upper side where white did all the damage.

ausjke 1 day ago 0 replies      
No surprise at all: the human brain is an organ with a limited number of neurons, and computers double in performance every 18 months. And not just in board games; I would say that AI will eventually beat humans across the board by an unbounded margin, especially once systems learn how to improve themselves.
GraffitiTim 1 day ago 2 replies      
A historic moment here, folks.

Incredible, and in my opinion a little terrifying.

bwang29 1 day ago 0 replies      
I was just wondering: does AlphaGo's game strategy also emulate psychological strategies used by real humans, such as bullying, confusing, or making fun of its opponent when it sees fit?
nefitty 1 day ago 0 replies      
What do you guys think of the future progress on the game Go? Will our only chance against AI be to team up with an AI to beat the lone AI? Like in this article about centaur chess players: http://www.wired.co.uk/magazine/archive/2014/12/features/bra... (2014) It all sounds very Gundam Wing to me.
nopinsight 1 day ago 1 reply      
Deep Blue:

Massive search +

Hand-coded search heuristics +

Hand-coded board position evaluation heuristics [1]


AlphaGo:

Search via simulations (Monte Carlo Tree Search) +

Learned search heuristics (policy networks) +

Learned patterns (value networks) [2]

Human strongholds seem to be our ability to learn search heuristics and complex patterns. We can perform some simulations but not nearly as extensively as what machines are capable of.

The reason Kasparov could hold his own against Deep Blue's 200,000,000-positions-per-second search during their first match was probably his much superior search heuristics, which let him focus drastically on better paths, plus his better evaluation of complex positions. The patterns in chess, however, may not be complex enough that a better evaluation function yields much benefit. More importantly, chess's branching factor after applying heuristics is low enough that massive search yields a substantial advantage.

In Go, patterns are much more complex than in chess, with many simultaneous battlegrounds that can potentially be connected. Go's branching factor is also multiple times higher than chess's, rendering massive search without good guidance powerless. These in turn raise the value of learned patterns. Google stated that its learned policy networks are so strong that the raw neural networks (immediately, without any tree search at all) can defeat state-of-the-art Go programs that build enormous search trees. This is equivalent to Kasparov using learned patterns to hold his own against massive search in Deep Blue (in their first match), and a key reason Go professionals can still beat other Go programs.

AlphaGo demonstrates that combining algorithms that mimic human abilities with powerful machines can surpass expert humans in very complex tasks.

The big questions we should strive to answer before it is too late are:

1) What trump cards do humans still hold against computer algorithms and massively parallel machines?

2) What do we do when a few more breakthroughs have enabled machines to surpass us in all relevant tasks?

Note: It is not entirely clear from the IBM article that the search heuristics were hand-coded, but it seems likely given the prevalent AI techniques at the time.

[1] https://www.research.ibm.com/deepblue/meet/html/d.3.2.html

[2] http://googleresearch.blogspot.com/2016/01/alphago-mastering...
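The branching-factor point can be made concrete with commonly cited approximate averages: about 35 legal moves and ~80-ply games in chess, versus about 250 moves and ~150-ply games in Go. The figures are rough, but only the orders of magnitude matter:

```python
import math

# Approximate full game-tree sizes, as base-10 exponents
# (kept in log space to avoid astronomically large integers).
chess_branching, chess_plies = 35, 80
go_branching, go_plies = 250, 150

chess_magnitude = chess_plies * math.log10(chess_branching)  # ~123.5
go_magnitude = go_plies * math.log10(go_branching)           # ~359.7
```

So a naive chess tree is around 10^124 positions while Go's is around 10^360, roughly 236 orders of magnitude larger, which is why good guidance (learned or hand-coded) matters far more than raw search speed in Go.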

scott_hardy 1 day ago 0 replies      
What an amazing game to watch. Congratulations to the AlphaGo team, and good luck to both players in the next four games!
mrdrozdov 1 day ago 0 replies      
How much did this match cost the AlphaGo team? (From a computing-resources perspective.)
socrates2016 1 day ago 0 replies      
I think it will be very interesting if Lee Sedol can win one. Humans have different blueprints and environments. Who is to say a human can't become better?
bitmapbrother 1 day ago 1 reply      
Some people were downplaying the victory of AlphaGo over the European champion because he was only a 2p player. I wonder what they have to say now.
bane 1 day ago 0 replies      
It's almost kind of bad timing in the U.S., what with one of the most insane primary seasons in our history -- this will probably not make the news at all let alone the front page like Kasparov's and Magnus's games did.
dropdatabase 1 day ago 0 replies      
I don't think a computer could ever beat me at Calvinball
panic 1 day ago 1 reply      
It'll be interesting to see what new things we learn about Go itself from DeepMind. The game is very deep, and apparently we haven't found the bottom yet!
chimtim 1 day ago 0 replies      
AlphaGo can be beaten. It uses reinforcement learning, so it will perform the sets of moves that in the past led to wins. So predictable. Sedol just needs to take control and make it play in a predictable fashion. Also, perhaps play obscure moves that AlphaGo wouldn't have trained on. Perhaps next year's Go winner will have a PhD in computer science.
couchand 1 day ago 1 reply      
When there's a computer that can beat the world champion at both go and chess with no modifications, then I'll be scared.
georgehaake 1 day ago 0 replies      
I have read a fair amount about how it was written without much detail. Anyone know what it was written in?
vancan1ty 1 day ago 0 replies      
Does Lee Sedol have access to AlphaGo training games and/or matches?
kul 1 day ago 2 replies      
Here's one: how long until a computer can beat a human assisted by a computer?
pvinis 1 day ago 0 replies      
i would like to see the same match, but switched placed. alphago plays itself, this time as black, to kind of see the choices it would make, and if they would align with lee's.
supergirl 1 day ago 0 replies      
After so much press about this, it would be funny if, overall, the human wins.
jorgecurio 1 day ago 0 replies      
Man, I am fired up to watch tonight's game... like I am fired up for the UFC.

there should be like a North American Go Nationals or something like that televised on twitch

Anyone putting money down on Sedol? He said it will be either 5-0 or 4-1 in his favor.

EGreg 1 day ago 3 replies      
Does this mean in the next few decades, computers will make better sex partners and companions than any human?
arao 1 day ago 0 replies      
Lee is not the best player NOW.
thomasahle 1 day ago 0 replies      
Giant spoiler! Does Hacker News have any policy against these things?
typeformer 1 day ago 0 replies      
Lee Sedol should have played that top left 3,3 move earlier (at least before white covered it) WTF. Humanity is no longer at the top of the intelligence pyramid...
andrepd 1 day ago 3 replies      
He has lost 1 game of a 5 game match, on a handicap. Hardly a defeat.
AlphaGo beats Lee Sedol again in match 2 of 5 gogameguru.com
904 points by pzs  18 hours ago   530 comments top 64
fhe 13 hours ago 23 replies      
As someone who studied AI in college and is a reasonably good amateur player, I have been following the matches between Lee and AlphaGo.

AlphaGo plays some unusual moves that go clearly against what any classically trained Go player would do. Moves that simply don't quite fit into the current theories of Go play, and the world's top players are struggling to explain the purpose/strategy behind them.

I've been giving it some thought. When I was learning to play Go as a teenager in China, I followed a fairly standard, classical learning path. First I learned the rules, then progressively the more abstract theories and tactics. Many of these theories, as I see them now, draw analogies from the physical world, and are used as tools to hide the underlying complexity (chunking) and to enable players to think at a higher level.

For example, we're taught to consider connected stones as one unit, and to give this unit attributes like dead, alive, strong, weak, or projecting influence in the surrounding areas. In other words, much like a standalone army unit.

These abstractions all make a lot of sense, feel natural, and certainly help game play -- no player can consider the dozens (sometimes over 100) stones all as individuals and come up with a coherent game plan. Chunking is such a natural and useful way of thinking.

But watching AlphaGo, I am not sure that's how it thinks of the game. Maybe it simply doesn't do chunking at all, or maybe it does chunking its own way, not influenced by the physical world as we humans invariably do. AlphaGo's moves are sometimes strange, and couldn't be explained by the way humans chunk the game.

It's both exciting and eerie. It's like another intelligent species opening up a new way of looking at the world (at least for this very specific domain). and much to our surprise, it's a new way that's more powerful than ours.

Cookingboy 16 hours ago 19 replies      
Someone somewhere asked why a lot of people in the Go community are taking this in a somewhat hard way. Here is my hypothesis:

Go, unlike Chess, has a deep mythos attached to it. Throughout the history of many Asian countries it's seen as the ultimate abstract strategy game, one that deeply relies on players' intuition, personality, and worldview. The best players are not described as "smart"; they are described as "wise". I think there is even an ancient story about an entire diplomatic exchange being brokered over a single Go game.

Throughout history, Go has become more than just a board game; it has become a medium the sagacious use to reflect their world views, discuss their philosophy, and communicate their beliefs.

So instead of a logic game, it's almost seen and treated as an art form. And now an AI without emotion, philosophy or personality just comes in and brushes all of that aside and turns Go into a simple game of mathematics. It's a little hard to accept for some people.

Now imagine the winning author of the next Hugo Award turns out to be an AI, how unsettling would that be.

davelondon 17 hours ago 5 replies      
Let's compare Go and Chess. We all know that Go is more complex than Chess, but how much more?

There's 10^50 atoms in the planet Earth. That's a lot.

Let's put a chess board in each of them. We'll count each possible permutation of each of the chess boards as a separate position. That's a lot, right? There's 10^50 atoms, and 10^40 positions in each chess board so that gives us 10^90 total positions.

That's a lot of positions, but we're not quite there yet.

What we do now is we shrink this planet Earth full of chess board atoms down to the size of an atom itself, and make a whole universe out of these atoms.

So each atom in the universe is a planet Earth, and each atom in this planet Earth is a separate chess board. There's 10^80 atoms in the universe, and 10^90 positions in each of these atoms.

That makes 10^170 positions in total, which is the same as a single Go board.

Chess positions: 10^40 (https://en.wikipedia.org/wiki/Shannon_number)
Go positions: 10^170 (https://en.wikipedia.org/wiki/Go_and_mathematics)
Atoms in the universe: 10^80 (https://en.wikipedia.org/wiki/Observable_universe#Matter_con...)
Atoms in the world: 10^50 (http://education.jlab.org/qa/mathatom_05.html)
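The multiplications above can be checked directly; here is a quick sketch using the rounded order-of-magnitude figures cited in the comment:

```python
# Rounded order-of-magnitude figures cited in the comment above.
chess_positions   = 10**40   # Shannon-style chess estimate
go_positions      = 10**170  # legal positions on a 19x19 Go board
atoms_in_earth    = 10**50
atoms_in_universe = 10**80

# A chess board in every atom of the Earth:
earth_of_chess = atoms_in_earth * chess_positions
assert earth_of_chess == 10**90

# That Earth shrunk into every atom of the universe:
universe_total = atoms_in_universe * earth_of_chess
assert universe_total == 10**170 == go_positions

print("exponent:", len(str(universe_total)) - 1)  # 170
```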

mixedmath 18 hours ago 4 replies      
This game was largely played extremely well by both sides. There were a few peculiar-seeming moves made by AlphaGo that the commentator found very atypical. These moves ended up playing a very important role in the end game.

I should also say that it's somewhat clear that Sedol made one suboptimal move, and AlphaGo capitalized on it. Interestingly, the English commentator made the same mistake as he was predicting lines of play. This involved play in the center of the board, in a very complicated position. Prior to this set of moves, the game was almost a tie. Afterwards, it was very heavily in AlphaGo's favor.

skc 18 hours ago 5 replies      
I find it very interesting that to a layperson, the idea of a computer being able to beat a human at a logic game is pretty much expected and uninteresting.

You try and share this story with a non-technical person and they will likely say "Well, duh..it's a computer".

mark_l_watson 6 hours ago 0 replies      
I wonder how this will affect future human play. About 30 years ago my brother and I started playing a simple African stone game Kala. We each won about half the games until I coded up a brute force search to play against. Given a game tree to the end, the program made the weirdest looking opening move, when playing first. I started making that move and forever after won.

The situation with Go is different. (I wrote the Go program Honninbo Warrior in the 1970s, so I am a Go player and used to be a Go programmer.) Still, I bet AlphaGo, and future versions, will strongly impact human play.

Maybe it was my imagination, but it sometimes seemed that Lee Sedol looked happy + interested even late in the two games when he knew he was losing.

rurban 15 hours ago 4 replies      
What I really liked about the games so far, and Michael Redmond's commentary, is that AlphaGo not only beat Lee Sedol twice, but also Redmond. He plays the same style as Sedol: he constantly predicts Sedol's moves, is surprised at the same points, and makes the same miscalculations as Sedol. He really needs some time to work out where he went wrong, the same mistakes Sedol was eventually making. This is high-class commentary, even without a computer screen to clear the board after exploring variations. He remembers all the stones and immediately retracts his own moves; amazing. I'm not sure a better device would actually help.
IvyMike 18 hours ago 2 replies      
I sense a change in the announcer's attitude towards AlphaGo. Yesterday there were a few strange moves from AlphaGo that were called mistakes; today, similar moves were called "interesting".
bradley_long 18 hours ago 2 replies      
Humans can become tired, emotional and nervous. However, a computer/software would not have these problems.

Especially for Lee, the whole world is looking at him. An "ordinary" human like me won't be able to make the right decisions under this pressure.

A great respect to Lee and the Developers of AlphaGo. Good Game!

brianchu 18 hours ago 3 replies      
I'm totally uninformed about Go, but by now it seems that unless you're clearly in the lead by the end of the midgame, AlphaGo is going to win, simply because at that point its Monte Carlo Tree Search is going to out-compute you in examining all the tactical variations in the endgame. Lee Sedol really seemed to be under a lot of time pressure at the end.

EDIT: clarified to what I originally meant: "end of midgame"
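For readers unfamiliar with the technique, here is a minimal UCT-style Monte Carlo Tree Search sketch. It plays a toy Nim variant (take 1-3 stones; taking the last stone wins), not Go, and it omits AlphaGo's policy/value networks entirely; the game and all names here are purely illustrative:

```python
import math
import random

TAKE = (1, 2, 3)  # legal moves in this toy Nim variant

class Node:
    def __init__(self, pile, player, parent=None, move=None):
        self.pile, self.player = pile, player      # player to move: +1 or -1
        self.parent, self.move = parent, move
        self.children, self.wins, self.visits = [], 0, 0
        self.untried = [m for m in TAKE if m <= pile]

def uct_select(node, c=1.4):
    # pick the child maximizing the UCB1 score
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(pile, player):
    # play uniformly random moves to the end of the game
    while pile > 0:
        pile -= random.choice([m for m in TAKE if m <= pile])
        player = -player
    return -player  # the player who took the last stone wins

def mcts(pile, player, iters=3000):
    root = Node(pile, player)
    for _ in range(iters):
        node = root
        while not node.untried and node.children:   # 1. selection
            node = uct_select(node)
        if node.untried:                            # 2. expansion
            m = node.untried.pop()
            child = Node(node.pile - m, -node.player, node, m)
            node.children.append(child)
            node = child
        winner = rollout(node.pile, node.player)    # 3. simulation
        while node:                                 # 4. backpropagation
            node.visits += 1
            if winner == -node.player:  # win for the player who moved here
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print("suggested first move from a pile of 5:", mcts(5, +1))
```

With enough iterations the most-visited move should converge to the game-theoretic best one here (take 1, leaving a multiple of 4). AlphaGo's variant replaces the random rollout with value-network evaluations and biases selection with a policy network, which is why extra thinking time compounds its endgame advantage.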

jknz 18 hours ago 4 replies      
The next person that will beat alphaGo may not be a top go player.

In particular, I'm wondering if a computer scientist with access to the alphaGo source code and all the weights of the network could trick alphaGo in order to win games automatically (cf. the papers that show a neural net can be tricked to classify a plane as any other class).

If a human with knowledge of the source code and the weights can do this, it is scary. Imagine a similar algorithm runs your car. An attacker that knows the source code and the weights may trick the algorithm into sending your car into a wall!
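The kind of attack the parent refers to (adversarial examples) is easy to sketch on a toy model. The classifier, weights, and inputs below are all made up for illustration; on a real deep net the input gradient comes from backpropagation rather than being the weight vector itself:

```python
import math

# A toy classifier whose weights the attacker knows.
w = [2.0, -3.0, 1.5]
b = 0.5

def predict(x):
    """Logistic score for class 1."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

x = [1.0, 0.2, 0.3]          # a benign input, confidently class 1
assert predict(x) > 0.9

# Gradient-sign step: for a linear model the input gradient *is* w,
# so nudge every feature a small amount against the class-1 direction.
eps = 0.5
x_adv = [xi - eps * (1.0 if wi > 0 else -1.0) for xi, wi in zip(x, w)]

print(round(predict(x), 3), "->", round(predict(x_adv), 3))
assert predict(x_adv) < 0.5  # the same model now says class 0
```

The perturbation is bounded per feature, yet it flips the decision; with the weights in hand the same principle scales to high-dimensional inputs, which is what makes the parent's scenario plausible.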

pushrax 18 hours ago 7 replies      
If AlphaGo wins all 5 matches, what do you think DeepMind will do with it? My intuition is that they won't continue development, and instead focus on other applications.

Great game btw, a pleasure to watch.

bronz 17 hours ago 4 replies      
Who was the GO professional commentator? He was consistently predicting the moves of both Sedol and alphago. I was extremely impressed.
oneeyedpigeon 15 hours ago 2 replies      
What I find fascinating - and I guess this really highlights that I have no idea whatsoever how AlphaGo works - is that at the start of game 2, AlphaGo plays P4, then Lee Sedol plays D16. To a layman, this looks like it would be a very, very common opening. Moreover, it's symmetrical - I'm not sure how that affects things, but my naive intuition is that it makes the game state less complex.

Nonetheless, AlphaGo takes a minute and a half to play its next move. Can anyone explain what on earth is going on during those 90 seconds?

dvcrn 3 hours ago 0 replies      
I think this is really fascinating but also scary. Imagine you are the best in the world at something. That is your thing, and no one else can do it better than you.

Then suddenly a computer comes along and takes that title from you. But it takes it in such a way that you are never in your life able to re-take it because of how the AI works.

Games will likely just be the first field. My girlfriend works in translation and interpretation, another area already in the crosshairs of neural networks. AIs will step by step become more efficient than people, and that is terrifying.

sams99 18 hours ago 0 replies      
The thing I find amazing about this is how soon this has happened. We all were expecting this to eventually happen but if you asked anyone who played go and was across the computer go scene when it would happen, say a year ago, they would say it was "10 years out". AlphaGo is one incredible feat of engineering.
pmontra 17 hours ago 1 reply      
Does anybody know how many CPUs and GPUs they're using this time? It was 1200+ and 700+ in October against Fan Hui. It would be interesting to know if AlphaGo became better only because of the extra learning or also because of extra hardware. I googled for that and didn't find anything, but I could have missed the right source.
typon 12 hours ago 1 reply      
I've had this thought watching this play out over the past few months. You have this deeply mystical, zen-like game of ancient China which represents the philosophy of the East and it's pitted against this pure capitalist, cold and calculating (literally) machine.

You can hold out for a few thousand years, but eventually the uncontrollable and amoral technological imperative will catch on and crush you.

It's kind of poetic and sad. It feels like technology will render everything un-sacred eventually.

naveen99 1 hour ago 0 replies      
Well, the nice thing with Go is the handicap system. I wonder how many stones of handicap the human champion needs to beat AlphaGo, and whether we will watch that number increase over time. I wonder if chess could use a handicap system to keep things interesting.
pavpanchekha 18 hours ago 1 reply      
What's even more exciting is that there weren't direct mistakes by Lee Sedol in this game, like there were in Game 1. So does that mean that AlphaGo is just playing on a level beyond him?
pkaye 11 hours ago 2 replies      
Let's say AlphaGo can beat all the best human Go players. Then what will the next more difficult game for computers to compete against humans and win?
ljk 18 hours ago 0 replies      
Wow you're fast!

good to know they'll play all 5 games no matter what the result is though

People seem to think Lee knew he lost and was just playing to learn more. Hope he learned enough to take the overlord down in the next three games

studentrob 18 hours ago 0 replies      
That was entertaining and I don't even really know the game. Props to Google for making this available live on a solid feed.

I wonder if Lee Sedol will have an interest in studying deep learning after this =)

danielrm26 6 hours ago 0 replies      
What I find fascinating about this is that the system was programmed by people who were presumably not as good at Go as Lee Sedol.

So if the first comment in this thread (about how it's a completely non-human approach) is true, it's really interesting that humans can enable computers to come up with non-human ways of solving complex problems.

Seems like a big part of this story, if I'm not being completely dumb here.

Jach 17 hours ago 2 replies      
I may be glad no one took my bet offer of me paying $19 if AlphaGo won 3/5 vs them paying $1 otherwise... I had a prediction at 90% confidence that nothing would show up before the end of this year that would be capable of beating the top players (though since I first heard about MCTS's success the idea of coupling it with deep learning seemed obvious, so I had an unfortunately non-recorded prediction that if a company ever bothered to devote about 8-12 months of research and manpower into combining those two algorithms with a very custom supercomputer or tons of GPUs then they would have something that could beat the best), then AlphaGo was announced. But the top pros weren't too impressed with its defeat of Fan Hui, and Ke Jie estimated something like "less than 5%" chance of it beating Sedol so I updated to 5% for this match of it winning 3/5...

Tonight's game was beautiful. Last night's was a fighting game way too high level for me to really grasp (I have no idea how to play like that, all those straight and thin groups would make me nervous). I'm expecting Sedol to win Friday since I imagine he's going to have a great study session today, but I'm no longer confident he'll win the last two.. Still rooting for him though. :) (I also want to see AlphaGo play Ke Jie (ed: sounds like from the other submission on Ke's thoughts that may happen if Sedol is soundly defeated), and for kicks play Fan Hui again and see whether it now crushes weaker pros or is strangely biased to adopt a style just slightly stronger than who it's facing.)

0x777 18 hours ago 5 replies      
Lee Sedol seemed to be doing well before he went into extra time (as far as I could follow from the commentators). How is it ensured that this is a fair game given the time constraints? I'm guessing adding more computing power to the AlphaGo program should definitely help it in this regard.
astrofinch 13 hours ago 1 reply      
So given that this victory seems to be happening a decade or so before experts predicted, how likely are we to see similar acceleration in reaching other AI milestones? (Especially given that AlphaGo is using the same algorithm that won the Atari games, so it has the potential to be very general in its application.)
drjesusphd 15 hours ago 1 reply      
So is this it then as far as games go? Does anyone know of any efforts to develop a more "human-friendly" complete information game than go?
TheArcane 14 hours ago 0 replies      
I wonder how long until AI starts writing bestselling novels.
Dawny33 18 hours ago 0 replies      
Wow! Monte Carlo Tree Search coming into play in this match.

Especially when AlphaGo capitalized on just one suboptimal move of Lee Sedol.

spdy 16 hours ago 0 replies      
Must be amazing seeing how the program you helped to create beat the best player in one of the most complex games on this planet.

This is a milestone in modern informatics.

axelfreeman 14 hours ago 1 reply      
I don't get the mystery of this. The algorithm is complex, SURE! But deep learning is very fast training / repetition of a game (or some other goal) while saving the good or bad results. Predict opponent moves. Find good positions/patterns. Or did I miss something here?


blacktulip 12 hours ago 2 replies      
Does anyone notice the lack of ko[1] in the games? In all 7 public games (5 with Fan and 2 with Lee) there isn't a single ko. This is unusual. If we still don't see ko fights in the following 3 games... I would suspect that AlphaGo isn't able to handle ko well enough yet, and that Google asked Lee and Fan not to initiate ko fights.

[1] https://en.wikipedia.org/wiki/Ko_fight

toolslive 7 hours ago 1 reply      
These matches are not really fair: the AI team can "prepare" by examining the human's previous games, finding weaknesses, and so on, while the human doesn't really have anything to guide his/her preparation.
dynjo 13 hours ago 1 reply      
"By the 4th game, AlphaGo apparently became self-aware and the fate of mankind was sealed..."
bronz 18 hours ago 0 replies      
I am so glad that I got to see this live. These matches will be historic.
Jerry2 18 hours ago 1 reply      
Does anyone know what DeepMind's software stack looks like? Just based on past work of some of the people working there, I'm guessing most of the code is C++ with some Lua. Anyone know for sure?
vancan1ty 9 hours ago 0 replies      
Did Lee Sedol have access to a dataset of AlphaGo games in preparation for this match series? I wonder if it would help him if he could study the computers moves and strategies in other matches.
jonbarker 9 hours ago 2 replies      
AI enthusiast and amateur player here: Michael Redmond made a great point yesterday, if the algorithm is only interested in maximizing probability of win and ignoring margin of victory, shouldn't there be some override for weak moves played when the lead is sufficient? AlphaGo played some weak moves when it perceived it was sufficiently ahead yesterday in the end game. A truly intelligent opponent will play strong moves even when sufficiently ahead, no?
bennyg 3 hours ago 0 replies      
I wonder how "smart" the AI can become once Lee Sedol starts pattern matching and playing against its moves better.
grouma 18 hours ago 0 replies      
Exciting match with top notch commentary. I'm rooting for a sweep of the series.
mhagiwara 9 hours ago 0 replies      
I always wanted to learn to play Go, and one of the reasons was that it was the only game where computers hadn't defeated humans; well, that is no longer the case and I have kind of lost the motivation to learn it.

I wonder what other games (intellectual sports) where computers have yet to defeat humans would be interesting to learn?

blahblah12 14 hours ago 0 replies      

Slack channel for discussion if anyone's interested. We're using it for commentary while the games go on. Was created by AGA people.

jasonjei 8 hours ago 0 replies      
Isn't it kind of interesting that Google is taking the lead on these projects? It reminds me of when IBM threw itself into developing chess AI when it had strong technical superiority. It's almost as if Google is taking the mantle from IBM to develop these renaissance projects.
dineshp2 12 hours ago 0 replies      
Most people, other than the researchers and hackers, really did not understand what AI was capable of doing. The very idea of AI seemed too abstract to comprehend (I consider myself guilty).

But AlphaGo showed us what AI is really capable of doing in an eerie sort of way and I think interest in AI will soon become mainstream which is a good thing for the development of AI.

Now it's at least easier to comprehend the context of all those doomsday warnings about AI destroying humanity which I never took seriously.

lottin 7 hours ago 0 replies      
From what I gather, if you have a computer powerful enough, you can solve any game by simply applying Game Theory, as long as you can assign a numerical value to the possible outcomes.
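That idea is backward induction (minimax over win/loss payoffs), sketched below on a toy Nim game where each position gets the value +1 (win for the player to move) or -1. The catch is that this only works when the game is small enough to enumerate; at ~10^170 positions Go is not, which is why approximations like MCTS and learned evaluations exist:

```python
from functools import lru_cache

MOVES = (1, 2, 3)  # take 1-3 stones; taking the last stone wins

@lru_cache(maxsize=None)
def value(pile):
    """Game-theoretic value for the player to move: +1 win, -1 loss."""
    legal = [m for m in MOVES if m <= pile]
    if not legal:
        return -1  # no stones left: the previous player already won
    # Win if any move puts the opponent in a losing position.
    return max(-value(pile - m) for m in legal)

# Multiples of 4 are lost for the player to move; everything else is won.
print([value(p) for p in range(1, 9)])  # [1, 1, 1, -1, 1, 1, 1, -1]
```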
skarist 15 hours ago 0 replies      
I predict 5-0 for DeepMind. Now, Lee has a broken self-confidence to battle (crucial for a human player), something that will not and can not trouble the DeepMind team.
hutzlibu 12 hours ago 2 replies      
Does anyone know of a site/video, where I can just see the game moves without commentary and thinking pauses?
myohan 11 hours ago 0 replies      
I would like to see another experiment where Lee is aided by a computer and plays against AlphaGo and see who wins...some believe that human intuition working with a mediocre computer is much more powerful than a supercomputer by itself.
karussell 14 hours ago 0 replies      
I wasn't reading the whole thread, but was it possible for Lee Sedol to play against the final AlphaGo before the match? Although AlphaGo seems to be a huge achievement, I would find the lack of preparation a bit unfair, as AlphaGo was probably trained on lots of Sedol's previous games.
i_don_t_know 18 hours ago 2 replies      
Is there a complete recording of the commentary? They had one for game 1. The current live stream only goes back two hours and doesn't include the beginning of the game.

I'm looking at the DeepMind channel on Youtube:https://www.youtube.com/channel/UCP7jMXSY2xbc3KCAE0MHQ-A

w8rbt 9 hours ago 0 replies      
Would it be possible to play in a random/unpredictable fashion and win a game of go? If so, that may be one approach to beating the computer.
github-cat 14 hours ago 0 replies      
Should we be worried about the win of AlphaGo? http://www.pixelstech.net/topic/141-Should-we-be-worried-abo...
conanbatt 13 hours ago 2 replies      
What is interesting to me is that the computer makes clear mistakes when it's in the lead. Since it might find the chances of winning equal among different scoring results, it often picks a weaker one.

This has a powerful consequence: we have not seen AlphaGo pushed to the limit; it is narrowing the margin as if it were playing a teaching game.

Lee Sedol, I think, came to this conclusion, and the only human strategy left is to take a lead big enough to maintain for the rest of the game. And that might be the last strategy left to show whether the computer is already unbeatable, because it will be pushed to its limits to win a game, and it might still overcome humans.

vedaprodarte 16 hours ago 2 replies      
A question: should we take it as "a computer beating a human" or "developers beating a Go player"? I had this discussion with my friends and we have opposite opinions.
awl130 11 hours ago 0 replies      
do you think lee sedol should change his goal from trying to win all remaining three games to winning just one? in other words, sacrifice the next two games to learn about alphago and then try to win the final game.
pgodzin 18 hours ago 3 replies      
Is it best of 5 or are they definitely playing 5 matches?
Tistel 12 hours ago 0 replies      
does anyone know anything about the implementation (language etc)?
andreyk 5 hours ago 0 replies      
Very impressive. Since there is a ton of hype about this and many media stories (at least NYTimes, with no citation at all) saying that this came 'a decade early', I think it's worth looking over Yann LeCun's retrospective on research in this area (https://www.facebook.com/yann.lecun/posts/10153340479982143). Clearly he was saying all this to preface the results of Facebook's research in comparison to Google's, but I still think it is a very good overview of the history and shows the ideas did not come about suddenly. Quoting a few key things since the whole thing is very long:

"The idea of using ConvNet for Go playing goes back a long time. Back in 1994, Nicol Schraudolph and his collaborators published a paper at NIPS that combined ConvNets with reinforcement learning to play Go. But the techniques weren't as well understood as they are now, and the computers of the time limited the size and complexity of the ConvNet that could be trained. More recently Chris Maddison, a PhD student at the University of Toronto, published a paper with researchers at Google and DeepMind at ICLR 2015 showing that a large ConvNet trained with a database of recorded games could do a pretty good job at predicting moves. The work published at ICML from Amos Storkey's group at University of Edinburgh also shows similar results. Many researchers started to believe that perhaps deep learning and ConvNets could really make an impact on computer Go.


Clearly, the quality of the tactics could be improved by combining a ConvNet with the kind of tree search methods that had made the success of the best current Go bots. Over the last 5 years, computer Go made a lot of progress through Monte Carlo Tree Search. MCTS is a kind of randomized version of the tree search methods that are used in computer chess programs. MCTS was first proposed by a team of French researchers from INRIA. It was soon picked up by many of the best computer Go teams and quickly became the standard method around which the top Go bots were built. But building an MCTS-based Go bots requires quite a bit of input from expert Go players. That's where deep learning comes in.


A good next step is to combine ConvNets and MCTS with reinforcement learning, as pioneered by Nicol Schraudolph's work. The advantage of using reinforcement learning is that the machine can train itself by playing many games against copies of itself. This idea goes back to Gerry Tesauro's NeuroGammon, a computer backgammon player that combined neural nets and reinforcement learning that beat the backgammon world champion in the early 1990s. We know that several teams across the world are actively working on such systems. Ours is still in development.


This is an exciting time to be working on AI."

chm 6 hours ago 1 reply      
This will be buried by now but:

What happens if the Go master tries to deceive the opponent? As in purposefully playing a counter-intuitive position, or even "trying to lose"? Will the AI's response be confused, since it is expecting rational moves from its opponent?

EGreg 10 hours ago 0 replies      
Someone here said an interesting thing. Perhaps the next AI challenge would be to see whether AI running on weaker machines can beat AI of yesterday on stronger machines. And this test can be automated to find even better algorithms. Like can Rybka running on an iPhone today beat Fritz running on a distributed supercomputer? Or thinking for 2 seconds rather than 2 minutes, on the same computer?

There is something unnerving about a computer that can answer in 0.01 seconds and still have the move be better than anything a human would come up with in an hour. At that point a robot playing simultaneous bullet chess would wipe the floor with a row of grandmasters, beating them all without exception.

LaFolle 18 hours ago 0 replies      
This is superb awesome!!!

In future, it will be interesting to see AlphaGo playing against itself!

eruditely 18 hours ago 0 replies      
Oh come on Lee Sedol, we believe in you man, you might crack under pressure, it's cool. Bring it home for us meatbags will you? HK-47 why T_T.
openaccount 15 hours ago 1 reply      
Meanwhile 'Google Translate' translates texts terribly badly. Why don't they work on important tasks?
How to Pass a Programming Interview triplebyte.com
995 points by runesoerensen  2 days ago   546 comments top 73
quanticle 2 days ago 23 replies      

 Being a good programmer has a surprisingly small role in passing programming interviews.
And that just says it all, doesn't it? I agree that interviews should test candidates on certain basic skills, including (time/space) complexity analysis. But do you really learn anything by asking the candidate if they can recite the time complexity of a moving window average algorithm (as I was asked to do by an interviewer yesterday)? What does the candidate's ability to code a palindrome checker that can handle punctuation and spaces tell you about their ability to deliver robust, maintainable code?

I don't have the answer, but I just don't see how the questions typically asked in programming interviews give you a good picture of the candidate's actual programming ability. I much prefer "homework" projects, even if they involve me working "for free", because I feel like they ask for actual programming skills rather than the "guess the algorithm" lottery of phone screens and whiteboard coding.
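For what it's worth, the two questions mentioned above are small enough to sketch. These are hypothetical interview-style answers, not what any particular interviewer expects:

```python
from collections import deque

def is_palindrome(s):
    """Palindrome check ignoring punctuation, spaces, and case: O(n)."""
    letters = [c.lower() for c in s if c.isalnum()]
    return letters == letters[::-1]

class MovingAverage:
    """Fixed-window moving average; O(1) time per new sample,
    thanks to a running sum instead of re-summing the window."""
    def __init__(self, k):
        self.k, self.window, self.total = k, deque(), 0.0

    def add(self, x):
        self.window.append(x)
        self.total += x
        if len(self.window) > self.k:
            self.total -= self.window.popleft()
        return self.total / len(self.window)

assert is_palindrome("A man, a plan, a canal: Panama!")
assert not is_palindrome("Hello, world")

ma = MovingAverage(3)
print([ma.add(x) for x in [1, 2, 3, 4]])  # [1.0, 1.5, 2.0, 3.0]
```

Which, of course, is exactly quanticle's point: being able to produce this under whiteboard pressure says little about writing robust, maintainable systems.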

staunch 2 days ago 9 replies      
> candidates who have worked at a top company or studied at a top school go on to pass interviews at a 30% higher rate than programmers who don't have these credentials (for a given level of performance on our credential-blind screen).

Welcome to Silicon Valley meritocracy.

And it's much worse for founders seeking investment, where there are no hard skills to test at all. It's almost purely about being the same class as the investor.

Which is why you get only upper class people funding upper class people, which then hire upper class people. The 99% only makes it in because there aren't actually very many qualified people among the "elite".

From http://paulgraham.com/colleges.html

> We'd interview people from MIT or Harvard or Stanford and sometimes find ourselves thinking: they must be smarter than they seem.

bshlgrs 2 days ago 7 replies      
Another tip which I give: Interviewers vary widely in how much they care about whether your syntax is accurate, whether you handle invalid inputs, and whether you write unit tests. It's really useful to ask the interviewer whether they want you to worry about those things.

If you handle invalid inputs for an interviewer who doesn't care about that, they're going to be a little annoyed by you going more slowly than needed. If you don't handle invalid inputs for an interviewer who does care, then they'll think you're careless.

methodover 2 days ago 3 replies      
Interesting article.

For a while, we had a non-typical interview strategy: A take-home project. We would give the candidate a week or so to work on a smallish project, the requirements of which we would specify. After they completed the project, we would do a group walkthrough with them.

We've hired five engineers over the last three years. For the first two, we did the take-home project. But, then I started to wonder a bit about if it was reasonable to ask programmers to work a weekend on a project. There were a bunch of persuasive comments in HN threads on the subject saying it was unfair -- a job seeker would have to spend an incredible amount of time on each application. And one candidate that I really liked aborted the interview process once I told him about the take-home test.

So I changed the process to something much more typical, with live, in-person coding exercises. We hired three more engineers under this system.

So, how did they compare? Well, the engineers hired when we were doing take home projects have worked out INCREDIBLY well. They are independent and very resourceful. They are excellent.

The engineers hired under the more typical system have not done well at all. We had to let go of two of them, after months of coaching, and the third isn't doing that well.

Random chance plays a huge role here, I'm sure. Maybe we just got lucky with the take-home project engineers. But personally, I think it makes a lot more sense to have the interview process match the work. /shrug

osagatisop 2 days ago 9 replies      
I really wish that at some point during my CS education I would have realized how typical programming interviews worked and just how impossible they are for me. None of my internships had this sort of stuff and after a long string of failures interviewing after graduating, I can openly admit that being able to solve algorithm stuff just isn't in my blood.

It doesn't matter how many books I read or questions I practice, I just can't work these kinds of problems. If I had an inkling of what it was like beforehand, I would have switched majors or dropped out. Half a decade of hard work, tens of thousands for tuition and a useless degree at the end.

I don't know whether to laugh, cry or jump off a cliff. Maybe all three.

BinaryIdiot 2 days ago 4 replies      
> This situation is not ideal. Preparing for interviews is work, and forcing programmers to learn skills other than building great software wastes everyone's time. Companies should improve their interview processes to be less biased by academic CS, memorized facts, and rehearsed interview processes. This is what we're doing at Triplebyte.

Thank you! This is a good write up and just like it concludes it's far from ideal.

I'd love to see more interviews based on real-world type things: maybe code a project, come in and explain the architecture, the reasons for your data structures, field performance questions, etc. That shows how you code and communicate, and even better, how you work with others; maybe walk someone through extending your project or something similar.

Honestly though, my biggest issue with interviews is the lack of feedback with a negative result. For instance, in one of my last interviews I spent literally months with the company, interviewing on and off on the phone and in person. I never heard a SINGLE negative thing from anyone, always answered every question correctly, shot the shit with many of them, and everything seemed perfect. Even the team lead asked me not to go after anything else because he wanted me. Then I was ultimately declined, with the only reason given being "lack of experience". But I had over 12 years of experience; all of the interviewers told me I went above and beyond, said they agreed with and liked the solutions I came up with, that I talked through them well, etc. etc. I was never able to get anything else out of that.

If something is wrong with a candidate and they don't fit, that's perfectly fine. But please give them accurate and detailed feedback where possible. This left me with absolutely nothing helpful, and instead of possibly improving on something I'm left thinking they made a mistake or everyone simply lied to me during the entire process. I was even exchanging texts and emails with the lead right up until I was given the negative result, and then nothing.

BoysenberryPi 2 days ago 3 replies      
I've interviewed for a lot of YC companies and companies that frequently post in the "who's hiring" thread, and the programming interviews they give are absolutely horrendous. I've had programming tests where companies look at my resume and go "so you are very experienced in Ruby? Great, solve these algorithms in C++ for us." I've actually had someone give me an ACM-ICPC world finals question.

I don't have a problem with pushing someone's limits and seeing where they stumble, but a lot of these interviews seem intentionally designed to screw you over, with the abstract way the questions are asked and the brain teasers they give you.

gurgus 2 days ago 1 reply      
Late to the party but I'll just add in my $0.02

I remember the best interview I had was at a company offering open source software. For the coding component of the interview they gave applicants a task (they cherry-picked one of the easier ones) from their JIRA backlog and told them they had two weeks to come up with a patch. It didn't matter if the applicant could fully solve the bug/feature, since it's not particularly fair to expect the applicant to be familiar with the ins and outs and gotchas of the system they are working with. What did matter is that they A) submitted something, and B) could talk their way through their thought process and how they arrived at their solution.

This sort of process really clicked with me. Obviously it's not as straightforward for some organisations that may offer proprietary software, but the process could certainly be adapted for a lot of organisations, in my opinion.

kelvin0 2 days ago 5 replies      
What if designers had to go through a similar interview process? Here are some colors, please arrange them in color-coordinated palette groups for a given visual effect. Why is a red font on a blue background bad? Please justify. That would simply be hilarious.
vkjv 2 days ago 3 replies      
> "Thats exactly the point. These are concepts that are far more common in interviews than they are in production web programming."

The list includes things like Big-O analysis. While formal analysis is certainly not a day-to-day part of most programming, knowing the runtime complexity of the code you are writing is almost always important.

While I generally don't care for most algorithmic problems, I am honestly concerned when someone can't tell me that something runs in O(n^2) and could probably be optimized.

EDIT: I should note, I'm not necessarily looking for them to use precise terms here, but they should be able to clearly articulate it.
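To make that concrete, here's a contrived sketch (names are mine, not from the article): the same duplicate check written two ways. What I'd want a candidate to articulate is that the first version does roughly n^2/2 comparisons while the second does a single pass.

```java
import java.util.HashSet;
import java.util.Set;

public class DupeCheck {
    // O(n^2): compares every pair; fine for tiny inputs, painful at scale.
    static boolean hasDupeQuadratic(int[] xs) {
        for (int i = 0; i < xs.length; i++) {
            for (int j = i + 1; j < xs.length; j++) {
                if (xs[i] == xs[j]) return true;
            }
        }
        return false;
    }

    // O(n): one pass with a hash set; add() returns false on a repeat.
    static boolean hasDupeLinear(int[] xs) {
        Set<Integer> seen = new HashSet<>();
        for (int x : xs) {
            if (!seen.add(x)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        int[] xs = {3, 1, 4, 1, 5};
        System.out.println(hasDupeQuadratic(xs)); // prints true
        System.out.println(hasDupeLinear(xs));    // prints true
    }
}
```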

entitycontext 2 days ago 2 replies      
It seems like there is a large middle area of concepts in between low-level algorithms/data structures and high-level system architecture that are left out of many of these interview prep guides:

* Principles and patterns of object-oriented (or functional) design

* Relational (or NoSQL or analytics) database design

* Unit, integration and system testing

* Logging, profiling and debugging

* Source control (e.g. branch/PR/merge flows)

* Deployment and devops

Do these subjects really not come up in some programming interviews?

dbcurtis 2 days ago 0 replies      
When did the job interview become a quiz show? I've spent plenty of time on both sides of the interview table. Sure, I've asked problem-solving questions. It was never to play stump-the-candidate or to see if they could come up with some CS proof on the fly. It was to examine their approach to problem solving, and the way they interacted (back-and-forth questions). The right answer never entered into it, and if an interview question has a "right answer" then it is probably a lousy interview question.

I knew a college recruiter at Large Semiconductor Maker. She had been around the company many years, starting out as a mask designer. She knew nothing about engineering, but had worked with engineers on a daily basis for 10 years. She was great as a recruiter for new-grad engineers. One of her favorite questions was: "My back yard is 60 feet wide, and I want a brick fence across it. How many bricks do I need?" You would be surprised how many people never said another word, worked out a numerical answer, and told her the number. boggle. The point of the question is to find out how good the candidate is at uncovering and understanding the customer's wishes and what the customer's vision is of the desired end result.

Somewhere along the line there has evolved a group of people who never learned how to interview job candidates for problem-solving oriented jobs. A quiz-show lottery is a lousy way of finding out if someone can flesh out under-defined problem statements, replace a stupid problem statement with a better one, and creatively explore the dark corners of a solution space.

Let's stamp out the quiz show now.

Udo 2 days ago 0 replies      
> If youre interested in what were doing, wed love you to check out our process.

Well, I say this as someone who was apparently blacklisted on TripleByte despite ostensibly getting a perfect result on the programming task: getting an interview would have been a start. Or at least being told what happened and why, instead of requesting a pre-screen phone interview over and over without result until I "got the message".

I understand the TripleByte team is perceiving problems that exist in the programmer hiring process, including the fact that interviews are not necessarily designed to gauge a candidate's qualities as a programmer. I also understand that you probably can't change the status quo in that space without first establishing yourself firmly in the existing field. But my own experiences make me skeptical about how transparent TB really wants to be.

lackbeard 2 days ago 1 reply      
>Similarly, most programmers just want a good job with a good paycheck. But stating this in an interview is a mistake.

Just like everyone only hires the best, they only hire people who are "passionate" about <blub>.

nice_byte 2 days ago 1 reply      
> Use a dynamic language, but mention C

I take issue with that recommendation. You should use whatever language you feel most comfortable with. If it's C, use C. If it's Java, use Java. You don't have the luxury of an IDE or anything like that, so you need to have enough of the language in your head to write a program without looking something up.

pyb 2 days ago 3 replies      
Another great post from Triplebyte, but I am confused about their model. Why would candidates want to apply to Triplebyte, if they still have to go through the companies' full interview process on top of the Triplebyte process?
Aleman360 2 days ago 1 reply      
How I did it for a Microsoft interview: cram for a weekend with a good algorithms text book. I do UI development where 99% of the work is figuring out the UI framework but the study session definitely got me into the right mindset for interviews.
DeadeyeRolf 2 days ago 2 replies      
As a junior in university looking for internships this summer, I can attest that going through the programming interview process feels pretty foreign compared to traditional interviews. I just stumbled through my first programming interview last week.

I believe in the future point 3 will be especially helpful. The hardest parts for me were trying to figure out how appropriate it was for me to be rambling as I coded (something I'm not used to doing at all), and trying to understand what was and wasn't appropriate to ask the interviewers about my code.

Time to brush up on breadth-first search and hash tables!

jameshart 2 days ago 2 replies      
Programming interviews aren't pass/fail, they're match/no match. Imagine how it would read if this said "how to pass a first date".
hoodoof 2 days ago 2 replies      
I've built a lot of stuff and, apart from hash tables, never really needed to understand:

  Hash tables
  Linked lists
  Breadth-first search, depth-first search
  Quicksort, merge sort
  Binary search
  2D arrays
  Dynamic arrays
  Binary search trees
  Dynamic programming
  Big-O analysis
I guess it depends: if you are going for a job that REQUIRES these techniques then yes, it is important, but for web application development (even sophisticated web application development) they're not needed.

bogomipz 2 days ago 1 reply      
I'm sorry, but could someone explain who Triplebyte is and why they are so "precious"? This blog post, in my opinion, is symptomatic of this bizarre Silicon Valley interview culture. This whole "how to ace the programming interview" bullshit is really disturbing. Are you looking for people who are good at test taking or people who can produce good product? Yes, CS fundamentals are important -- data structures/algorithms -- but I'm much more interested in whether a candidate understands the problem space and can reason about it. I want to know about systems they've designed in the recent past and why they made the choices they made. Being able to code up Kruskal's spanning tree perfectly on a whiteboard while being timed is a neat parlor trick, but if you don't understand the bigger holistic picture of systems I don't think it means that much.

As far as these take home assignments go, I find this a disturbing trend as well. Especially egregious is telling someone there is a time limit on your working for free. If you're going to pay me for my time awesome lets talk, otherwise lets maybe look at some code I've already written.

This industry seems to get more up its own ass every day.

noonespecial 2 days ago 5 replies      
Interviewing should now be part of most CS educations, if not its own three-class/semester course. It's that weird, and that important.
Omnipresent 2 days ago 5 replies      
The practice section doesn't mention anything other than the book. Are there any other resources that people use to prep for an interview? Looking for something that tests algorithms and data structures more than solving tricky problems.
aprdm 2 days ago 1 reply      
I've been interviewing in the past month as I need to find a new role and it is just crazy.

It is SO random: a lot of useless questions, and small startups with long hiring processes harder than the big 4's.

Just one example: I've received an offer from one of the big 4 after going through their process. I was lucky in the questions - things I had studied.

I also applied to around 40 startups / small companies. I made it through to the final on-site interview at only 5 of them. Lots of whiteboarding, silly technical questions that aren't a proxy for day-to-day work, etc.

I really think that passing the process at a company like Facebook and not passing at other companies working in much less complex environments says a lot.

Another thing that annoyed me a lot was that at some companies, when I froze on a problem and hit a dead end, instead of trying to help me or give constructive advice they would just keep adding pressure. It's craaazy. You're at a whiteboard, being judged on an absurd on-the-fly problem, and the guy is trying to talk you down instead of helping.

Crazy stuff.

m4tthumphrey 1 day ago 0 replies      
When I interview candidates I sit with them for half an hour or so to get to know them. Then I give them a purposely broken, poorly written piece of code which I tell them to pull apart. This proves incredibly effective: even if they miss some of the more obvious errors, I can at least point them at that area and then see if they can find the problem on their own. There are about 100 different things to talk about, so it really gives me an idea of the level they are at, and also the type of programmer they are: passionate, lazy, smart, meticulous, inexperienced, confident, etc.

Then if I feel they are worth a second interview, I get them back to sit with me and my team for the day to see how they fit in with the team. Then all being well I offer the job.

jroseattle 1 day ago 0 replies      
"They eat large sprawling problems for breakfast, but they balk at 45-min algorithm challenges."

Care to guess what we're looking for in screens and interviews?

Some of our best performers were poor interviewers. Likewise, we've had interview aces not pan out. Rather than expecting the world to become interview clones (and make our hiring decisions even more difficult), we're learning how to be better interpreters of people to get to the answer of our real question -- is this person an engineer we'd like to have on our team?

It's still a work-in-progress and we're nowhere near perfect, but we're simply not going to outsource our decision-making to the status quo of technical interviewing in 2016.

gtolle 1 day ago 0 replies      
At my startup, I've had some success hiring mobile devs through "audition programming" on Google Hangout.

I create a "real-world-lite" task like "connect to this simple JSON API I built and implement a recursive product category browser on top of it". I've done this task myself already with a timer and am confident that it will take about an hour to implement. Then I ask the candidate to share their screen and implement it in Xcode while I watch. As they develop, I can get a sense for how they attack problems (quick and dirty, slow and methodical, stack overflow copying, etc), and afterwards I can ask questions about their thought process.

If they did well in the first one, we block out a second one for another hour, and a third for another hour after that, each one testing different skills.

This avoids the time imbalance inherent in take-home projects, because I'm spending just as much time as they are. And it avoids the painful "implement a red-black tree" whiteboard questions by focusing on real-world work in their own dev environment. It also means I have a decent sense of their skills before I ever invite them to an on-site interview.

rurban 1 day ago 0 replies      
Given that none of the most popular dynamic languages knew how a good hash table should be implemented, despite dozens of half-way experienced programmers working on them over many years, and that most of those programmers would not be able to write a proper binary search, these questions are certainly too hard for a jobseeker. Those languages still survived with improper implementations for decades. So will TripleByte.

A good programmer must be able to survive mediocre colleagues and terrible managers. How to check for that in an interview?

eiopa 2 days ago 1 reply      
It's always been strange to me that tech interviews tend to check for basic CS, rather than deep engineering.

When was the last time you had to implement bsearch() in a real project?
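Rarely, I'd bet, and that's part of the point. For reference, the whole thing is about ten lines (a sketch, including the overflow-safe midpoint that interview trivia fixates on):

```java
public class BSearch {
    // Returns an index of target in the sorted array a, or -1 if absent.
    static int bsearch(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // avoids the (lo + hi) / 2 int overflow
            if (a[mid] == target) return mid;
            if (a[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9};
        System.out.println(bsearch(a, 7)); // prints 3
        System.out.println(bsearch(a, 4)); // prints -1
    }
}
```

In a real project you'd just call the standard library's Arrays.binarySearch, which is rather the point.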

ammon 2 days ago 2 replies      
A large number of bad things influence interview decisions (credentials, targeted practice, how well you know the specific algorithms that come up again and again in interviews). I hope that more programmers getting better at interviewing skills will help move companies toward measuring actual programming skill.
graycat 2 days ago 2 replies      
A candidate has an impressive STEM field educational background, say, Bachelor's, Master's, or Ph.D. degrees, has peer-reviewed publications of original research in the STEM fields including in computer science, has taught computer science in world famous US research universities, has created original, fast algorithms in computer science, has written successful software for a wide variety of applications in a wide variety of programming languages, and, then, somehow needs to learn some additional, special lessons on "how to pass a programming interview"? Such an interview is by the Queen in Alice in Wonderland or by someone well qualified in computing?

Why the heck the need for special lessons to do what the candidate has been doing successfully for years?

balls187 2 days ago 1 reply      
> The good news is that interviewing is a skill that can be learned.

When I hear folks complain about programming interviews, I point to that.

The month I spend gearing up for coding interviews usually guarantees me a job that offers at minimum a $10k raise. I consider that a very good use of my time.

jorgecurio 2 days ago 1 reply      
Well, technical interview questions are why I've given up on pursuing a programming career path, and I was shocked that employers were still giving out technical interview questions for a product manager role; it didn't matter that I was a software dev years ago. Rote method still trumps real-world experience building real commercial software that people will pay you for, according to a lot of interviewers... and I've heard about a few managers who hired neophytes based on their 'algorithmic' performance on a whiteboard, watched them build shit on Node.js, panic when far more work was necessary, and pull the plug on their CMS (reinventing wheels) because PHP is 'slow'... what in the fuck did I just hear, you want to rebuild a static CMS website in Node.js because you think it's going to give you wings?

well at least that has been my experience so far trying to get a job and I'm coming up dry every time. I technically have no work experience because I've been holed up writing a big data mining SaaS tool for a few years and since I was self employed it seems to mean jack all for credentials.

I don't know I'm in a bit of a jam. Starting a complex SaaS product from scratch that thousands of people have used is simply useless against a fucking sorting algorithm that will be used heavily on the actual product.

Like I feel like I'm living in a bizarro world sometimes... I have all this experience and knowledge in this one area, building shit and getting people to pay for it, and it's going to waste as I half-heartedly apply for jobs I know I will not be able to pass once the second round of interviews and the technical algorithm questions begin... I'm sure if I wanted to learn more about the different varieties of sorting algorithms I would've fucking consulted Stack Overflow already... come on man, I just wanna solve real world problems with real world product experience, not write fucking code on a whiteboard. I'd be happy to architect out an entire stack powering your product into the future on a whiteboard, but fuck man, if you want help on your sorting algorithm just google Stack Overflow.

my 2 cents.

progproprog 1 day ago 0 replies      
If you can't talk about design implications quantitatively, and don't have a rigorous understanding of how to build data structures (understanding, not memorization), you can't get mad that I make $100,000 more than you do and work half as much.

If you're a career programmer, and you've never bothered to hone these skills, don't be surprised when you can't easily find work in 10 years. The cheaper guy or girl who comes with less risk will beat you.

blisterpeanuts 2 days ago 0 replies      
My solution has always been pretty simple -- I do poorly in exam interviews, and usually I don't get the job. Regurgitating Algorithms 301 -- nope, not for me. They'll end up with someone who's great at regurgitating. They're happy and I'm happy.

I can explain very clearly how I would go about solving a problem, maybe whiteboard it, flesh out a design, ask them some key questions to show I'm a thinker and not just a follower. That kind of interview usually goes well for me.

As a result, I usually end up with jobs where I have a lot of creative latitude, where I can think up new product ideas and prototype them out, where I can solve problems my own way.

I wish I could pass those Google tests, though. It would be nice to have it all. But some of us just have limitations, I guess. Luckily, it hasn't held me back from having an enjoyable and relatively well compensated career.

Probably today the thing is to have a couple of apps on the Android or Apple appstore, a Github page with some interesting toys and experiments, some open source contributions, and of course the old-fashioned networking that is how many of us still get our best jobs and most successful business relationships.

ascetonic 2 days ago 0 replies      
While there are several resources to practice the use of algorithms and data structures, I find there aren't as many good resources to prep for the 'system design' interview.

If one wants to switch from a completely different domain (like investment bank tech or embedded development), there is no way one can have the prior knowledge to tackle the design of a Google Docs-style system.

It seems like something that can only be gained through on-the-job experience. Is there no hope for newbies?

felhr 1 day ago 0 replies      
I had an interview some years ago that demanded a difficult CS problem be solved, and besides the coding, the offer stated that you should happily accept being helpful with the IT needs of the marketing guys: installing their antivirus, email...
yarou 2 days ago 2 replies      
Is it weird that I find this entire blogpost eerily similar to PUA strategies?

I think it ultimately boils down to confidence. To use the dating analogy, nothing drops the proverbial panties (or boxers, if you're into that sort of thing) faster than having confidence. Being physically attractive (i.e. fit and in shape) doesn't hurt either.

Take my analysis with a grain of salt, though. I happen to be hilariously bad at interviewing, and don't get me started on dating.

ancymon 1 day ago 0 replies      
The company says: "We help programmers find great jobs at Y Combinator startups."

What's so great about Y Combinator startups? I mean, why do they narrow it to only such startups? Wouldn't it be better to just say they help with finding jobs at startups?

boopuyy 2 days ago 0 replies      
Edge cases, running time, and correctness. If you're not hitting at least 2 of these perfectly, then you're not writing good code.
valine 2 days ago 3 replies      
It very much depends on the gig, but I would add another item to the list: Show a basic understanding of UX design. Companies need people who can mediate between designers and programmers. While demonstrating an understanding of Photoshop and Illustrator says nothing about your skills as a programmer, it could be the thing that makes you stand out from the crowd.
joe_the_user 2 days ago 0 replies      
To pass a sane interview with sane people, focus your mind on the "how to be a good programmer" question -- programming as a cooperative human endeavor.

Of course, it may be that a bit of craziness is involved and they'll ask ridiculous little or big problems, but someone with a reasonable amount of skill can answer those.

Or it may be that a lot of craziness is involved, things veer into bit-twiddling assembly, top management steps in unannounced to shoot random questions, suddenly syntax or whether you are "server side oriented" or whatever matters a lot - "we want the absolute best programmers on the market and we cement their loyalty by paying well under market rates..." etc.

Now, the more craziness appears, the less you'll actually want the job. But markets being what they are, you may need the job. In the end, there are no easy answers. Keeping calm is probably the main advice.

k__ 1 day ago 0 replies      
I had a bunch of interviews in my life and every one had its own "special sauce"...

My first one was with a Java company and the hiring manager wanted me to draw UML diagrams, which was the only thing that I learned in the software engineering course in university.

Another one was about programming a game. Nothing much, but it needed a few 2D transformations I knew nothing about, so I failed miserably.

Most jobs I got were just "talks" about what I can do.

"You know web development? HTML? Nodejs?" - "Yes" - "You're hired!"

innertracks 2 days ago 0 replies      
While not strictly programming, I just finished interviews number four and five with a company about an hour ago. SQL and data modeling. First, one non-technical phone interview, one technical phone interview, and one online take-home test. Today, two different sessions, the second one primarily technical, and lunch.

Whiteboarding a simple data model of a real-world scenario was surprisingly difficult, even though the interviewers were very cordial. The exercise was used to gauge my question-asking skills (the interviewer was acting as subject matter expert) as much as data modeling and SQL. It was kind of fun, as I used it as an opportunity to expand my ability to problem-solve in stressful environments.

gerbilly 2 days ago 0 replies      
I was saying to someone the other day that timed programming tests (Codility seems popular) are really just youth tests.

They are meant to weed out older programmers, in favour of younger cannon fodder...I mean candidates.

mooreds 2 days ago 0 replies      
What's the least stressful way to pass a programming interview?

Not to have one at all!

This obviously doesn't apply to people just starting out, but I've found the easiest way to get a job is to have worked with someone at the company in a similar role. Many of the issues that interviews are designed to highlight (attitude, flexibility, stick-to-it-ivness, culture fit) simply are non issues if you have someone on the inside who has experience with you.

vadym909 1 day ago 0 replies      
Recruiting is F*&^%$. No matter what you do to the interview process: hiring managers are looking for @#$#% and they may be #%@%@ and their company may be $@%@ and their team dynamics are ##%#. Candidates are looking for $@#!@ and they may be $@@$ and their personal situation is $$@@ and their salary expectation is ##%@@.
piokuc 2 days ago 0 replies      
This is a very good piece of advice; it matches my past job seeker's experience really well. I think I got most of my jobs largely thanks to the enthusiasm I had for the stuff the companies were doing. It was genuine, but I'm sure it could be faked as well. Everything else is spot on, very honest stuff.
gaius 1 day ago 0 replies      
It you "mention C" in an effort to game the system, any competent interviewer will eat you alive.

I remember when "having a Github" was a signal, as soon as word got out everyone had one, but most were zero or near-zero original content.

smaili 1 day ago 0 replies      
I think it's also important to mention that you should come in with some good questions of your own that you've prepared for them. A lot of times the interviewer uses this to see how much homework you've done on both the company and the job you're applying for.
eranation 2 days ago 3 replies      
If you have an unbounded abundance of good candidates, it is a different story than when you are a new startup fighting for talent.

At highly targeted companies such as Google, Facebook et al, I'm sure that if they have a dry spell of good candidates in a given month (can't think of a reason why), they revert to things like: "We don't care if you don't get the 'trick' immediately, we'll give you hints", "we just want to see how you think and how you code", "just talk through the problem", "you should not learn specific problems, and if you see one you know, just tell your interviewer", or "Cracking the Coding Interview-type questions are banned".

But the reality is that the people who apply to Google (or Apple or Amazon or Facebook or Microsoft...) are very smart and want to work there very much, so while they could probably do well without preparation, they take no chances, given that they'll be competing with all of this year's new Stanford / MIT / CMU graduates for a limited number of positions. I have a friend with a master's degree from a target school, and it took him 4 attempts to get into Google. He is smart, and he probably did well in the interviews, but you are being compared to others, so until he went and worked on those pretty useless skills (practicing writing fast on a whiteboard, getting interview books and practicing tricky problems, doing a lot of online judge problems), he didn't get in. Why? Because if you have two candidates, both smart, and one has practiced whiteboard coding on all of the problem sets on geeksforgeeks / careercup / glassdoor and the other hasn't, then even if both are presented with a new problem, chances are it will be a variant of one of the "usual suspects". E.g., after I solved the famous water trapping problem (a tough one if you don't get hints), the idea for the largest water container problem just popped to mind, and if you know that you can find the only non-duplicate number in a list in O(1) memory, then checking whether an unsorted list of numbers is an arithmetic series with just O(1) memory is practically the same trick.

So think of two developers, both are awesome, both know CS and both are fast coders.

One practiced whiteboard coding and knows the XOR trick for duplicate numbers, his code for such a question will be written in 1 minute

  public int findDupe(int[] nums) {
      int dup = 0;
      for (int num : nums) {
          dup ^= num;
      }
      return dup;
  }
The other guy, who didn't see these kinds of problems will probably use a hashmap and x2 more lines for the same problem.

Both are O(N) time and O(1) space, but the hashmap guy might accidentally say it's O(N) memory (a common mistake for frequency maps)
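For reference, that frequency-map version might look something like this (a sketch; the class name and error handling are mine, not from any interview):

```java
import java.util.HashMap;
import java.util.Map;

public class FindDupeMap {
    // Frequency-map take on the same problem: every number appears twice
    // except one; return the one that appears only once.
    static int findDupe(int[] nums) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int num : nums) {
            counts.merge(num, 1, Integer::sum); // increment, defaulting to 1
        }
        for (Map.Entry<Integer, Integer> e : counts.entrySet()) {
            if (e.getValue() == 1) return e.getKey();
        }
        throw new IllegalArgumentException("no unique element");
    }

    public static void main(String[] args) {
        System.out.println(findDupe(new int[]{2, 7, 2, 5, 5})); // prints 7
    }
}
```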

Bottom line, both are good candidates, and the only reason the first one thought of the XOR solution in 1 minute without a hint is that they either saw it before (it's not that rare) or they're a real genius (statistically less likely, but still possible)

If you don't have enough good candidates, you might have the time and energy to really see which one of these will perform better at work using work related questions other than tricks like this.

But if you have unlimited good candidates coming in, the one who solves it in 2 minutes will be the one who stands out from the crowd; there is simply no other good way to filter so many people. I'm sure they have tons of false negatives (and probably a few false positives too, but I doubt it's many).

So the system is broken, but SATs and GREs are broken too. Popular schools and popular jobs have to put up filtering systems that are not directly related to the ability to do the job. Someone at Google is simply writing CRUD apps for a living all day, I'm sure. But I'm equally sure his interview tested him on a much harder set of problems.

fibo 1 day ago 0 replies      
Good article. Anyway, I am also disappointed with many interview processes. Sometimes I think there are courses or books about how to hire that spread whatever concept or practice happens to be common in a given period.
andrewstuart 2 days ago 0 replies      
Recruit people who are smart, work hard, solve problems, enjoy learning and get stuff done. Joel knows.
yueq 2 days ago 0 replies      
tl;dr -- current programming interview format should be drastically changed.

I have interviewed 100+ candidates so far. I'm doing interviews less and less recently, simply because I have found those coding/design questions less and less meaningful for judging how good a candidate is.

With websites like leetcode.com/lintcode.com that collect interview questions and provide an online judge, all you need is to put in enough practice time. Ten years ago we used "reverse a linked list"; nowadays we may use questions like "topo-sorting". The latter is significantly harder than the former, but if the candidate has seen the solution beforehand, it's actually easier to implement.
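For anyone who hasn't met it, "topo-sorting" typically means producing a dependency order for a DAG, most simply via Kahn's algorithm; a minimal illustrative sketch (not from the comment):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

class TopoSort {
    // Kahn's algorithm: repeatedly peel off nodes with no remaining
    // incoming edges. Nodes are 0..n-1; edges.get(i) lists the nodes i
    // points to. Returns fewer than n nodes if the graph has a cycle.
    static List<Integer> topoSort(int n, List<List<Integer>> edges) {
        int[] indegree = new int[n];
        for (List<Integer> outs : edges)
            for (int v : outs) indegree[v]++;
        Deque<Integer> ready = new ArrayDeque<>();
        for (int i = 0; i < n; i++)
            if (indegree[i] == 0) ready.add(i);
        List<Integer> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            int u = ready.poll();
            order.add(u);
            for (int v : edges.get(u))
                if (--indegree[v] == 0) ready.add(v);
        }
        return order;  // order.size() < n means a cycle was present
    }
}
```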

jconley 2 days ago 0 replies      
I'm pretty sure I could never get an engineering job at companies that interview like this. I've shipped tons of real-world products over the last 20 years, founded companies, and just reading about what it is like to interview scares the shit out of me. I'm a firm believer in proving real-world skills, not silly, irrelevant, high-pressure, short-time-frame stuff.

Safe to say, we don't interview with whiteboard hazing. We do a phone screen mainly for personality and to see if the candidate deeply knows about a project they recently worked on. We dig in a bit there and look for the spark of passion. Then, if we are moving forward, we give a take-home project in a private git repo that is relevant to the role and tell them to spend 4-8 hours on it depending on their availability and ask them to time it and be honest. They choose the scope of work they want to accomplish. If the code looks reasonable we have a video conference where we do a code review and dive deep and ask "why" a lot.

We have been really surprised at the difference in quality of these take-home projects. Some people can barely get started and struggle to produce anything. Others build full-on, useful applications.

Obviously we are screening for people that are self-starters, so being able to choose scope and regulate and make larger decisions is important to us.

An example project for a full stack c# developer:

Feel free to search around and work on the challenges as if you were on the job. Let me know by when you think you can have this stuff done. Please do this on your own time, with your own equipment and tools, and not your employer's. We don't want any lawsuits or IP ownership questions. Also, for the code, you can retain copyright if it's something you'd like to publish on GitHub, etc.

Clever Code

Can you send me a code snippet in the language of your choice of something you have done that solves a hard problem in an elegant way?

For instance, here's one of my snippets from a few years back. It solves the complex problem of transactional optimistic concurrency control using ~30 lines of code. The usage of lambda expressions in C# and generics makes it succinct and expressive.
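The snippet itself isn't reproduced above. Purely as an illustration of the pattern described (in Java rather than C#, with hypothetical names; this is my sketch, not the author's code), a retry-on-version-conflict helper built from generics and lambdas might look like:

```java
import java.util.concurrent.Callable;

class OptimisticRetry {
    // Thrown by an action when its version/timestamp check fails at commit time.
    static class ConflictException extends Exception {}

    // Run an action that reads state, computes, and conditionally writes;
    // on a version conflict, re-run it (re-reading fresh state) up to
    // maxAttempts times before giving up.
    static <T> T withRetry(int maxAttempts, Callable<T> action) throws Exception {
        ConflictException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (ConflictException e) {
                last = e;  // someone else committed first; loop and retry
            }
        }
        if (last != null) throw last;
        throw new IllegalArgumentException("maxAttempts must be positive");
    }
}
```

A caller would wrap its read-modify-write in a lambda, e.g. `withRetry(3, () -> account.commitTransfer(...))`, and the helper hides the retry loop.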


Production Backend Question

Let's say I have a very popular consumer-facing service with 10 load-balanced front-end web servers running the latest ASP.NET MVC and averaging 100 simultaneous requests each, appropriately sized MS-SQL servers, and a heavily used memcached cluster of 4. Everything is running great until we do some capacity planning, double the number of web servers to 20, and experience a 25% increase in simultaneous requests. Suddenly load shoots up on the MS-SQL tier, with far more transactions per web request than typical, overwhelming an MS-SQL cluster that should have handled twice as much web traffic. Web server requests start timing out and throwing 500 errors to the clients, and the system practically comes to a halt. There were no code changes. Describe how you would troubleshoot this and what you think might be a few likely causes.

Failure

Tell me about a project you have worked on that failed. What was your role? Why did it fail?

The Challenge

This is meant to be a practical exercise, and is representative of the types of challenges we face. We want to see if you can make reasonable product choices under time pressure as well as write code. You get a lot of ownership here. Please use any frameworks/services/etc you want. We suggest using something you know very well. Feel free to search Google, go to the library, call a friend, or whatever you need for research. Please write all the code/copy/etc yourself. The output can run on whatever OS or PaaS/SaaS provider(s) you want, but it should be something we can also easily run. Please try to limit this to 4-8 hours of your time.

If you want to keep the repo private, please share it with me: https://github.com/jdconley

Create a web site with the following characteristics:

- Make the hackathon/minimal/proof-of-concept version of some popular consumer web app that you think could use some love. Craigslist, eBay, reddit, whatever...
- The site must be responsive, gracefully scaling down to smartphone resolutions and up to full-screen desktop displays. It does not need to be beautiful.
- Support all the way down to IE8, with appropriate feature degradation as needed.
- Lean toward implementing things in-browser rather than in back-end code.
- Bonus points if you do a Show HN and your implementation makes it to page 1 of Hacker News.

swillis16 2 days ago 1 reply      
I look forward to seeing more articles that find ways to game the interviewing process. Perhaps showing interviewers how easily their hiring process could be gamed will 'inspire' improvements in interviewing.
erbo 2 days ago 0 replies      
"Line up offers" only works if you already have a job, I suppose. If you're unemployed, you generally have to take the first offer you get, or you lose your unemployment benefits.
geebee 2 days ago 0 replies      
"You need to be able to write a BFS cold, and you need to understand how a hash table is implemented."

Great advice! The list provided in this blog post is an excellent description of what you should know, cold, before you go into an interview. The reason you need to know them "cold" is that you (probably) won't be simply asked to code up mergesort. Instead, you'll be presented with a problem that can be reduced to mergesort. You need to know it cold so that you can reason more abstractly with it.
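For concreteness, "write a BFS cold" means being able to produce something like the following on a whiteboard (an illustrative Java sketch, not from the post):

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

class Bfs {
    // Textbook BFS over an adjacency list; returns the number of edges on
    // a shortest path from start to goal, or -1 if goal is unreachable.
    static int shortestPath(Map<Integer, List<Integer>> graph, int start, int goal) {
        Queue<int[]> queue = new ArrayDeque<>();   // entries are {node, distance}
        Set<Integer> seen = new HashSet<>();
        queue.add(new int[]{start, 0});
        seen.add(start);
        while (!queue.isEmpty()) {
            int[] cur = queue.poll();
            if (cur[0] == goal) return cur[1];
            for (int next : graph.getOrDefault(cur[0], List.of())) {
                if (seen.add(next)) queue.add(new int[]{next, cur[1] + 1});
            }
        }
        return -1;
    }
}
```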

While this is great advice, it also demonstrates why people eventually develop interview fatigue over a career. I'm not talking about fatigue from your third interview this week; I mean fatigue that sets in over decades.

See, a year ago, just before I interviewed at Google, I could have done all this "cold". I could code up a BFS, mergesort, find the shortest path between two nodes, print all permutations of a set, and so forth. Cold. And you know, I think in many knowledge-intensive fields, most practitioners are required to do stuff like this cold. But I probably wouldn't be able to do it all cold now. I could figure it out, but not in 45 minutes at a whiteboard, and certainly not in time to reason abstractly. I wouldn't be able to do this with partial differential equations or Shakespeare's plays, two other subjects I was exam-ready in a couple of decades ago.

See, people in other professions have to do this, but they do it once. Actuaries need to know vector calc and linear algebra, cold, to pass their exams. But they don't have to remember how to integrate by parts when they are interviewing for a Sr Actuary position 15 years later. Physicians need to know organic chemistry, cold, at some point in their lives. But an experienced anesthesiologist isn't expected to answer whiteboard questions about undergraduate o-chem.

I don't have an easy solution, since I actually do completely understand why tech employers rely on these exams. But I do think they take a serious toll on the field and are a major contributing factor to attrition (as well as to aversion among people who never go into the field at all). We, as developers, really do have to reload complicated undergraduate coursework into exam-ready memory over, and over, and over.

I'll finish the way I always do: if you interview like this, that is your choice, and you should feel free to do so; I really mean this. But why then do these employers act mystified that there is a "shortage" of developers? It seems to me that aversion and/or attrition is a very natural outcome of the way we do things in software. "No thanks, I'll do something else" seems like a very reasonable response to an industry that hires like this.

progproprog 1 day ago 0 replies      
If you're given a take-home project and you take a lot of time on it, it's usually a signal that we shouldn't hire you. We're not using you for free work, and if you think your take-home project is assigned in that vein, then you're probably not qualified for the job. It speaks to great personal work ethic when you tell us you worked super hard and spent a week on it, but we give you an expected time precisely so you can filter yourself out. Also, shame on you, because it makes us feel shitty to have to reject you after that.
soham 2 days ago 1 reply      
At the risk of repeating what I've said elsewhere on the page:

This method of interviewing has been around forever and is going to be around for the foreseeable future. Nobody loves it, including the interviewers, but there just isn't a better way to do it at any sort of scale, especially when there are much bigger problems to solve when you're running a business. And there are a lot of other reasons.

It's best to take the bull by the horns. I run http://InterviewKickstart.com, which is a bootcamp for preparing for such technical interviews. We do almost exactly what is in the blog post. It works. Spectacularly well.

sbierwagen 2 days ago 1 reply      

 This is a different skill [1].
The [1] is inside an <a> tag, but the tag doesn't contain an href attribute, so it doesn't link to anything.

hooliganpete 1 day ago 0 replies      
How about technical interviews for non-technical roles. Any advice?
chavesn 2 days ago 0 replies      
> To do well in an interview, then, you need to be able to solve small problems quickly, under duress, while explaining your thoughts clearly.

I certainly hope they aren't interviewing anybody under duress.

qaq 2 days ago 0 replies      
Align your interview requirements with your actual desirability as an employer and with the realities of the job at hand.
zyngaro 2 days ago 0 replies      
Recruitment in this industry is FUBAR.
deedubaya 2 days ago 1 reply      
> I fundamentally do not believe that good programmers should have to learn special interviewing skills to do well on interviews

> 2. Study common interview concepts


patmcguire 2 days ago 0 replies      
Being evaluated is a skill.
boubiyeah 2 days ago 0 replies      
The article starts off fine but then just explains how to be good at bullshit interviews?
vegabook 1 day ago 0 replies      
Ummm, this should be titled "How to pass an interview", period. Most of this advice applies well outside of programming and reads like any of 1,000 self-help books from the past many decades. Sure, there are a few specifics to coding, but all the themes are as old as the hills.
known 1 day ago 0 replies      
interview != quiz
gedy 2 days ago 3 replies      
"Whiteboard hazing" is the most apt description I've heard it called. Pass the wringer, you can join the club.
jlarocco 2 days ago 2 replies      
The point of most job interviews is to weed out people who need to read articles like this one.
pklausler 2 days ago 3 replies      
I hated this article. A good technical interview reveals an aptitude for programming or a lack of same, and can distinguish a true aptitude from an ability to fake it. I've been interviewing programmers for a very long time and I'm pretty good at avoiding "false positive" results with a few straightforward questions.

If you have aptitude and talent, brush up on your algorithms and try to have fun with the interview. If you don't, then I'm sorry, but maybe you'd be happier doing something else.

One of the FBI's Major Claims in the iPhone Case Is Fraudulent aclu.org
756 points by danielsiders  2 days ago   251 comments top 32
lisper 2 days ago 10 replies      
This is really annoying. I wrote a blog post last week making this exact same point, posted it here, and it promptly got flagged to death, most likely by the same people who were commenting that I was "absolutely, totally wrong".


Nice to be vindicated though.

Dowwie 2 days ago 1 reply      
TL;DR: "they're asking the public to grant them significant new powers that could put all of our communications infrastructure at risk, and to trust them to not misuse these powers. But they're deliberately misleading the public (and the judiciary) to try to gain these powers. This is not how a trustworthy agency operates. We should not be fooled."
toyg 2 days ago 3 replies      
I have to say, whoever at the FBI decided this was the right case to push their new doctrine could have done his/her homework a bit better. Technically speaking, this is the last iPhone you can actually crack without assistance from Apple, so they are making it harder for themselves. They only have to wait for another major incident, retrieve (or plant, why not) an iPhone 6 from the scene, and do it again, this time for real.

Unless they are trying to pre-empt something else (like the recently-touted shift to "devices even we can't access" from Tim Cook, which may or may not be simple advertising), they just picked the wrong time to stir this particular pot.

kabdib 2 days ago 2 replies      
Heck, the FBI could also disable writes to the chip, or simply interpose some logic that pretends to write, but actually doesn't (a non-write-through cache :-) ).

That is, if the secrets in question are on that NAND chip.

geographomics 2 days ago 3 replies      
Interesting technique, but it doesn't remove the long interval between permitted passcode attempts - an equally important problem for brute-forcing.

So the FBI would most likely still require Apple's assistance in this.

croddin 2 days ago 2 replies      
Apple said they could sync the data if the Apple ID password hadn't been changed. Can Apple just revert the Apple ID account on their servers to a backup with the old password hash (or however it is stored)? Why wouldn't this work? Has something on the phone changed because of the password change, or is Apple unwilling or unable to revert the Apple ID account?
codeonfire 2 days ago 7 replies      
The device is evidence, so all of you saying they can just start desoldering things and such need to think about that. What is the first thing a defense attorney would say if the data were to be used in a criminal trial? That's right, "the FBI replaced the memory chip on the phone with one they wrote their own copy of the data to." That is only after they potentially permanently damage the device and data.
Spooky23 2 days ago 2 replies      
I think this makes the FBI look dumb, but I don't think this really helps them either.

If the NSA did this for espionage it's one thing, but I'm curious whether substantially modifying the iPhone in this way would stand up in court. How would the police assert that they preserved evidence after doing this?

I was involved in a drawn-out case that challenged the validity of data recovered from backup. That was easy to assert with normal IT people, and yet it took weeks to litigate. I couldn't imagine how this would go.

tylercubell 2 days ago 0 replies      
It seems like there are several articles and security experts out there explaining how to recover data from a locked iPhone as if it were a cakewalk, but where is one example of a complete soup-to-nuts case study of unlocking the same model phone as the San Bernardino shooter's?

If you want the American public to believe the FBI is making fraudulent claims, show demonstrable proof that it can actually be done instead of all the talk and theories.

baldajan 2 days ago 0 replies      
This reminds me of Issa, the Republican congressman from Cali, telling the FBI in very technical terms (inserting in between that he could be completely wrong) the exact same thing mentioned in this article. I'm unsure whether the author was inspired by Congressman Issa or came to it of his own accord.

Moreover, what's more fascinating is that some people may say it's privacy vs. security and the fight against terror. But what has emerged over the last few weeks is multiple reasons why the FBI should not win in court, regardless of your perspective on terror. It's been very clear from day 1 that the intentions of the FBI are vicious and non-genuine, and with every passing day more people are finding out.

iLoch 2 days ago 16 replies      
I wonder if the FBI has checked for any ways to circumvent the passcode screen using software bugs.

Edit: Not sure why I got downvoted. I can currently circumvent my keyboard passcode with a number of steps, and I'm on iOS 9. Steps to try for yourself:

Edit: Ok I've been tricked. The steps below are unnecessary as the first step actually unlocks your iPhone in the background. ¯\_(ツ)_/¯ The fact remains though that these bugs have existed in the past and may exist on the device the FBI wants to unlock.

1. Invoke Siri, "what time is it?"

2. Press the time/clock that is shown

3. Tap the + icon.

4. Type some arbitrarily long string into the search box. Highlight that text and copy it.

5. Tap on the search box. There should be a share option if your device is capable. Tap the share option.

6. Share to messages.

7. Press the home button.

Congrats, you're more effective than the FBI.

loumf 2 days ago 1 reply      
I wouldn't be so sure the FBI knows this. Apple certainly does -- if they told the FBI, why didn't they also put that in their letter?
ChuckMcM 2 days ago 0 replies      
Seems like a pretty articulate explanation of what is going on here. Of course, I realize that my confirmation bias will cause me to see articles more in line with my way of thinking as 'right', but I've also worked with NAND flash devices and believe that the chip [1] they use in the phone does not have any sort of protection on the NAND flash itself; you should be able to just drop it into a test fixture and read it out.

[1] http://toshiba.semicon-storage.com/info/docget.jsp?did=15002...

albinofrenchy 2 days ago 6 replies      
Anyone else a little surprised that Apple's security feature here is so easy to sidestep? I'd have thought, at the least, that any such keys were stored in the main processor without external read/write capabilities.
drivingmenuts 2 days ago 1 reply      
From the sound of various blogs, articles, etc., it sounds like the FBI doesn't have anyone with technical expertise in this area (or if they do, those people are being kept buried). While the court case is important to the FBI (and very wrong for the public), the technical details of breaking into an iPhone should not have been an issue for them.

I'm starting to think no one is driving the clown car in their technical division.

kevin_thibedeau 2 days ago 0 replies      
> If it turns out that the auto-erase feature is on, and the Effaceable Storage gets erased, they can remove the chip, copy the original information back in, and replace it.

Sounds like a better hack would be to interpose the flash memory interface with a RAM cache that simulates writes without modifying the original flash data. Then they could hammer away at brute-forcing it without the delay of re-burning the flash.
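As a toy model of that idea (a software sketch only, nothing like real flash tooling; all names here are made up), the interposer amounts to a copy-on-write overlay: writes land in RAM, reads prefer the overlay, and a reset discards the shadow without ever touching the original image:

```java
import java.util.HashMap;
import java.util.Map;

class ShadowedFlash {
    private final byte[] flash;                 // pristine NAND image, never written
    private final Map<Integer, Byte> shadow = new HashMap<>();  // RAM overlay

    ShadowedFlash(byte[] image) { this.flash = image; }

    byte read(int addr) {
        Byte b = shadow.get(addr);
        return b != null ? b : flash[addr];     // overlay wins, else original data
    }

    void write(int addr, byte value) {
        shadow.put(addr, value);                // "pretend" write: RAM only
    }

    void reset() {
        shadow.clear();                         // back to the original image instantly
    }
}
```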

Aoyagi 1 day ago 0 replies      
Sorry about the slight OT, but what truth is there in this statement I was presented with?

>Even if an iPhone is locked, all of that encrypted data can technically be read easily so long as the phone had at least been unlocked once since the time it was booted up.

Obviously I think it's nonsense, but I have no way of disproving it (even though the burden of proof is on the claimant, naturally).

Edit: OK I found this http://www.darthnull.org/2014/10/06/ios-encryption so never mind, I guess...

revelation 2 days ago 1 reply      
The ACLU is not wrong; they are right in the technical sense.

But I very much doubt you would practically manage to remove that NAND chip and replace it very often on that umpteen-layer, ultra-thin board. Instead, remove it once and stick it in a test fixture, then try brute-forcing it.

ldom66 1 day ago 1 reply      
Never attribute to malice that which can be attributed to stupidity. Some engineer probably told upper management they couldn't decrypt the phone because the software would erase all the data. Maybe because they didn't know, or didn't want to, but still, this has been blown out of proportion.

To be clear I don't think apple should compromise the phone, just that this is not a long con by the FBI to compromise all phones.

payne92 2 days ago 0 replies      
This attack was already widely discussed here, last week: https://news.ycombinator.com/item?id=11199093
zaroth 1 day ago 0 replies      
Relevant grant from the Department of Homeland Security from 2011: https://www.sbir.gov/sbirsearch/detail/361729

I'm surprised someone at Uni hasn't made demonstrating this exact attack a class project.

emcq 2 days ago 3 replies      
Maybe there exist experts who can get this right every time, but there are significant risks of damaging a chip when desoldering and resoldering. It's not just removing a through-hole capacitor.
SocksCanClose 2 days ago 1 reply      
The most frustrating part of this whole thing is the multi-headed response by various agency chieftains. The FBI says one thing. The NSA says another. Former generals say another.

Am I crazy to want the president to step up and say, "Our position as a government is: X"? There's no way this has escaped his notice. Isn't that part of the job description of "leader of the free world"?

bertil 2 days ago 0 replies      
What strikes me as odd in all these analyses is that they all assume the FBI is not expecting that weakened security will mean far more difficult-to-address crime, i.e., far more on their plate.
darksim905 1 day ago 0 replies      
I don't know why this case is getting so much attention when it's readily apparent the FBI could just get everything off the phone with a Cellebrite and call it a day.
differentView 2 days ago 0 replies      
> Why the FBI can easily work around auto-erase

If it's so easy, then the ACLU should have no problem demonstrating it with an actual iPhone 5c.

pbkhrv 2 days ago 5 replies      
How practical is it to remove-restore-replace the NAND chip every 10 tries if you have to search through millions of combinations?
sabujp 2 days ago 0 replies      
so John McAfee was right?
officialchicken 2 days ago 2 replies      
JaRail 2 days ago 3 replies      
This article seems wrong to me. I don't know a ton about the iPhone's specific implementation. That said, I was under the impression that these systems all work similarly to a PC's TPM. Essentially, the encryption key is stored in a chip that acts as a black box. That chip is manufactured in such a way as to make it extremely difficult to extract data from. You can't simply copy it. You'd have to take it apart, inspect it with a microscope, and hope you don't destroy the data in the process.

The OS should set the security level initially. The TPM would enforce it. You can't modify the OS to make an attempt without it counting against the initially configured limit.


sathackr 2 days ago 3 replies      
With 14 million combinations just in a 4-character alphanumeric (upper/lower/numbers) password, I would think they would start to encounter flash reliability issues rewriting this "Effaceable Storage" long before the password could be broken.
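The arithmetic behind the 14 million: 62 symbols (26 upper + 26 lower + 10 digits) across 4 positions gives 62^4 = 14,776,336 candidate passcodes. A trivial illustrative check:

```java
class PasscodeSpace {
    // Count of length-char passcodes over an alphabet of the given size:
    // alphabetSize multiplied by itself length times.
    static long combinations(int alphabetSize, int length) {
        long total = 1;
        for (int i = 0; i < length; i++) total *= alphabetSize;
        return total;
    }
}
```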

This would also slow down their attack considerably.

I disagree that the claim is fraudulent.

timr 2 days ago 2 replies      
"The FBI can simply remove this chip from the circuit board (desolder it), connect it to a device capable of reading and writing NAND flash, and copy all of its data. It can then replace the chip, and start testing passcodes. If it turns out that the auto-erase feature is on, and the Effaceable Storage gets erased, they can remove the chip, copy the original information back in, and replace it. If they plan to do this many times, they can attach a test socket to the circuit board that makes it easy and fast to do this kind of chip swapping."

Right. They could do this, and risk destroying the device, or they could ask Apple to do the easy, reliable thing, and just install a build on this phone that allows brute-force attacks.

Given that Apple has a long history of complying with these kinds of requests for valid search warrants, and that this situation is about as clear as it gets when it comes to justifiable uses of government investigatory powers, it's obvious why they're taking the latter approach, and not the former.

There's a legitimate privacy debate in this case, but this isn't it.

Edit: I'm just stating facts here, folks. Downvoting me won't change those facts, or make the government change its tactic.

CocoaPods downloads max out five GitHub server CPUs github.com
822 points by jergason  2 days ago   306 comments top 43
onli 2 days ago 5 replies      
Note how perfect that response from mhagger is. A clear, honest-sounding assurance of what GitHub wants to deliver. A perfectly comprehensible description of what the problem is and where it comes from. And then suggestions the project can actually act on to fix it, plus mention of changes to git itself that GitHub is trying to make that would help. It not only shows great work going on behind the scenes (and if that is untrue, it at least gives me that impression, which is what counts), but also explains it in a great way.
Gratsby 2 days ago 7 replies      
From CocoaPods.org:

> CocoaPods is a dependency manager for Swift and Objective-C Cocoa projects. It has over ten thousand libraries and can help you scale your projects elegantly.

The developer response:

> [As CocoaPods developers] Scaling and operating this repo is actually quite simple for us as CocoaPods developers, who do not want to take on the burden of having to maintain a cloud service around the clock (users in all time zones) or, frankly, at all. Trying to have a few devs do this, possibly in their spare time, is a sure way to burn them out. And then there's also the funding aspect of such a service.


So they want to be the go-to scaling solution, but they don't want to have to spend any time thinking about how to scale anything. It should just happen. Other people have free scalable services, they should just hand over their resources.

Thank goodness Github thought about these kinds of cases from the beginning and instituted automatic rate limiting. Having an entire end user base use git to sync up a 16K+ directory tree is not a good idea in the first place. The developers should have long since been thinking about a more efficient solution.

pjc50 2 days ago 5 replies      
This reply: https://github.com/CocoaPods/CocoaPods/issues/4989#issuecomm...

"Not having to develop a system that somehow syncs required data at all means we get to spend more time on the work that matters more to us, in this case. (i.e. funding of dev hours)"

In other words, using github as a free unlimited CDN lets them be as inefficient as they like. Such as having 16k entries in a directory ( https://github.com/CocoaPods/Specs/tree/master/Specs ) which every user downloads.

Package management and sync seems to suffer really badly from NIH. Dpkg is over 20 years old and yum is over a decade old. What's up with this particular wheel that people keep reinventing it seemingly without improvement?

indygreg2 2 days ago 2 replies      
I help run Mozilla's version control infrastructure and the problems described by the GitHub engineer have been known to me for years. Concerns over scaling Git servers are one of the reasons I am extremely reluctant to see Mozilla support a high volume Git server to support Firefox development.

Fortunately for us, Firefox is canonically hosted in Mercurial. So, I implemented support in Mercurial for transparently cloning from server-advertised pre-generated static files. For hg.mozilla.org, we're serving >1TB/day from a CDN. Our server CPU load has fallen off a cliff, allowing us to scale hg.mozilla.org cheaply. Additionally, consumers around the globe now clone faster and more reliably since they are using a global CDN instead of hitting servers on the USA west coast!

If you have Mercurial 3.7 installed, `hg clone https://hg.mozilla.org/mozilla-central` will automatically clone from a CDN and our servers will incur maybe 5s of CPU time to service that clone. Before, they were taking minutes of CPU time to repackage server data in an optimal format for the client (very similar to the repack operation that Git servers perform).

More technical details and instructions on deploying this are documented in Mercurial itself: https://selenic.com/repo/hg/file/9974b8236cac/hgext/clonebun.... You can see a list of Mozilla's advertised bundles at https://hg.cdn.mozilla.net/ and what a manifest looks like on the server at https://hg.mozilla.org/mozilla-central?cmd=clonebundles.

A number of months ago I saw talk on the Git mailing list about implementing a similar feature (which would likely save GitHub in this scenario). But I don't believe it has manifested into patches. Hopefully GitHub (or any large Git hosting provider) realizes the benefits of this feature and implements it.

jdcarter 2 days ago 2 replies      
Wow, really impressive response from GitHub. The right amount of technical detail coupled with balanced tone--halfway between "we support you" and "you make us crazy."

One correction to the post title: it's not maxing five nodes, but five CPUs.

web007 2 days ago 2 replies      
I keep coming back to point #4: who ever thought that 16k objects in a single directory would be a good idea? Ever since FAT that's been a bad idea, and while modern FSes will handle it without completely melting down, it's still going to cause long access operations on anything that touches it.

Even Finder or `ls` will have trouble with that, and anything with a shell glob (*) is almost certainly going to fail. Is the use case for this something that refers to each library directly, such that nobody ever lists or searches all 16k entries?

mikeash 2 days ago 4 replies      
The criticism against CocoaPods here seems awfully harsh.

Think about it from their perspective. GitHub advertises a free service and encourages using it. Partly it's free because it's a loss leader for their paid offerings, and partly it's free because free usage is effectively advertising for GitHub. CocoaPods builds their project on this free service, and everything is fine for years.

Then one day things start failing mysteriously. It looks like GitHub is down, except GitHub isn't reporting any problems, and other repositories aren't affected.

After lots of headscratching, GitHub gets in touch and says: you're using a ton of resources, we're rate limiting you, you're using git wrong, and you shouldn't even be using git.

That's going to be a bit of a shock! Everything seemed fine, then suddenly it turns out you've been a major problem for a while, but nobody bothered to tell you. And now you're in hair-on-fire mode because it's reached the point where the rate-limiting is making things fail, and nobody told you about any of these problems before they reached a crisis point.

It strikes me as extremely unreasonable to expect a group to avoid abusing a free service when nobody tells them that it's abuse, and as far as they know they're using it in a way that's accepted and encouraged. If somebody is doing something you don't like and you want them to stop, you have to tell them, or nothing will happen!

I'm not blaming GitHub here either. I'm sure they didn't make this a surprise on purpose, and they have a ton of other stuff going on. This looks like one of those things where nobody's really to blame, it's just an unfortunate thing that happened.

(And just to be clear, I don't have much of a dog in this fight on either side. My only real exposure to CocoaPods is having people occasionally bug me to tag my open source repositories to make them easier to incorporate into CocoaPods. I use GitHub for various things like I imagine most of us do, but am not particularly attached to them.)

wpeterson 2 days ago 2 replies      
It's totally reasonable to host your code on github and to build a package manager that loads the content of a package from its github repo.

What seems insane is to use a single github repo as the universal directory of packages and their versions driving your package manager.

There's a reason rubygems has their own servers and web services to support this use case for the central library registry, even if the sources for gems are all individual projects hosted on github.

riscy 2 days ago 1 reply      
> Scaling and operating this repo is actually quite simple for us as CocoaPods developers whom do not want to take on the burden of having to maintain a cloud service around the clock (users in all time zones) or, frankly, at all.

The CocoaPods developers seem to be missing the entire point of git: it's a _distributed_ revision control system.

Set up a post-receive hook on Github to notify another server, set up with a basic installation of git, to pull from Github and mirror the master repo. Then, have your client program randomly choose one of these servers to pull from at the start of an operation. A simple load balancer to solve this problem.
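The mirroring half of that scheme is just stock git. A self-contained sketch, using throwaway local repos in place of GitHub and the mirror host (all paths hypothetical):

```shell
set -e
tmp=$(mktemp -d)

# Throwaway "upstream" repo, standing in for the copy hosted on GitHub.
git init -q "$tmp/origin"
git -C "$tmp/origin" -c user.email=a@b -c user.name=a \
  commit -q --allow-empty -m "first"

# Mirror host: a bare --mirror clone of the upstream.
git clone -q --mirror "$tmp/origin" "$tmp/mirror.git"

# A new push lands upstream...
git -C "$tmp/origin" -c user.email=a@b -c user.name=a \
  commit -q --allow-empty -m "second"

# ...and the post-receive notification only has to trigger a fetch
# on the mirror to bring it back in sync.
git -C "$tmp/mirror.git" fetch -q --prune origin

# The mirror now serves the same tip as the upstream.
git -C "$tmp/mirror.git" rev-parse HEAD
```

In production the `fetch` would run in a small webhook handler on each mirror, and clients would pick a mirror at random.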

spoiler 2 days ago 5 replies      
I find it amusing how GitHub's contact[1] form has (probably a recent addition):

> GitHub Support is unable to help with issues specific to CocoaPods/CocoaPods.


[1]: https://github.com/contact

rmoriz 2 days ago 2 replies      
CocoaPods (and Homebrew) mainly exist because of a lack of tooling in the typical Apple ecosystem. So I would blame Apple for not supporting the community with money or tooling. Letting GitHub with its limited amount of funding pay the bill isn't a nice move. Apple dev relations should throw some money at GitHub so they can provide some dedicated resources or offer to pay the cost of other solutions (like a 3rd party CDN/AWS/Google Cloud/...).
zymhan 2 days ago 5 replies      
I've always found Github's business model interesting. What if a massive open-source organization (e.g. Fedora, Apache) decided to use it for all of their development, integrating it with continuous builds and all the associated pulls. Of course this isn't likely to happen for a number of reasons, but there are large open source projects that could put a significant load on their infrastructure if they chose to use Github as their main code versioning system.
fpgaminer 2 days ago 1 reply      
GitTorrent: http://blog.printf.net/articles/2015/05/29/announcing-gittor...

Imagine a world where GitTorrent is fully developed, includes support for issue tracking, and has a nice GUI client that makes the experience on-par with browsing github.com.

I mention this not as an "Everybody bail out of GitHub and run to GitTorrent!!!" sort of statement, because I believe GitHub's response here was excellent and confidence inspiring. But it's an unnatural relationship for community supported, open source projects to host themselves on commercial platforms such as GitHub. GitHub primarily hosts them to promote its business. That's not necessarily a bad thing, but it results in impedance mismatches like the one demonstrated here.

That isn't to say that a mature GitTorrent would replace GitHub. Rather, I envision GitHub becoming a supernode in the network, an identity provider, and a paid seed offering, all alongside their existing private repo business.

Honestly, once I scrape a few projects off my plate, I'm inclined to dive into GitTorrent, see where it's at in development, and see if I can start contributing code. It just seems like such a cool and useful idea.

paradite 2 days ago 1 reply      
I have been seeing this trend of GitHub getting "abused" for purposes other than hosting source code.

- My school uses GitHub to host and track our software engineering project (which still can be argued as OSS).

- People using GitHub issue system as a forum.

- Friends uploading pdfs to GitHub.

- Recently people posted on HN about using GitHub to generate a status page.

I think this is a really bad trend and people should stop doing that.

iBotPeaches 2 days ago 0 replies      
This bug report is a great step in the right direction for GitHub. As of this comment there are 3 different GitHub staff members responding and providing feedback to the CocoaPods team. From the previous "Dear GitHub" messages and responses, this seems like perfect community involvement.
pavlov 2 days ago 1 reply      
I've never really understood CocoaPods. Dragging a framework into Xcode was never much trouble, and the amount of 3rd party libraries in an OS X / iOS project ought to be fairly small, so the gains are trivial.

The potential downsides seem much more annoying. Do you really want to have your dependencies on an overloaded central server somewhere?

jrochkind1 2 days ago 0 replies      
What an unusually reasonable discussion. good on everyone.
sdegutis 2 days ago 3 replies      
I love how this was like the perfect storm of things that could go wrong, and how it seems like mhagger is just amazed more than anything else.
ak217 2 days ago 2 replies      
I love GitHub's response, but I would urge the project more strongly to use modern CDN solutions. CDNs are dirt cheap and incredibly powerful nowadays, for the data sizes that we're talking about here.
tjdetwiler 2 days ago 1 reply      
Rust's cargo does something similar, however it looks like they were much more conscious of git-scalability (ex: limiting the directories in a single level, only appending lines to files to make diffs small).
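For comparison, the crates.io index shards its files by name prefix so no single directory holds more than a sliver of the packages. Roughly (scheme paraphrased from Cargo's index documentation; treat the details as illustrative):

```shell
# Map a package name to a nested index path, crates.io-style:
# 1- and 2-char names get their own buckets, 3-char names shard on the
# first character, everything else on the first two 2-char prefixes.
index_path() {
  local name=$1
  case ${#name} in
    1) echo "1/$name" ;;
    2) echo "2/$name" ;;
    3) echo "3/${name:0:1}/$name" ;;
    *) echo "${name:0:2}/${name:2:2}/$name" ;;
  esac
}

index_path serde   # se/rd/serde
index_path a       # 1/a
```

With 16k packages this keeps each directory to a few dozen entries instead of one 16k-entry monster, which is exactly the problem the Specs repo hit.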


maaku 2 days ago 0 replies      
Using Github as your CDN is a dick move. Kudos to GH for not banning the project out-right, but CocoaPods should seriously reconsider what they are doing.
iamleppert 2 days ago 0 replies      
Amazing to me that people create inefficient systems like this and then complain when they are rate limited.
xemdetia 2 days ago 0 replies      
As a current maintenance developer/systems guy I can definitely feel the tempered annoyance from mhagger here. It's definitely nice to be reminded that it's not only the set of recurring issues in front of you that people have to deal with.
noahlt 2 days ago 2 replies      
Go's package manager, `go get`, also downloads from GitHub. I don't know the details of how `go get` and CocoaPods work, but I would be interested in learning why one is unscalable and the other seems to work.
SuperKlaus 2 days ago 0 replies      
In fact, they are maxing out five CPUs - not five nodes, big difference.
fokinsean 2 days ago 1 reply      
I found the solution humorous. Ironically, shallow clones are causing the problems, so fetch the max :)

$ git fetch --depth=2147483647
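
That magic number is 2^31 - 1, git's maximum fetch depth, and passing it converts a shallow clone back into a full one (newer versions of git also offer `git fetch --unshallow` for this). A throwaway local reproduction:

```shell
set -e
tmp=$(mktemp -d)

# Build a throwaway upstream with three commits.
git init -q "$tmp/origin"
for i in 1 2 3; do
  git -C "$tmp/origin" -c user.email=a@b -c user.name=a \
    commit -q --allow-empty -m "commit $i"
done

# A shallow clone sees only the tip commit.
git clone -q --depth 1 "file://$tmp/origin" "$tmp/clone"
git -C "$tmp/clone" rev-list --count HEAD   # 1

# Fetching with the maximum depth deepens it to the full history.
git -C "$tmp/clone" fetch -q --depth=2147483647
git -C "$tmp/clone" rev-list --count HEAD   # 3
```
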

rcthompson 2 days ago 1 reply      
Reading the issue, it seems that one of the problems is a single directory with lots and lots of files in it, which is something of a pathological case for Git. Now, this could be "fixed" by splitting the files in that directory into subdirectories, but the one giant directory will still exist in all the past commits. So would this actually fix anything, or just keep it from getting worse?
kodablah 2 days ago 0 replies      
Has any consideration been given to Bintray[1] as an alternative store for this stuff?

1 - https://bintray.com/

kmm 2 days ago 0 replies      
Funny thing is that the repo is only 7 MB gzipped (or 4 with lzma). Not that surprising, since it's just metadata of course. They say they have about 1 million fetches/clones per week, so that would make about 16 TB per month. I'm not sure how much bandwidth costs, but wouldn't some sympathetic CDN host that for free, since they're OSS?
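
The estimate checks out as back-of-the-envelope arithmetic, using the smaller lzma figure and roughly four weeks to a month:

```shell
# repo size x fetches/week x weeks/month, in decimal units
size_mb=4
fetches_per_week=1000000
weeks_per_month=4

tb_per_month=$(( size_mb * fetches_per_week * weeks_per_month / 1000000 ))
echo "${tb_per_month} TB/month"   # prints "16 TB/month"
```

With the 7 MB gzip figure instead it would be closer to 28 TB/month, so the true number depends on which encoding clients actually negotiate.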
soheil 2 days ago 0 replies      
I was up until 2am last night trying to publish my Pod [1] and Github kept timing out.

I had no idea it was just CocoaPods repo because my other repos were working fine. I accepted defeat, went to bed and everything was working great in the morning.

[1] https://github.com/soheil/SwiftCSS

sly010 2 days ago 1 reply      
I would be interested to know what the other top GitHub repositories are. Afaik the Nix package manager uses a similar model (using a GitHub repo as a database); however, they periodically release snapshots and the default configuration uses those instead of git.
zoul 2 days ago 3 replies      
Another reason I consider https://github.com/Carthage/Carthage a better solution of the dependency management problem.
Negative1 2 days ago 2 replies      
When he says approaches similar to 'other packaging systems', which ones is he referring to? I can see why this is a bad approach but am unfamiliar with what would be considered a better practice (outside of just hosting a .tar on CloudFront).
superuser2 2 days ago 2 replies      
Just last night, all my pod installs were timing out after ~30ish minutes. That explains it.
joeblau 2 days ago 0 replies      
I just installed Cocoapods last night and tried to clone down the repo. It took about 5 minutes and I thought to myself "Is my 150MB/s connection slow?" This definitely clears up what was going on.
debacle 2 days ago 1 reply      
Why aren't the packages distributed? Composer is incredibly distributed and likely doesn't cause nearly the same headaches for GitHub.

Seems like a poor design decision on the CocoaPods side.

voltagex_ 2 days ago 0 replies      
It's difficult to run an open source project on a budget of $0. You're always relying on the goodwill of others.
LoneWolf 1 day ago 0 replies      
While I do not have much knowledge on the subject, why not use rsync?
rdancer 1 day ago 0 replies      
tl;dr: "Using GitHub as your [free-of-charge] CDN is not ideal, for anybody involved."
nimish 2 days ago 0 replies      
hopefully we can now move to using real artifact repositories.
speps 2 days ago 1 reply      
Why is everyone talking about CocoaPods where the title is CoacoaPods anyway? :)
whitehat2k9 2 days ago 1 reply      
Only the Apple development community would think it's OK to have 16,000 subdirectories in one place and abuse GitHub as a free CDN instead of putting some actual effort in and developing their own repository infrastructure - you know, like almost every other package manager in existence.
Const-me 2 days ago 1 reply      
I don't think GitHub acts wisely here.

Short term, sure, they're doing the right thing, implementing a nice way to manage the free rider problem without hurting them too much.

But long term it's different.

Financially, one average programmer = $80k/year, one average cloud server = $4k/year. And GitHub has hundreds of millions of venture capital. More than enough to provision a few more servers, even if they will be installing new servers just for those pods.

The way they act now will lead to someone developing a decentralized git+torrent hybrid. When that happens, sure, those pods will no longer consume GitHub's precious resources. However, for the rest of the GitHub users, there will be no reason to stay on GitHub either.

How We Build Code at Netflix netflix.com
607 points by hepha1979  1 day ago   130 comments top 14
mkobit 1 day ago 13 replies      
I'm interested in knowing more about the "25 Jenkins masters" that they have, and how much they have modified/built for Jenkins to make it work for them.

We are currently in a state of "big ball of plugins and configuration". A bunch of plugins have been installed, and lots of manual configuration has been put into jobs so that everybody has what they need to build their software. It has led to Jenkins being a "do everything" workflow system. The easy path that Jenkins provides, to me, seems like the wrong one - it makes it easy to just stuff everything in there because it "can" do it. This seems to lead to tons of copy/paste, drift, all types of different work being represented, and it is starting to become unmanageable.

Have others seen this happen when using Jenkins? How have you dealt with it?

vlucas 1 day ago 1 reply      
For those wondering how this applies to Node.js use at Netflix, like myself, it's in there towards the bottom of the article:

> "As Netflix grows and evolves, there is an increasing demand for our build and deploy toolset to provide first-class support for non-JVM languages, like JavaScript/Node.js, Python, Ruby and Go. Our current recommendation for non-JVM applications is to use the Nebula ospackage plugin to produce a Debian package for baking, leaving the build and test pieces to the engineers and the platforms preferred tooling. While this solves the needs of teams today, we are expanding our tools to be language agnostic."

Gratsby 1 day ago 2 replies      
> The Netflix culture of freedom and responsibility empowers engineers to craft solutions using whatever tools they feel are best suited to the task.

I absolutely love that. I'm a huge fan of what Hastings and company have done over there in terms of culture and making Netflix a unique and desirable place to work.

I think it's time for another round of "find a way to make Netflix hire me."

gjkood 1 day ago 3 replies      
Major outage being reported worldwide.


Anything interesting deployed in the last hour?

Something in the CI/CD tool chain, Spinnaker, must have failed for this to move all the way to Live without being caught.

moondev 1 day ago 2 replies      
Spinnaker is an amazing tool. Really makes it easy to confidently deploy applications via immutable infrastructure.
mattiemass 1 day ago 2 replies      
Very cool article. Amazing how much tooling Netflix has built themselves.
Scarbutt 1 day ago 5 replies      
What are the reasons for Netflix choosing nodejs for their front-end server and not java like in their back-end?
markbnj 1 day ago 2 replies      
I'm a Netflix fan, as a consumer and an engineer, and this blog post just reinforces my fanboi status. Amidst the descriptions of deployment tools and pipelines one thing stood out for me: the fact that AMI bake times are now a large factor, and that "installing packages" and the "snapshotting process" were a big piece of this. Containers are definitely the answer to this problem. You can deploy base images with the OS and common dependencies, and have the code changes be a thin final layer. Of course with such a sophisticated pipeline based on AMI deployment this change would not be trivial for Netflix, but the bottom line is they have described the primary container use case perfectly, imo.
neduma 1 day ago 2 replies      
How do they 'externalize config' with respect to http://12factor.net/config?
oconnore 1 day ago 1 reply      
x0rg 1 day ago 1 reply      
Isn't Netflix using Mesos (see http://techblog.netflix.com/2015/08/fenzo-oss-scheduler-for-... )? I don't understand what role it plays here.
sayrer 1 day ago 2 replies      
Seems like a nice system, but would be improved by building with Bazel or Buck instead of Gradle.
dang 1 day ago 0 replies      
Please stop posting these.
pyman 6 hours ago 0 replies      
This is how my company used to build software 10 years ago.

Now we have Docker containers, cloud VMs, GitHub, 1 click deployment, advanced metrics, Grafana, Go microservices, Slack bots, etc.

Sounds like Netflix is stuck in the past.

Pentagon admits it has deployed military spy drones over the U.S usatoday.com
467 points by jonbaer  1 day ago   171 comments top 25
cheath 1 day ago 1 reply      
I'm pretty sensitive to these things. That said, if you look at the partial list provided (granted this was potentially cherry picked), we're looking primarily at disaster awareness stuff. Flooding, wild fires, and search & rescue. Not spying.

Military resources, such as The National Guard, get called up for disaster relief all of the time. I think I'd rather have drones helping people in these scenarios than doing what they're primarily used for.

spdustin 1 day ago 10 replies      
It's not just drones. Law Enforcement often flies surveillance flights, circling over locations of interest. These planes often have thermal/visual imaging video cameras and, according to some stories, may also be carrying other tracking or signal interception hardware (think Stingray). HN user jjwiseman [0] scooped most of the press about this.

You can locate such "interesting" flights right now, using your browser. Just open up ADSB Exchange Virtual Radar [1], which doesn't filter out flights with certain squawk codes like other online virtual radar sites do. If you do, select "Menu" from the map (with the gear icon), then "Options". Select the "Filter" tab, select to "Enable filters", select "Interesting" from the dropdown listbox, and select "Add Filter". Now you can zoom out over the country, and see all the "interesting" flights using the table on the left. Note any flights with the "LE" or "FBI" or "DHS" user tag.

Right now, as I write this, an FBI-owned aircraft is circling over the Norwood/Bronx area of NYC [2], tail number N912EX, registered to OBR Leasing (one of the "shells" that the US Gov't uses for registering its law enforcement aircraft), as mentioned in an AP story last summer [3]

[0]: https://news.ycombinator.com/user?id=jjwiseman

[1]: http://www.adsbexchange.com, select "Currently Tracking [number] Aircraft on upper-right"

[2]: http://i.imgur.com/EUYqx98.png

[3]: http://bigstory.ap.org/article/4b3f220e33b64123a3909c60845da...

Edit: Another one, flying around NW Los Angeles, right now: http://i.imgur.com/PwRpqRe.png

Edit2: Any aircraft squawking transponder beacon codes between 4401-4433 are engaged in law enforcement operations. More on the various squawk codes reserved by US Gov't operations can be found here (pdf link): http://www.faa.gov/documentLibrary/media/Order/FINAL_Order_7...

protomyth 1 day ago 3 replies      
Why, yes, they used one to find a cattle rustler in ND http://www.forbes.com/sites/michaelpeck/2014/01/27/predator-...

I still believe this should have been illegal, as they had no warrant and it violates the Posse Comitatus Act.

beau26 1 day ago 1 reply      
It's shameful that

(a) it took a freedom of information request to make this information public.

(b) the Pentagon did its own internal report and found that there was no wrongdoing.

(c) that nobody in the government is going to hold these clowns responsible or create any sort of legitimate process for determining whether these flights were legal or not.

jallmann 1 day ago 8 replies      
> an unnamed mayor asked the Marine Corps to use a drone to find potholes in the mayor's city

Let's play a guessing game! San Diego? We've got Miramar and Camp Pendleton here, along with crumbling infrastructure and terrible potholes. Although the city certainly doesn't need help finding potholes around here...

Using drones for this kind of thing actually makes a lot of sense, although not a $20 million militarized Predator.

mmaunder 1 day ago 1 reply      
It will take a few years, but drone use will trickle down from the military into federal and local enforcement branches.


I'm curious about the flight plan approval and filing process they go through with the FAA.

Also wondering if the flights show up on the 5 minute delayed ASDI API:


And if the drones carry ADS-B transceivers and if they show up on other transceivers during flight. (Which would make them visible/trackable to anyone ground or air based that is listening)


cgriswald 1 day ago 4 replies      
> "Sometimes, new technology changes so rapidly that existing law no longer fit what people think are appropriate," Stanley said.

Sometimes, maybe. I'd argue rarely. I don't see much difference between an unmanned drone and an unmanned satellite or a manned helicopter in terms of applicable law.

Believing that a new law is needed because computers/drones/robots/AI/whatever now exist can lead to bad laws, or laws that are out-of-balance in terms of punishment. (i.e., commit a crime - 5 years. commit the same crime WITH A COMPUTER - 10 years)

> "It's important to remember that the American people do find this to be a very, very sensitive topic."

I think the media finds this to be an eyeball-grabbing topic, but AFAICT, the American people do not care much about it.

jngreenlee 1 day ago 0 replies      
There's a lot of talk about the Posse Comitatus Act[0]. The real distinction is in "intent of the mission". A mission that is:

A)Conducted by United States Army or the United States Air Force, and

B)Conducted to enforce domestic policies within the United States

Would be in violation of the Posse Comitatus Act. However, there is disagreement over whether this language may apply to troops used in an advisory, support, disaster response, or other homeland defense role, as opposed to domestic law enforcement.[1]



bolivier 3 hours ago 1 reply      
All I can think is "duh." Why wouldn't the US deploy military drones over her own soil? Who's going to stop them?

I doubt any level of wrongdoing within the US Government could shock me anymore.

jameslk 1 day ago 1 reply      
It seems hypocritical to assume this is any worse than having these drones flying over other countries, sometimes without their consent.
cbanek 1 day ago 1 reply      
A couple of weeks ago, I was taking in some scenery around Creech AFB, North of Las Vegas. While there, I saw a Predator (although it might have been a MQ-9 Reaper) being launched from the airstrip.

It looked like a giant model aircraft getting launched when the wind hit it. Pretty cool at the time, although after reading this, I hope it was just a training mission over some non-existent Nevada AFB...

EasyTiger_ 1 day ago 3 replies      
Next come drone attacks on US citizens? And who is going to stop them, now that they can do anything they want in the name of terrorism.
tokenadult 1 day ago 0 replies      
From the article:

"The Pentagon has publicly posted at least a partial list of the drone missions that have flown in non-military airspace over the United States and explains the use of the aircraft. The site lists nine missions flown between 2011 and 2016, largely to assist with search and rescue, floods, fires or National Guard exercises.

"A senior policy analyst for the ACLU, Jay Stanley, said it is good news no legal violations were found, yet the technology is so advanced that it's possible laws may require revision."

This sounds a lot less dramatic than the article headline, but it's good that this is being reported and discussed publicly.

cm2187 14 hours ago 0 replies      
What I don't understand is this: the Pentagon isn't the CIA, so it is not prohibited from operating in the US, is it? I mean, you have ICBMs deployed in the US. Why not drones?
nerdcity 1 day ago 0 replies      
>any use of military drones for civil authorities had to be approved by the Secretary of Defense

Gee, what oversight. I'm sure they'll be denying approvals left and right.

deevus 22 hours ago 1 reply      
Edward Snowden talks about this in Citizenfour.


I also learned at NSA, we could watch drone videos from our desktops. As I saw that, that really hardened me to action. - In real time? - In real time. Yeah, you... it'll stream a lower quality of the video to your desktop. Typically you'd be watching surveillance drones as opposed to actually, like, you know, murder drones where they're going out there and bomb somebody. But you'll have a drone that's just following somebody's house for hours and hours. And you won't know who it is, because you don't have the context for that. But it's just a page, where it's lists and lists of drone feeds in all these different countries, under all these different code names, and you can just click on which one you want to see.


He doesn't say explicitly that this includes the U.S., but I made that assumption and here it is: proven.

Zhenya 1 day ago 2 replies      
I wonder who mayor genius is:

 One case in which an unnamed mayor asked the Marine Corps to use a drone to find potholes in the mayor's city.

cagey_vet 19 hours ago 0 replies      
Drones? How about the surveillance planes I see every day over Gaithersburg, via SDR?
TazeTSchnitzel 17 hours ago 0 replies      
usatoday.com would like to use your current location.
m23khan 1 day ago 0 replies      
Pakistan says Hi!
madaxe_again 1 day ago 0 replies      
Legal != right.
l3m0ndr0p 1 day ago 1 reply      
Those people that are responsible for this activity should be brought before a court and tried for treason.
tempodox 17 hours ago 0 replies      
The F.B.I, the N.S.A, the Pentagon, ...

We're being governed by criminals.

pinaceae 1 day ago 1 reply      
Now you know how it feels.

/signed by the rest of the world.

Let's Encrypt has issued its first million certificates eff.org
437 points by thejosh  2 days ago   152 comments top 15
StavrosK 2 days ago 7 replies      
> It is clear that the cost and bureaucracy of obtaining certificates was forcing many websites to continue with the insecure HTTP protocol

I never realized this so clearly, but it's true. The biggest hindrance to security until LE was that certs were expensive and hard to install. I don't think it was so much the former as the latter. I'd gladly pay 10% more for my cert if it meant my server could renew automatically without me touching it at all.

Then again, for small things, like my home computer that I want to access some stuff on but don't want to pay $10 for, self-signed was fine, so I guess the price was a problem, to a degree.

XorNot 2 days ago 5 replies      
The default LE client was kind of a pain to work with. The docker container was better but where it really helped was the Lego golang implementation. That one 'just works' and was super easy to setup behind nginx to run automatically. It also writes a nicer config dir.
kardos 2 days ago 2 replies      
So what does this mean for the incumbent CAs? Are we going to see a lot of consolidation in that area?

At a first glance it looks like LE has neutered the DV cert business. How much of their revenue is up for grabs here... how strong is the incentive to pursue extra-legal means of killing off LE? (Such as by stealing and leaking their signing keys...)

ngrilly 2 days ago 2 replies      
I hope that Let's Encrypt will be able to issue wildcard certificates at some point.
bpicolo 2 days ago 1 reply      
Set up my first let's encrypt just a couple days ago. Was incredibly painless to then go add some subdomain certs.

Here's hoping for wildcards some day : )

pingec 2 days ago 6 replies      
Do their certificates still expire in only 90 days? That makes them very unappealing to me :/

Edit: I understand and agree on why they made it like this. But automating it is not an option in my use case, oh well... I agree it's for the better in the grand scheme of things :).

Sir_Cmpwn 2 days ago 1 reply      
Given that LE does certificate transparency, would it be possible to find out what their millionth certificate was?
igravious 2 days ago 1 reply      
Super! Just set up my first secure website with Nginx.

Absolutely simple. Literally the only way it could have been easier is if letsencrypt had been installed on my Centos 6.7 box but it was only a `git clone' away.


1) Stop the web server.

2) ./letsencrypt-auto certonly --standalone -d _my_domain1_ -d _my_domain2_ ...

At the curses prompt give it your contact email address, and accept the licence

3) Edit nginx.conf - Change all listen 80s to listen 443s. Add the following commands

  ssl_certificate /etc/letsencrypt/live/_my_domain_/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/_my_domain_/privkey.pem;
  # bump up protection
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;
  ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
4) Start the web server

5) Doh! Go through website changing hardcoded http:// links to protocol (scheme) relative links // See here: https://www.paulirish.com/2010/the-protocol-relative-url/

6) Restart the web server


Ok. It doesn't seem that simple now that I say it, but it was easier than wrestling with Apache and rewrite rules :)
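Since the certificates expire after 90 days, the missing step is scheduling a renewal. A hypothetical cron entry along these lines would do it (the client path and reload command depend on your install; newer setups use `certbot renew` instead of `letsencrypt-auto`):

```
# m h dom  mon  dow  command   (renew roughly every two months, then reload)
0   4 1    */2  *    /opt/letsencrypt/letsencrypt-auto renew --quiet && service nginx reload
```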

pkaye 2 days ago 2 replies      
How useful is this for a home server where there is no domain name registered? Can it be configured for my local secure web server?
kriro 2 days ago 0 replies      
Their certificates over time graph looks hockey stick-ish. Very good sign :)
ohitsdom 2 days ago 2 replies      
I host many small sites on a $10/month shared server with a typical LAMP stack host. Unfortunately, SSH access is limited so I keep running into issues getting Let's Encrypt running. Has anyone else run into any issues? Not looking for step-by-step help, just wondering if I'm alone.

I have seen paid software promising to solve this [0], but I'd rather not pay to get a free certificate.

[0] https://letsencrypt-for-cpanel.com/

Sodel 1 day ago 0 replies      
First of all, I love this.

One nagging thing in my mind, though, is how easy it seems it would be for a Three-Letter Agency to backdoor LE to pieces. Then again, I guess that's nearly just as true for any CA out there.

(I don't mean to pooh-pooh this useful service! And, if there's any interloper-mitigation going on that I don't know about, I'd be happily put straight!)

kn9 2 days ago 0 replies      
Certify for Windows IIS with autorenewal: https://certify.webprofusion.com/
siquick 2 days ago 1 reply      
What are the benefits of using LE over Cloudflare's HTTPS?
superkuh 2 days ago 1 reply      
LE is okay but the 90 day limit puts me off. It's so much easier to do a self-signed for 10 years. The problem with self-signed certs isn't intrinsic to them, it's a problem with browsers scaremongering for the lowest common denominator.

For my mail server which only I use, my websites which only technical people visit, etc, there's no reason to deal with the hassle of LE.

Painkillers now kill more Americans than any illegal drug vox.com
403 points by davidbarker  1 day ago   301 comments top 29
krschultz 1 day ago 26 replies      
I've never taken an illegal drug in my life. I've smoked a cigarette about 5 times. I drank in college but lately I've cut that out of my life as well. I had a security clearance with drug testing requirements for a while and now I just don't like the feeling of a hangover from alcohol or the risk of ingesting random plants/chemicals made by shady people.

In short: I'm the most vanilla, square, anti-drug person you can find. I don't want to use them, and I think other people would be better off if they reduced their usage as well.

Yet I can not for the life of me understand why drugs are illegal. Not just pot, all drugs. I'm totally onboard with making it our public policy that we want to reduce the use of drugs. That makes perfect sense to me. It does not make sense to me why anyone still believes that using the criminal justice system as the mechanism for getting to that goal is the right path. We are spending insane amounts of money on a failed approach while also generating huge negative side effects by creating an enormous group of people with criminal records. It's probably the worst thing this country has done to our own people since segregation and it seems like all of the policy people understand this. Why can't we get political will to do something different?

disposeofnick9 1 day ago 2 replies      
I've lost at least two elderly, extended family members this way. Both applied a patch and took a pill at the same time, which caused an OD.

The issue is that for many opioids and non-opioids, the gap between the therapeutic dose range and the LD50 is often dangerously narrow.

Complication #0: serum levels of the bioavailable molecule are rarely assayed. People metabolize and clear drugs at vastly different rates.

Complication #1: Hospital mistakes still happen quite frequently, despite many measures to prevent them, especially with inexperienced and overworked nurses/assistants.

Complication #2: cumulative dosing errors or interactions, especially multiple, independent prescriptions for similar opioids with different administration routes (patches, sprays, pills, injections)

Complication #3: overprescription of opioids because they're cheap, especially to veterans, which also leads to prescription-drug and hard-drug addictions.

Solution: opioids need to be singularly controlled, at home or in the hospital, by an integrated blood/interstitial-fluid measuring and dispensing unit, to avoid ODs and push back on abuse.

Plus, anyone taking opioids should also have Narcan or an equivalent antidote readily available, and wear a MedAlert QR-code bracelet listing relevant conditions and medications should they be found unresponsive.

Finally, avoid painkillers as much as possible and take the lowest dose that reduces the stress level.

6stringmerc 1 day ago 0 replies      
I have chronic pain due to a hereditary, incurable condition. Right now one of my ankles is in an elastic wrap to ease the irritation from arthritis. In our home, we did have a bottle somewhere of Tylenol 3, aka codeine. Regular acetaminophen was usually what I got to help with an issue.

As I grew into adulthood, I knew the pains I experienced were directly related to my condition, and it was my desire to not really 'cloak' the pain, but avoid it in the first place. Preventive if you will. It helps, but it's clear to me that I wanted to be healthy, and if I have to occasionally take something, so be it. Naproxen sodium has worked quite well of late.

The point of all this rambling is that I simply don't want the hassle of becoming addicted to pain pills. Or sleeping pills. Or nasal spray when it's allergy season. I've lived with pain for so much of my life that I'm kind of used to it, and I do say so as a point of pride. It's the body I was born with and it's the one I'll have to use for this gig, so take care of it.

I don't fault people for wanting pain treatment. I think the way the system was set up with pills flooding the US was incredibly destructive, and highly indicative of the dangers of for-profit medicine as a system. Toss in the DEA's drug laws and it just turns patients into criminals and that benefits only a very limited group.

When I eventually started seeing commercials on TV for a treatment for opioid-induced constipation, all I could think about was Trainspotting and that we have a real, genuine problem on our hands in the US.

musgrove 1 day ago 2 replies      
If 47k deaths per year is an "epidemic" as the author terms it, the 610k that die each year from heart disease must be an all-out pandemic. It never was a war on drugs. Drugs are inert and aren't capable of fighting a war. It's a war on addiction. And good luck winning that on a national scale with "laws."
brandonmenc 1 day ago 1 reply      
Lots of comments from people who have no experience with chronic pain, aghast that doctors would fulfill patient "demands" for painkillers instead of treating the underlying cause.

Pain is self-reported, so all a doctor can do is prescribe based on patient demand. Maybe they can't identify an underlying cause, or maybe the treatment (ex: back surgery) is too risky.

Spine surgery that might not work and can leave you with say, loss of bladder control? I'd take the pill every time, and if my doctor didn't just hand it over, I'd find another doctor.

jrapdx3 1 day ago 2 replies      
It's troubling the way this article presents the issue. Treating chronic pain is an enormously complex problem that clinicians have to deal with, especially as it gets bound up with collateral pitfalls of drug dependence, politics of health care delivery systems and conflicting pressures from patients, government regulators and others.

While unethical prescribers (not all are physicians) contribute significantly to rising misuse of opioids, the vast majority of practitioners want to do what's best for patients. As the article notes, there are few options for managing chronic pain, leaving opioids the only realistic choice in many instances.

None of the providers I know think opioids are preferable, but more like a necessary evil. They prescribe opioids sparingly, reluctantly, diligently. Patients have told me it's become increasingly difficult to get prescriptions for quite modest doses of opioid agents they've used for years without dose escalation. The tendency to throw babies out with the bathwater is not unique to this situation, but no less problematic.

Blaming pharmaceutical companies doesn't seem like a constructive approach. Probably there's a lot of R&D going on in this domain without much success, meaning it's a very hard problem to solve. I'm certain that a major breakthrough would be eagerly marketed; highly likely the profit margins would be huge. Meanwhile, we're left with the status quo, and manufacturers are meeting market demands. Isn't that how our economy works? Pharma sales are already more highly regulated than nearly all other industries; what more should be done?

Few legal drugs are as controlled in the US as Schedule II opioids. If there were no such controls, it's likely that the number of overdose-related deaths would be higher than it is. No one knows what solution will work; the need to be careful about changing the "rules" should be obvious.

The article's advocacy of "medical marijuana" as an alternative is IMO inappropriate. Simply enough, research on the uses of cannabis components for pain treatment is in very early stages. Specific indications and side-effect risks are inadequately understood. Recommending use of these components as treatments for pain is premature.

AlleyTrotter 1 day ago 0 replies      
Simple comment: What about the people who find relief from chronic pain with opioids and have no other option? These people are the ones who will suffer from the "we know what's best for you" crowd.
Havoc 1 day ago 0 replies      
I try to stick to aspirin & paracetamol for this reason. Even paracetamol feels a touch dicey given the liver-failure stats.

However, I've been in decent pain for 1 year+ before, so I know what it's like & can totally understand why people go for the powerful stuff. Continuous pain like that slowly but surely grinds your psyche to fine dust over the long run. That's the part that people without chronic pain miss...

joveian 1 day ago 1 reply      
This seems like a particularly limited article, although with a better slant than many. The NY Times just had this (also not wonderful, but with some additional information) article a few days ago: http://www.nytimes.com/2016/03/07/us/heroin-epidemic-increas...

While the title mentions heroin, the article at least mentions that deaths are frequently due to more deadly prescription painkillers being mixed in. One thing I wonder that I haven't seen addressed (I'm not sure if there is even data available) is how many overdose deaths are due to use of multiple drugs at the same time (alcohol for example makes many drugs more deadly).

Hopefully there will be more and better reporting on the issue. IIRC (and Wikipedia agrees at least), these numbers mean that drug overdoses are now killing non-trivially more people in the US than car accidents.

brbsix 1 day ago 0 replies      
The really sad thing about this is that nature has a remedy for the grip of opiate addiction, iboga, yet it is illegal as well.
ashwinaj 1 day ago 0 replies      
This is why you need a counter balance to monopolistic tendencies of the free market. Be it in the form of regulations or making companies liable for their greedy actions.

It has been proven time and time again that systematically removing "common sense" [0] regulations only harms society in the long run.

[0] Please don't start a mundane discussion about what "common sense" means.

user_0001 1 day ago 3 replies      
What are the rates like compared to other countries?

Does the US just overprescribe painkillers, meaning more flood to the black market?

Is it that people are getting it from the doctor and accidentally ODing?

Are doctors prescribing without care, so those who want the drug for a high and have no medical reason can get it?

I never knew painkillers to be used as a party/fun drug in the UK (outside of the heroin-using demographic), nor had I ever heard of someone ODing on prescribed painkillers.

Seems strange it is such a big issue in the US

mc32 1 day ago 7 replies      
So painkillers used against prescription kill more people than any individual illegal drug. And since people demand painkillers to treat chronic pain, physicians are looking to treat it with alternatives, one such alternative being MJ, because misuse generally doesn't result in fatal overdoses.

Vox, stop with the hyperbole.

cplease 21 hours ago 0 replies      
100 million Americans struggled with chronic pain in the 1990s? (linked video) One in three men, women, and children suffered from pathological, chronic pain? How can they quote a ludicrous statistic like that straight, without comment?

Or is that straight statement supposed to dress up some watered-down, meaningless factoid, like 100 million people having lingering pain due to some cause at some point in their entire lives?

c3534l 1 day ago 0 replies      
This isn't really surprising. Opiates have long been used as both recreational drugs and effective analgesics. All the major opiates people abuse besides opium itself were created at one point or another as a painkiller. It's unfortunate, but they're also really good at their jobs. I think that if you need prescription painkillers you should have them. Taken without wanton disregard they're actually fairly safe, although physical addiction is always possible.
nathanvanfleet 1 day ago 2 replies      
Just so you know, opioid painkillers are actually not useful for chronic pain at all. Over the long term they actually make patients' sense of pain GO UP. They're excellent for non-chronic pain, however.
cpfohl 1 day ago 2 replies      
Knowing what I know about opioid painkillers, I don't think I'd ever accept a script for them. I'd accept them in the hospital, but never in a bottle that goes home with me...
lazyant 1 day ago 2 replies      
Are there any alternative treatments of pain that don't involve drugs? for ex http://www.570news.com/2016/03/06/waterloo-man-praises-local...
njharman 1 day ago 0 replies      
Joining cigs and alcohol, eh?
swillis16 1 day ago 1 reply      
All it takes is one or two extra pills to get high from the standard opioid pain prescription. It would've been nice to see this mentioned in the article but it seems pretty light on content.
tosseraccount 1 day ago 1 reply      
Rhetorical question: how many of these deaths also involve alcohol?
trophycase 20 hours ago 0 replies      
But illegal drugs destroy many more lives.
joesmo 1 day ago 2 replies      
Send patients home with Narcan and train the people they live with to administer it as well. Have every EMT, firefighter, and police officer in the nation carry and know how to administer Narcan. Have it be part of every single first aid kit sold in this country. Remove the social stigma of drug abuse. Remove penalties for people who help others who are overdosing. You'd think someone would have some common sense in this country but you'd be wrong. It will never get better the way things are going now. It's ridiculous to even have a fucking article like this that doesn't mention the numerous tried and true solutions that exist but are simply not being put into place because the people in power in this country want to see people dying.

The problem isn't that we don't have solutions. Solutions are aplenty. The problem is that no one in America cares. No one in this country gives a shit that people are dying. Most people want it to happen. They support the fucking drug war. They want people to die. Until this fucking shit changes, people will continue to die and idiots will continue to wonder what we can do. So many fucking things, I don't even have time to write them all down. That's the fucking sad part.

FussyZeus 1 day ago 0 replies      
Is this really surprising? All the benefits of illegal drugs without the risks involving prison time and public disgrace. All you need to do is figure out what things to tell your Doctor to make him think you need one of these things, and you have a legal (and probably insurance funded) supply.

Not saying of course that everyone who gets these doesn't need them, I'm sure many do, but we have something like 90% of the worldwide consumption occurring in the States, so something is clearly up.

yarou 1 day ago 0 replies      
For some reason, I never saw the appeal of opiates.

Granted, I use them somewhat occasionally (as needed) for pain, but they don't really cause in me the compulsive, addictive behavior I've read about. My internet addiction (HN included) is far worse than any chemical substance I've ever used.

julie1 1 day ago 0 replies      
Prescribing opium... a trend that has not been seen in the world since the Victorian era in the UK.

Opium has the reputation of making people apathetic, losing their will to rebel.

The new trend is that opioids are now cheap and prescribed not to the rich but to the poor.

Religion used to be the opium of the people, they said, and now that opium is cheap, religion is not needed anymore to make people servile.

I love this new era of progress.

Tomorrow, shall we make an application to help poor parents sell their kids' body parts on the internet to cure richer people?

I mean, let's try to make it even more dystopian. We can do better. That is what progress is: making the system more efficient.

bobby_9x 1 day ago 0 replies      
It makes sense just based on statistics. Americans have more access to painkillers than to any illegal drug, and this alone will result in more misuse (and deaths).

If illegal drugs were all made legal tomorrow, we would see something similar.

fapjacks 1 day ago 0 replies      
Kratom can solve this problem, but the FDA won't have it.
C2: Affordable X86-64 Servers scaleway.com
440 points by fooyc  1 day ago   218 comments top 46
lazyjones 1 day ago 4 replies      
Avotons are very slow; an 8C SoC will typically be slower than an 8-year-old 2C desktop CPU (I ran Go builds as a benchmark on my own 2.4 GHz C2750 vs. a 2008 iMac with a 2.8 GHz Core 2 Duo).

As for Scaleway, some people seem to like them very much, but I found their policy of spamming their users problematic. They (online.net) mock you at registration with a sleazy pre-checked and disabled box for receiving spam ("product news" etc.), so I would consider their offers "ad-supported".

The C2 is advertised as "bare-metal", but since they offer a 4C variant, I doubt that (there is a 4C Avoton, the C2550, but it doesn't seem like a sane choice). The C2L might be a full dedicated box (or not), but the C2S and C2M seem very much VPS/shared. It's likely based on SuperMicro MicroBlades: http://www.supermicro.nl/products/MicroBlade/module/MBI-6418... (4 nodes in one 3U blade!).

cfallin 1 day ago 1 reply      
Prices seem really great but a few paragraphs down they say the servers are based on Avoton SoCs. Intel Avoton is an Atom chip (Silvermont core), so CPU-bound performance will be somewhat lower than the usual Sandy Bridge/Haswell/whatever core that you get on AWS or Google Compute Engine. It's a server SoC though so I/O throughput is probably pretty decent...
speakeron 1 day ago 0 replies      
I've been testing the performance of the C2.

Compared with a Xeon D-1520 (the current hot chip of low-cost cloud computing and actually very nice), single core speed is less than half of the Xeon D (at about the same clock rate of 2.4GHz); multi-thread speed (8 threads running lame encodes) is about 66% of Xeon D.

Not bad for the price.

gravypod 1 day ago 2 replies      
Using this for a remote backup server would be cool. Start the server, copy over the backup, take their "storage snapshot", and turn off the server.

Lets do some napkin math: (In USD)

 - $0.01/hr USD
 - 10hr of backups each day (Heavy Usage)
 - 10 cents/day, $3/month, $36/year
You can also take snapshots. So that's extra cool.

Edit: Calculations for 150GB storage instances:

 - 0.01/hr (@ 150GB of storage)
 - 20hr of backups each day (Heavy Usage and more data transferred)
 - $0.24/day, $7.27/month, $87.29/year
Just did these for my own needs so thought everyone else might enjoy having them too.
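
Napkin math like this is easy to sanity-check in a few lines. A quick sketch (the hourly rate and usage hours are the commenter's assumptions, not official Scaleway billing):

```python
# Sanity-check the backup-server napkin math above.
# Assumption (from the comment, not official pricing): $0.01/hour,
# billed only while the server is powered on.

def backup_cost(rate_per_hour, hours_per_day, days_per_month=30):
    """Return (daily, monthly, yearly) cost for an hourly-billed server."""
    daily = rate_per_hour * hours_per_day
    return daily, daily * days_per_month, daily * 365

daily, monthly, yearly = backup_cost(0.01, 10)
print(f"${daily:.2f}/day, ${monthly:.2f}/month, ${yearly:.2f}/year")
# 10 h/day at $0.01/h works out to $0.10/day, $3.00/month, $36.50/year,
# close to the figures quoted above.
```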

LogicX 1 day ago 5 replies      
I've been very pleased with http://packet.net and their Type 0 server. It's bare metal, but much more performant than this. They have an amazing network, it's billed by the hour, and it's < $40/mo. Bandwidth is not included: $.05/GB, but that's half what AWS charges.

Also, they're based in the US, so there's less latency for those of us with primary customer bases here. We've used them at DNSFilter in NYC as part of our anycast network for the last two months. Looking forward to their San Jose and Amsterdam datacenter expansions coming soon.

earlz 1 day ago 5 replies      
Their support isn't great, and they're only in Canada and the EU, but Kimsufi[1] is my go-to for beefy but cheap dedicated servers. Their cheapest offering is $5/month for an Atom, 2GB RAM, and a 500GB hard drive. But where the value really lies is a step up to about $25, where you start getting non-Atom processors, 16+GB of RAM, and 1+TB hard drives. Also free bandwidth. It's a real dedicated server, so you can install whatever on it, but KVM access is expensive if you break it into not booting. You can deploy a fairly good set of distros through their built-in wizard for free.

1: https://www.kimsufi.com/us/en/

zschallz 1 day ago 2 replies      
The blade servers at Delimiter (https://www.delimiter.com/) are even more affordable. I pay $20 a month for a dual-Xeon blade with 16GB of RAM.

They did have a very long downtime this year with no service credit, but uptime has been reasonable over the past year. If you're looking for a hobby box it's a pretty good deal.

codecamper 1 day ago 8 replies      
unlimited bandwidth. Is that for real?

300 Mbit/s. Let's assume you can pump out 100 Mbit/s sustained. That's about 10 MB/s, or 26,000,000 megabytes per month: 26,000 GB/month.

 AWS & Google Cloud are about $0.10/GB. That'd cost $2,600 to serve.
Scaleway claims that'd cost just 12 euro.

For realz?
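
The commenter's arithmetic can be replayed directly; a quick sketch (using their rounding of 100 Mbit/s down to 10 MB/s and the ~$0.10/GB egress figure, both assumptions from the comment):

```python
# Replay the bandwidth napkin math above.
mb_per_second = 10                  # ~100 Mbit/s sustained, rounded down as above
seconds_per_month = 86_400 * 30
gb_per_month = mb_per_second * seconds_per_month / 1000  # MB -> GB

print(f"{gb_per_month:,.0f} GB/month")              # ~25,920 GB/month
print(f"${gb_per_month * 0.10:,.0f} at $0.10/GB")   # ~$2,592 at AWS/GCP rates
```

which lines up with the ~26,000 GB and ~$2,600 figures quoted above.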

thenomad 1 day ago 5 replies      
Dammit. Cheap - very cheap - for everything but storage.

It seems impossible to find a low-cost server with some big, slow spinning disks on it at the moment. I'm really not sure why.

Anyone got any recommendations there? Where would I look, if anywhere, for, say, 4TB of storage attached to a low-cost virtual or dedicated server, for less than Google Nearline or equivalent?

muhpirat 1 day ago 0 replies      
CPU:

    model name : Intel(R) Atom(TM) CPU C2750 @ 2.40GHz

Network:

    root@scw-4e7977:~# ./speedtest-cli
    Retrieving speedtest.net configuration...
    Retrieving speedtest.net server list...
    Testing from Free SAS (...)
    Selecting best server based on latency...
    Hosted by NEOTELECOMS (Paris) [1.59 km]: 2.652 ms
    Testing download speed........
    Download: 881.83 Mbit/s
    Testing upload speed........
    Upload: 513.56 Mbit/s
Vlaix 1 day ago 0 replies      
Not surprised to read in the comments that they're a subsidiary of Online.net. Ever since they introduced ultra-cheap dedicated boxes ten years ago, it seems everybody's been sub-renting either them or OVH. I wonder why there aren't more similar offers worldwide, or at least in Europe (I can only think of Leaseweb, and they're data-capped). The network backbone isn't much different in Britain, Holland, or Germany. And the server units aren't really custom.
siscia 1 day ago 1 reply      
I had a great experience with scaleway.

However, last time I checked they still had only IPv4, and very few addresses. It happened to me that I wanted to spin up an instance for a quick experiment but couldn't get a public IP to connect to. Mine was just a simple experiment, nothing important, but still...

cateye 1 day ago 1 reply      
How would this compare to Hetzner offerings? https://www.hetzner.de/hosting/produktmatrix/rootserver
sspiff 1 day ago 0 replies      
Hosted in the same datacenters, with similar low-end offerings: https://www.online.net/en/dedicated-server#perso.
tluyben2 1 day ago 4 replies      
Invite only.... So you cannot even get them. What's the use of this announcement?
tokart 1 day ago 1 reply      
The C1 is a good idea and works well, especially as ARM distros are maturing. IMHO C1 performance is quite good for the price (700-900 tps on pg_bench, 1500 req/s on RabbitMQ on Debian). We are running an ELK server which performs well enough for us even with 2GB of RAM.

I was just expecting a low-cost ($6/month) 4-core ARM64 server with 8GB of RAM; I think that would have been more exciting!

aidenn0 1 day ago 0 replies      
$13/mo for 8GB of RAM is the big thing for me here; most of my servers are not CPU-bound, but they require a lot of tuning to fit in the smaller VM instances.
reynoldsbd 1 day ago 2 replies      
Where are Scaleway's data centers located? I found mention on their site that they're a Paris-based company, but is this the only geographical location for their offerings?
kyriakos 1 day ago 3 replies      
The prices look sweet. Anyone have experience with them? How's the network stability? Support response times?
vardump 1 day ago 0 replies      
ECC RAM? It sounds suspicious when offers like this don't mention the memory type.
nonuby 1 day ago 1 reply      
A real x86-64 bare-metal server for ~US$3.30/mo all-inclusive. Wow, it seems like only yesterday people were drooling over $99/mo 300 MHz Cobalt RaQs at EV1Servers (it was more like 15 years ago). I was able to spin one up in under a minute, though I was previously registered at Scaleway due to their last offering.
therealmarv 1 day ago 1 reply      
Does anybody have experience with Scaleway, especially compared to DigitalOcean? This is the first time I've seen them. It seems to me the biggest difference is that their servers are bare metal and not VMs?!
kazinator 1 day ago 2 replies      
If you do CPU-intensive work on these servers, might not the electricity cost surpass the 0.02/hour income? Example: if a kWh costs 0.10, then something burning 200W costs 0.02 per hour. (Someone substitute realistic numbers.)
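
Plugging in numbers makes the point concrete. A sketch with illustrative figures only (a real Avoton node draws far less than 200 W, which is why the economics can still work):

```python
# Break-even check: does electricity alone eat the $0.02/hour revenue?
# All wattages and prices here are illustrative, not measured.

def electricity_cost_per_hour(watts, price_per_kwh):
    """Cost in currency units of running a load for one hour."""
    return watts / 1000 * price_per_kwh

print(electricity_cost_per_hour(200, 0.10))  # hypothetical 200 W box: 0.02/h, revenue gone
print(electricity_cost_per_hour(30, 0.10))   # ~30 W Avoton-class node: 0.003/h, wide margin
```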
stcredzero 1 day ago 0 replies      
Right now I'm developing on Amazon EC2, because of the low ping times and because I have a range of hosting options around the world to minimize ping times for potential customers. I would like to know what options I would have at scaleway.com, but I can't find that information easily.

All I found was: How are my servers positioned in terms of network proximity and resiliency? -- The ability to group your servers to create placement preferences is already integrated in the core of our system. We will expose it to you in the coming months.

ab4275 1 day ago 0 replies      
I already have a Scaleway account. I just finished some benchmarks on their starter VPS and the price-per-performance looks very interesting. Cores are dedicated, based on the Intel C2750 (per /proc/cpuinfo). I get approx. 23% more performance on CPU and IOPS than DO's $20 plan. At the equivalent $2.70 price, it's 8-9 times less expensive for more performance. It's time for me to think about migrating :-)
netforay 1 day ago 1 reply      
Just yesterday I tried to install GitLab CE on a C1, then realized that there is no ARM build for it, and went to DigitalOcean just for x86. And today they introduce x86. Great news.
driverdan 1 day ago 1 reply      
How does the CPU performance of Avoton compare to a Pi 2? How do they perform on single-threaded encryption, e.g. for a Tor relay?
sofaofthedamned 1 day ago 0 replies      
I'd still like to know where I can buy the new Xeon D variants at an affordable price. The Avoton variants are okay but the availability is not good, certainly in the UK.
fermigier 1 day ago 0 replies      
"Avoton" sounds very close to "Avorton", which is not what you want powering your computer.


z3t4 1 day ago 0 replies      
I can't wait for the day when it's actually cheaper to host in the cloud than to host the servers yourself, electricity, bandwidth, and storage costs included. This is a step in the right direction!
Marazan 1 day ago 0 replies      
I would like them to make their S3 replacement available to new signups.
iofur 1 day ago 0 replies      
I've just tried it, but did not find how to resize/scale... And I'm a bit disappointed about snapshots: you have to power off your server before taking a snapshot/backup.
ikurei 1 day ago 1 reply      
I can see nothing about where the servers are, which can impact latency dramatically in some cases. I have to serve a chunk of our users in China, and I need to have at least one server in Asia.

Am I missing something?

rogeryu 1 day ago 1 reply      
I've tried the C1 to run JSPWiki on Tomcat, but it didn't work; I guess there's too little RAM. I'm sorry to see that there is nothing in between a C1 and a C2.
pjc50 1 day ago 1 reply      
If you want even more affordable (but lower performance), http://lowendspirit.com/
pyvpx 1 day ago 1 reply      
how easy is it to boot/install your own OS on these bare metal servers? some of these offers look interesting for unikernels.
ctstover 1 day ago 0 replies      
Are they still offering the C1? It's absent from the pricing page.
tempodox 11 hours ago 0 replies      
What is unmeted bandwidth?
dvfjsdhgfv 1 day ago 1 reply      
Insanely cheap? For this price you can get a real i7 from Hetzner instead of an Atom...
throwaway21816 1 day ago 0 replies      
>invite required

Great way to drive away customers

topbanana 1 day ago 0 replies      
Anything similar in the UK?
vbit 1 day ago 0 replies      
Any support for FreeBSD?
ino 1 day ago 0 replies      
How do they handle DoS?
imaginenore 1 day ago 1 reply      
Meh. I just got a 4-core / 6GB RAM / 150GB HDD / 1Gbps VPS for $6/month.
imdsm 1 day ago 1 reply      
It'd be nice if Docker was supported out of the box. I don't want to be pissing around with kernels just to test them out.
rfreytag 1 day ago 1 reply      
I tried Scaleway last year. I was very disappointed with the usability of their offering (I was trying to run a simple Ubuntu Minecraft server - it was unnecessarily complicated to set up everything from ssh keys to snapshots), and stopped the account quickly. That very next day and for the first time ever my card picked up a fraudulent $3000 bar bill in Las Vegas. I don't think it was a coincidence.

I cannot recommend Scaleway.

Add Reactions to Pull Requests, Issues, and Comments github.com
394 points by WillAbides  5 hours ago   153 comments top 38
mrharrison 4 hours ago 14 replies      
-1, or the thumbs-down reaction, is I think a mistake. Downvotes aren't usually that constructive, because most of the time they're used as retaliation against a specific user instead of as constructive criticism. If someone downvotes you, you tend to downvote them back. At the least, downvoting should be a privilege, like it is on SO and HN: http://stackoverflow.com/help/privileges/vote-down
Sir_Cmpwn 4 hours ago 1 reply      
A missing feature here is sorting issues by public support. An example is FontAwesome, which explicitly asks users to leave a +1 comment on issues they support. You can then get a pretty good idea of the most desired features by sorting the issues by most commented.



Would also be nice to see these reactions on the issue list so you can get a feel for the issues at a glance without digging deep into each one.
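
Client-side sorting like this becomes straightforward once reaction counts are exposed. A sketch (the preview media type and the shape of the `reactions` object are assumptions based on GitHub's announcement, not verified here):

```python
# Rank a repo's open issues by "+1" reactions, most-supported first.
import json
import urllib.request

# Assumed preview media type for the reactions API (per GitHub's announcement).
REACTIONS_PREVIEW = "application/vnd.github.squirrel-girl-preview"

def rank_by_plus_one(issues):
    """Sort issue dicts by their '+1' reaction count, highest first."""
    return sorted(issues,
                  key=lambda i: i.get("reactions", {}).get("+1", 0),
                  reverse=True)

def fetch_open_issues(owner, repo):
    """Fetch one page of open issues with reaction summaries included."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/issues?state=open&per_page=100",
        headers={"Accept": REACTIONS_PREVIEW},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires network):
#   for issue in rank_by_plus_one(fetch_open_issues("FortAwesome", "Font-Awesome"))[:10]:
#       print(issue.get("reactions", {}).get("+1", 0), issue["title"])
```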

bsimpson 4 hours ago 1 reply      
Looks like Dear GitHub[1] is having a rather quick impact on the product; first templates[2], now this:

[1] https://github.com/dear-github/dear-github

[2] https://github.com/blog/2111-issue-and-pull-request-template...

franciscop 6 minutes ago 0 replies      
Awesome! Just a nitpick: I think the reaction control should be near the rest of the interface. So either move the add-reaction icon to the bottom left, or show the reactions on the right (where milestone, tags, etc. are).

This way you get better feedback.

jkire 4 hours ago 2 replies      
I wonder if this is too featureful. What is the difference between +1, heart, and hooray? Having just +1 and -1 is unambiguous and probably covers the vast majority of use cases? Perhaps not, but I'd be very interested to know the reasoning behind choosing between "unambiguous" and "expressive".
richerlariviere 4 hours ago 0 replies      
I think it is definitely the end of the +1 era, folks! Thanks, GitHub, for listening to the community's feature requests. You should allow more icons, like Slack currently does.
chrismonsanto 3 hours ago 0 replies      
Can you downvote replies in threads that are locked? Can collaborators delete these reactions like we would with comments?

> Have feedback on this post? Let know on Twitter.

Not everyone uses Twitter. It would be awesome to give feedback using the one account I'm guaranteed to have: a GitHub account. Otherwise I have to ask my question on HN...

bhaumik 14 minutes ago 0 replies      
First* Slack, then Facebook, and now GitHub. Looks like reactions are replacing (or expanding on..) the unary like/upvote/heart expression for tech products.

*Or at least the first time I've seen them used as an important feature.

city41 4 hours ago 2 replies      
I think people will still write +1 comments because they won't notice this new feature, at least initially. It'd be nice if Github just converted "+1" comments into reactions.
bengotow 4 hours ago 1 reply      
Finally. Let's just hope it doesn't email you when someone leaves a +1 reaction.
ma138 4 hours ago 0 replies      
Awesome move by GitHub. ZenHub[1] will be phasing out our +1 button now that it's no longer needed. It feels good to focus. We're excited to use this reactions data as part of our reporting suite; please keep the improvements coming!

[1] https://www.zenhub.io/

ruffrey 1 hour ago 2 replies      
Why is the thumbs up a white hand, and thumbs down is a yellow hand?!
voaie 1 hour ago 0 replies      
Well, I think a voting poll is more practical than manually counting the upvotes/downvotes on every comment. I don't know how often maintainers will come back to see how an issue is going and which comments are popular. Also, sorting comments is no fun because of duplicated content.
Mikushi 3 hours ago 1 reply      
April 1st isn't here yet. I get the idea, but seriously, emojis...
Animats 3 hours ago 0 replies      
I'd keep downvotes, but lose the emoji.
rocky1138 4 hours ago 0 replies      
Emojis are terrible, but they're better than "+1".
donretag 2 hours ago 2 replies      
"So go ahead:+1: or :tada: to your :heart:s content."

Or please don't. Part of the problem with the +1s is that they add noise. How are reactions going to cut down on the noise? Telling people to go ahead and +1 an issue (increasing noise) is the opposite of what the "Dear GitHub" maintainers want.

Many projects do not use +1 or any other voting scheme to elicit priorities from the general public. +1 comments and reactions provide little value. I have seen GitHub issues where people +1 already-closed issues because they do not bother reading.

gsmethells 4 hours ago 0 replies      
Wow! GitHub is being influenced by GitLab (who released this feature recently in GitLab 8.4).
dpflan 4 hours ago 7 replies      
I like the idea of adding more expressiveness: pictorially capturing fleeting moments of emotion, or accurately representing an emotional state as it occurs.

These are the reactions:

 1. +1
 2. -1
 3. smile
 4. thinking_face
 5. heart
 6. tada
Do they capture the necessary expressiveness for the context? Facebook's reactions cover more emotions, but FB is trying to support reaction to anything that can be posted.

gjreda 4 hours ago 1 reply      
This is a welcome addition. I've run into bugs in projects before and wanted to "+1" a thread, but it always felt like spamming the maintainers.

It'd be cool if they added a way to search through your list of reactions. This would allow you to effectively comment on an issue in an OS project, while simultaneously bookmarking it, so that you can go back and commit a fix when you have a free moment.

mrmondo 2 hours ago 1 reply      
Ah yes, following in the footsteps of GitLab, which has had this for a while. The thumbs up / down and voting types are useful; everything else is a distraction IMO.
choward 3 hours ago 1 reply      
Am I missing something or is there still no way to +1 issues? All I see are ways to react to comments. Whenever I feel the urge to "+1" something it's for the issue, not a specific comment. Can someone explain how to add a reaction to an issue?
knd775 5 hours ago 0 replies      
Well, I guess this at least sorta solves the "+1" issues.
lettergram 3 hours ago 0 replies      
Getting dangerously close to that Facebook patent[1]...

[1] http://www.freepatentsonline.com/8918339.html

mkobit 4 hours ago 0 replies      
I don't think these necessarily cover all of the responses that could be made, but it's a great start toward getting simple feedback like this. As other users mentioned, it would be awesome to be able to sort or perform some kind of action based on the quantity of reactions.

I wonder if they will allow repository owners to select which reactions to enable? That would help with the limited selection while still letting owners pick what they consider useful.

wilg 4 hours ago 0 replies      
Stoked about this feature! Can't wait until these are available in the API!
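Once reactions do land in the API, fetching them will presumably look like any other GitHub REST call behind a preview media type. A sketch using only the standard library — the endpoint path and the `squirrel-girl-preview` media type are assumptions based on GitHub's usual preview-API conventions, not confirmed documentation:

```python
import urllib.request

def reactions_request(owner, repo, comment_id, token=None):
    """Build a Request for the (assumed) reactions endpoint of an
    issue comment. Pass the result to urllib.request.urlopen()."""
    url = (f"https://api.github.com/repos/{owner}/{repo}"
           f"/issues/comments/{comment_id}/reactions")
    # Preview features on GitHub's API are gated behind a custom
    # Accept header; this media type is a guess at what they'll use.
    headers = {"Accept": "application/vnd.github.squirrel-girl-preview+json"}
    if token:
        headers["Authorization"] = f"token {token}"
    return urllib.request.Request(url, headers=headers)
```

The response, if it follows the pattern of other endpoints, would be a JSON list of reaction objects you could tally per comment.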
yuribit 4 hours ago 0 replies      
Is there a way to sort by "reactions"? Otherwise I think this feature is useless. I would have preferred more detailed issues rather than ugly emojis.
SnaKeZ 4 hours ago 1 reply      
Could they convert the existing "+1" comments into reactions?
hiphopyo 3 hours ago 0 replies      
Speaking of new features -- what I'd like to see is the ability to remove items from my public profile / activity list. I often make mistakes, or do stuff I don't want the public to see, and I'd rather not have to email GitHub support asking them to remove it manually every time.
thejameskyle 5 hours ago 0 replies      
jwilk 4 hours ago 4 replies      
Um. What does :+1: mean when applied to an issue? "I like this bug"?!
looneysquash 3 hours ago 0 replies      
I assume this is inspired by gitlab's similar feature?
maaarghk 5 hours ago 1 reply      
SnaKeZ 5 hours ago 0 replies      
End of "+1" era?
cpr 4 hours ago 0 replies      
Nice to see them moving quickly on some major OSS community requests.
fiatjaf 4 hours ago 0 replies      
GitHub: social coding
dkopi 4 hours ago 0 replies      
The best way to +1 an issue is with a pull request.
TomasEkeli 4 hours ago 0 replies      
Wow, Eric Elliott (@_ericelliott) just asked for this on twitter - and now it happened. He must be a witch.
Leaf: Machine learning framework in Rust github.com
376 points by mjhirn  2 days ago   49 comments top 13
wall_words 2 days ago 3 replies      
The performance graph is deceptive for two reasons: (1) Leaf with CuDNN v3 is a little slower than Torch with CuDNN v3, yet the bar for Leaf is positioned to the left of the one for Torch, and (2) there's a bar for Leaf with CuDNN v4, but not for Torch.

It's good to see alternatives to Torch, Theano, and TensorFlow, but it's important to be honest with the benchmarks so that people can make informed decisions about which framework to use.

IshKebab 2 days ago 1 reply      
I think Microsoft's approach with CNTK is far preferable to this. Rather than defining all the layers in Rust or C++ it uses a DSL to specify mathematical operations as a graph.

You can easily add new layer types, and recurrent connections are easy too - you just add a delay node.

Furthermore, since the configuration file format is fairly simple, it is possible to make GUI tools to visualise it and - in future - edit it.

rubyfan 1 day ago 0 replies      
I'm honestly skeptical that Rust is all that appealing for this type of work. It just doesn't seem like (1) the main concerns, performance and type safety, are the top priority in this space, or (2) this offering is differentiated enough from what you already get from Java today.

Honestly, many modeling problems are clunky and inefficient at scale, but that's OK. By the time you need to scale badly enough, you already have a significant set of libraries in Java to support it.

I'm failing to see an answer to the one question I have: "why Rust?"

andreif 2 days ago 0 replies      
Previous discussion 4 months ago https://news.ycombinator.com/item?id=10539195
YeGoblynQueenne 2 days ago 4 replies      
> super-human image recognition

That's a bold claim. As far as I know there was one paper that reported a model beating human scores on a specific test (ImageNet, I believe). Whether that translates to "superhuman" results in general is followed by a very big question mark.

In general I really struggle to see how any algorithm that learns from examples, especially one that minimises a measure of error against further examples, can ever have better performance than the entities that actually compiled those examples in the first place (in other words, humans).

I'm saying: how is it possible to learn superhuman performance in anything from examples of mere human performance at the same task? I don't believe in magic.

kingnothing 2 days ago 4 replies      
I'm completely new to ML and to what real-world applications it's suitable for. Are we at the point yet where you can train a computer to look at arbitrary images and count the number of people in them? What if it was largely the same background and only the number of people changed -- for example, a camera shooting a queue of people to determine queue depth at a bus station.
eggy 2 days ago 1 reply      
I will take a look at it, but are the benchmarks comparable, since, to quote the site, "For now we can use C Rust wrappers for performant libraries"? Torch is LuaJIT over C, and TensorFlow has Python and C++. Is Rust making it fast, or is it the interface code to the C libraries?
eranation 2 days ago 2 replies      
This is very cool! When I presented it to my CTO however, he said that he doesn't think this will gain traction from data scientists over Scala or Python, as Rust is even more complex than Scala (which is not the simplest language out there, even though I'm a big fan of both Scala and Rust and I know this might start a flame war)

Do you think data scientists can write their models directly in Leaf? Do you think there will need to be a DSL that translates from the R / Python world to something you can run on Leaf to make it happen?

ybrah 2 days ago 4 replies      
It's interesting to see "technical debt" become a more common term. Is there a rigid definition for it?

From the article: "Leaf is lean and tries to introduce minimal technical debt to your stack."

What exactly does that mean?

rck 2 days ago 1 reply      
The benchmarks would be a lot more useful if the context around them were more obvious. In particular, it would be nice to know if the benchmarks are for a single input, or for a batch of inputs. If for a batch, then the batch size is important too. Maybe this stuff is somewhere on their site, but it shouldn't require digging.

Without this information it's hard to make a useful comparison at all.

zump 2 days ago 1 reply      
Any recurrent layers?
mastax 2 days ago 1 reply      
I'm glad that rust has crossed the point where posts to HN that would be "_ in Rust" are now just "_". I hope this means that Rust is starting to be used for its own merits rather than just novelty.
yarrel 2 days ago 0 replies      
1. Rust warning.

2. If "for hackers" is the new "for dummies" then gentrification is complete.

Microsoft will release a custom Debian Linux theregister.co.uk
313 points by l1n  1 day ago   125 comments top 22
Someone1234 1 day ago 2 replies      
The article says "Microsoft will release a custom Debian Linux," but the linked Github repository says:

> Q. Is SONiC a Linux distribution?

> A. No, SONiC is a collection of networking software components required to have a fully functional L3 device that can be agnostic of any particular Linux distribution. Today SONiC runs on Debian

caf 1 day ago 1 reply      
In tangentially-related news, there was a lot of talk at NetDev about switchdev, a new Linux driver model for hardware-offload switching hardware.

It allows the kernel's Layer-2 and Layer-3 switching/routing configuration to be reflected down into the switch offload hardware, and the switch's ARP and MAC table data to be reflected back up to the kernel stack.

The overall idea being you can continue to use the same userspace tools to configure the routing/switching, and it all just magically goes faster if you have supported switching hardware.


coldtea 13 hours ago 0 replies      
Well, Microsoft had the best (and most widely deployed) desktop UNIX distribution back in the 80s too.


harry8 1 day ago 1 reply      
I imagine anything Microsoft releases that could possibly contain GPL software, such as the Linux kernel, will face the most aggressive search for violations of any software ever. Memories are long, and that distrust is not going away any time soon.
cpeterso 1 day ago 1 reply      
They should call it XENIX. :)
exabrial 15 hours ago 1 reply      
In a sudden twist of fate, Microsoft announces they are writing their own closed-source systemd alternative. Millions of naysayers flock to systemd, hailing it as the savior of Linux.
mankash666 1 day ago 6 replies      
Why Linux? The networking stack on BSD is superior, and the OS places no copyleft restrictions!

I'm starting to believe that developers choose the OS/tools they are used to (Linux in this case) versus the one best suited for the job (BSD)

criddell 1 day ago 2 replies      
I always thought that Microsoft was blocked from getting into Linux by the terms of their sale of Xenix.
merb 1 day ago 1 reply      
Actually, what Microsoft is doing could be great. However, I don't understand why they even use Jenkins for this project (https://github.com/Azure/sonic-build-tools). I mean, I love Jenkins, but wouldn't it have been good if they had used their own build tool? Something like a tfs-linux-worker -- I know that doesn't exist, but if they had built something of their own they could have shown what their stack can do. Using Jenkins feels like "we can't yet do that with our own stuff".
ajarmst 1 day ago 0 replies      
On the plus side, that will be the most carefully reviewed, evaluated and checksummed distro in Linux history.
corncobpipe 1 day ago 0 replies      
I'm sure the lawyers at SonicWall will love this
chris_wot 1 day ago 1 reply      
Satya Nadella is a breath of fresh air. It's amazing the difference in management styles from the Ballmer days.

When Microsoft put Nadella in charge, they made a great decision. And I honestly don't say that very often about top level management.

Taniwha 17 hours ago 1 reply      
Do I hear the distant shivering of thousands of tiny daemons?
duncan_bayne 1 day ago 2 replies      
Life imitates (comedic) art ...


MayMuncher 1 day ago 1 reply      
Anybody have any links to switches/routes that support SAI? I couldn't find any
chenster 1 day ago 1 reply      
It's probably also the OS used to run SQL Server on Linux, announced this week - https://blogs.microsoft.com/blog/2016/03/07/announcing-sql-s....
anonbanker 9 hours ago 0 replies      
In other news, there is now a stable Microsoft operating system I could use on my computer.

If they ported Windows to it, they could probably make a solid Wayland/KDE competitor.

qwertyuiop924 1 day ago 2 replies      
Ah, how wonderful it will be to live in a world without embrace and extend.

Wait. systemd, kdbus, GNOME and systemd-udevd. Shit.

We have met the enemy, and befriended it. Now we are the enemy.

hathym 17 hours ago 0 replies      
If you can't beat them, join them.
tempodox 15 hours ago 1 reply      
Linux has been the best friend of MS Windows for quite some time now. All the Linux users dual-boot into Windows each time they need some half-decent GUI for an app or whatever. Linux might be a good server host OS, but it failed spectacularly to conquer the desktop.
thescribe 1 day ago 1 reply      
I thought they already did, and it was called RHEL. I guess that's not Debian.
plugnburn 20 hours ago 0 replies      
Finally M$ Linux comes true. So we must prepare for viruses, antiviruses, "defenders" and the whole infrastructure industry that lives on creating problems out of nothing and then heroically solving them.
Python 3 Is Winning Library Developer Support microsoft.com
293 points by brettcannon  2 days ago   180 comments top 19
toyg 2 days ago 3 replies      
Windows could be the first platform to ship Python 3 without ever having shipped Python 2. Total clean slate, perfect opportunity; they could sell it as MS "aligning with the cloud" and offer something special -- if I could start and provision a Windows cloud instance from any Python 3 REPL without having to bother with devops tools (Vagrant, Packer, Ansible and whatnot), I would switch tomorrow.
username223 2 days ago 4 replies      
It still blows my mind how badly the Python and Perl developers misunderstood their languages' roles. Many developers viewed them as Unix infrastructure like C, sed, and awk: a stable foundation upon which to build long-lasting programs. But the core devs got bored and decided to either break most things for very minor improvements (Python), or break everything to provide all things to all people (Perl).

The Perl folks eventually admitted that version 6 was a separate language; we'll see how that turns out. The Python ones can't make the same move, since version 3 offers no major benefits/changes over version 2, so they simply have to continue the beatings until... everyone moves to version 3. What a colossal waste of time!

mindcrime 2 days ago 2 replies      
Is anybody surprised by this? Everybody knew the transition was going to take a long time, and that it would be a gradual process. It may be that it's actually been even slower and more gradual than people expected, but by and large everything seems to be proceeding nicely.
afarrell 2 days ago 2 replies      
Ubuntu 16.04 LTS is planned to have only python3 installed by default
rcarmo 2 days ago 1 reply      
Well, I'd be happier if PyPy 3 were faster. As it is, I can get pretty decent speedups with 4.0 (2.7.x), but not for Python 3 code in my use cases...
declnz 2 days ago 0 replies      
Interesting. Whilst it is slightly embarrassing how long it's taken, recently I feel that the Python 3 conversion is gaining critical mass, and I now hope / expect these adoption lines to go non-linear in the coming year or so...
cdnsteve 2 days ago 0 replies      
Who is this Microsoft? Loving this new community involvement.
buovjaga 2 days ago 2 replies      
ihuman 2 days ago 0 replies      
To add to this article, here is the Python 3 Wall of Superpowers. It is a list of the most popular packages on PyPI, and shows if they support Python 3.


KaiserPro 2 days ago 6 replies      
Meh, there still isn't any reason to really jump on board Python 3; 2.7 is good enough for me.

The only real selling point is that strings are Unicode by default. Unless I'm missing something, Python 3 just doesn't really seem worth the effort yet.

Can someone enlighten me?
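The default-Unicode change is easiest to see in how Python 3 refuses to mix text and bytes. A quick illustration of the difference the parent is asking about:

```python
# In Python 3, str is text (Unicode) and bytes is raw data;
# conversions between them must always be explicit.
text = "héllo"
raw = text.encode("utf-8")

assert isinstance(text, str)
assert isinstance(raw, bytes)
assert raw.decode("utf-8") == text

# Implicit mixing, which Python 2 silently allowed (often raising
# UnicodeDecodeError only at runtime on non-ASCII input), is now a
# TypeError at the point of mixing:
try:
    text + raw
    mixed_ok = True
except TypeError:
    mixed_ok = False
assert not mixed_ok
```

Whether that guarantee alone justifies a port is exactly the judgment call being debated here; for code that handles user-supplied text, it turns a class of latent production bugs into immediate errors.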

BuckRogers 2 days ago 6 replies      
It will have taken 8 years for Python 3-supporting uploads to outpace Python 2. If that's not a lack of enthusiasm, I don't know what is. This is really a pitiful display.

It's not a surprise though. This all stems back to the original reasons why Python3 was a bad idea. Some things to keep in mind regarding Python2/3.

- Python3 is pure technical churn, nothing there is true technical innovation. They simply mixed the pot to their liking. Unicode is supported in Python2.

- It is slower than Python2 for most people. Though there are crafty arguments out there that it's a wash. It's really not.

- CPython3 was adopted to benefit the core development team, to make it easier to maintain. Not the community on the whole. The community has paid the price by doing a lot of unnecessary ports.

- CPython3 was deliberately created to not support Python2 code even though modern VMs can easily support this. The CLR is a great example. They did it this way for their good, not for the community.

- In most things in life, you'd always want to be using the latest supported version. That isn't the case here. It may still effectively "fail" with a permanent split. This migration is not even close to being completely in the books and won't be for another 10 to 20 years. So why punish yourself moving before it's in your best interests. The syntactic differences are so slight, it's extremely easy to switch to Python3 if it ever truly reaches Python's momentum (this article does not convey momentum holistically, that includes package downloads and the majority of employers using 3).

- Related to the bullet above, almost all job opportunities are still Python2.

- The ecosystem can't keep up, PyPy is still only production-ready on Python2. Python 3.2 support is essentially experimental by comparison.

- Many changes and additions are being made and they aren't being vetted by the larger community. No significant userbase for Python3 has created this situation. Mistakes are being made and no one is around to speak up because the core devs don't care.

- Python3 is feature-soup. There is now a new 4th and possibly 5th string formatting method in 3.6 incoming. Really jumped the shark on that one. See Mark Lutz's great insight on this topic and more.[0]

- Downloads are still dramatically in Python2's favor, to this day. Judging from downloads, there's a ~10% community userate of Python3.

- The bullet above means that the Python3 packages you do use are far less vetted and tested than the Python2 equivalent.

- CPython3 was slopped together. The initial 3.0 release was mostly pure Python in effort to get it out the door. Many parts of the standard library that are written in C are slower than the modules used in the CPython 2.7 counterpart, to this day. It was not and is not being tested. The users are not there to test it either.

- Community trust has been broken, I'm not sure anyone really believes another big break isn't incoming regardless of statements to the contrary. The core dev team is going to do what they want and you're going to like it, period.

- Python3 zealots have done a lot of harm by not accepting valid criticism, and aggressively attacking those who do what is in their best interests (which we should all do, this should not just apply to the CPython core devs), and continue using Python2 all these years. Watch for downvotes instead of countering my points.

- The 2020 date is a just a big political stunt and scare tactic. Code will continue working after 2020. As noted, it works better in CPython2 and PyPy today, and likely will in 2021 as well.

- Python3 lives off of and is pushed by PSF propaganda. No way around it as there is no innovation involved. Which is what should determine if new technologies are adopted or not.

- Usually when people make mistakes, such as Python3's existence, or the inability to mix Python2 and 3 code in the same VM- they go back and fix them. That has not happened with CPython3.

- Even back in 2014 projects such as Pyston from GvR's employer (Dropbox), were Python(2)-only. It's still Python2 today, which says it all.

All that said about this trainwreck, I'm in favor of getting back to a single major version of Python for the community. I'm using 3.5 for a single project myself but I will still plainly state the truth about the Python3 transition. It was a bad idea, for all the wrong reasons to benefit a rogue band of developers who believe since you aren't paying them- you as a community do not matter and should shutup and get to porting.

I love Python but will eagerly embrace Pythonic language alternatives as more are released. In particular, I'd love to see a Pythonic Erlang variety similar to Elixir. Or better yet, just a concise, minimum featured version of Python without all the extra. Picking 1 string formatting method would suffice, the basics but done well, stable in feature-set like Go and compiled. Something similar to a Pythonic Lua would suit me and would be the ideal case for a Python-reboot. Making it a subset of Python2 would make a lot of sense.

Lesson learned and bottom line is that we should overthrow all BDFLs.
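For what it's worth, the coexisting string formatting methods the comment counts really are all in the language; the four below produce identical output (f-strings require 3.6+, which was still in development at the time):

```python
from string import Template

name, n = "world", 3

a = "hello %s, %d times" % (name, n)        # printf-style (the oldest)
b = "hello {}, {} times".format(name, n)    # str.format (2.6+)
c = f"hello {name}, {n} times"              # f-strings (3.6+)
d = Template("hello $name, $n times").substitute(name=name, n=n)  # string.Template

assert a == b == c == d == "hello world, 3 times"
```

Whether that counts as feature soup or as gradual improvement is the disagreement in this thread; the older styles remain supported, so existing code doesn't break.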


stared 2 days ago 0 replies      
Plots using seconds since epoch as the x axis... nope. (I mean, if I do it, it is for internal plots only, and even then I'm feeling lazy.)
dsil 2 days ago 1 reply      
I really like the showing-his-work, via ipython/jupyter notebooks. Unfortunately it's behind an azureml login so it can't be directly linked.
inanutshellus 2 days ago 1 reply      
I program in python 2 because that's where /usr/bin/python takes me to. ;-)
jedberg 2 days ago 2 replies      
Sadly I'm stuck using Python 2.7 until Amazon Lambda supports Python 3.
markonen 1 day ago 1 reply      
Python 3 : Python 2 :: IPv6 : IPv4
ksec 1 day ago 0 replies      
And no love for Ruby. Sigh.


MrZongle2 2 days ago 1 reply      
FTA: "In 3 months, Python 3 will be better supported than Python 2."

This just seems to validate my choice to stick with Python 2 up until now.

rjurney 2 days ago 1 reply      
I recently started Python 4, which will be backwards compatible with 2.7. Submit your ideas as issues today!


How Web Scraping Is Revealing Lobbying and Corruption in Peru scrapinghub.com
386 points by bezzi  1 day ago   72 comments top 11
kilotaras 17 hours ago 0 replies      
I'm from Ukraine, and the biggest success in battling corruption comes from a system called Prozorro[1] ("transparently") for government tenders.

It started as volunteer project and some projections put savings at around 10% of total budget after it will become mandatory in April.

[1] https://github.com/openprocurement/

carlosp420 1 day ago 5 replies      
Hi there, I am the author of the blog post. I will be happy to answer any question.
ecthiender 1 day ago 2 replies      
Very interesting how tools like these can be so helpful for journalists, and generally for transparency in government functions.

Probably world-changing, considering that even semi-technical folks can cook up tools to dig into things like this.

I know this tool was built by a developer, but Scrapinghub has a web UI for making scrapers.

xiphias 1 day ago 0 replies      
Can you draw a covisit graph of people, i.e. who visited the building at the same times as somebody else? The strength of the connections could be visited_both^2 / ((visited_without_other_1 + 1) * (visited_without_other_2 + 1)).
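Written out as code, the proposed connection weight might look like this (the variable names are my reading of the formula in the comment):

```python
def covisit_strength(both: int, only_a: int, only_b: int) -> float:
    """Weight of the edge between two visitors:
    (times seen together)^2, damped by how often each appears alone."""
    return both ** 2 / ((only_a + 1) * (only_b + 1))

# Two people always seen together bind far more strongly than two
# people who each also visit on their own:
assert covisit_strength(4, 0, 0) == 16.0   # inseparable pair
assert covisit_strength(4, 3, 3) == 1.0    # frequent solo visitors
```

The +1 terms keep the denominator nonzero and act as mild smoothing, so pairs with little solo data aren't over-weighted.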
alecco 1 day ago 1 reply      
In other countries, corrupt politicians found out a simple captcha per n items is good enough to defeat analysis.
danso 1 day ago 4 replies      
FWIW, if you live in the U.S., then you benefit from having such data in great quantity, though I don't think it's sliced-and-diced to near the potential that it has:

Lobbyists have to follow registration procedures, and their official interactions and contributions are posted to an official database that can be downloaded as bulk XML:


Could they lie? Sure, but in the basic analysis that I've done, they generally don't feel the need to...or rather, things that I would have thought that lobbyists/causes would hide, they don't. Perhaps the consequences of getting caught (e.g. in an investigation that discovers a coverup) far outweigh the annoyance of filing the proper paperwork...having it recorded in a XML database that few people take the time to parse is probably enough obscurity for most situations.

There's also the White House visitor database, which does have some outright omissions, but still contains valuable information if you know how to filter the columns:


But it's also a case (as it is with most data) where having some political knowledge is almost as important as being good at data-wrangling. For example, it's trivial to discover that Rahm Emanuel had few visitors despite his key role, so you'd have to be able to notice that and then take the extra step to find out his workaround:


And then there are the many bespoke systems and logs you can find if you do a little research. The FDA, for example, has a calendar of FDA officials' contacts with outside people...again, it might not contain everything but it's difficult enough to parse that being able to mine it (and having some domain knowledge) will still yield interesting insights: http://www.fda.gov/NewsEvents/MeetingsConferencesWorkshops/P...

There's also OIRA, which I haven't ever looked at but seems to have the same potential of finding underreported links if you have the patience to parse and text mine it: https://www.whitehouse.gov/omb/oira_0910_meetings/

And of course, there's just the good ol FEC contributions database, which at least shows you individuals (and who they work for): https://github.com/datahoarder/fec_individual_donors

This is not to undermine what's described in the OP...but just to show how lucky you are if you're in the U.S. when it comes to dealing with official records. They perhaps don't contain everything, but there's definitely enough out there (never mind what you can obtain through FOIA by being the first person to ask for things) to explore influence and politics without as many technical hurdles.
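Since the lobbying filings come as bulk XML, getting started really is just standard-library parsing. A sketch — the element and attribute names below are made up for illustration; the real Senate LDA schema differs:

```python
# Pulling fields out of bulk lobbying XML with only the standard library.
# The tag/attribute names here are hypothetical, not the actual schema.
import xml.etree.ElementTree as ET

SAMPLE = """
<Filings>
  <Filing ID="1" Year="2015" Amount="50000">
    <Registrant RegistrantName="Acme Lobbying LLC"/>
    <Client ClientName="Widget Makers Assn"/>
  </Filing>
</Filings>
"""

def parse_filings(xml_text):
    """Yield one flat dict per <Filing> element."""
    root = ET.fromstring(xml_text)
    for filing in root.iter("Filing"):
        yield {
            "year": filing.get("Year"),
            "amount": int(filing.get("Amount", 0)),
            "registrant": filing.find("Registrant").get("RegistrantName"),
            "client": filing.find("Client").get("ClientName"),
        }
```

From there the rows drop straight into a spreadsheet or a database for the slicing-and-dicing the parent comment describes.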

dkarp 13 hours ago 0 replies      
This is really impressive, even more so by the fact that it has already led to discoveries being made.

Web scraping is a really powerful tool for increasing transparency on the internet especially with how transient online data is.

My own project, Transparent[1], has similar goals.

[1] https://www.transparentmetric.com/

prawn 21 hours ago 0 replies      
Peruvians, do you think this would cause a majority of meetings to be held outside of public office buildings, or to move to secretive messaging systems?
Angostura 17 hours ago 0 replies      
This is a fascinating project. If successful, I suspect the result will be that lobbying no longer takes place in government offices ("Shall we meet at that little place down the street?") or is carried out over the phone instead.
jorgecurio 1 day ago 6 replies      
Really interesting use of data extraction....

For developers and managers out there, do you prefer to build your own in-house scrapers or use Scrapy or tools like Mozenda instead? What about import.io and kimono?

I'm asking because a lot of developers seem adamant against using web scraping tools they didn't develop themselves, which seems counterproductive because you are taking on technical debt for an already-solved problem.

So developers, what is the perfect web scraping tool you envision?

And it's always a fine balance between people who want to scrape Linkedin to spam people, others looking to do good with the data they scrape, and website owners who get aggressive and threatening when they realize they are getting scraped.

It seems like web scraping is a really shitty business to be in and nobody really wants to pay for it.
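On the "already solved problem" point: the extraction core of a scraper is genuinely small, which is why it keeps getting rebuilt in-house. A stdlib-only sketch of that core — tools like Scrapy earn their keep with everything around it (scheduling, retries, throttling, pipelines):

```python
# Minimal link extraction with the standard library's html.parser.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (href, text) pairs for every <a> tag in a page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None   # href of the <a> currently open, if any
        self._text = []     # text fragments seen inside it

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

The hard, unglamorous 90% — crawl politeness, broken markup, site changes, IP blocks — is what the commercial tools and Scrapy sell, and is probably why nobody wants to pay for the parsing part alone.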

dang 1 day ago 0 replies      
We've banned this account for repeatedly violating the HN guidelines.

We're happy to unban accounts when people give us reason to believe they will post only civil and substantive comments in the future. You're welcome to email hn@ycombinator.com if that's the case.

Unexpected evidence of a new particle at LHC nature.com
305 points by jonbaer  1 day ago   100 comments top 9
mojoe 1 day ago 4 replies      
So just to be clear, this particle is not predicted by the standard model? I'm no physicist, but that seems like a huge deal.
pjc50 1 day ago 2 replies      
Last time there was an "unexpected reading", we spent 3 months being excited about faster-than-light particles until someone found the error. Let's not get too excited just yet.
junto 1 day ago 5 replies      
Can anyone tell me if there are any practical applications for this, or is it 'just' a furthering of our understanding of the world around us (albeit an important one)?
abricot 1 day ago 4 replies      
"[...] before the experiment shut down for its winter recess."

Can anyone shine a light on why the particle accelerator needs a recess?

CIPHERSTONE 1 day ago 2 replies      
ELI5: Presumably this was a result of the LHC's power upgrade to 13 TeV. If so, should they now be able to reproduce these results again and again?
UweSchmidt 1 day ago 0 replies      
About time! I can see how scientists were excited when the LHC confirmed existing theories, but the thing was kinda expensive, so it would be nice to get some new physics out of it instead of just a "yup, we were right".
tobias2014 1 day ago 0 replies      
This is one case of ambulance chasing: http://arxiv.org/abs/1603.01204

EDIT: I just noticed that this is also linked in the article.

CPLX 1 day ago 9 replies      
"An unexpected data signal that could change everything has particle physicists salivating."

I know I'm like an old man screaming into the wilderness, but I can't be the only one that's profoundly sad that even nature.com has to resort to this sort of horrible clickbait formulation for a headline.

Notwithstanding my aesthetic dislike of it, the thing about these headlines that makes me really upset is the fact that they aren't actually telling the truth.

Not to be pedantic, but they have not observed any particle physicists with an unusual amount of saliva, presumably. There is no anecdotal or visual evidence of people drooling, nothing.

It seems to me this is about the most horrible way to start off an article in a publication that is about the scientific method, for God's sake, where precision and accuracy are sort of the whole point.

Would it really be so hard to have a headline like "Shocking and unexpected evidence of a new particle" or something similar that actually says or at least approximates what the news is?

crdb 1 day ago 1 reply      
The Trisolarians are coming!
NSA data will soon routinely be used for domestic policing washingtonpost.com
268 points by Jarred  6 hours ago   85 comments top 19
fweespee_ch 5 hours ago 5 replies      

> Perhaps the most chilling quote of the Soviet era came from Lavrentiy Beria, Stalin's head of the secret police, who bragged, "Show me the man, and I will find you the crime." Surely, that never could be the case in America; we're committed to the rule of law and have the fairest justice system in the world.

> This should make everyone fearful. Silverglate declares that federal prosecutors don't care about guilt or innocence. Instead, many subscribe to a "win at all costs" mentality, and there is little to stop them.

> The very expansiveness of federal law turns nearly everyone into lawbreakers. Like the poor Soviet citizen who, on average, broke about three laws a day, a typical American will unwittingly break federal law several times daily. Many go to prison for things that historically never have been seen as criminal.



> John Baker, a retired Louisiana State University law professor, made a similar comment to the Wall Street Journal: There is no one in the United States over the age of 18 who cannot be indicted for some federal crime. That is not an exaggeration.


Do you even know what the 134 laws passed by the current Congress are? I know I don't, and you just have to fall afoul of one.

rubyfan 2 hours ago 2 replies      
In related news, I heard on NPR today people are complaining that the TSA no-fly list didn't catch a man who was a wanted shooting suspect.

The insinuation is the no-fly list should be expanded to catch domestic criminals. You know, the no-fly list that you can't be removed from and don't have to have committed a crime to get yourself listed on. The no-fly list that is unconstitutional, yeah that one.


fiatmoney 5 hours ago 1 reply      
Of course, since this is now available to local & federal cops & prosecutors, same as any other law enforcement database, any exculpatory information will naturally be discoverable by defense attorneys.



maerF0x0 4 hours ago 0 replies      
> It'll be Black, Brown, poor, immigrant, Muslim, and dissident Americans: the same people who are always targeted by law enforcement for extra special attention.

... Another quote:

> First they came for the Socialists, and I did not speak out, because I was not a Socialist.

> Then they came for the Trade Unionists, and I did not speak out, because I was not a Trade Unionist.

> Then they came for the Jews, and I did not speak out, because I was not a Jew.

> Then they came for me, and there was no one left to speak for me.

I guess I'm on that list now.

tehmillhouse 43 minutes ago 1 reply      
As someone who doesn't live in the US, I am awestruck by how big of a deal this seems to be for people.

Are you really trying to tell me that surveillance of ~the rest of the world~ is somehow less bad?

linkregister 5 hours ago 1 reply      
I request the link to be changed to the original source; this article is an opinion piece. The original is here:


Someone1234 5 hours ago 4 replies      
Rumour has it that the NSA already has threat-predictor algorithms and processes. How long until the NSA provides the FBI with these and you get arrested for pre-crimes or thought-crimes? How do you answer the charge that you were thinking about doing something bad?
matt_wulfeck 4 hours ago 1 reply      
When you get paid to find terrorists, and your job depends on you finding terrorists, well then, everybody starts to look like a terrorist.
meric 5 hours ago 1 reply      
Is anyone surprised?

Don't use electronic messaging to tell people you're holding cash; bring a cheque at least. The cops are going to find out and your cash is going to be confiscated.

dools 5 hours ago 1 reply      
It's not enough to go after drug dealers, you have to go after their families as well.
kii9mplppmfh 2 hours ago 0 replies      
The singularity is coming... It just turns out it's fascism.
dombili 4 hours ago 0 replies      
danenania 3 hours ago 1 reply      
With low level bureaucrats getting access to this data, isn't it inevitable that we'll start seeing massive security breaches on a routine basis?

I suppose we'll just have to start assuming that anything said or written or looked up online will eventually be accessible to anyone who's interested.

losteric 2 hours ago 2 replies      
So how many candidates in the upcoming election are publicly against this kind of data mining?
anocendi 4 hours ago 0 replies      
"What is your hue?"

The question will become relevant soon enough.

api 3 hours ago 2 replies      
Meanwhile the leading Republican presidential nominee is a right-populist (a.k.a. fascist), and the Democratic presidential nominee was almost a left-populist (socialist).

We are literally one economic crisis or major terrorist attack away from some form of significantly more authoritarian if not outright totalitarian government. Whether it would be "left" or "right" is sort of up in the air, and might depend on which side is able to produce a more compelling demagogue at the right time. In any case if history is any guide it doesn't matter much. Totalitarianism is totalitarianism.

If that comes to pass, we're going to find out what "turn-key totalitarian state" means. The infrastructure is in place. The only barriers are legal and social/cultural.

Theodores 4 hours ago 2 replies      
I think they are a few organisational steps away from something truly terrifying. That will be when the NSA just hands over the 'graph' dataset of all the drug dealers to the FBI/local police. As it stands, they have to have a clue, e.g. a name, before they can get particulars. So policing is still 'hampered', conveniently for those that use drugs. So long as their network does not arouse the suspicions of the police, there need not be any suspicions.

However, if the complete 'graph' is just handed over then even the most discreet of networks could be uprooted, all chains and links identified, geolocated too. This could be done. All it would take is a Donald Trump to take it up another level. He could get a lot of votes by promising to use NSA data to eradicate drugs from America once and for all, with no drug-dealer left behind...

Aside from the 'where next' aspects, as it is, I found this article to be quite shocking. So much for the 'land of the free'.

beedogs 3 hours ago 1 reply      
It's time to shut down the NSA for good.
squozzer 5 hours ago 0 replies      
To quote the great Gomer Pyle, "surprise, surprise, surprise!"
Announcing R Tools for Visual Studio microsoft.com
276 points by brettcannon  1 day ago   79 comments top 18
smortaz 1 day ago 4 replies      
Hi folks - Last year we polled HN on whether there'd be interest in R integration in Visual Studio. You said "YES!", so here it is! I'll be around in case you have any questions.

BTW RTVS was built by the same group that made PTVS (Python Tools for VS) and NTVS (Node.js Tools for VS). RTVS will also be free & open source of course.


Gratsby 1 day ago 0 replies      
Microsoft is trying really hard to get me to like them again. It's starting to work.
Mikeb85 1 day ago 2 replies      
While I'm somewhat ambivalent about Windows-only tools, it's nice to see R getting love from all the big players. The more market share and developers using it, the more the tools and implementation improves, and the better for all of us. Just finished an assignment using R Studio, Knitr and Plotly, so much less pain than Excel+macros or Stata.
hirenj 1 day ago 1 reply      
This definitely looks like something I need to check out. I do a lot of one-off analysis for small bioinformatics projects, and I've taken to sharing results in the Microsoft ecosystem via OneNote. I've wrapped knitr into my own library (knoter) which generates the html and figures, and pushes the whole lot up to OneNote in the selected notebook. OneNote works well for this kind of collaborative analysis, where I need to keep track of the whole discussion somewhere.

One thing I'm missing from my workflow would be a way to integrate in to an IDE so I just push a button, and it'll commit the code to a gist, and push the output to a OneNote for other people to comment on. I'm wondering if it would be possible to fork this, and tweak the calls to knitr so they use my library instead.

tyfon 1 day ago 1 reply      
I'll get to test this tomorrow at work; however, loading up a huge program instead of a web page where all the stuff is stored on a server is going to require some changes in behavior.

And I usually work from Linux, so it will be in a VM. But I'll try :)

stevehiehn 1 day ago 1 reply      
Sweet! If this works on the free Community edition, I'm gonna try it for sure.
markbao 1 day ago 0 replies      
This looks stunning. Anyone know how it compares to RStudio? I might virtualize Windows to use this.
namelezz 1 day ago 1 reply      
Why not Visual Studio Code?
karazi 7 hours ago 0 replies      
Any plans for a 32-bit version? We need our 32-bit RStudio install to be able to run in parallel with this while we test it, and we cannot switch Java versions from 32 to 64 bit, which some packages we use require. Congrats on the public release.
swalsh 1 day ago 1 reply      
Stupid question, is it possible to take an R script and compile it to a .net CLR dll?
TheLogothete 1 day ago 1 reply      
Bring the VCF from Azure ML to Visual Studio! I said it in the survey too. VCF is very important in some settings. Being able to code R + SQL and have workflow components in the same environment would be a killer feature.
myth_drannon 1 day ago 2 replies      
Looks like a promising alternative to Rstudio on a Windows platform. If only it was available on Linux...
_Wintermute 1 day ago 0 replies      
Might try this out as I've had nothing but issues with the Windows version of RStudio.
melling 1 day ago 5 replies      
Does anyone have a short list of the best sites to learn R? I only have two on my list, and one is for advanced users:

http://adv-r.had.co.nz - Advanced R by Hadley Wickham


I keep my links on GitHub: https://github.com/melling/ComputerLanguages/blob/master/r.o...

kefka 1 day ago 1 reply      
That's what I never understood about Microsoft. They're a software company. And some of it is absolutely horrible (Sharepoint), and some of it is the best in the business (Visual Studio).

There's a significant mindshare in Linux and Open Source. That being said, I don't understand why MS didn't provide Visual Studio, Office, and similar for Linux at a premium. For example, if Office was $499 for Windows, charge $999 for Linux. That way, they get the best of both worlds (use their software, pay them money). And their mindshare is significant as well, and this would increase it.

Maybe finally they are coming to their senses, doing just this. It's about time.

eruditely 1 day ago 0 replies      
Microsoft is getting better and better, this is the rise of a new company.
vegabook 1 day ago 5 replies      
Personally never did RStudio. Unfiltered command line is always more flexible, and prevents inertial lock-in to a specific tool, OS, to a large extent. Now I'm supposed to do R in ultra-bloated Visual Studio?? I learned R precisely to get away from the inefficiency of the (in the past 20 years, functionally unchanged, visual-candy-only) Excel. How am I to be excited about adding this thick, lumpy MS gravy, to my pure R experience? So that when I do R on Linux or remotely I'm screwed? No thanks.

Addendum: Clearly the hive mind MS corporate drones are out in force today/tonight. I know everybody is enamoured with MS-Eclipse etc but R is best used unfiltered. Not hijacked into the laughable world of Windows and Visual Studio. I know from bitter experience that the Windows versions of R are terribly unstable by comparison with the Linux builds. I learned the latter precisely for that reason. MS is playing a fantastic marketing game but I was agnostic on platform until R on Windows started showing its catastrophic limitations. It's a second class citizen as soon as you venture beyond the basics. Take it from an ex R-on-Windows guy who uses R 10 hours per day.

cbo8of 1 day ago 0 replies      
No love for C11?
RIP Google PageRank score: A retrospective on how it ruined the web searchengineland.com
278 points by adamcarson  1 day ago   122 comments top 18
jandrese 1 day ago 7 replies      
I feel like no matter how Google or any other search engine ranked pages, SEO firms would be there to game the system and make a mess of the web. Making Pagerank visible to people with one particular toolbar does seem like a fairly major misstep on Google's part. The majority of the people who would really care about that are the kind of people you shouldn't be encouraging.

One of the obnoxious things about SEO is that if one person is doing it everybody has to do it. It's not necessarily enough to simply offer a better product at a better price. Luckily Google does try to reduce the effect of SEO. I notice for instance that StackExchange almost always beats out Expert Sex Change links these days.

rco8786 1 day ago 6 replies      
Really crummy title. PageRank is the reason we HAVE a search as powerful as Google, and largely the reason the web is as good as it is today.

Raise your hand if you want to go back to AltaVista/AskJeeves.

justinlardinois 1 day ago 2 replies      
I never knew PageRank scores were visible, and I never used the web before Google.

But this article is so far up its own ass.

> Ever gotten a crappy email asking for links? Blame PageRank.

Never mind that web rings were around long before Google and used the same tactics.

> Ever had garbage comments with link drops? Blame PageRank.

There are way more reasons spammers exist than just boosting PageRank.

The author is acting like a) Google had less of an influence on the web before PageRank was public information and b) the web was somehow better both back then and before Google existed. There will always be people who want to game search engine results, regardless of how much information they know about their own standing, and the web was pretty much un-navigable pre-Google.

kyledrake 1 day ago 1 reply      
I first learned about this after starting https://neocities.org and seeing a bunch of really garbage pages that were full of random text that linked to a derpie site somewhere.

We get pagerank SEO spam from time to time, and it's pretty annoying. I have the tools to take care of it within 5 minutes every day, but I do worry that if we grow to a certain point it may no longer be possible for me to handle the problem alone.

I'm sure many other sites have similar problems with comment spam, and I'd love to hear some advice on how to deal with this from sites that have the same problem.

Right now our main lines of defense are a recaptcha (our last remaining third party embed, ironically sending user data to Google I'd rather not send to deal with a problem Google largely created), and a daily update of an IP blacklist we get from Stop Forum Spam.

I tried to do some Bayesian classification, but didn't make much progress unfortunately. And nofollow really isn't an option for me, as it would involve me manipulating other people's web sites and I don't want to do that.

jzawodn 1 day ago 1 reply      

Back in 2003 I wrote:

"PageRank stopped working really well when people began to understand how PageRank worked. The act of Google trying to "understand" the web caused the web itself to change."


It's amazing that it took this long.

tyingq 1 day ago 0 replies      
The real problem is that Google was losing the link spam war until very, very recently. It was trivial to game them up until 2010, and only really became relatively difficult somewhere around 2012.

And, the solution looks roughly like "weigh established authority to the point where it trumps relevance".

Animats 1 day ago 1 reply      
Google has dealt with web spam by replacing it with their own ads. Search for "credit card" or "divorce lawyer". Everything above the fold is a Google ad. Air travel searches bring up Google's own travel info. No amount of SEO can compete with that.

(I still offer Ad Limiter if you'd like to trim Google's in-house search result content down to a manageable level.)

al2o3cr 1 day ago 0 replies      
Better title: "How SEO Asshattery Turned The Web To Shit"
_yosefk 1 day ago 0 replies      
"How gravity ruined flying"? PageRank looking at links isn't some arbitrary thing; it's a source of information every good search will take into account.
hartator 1 day ago 0 replies      
I think it's odd to perceive the end of a relatively transparent metric - however relevant or not it has been - as a good thing.
gchokov 1 day ago 0 replies      
Indeed, one of the things I don't like (hate?) about Google is the SEO and PageRank BS. All pages in the last 10 years are starting to look the same. All pages are becoming what Google wants them to be.
kin 1 day ago 0 replies      
Just because I can't see the score doesn't mean I'm not going to do what I can to increase it.
rgovind 1 day ago 1 reply      
Does anyone here think we need a search engine which lets us maintain large blacklists of websites. For example, if I am searching for information about airbnb, I do not want news websites like NY Times, WSJ, Forbes, Business standard etc to show up in the results at all. Any business related question on India is invariable dominated by Times of India and other newspapers. With google, its becoming increasingly difficult to filter out websites.

Edit: Changed "on airbnb" to "about airbnb"
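A crude version of the requested blacklist needs no cooperation from the search engine at all; results can be filtered client-side against a maintained domain list. A minimal sketch in Python, where the blacklisted domains, the helper name `blocked`, and the example URLs are all illustrative assumptions:

```python
from urllib.parse import urlparse

# Hypothetical blacklist of news domains to suppress.
BLACKLIST = {"nytimes.com", "wsj.com", "forbes.com"}

def blocked(url, blacklist=BLACKLIST):
    # True if the URL's host is a blacklisted domain or a subdomain of one.
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in blacklist)

results = [
    "https://www.nytimes.com/2016/airbnb-story",
    "https://blog.example.org/airbnb-analysis",
]
print([u for u in results if not blocked(u)])
# ['https://blog.example.org/airbnb-analysis']
```

(Google's `-site:` query operator covers small exclusion lists, but it does not scale to the large persistent blacklists the commenter is asking for.)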

hackuser 1 day ago 0 replies      
Another way to look at this is a blow to openness and a concentration of Google's power. The PageRank scores still exist, but they now will be known only by (some? all?) Google employees.

Therefore, the data is no longer open and power is now more concentrated: those who know someone at Google can find out their PageRank score; the 99.999...% of the rest of the world cannot.

kazinator 1 day ago 0 replies      
What difference does it make if the semantics of PageRank are still in place for determining position in the search index, but it is just hidden?

You can still infer the approximate rank of a page by where it places relative to other pages, when searching for relevant keywords. Someone wanting to place ahead of the competition still has a function for measuring how well they are doing in SEO.

vorg 1 day ago 1 reply      
Reading this makes one realize how easy it was for Groovy's Tiobe ranking to jump from #82 to #17 in just 12 months, as shown at http://www.tiobe.com/tiobe_index?page=Groovy , and the other spikes in its history.
runn1ng 1 day ago 1 reply      
PageRank is still visible today?!? Where? (I'm just curious; I thought it hadn't been visible anywhere for years.)
VikingCoder 1 day ago 1 reply      
I just lost a ton of respect for Danny Sullivan.

Every system can be gamed. Every system where money can be made WILL be gamed. It's a predator-prey relationship.

The way this article was written made it sound like Google Search was a bane when it arrived. And sure, it was the worst Search Engine at the time, except for all the others that had been invented up until then.

FBI quietly changes its privacy rules for accessing NSA data on Americans theguardian.com
269 points by spiralpolitik  2 days ago   36 comments top 7
chatmasta 2 days ago 6 replies      
What good is a set of "rules" (not even laws) if they are deliberated and enforced behind closed doors? Why should the FBI even bother following these "rules" if nobody is transparently ensuring that they do?

Also, I did not realize just how intricately NSA data is shared with the FBI (and who knows how many other agencies). From the article, it sounds like any FBI analyst can run an arbitrary query on the "to" and "from" fields of email addresses, at any time, as many times as desired. So effectively the FBI has one giant inbox with Americans' communications in it?

That inbox will get hacked. It's only a matter of time. If thousands of federal bureaucrats have access to it, I would be very surprised if foreign intelligence agencies do not already have access to it in some capacity.

Scary stuff. People should continue to assume all systems are compromised and email is public information.

matt_wulfeck 2 days ago 1 reply      
I hate secret courts. Everything about a secret court/FISA is offensive to democracy and the longevity of a free society. I strongly believe we will be ashamed of ourselves in the future for allowing them for so long in our country.
marshray 1 day ago 0 replies      
This end of even the pretense of due process, 4th amendment, and Posse Comitatus is one of the most important shifts in American domestic policy in my lifetime.
alfiedotwtf 2 days ago 1 reply      
How long until the NSA quietly changes its privacy rules against domestic-only surveillance?
awqrre 1 day ago 0 replies      
Does the NSA have American citizens' data because other countries share it with them? I thought that they were forbidden from having this type of data...
mirimir 2 days ago 1 reply      
Maybe I'm just paranoid and cynical, but it's hard not to see this as mere posturing.
boomin 2 days ago 0 replies      
Fred Brooks retires dailytarheel.com
275 points by cperciva  17 hours ago   66 comments top 20
ericboggs 12 hours ago 5 replies      
I worked part-time in desktop support at Sitterson Hall, home of the UNC Computer Science program, when I was an undergrad in the late 90s / early 2000s. My team supported Windows, hardware, printers, etc. I distinctly remember closing help tickets for Prof Brooks (and Matt Cutts while he was a PhD student).

My fellow undergrad tech support doofuses and I knew that Prof Brooks was a god and thus walked on eggshells when we were around him...which we quickly learned was totally unnecessary. He was incredibly friendly, gracious, and encouraging. A true Tar Heel.

Congrats to Prof Brooks.

officialchicken 14 hours ago 1 reply      
I think reading this book[1] is even more important than learning to use a keyboard (and mouse) with the intention of creating software or any complex system.

Knowing your limits is one thing, but understanding why/how they are being manipulated by outside forces (e.g. overestimating ability) is another. And how to counter those forces is also included in these pages.

Thanks for the sanity and the well-managed-project advice, Fred!

[1] http://www.amazon.com/Mythical-Man-Month-Software-Engineerin...

henrik_w 15 hours ago 2 replies      
Most people associate "The Mythical Man-Month" with Brooks's law: adding people to a late project makes it later. For me, the best part of it is one page at the end of chapter one, entitled "The Joys of the Craft".

It is excellent on what makes programming so great: http://henrikwarne.com/2012/06/02/why-i-love-coding/

jgrahamc 15 hours ago 15 replies      

 "Here's Fred Brooks, this giant. I mean, made IBM, adviser to presidents, all this stuff. And this lady is looking for directions, so he walks with her out to the street and down the street to show her where she needs to go," Bishop said.
Isn't it sad that this was deemed even worth reporting? Why assume someone like Fred Brooks wouldn't do that?

tarvaina 14 hours ago 0 replies      
If UNC had hired 612 people instead of him, we would have had the job done in a month!
superdude264 3 hours ago 0 replies      
Just last Spring, he lent me his copy of "What Color is Your Parachute" and invited me into his office to discuss two job offers I was contemplating. He did all this after he passed by the CS library and saw me looking for something.
sizzzzlerz 12 hours ago 1 reply      
The MMM was first published in 1975. I began working in the industry in 1978 and first read his book around then. Thirty-eight years later, we still try to fix late programs by adding people. Brooks wrote the seminal, magnificent book on project management but he's still a voice, crying in the wilderness.
AKrumbach 6 hours ago 1 reply      
I have, sitting at my elbow right now, a copy of Mythical Man-Month, as part of a mini-bookshelf of the ten books I found most influential in my career / wish to share with my co-workers.

[For the curious: http://i.imgur.com/CGv9PGc.jpg ]

ScottBurson 8 hours ago 1 reply      
Brooks' famous essay "No Silver Bullet" is worth a (re-)read [0]. I still think AI and Automatic Programming will eventually change the face of software development in a bigger way than Brooks thinks possible; but I can't tell you when it will happen.

[0] http://worrydream.com/refs/Brooks-NoSilverBullet.pdf

PaulRobinson 11 hours ago 0 replies      
He was in attendance at the Turing 100 conference a few years back - http://curation.cs.manchester.ac.uk/Turing100/www.turing100.... - which I was fortunate enough to attend.

It was full of great names. Roger Penrose, Donald Knuth, Gary Kasparov, Vint Cerf, Tony Hoare, etc.

Brooks was one of the speakers who seemed really interested in talking to delegates in coffee breaks and sharing stories. A lovely man, and his retirement is well deserved. He has shaped the industry more than any other attendee, even if others may have contributed more to the science, so to speak.

tarheelredsox 3 hours ago 0 replies      
Had Fred for Advance Computer Architecture back in 89 when he was writing his book. Great class and fantastic prof, loved the anecdotes and details on why certain decisions were made for various iconic computer systems. Oddly enough my mother had him as an advisor when she was in grad school working on masters #2.
mathattack 6 hours ago 0 replies      
The Mythical Man Month [0] hit me at a perfect time - I was working at a company that was obsessed with tracking everything in man-months without considering who was doing the work, or when they were added. The book gave me the academic support to back my intuition when I would push back on management.

Now, on the other side, I take his advice in "No Silver Bullet" [1] very seriously. Improving software engineering is a slog, not a shiny buzzword.

My favorite quote [2] of his "The most important single decision I ever made was to change the IBM 360 series from a 6-bit byte to an 8-bit byte, thereby enabling the use of lowercase letters. That change propagated everywhere."

I guess the one surprising thing for me was that he was still actively working. Even 15 years ago I thought of him as a grand dean from a past generation. Great to see him stay so vibrant for so long.

[0] https://en.wikipedia.org/wiki/The_Mythical_Man-Month

[1] https://en.wikipedia.org/wiki/No_Silver_Bullet

[2] https://en.wikipedia.org/wiki/Fred_Brooks

tiernano 14 hours ago 0 replies      
> Although Brooks officially retired in 2015, Jeffay said he is still active in the department. "He says, 'I didn't retire. I just went off the payroll,'" Jeffay said.

I like that.

oneeyedpigeon 15 hours ago 1 reply      
Macbook Air - check. Tie and cardigan - check. Is Fred Brooks one of the original hipsters? ;-)
svec 7 hours ago 0 replies      
I saw him speak on "A Personal History of Computers" last year and wrote about it here: http://chrissvec.com/fred-brooks-talk-a-personal-history-of-...

Brooks's overview: "I fell in love with computers at age 13, in 1944 when Aiken (architect) and IBM (engineers) unveiled the Harvard Mark I, the first American automatic computer. A half-generation behind the pioneers, I have known many of them. So this abbreviated history is personal in two senses: it is primarily about the people rather than the technology, and it disproportionally emphasizes the parts I know personally."

It was a great talk covering his whole career. A video of the same talk is here: http://www.heidelberg-laureate-forum.org/blog/video/lecture-...

girkyturkey 9 hours ago 0 replies      
Wow, this man is a true hero. What a humble, outstanding man that created this from the ground up! Thank you Fred for everything that you have done. It's great to see that fame does not change everyone.
cwingrav 11 hours ago 0 replies      
I had an opportunity to spend a day with him and his VR research team a few years back. Very exciting. Insightful. I loved his contributions to Software Engineering, but few know how much he impacted Virtual Reality as well!
carlsborg 6 hours ago 0 replies      
I will let you in on a secret. Brooks's best work isn't The Mythical Man-Month; it's a 2010 book called The Design of Design.
coverband 11 hours ago 1 reply      
Best title I've seen on HN yet ;-)
crdoconnor 12 hours ago 0 replies      
It's sad that his message never really filtered through. I've worked at more places that thought you could speed up project development by throwing developers at it than otherwise.
Lets Encrypt client will transition to a new name and a new home at EFF letsencrypt.org
316 points by riqbal  1 day ago   42 comments top 8
riscy 1 day ago 0 replies      
> Another reason is that we want it to be clear that the client can work with any ACME-enabled CA in the future, not just Lets Encrypt.

Great to see that they are actively aware of CA monopolization, and taking steps to avoid becoming one themselves.

heavenlyhash 1 day ago 5 replies      
Anyone looking to use Let's Encrypt and free to make choices regarding their server may want to check out https://caddyserver.com/ -- it has Let's Encrypt support baked right in.
waskosky 1 day ago 1 reply      
If you were like me and holding out on Let's Encrypt until Windows XP is supported (even Chrome is still broken on XP) it looks like a date of March 22nd has been set for "getting new cross-signatures from IdenTrust which work on Windows XP."



desireco42 1 day ago 2 replies      
Let's Encrypt really helps get SSL everywhere. It is not super easy to set up, but I am sure this will get better as time goes on. This is huge.
_jomo 1 day ago 1 reply      
cm2187 19 hours ago 0 replies      
Does that mean that they also intend to develop a client for IIS?

It would be great if Microsoft did that themselves instead, so that a Let's Encrypt client would come out of the box by default.

mioelnir 1 day ago 4 replies      
Missed opportunity to move beyond the reach of NSLs.
BillyParadise 19 hours ago 0 replies      
Need to refresh the cert every 3 months, need to pick a new name every 3 months too?
Graph Databases 101 cray.com
258 points by BooneJS  23 hours ago   93 comments top 9
gtrubetskoy 10 hours ago 2 replies      
I have spent a lot of time figuring out how to deal with a large graph a couple of years ago. My conclusion - there will never be such a thing as a "graph database". There are many efforts in this area, someone here already mentioned SPARQL and RDF, you can google for "triple stores", etc. There are also large-scale graph processing tools on top of Hadoop such as Giraph or Graphx for Spark.

For the particular project we ended up using Redis and storing the graph as an adjacency list in a machine with 128GB of RAM.

The reason I don't think there ever will be a "graph database" is because there are so many different ways you can store a graph, so many things you might want to do with one. It's trivial to build a "graph database" in a few lines of any programming language - graph traversal is (hopefully) taught in any decent CS course.

Also - the latest versions of PostgreSQL have all the features to support graph storage. It's ironic how PostgreSQL is becoming a SQL database that is gradually taking over the "NoSQL" problem space.
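The parent's claim that a toy "graph database" takes only a few lines is easy to demonstrate. Below is a minimal, hypothetical sketch in Python: an adjacency list held in a dict (the same shape the commenter stored in Redis) plus a breadth-first traversal. All names and the example graph are illustrative.

```python
from collections import deque

# A tiny adjacency-list "graph database": each node maps to its set of neighbors.
graph = {
    "a": {"b", "c"},
    "b": {"d"},
    "c": {"d"},
    "d": set(),
}

def reachable(graph, start):
    """Breadth-first traversal: return every node reachable from `start`."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

print(sorted(reachable(graph, "a")))  # ['a', 'b', 'c', 'd']
```

Everything beyond this (persistence, indexing, a query language) is exactly where the many divergent designs the commenter mentions come in.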

valhalla 9 hours ago 1 reply      
If anyone's curious about Network Science/Graph Theory in general here's a free online textbook used by a grad student friend of mine


valine 22 hours ago 14 replies      
Question as someone new to graph databases: Are there any open source graph databases worth looking into?
amirouche 8 hours ago 0 replies      
I am a huge fan of graph-y stuff. I did several iterations of a graph database written -- in Python -- using files, bsddb, and right now wiredtiger. I also use Gremlin for querying. Have a look at the code: https://github.com/amirouche/ajgudb.

Also, I made a hypergraphdb, atom-centered instead of hyperedge-focused, in Scheme: https://github.com/amirouche/Culturia/blob/master/culturia/c....

Did you know that Gremlin is basically SRFI-41, the stream API, with a few graph-centric helpers?

edit: it's SRFI 41, http://srfi.schemers.org/srfi-41/srfi-41.html

lobster_johnson 7 hours ago 0 replies      
I've seen people using graph databases as a general-purpose backing store for webapps/microservices. What are people's opinions about this?

My feeling is that graph databases are not suitable/ready for, for lack of a better term, the kind of document-like entity relationship graphs we typically use in webapps. Typical data models don't represent data as vertices and edges, but as entities with relationships ("foreign keys" in RDBMS nomenclature) embedded in the entities themselves.

This coincidentally applies to the relational model, in its most pure, formal, normal form, but the web development community has long established conventions of ORMing their way around this. The thing is, you shouldn't need an ORM with a graph database.

AdamN 12 hours ago 2 replies      
Everybody's focused on graph databases here, but let's talk about Cray! One of the most forward-thinking computer technology companies ever to exist is starting to get out there again. If they got a few hundred million dollars from an outside investor, they could do friggin' incredible things. They already do incredible things, but they aren't out there in the way they so easily could be.
SloopJon 11 hours ago 0 replies      
The author's next post describes RDF and SPARQL in the context of the Cray Graph Engine:


thesz 13 hours ago 4 replies      
It introduces a false dichotomy: "graph vs relational".

In fact, most (if not all) graph algorithms can be expressed using linear algebra (with specific addition and multiplication operations). And matrix multiplication is a select from two matrices joined with "where i=j", plus aggregation over identical result coordinates.

The selection of multiplication and addition operations can account for different "data stored in links and nodes".

So there is no such dichotomy "graph vs relational".
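A minimal sketch of that correspondence, assuming "+" is OR and "*" is AND: squaring the adjacency matrix yields two-step reachability, and each output cell is exactly a join on i=j followed by aggregation over the shared coordinate. The three-node graph is a made-up example.

```python
def bool_matmul(a, b):
    """Matrix product over the boolean semiring: '+' is OR, '*' is AND.
    Each output cell (i, j) aggregates the 'join' of row i of a with
    column j of b over the shared index k (the "where i=j" coordinate)."""
    n = len(a)
    return [[any(a[i][k] and b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Adjacency matrix of a 3-node path: 0 -> 1 -> 2
adj = [
    [False, True,  False],
    [False, False, True],
    [False, False, False],
]

two_step = bool_matmul(adj, adj)
print(two_step[0][2])  # True: node 2 is reachable from node 0 in two hops
```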

TimPrice 17 hours ago 0 replies      
1 - Would it be more efficient to store objects that contain their relations if you only do (simple) read operations? (e.g. a JSON database)

2 - Instead, do graph DB engines try to break through bottlenecks for big data and analytics scenarios?

Intermittent Fasting Is Gaining Acceptance well.blogs.nytimes.com
282 points by igonvalue  2 days ago   167 comments top 40
doublerebel 2 days ago 1 reply      
I'll add one more anecdote for IF. I've been doing it for 3 years and for me it's been the most effective way to get in and stay in shape. Coffee with a bit of milk or cream in the morning keeps me until 1-3pm (I've never gone full bulletproof coffee, but coffee definitely helps.) As others have mentioned, a run feels really good toward the end of the fast.

Heavy Olympic lifting hasn't been a problem, but I only lift on high-calorie days, and not more than 4 times a week. If you are into the Leangains flavor of IF, lifting is the recommended way to do IF. I find that in my and my gf's case, lifting has been far more effective than cardio.

I grew up with a mother as an aerobics instructor and I live in a very active Seattle, so I've seen and tried all number of diets and workout regimens. With IF I have the most energy and the fastest results. It's also reasonably easy to work into real life situations (travel, dinners at friends' houses, etc.).

I will say that using a calorie and exercise counter is very very helpful when starting or resuming IF. Otherwise it is far too easy to mis-estimate calories in or out. I use Fatsecret app. And to calculate target calorie in/out, there is a great tool here at 1percentedge [1]. (Unfortunately it's in Flash, contacting the author to port to a mobile app is on my long list...)

[1]: http://www.1percentedge.com/ifcalc/

Lazare 2 days ago 9 replies      
I've been doing IF (the "one meal per day" form) for a while now, mostly because I find it helps me focus during the day.

When I used to eat breakfast, I'd be productive in the morning, until I started getting distracted by craving lunch. Then I'd eat lunch, and end up sleepy and groggy afterwards while I digested.

Now that I'm used to IF, I feel more productive; I spend less time at work thinking about food, and I miss that whole post-lunch carb crash thing.

Obviously it's just anecdotal, possibly all psychosomatic, etc., etc. But I like it.

fomoz 2 days ago 1 reply      
I've been doing Leangains style IF for the past four years. I'd like to share a few quick tips.

If anyone here wants to try intermittent fasting, do it for the sake of convenience. I don't eat breakfast or lunch, I just eat when I get home from work or after weight training. On weekends I break my fast earlier than on weekdays.

I wouldn't say you should do it for the "benefits" if it doesn't fit your lifestyle. Right now all we know is that it's not bad for you. It does work pretty well for hunger control, which can help if your goal is weight loss.

Also, don't live by the clock. It's OK to eat outside your feeding window, you might get hungry later during the day and the next day you might get hungry earlier.

Here's a guide if you're looking for a place to start: http://www.leangains.com/2010/04/leangains-guide.html

Here's a good article if you have general questions about IF: http://www.leangains.com/2010/10/top-ten-fasting-myths-debun...

js2 2 days ago 3 replies      
I've been following the 5:2 fast since Jan 1, and running 60-80 miles per week. I've run my 3 best marathons in this time, including a PR. (I'm not crediting the diet for this, just saying that the diet hasn't hurt my performance.)

I'm mostly doing it as an experiment for myself, after talking with a friend who did it last year, and watching the BBC Documentary on the diet (Eat, Fast, and Live Longer). I thought running on days that I partial fast would be unbearable, but I've felt fine. As well, on my non-fast days for the past two weeks, I'm trying to consume all my meals within 8 hours (1 PM - 9 PM).

I enjoy the discipline the diet requires. Also, somewhat like running a marathon, fasting days are a little bit about putting up with discomfort. I am definitely hungry toward the end of fasting days. Oddly, when I wake up the next morning my hunger has gone away, and I usually don't eat till 1 PM, as mentioned above.

Stats: 44 y/o male, 5'8", 140-145 lbs, down from 145-150. I'd like to be in the 135-140 range for racing purposes.

woofiefa 2 days ago 0 replies      
For years I have been eating only evening meal, so I fast between 22 and 24 hours. I started doing this because it was very convenient for me to do so. I did not know anything about possible health benefits doing it, I was actually afraid to do any research if I would find out that eating only once a day was very dangerous and bad for you, but it fitted so well with my lifestyle that I wanted to continue no matter what.

A side effect is that I never feel really hungry anymore, even if I have been fasting for more than 25 hours. Before I could eat at noon and a few hours later be extremely hungry again. I really enjoy my evening meal complete with dessert, but I don't feel that deep hunger, almost pain, anymore.

Most of the time I only drink while eating and this can be a problem, especially in the summer and on the days I jog, as I can become very confused after not drinking for so many hours. As long as I stay hydrated everything is great, so I need to become better at that.

ardit33 2 days ago 1 reply      
Having done IF myself, where I would eat between noon and 8pm and skip breakfast during a 'cutting phase', my experience:

* 'IF' is great if you are "cutting" and trying to achieve a lower body weight or maybe just maintaining what you have.

* 'IF' is not good or practical if you are going through a traditional bulk phase

* The feeling of alertness, energy and maybe slight 'high' during the morning, is real.

* Not sure if it is better or worse at preserving muscle than just lowering your calories (and having 3-4 regular meals). I did lose some muscle mass (as well as body fat) when I did it.

The strongest point of intermittent fasting is that it is very easy to practice. Just skip breakfast (and snacks) and have full, normal meals between noon and 8pm.

reasonattlm 2 days ago 0 replies      
Valter Longo's work is probably the most robust and human-targeted on this topic, since his group is trying to get fasting and calorie restriction past the FDA as an adjuvant treatment for cancer. His breakthrough here is arguably as much the medical diet as a way to pull for-profit Big Pharma interest and funding into this field as the actual science.

Intermittent fasting is still far behind straight calorie restriction in terms of data and the level of conviction that the output has a certain set of effects under a given set of circumstances. The interesting thing to my eyes is that the gene expression studies show that fasting and calorie restriction at similar overall intake of calories produce overlapping but in some ways quite different gene expression changes. Short-lived species have a much larger response in terms of health and longevity to all these things in comparison to long-lived species such as ourselves, however.

All in all it's quite the interesting area of research, especially now that the scientific community is making significant progress towards biomarkers of biological aging (such as DNA methylation pattern).

Here's a good introductory paper or two for human CR:

Will calorie restriction work in humans? - http://impactaging.com/papers/v5/n7/full/100581.html

NIH study finds calorie restriction lowers some risk factors for age-related diseases - http://www.nih.gov/news-events/news-releases/nih-study-finds...

And an interview with Longo on the thrust of his work on intermittent fasting and CR:


Florin_Andrei 2 days ago 2 replies      
I strongly doubt we are evolutionarily adapted to a steady regimen of 3 meals a day every day. I think it's virtually certain that our biochemical engine is optimized for something a lot closer to "intermittent fasting".
mcculley 2 days ago 4 replies      
I've been practicing intermittent fasting for almost a year now. It works fine for me. I generally eat no breakfast or lunch. I drink only water throughout the day. I run a 5k every morning on an empty stomach with no ill effects. It really helps keep my body fat low and makes me conscious and deliberate about what I eat.

The only exceptions are meals I eat for social reasons (e.g., work lunch meetings) and mornings on which I run a road race and want to ensure I'm at peak performance. I find that when I do break the fast, I'm more hungry sooner.

guhcampos 2 days ago 3 replies      
My personal anecdote is a bit different. I haven't had breakfast on a daily basis for as long as I can remember, for the simple reason that I wake up a little bit nauseous every day, and it takes me a couple hours to actually feel like eating anything, so I just go on and wait for lunchtime.

It's never helped me lose weight, and I can't really correlate it to any improvements in my everyday life. It did get me on a daily Pantoprazole regimen though, since the long periods without food have aggravated my gastritis.

So, as most of the stuff out on the Internet, don't go doing it just because everybody else is. Try it, and see if it fits you. Don't overdo it and you should be fine.

stevenkovar 2 days ago 1 reply      
I've done intermittent fasting (specifically, the 'Leangains' protocol) for 6 years now, and it's treated me very well. The biggest benefit to me is that it makes diet adherence much easier, and keeps my macronutrient intake top-of-mind, which has a sort of trickle-down effect in my overall health.

Coupled with intense fasted weight lifting or sport activity, I feel great whether I am cutting or bulking. I can play 90 minutes of soccer while fasted and feel more energetic than anyone else on the field.

For anyone interested in starting: ease into it, track your calories and protein/fats/carbs to get a sense of what ratios make you feel better / lose bodyfat more effectively, and keep in mind caffeine and alcohol will feel more potent. Some people find the first few days uncomfortable, but wait for the one week mark before making any judgments (or 1-month if you're comfortable gathering more data).

agentgt 2 days ago 3 replies      
I have been doing IF now for 4 months, and out of all the various dieting methods I have tried over the years I'm very impressed so far.

I only do one meal a day (big lunch around 2pm) and then a protein shake and maybe a beer at 8ish. I like that I can have a beer every day.

I have noticed a couple of things with diet:

* shockingly, you can still work out hard while starving.

* it seems to help regulate and improve bowel movements (TMI but if I eat healthy and frequently I kill several trees w/ toilet paper consumption).

* it doesn't seem to work as well if you've been skipping breakfast your whole life.

* it doesn't seem to work on females.

Of course these are limited observations.

andreirailean 1 day ago 2 replies      
I've been eating mostly in the 5pm-10pm window for the last 6 months after watching a few TED talks and related videos. Adjustment took about a week or two. I don't feel hunger during the day, but when I get home I eat all I want. Lost 10 kilos and am feeling great. Every other morning I go for a 4K run and every day I go for a swim in the ocean. Don't feel any hunger after the run or swim.

I cook breakfast for my family a few times a week and have no cravings while smelling or handling food.

The good things about IF are

- more free time - breakfast and lunch can be spent having fun

- no 3.30itis after lunch - productive all day long

- less money spent

The bad:

- socially awkward. Most social gatherings involve eating. All celebrations are eat-feasts.

All in all it's a positive experience and feels more natural.

I've been thinking "eat less, exercise more" for a while. But only when I started doing IF did I really understand what it means: 80% eat less, 20% exercise more. And exercise must come before food - having a big meal and then going for a walk is not as good for you as doing it the other way around. Run first, eat second - like the lions do it.

You have to eat way less and exercise just a little more and your body will adjust. IF is much easier than going for controlled portions. Because "limbic hunger" (look it up).

valine 2 days ago 2 replies      
> The scientific community remains divided about the value of intermittent fasting. Critics say that the science is not yet strong enough to justify widespread recommendations for fasting as a way to lose weight or boost health...

It's always strange when ancient practices like fasting are so poorly understood.

SilasX 2 days ago 2 replies      
I had just been reading Antifragile by Nassim Taleb and he theorizes that fasting works by exploiting your body's "anti-fragility" and need for stressors by periodically starving it of some resource, which forces it to adapt in a way that improves the overall system (eg by economizing). For similar reasons, he recommends a diet of "different nutrients at different meals" rather than making each meal balanced in itself: you rotate what your body has to economize on.

Not saying he necessarily has a justifiable basis for believing this: in a lot of the book, he really comes off as a know-it-all everyone-else-is-stupid crank :-/

Taylor_OD 10 hours ago 0 replies      
I've been doing time-restricted Intermittent Fasting for the last month or two. I drink coffee with some creamer in the morning, but other than that I don't eat until 12:30 (lunch) and then again around 7:00 or 8:00 PM at night.

Honestly, it's not all that different from my old eating habits, except now I've cut out breakfast (which I only really ate because it is "the most important meal of the day").

I used to wrestle in college and maintained my weight via exercise, but now I work in an office and this is the only diet I've found that allows me to work out a normal amount and maintain a decent weight.

henriquemaia 1 day ago 0 replies      
I've known about this (and have been following a form of fasting diet) since I watched this documentary: Eat, Fast and Live Longer (BBC Horizon, 2012-2013)[1]

> Michael Mosley has set himself a truly ambitious goal: he wants to live longer, stay younger and lose weight in the bargain. And he wants to make as few changes to his life as possible along the way. He discovers the powerful new science behind the ancient idea of fasting, and he thinks he's found a way of doing it that still allows him to enjoy his food. Michael tests out the science of fasting on himself - with life-changing results.

You can watch it here: http://www.dailymotion.com/video/xvdbtt_eat-fast-live-longer...

[1] http://www.bbc.co.uk/programmes/b01lxyzc

escoz 2 days ago 0 replies      
I've been eating LCHF for almost 4 years now. I started doing it to lose weight, but I feel so much better on the diet that it's hard to stop. When I'm firm on the diet, I can easily go a full day without any food, and not feel any hunger or mood swings. It's great.
miseg 1 day ago 0 replies      
> Critics say that the science is not yet strong enough to justify widespread recommendations for fasting as a way to lose weight or boost health, and that most of the evidence supporting it comes from animal research.

Should that not be the other way around, that more research is needed before recommending you eat three large meals a day plus snacks?

danielodio 1 day ago 0 replies      
Amazing to see IF & Ketogenic entering the mainstream.

I was on the verge of metabolic syndrome, with a 38 inch waist, high triglycerides and high cholesterol. http://go.DanielOdio.com/health In 10 months of IF I've lost 60 pounds. http://go.DanielOdio.com/fasting

I've also experimented with Ketosis for four months http://go.DanielOdio.com/ketosis and now with "Blue Zones" http://go.DanielOdio.com/guide

Love seeing so much of this gaining a broader audience.

voraciousg 1 day ago 1 reply      
These are dangerous, as they naturally lead to a fast/binge type eating pattern. If you're not careful you'll indoctrinate yourself into a new hell.

I've been lifting and experimenting with my nutrition for over a decade. Just be careful: it goes from regimen to disorder quickly.

TheRealmccoy 1 day ago 0 replies      
People have been advocating skipping breakfast for a long time, more than a century.

There is a book published in 1900 by a doctor named Edward Dewey, titled The No Breakfast Plan and the Fasting-Cure.

I have read this book, and it amazes me how logical everything that doctor explained is.

I don't have breakfast.

Here is the link to the book on Gutenberg - http://www.gutenberg.org/ebooks/27128

d357r0y3r 2 days ago 0 replies      
I got really into IF about 6 years ago. It was a pretty good way to drop about 30 pounds of fat.

That said - at least for me - I'm not sure it helped my eating patterns in general.

For some background, I got up to about 305 lbs when I was ~19 (about 9 years ago), then lost a bunch of weight over a year and a half or so. Got down to about 180 (way too skinny). I really like food. I eat way too fast, and I eat way too much, and I love eating junk food.

So, for me, IF was kind of a way to compartmentalize what was essentially binge eating. I'd push my fasting periods further and further into the evenings, sometimes going til 4-7 PM, then having an absolute blowout meal.

Eventually I stopped doing IF. Well, kind of. I still don't eat breakfast. I got less extreme with it.

roymurdock 2 days ago 0 replies      
I would be interested to see the body of research on the effects of Ramadan (1 month of fasting, both food and liquid until sunset) on the health of those who fast.

The effects on the economy were clear from what I observed in Afghanistan - at least 5% potential GDP loss due to reduced working hours, shutdown of public services, overall latency of business transactions during this time. An actual study across different observing countries would be intriguing.

RikNieu 1 day ago 0 replies      
I've tried IF on-and-off since 2010. While I found it pretty useful for weight loss, I constantly felt foggy and dazed mentally, even after months of going at it.

My body(or brain, I guess) works better when I eat small amounts of protein every four hours or so.

shanev 2 days ago 1 reply      
Nearly every health-related article that makes it to the top of Hacker News these days is stuff that was discussed in the Paleo community 5+ years ago. Earlier today was the benefits of a high fat ketogenic diet in regards to cancer, and now IF. Good to see you guys catching up!
bcheung 2 days ago 1 reply      
I really wish there was more scientific research around extended fasting. There's so little research and much of it doesn't wait long enough for ketoadaptation.

I've done extended water fasts as well. 18 days of just water is the longest I've done and I felt like I could have gone longer.

the_af 2 days ago 1 reply      
> Mark Mattson, a neuroscientist at the National Institute on Aging in Maryland, has not had breakfast in 35 years.

But... but.. breakfast is the best part of my day :( Thanks for ruining it, pseudoscientific celebrity fad of the moment!

tdkl 2 days ago 0 replies      
Here's a nice resource that explains 16/8 IF pretty good : http://antranik.org/intermittent-fasting/
ctrijueque 1 day ago 0 replies      
I've been doing this, but skipping dinner instead of breakfast. I don't eat anything after 2 p.m. until the next morning around 6 a.m.

The first few days were hard, lots of cravings, but after a week or week and a half I got used to it.

I feel healthier (yeah, I know, I know...) than ever. And I'm losing weight at a steady pace (without the usual highs/lows, etc.).

pjc50 1 day ago 1 reply      
Seemingly everyone is talking about this in the context of 'gains' as part of a high-exercise lifestyle. What about the rest of us on the bare minimum of exercise? Is it possible that simply skipping breakfast has all these health benefits?
amgin3 1 day ago 0 replies      
I've been doing this for years, not because I want to, but because I'm too poor not to.
ljk 2 days ago 0 replies      
Been doing it for a few months now. It's pretty interesting: the first week I was hungry the whole day from not eating breakfast, but now I go through the day without breakfast or lunch, drinking only water, and I don't get hungry or get cravings.

Also none of that afternoon food coma from lunch!

Still not sure if IF just makes it easier to eat less or if there's really something behind it (there's evidence on both sides, like with how good/bad eggs are for you), but it's worked for me so far.

mordocai 2 days ago 0 replies      
I've been doing 16/8 IF for most of my life and didn't realize it had a name or was something that people talked about...
TurboHaskal 1 day ago 0 replies      
Enjoy your cold hands, socially awkward encounters and binge eating sessions.
dayaz36 2 days ago 0 replies      
As a Bahai that is currently fasting, it's nice to see science and religion agree!
ChemicalWarfare 1 day ago 1 reply      
"Mark Mattson, a neuroscientist at the National Institute on Aging in Maryland, has not had breakfast in 35 years. "

and his squat+dl+bench total is?

saiko-chriskun 1 day ago 0 replies      
everybody here is crazy
daxfohl 2 days ago 0 replies      
tag: things_that_will_have_no_effect_on_your_life_but_you_think_might

(Edit: I eat two meals a day, except sometimes I'm hungry in the morning, and then I often get hungry again soon afterward too and end up eating 4-5 meals that day. But I think everyone's metabolism is different, and moreover different at different times, and I feel more harm than good can often come out of these studies, especially when presented to data-centric people like HN readers. Not to mention, I think a lot of it doesn't matter at all; searching for the "Douglas Adams Nutrition" diatribe but not finding it.)

ha8o8le 2 days ago 3 replies      
I don't think it's possible for the average person to fast in any of the ways suggested here. I think the best thing to do is have a light dinner without carbs and a healthy breakfast. If you do this daily it will be like a daily intermittent fast as you can have a large lunch to indulge yourself. This way you can actually stick to it. I have been doing this for years and am very fit. I made a video showing what I ate for lunch each day while losing weight to prove it works https://youtu.be/v0hYofwTIiw
PyPy 5.0 Released morepypy.blogspot.com
248 points by mattip  10 hours ago   51 comments top 9
fijal 6 hours ago 1 reply      
Because everyone asked, I'm going to clear a few things up about PyPy3 support; please keep the comments civil.

* We are not against working on Python 3 - it just happens that there is a lot of interest these days in things like numerics, warmup improvements and C extensions that we want to focus on.

* We essentially exhausted the Py3k pot. I personally think it delivered what it promised, despite falling short of its funding goals. It's crazy what level of expectations people have with crowdfunding - it's really difficult to find someone to deliver a big, multi-year project for 60k, even outside the States.

* We're closely watching py3k adoption - since we're always a few releases behind, we'll probably do a 3.5 after CPython 3.6 is out, but it all depends on the good will of volunteers, whom I have no control over.

* Money can easily change focus, but it would need to be a significant enough amount to actually commit to delivering a fast and compliant PyPy 3.5, not 5 or 10k

I hope this clears some things up; these opinions are my own and do not necessarily represent everybody in the PyPy project.

EDIT: there is just over 8k USD left in the py3k pot. At $60 USD/h (official SFC rate) it's 146h. That's not enough to even fix the inefficiencies in the current version. We hope to use it to get to version 3.3

JelteF 9 hours ago 2 replies      
I think it's a real shame that PyPy3 is not updated with the new releases of PyPy. It is still on 2.4.0.

I understand that the major sponsors of PyPy are interested in Python 2.7, but not updating it for 1.5 years seems like they have abandoned it.

wyldfire 2 hours ago 0 replies      
Well, great job, team!

I get a free speedup for my non-numpy/scipy projects and that's flipping awesome. The 2.7 and 3.2 support is just fine for my needs. Your focus on the C API emulation seems totally appropriate to me.

If numpy and friends worked well on pypy, IMO there'd be little reason left to use CPython.

AdamN 8 hours ago 0 replies      
I'd like to use PyPy but all of my new projects are Python3.
yahyaheee 8 hours ago 2 replies      
Pypy is really cool, and I hope it becomes the default interpreter down the road. However, little py3 support makes me edge away from it for now. Hopefully, all pythons will merge in the near future
psandersen 3 hours ago 0 replies      
Good to see PyPy progressing.

I mainly use Scikit Learn, theano, numpy and pandas; is PyPy able to work with the above, and likely to give any speedups at this stage?

Animats 8 hours ago 1 reply      
What version of CPython does this match? The announcement doesn't say.
aleksi 9 hours ago 2 replies      
Why the major version change? Why 5.0 after 4.0.1?
gaze 3 hours ago 0 replies      
How is numpy doing? How about the sandbox?
Jeff Bezos Lifts Veil on His Rocket Company, Blue Origin nytimes.com
274 points by rcurry  1 day ago   114 comments top 13
thematt 1 day ago 6 replies      
This isn't a "who is hiring?" thread obviously, but if anyone wants to be a part of our team we are always looking for folks who share our passion for space, especially those who happen to build software too!


Needless to say, there are tons of interesting problems to solve and opportunities to make a huge impact.

pjscott 1 day ago 1 reply      
Blue Origin is a credible rocket company, in case anybody was wondering. They've got some impressive engineering, a realistic, incremental long-term plan, and pockets deep enough to pay for it.
pnathan 1 day ago 1 reply      
I had an interview at Blue Origin a few years ago. Very driven group of people. I wasn't the right fit for the position, we both agreed, but it was a great experience. If you're into spaceflight, I would definitely recommend checking them out.
josefresco 1 day ago 2 replies      
"His argument was simple: Energy consumption has been rising at 2 or 3 percent a year. Even at that modest rate, within a few centuries, the energy usage would be equal to the energy produced by high-efficiency solar cells covering the entire surface of the planet. "We'll be using all of the solar energy that impacts the Earth," he said. "That's an actual limit.""

What now? I was under the assumption that the solar energy hitting the Earth was quite a bit more than we could ever "need", and also that energy usage isn't going to increase at a consistent rate. Anyone have data on this? Does his timeframe (a few centuries) exceed existing projections? Does it even make sense to project a 2-3% increase in energy consumption over ~300 years?
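A rough sanity check of the "few centuries" figure (the ~18 TW current-consumption and ~174,000 TW solar-flux numbers are my own ballpark assumptions, not from the article):

```python
import math

# Ballpark assumptions: humanity currently consumes roughly 18 TW of
# primary power, and roughly 174,000 TW of sunlight hits the top of
# Earth's atmosphere.
current_tw = 18.0
solar_flux_tw = 174_000.0

def years_to_reach(growth_rate):
    """Years of compound growth until consumption equals total solar flux."""
    return math.log(solar_flux_tw / current_tw) / math.log(1 + growth_rate)

print(round(years_to_reach(0.02)))  # 463 years at 2% growth
print(round(years_to_reach(0.03)))  # 310 years at 3% growth
```

So with these rough inputs, the "few centuries" claim does come out in the 300-470 year range.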


dkarapetyan 1 day ago 1 reply      
What a weird sentence

> Currently, most rocket companies launch, at most, about a dozen times a year. "You never get really great at something you do 10, 12 times a year," Mr. Bezos said. With a small fleet of reusable New Shepard rockets, Blue Origin could be launching dozens of times a year.

So they'd be doing the same as everyone else?

LAMike 1 day ago 1 reply      
Hope they eventually offer tours like SpaceX; it is really cool to check out the factory and see the Merlin engines. They are surprisingly small, but they look like time machines.

Jeff really likes things that fly. Drones, Cargo planes and rockets!

hooliganpete 1 day ago 4 replies      
Gone are the days of billionaires buying sports teams. Buying newspapers and blogs still seems to be in, but not quite as hot as space exploration. With SpaceX and Virgin Galactic, Blue Origin sort of seems like a "me too" venture...
kin 1 day ago 0 replies      
This is awesome. I love that there is more and more interest in space exploration lately. Hopefully real life events will soon inspire beyond what recent movies like Interstellar and the Martian have shown us.
roflchoppa 1 day ago 1 reply      
It's almost an eerie feeling, these companies having the potential to last "forever".

C'mon, health technologies: freeze me now and wake me up in 400 years please.

kirk21 1 day ago 0 replies      
Good to see that they will become more open about what they are doing. Great job Mr. Bezos.
dang 1 day ago 1 reply      
Please don't be rude.

We detached this subthread from https://news.ycombinator.com/item?id=11251509 and marked it off-topic.

infocollector 1 day ago 1 reply      
It would be nice to see which billionaire is next :-)
samstave 1 day ago 4 replies      
Funny how he started a rocket company, yet Carmack abandoned one... I feel Carmack could have done a better job, but if Carmack can give up, my prospects for Bezos are quite dim...
Turning two-bit doodles into fine artworks with deep neural networks github.com
302 points by coolvoltage  21 hours ago   55 comments top 15
bd 16 hours ago 2 replies      
These are really cool. Though if you were, like me, puzzled how could some really complex and coherent features come from those simple drawings / masks, have a look at the original paintings that were used as sources and compare them with generated images:

Original #1:


Generated #1:


Original #2:


Generated #2:


So those new generated images are structurally very similar to the original sources. The neural net seems to be good at "reshuffling" the sources. That's probably how things like reflections on the water got there, even though they're not present in the doodles.

nuclai 18 hours ago 2 replies      
(Author here.)

For details, the research paper is linked on the GitHub page: http://arxiv.org/abs/1603.01768

For a video and higher-level overview see my article from yesterday: http://nucl.ai/blog/neural-doodles/

Questions welcome!

pygy_ 16 hours ago 4 replies      
I'd love/dread to see this kind of work (neural nets run in reverse mode) applied to voices and accents.

You could credibly put any words in the mouth of anyone.

beeswax 13 hours ago 2 replies      
That's pretty cool. Might speed up asset creation for games by orders of magnitude: Train with concept art, generate the variations via these networks; adds consistency to the output and helps loosen the asset bottleneck / content treadmill esp for smaller studios/individuals.
ogreveins 10 hours ago 1 reply      
I played with something similar for a while, https://github.com/jcjohnson/neural-style

What I've found so far is that it takes a while to get good results, i.e. something that looks like its own creation instead of an overlap of pictures. There's no exact way to do this. If you modify existing artwork it works well enough, since the source is already somewhat divorced from reality, but photos are difficult. When it works it's amazing though.

Angostura 13 hours ago 0 replies      
Looked at the images and honestly thought that someone had posted an April fools joke a few weeks early. Amazing.
MichaelBurge 15 hours ago 1 reply      
Very interesting! The thing that amazes me most about these neural network projects is how small the source usually is compared to what they're doing. Your doodle.py is only 453 lines.
amelius 17 hours ago 1 reply      
What data has been used to train the neural network?
mkj 14 hours ago 0 replies      
In coming years this will create a very strange reality combined with improving VR tech...
intrasight 10 hours ago 0 replies      
Now please combine this with TiltBrush
wslh 14 hours ago 1 reply      
Exciting! Where can we find image databases for this?
Dowwie 17 hours ago 2 replies      
Have you run children's paintings through this yet?
mhurron 13 hours ago 0 replies      
Finally a way to draw things without learning how to draw. I'll be famous!
tjaad 16 hours ago 2 replies      
Would this work with photos?
api 9 hours ago 0 replies      
This project should be named Bob Ross.
Is this c/10 spaceship known? conwaylife.com
368 points by morninj  1 day ago   79 comments top 11
pervycreeper 1 day ago 2 replies      
This writeup (found a few pages into the thread) explains things in a little bit more detail for a newcomer. https://niginsblog.wordpress.com/2016/03/07/new-spaceship-sp...
heavenlyhash 1 day ago 8 replies      
That's a beautifully concise numbering system for sharing being used there.

Now if only we had descriptions of chemistry that were this terse. Imagine if this kind of problem solving, collaboration, simulation, and instant verification were the norm for synthetic chem. One of the comments -- "[Let's] use gencols to rub the ship against gliders and *WSSs to see whether there is a useful collision to maybe build a puffer" -- just blew me away. If this were chemistry, that commentator would have been suggesting automatic nanomachine factory discovery.

(InChI appears to be close. But vast amounts of data are locked up either in obtuse Assigned-Names-And-Numbers-style formats, which are useless for indexing and similarity searches, or in formats that embed non-relative coordinates in 3D space, etc., in such a way that computing a deterministic ID for sharing is practically a nonstarter.)

ticklemyelmo 1 day ago 3 replies      
Hint: Click "Show in viewer" in the first message. Zoom out a bit. Press play.
jonah 20 hours ago 1 reply      
I just love all the lingo in little specialized communities.

orthogonal spaceship, glider, puffer, rake, loafer.

"Trying to perfect a rake so it does not create Methuselah which eventually evolves into loaves, beehives and traffic lights isn't normal. But on Conway's Life, it is. Life. Not even once." - 'muzik

"- Use gencols to rub the ship against gliders and *WSSs to see whether there is a useful collision to maybe build a puffer." - 'HartmutHolzwart

And the excitement exhibited over this discovery. Very cool.

nkrisc 1 day ago 2 replies      
Skimming through the thread, I realized I had no idea of the type of community that exists surrounding Conway's game. I think it's awesome.
xamuel 1 day ago 3 replies      
Very nice!

You might be interested in a simple proof I found of why c/2 and c/3 are speed limits for orthogonal and diagonal spaceships respectively.

Definition: In a gameplay of life, an "infinite lifeline" is a sequence of pairs (c_i,n_i) such that each c_i is alive in generation n_i and either c_(i+1)=c_i or c_(i+1) is adjacent to c_i.

Lemma ("Two Forbidden Directions"): Let x,y be any two 'forbidden' directions from among N,S,E,W,NE,NW,SE,SW. In any gameplay of life that starts finite and doesn't die out, there is an infinite lifeline that never goes in either direction x or y.

The lemma's proof uses biology. Say that (c,n) is a "father" of (c',n+1) if c' is the cell adjacent to c in direction x or y. Otherwise, (c,n) is a "mother" of (c',n+1). By the rules of the game of life it's easy to show every living (c',n+1) has at least one living father and at least one living mother. It follows (modulo some more details) that since the gameplay doesn't die out, there must be an infinite lifeline where each cell is a mother of the next, i.e., an infinite lifeline that never goes direction x or y.

Proof of c/2 orthogonal speed limit: If a spaceship went faster than c/2, say, northward, by the lemma, it would have an infinite lifeline that never goes N or NE. The only way it could ever go northward would be to go NW. Every NW step would have to be balanced out by an eastward step (of which NE is forbidden) or the spaceship would drift west. So every northward step requires a non-northward step, QED.

Proof of c/3 speed limit for diagonal: A diagonal spaceship faster than c/3, say, northeastward, would have an infinite lifeline that never goes N or NE. The only way for it to go northward would be to go NW. Each NW step would need at least two eastward steps in order for the ship to go eastward, QED.
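The speed limits above are easy to poke at empirically. Here is a minimal numpy Life stepper (my own sketch, not from the thread) confirming that the classic glider covers one diagonal cell every 4 generations, i.e. speed c/4, safely under the c/3 diagonal bound:

```python
import numpy as np

def life_step(grid):
    """One generation of Conway's Life (toroidal via np.roll; the glider
    stays far from the edges, so wrapping doesn't matter here)."""
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

# The classic glider, heading southeast:
#   .O.
#   ..O
#   OOO
grid = np.zeros((20, 20), np.uint8)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r + 5, c + 5] = 1

before = np.argwhere(grid)
for _ in range(4):
    grid = life_step(grid)
after = np.argwhere(grid)
# After 4 generations the same 5-cell phase reappears, shifted by (+1, +1).
```

Running the newly found c/10 ship through the same stepper (its RLE is in the linked thread) would show a 1-cell orthogonal shift every 10 generations.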

iamwil 1 day ago 0 replies      

This link has an animation of the c/10 spaceship.

stcredzero 1 day ago 2 replies      
I'm thinking about writing a Conway's Life MMO, where you can activate "lanterns" that illuminate rectangles in the grid with Conway Life squares. These lanterns are fueled with "living" Conway Life cells, which are harvested by the player. Sound interesting?
stephenitis 1 day ago 2 replies      
I'm confused about what I'm looking at... is this a pattern that emerges in conways game of life?
tommoose 21 hours ago 0 replies      
Is it just me or does the original frame look like Serenity (Firefly)? Its movement is backwards though.
Mauricio_ 1 day ago 0 replies      
The New Mind Control The internet has spawned subtle forms of influence aeon.co
249 points by mark_l_watson  1 day ago   196 comments top 33
md224 1 day ago 26 replies      
I have made this case before, but I will make it again.

The Internet is the largest information system in the world, and Google is the primary portal into that information system. Google's "organic" results are accompanied by AdWords results, which are based on a mixture of bid price and relevance. These ads are marked with a small "Ad" label that many people miss, and even those who know they're ads can't really "unsee" those results.

So, searching the world's largest information system provides results which have been biased by money. How does anyone consider this ethical? Why are we letting money influence the salience of information?

What if your local library (you know, those old things) had a card catalog with "sponsored" results? If this already exists, then maybe we're already lost. But it seems to me that as a basic rule of information ethics, the salience of information in a given information system should not be biased by monetary influence. Full stop, the end, no exceptions. If anyone has a counterargument, I would honestly love to hear it, because this has nagged at me for a long time. I simply can't understand how AdWords is ethical.

jamesblonde 1 day ago 1 reply      
As an adjunct to this great read, you can find out more about this field through the historical documentary "The Century of the Self" by Adam Curtis and the BBC. It is the 'red pill' for understanding consumerism. https://vimeo.com/10245146
justsaysmthng 1 day ago 3 replies      
I remember the '90s, when the Internet as we know it was just a baby... The enthusiasm we all shared for it. Of a better future. Of true democracy in the world. Of free people, free minds. The Internet will be the cure for all the social ills that humanity has experienced in the past. People will trade and talk with each other and that's how we can have peace on Earth!

We will be able to discuss, collaborate, create. We would be able to watch any film, listen to any song.

Some people called us "geeks", we liked to call ourselves "hackers".

We are not just hacking code, we're hacking a new world.


A quarter of a century later and most of those things are now reality. But somehow these great things have brought with them some hidden things. Things which we ignored or brushed off easily back in the day..

Like the fact that the Internet is now populated by the same demographic as the real world, not just hackers and dreamers. Now everyone is online.

We thought it would free us from oppression, but it is becoming the ultimate tool for oppression.

We thought it would give us true democracy, but it is becoming the ultimate moderation system for "foreign" thought suppression and a groupthink generator.

We thought it would serve our needs, but it is becoming the thing that tells us what to need. We thought it would satisfy our tastes, but our tastes are now being programmed into us by it.

Of course we're still high from all the positive aspects and it's not in our nature to be scared of things, but that will soon wear off... And when we wake up, what will we find there?

Either way, it is unstoppable and nobody can turn it off. So we only have to wait and see what it will ultimately turn into.

What will it be 25 years from now? Will we still be able to discuss this freely?

Gatsky 1 day ago 0 replies      
Here is the actual study: http://www.pnas.org/content/112/33/E4512.full.pdf

I can't see how this is valid at all. We know that polls give biased results unless you are very careful with the sampling. Here people are self-selecting for a poll (e.g. via Mechanical Turk). Then you apply a highly contrived scenario in which they are googling about a candidate. Then you ask them a bunch of questions, immediately, and proceed to draw wide-ranging conclusions designed to increase your self-importance as much as possible. I mean, seriously, it's worse than useless.

This article is also written as if these findings are earth shattering. After conducting a small, biased, invalid study (Asking people in San Diego about an Australian election? How does that generalize to anything?) and finding a large effect Epstein says 'We did not immediately uncork the Champagne bottle'. Is that how psychology research is conducted? Researchers toasting large implausible effects in small biased samples that have no external validity?


puranjay 16 hours ago 2 replies      
Most people find this a joke, but I don't kid when I say that 4Chan might be the single most influential political force today.

4Chan's 'meme makers' have an uncanny ability to distill an idea to its simplest form. Ideas that emerge on 4Chan end up on Reddit, from where they are picked up by Buzzfeed and HuffPo. Before you know it, what was a dumb little idea spawned by some anon on /b/ has become a part of 'internet culture'.

I've seen firsthand how 4Chan has been able to influence Trump's presidential run. If it weren't for /pol's constant shilling for him, I doubt Trump would've had so much support. 4Chan's memes have changed perceptions of Trump, whether you like it or not

kazinator 11 hours ago 0 replies      
The internet, in fact, provides the best support ever for closing your mind to anything that doesn't align with your world view. You just have to google for pages that confirm your beliefs, finding sites and forums filled with people who think like you do. Then you can deceive yourself into believing that your views actually have a large support. ("Pretty much everyone I know online on every site I go believes it!")

The upshot is that "subtle influence" is not going to work on those who are crudely entrenching themselves into some camp or other. For instance, you're not going to "subtly influence" some denier into accepting human-caused global warming. Not as long as he can find plenty of others and keep believing that his social circle is an unbiased sample of the population.

Dowwie 1 day ago 0 replies      
"We now have evidence suggesting that on virtually all issues where people are initially undecided, search rankings are impacting almost every decision that people make."

I dug into his CV and found the following related works:

- recent publications: http://aibrt.org/index.php/internet-studies

- The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections [http://aibrt.org/downloads/EPSTEIN_&_ROBERTSON_2015-The_Sear...]

A talk that he gave at Stanford about SEME: https://www.youtube.com/watch?v=TSN6LE06J54&feature=youtu.be

- Democracy At Risk: Manipulating Search Rankings Can Shift Voters' Preferences Substantially Without Their Awareness: [http://aibrt.org/downloads/EPSTEIN_and_Robertson_2013-Democr...]

CV: http://drrobertepstein.com/pdf/vita.pdf?lbisphpreq=1

erikpukinskis 1 day ago 1 reply      
Google could send people to places that are contrary to the user's interest, but that's essentially deliberately decreasing the quality of one of their products so I'm not that worried about it. If they do it's a (big) market opportunity for someone else.

I would go even farther: I'm not particularly worried about individual interests at all, on any subject. The Internet is very good at exposing them.

I am much more concerned with bad classes of actors than bad actors. We see many ways in which competition breaks down because entire classes of people benefit from working in synchrony. The classic example is politicians: crooked elections mean longer terms which benefits basically all of them.

The other classic example is the capital class. If everyone in the capital class plays by the rules of property then they can exploit the labor class. Once you're in the capital class there are few reasons to compete with private property. Social pressure mostly neuters whatever capital class activists might try to keep working.

It's these class barriers that we should be worried about. But new weapons (like search engines) and new villains (like Islam) make much better news stories.

manachar 1 day ago 4 replies      
This read like anti-Google FUD.

The basic argument is search engine rank determines trustworthiness of a source. This influences people's opinions on politics, what they buy, what they think, etc.

This is absolutely true, and the core of their research (it seems).

But then it goes into FUD territory when talking about Google backing Hillary. Hillary and Trump have received the lion's share of attention in media, social media, and such. Google searches SHOULD show them prominently.

Worse, the article basically finishes up with a "be afraid, be very afraid" approach that rankles me. "The new hidden persuaders are bigger, bolder and badder than anything Vance Packard ever envisioned. If we choose to ignore this, we do so at our peril."

No solutions or deeper analysis. No discussions on how a search engine should rank relevancy to search terms.

I personally have no doubt that mass-media, marketing, and the internet are shapers of opinions. Bias in the media, search engines, and such is a complex topic. Not something that should boil down to "Google could make it so Hillary wins" therefore you should be afraid.

ashurbanipal 1 day ago 2 replies      
Hold on, so we call it "Mind Control" when Google shifts our preferences toward one of two pre-selected choices? What do we call the state of the world that leaves us with only 2 pre-selected choices who happen to agree on 90%+ of all policies?
nsxwolf 1 day ago 3 replies      
Something like this recently disturbed me. Mitt Romney published a tweet storm a few days ago making his case against Donald Trump. They all showed up in my feed - I don't follow Romney, and the tweets weren't marked as sponsored, but somehow Twitter decided I should be seeing them anyway.

I'd hate to think Twitter decided it was for the public good that everyone read what Romney had to say about Trump.

Mindless2112 1 day ago 2 replies      
Google doesn't need to use search results to manipulate voting behavior -- they have Google Now.

Google Now currently displays cards to remind people to vote on voting day. Maybe it just happens to be more likely to show up for people that have been profiled as likely to vote for Google's favored candidate.

whatTheFuckEvr 1 day ago 1 reply      
Oh god. This article is so fucking disappointing.

What a quaint boogeyman this "Search Engine Manipulation Effect" is. It even has its own obscure little acronym, SEME, to appear more relevant.

I learned about the subtle effects of advertising by the time I was in fourth grade, and certainly understood how to ignore them by middle school.

Back when special holographic foil comic book covers and trading cards were new, I had already figured out that all of these "collectibles" were mass-produced, and would never wind up as valuable as, say, Action Comics Issue #1, despite so many claims otherwise. This was something you could kind of figure out on your own. If you were easily amused by shiny objects though, you might not arrive at the same conclusion.

Meanwhile anyone could figure out that the influence of single frame inserts in movies was as potent and realistic as the subliminal messaging in John Carpenter's Sci-fi movie, THEY LIVE.

So too, with Search Engines.

Figure if a fourth grader can figure out the shenanigans of opinion and belief influence in advertising, and unravel the bullshit of religion before high school ends, then this other newer form of bullshit is similarly debunked by comparable intellects. If you're so stupid that you buy into bullshit, without multiple channels of factual verification, you're your own worst enemy.

Okay, okay, maybe this is good reading material for an elementary school classroom assignment, focused on current events. Sure, why not?

I was hoping this would be about technological manifestations of psychic telepathy through malicious use of functional MRI systems.


labster 1 day ago 0 replies      
It's looking like our only hope left is the recently declassified WMF search engine. I get that people didn't like Lila doing all of that grant in secret, but I find myself not really opposed to Wikimedia taking on Google. In the long term, someone is going to have to do it.
guelo 1 day ago 5 replies      
This is a really interesting study but it's hard for me to believe in the conspiracy theory that the hundreds of engineers that work on Google Search would be OK implementing a complicated vote manipulation algorithm and keeping it secret. But it's possible, criminal conspiracies involving many people seem to happen regularly in the financial industry.
hellbanner 1 day ago 0 replies      
So on this note, I just want to point out that every time the horrors of the new technologies are talked about, something else isn't talked about.

I've seen a number of newer HN account flooding the site with articles... presumably for eyeballs + ad revenue but hey, maybe they just want another link to lose attention..

r0m4n0 1 day ago 0 replies      
The internet is just a new medium to persuade opinions just as the many before it. Most people don't even know who represents them apart from what's talked about in the news. This study required people to actually browse through this fake search engine. I'm not convinced many people do any research whatsoever (beyond the top of the ballot).

Does this "search engine manipulation effect" have an impact on top of the ballot votes? We still don't know. Does it have an impact on everyone else on your ballot? Nope.

Disclosure... I am the founder of a company that builds a tool for organizations to blatantly tell people who to vote for...

Sir_Cmpwn 1 day ago 0 replies      
What I find tragically interesting is that the maintainers of this website (aeon.co) and likely most of the people reading this comment are contributing to the massive dominance the mentioned companies have over the flow of personal information about people online. If I didn't block trackers, it looks like Facebook and Google would both know I read this article, along with Twitter and New Relic.
knaik94 1 day ago 0 replies      
As much as I would love to imagine some powerful people intentionally influencing search results on a big scale, Google and Facebook and Twitter have no real way to make money from it. I would argue that it would only drive people away. I am sure that it happens to a certain degree, look at the marketing of Bernie Sanders on reddit, but a much bigger influence is your social circle and your source of new information.

Social media mirrors people's attitude. If you see new tweets about an issue you don't particularly care for, there's a good chance that people you follow do care. It's basic psychology that you befriend people who share your views and interests. If all of your friends tweet or like or show interest in something then twitter will assume you do too. Twitter makes money from user engagement and so it's logical to show you things that your friends agree with because chances are you will too.

I think a real issue is the lack of a source of unbiased information. Relevant information and information you agree with are very different things.
ikeboy 1 day ago 0 replies      
>Keep in mind that we had had only one shot at our participants. What would be the impact of favouring one candidate in searches people are conducting over a period of weeks or months before an election? It would almost certainly be much larger than what we were seeing in our experiments.

Reminds me of http://slatestarcodex.com/2014/09/24/streetlight-psychology/

>And in 2015, a team of researchers from the University of Maryland and elsewhere showed that Google's search results routinely favoured Democratic candidates. Are Google's search rankings really biased?

A greater portion of liberals use social media than conservatives (source: http://www.pewinternet.org/2012/03/12/main-findings-10/) Maybe they organically generate more links?

jbclements 20 hours ago 1 reply      
I'm really astonished to find only a handful of references to DuckDuckGo in this discussion. I've been using it exclusively for about 2 years now, and had no problems at all. Perhaps I just don't know what I'm missing ... like mind control!
pcmaffey 1 day ago 0 replies      
>Power on this scale and with this level of invisibility is unprecedented in human history.

I would argue that the influence of media has always been this powerful. And media has always been biased.

Another angle to look at would be to apply the work of Stanley Milgram re: obedience to authority figures. Our ability to think for ourselves has some evolving to do...

nunyabuizness 23 hours ago 1 reply      
I once read an article about surveillance with a title along the lines of "A Tale of Two Cities."

In it, the author explains that there are two types of surveillance cities that will emerge in the future: one where every park bench is rigged with a mic, every street corner has a camera aimed at it, and where all the data collected is funneled to law enforcement agencies; if you were mugged on some street corner, they'd be able to react to the crime swiftly and with high accuracy.

The other city is exactly the same, but all the data is made available to all citizens through an open API; so if you wanted to meet with someone on some street corner, you could decide for yourself if it was safe enough to visit, likely preventing the crime from happening at all.

Does anyone know what article I'm talking about?

_han 1 day ago 0 replies      
This topic is also addressed in the last season of House of Cards.
scottlocklin 1 day ago 0 replies      
Megacorporations are bad for the internet for certain, but I don't feel very mind controlled by Google, as I almost always use Duckduckgo or Yandex. They produce fairly similar results, and I have the satisfaction of not shoveling the tiniest bit of money at a company I have likened to fat Vegas era Elvis.

I also doubt the results of their research. Nobody is going to vote for Donald Trump because he happens to appear first in a google search; that's just retarded. I think the fact that outsider candidates are locked out of legacy media megaphones and party power structures seems more harmful to democracy, and this has been accepted as "just how it is" for decades.

zby 16 hours ago 0 replies      
hotcool 1 day ago 1 reply      
I designed supraliminal posters[1] to counter the covert forms of persuasion like Low Attention Processing marketing[2]. I definitely find them helpful, especially for meditation.

[1] http://zenpusher.com

[2] http://www.neurosciencemarketing.com/blog/articles/low-atten...

daveloyall 1 day ago 0 replies      
Related: https://googleblog.blogspot.com/2009/12/personalized-search-... "Personalized Search for everyone " (2009)
EGreg 1 day ago 1 reply      
This article is speaking about something that has existed as long as mass media has. Whether it's Google or newspapers or TV, the companies running the sources we turn to have control over what information we are exposed to, and can influence our views. Our grandfathers read the newspaper, our parents watched TV. What, Google is a monopoly? OK, so that's the big issue.

On the other hand, our susceptibility to having our political system be disproportionately affected by a company or two with a top-down chain of command is a reflection that our system of representative democracy has weak links and can be easily subverted.

I wrote about the solution to this a while back: replace voting with polling! Have people cast their voice for POLICIES not REPRESENTATIVES. It is much more costly to fool all the people all the time, than to fool them at election time, and then go on to lobby the representatives they chose.

Voting depends on turnout, which skews the results and is susceptible to sybil attacks (remember facebook's vote about the newsfeed that got 3% turnout?)

Polling doesn't. It can be refined using better and better statistical techniques. We can gradually replace costly and stupid elections where candidates talk about their penis with polling of the population on issues like gun control etc. Replace the bickering lawmakers and filibusters with polling and thresholds.


NoMoreNicksLeft 1 day ago 0 replies      
Mind control in the same way encyclopedias used to favor subjects that start with the letter A?

I swear, sometimes I think the world is inhabited by p-zombies, who don't actually think things through, but just mindlessly recombine previously consumed memes into (slightly) novel variants.

Was this written by a second grader?

andrewclunn 1 day ago 0 replies      

Use DuckDuckGo.

       cached 11 March 2016 03:11:01 GMT