Hacker News with inline top comments - 14 Mar 2016
1
Scala Center epfl.ch
47 points by neverminder  33 minutes ago   3 comments top 2
1
Cyph0n 18 minutes ago 1 reply      
What with the recent TypeSafe news, this is quite reassuring. I love Scala and would have been devastated if it ended up going into financial backing limbo.
2
airless_bar 20 minutes ago 0 replies      
Great news! Nevertheless, some people will still manage to spin this announcement into some proof that Scala is dying!!! (this time for real!)
2
Hannibal: An AI/bot for 0 A.D github.com
50 points by jonbaer  2 hours ago   8 comments top 3
1
noobie 42 minutes ago 1 reply      
Cool name.

Hannibal was Carthaginian (from modern-day Tunisia) and one of the greatest military commanders in history.[0]

His crossing of the Alps during the Second Punic War between Carthage and Rome is one of the most celebrated achievements of any military force in ancient warfare.[1]

[0] https://en.wikipedia.org/wiki/Hannibal

[1] https://en.wikipedia.org/wiki/Hannibal%27s_crossing_of_the_A...

2
lettergram 1 hour ago 2 replies      
I'm glad to see work still being done on 0 A.D.

Last time I tried playing it (over a year ago now), it would run into a memory overflow error if maps got too big.

Beyond that though, the game was pretty cool.

3
akerro 49 minutes ago 1 reply      
Is this what you get with open-source games? People write AI enemies in their spare time? I think this is the second or third alternative AI for the game.
3
How Gut Bacteria Are Shaking Up Cancer Research bloomberg.com
33 points by bookmtn  2 hours ago   discuss
4
Minecraft to run artificial intelligence experiments bbc.co.uk
95 points by sjcsjc  4 hours ago   47 comments top 8
1
kriro 3 hours ago 2 replies      
The fact that they want to use it as a teaching platform of sorts has very exciting implications. Minecraft is really huge with kids, and that's an excellent way of getting more young people interested in AI and programming in general. To this day I think the Berkeley intro to AI class[1] on edX is the best I've seen, because it uses Pacman as the running example, which makes everything more approachable. It would probably be interesting to turn some of the examples from AIMA into Minecraft examples as well.

[1] https://www.edx.org/course/artificial-intelligence-uc-berkel...

2
clishem 4 hours ago 2 replies      
I think any simulation of a reasonably complex environment is an opportunity for AI research, but I am hoping that TrueCraft (https://github.com/SirCmpwn/TrueCraft) will do well so that researchers do not depend on proprietary Minecraft.
3
nefitty 3 hours ago 2 replies      
One day soon, when the current generation that Minecraft courts is my age (mid-20s+), anyone who has never experienced Minecraft on a beyond-cursory level will have no true base from which to analyze current trends. I count myself in that group at the moment, but as I see Minecraft become a greater cultural touchstone every day, my desire to experience what it offers continues to grow.
4
10dpd 2 hours ago 3 replies      
It seems that the current progress in AI pioneered by DeepMind is predicated on being able to measure progress using a pre-defined set of rules (i.e. the game currently being played).

How will this work in a Minecraft simulation where there are no rules? Where will the feedback loop come from, e.g. a subjective evaluation by "humans"?

5
2bitencryption 3 hours ago 3 replies      
Just had a strange thought.

Say a sufficiently advanced AI agent is trained in Minecraft with the pixels on the screen as the only raw input (its 'eyes') and then trained to do simple tasks like building houses or hunting for rabbits.

Eventually, to us, these in-game players will seem to reason and 'think' about their world.

...which in my mind grants a few points to the "the universe is a computer simulation" paradox/thought experiment/theory.

6
daemin 2 hours ago 0 replies      
This sort of reminds me of the experiment that some people ran on a custom Ultima Online server back in the day. They found that some interesting behaviours emerged from the monsters, e.g. which animals were afraid of which other ones, grouping, etc.
7
brillenfux 3 hours ago 4 replies      
So skeletons will be an even worse pain in the future?

On a more serious note: I really hope they improve the mod mechanism, it's really painful to use mods as it stands. And mods are a very big part of Minecraft.

As I understand it, Minecraft will be reimplemented soon anyway?

8
z3t4 3 hours ago 2 replies      
If it weren't possible to "cheat" by using x-ray or insta-mining super tools ... it would be a fun experiment to make mining robots, etc.
5
Show HN: CodeMill a marketplace for pull requests codemill.io
16 points by shaharsol  46 minutes ago   2 comments top 2
1
afandian 6 minutes ago 0 replies      
I see a bit of a conflict between

 Does CodeMill work on private repositories as well?
 Yes, the only difference is that interested developers won't be able to fork them unless you authorize them...

 How do I screen developers?
 That's part of the beauty of CodeMill -- It's based completely on GitHub, where each developer has their public profile
This might be useful for orgs sponsoring open source projects but for orgs trying to get cheap labour on internal projects, there's no way of appraising previous similar work.

(Also, I have only ever heard the word 'mill' used in a pejorative context: "diploma mill", "essay mill", etc.)

2
michaelmior 2 minutes ago 0 replies      
Typo:

> you can unassgin them

Anyway, this seems interesting. But as a developer who completes pull requests, what's to make sure I actually get paid? The buyer could easily reject a pull request and merge the code elsewhere anyway. You say the payment is pre-approved. Does that mean it's impossible for the original developer to get their money back?

6
Should All Research Papers Be Free? nytimes.com
500 points by mirimir  13 hours ago   240 comments top 34
1
kriro 5 hours ago 6 replies      
If it is funded by government in any way (public university, research project), I think it is borderline defrauding the taxpayer that research funded by tax money is not free by default. Since close to all research is government funded in some way, shape or form... my answer would be yes in the general case.

I think the long term answer is decentralized publishing. Publish everything you do on a university or private website and let others decide if it's good or not when they want to cite it, instead of a peer review that is set in stone. I think people reading papers and deciding if they want to cite you are smart enough to figure out if it's good research or not. The peer review process is overrated (and quite often suffers from insider networks). If you decentralize publishing you can also have other researchers upvote a paper to basically approve of the academic standards in the paper. I also think the static nature of papers is a problem. I'd much rather cite a specific version of the paper. I'm thinking about git and pull requests, along the lines of "want to cite, fixed layout" or "new research disproves this", etc.

2
robertwalsh0 10 hours ago 6 replies      
Full disclosure: I'm a founder of a company called Scholastica that provides software that helps journals peer-review and publish open-access content online. One of our journal clients, Discrete Analysis, is linked to in the NYT article.

It is incredibly obvious that journal content shouldn't cost as much as it does.

- Scholars write the content for free

- Scholars do the peer-review for free

- All the legacy publishers do is take the content and paywall PDF files

Can you believe it? Paywalling. PDFs. For billions.

Of course the publishers say they create immense value by typesetting said PDFs, but as technologists, we can clearly see that this is bunk.

There's a comment in this thread that mentions the manual work involved in taking Word files and getting them into PDFs, XML, etc. While that is an issue, which you could consider a technology problem, it definitely doesn't justify the incredible cost of journal content that has been created and peer-reviewed at no cost. Keep in mind that journal prices have risen much faster than the consumer price index since the 80s (1).

The future is very clear: academics do the work as they've always done and share the content with the public at a very low cost via the internet.

PS. If you want a peek into how the publishers see the whole Sci-Hub kerfuffle, check out this post from one of their industry blogs - the comment section is a doozy: http://scholarlykitchen.sspnet.org/2016/03/02/sci-hub-and-th...

1. https://cdn1.vox-cdn.com/thumbor/jtj2dzMfklULQipRZt_3xaLoFxU...

3
payne92 12 hours ago 4 replies      
I feel especially strongly that papers that result from taxpayer-funded research should be free.
4
reuven 9 hours ago 2 replies      
When I finished my PhD at Northwestern, part of the university's procedure involved going to the ProQuest Web site. ProQuest is a journal and dissertation publishing company.

They asked if I wanted my dissertation to be available, free of charge, to anyone interested in reading it.

Clicking on "yes, I want to make it available for free" would cost me something like $800.

Clicking on "no, I'll let you charge people to see it" would cost me nothing.

Having just finished, and being in debt to do so, it shouldn't come as a surprise that I wasn't rushing to pay even more. So now, if people want to see my dissertation, they have to pay -- or be part of an institution that pays an annual fee to ProQuest. (BTW, e-mail me if you want a copy.)

My guess is that it's similar with other journals. And while professors have more money than PhD students, their research funds are limited enough that they'll hold their noses, save the money, and keep things behind a paywall.

Which is totally outrageous. It's about time this changed, and I'm happy to see what looks like the beginning of the end on this front.

5
imglorp 12 hours ago 3 replies      
Some things, like dissemination of knowledge, are truly in the interest of all humanity. It seems criminal that a few hundred people at the publishing houses should benefit at the expense of billions' welfare.
6
tomahunt 1 hour ago 1 reply      
There must be thousands of people who could use free access to research papers: PhDs and Masters now in industry trying to apply the state of the art, engineers who have worked their way into a subject, concerned citizens who want to read the source material.

I am a PhD who'd love to be working in industry, but I'm shit scared that once I leave the gates of the university I'll simply lose touch with the state of the art because the papers will no longer be accessible.

7
stegosaurus 12 hours ago 3 replies      
Everything 'should' be free. At least, that which is not scarce.

The correct question to ask is 'can' all research papers be free - does the world continue to spin, will research still happen, will we still progress, if they are free?

The only reason we even have this debate to begin with is because the producers of this information require scarce/controlled resources in order to survive.

8
bloaf 8 hours ago 0 replies      
Yes. They should.

It is in the best interests of humanity to make the knowledge obtained through research available to anyone looking for that knowledge. There is a clear consensus among scientists that the current publishing model is at best inexpedient and at worst hostile to that end.

Most people are asking what good the current publishing model provides, but I think to answer that question we need to ask: "compared to what?" It seems clear to me that the current model is better than having no publishing mechanism at all, but I doubt that anyone seriously thinks that the "none" model is the only alternative.

I think that if we sat down today and thought up a new publishing model from scratch, we would be able to outdo the status quo on just about every "good" people have mentioned here, as well as provide features that the current model is incapable of. I think it is highly likely that we could make a system that ran on donated resources alone.

Some things we might want/have in a "from scratch" model:

1. Direct access to data-as-in-a-database instead of data-as-a-graph-in-a-PDF

2. Blockchain-based reputation system for scientists

3. P2P storage and sharing of scientific data

4. Tiers of scientific information, e.g. an informal forum-of-science, semi-formal wiki-of-science, and formal publications

5. Automated peer review process

6. A better and more consistent authoring tool for scientists

9
davnn 7 hours ago 0 replies      
I think Elbakyan should do everything she can to make Sci-Hub easily replaceable. Once it's hosted in multiple places it will be much harder to shut down.

Maybe completely free research papers are not the future but there should be a Spotify for research papers that is affordable for everyone. I hope that Elbakyan will reach her goal and ultimately change the whole industry.

10
platform 11 hours ago 0 replies      
Taxpayer funded research must be free to read.

Also, research that has been at least partially tax-funded and results in a publication must not be usable as a necessary ingredient of a commercial patent.

That is, a patent can include this type of research, but it cannot be a 'necessity' for the patent to be viable. Or, if the particular research is necessary for a given patent to be viable, the patent must grant no-fee, no-commercial-strings-attached use.

This allows a corporation to establish patents as a means to protect itself, while allowing the tax-funded research to be used by others without commercial strings attached.

11
cft 7 hours ago 1 reply      
Publishing used to cost money when it required physical printing/distribution/storage of journals. Now all of this is basically free, but they still charge. Most theoretical physicists, for example, only care about "publishing" on the arXiv (all free, open source). Traditional publishing is ridiculous.
12
denzil_correa 11 hours ago 0 replies      
> "The real people to blame are the leaders of the scientific community -- Nobel scientists, heads of institutions, the presidents of universities -- who are in a position to change things but have never faced up to this problem in part because they are beneficiaries of the system," said Dr. Eisen. "University presidents love to tout how important their scientists are because they publish in these journals."

For me, this is the crux of the problem. People who are in a position to change things should push for it.

13
jammycakes 12 hours ago 0 replies      
Something I'd like to see here: results published in research papers aggregated and released as open data.

There must be a lot of interesting meta-analyses that aren't getting done because the necessary data is locked away behind paywalls, and usually not in an easily machine readable format into the bargain.

14
arbre 11 hours ago 4 replies      
Can someone explain to me why the researchers themselves don't publish their work for free? The article says they are not paid for the articles, so I don't see why they couldn't do that.
15
mrdrozdov 11 hours ago 1 reply      
This isn't the right question. The question is, "Who should be profiting from research papers?" The Journal performs quality control for the sake of consistency and prestige, but the papers and their reviews are put together by researchers, commonly at great cost for marginal personal gain. The article's hero doesn't really care. She needs to read papers, and needs other people to be able to read them, so she built sci-hub (demo: https://sci-hub.io/10.1038/nature16990).
16
ycmbntrthrwaway 12 hours ago 1 reply      
The main problem with tax-funded research and grants is that money is given in return for citations in journals with a high "impact factor". As a result, publishers of those journals are indirectly supported by the state. Instead, government or funding organizations should review the results of the work for themselves, but they are unable to do it, because they usually don't understand a thing about the research subject.
17
return0 11 hours ago 0 replies      
I hope this publicity doesn't lead to a swift shutdown of Sci-Hub. She provides us with a great service that helps many researchers work faster. We should also commend her for stirring the most lively debate about an anachronistic and dumb publishing system.
18
ajuc 4 hours ago 0 replies      
Science funded by taxes should be free, obviously.
19
guico 5 hours ago 0 replies      
I wonder, in the end what's really holding open-access publishing back? What can we do, as technologists, to help fix it?
20
pmarreck 7 hours ago 0 replies      
How is something that is, in essence, "truth determination and dissemination," not free?
21
baby 5 hours ago 0 replies      
You don't want me to read your paper? Charge for it.
22
leed25d 8 hours ago 0 replies      
Research funded by taxpayer dollars should be free to those who paid for it.
23
jeffdavis 7 hours ago 1 reply      
Nothing is "free", the only question is: "who pays, and how does that change the incentives and results?".
24
sandra_saltlake 6 hours ago 0 replies      
Require open-access publication.
25
Chinjut 10 hours ago 1 reply      
Yes.

("Betteridge's law of headlines" fails)

26
dschiptsov 6 hours ago 0 replies      
At least they will get wider and less biased and institution-conditioned reviews.
27
kombucha2 9 hours ago 0 replies      
Yes
28
catnaroek 10 hours ago 2 replies      
What follows is just my very uninformed opinion. I'm not a scientist myself, but my interest in CS and math has made me an avid reader of scientific papers and books - whenever they're publicly available, that is.

What publishing houses do is exploit the rules of the social games that scientists themselves willingly play. When the importance of an academic work is judged by the names of its authors, or by the name of the journal in which it is published, or by the presence of fashionable keywords in its title or in the abstract, scientists are giving publishing houses the very rope with which they will be hanged. So, while the behavior of publishing houses is certainly antisocial and most abominable, it is only made possible by the very scientific community that condemns it.

Is there any fundamental reason why scientists can't always submit their papers to the arXiv, and let the web of citations determine their relative importance?

29
x5n1 11 hours ago 1 reply      
What benefit do the publishers provide to anyone? Why do the publishers deserve billions of dollars?
30
yeukhon 12 hours ago 1 reply      
I am still willing to pay for a high-quality printed version of research journals, but online access I think we should simply give away, because research knowledge should belong in the public domain once you choose to publish it with a research journal. You are not publishing a paper within your 4x4 walled intranet.

But I get it. There is a business cost behind running a journal/magazine (although not all reputable ones charge fees!). So here is the radical question: why the fuck do we need 100+ CS-related journal publishers out there? All we need is one.

31
lacker 11 hours ago 0 replies      
Why does it cost $3000 to publish an article?? You can put it on Medium for free.
32
jimjimjim 11 hours ago 5 replies      
unpopular opinion ahead: no, and probably not even for taxpayer funded research.

Can you demand a lift in a garbage truck? Or in a tank? Both of these things are provided by local or central government. Why not? Because it distracts from the job that they are there for. The same can be said for research (and source code). It takes time, effort and money to publish and peer-review research. If journals can't make money providing access to the research, who is going to pay for it?

Also there is currently a lot of BAD research out there. Domain experts don't have time to review all of it. Journals with prestigious names act as filters and as sort of priority queues for where you should look first.

33
arek_ 12 hours ago 1 reply      
Who will produce meaningful research for free?
34
ikeboy 12 hours ago 1 reply      
>The largest companies, like Elsevier, Taylor & Francis, Springer and Wiley, typically have profit margins of over 30 percent, which they say is justified because they are curators of research, selecting only the most worthy papers for publication.

> But that financial model requires authors to pay a processing charge that can run anywhere from $1,500 to $3,000 per article so the publisher can recoup its costs.

These two facts seem to point strongly to the publishers' being in the right. 30% is not a high number. If they were to lower their prices by 30%, running completely as nonprofits (or whatever number would break even), do you think people's complaints about difficulty of access would go away? If not, your complaining is not about their profit.

And you can't seriously expect them to eat a >$1000 loss on every paper.

Either we need a single party to fund upfront, like the government, or we need some other way to pay for it.

7
Common Search nonprofit search engine for the Web commonsearch.org
50 points by hjacobs  4 hours ago   6 comments top 5
1
libeclipse 16 minutes ago 0 replies      
I've tried switching from Google to different search engines numerous times, but each time I've returned to Google simply because the searches are better. They're more accurate, more relevant, and I very rarely find myself searching more than once to find something.

If commonsearch can beat Google in that regard, then count me in. But I doubt it will.

2
onion2k 50 minutes ago 1 reply      
One thing I hope this project does that Google fails to do is give developers a good API to access search. Google closed down their first web search API and now only give developers access to a limited Custom Search API that's rate limited to 100 queries a day for free with a hard limit of 10k searches - that makes it either very hard to develop anything against or relatively expensive. There are other options (Bing, Faroo, raw access to CommonCrawl) but they're either low quality or hard to work with. A good quality, straightforward, open web search API would be awesome.
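For reference, the Custom Search JSON API mentioned above is a single GET endpoint. A minimal sketch in Python: the key and cx values are placeholders (real credentials come from the Google developer console), and quota limits are as the comment states.

    import requests

    # Placeholders -- not real credentials.
    params = {
        "key": "YOUR_API_KEY",
        "cx": "YOUR_SEARCH_ENGINE_ID",   # the custom search engine to query
        "q": "common crawl",             # the search terms
    }

    # Each request like this one consumes a query from the daily quota.
    resp = requests.get("https://www.googleapis.com/customsearch/v1", params=params)
    resp.raise_for_status()

    for item in resp.json().get("items", []):
        print(item["title"], "->", item["link"])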
3
faizshah 53 minutes ago 0 replies      
I like it!

The explainer tool gives a really cool insight into the results: https://explain.commonsearch.org/

4
jdimov10 1 hour ago 0 replies      
If it keeps being THAT fast after they've indexed the whole web, I'm switching search providers! :)
5
jasode 39 minutes ago 0 replies      
I like the project's goal but as techies, we inevitably want to understand the technical details and how it helps (or handicaps) the search results in comparison with Google.

For example, the project's data sources[1] says that the bulk of data comes from The Common Crawl. It looks like the CC is ~150 TB of data[2]. I'm not familiar with google.com internals but various sources estimate that their proprietary crawl dataset is more than a petabyte. (A googler could chime in here with more accurate data.)

So it's not as simple as the algorithm for Common Search being "more fair" than the algorithm for Google Inc. The underlying dataset in terms of quantity, recency, rules for the robot, etc all affect the algorithm.

This is not a criticism of the project. It is my attempt to understand what is not obvious on the surface level.

[1]https://about.commonsearch.org/data-sources

[2]http://commoncrawl.org/2015/12/november-2015-crawl-archive-n...

(I can't tell if each archive of MM/YYYY is cumulative or an addendum.)

8
Yann LeCun's comment on AlphaGo and true AI facebook.com
227 points by brianchu  10 hours ago   131 comments top 27
1
Smerity 7 hours ago 5 replies      
Preface: AlphaGo is an amazing achievement and does show an interesting advancement in the field.

Yet ... it really doesn't mean almost anything that people are predicting it to mean. Slashdot went so far as to say that "We know now that we don't need any big new breakthroughs to get to true AI". The field of ML/AI is in a fight where people want more science fiction than scientific reality. Science fiction is sexy, sells well, and doesn't require the specifics.

Some of the limitations preventing AlphaGo from being general:

+ Monte Carlo tree search (MCTS) is really effective at Go but not applicable to many other domains we care about. If your problem is in terms of {state, action} pairs and you're able to run simulations to predict outcomes, great, but otherwise, not so much. Go also has the advantage of perfect information (you know the full state of the board) and deterministic simulation (you know with certainty what the state is after action A). (A minimal sketch of the MCTS loop follows after these bullets.)

+ The neural networks (NN) were bootstrapped by predicting the next moves in more matches than any individual human has ever seen, let alone played. It then played more against itself (cool!) to improve - but it didn't learn that from scratch. They're aiming to learn this step without the human database but it'll still be very different (read: inefficient) compared to the type of learning a human does.

+ The hardware requirements were stunning (280 GPUs and 1920 CPUs for the largest variant) and were an integral part to how well AlphaGo performed - yet adding hardware won't "solve" most other ML tasks. The computational power primarily helped improve MCTS which roughly equates to "more simulations gets a better solution" (though with NNs to guesstimate an end state instead of having to simulate all the way to an end state themselves)
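
To make the first bullet concrete, here is a minimal sketch of the vanilla MCTS loop in Python -- selection, expansion, simulation, backpropagation. It is a single-player simplification under stated assumptions: the callbacks legal_actions, step, is_terminal and reward are hypothetical placeholders for a concrete game, the root state is assumed non-terminal, and AlphaGo additionally guides this loop with policy/value networks and alternates the reward sign between players.

    import math
    import random

    def mcts(root_state, legal_actions, step, is_terminal, reward,
             n_iter=1000, c=1.4):
        # One search-tree node per reached {state}; edges are actions.
        class Node:
            def __init__(self, state, parent=None):
                self.state, self.parent = state, parent
                self.children = {}                # action -> Node
                self.visits, self.value = 0, 0.0  # visit count, mean rollout reward

        def ucb1(child, parent_visits):
            # Try unvisited children first; otherwise balance the observed
            # mean value against an exploration bonus.
            if child.visits == 0:
                return float("inf")
            return child.value + c * math.sqrt(math.log(parent_visits) / child.visits)

        root = Node(root_state)
        for _ in range(n_iter):
            node = root
            # 1. Selection: descend by UCB1 until an unexpanded node.
            while node.children and not is_terminal(node.state):
                a = max(node.children, key=lambda x: ucb1(node.children[x], node.visits))
                node = node.children[a]
            # 2. Expansion: add one child per legal action, pick one at random.
            if not is_terminal(node.state):
                for a in legal_actions(node.state):
                    node.children[a] = Node(step(node.state, a), parent=node)
                node = random.choice(list(node.children.values()))
            # 3. Simulation: random rollout using the deterministic simulator.
            state = node.state
            while not is_terminal(state):
                state = step(state, random.choice(legal_actions(state)))
            r = reward(state)
            # 4. Backpropagation: update running mean rewards up to the root.
            while node is not None:
                node.visits += 1
                node.value += (r - node.value) / node.visits
                node = node.parent

        # Act with the most-visited root action.
        return max(root.children, key=lambda x: root.children[x].visits)

Note how the hardware point above maps onto this loop: every extra CPU buys more iterations, and every iteration is another simulated game, which is why "more simulations gets a better solution" and why the approach needs a fast, deterministic, perfect-information simulator in the first place.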

Again, amazing, interesting, stunning, but not an indication we've reached a key AI milestone.

For a brilliant overview: http://www.milesbrundage.com/blog-posts/alphago-and-ai-progr...

John Langford also put his opinion up at: http://hunch.net/?p=3692542

(note: copied from my Facebook mini-rant inspired by Langford, LeCun, and discussions with ML colleagues in recent days)

2
fallingfrog 18 minutes ago 0 replies      
I don't know if I'd agree that unsupervised learning is the "cake" here, to paraphrase Yann LeCun. How do we know that the human brain is an unsupervised learner? The supervisor in our brains comes in the form of the dopamine feedback loop, and exactly what kinds of things it rewards aren't totally mapped out but pleasure and novelty seem to be high on the list. That counts as a "supervisor" from a machine learning point of view. It's not necessary to anthropomorphize the supervisor into some kind of external boss figure; any kind of value function will do the trick.
3
hacknat 9 hours ago 5 replies      
I think we need more advances in neuroscience and, I know this will be controversial, psychology before we really know what the cake even is.

Edit:

I actually think the major AI breakthrough will come from either of those two fields, not computer science.

4
pavanky 8 hours ago 3 replies      
Can someone more knowledgeable explain why biological systems are considered unsupervised rather than reinforcement-based systems?

While it seems intuitive that most individual "intelligent" systems in animals can be seen as unsupervised, isn't life itself driven in a reinforced manner?

5
sago 4 hours ago 2 replies      
Ah, the joys of arguing about artificial intelligence without ever defining intelligence.

It is the perfect argument, everyone can forcefully make their points forever, and we'll be none the wiser whether this AI is 'true AI' or not.

6
grumpy-buffalo 8 hours ago 9 replies      
I wish the term "true AI" were replaced with "strong AI" or "artificial general intelligence" or some such term. We already have true AI - it's a vast, thriving industry. AlphaGo is obviously a true, legitimate, actual, real, nonfictional example of artificial intelligence, as are Google Search, the Facebook Newsfeed, Siri, the Amazon Echo, etc.
7
kzhahou 9 hours ago 2 replies      
I'm surprised he'd make such an optimistic statement. I think a better analogy would be:

We figured out how to make icing, but we still don't really know what a cake is.

8
johanneskanybal 20 minutes ago 0 replies      
Clickbait titles aside, it's an amazing achievement.
9
Houshalter 7 hours ago 2 replies      
No one is claiming that AlphaGo is close to AGI. At least not anyone who understands the methods it uses. What AlphaGo is, is an example of AI progress. There has been a rapid increase in progress in the field of AI. We are still a ways away from AGI, but it's now in sight. Just outside the edge of our vision. Almost surely within our lifetime, at this rate.
10
scotty79 3 hours ago 2 replies      
> As I've said in previous statements: most of human and animal learning is unsupervised learning.

I don't think that's true. When a baby is learning to use the muscles of its hands to wave them around, there's no teacher to tell it what its goal should be. But physics and pain teach it fairly efficiently which moves are a bad idea.

It has a built-in face-detection engine, and orienting toward and attempting to move and reach for faces is a clear goal. Reward circuits in the brain do the supervision.

11
sytelus 6 hours ago 5 replies      
If you look at how a child learns, there's a huge amount of supervised learning. Parents spend lots of time on dos and don'ts and on giving specific instructions on everything from how to use the toilet to how to construct a correct sentence. Lots of language development, object identification, pattern matching, comprehension, math skills, motor skills, developing logic - these activities involve a huge amount of supervised training that runs day after day and year after year. There are surely unsupervised elements, like the ability to recognize phonemes in speech, track objects, make inferences despite occlusion, stand up and walk, make meaningful sounds, identify faces, construct a sequence of actions to achieve a goal, avoid safety risks from past experience, and so on. However, the typical child goes through an unparalleled amount of supervised learning. There was an instance of a child who was locked up in a room for over a decade, and she didn't develop most of the expected language, speech or social skills. It seems unsupervised learning can't be all of the cake.
12
Animats 6 hours ago 0 replies      
What we need next are more systems which can predict "what is likely to happen if this is done". Google's automatic driving systems actually do that. Google tries hard to predict the possible and likely actions of other road users. This is the beginning of "common sense".
13
flashman 8 hours ago 1 reply      
If artificial intelligence is the cake, true AI is the ability to argue about whether cake is a useful analogy.
14
fiatmoney 8 hours ago 0 replies      
There's also a huge issue around problem-posing and degrees of freedom, that doesn't necessarily get better as your AI tools improve. Go has a fairly large state space, but limited potential moves per turn, well-defined decision points, limited time constraints, and only one well-defined victory condition. The complexity is minuscule compared to even something relatively well-structured like "maximize risk-adjusted return via stock trades".
15
kailuowang 9 hours ago 3 replies      
Can someone elaborate on the difference between reinforcement learning and unsupervised learning? Maybe I mistakenly think that humans learn through reinforcement learning, that we learn from feedback from the outside world. I mean, without feedback from an adult, can a baby even learn how to walk?
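One way to see the distinction in miniature (a sketch with made-up numbers, not a claim about how brains work): a reinforcement learner receives only a scalar reward after each action, while an unsupervised learner receives no feedback signal at all and can only look for structure in its input.

    import random

    # Reinforcement learning: a 2-armed bandit, learning from reward alone.
    true_p = [0.3, 0.7]                  # hidden payout probabilities (made up)
    est, counts = [0.0, 0.0], [0, 0]
    for _ in range(5000):
        # epsilon-greedy: usually exploit the best-looking arm, sometimes explore.
        arm = random.randrange(2) if random.random() < 0.1 else est.index(max(est))
        reward = 1.0 if random.random() < true_p[arm] else 0.0   # the feedback
        counts[arm] += 1
        est[arm] += (reward - est[arm]) / counts[arm]            # incremental mean
    print(est)      # approaches [0.3, 0.7], learned purely from reward

    # Unsupervised learning: two-means clustering, with no feedback at all.
    data = ([random.gauss(0, 1) for _ in range(200)] +
            [random.gauss(5, 1) for _ in range(200)])
    centers = [min(data), max(data)]
    for _ in range(20):
        groups = [[], []]
        for x in data:
            groups[abs(x - centers[0]) > abs(x - centers[1])].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    print(centers)  # approaches [0, 5]: structure found without any signal

Supervised learning is the third regime: the correct answer itself is provided for every training example. The disagreement in this thread is really about whether feedback like pain or parental correction makes human learning "reinforcement" or "supervised" rather than unsupervised.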
16
Geekette 9 hours ago 0 replies      
The statement that he's critiquing does reflect the widespread, overly simplistic view of AI. Contrary to hype, recent events represent only a partial peeling of the top layer from the AI onion, which has more known unknowns and unknown unknowns than known knowns.
17
rdlecler1 6 hours ago 0 replies      
We keep trying to engineer AI rather than reverse-engineer it. The thing with living organisms is that the neural network underlying their intelligence is the product of the evolutionary design of an organism situated in the real physical world, with laws of physics, space and time. This is where the bootstrapping comes in. Unsupervised learning is built on top of this. Trying to sidestep this could make it difficult to get to general AI.
18
esfandia 8 hours ago 1 reply      
It seems that AI does well when the problem and the performance metrics are well defined: chess, Go, various scheduling problems, pattern recognition, etc. At the very least we can track, quantitatively, how far off we are from a satisfactory solution, and we know we can only ever get closer.

"True", or general-purpose AI, is harder to pin down, and thus harder to define well. I'd argue that the moment we have define it formally (and thus provided the relevant performance metrics) is the moment we have reduced it to a specialized AI problem.

19
megaman821 8 hours ago 4 replies      
It seems to me one of the higher hurdles for creating a general-purpose intelligence is human empathy. Without it you are left creating a nearly infinite-length rules engine.

When you ask your AI maid to vacuum your house, you would prefer it not to plow through the closet door to grab the vacuum, rip the battery out of your car and hardwire it to the vacuum, and then proceed to clean your carpets. If you don't want to create a list of rules for every conceivable situation, the AI will need to have some understanding of human emotions and desires.

20
LERobot 10 hours ago 0 replies      
Totally agree. It's a bit like when some physicists were convinced that there wouldn't be other great breakthroughs after Maxwell's theory of electromagnetism. Maybe Yann LeCun is the Einstein of machine learning? Haha.
21
javajosh 9 hours ago 4 replies      
Is anyone working on an embodied AI? Even a simulated body might help. Ultimately intelligence is only useful insofar as it guides the body's motion. We often tend to minimize the physical act of say, writing down a theorem or actually applying paint to the canvas, but there are certain actions like playing a musical instrument that certainly blur the distinction between "physical" and "mental". Indeed, even 'purely mental' things like having an "intuition" about physics is certainly guided by one's embodied experience.
22
_snydly 8 hours ago 1 reply      
I have Facebook blocked for the next week (because, you know, productivity). Can someone post LeCun's comment here?
23
chriscappuccio 7 hours ago 0 replies      
Like, duh.
24
juskrey 8 hours ago 0 replies      
True life has no rules
25
ronilan 9 hours ago 1 reply      
The cake is a lie. Obviously :)
26
daxfohl 8 hours ago 0 replies      
Once an AI algorithm (even just one for Go) realizes that it can hijack the bank accounts of all the world's other 9 dan players in order to demand an analysis of its planned move, and figures out how to do that, then we've made the cake.

N.B. the genericity of the deepmind stuff that is the basis of AlphaGo makes this seem not entirely far-fetched.

Yum, cake.

27
justsaysmthng 1 hour ago 0 replies      
Stop looking at the red dot. Take a step back and look around you. "True" AI is here and it's been here for some time. You're communicating with it right now.

It's just that we find it so hard to comprehend its form of "intelligence", because we're expecting true AI to be a super-smart, super-rational humanoid being from sci-fi novels.

But what would a super-smart, super-rational being worth 1 billion minds look/feel like to one human being? How would you communicate with it?

Many people childishly believe that "we" have control over "it". You don't. We don't.

The more we get used to it being inside our minds, the harder it becomes to shut it down without provoking total chaos in our society. Even with the chaos, there is no one person (or group) who can shut it down.

But "we" make the machines ! Well... yes, a little bit..

Would we be able to build this advanced hardware without computers? Doesn't this look like machines reproducing themselves with a little bit of help from "us"?

Think about the human beings from the Internet's perspective - what are we for it ? Nodes in a graph. In brain terms - we are neurons, while "it" is the brain.

But it's not self-aware! What does that even mean?

Finally, consider that AlphaGo would have been impossible without the Internet and the hardware of today.

And that "true" AI that everybody expects somewhere on the horizon will also be impossible without the technology that we have today.

If so, then what we have right now is the incipient version of what we'll have tomorrow - that "true" AI won't come out of thin air, it will evolve out of what we have right now.

Just another way of saying the same thing - it's here.

Is this good or bad? Well, that's a totally different discussion.

9
ExoMars launch scheduled for 09:31 GMT (10:31 CET) esa.int
33 points by jpatokal  4 hours ago   8 comments top 2
1
creshal 1 hour ago 1 reply      
Launch successful so far:

http://www.esa.int/Our_Activities/Space_Science/ExoMars/ExoM...

Half of the recent Proton losses have been from third-stage failures (the third stage worked here), the other half from the Briz-M upper stage (which still needs to do a few more burns).

2
manaskarekar 1 hour ago 2 replies      
Is there an alternate link, perhaps YouTube? Livestream never seems to work for me.
10
Managing two million web servers joearms.github.io
296 points by timf  12 hours ago   79 comments top 14
1
klibertp 8 minutes ago 0 replies      
OMG, guys, this is getting really strange. Half of the commenters here read the word "process" and jumped to their own conclusions, possibly true in general, but obviously wrong in the case of Erlang.

It bears repeating: Erlang processes are not OS-level processes. The Erlang virtual machine, BEAM, runs in a single OS-level process. Erlang processes are closer to green threads, or tasklets as known in Stackless Python. They are extremely lightweight, implicitly scheduled user-space tasks which share no memory. Erlang schedules its processes on a pool of OS-level threads for optimal utilization of CPU cores, but this is an implementation detail. What's important is that Erlang processes provide isolation in terms of memory use and error handling, just like OS-level processes. Conceptually both kinds of processes are very similar, but their implementations are nothing alike.

2
frik 7 hours ago 2 replies      
By the same way of speaking, you could say Facebook manages billions of PHP web servers, though no one speaks like that. (PHP has a shared-nothing architecture; HHVM works similarly to the Erlang VM, if one can say so.)
3
mpweiher 5 hours ago 0 replies      
Beautiful way of putting it. Also very close to Alan Kay's vision of "object oriented"

"In computer terms, Smalltalk is a recursion on the notion of computer itself. Instead of dividing computer stuff into things each less strong than the whole like data structures, procedures, and functions which are the usual paraphernalia of programming languages each Smalltalk object is a recursion on the entire possibilities of the computer. Thus its semantics are a bit like having thousands and thousands of computer all hooked together by a very fast network." -- The Early History of Smalltalk [1]

I also personally like the following: a web package tracker can be seen as a function that returns the status of a package when given the package id as argument. It can also be seen as follows: every package has its own website.

I think the latter is vastly simpler/more powerful/scalable.

What's interesting is that both of these views can exist simultaneously, both on the implementation and on the interface side.

[1] http://gagne.homedns.org/~tgagne/contrib/EarlyHistoryST.html

4
stephen_mcd 10 hours ago 1 reply      
I really love the idea of explaining the actor model as tons of tiny little servers compared to a single monolithic server. I tried to make the same comparison recently when I talked about adding distributed transactions to CurioDB (Redis clone built with Scala/Akka): http://blog.jupo.org/2016/01/28/distributed-transactions-in-...
5
rodionos 6 hours ago 2 replies      
The title is somewhat misleading. I clicked expecting to read how someone is managing 2 million web server instances such as nginx or Apache. I was curious what kind of company would claim that.
6
kennydude 3 hours ago 1 reply      
> why does the Phoenix Framework outperform Ruby on Rails?

Ruby is known to be a slow language. Most things will easily outperform it

7
z3t4 5 hours ago 2 replies      
I would like to see the code for the chat or presence server. I have a hunch it will look different depending on the experience of the programmer.

I'm especially interested in how they manage state, because when you do not have to manage state, everything becomes easy and scalable. By state I mean, for example, a status message for a particular user.

8
andy_ppp 2 hours ago 1 reply      
I get the feeling that people reading this and saying "it's just kind of like a pool of PHP FastCGI instances or Apache worker pools" etc. do not understand that Phoenix + Elixir can serve minimal dynamic requests only about 20% slower than nginx can serve static files. This is very, very fast.

It also leads to better code due to being functional, with lots of amazing syntactic sugar like the |> operator, and OTP easily allows you to move processes (which have very little overhead) to different machines as you wish to scale. Pattern matching and guards are also incredible.

I really do not want to write anything else!

9
akkartik 11 hours ago 5 replies      
This article got me to go figure out precisely what Erlang processes are. Tl;dr - they aren't OS processes. So it is still conceivable that an error in Erlang can bring down all your web servers.

http://stackoverflow.com/questions/2708033/technically-why-a...

10
smaili 9 hours ago 6 replies      
Could someone explain how, in the context of the article, a "process" differs from a "thread" in, say, Java or Python? Or are they one and the same?
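They're related but distinct at the OS level, and memory sharing is the usual dividing line (and, as described above, Erlang's "processes" are neither of these -- they're far lighter-weight tasks inside one OS process). A minimal Python sketch of the difference:

    import threading
    import multiprocessing

    counter = {"n": 0}

    def bump():
        counter["n"] += 1

    if __name__ == "__main__":
        t = threading.Thread(target=bump)
        t.start(); t.join()
        print(counter["n"])   # 1 -- the thread mutated our memory directly

        p = multiprocessing.Process(target=bump)
        p.start(); p.join()
        print(counter["n"])   # still 1 -- the child process changed only its own copy

The same split holds in Java: a Thread shares the heap with its parent, while a separately launched JVM process does not.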
11
siscia 4 hours ago 0 replies      
A little while ago I wrote an extremely short introduction to distributed, highly scalable, fault-tolerant systems.

It is marketing material for my consulting activity, but some of you may find it interesting.

The PDF is here: https://github.com/siscia/intro-to-distributed-system/blob/m...

The source code is open, so if you find a better way to describe things feel free to open an issue or a pull request...

12
rasengan 7 hours ago 2 replies      
I can't help but think this is madly inefficient with cache misses and the like.
13
sandra_saltlake 6 hours ago 0 replies      
Processes do not share memory, but threads may.
14
jondubois 11 hours ago 7 replies      
You don't need to "crash the server" in response to an error from a single user - It is sufficient to just close the connection and destroy the session.

I doubt that Erlang spawns millions of OS processes, because that would be extremely inefficient due to CPU context switching. So in reality, all Erlang is doing behind the scenes is closing the connection and destroying the session... It's not actually crashing and restarting any processes... You can easily implement this behavior with pretty much any modern server engine such as Node.js or Tornado.

12
Enrollment Is Surging in Machine Learning Classes nvidia.com
4 points by Osiris30  30 minutes ago   discuss
13
Ask HN: Who is most likely to develop true AI?
21 points by bossx  48 minutes ago   26 comments top 22
1
TelmoMenezes 19 minutes ago 1 reply      
I very much doubt there will be a Hollywood-movie event where "company X developed true AI" is in the news. I think it's more likely that more and more algorithmic building blocks as well as computational power gradually become available, and that humanity collectively gets closer to AGI capabilities.

Many of the algorithms that are making headlines these days are decades old. It seems to me that we just crossed some computational power threshold a couple of years ago that made it possible to produce qualitatively better results with the algorithms that we already knew about. There wasn't any qualitative breakthrough -- just years and years of small incremental improvements, both on the computer science and hardware fronts -- and there won't be a "monopoly of AI". That is industrial-era thinking. AI is not like railways, light bulbs or power plants.

On the other hand, I am worried that all the hype currently surrounding AI could lead to a second AI winter. The media cycle and investors seem to have terribly low attention spans, and are prone to lose interest as easily as they become hyper-excited.

2
3
pjc50 4 minutes ago 0 replies      
Google are the obvious candidate, with Apple close behind.

As I see it, it's most likely to evolve from a combination of translation software (which needs some sense of the semantics and context of a sentence to do really well) and DWIM-orientated search. But attempts at building a huge ontological model of the world in the hope of producing a formal-reasoning intelligence have been going for years with little fruit (Cyc etc.).

Next AI step to watch for: computerised bureaucracy assistants. It's one thing to have a team of human experts turning the tax code into a program; it's another to be able to just feed all the text of the law into a program and then engage in a dialogue with it to have it fill in the form for you.

4
5
3pt14159 19 minutes ago 0 replies      
Either Google or the US DoD. The former has the raw information, researchers, and computation skills; the latter has the raw funding and motive.

A state actor with AGI will own electronic warfare. AGI would also be able to bring swift advancements to the first state that created it, so inventing it would quickly be followed by leaps in conventional weapons. Assuming the AGI wants to collaborate.

6
underscoremark 32 minutes ago 1 reply      
7
JamesMcMinn 34 minutes ago 0 replies      
I'm not sure we'll ever get to Strong AI, but if we do, I don't expect any company that exists today will still be around.
8
tim333 18 minutes ago 0 replies      
DeepMind, who wrote AlphaGo, look like the favourites. They are owned by Google but are not Google's only AI operation. They have produced a general but far sub-human-level AI system to play Atari games and are working on reverse-engineering the hippocampus. https://www.youtube.com/watch?v=0X-NdPtFKq0&feature=youtu.be...
9
awinter-py 9 minutes ago 0 replies      
Scarier question -- who is most likely to have already developed 'true AI' (let's define this as a bot that can make jokes about the presidential elections and also write code in, say, OCaml)?

And why haven't they released it? (Oscar Isaac probably has his own opinion about why.)

10
antocv 12 minutes ago 0 replies      
Intelligence is not defined. From your quote, what is a general mental capability? Mental? What is that? Is that when you eat a lot of Mentos while listening to metal and thus become mental?

No human fits the "Strong AI" definition. I believe no human can fulfill Professor Linda's definition of intelligence; if they think they can, well, that's just, like, your opinion, man -- just like I can program a chatbot to always respond "yes" to the question "are you cool".

11
gus_massa 38 minutes ago 2 replies      
From the FAQ:

> How do I submit a poll?

> . http://news.ycombinator.com/newpoll .

But I think you need at least 200 karma to use it.

My opinion is that nobody will do it, because "AI is whatever hasn't been done yet." https://en.wikipedia.org/wiki/AI_effect

12
bossx 48 minutes ago 0 replies      
Google
13
stray 22 minutes ago 0 replies      
> Who is most likely to develop a true AI system?

Some completely unknown amateur will stumble upon it more or less by accident.

14
hunterjrj 34 minutes ago 0 replies      
Two Kids in a Garage
15
bossx 48 minutes ago 0 replies      
Baidu
16
jordhy 27 minutes ago 0 replies      
Leslor
17
fsiefken 17 minutes ago 0 replies      
ben goertzel
18
bossx 47 minutes ago 0 replies      
Amazon
19
bossx 47 minutes ago 0 replies      
Apple
20
bossx 48 minutes ago 0 replies      
Microsoft
21
bossx 48 minutes ago 0 replies      
Facebook
22
TheLogothete 22 minutes ago 0 replies      
I don't think there is a possibility of it happening at all (in the foreseeable future).
14
Go Ahead and Change Bodies; Just Remember to Take Your Soma exolymph.com
54 points by exolymph  5 hours ago   2 comments top 2
1
DannoHung 13 minutes ago 0 replies      
What a disjointed story. I am not sure what the things it wanted to tell me were. It presented like 4 or 5 unrelated high-concept points, but they didn't tie together meaningfully.
2
mdip 15 minutes ago 0 replies      
Interesting read that strikes home for me a bit.

When I was a kid, my mom was a working mother -- a terribly unpopular choice in rural Michigan, where we lived. My mom drove 30 minutes away from home to get me to a suitable child-care provider. She didn't pick the place because it was close to work -- she was a traveling salesperson -- she picked it because it was the closest available (located at a church, because commercial daycare was non-existent where we lived). When I started looking for a partner in life, one of my "absolutes" was to find a woman who would be OK with one of us staying home with our children (I was and still would be perfectly fine with being a stay-at-home dad). As a result, my ex-wife stayed home with the children (and my current wife does the same).

The tables have turned in my adulthood. It's unusual to have a parent stay home. The job is as hard as it was when I was a kid (if not harder, due to social pressures that make women feel badly for not pursuing careers). As a society (in the US anyway), we're set up for both parents to work full time. When my wife and I chose not to enroll our children in pre-school, most of our friends thought we were out of our minds and holding our children back. As parents we felt it was unnecessary to hire a professional educator to teach our children the ABCs/123s, and we were surprised at the reaction we received even from families who didn't have two working parents. As a child, preschool was purely the domain of families of working parents, and enrollment was considered an unnecessary expense when mom or dad was available.

Personally, I don't have strong feelings about stay-at-home vs. work-full-time parents. I knew I didn't enjoy my experience as a kid, but I knew that a lot of that was due to jealousy, since nearly all of my friends had moms that stayed home. Our kids are school-age now, but we're still a single-earner family by choice. Having children is a lot of work, and I personally don't think we could handle raising them the way we desire if we both had priorities outside of the home. Don't read that as me saying that it cannot be done -- I'm sure it can, as my parents remind me: "you turned out just fine"[1] -- I just don't know that I could do it.

[1] People get very touchy about parenting choices. When I told my family that we had no intention -- from the beginning -- to have both of us working, they assumed that we were calling them bad parents for choosing differently. I know the situation my family was in when I was young. They had little choice but to both work in order to keep the lights on. And I'm not the least bit upset with how I was raised, but the added time we get to enjoy our children and the reduced stress that I experience knowing that our home life is taken care of while I'm working is worth the loss of income, especially since we live in a place where we can comfortably get by on my salary.

15
Lessons Learned from a Year of Elasticsearch in Production scrunch.com
31 points by kiyanwang  5 hours ago   2 comments top
1
amelius 2 hours ago 1 reply      
How difficult would it be for the Elasticsearch team to make the software run smoothly as a black box, i.e., without administrator supervision?
16
Mediterranean Sea filled in less than two years: study phys.org
135 points by curtis  14 hours ago   50 comments top 10
1
dredmorbius 9 hours ago 0 replies      
WildwoodClaire's video on the Messinian Salinity Crisis gives an excellent background on the paleogeology of the Mediterranean basin, particularly as expressed through salt deposits up to 3500 meters thick, but also with considerable evidence of ancient channels both in the regions surrounding the Mediterranean (as at the Aswan High Dam) and underneath the existing sea.

Her enthusiasm is also infectious.

https://youtu.be/U5qTQpws5H0

2
curtis 13 hours ago 1 reply      
There is a similar theory about the Black Sea: the Black Sea deluge hypothesis [1].

From Wikipedia:

The Black Sea deluge is a hypothesized catastrophic rise in the level of the Black Sea circa 5600 BC from waters from the Mediterranean Sea breaching a sill in the Bosphorus strait.

(This one, if true, is pretty interesting because it occurred within fairly recent human history.)

[1] https://en.wikipedia.org/wiki/Black_Sea_deluge_hypothesis

3
ksgifford 13 hours ago 3 replies      
XKCD did a comic about this phenomenon a few years ago. The story is set in the far future, when the Med has dried up again and the next flood that refills it is imminent. http://blog.xkcd.com/2013/07/29/1190-time/
4
yeldarb 12 hours ago 6 replies      
Is there a contemporary basin ripe for a catastrophic flood from the sea in the event of an earthquake?
5
grumblestumble 9 hours ago 1 reply      
If you like sci-fi, spoiler alert: Julian May's Pliocene Saga is set around this event.
6
tim333 2 hours ago 0 replies      
It reminds me of the Red Sea Dam idea https://en.wikipedia.org/wiki/Red_Sea_Dam
7
BurningFrog 13 hours ago 4 replies      
>"We do not envisage a waterfall, as is often represented: instead the geophysical data suggests a huge ramp, several kilometres wide, descending from the Atlantic to the dry Mediterranean...," the scientists said.

Can anyone help me get a mental image of this?

8
keypusher 5 hours ago 0 replies      
[2009]
9
anotheryou 13 hours ago 5 replies      
Is that the flood the Bible refers to?
10
sandra_saltlake 5 hours ago 0 replies      
Mesopotamia flooded quite frequently.
17
From fleeing Vietnam in a refugee boat to becoming Uber's CTO techinasia.com
53 points by williswee  8 hours ago   13 comments top 4
1
violentvinyl 1 hour ago 3 replies      
I have a particular interest in truly disruptive start-ups, especially ones that really flout the law like Uber does. I'm under the impression that conventional entrepreneurial wisdom says, "Go ahead and break the rules; by the time they catch up with you or you've gained their attention, you'll have the money or support needed to fight for real change." Of course, all you have to do is look at Zenefits to see this isn't always going to be the case. It makes you wonder what type of person it takes to run a company like that. I realize Thuan isn't the CEO/founder, but when I think about a disruptive business, I weigh up the threat of serious fines/jail time against a relatively cushy life with a stable 9-to-5. It's really eye-opening to see what type of person it takes to drive a business like Uber forward.
2
iwwr 38 minutes ago 2 replies      
How many world-changing individuals are being denied the opportunity to get started because of the politics of barbed wire...
3
lpsz 2 hours ago 2 replies      
Stark contrast with New York Times pieces about struggling liberal arts majors with $120K in debt and no hope.

Given what I feel like is a growing class war in modern America, I wish more could embrace stories like this, and not the culture of putting down others' hard work in the name of "equalization." Some accomplished people (such as Asian immigrants or refugees, in this case) have worked really really hard to get where they are.

4
wimagguc 1 hour ago 0 replies      
Many qualities required for someone to flee their home country in a refugee boat seem to be pretty useful for a CXO too.

(I mean things like taking calculated risks, learning to navigate new cultures -- and just the cold blood you need to face those pirate ships. Pretty impressive.)

18
London's startup scene is getting more sophisticated economist.com
71 points by jimsojim  11 hours ago   36 comments top 9
1
joshvm 1 hour ago 2 replies      
> for women are generally outnumbered on Britain's science-degree courses, especially among PhD
That's really not true. Physics undergraduate in most universities is perhaps 1:10 female:male. Postgraduate (PhD) is pretty much half/half in my experience. My department is physics/space/climate and we have more girls than boys now, I think.
That's really not true. Physics undergraduate in most universities is perhaps 1:10 male:female. Postgraduate (PhD) is pretty much half/half in my experience. My department is physics/space/climate and we have more girls than boys now I think.

Perhaps the scene is worse in computer science, but I've been to a few machine vision conferences and women are pretty well represented.

EDIT: That said, engineering undergraduate was atrocious at my university. I had some friends doing civil/mechanical and it was maybe 1:20. Comp sci was pretty good though, as was maths. Biology and chemistry (and biochemistry) always seem to attract girls too.

2
vidarh 5 hours ago 2 replies      
This article is a fluff piece on EF. EF is not a very large part of London's startup scene. In a couple of years of attending various startup-heavy tech meetups in London, I've yet to meet anyone who's gone through EF. Which is not to dismiss them, I'm sure they're good, but with a title like that you'd expect them to have actually explored what's going on in London rather than just marketing an individual incubator.
3
jamesmcaulay 1 hour ago 0 replies      
EF turned me - a Computer Science graduate with a vague interest in startups - into the founder of an angel-backed startup.

The pace of learning during the six-month programme is rapid, but that's not what I see as most important. I'm most grateful for them giving me the opportunity to start my own thing straight out of university.

Without EF, I would have been forced to get a graduate job at a tech company in order to pay the bills. Building a startup in London would have been orders of magnitude harder. I know people who are trying to build products in their spare time, and it's tough.

Thanks to EF, I'm now running a five-person company with nearly 10,000 users.

Oh, and I met my co-founder on the programme. Not bad.

4
photonicist 2 hours ago 0 replies      
I just went through EF - while the cost of living in London is awful, the EF stipend more than covers it. I wouldn't have even considered moving back to the UK (much less to London) had EF not been able to assure me that they could deliver on the mentoring, connections, and financial support that they advertise.

In 3 months I learnt a lot about a variety of industries as we very rapidly iterated through teams and ideas, and I'm now developing instrumentation and software that will be launched into space later this year.

It certainly beats the opportunities I've had in finance and academia over the past few years!

5
nakedrobot2 2 hours ago 5 replies      
My lawyer lives in London. He's from Silicon Valley and has helped hundreds of startups, including some of the biggest.

His observation about London is that the cost of living is simply so high, that it is impossible to have a real startup culture here.

6
andrewdon 10 hours ago 0 replies      
Why sophisticated? It simply says the bar is higher and women participants are as few as in the past.
7
jasonrez 10 hours ago 2 replies      
Any independent opinions of EF? There are great things about them in most media, though most articles have a PR flavor to them.
8
bArray 8 hours ago 0 replies      
There's equal opportunity and there's equal outcome; they propose to tackle the problem with equal outcome. With more women than men taking degrees in the UK, one must ask why they are seen less often in STEM-based subjects, and not try to artificially level the playing field.

My personal opinion is that the issue is a social one, where we are still awaking from a period where women had to do certain jobs and men had to do others. You don't see many women as bin collectors, although they are more than qualified for the job. I think the best place to tackle this stigma is in primary and secondary schools. The results will take a few years to filter up all the way from primary to university.

Meanwhile, start-ups should be judged equally based on their quality and not discriminated against based on who is attempting them. There is a ridiculous notion that we can artificially change the end result and somehow that won't lead to issues.

9
zump 9 hours ago 4 replies      
I don't see an electric vehicle manufacturer in London.
19
MRSA superbug's resistance to antibiotics is broken newscientist.com
29 points by CarolineW  2 hours ago   6 comments top 4
1
fjarlq 1 hour ago 1 reply      
The newscientist.com link is broken for me, due to "Maintenance in Progress".

Alternate links:

http://arstechnica.com/science/2016/03/sidekick-chemicals-re...

https://www.sciencenews.org/article/molecules-found-counter-...

2
bborud 1 hour ago 0 replies      
How is this different from all the other claims of an antibiotic that can deal with MRSA?

(Yeah, before sharing this link, I did a quick Google search and skimmed the results and there were claims back to 2014 that various parties have "cracked the superbug".)

3
bitwize 1 hour ago 0 replies      
For now.
4
pygy_ 1 hour ago 1 reply      
... in mice.
20
Show HN: CLI for trying out Node.js modules easier github.com
27 points by diggan  6 hours ago   12 comments top 3
1
gravypod 6 hours ago 2 replies      
This is one of those really cool ideas that is obvious after someone else much smarter than yourself thinks of it first.

I wish this were in npm.

2
alehander42 2 hours ago 2 replies      
It would be very cool to detect an example from the npm docs or GitHub repo and load it in initially!
3
amelius 2 hours ago 2 replies      
Next step: a try-it-on-the-web interface?
21
Lee Sedol Beats AlphaGo in Game 4 gogameguru.com
1291 points by jswt001  1 day ago   418 comments top 63
1
mikeyouse 1 day ago 9 replies      
Relevant tweets from Demis;

 Lee Sedol is playing brilliantly! #AlphaGo thought it was doing well, but got confused on move 87. We are in trouble now... Mistake was on move 79, but #AlphaGo only came to that realisation on around move 87 When I say 'thought' and 'realisation' I just mean the output of #AlphaGo value net. It was around 70% at move 79 and then dived on move 87 Lee Sedol wins game 4!!! Congratulations! He was too good for us today and pressured #AlphaGo into a mistake that it couldnt recover from
From: https://twitter.com/demishassabis

2
argonaut 1 day ago 6 replies      
If it's true that AlphaGo started making a series of bad moves after its mistake on move 79, this might tie into a classic problem with agents trained using reinforcement learning, which is that after making an initial mistake (whether by accident or due to noise, etc.), the agent gets taken into a state it's not familiar with, so it makes another mistake, digging an even deeper hole for itself - the mistakes then continue to compound. This is one of the biggest challenges with RL agents in the real, physical world, where you have noise and imperfect information to confront.

Of course, a plausible alternate explanation is that AlphaGo felt like it needed to make risky moves to catch up.
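
A toy sketch of that compounding-error dynamic (all numbers here are made up for illustration; this is not AlphaGo's architecture): a policy that is reliable on states it saw in training and much noisier elsewhere, so one slip raises the odds of the next.

  import random

  FAMILIAR = set(range(-2, 3))  # states visited often during training

  def error_rate(state):
      # reliable near the training distribution, noisy outside it
      return 0.05 if state in FAMILIAR else 0.40

  def rollout(steps=50):
      state = 0
      for _ in range(steps):
          if random.random() < error_rate(state):
              state += 1 if state >= 0 else -1   # mistake: drift further out
          elif state != 0:
              state -= 1 if state > 0 else -1    # correction: head back
      return abs(state)

  random.seed(0)
  print(sum(rollout() for _ in range(1000)) / 1000)  # mean drift after 50 moves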

3
dannysu 1 day ago 2 replies      
In the post-game press conference I think Lee Sedol said something like "Before the matches I was thinking the result would be 5-0 or 4-1 in my favor, but then I lost 3 straight... I would not exchange this win for anything in the world."

Demis Hassabis said of Lee Sedol: "Incredible fighting spirit after 3 defeats"

I can definitely relate to what Lee Sedol might be feeling. Very happy for both sides: the fact that people designed algorithms that can beat top pros, and the human strength displayed by Lee Sedol.

Congrats to all!

4
fhe 1 day ago 3 replies      
My friends and I (many of us are enthusiastic Go lovers/players) have been following all of the games closely. AlphaGo's midgame today was really strange. Many experts have praised Lee's move 78 as a "divinely inspired" move. While it was a complex setup, in terms of the number of searches I can't see it being any more complex than the games before. Indeed, because it was very much a local fight, the number of possible moves was rather limited. As Lee said in the post-game conference, it was the only move that made any sense at all, as any other move would quickly prove to be fatal after half a dozen or so exchanges.

Of course, what's obvious to a human might not be so at all to a computer. This is the interesting point that I hope the DeepMind researchers will shed some light on for all of us once they dig out what was going on inside AlphaGo at the time. We'd also love to learn why AlphaGo seemed to go off the rails after this initial stumble and made a string of indecipherable moves thereafter.

Congrats to Lee and the DeepMind team! It was an exciting and I hope informative match to both sides.

As a final note: I started following the match thinking I was watching a competition of intelligence (loosely defined) between man and machine. What I ended up witnessing was incredible human drama: Lee bearing incredible pressure, being hit hard repeatedly while the world watched, sinking to the lowest of lows, and soaring back up to win one game for the human race... Just an incredible up and down over the course of a week. Many of my friends were crying as the computer resigned.

5
jballanc 1 day ago 7 replies      
So AlphaGo is just a bot after all...

Toward the end AlphaGo was making moves that even I (as a double-digit kyu player) could recognize as really bad. However, one of the commentators made the observation that each time it did, the moves forced a highly-predictable move by Lee Sedol in response. From the point of view of a Go player, they were non-sensical because they only removed points from the board and didn't advance AlphaGo's position at all. From the point of view of a programmer, on the other hand, considering that predicting how your opponent will move has got to be one of the most challenging aspects of a Go algorithm, making a move that easily narrows and deepens the search tree makes complete sense.
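
A rough illustration of why forcing moves help a tree search (the node budget below is an arbitrary number for illustration, not AlphaGo's real search size): with a fixed budget, reachable depth is log(budget)/log(branching factor), so replies that are nearly forced buy a lot of extra lookahead.

  import math

  budget = 10 ** 8  # hypothetical number of positions searched
  for b in (250, 50, 10, 2):  # from wide-open board down to a forced reply
      print(f"branching {b:>3}: ~{math.log(budget) / math.log(b):.0f} plies deep")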

6
keypusher 1 day ago 3 replies      
The crucial play here seems to have been Lee Sedol's "tesuji" at White 78. From what I understand, this term in Go means something like "clever play": sneaking up on your opponent with something they did not see coming. The DeepMind CEO confirmed that the machine actually missed the implications of this move, as the calculated win percentage did not shift until later:
https://twitter.com/demishassabis/status/708928006400581632

Another interesting thing I noticed while catching the endgame is that AlphaGo actually used up almost all of its time. In professional Go, once each player uses their original (2 hour?) time block, they have 1 minute left for each move. Lee Sedol had gone into "overtime" in some of the earlier games, and here as well, but previously AlphaGo still had time left from its original 2 hours. In this game, it came down quite close to using overtime before resigning, which it does when the calculated win percentage falls below a certain threshold.

7
jonbarker 18 hours ago 1 reply      
AlphaGo's weakness was stated in the press conference inadvertently: it considers only the opponent moves in the future which it deems to be the most profitable for the opponent. This leaves it with glaring blind spots when it has not prepared for lines which are surprising to it. Lee Sedol has now learned to exploit this fact in a mere 4 games, whereas the NN requires millions of games to train on in order to alter its playing style. So Lee only needs to find surprising and strong moves (no small feat but also the strong suit of Lee's playing style generally).
8
mizzao 23 hours ago 6 replies      
Another way to look at this is just how efficient the human brain is for the same amount of computation.

On one hand, we have racks of servers (1920 CPUs and 280 GPUs) [1] using megawatts (gigawatts?) of power, and on the other hand we have a person eating food and using about 100W of power (when physically at rest), of which about 20W is used by the brain.

[1] http://www.economist.com/news/science-and-technology/2169454...
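
A back-of-envelope check on those figures (the per-device wattages here are rough assumptions): even generous numbers put the distributed AlphaGo in the hundreds of kilowatts rather than megawatts, still around four orders of magnitude more than the brain's ~20 W.

  cpu_w, gpu_w = 100, 300                 # assumed per-device draw, watts
  alphago_w = 1920 * cpu_w + 280 * gpu_w
  print(f"AlphaGo: ~{alphago_w / 1000:.0f} kW")       # ~276 kW
  print(f"vs brain: ~{alphago_w / 20:,.0f}x 20 W")    # ~13,800x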

9
Houshalter 17 hours ago 2 replies      
We were discussing the probability that Sedol would win this game. Everyone, including me, bet 90% that no human would ever win again, let alone this specific game: http://predictionbook.com/predictions/177592

I tried to estimate it mathematically, using a uniform distribution across possible win rates and then updating the probability of different win rates with Bayes' rule. You can do that with Laplace's law of succession. I got a 20% chance that Sedol would win this game.

However, a uniform prior doesn't seem right. Eliezer Yudkowsky often says that AI is likely to be much better than humans, or much worse; the chance of it landing at exactly the same skill level is pretty small. That argument seems right, but I wasn't sure how to model it formally, and so 90% "felt" right. Clearly I was overconfident.

So for the next game, if we use Laplace's law again, we get a 33% chance that Sedol will win. That's not factoring in other information, like Sedol now being familiar with AlphaGo's strategies and improving his own strategies against it. So there is some chance he is now evenly matched with AlphaGo!

I look forward to many future AI-human games. Hopefully humans will be able to learn from them, and perhaps even learn their weaknesses and how to exploit them.

Depending how deterministic they are, you could perhaps even play the same sequence of moves and win again. That would really embarrass the Google team. I hear they froze AlphaGo's weights to prevent it from developing new bugs after testing.
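
For anyone checking the arithmetic: Laplace's rule of succession estimates the next-game win probability as (wins + 1) / (games + 2), which reproduces the 20% and 33% figures above.

  def laplace(wins, games):
      # posterior mean win rate under a uniform prior
      return (wins + 1) / (games + 2)

  print(laplace(0, 3))  # before game 4, after 3 straight losses: 0.20
  print(laplace(1, 4))  # before game 5, after the game-4 win:    0.33...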

10
minimaxir 1 day ago 4 replies      
There were a few jokes made during the round about how AlphaGo resigns. Turns out it's just a popup window! http://i.imgur.com/WKWMHLv.png
11
sethbannon 16 hours ago 0 replies      
I wouldn't be surprised if, in a month, Lee Sedol was able to beat AlphaGo in another match. This is what happened in chess. The best computers were able to beat the best humans, until the best humans learned how to play anti-computer chess. This bought them a year or so more, until computers finally dominated for good.
12
versteegen 1 day ago 0 replies      
I found this comment on that thread quite insightful:https://gogameguru.com/alphago-4/#comment-13410

Edit: here's another great one on MCTS: https://gogameguru.com/alphago-4/#comment-13479

13
adnzzzzZ 1 day ago 3 replies      
According to the commentary on both streams I was watching, after losing an important exchange in the middle (apparently move 79: https://twitter.com/demishassabis/status/708928006400581632), it seems AlphaGo sort of bugged out and started making wrong moves on an already-dead group on the right side of the board. After that it kept repeating similar mistakes until it resigned, many moves later. But the game was already won for Lee Sedol after that middle exchange. It was really interesting seeing everyone's reactions to AlphaGo's bad moves, though.
14
emcq 1 day ago 3 replies      
That was really cool! It seemed that after the brilliant play in the middle, the most probable winning lines required Lee Sedol to make mistakes impossibly bad for a professional, a prior that AlphaGo doesn't incorporate. I've heard the training data was mostly amateur games, so perhaps the value/policy networks were overfit? Or maybe greedily picking the highest probability, common with tree search approaches, is just suboptimal?
15
magoghm 1 day ago 2 replies      
Right now I don't know if I'm more impressed by AlphaGo's artificial intelligence or its artificial stupidity.

Lee Sedol won because he played extremely well. But when AlphaGo was already losing it made some very bad moves. One of them was so bad that it's the kind of mistake you would only expect from someone who's starting to learn how to play Go.

16
hasenj 1 day ago 1 reply      
The game seemed to be going in AlphaGo's favour when it was half way through. Black (AG) had secured a large area on the top that seemed nearly impossible to invade.

It was amazing to see how Lee Sedol found the right moves to make the invasion work.

This makes me think that if the time for the match were three hours instead of two, maybe a professional player would have enough time to read the board deeply enough to find the right moves.

18
herrvogel- 1 day ago 2 replies      
Am I right in assuming that if they played another game (AlphaGo black and Lee Sedol white), Lee Sedol could pressure AlphaGo into making the same mistake again?
19
kronion 1 day ago 3 replies      
After AlphaGo won the first three games, I wondered not if the computer had reached and surpassed human mastery, but instead how many orders of magnitude better it was. Given today's result, it may be only one order, or even less. Perhaps the best human players are relatively close to the maximum skill level for Go, and the pros of the future will not be categorically better than Lee Sedol is today.
20
Bytes 1 day ago 2 replies      
I was not expecting Lee Sedol to come back and win a game after his first three losses. AlphaGo seemed to be struggling at the end of the match.
21
conceit 4 hours ago 0 replies      
I just noticed a pun in the name: All-Phago, devourer of worlds. Especially funny as capturing a stone could be imagined as swallowing it.
22
Angostura 1 day ago 1 reply      
Bizarre. I felt a palpable sense of relief when I read this. Silly meat-brain that I am.
23
awwducks 1 day ago 0 replies      
Lee Sedol definitely did not look like he was in top form there. I would say (as an amateur) his play in Game 2 was far better. It was the funky clamp position that perhaps forced AlphaGo to start falling apart this game. [0]

I wonder if Lee Sedol can find a way to replicate that in Game 5.

[0]: https://twitter.com/demishassabis/status/708928006400581632

24
rubiquity 23 hours ago 0 replies      
This is a great day for humans. Glad to see all those years of human research finally pay off.
25
asdfologist 20 hours ago 1 reply      
On a tangential note, apparently AlphaGo has been added to http://www.goratings.org/, though its current rating of 3533 looks off. Shouldn't it be much higher?
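
For context, goratings.org uses an Elo-style model, so the rating gap implies an expected score. Taking Lee Sedol at roughly 3520 (an approximation), a 13-point edge predicts a near coin-flip, while a 3-1 result suggests something closer to a 190-point gap:

  import math

  def expected(d):
      # Elo win expectancy for a d-point rating advantage
      return 1 / (1 + 10 ** (-d / 400))

  print(f"{expected(3533 - 3520):.2f}")              # ~0.52, near coin-flip
  print(f"+{400 * math.log10(3):.0f} Elo for 75%")   # ~ +191 points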
26
creamyhorror 1 day ago 1 reply      
Here's the post-game conference livestream:

https://www.youtube.com/watch?v=yCALyQRN3hw

At the end, Lee asked to play black in the last match, and the DeepMind guys agreed. He feels that AlphaGo is stronger as white, so he views it as more worthwhile to play as black and beat AlphaGo.

Conference over, see you all tomorrow.

27
pmontra 17 hours ago 0 replies      
GoGameGuru just published a commentary of the game with some extra insight https://gogameguru.com/lee-sedol-defeats-alphago-masterful-c...

The author thinks that Lee Sedol was able "to force an all or nothing battle where AlphaGo's accurate negotiating skills were largely irrelevant."

[...]

"Once White 78 was on the board, Blacks territory at the top collapsed in value."

[...]

"This was when things got weird. From 87 to 101 AlphaGo made a series of very bad moves."

"Weve talked about AlphaGos bad moves in the discussion of previous games, but this was not the same."

"In previous games, AlphaGo played bad (slack) moves when it was already ahead. Human observers criticized these moves because there seemed to be no reason to play slackly, but AlphaGo had already calculated that these moves would lead to a safe win."

Which, I add, is something that human players also do: simplify the game and get home quickly with a win. We usually don't give up as much as AlphaGo does (pride?); still, it's not that different.

"The bad moves AlphaGo played in game four were not at all like that. They were simply bad, and they ruined AlphaGos chances of recovering."

"Theyre the kind of moves played by someone who forgets that their opponent also gets to respond with a move. Moves that trample over possibilities and damage ones own position achieving less than nothing."

And those moves unfortunately resemble what beginners play when they stubbornly cling to the hope of winning, because they don't realize the game is lost or because they haven't yet played enough games to stop expecting the opponent to make impossible mistakes. At pro level, those mistakes are beyond impossible.

Somebody asked an interesting question during the press conference about the effect of those kinds of mistakes in the real world. You can hear it at https://youtu.be/yCALyQRN3hw?t=5h56m15s It's a couple of minutes long because of the translation overhead.

28
kibaekr 1 day ago 1 reply      
So where can we see this "move 78" that everyone is talking about, without having to go through the entire match counting?
29
h43k3r 1 day ago 0 replies      
The post-match conference analysis with Lee Sedol and the CEO of DeepMind about the different aspects of the game is beautiful to watch. There seems to be a sense of sincerity rather than greed to win from each side.
30
jacinda 21 hours ago 0 replies      
<joke>AlphaGo let Lee Sedol win to lull us all into a false sense of security. The robot apocalypse is well underway.</joke>
31
overmille 1 day ago 0 replies      
Now that we have two points for interpolation, expectations are down to near the best human competency in Go using distributed computation. Also, from move 79 to 87 the machine wasn't able to detect the weak position, which shows its weakness. Now Lee can try an aggressive strategy, creating multiple hot points of attack to defeat his enemy. The human player is showing the power of intelligence.
32
zkhalique 19 hours ago 0 replies      
Wow! Incredible! Now we know that they have a chance against each other. I would say that this was a very major point... otherwise we wouldn't know whether AlphaGo's powers have progressed to the point where no one can ever beat it. Now I take what Ke Je said much more seriously: http://www.telegraph.co.uk/news/worldnews/asia/china/1219091...
33
hyperpape 15 hours ago 0 replies      
It's worth mentioning that while 79 is where Black goes bad, not everyone is sure that 78 actually works for White (http://lifein19x19.com/forum/viewtopic.php?f=15&t=12826). I'm sure we'll eventually get a more complete analysis.
34
yoavm 1 day ago 0 replies      
okay human race, let's sit back and enjoy our last moments of glory!
35
GolDDranks 16 hours ago 1 reply      
https://gogameguru.com/lee-sedol-defeats-alphago-masterful-c...

> This was when things got weird. From 87 to 101 AlphaGo made a series of very bad moves.

It seems to me that these bad moves were a direct result of AlphaGo's min-maxing tree search.

According to @demishassabis' tweet, it had had the "realisation" that it had misestimated the board situation at move 87. After that, it made a series of bad moves, but it seems to me that those moves were played precisely because it couldn't come up with any better strategy: the min-max algorithm used to traverse the play tree expects that your opponent responds as well as he possibly can, so the moves were optimal in that sense.

But if you are an underdog, it doesn't suffice to play the "best" moves, because the best moves might be conservative. With that playing style, the only way you can make a comeback is to wait for your opponent to "make a mistake", that is, to stray from the series of best moves you are able to find, and then capitalize on that.

I don't think AlphaGo has the concept of betting on the opportunity of the opponent making mistakes. It always just tries to find the "best play in game" with its neural networks and tree search, in terms of maximising the probability of winning. If it doesn't find any moves that would raise the probability, it picks one that will lower it as little as possible. That's why it picks uninteresting sente moves without any strategy. It just postpones the inevitable.

If you're expecting the opponent to play the best move you can think of, expecting mistakes is simply not part of the scheme. In this situation, it would actually be profitable to exchange some "best-of-class" moves for moves that aren't as excellent, but that are confusing, hard to read, and make the game longer and more convoluted. Note that this totally DOESN'T work if the opponent is better at reading than you, on average; it will make the situation worse. But I think that AlphaGo is better at reading than Lee Sedol, so it would work here. The point is to "stir" the game up, so you can unlock yourself from your suboptimal position and let your better-on-average reading skills work for you.

It seems to me that the way skilful humans play involves another evaluation function in addition to the "value" of a move: how confusing, "disturbing" or "stirring" a move is, considering the opponent's skill. Basically, that's the thing you'd need in order to skilfully assess your chances of pulling off an OVERPLAY. And an overplay may be the only way to recover if you are in a losing situation.
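
A minimal sketch of the two selection rules being contrasted (illustrative only; the real AlphaGo combines policy/value networks with Monte Carlo tree search rather than a bare argmax). Here win_prob and confusion are assumed black-box evaluators, and alpha=0 reproduces the pure value-maximising behaviour described above.

  def pick_move(moves, win_prob, confusion=None, alpha=0.0):
      # score = estimated win probability, optionally plus a
      # "hard for the opponent to read" bonus
      def score(m):
          s = win_prob(m)
          return s + alpha * confusion(m) if confusion else s
      return max(moves, key=score)

  # toy position where every move is losing: the pure maximiser
  # picks A; weighting confusion prefers the murkier B
  wp = {"A": 0.18, "B": 0.15}.get
  cf = {"A": 0.10, "B": 0.90}.get
  print(pick_move("AB", wp))                 # -> A
  print(pick_move("AB", wp, cf, alpha=0.2))  # -> B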

36
conanbatt 23 hours ago 1 reply      
This game is a great example for the people who said that AlphaGo's margin-lowering moves in better positions weren't mistakes, because it only looks at winning probability.

AlphaGo made a mistake and realized it was behind, and crumbled because all moves are "mistakes" (they all lead to loss), so any of them is as good as any other.

I'm very surprised and glad to see humans still have something against AlphaGo, but ultimately these kinds of errors might disappear if AlphaGo trains 6 more months. It made a tactical mistake, not a theoretical one.

37
jonbarker 18 hours ago 1 reply      
Would it not be beneficial for the DeepMind team to open at least the non-distributed version to the public, to allow for training on more players? I was surprised to learn that the training set was strong amateur internet play; why not train on the database of historical pro games?
38
devanti 12 hours ago 0 replies      
I wonder: if Lee Sedol were to start as white again and follow the exact same starting sequences, would AlphaGo's algorithms follow the exact same moves as before?
39
yulunli 1 day ago 1 reply      
AlphaGo obviously made mistakes in game 4 under the pressure from LSD's brilliant play. I'd like to know if the "dumb moves" were caused by the lack of pro data or by some more fundamental flaw with the algorithm/methodology. AlphaGo was trained on millions of amateur games, but if Google/DeepMind builds a website where people (including pro players) can play with AlphaGo, it would be interesting to see who improves faster.
40
ctstover 19 hours ago 0 replies      
As a human, I'm pulling for the human. As a computer programmer, I'm pulling for the human. As a romantic, I'm pulling for the human. As a fan of science fiction, I'm pulling for the human. To me it will matter even if he can only pull off a 3-2 loss instead of a 4-1 loss.
41
yk 1 day ago 1 reply      
Apparently AlphaGo made two rather stupid moves on the sidelines, judging from the commentary. Which, incidentally, is the kind of edge case one would expect machine learning against itself to be bad at learning, since there is a possibility that AlphaGo just tries to avoid such situations. It will be interesting to see if top players are able to exploit such weaknesses once AlphaGo is better understood by high-level Go players.
42
esturk 1 day ago 2 replies      
LSD may be the only human to ever win against AlphaGo.
43
ljk 1 day ago 0 replies      
Does this mean Lee found AlphaGo's weakness, and AlphaGo wasn't playing at an out-of-reach level?
44
spatulan 1 day ago 1 reply      
I wonder what the chances are of a cosmic ray or some stray radiation causing AlphaGo to have problems. It's quite a rare event, but when you have 1920 CPUs and 280 GPUs, it might up the probability enough to be something you have to worry about.
45
piyush_soni 1 day ago 0 replies      
I am super excited about the progress AI has made with AlphaGo, but a part of me feels kind of relieved that humans won at least one match. :) Sure, it won't last for long.
46
agumonkey 22 hours ago 0 replies      
Way to go, humans. (I felt that AlphaGo was unbeatable and a milestone in computing's overthrow of organic brains... I gave in to the buzz a bit prematurely.)
47
atrudeau 14 hours ago 2 replies      
Move 78 gives us hope in the war against the machines.

78 could come to symbolize humanity.

What a special moment.

48
eslaught 16 hours ago 1 reply      
Is there a place I can go to quickly flip through all the board states from the game?
49
uzyn 1 day ago 1 reply      
It seems Lee Sedol fares better in the late game and endgame than AlphaGo. Makes one wonder if Lee might have won the earlier games had he pushed on to the late-game stages.
50
_snydly 1 day ago 4 replies      
Was it AlphaGo losing the game, or Lee Sedol winning it?
51
codecamper 1 day ago 0 replies      
Wow that is awesome news. Very happy to read this this morning. It's a good day to be human.
52
partycoder 1 day ago 0 replies      
Monte Carlo bots behave weirdly when losing.
53
makoz 1 day ago 0 replies      
Some pretty questionable moves from AlphaGo in that game, but I'm glad LSD managed to close it out.
54
another-hack 20 hours ago 0 replies      
Humans strike back! :P
55
vc98mvco 1 day ago 1 reply      
I hope it won't turn out they let him win.
56
pelf 17 hours ago 0 replies      
Now THIS is news!
57
Dowwie 1 day ago 0 replies      
But, did he pull a Kirk vs Kobayashi Maru? :) (yes, I went there)
58
samstave 17 hours ago 0 replies      
So I am completely ignorant of the game Go. I mean, I've heard about it my whole life but never bothered to understand it, ever.

But after watching the summary video of AlphaGo's win... I'm fascinated.

I'm sure there are thousands of resources that can teach me the rules, but HN: can you point me to a resource you recommend to get up to speed?

59
techdragon 1 day ago 3 replies      
I was hoping to see how AlphaGo would play in overtime. Now I'm curious: does it know how to play in overtime? Can the system handle evaluating how much time it can give itself to 'think' about each move, or does that fall into halting-problem territory, so it was programmed to evaluate its probability of winning given the 'fixed' time it had left?
60
repulsive 1 day ago 0 replies      
A negativist paranoid skeptic could say that it would be a good move for the team to intentionally make AlphaGo lose a single battle at the moment it has already won the war...
61
conanbatt 1 day ago 0 replies      
Maybe AlphaGo understood it had won the 5-game series, so it's reasoning that it can lose the last 2 games and still win, and hence plays suboptimally :P
62
gcatalfamo 1 day ago 1 reply      
I believe that after winning 3 out of 5, the AlphaGo team started experimenting with variables now that they can relax, which will in turn be even more helpful for future AlphaGo versions than the previous 3 wins.
63
antonioalegria 1 day ago 3 replies      
Don't want to sound all Conspiracy Theory, but somehow this feels planned... It plays into DeepMind's hand to not have the machine completely trouncing the human. It's less scary and keeps people engaged further into the future.

Also seems in line with the way Demis was "rooting" for the human this time: they already won, so now they focus on PR.

22
Last Week Tonight with John Oliver: Encryption [video] youtube.com
142 points by XioNoX  6 hours ago   25 comments top 8
1
dcw303 4 hours ago 2 replies      
Like everyone on this site, I've been following this story too closely to get any new info from this segment, so I couldn't tell if this will convince people. It was up to the always high standards of Last Week Tonight though.

I really hope the message got through to his audience. We need every single non-technical person in the world to understand this clearly if we have any hope of getting the US Government to back down.

2
anc84 2 hours ago 0 replies      
What a shame that Signal is not mentioned as an encryption app.
3
thwarted 4 hours ago 4 replies      
He used phrasing like "widely thought by experts to be impossible" (13m2s) a few times through this piece. Which cryptographers and cryptography experts think, in 2016, that a crypto system could be created that is, barring bugs, completely secure right up until the point where you don't want it to be? He showed clips of legislators asking for magic crypto unicorns (10m). Is this some kind of "4 out of 5 cryptographers think it's an impossibility", and do we really think that the remaining one is actually an expert?

Or is this just an attempt at "fair and balanced" reporting, implying that, while they couldn't find any "experts" to take the opposite side, there must be some out there. John Oliver doesn't usually do that though.

4
pointernil 38 minutes ago 1 reply      
So there is an effort estimate to ADD what the authorities need?

Does this indicate the crypto is already broken?

What's hindering the "intelligence community" from doing it on their own on a case-by-case basis?

Did they already do this?

Does Apple win disproportionately, marketing-wise, by staging itself as the sound and secure provider?

5
Tempest1981 4 hours ago 2 replies      
Awesome summary of the issue. All it takes is 1 disgruntled/bribed/blackmailed employee, and everyone could be compromised. Not worth the risk.
6
aauchter 2 hours ago 2 replies      
Would it be possible to build devices that could be unlocked a fixed number of times across all units (say 1,000 times)? Devices could be heavily hardware-encrypted, but unlockable with an encryption key, a portion of which comes from a publicly monitored blockchain/distributed ledger that, when used, reduces the number of future uses.

This way, the government could be granted access in extreme cases, but without the potential for abuse or mass surveillance. Once there were 1,000 check-ins, no more keys could be generated.

Thoughts?

7
senectus1 4 hours ago 3 replies      
any chance of a non-geoblocked link?
8
fufefderddfr 56 minutes ago 0 replies      
Video spam. Late night show bullshit.
23
The Next Front in the New Crypto Wars: WhatsApp eff.org
287 points by panarky  18 hours ago   161 comments top 16
1
jessegreathouse 16 hours ago 9 replies      
If the government wins any of these court battles, it's only a matter of time until one-way encryption is outlawed. It follows logically that if criminals/terrorists can't use iPhones to securely communicate, then they'll just move on to the next convenient encryption app. The government will continue to order companies to break their one-way encryption until the government realizes they're playing musical chairs and then they'll issue an executive order to ban one-way encryption outright. The precedent allowing them to do so, will be all of these initial court battles vs Apple, whatsapp, and whoever else gets defeated. In the wake of these events regular people, like you and me, will be harmed by hackers and commercial companies exploiting this new world without one-way encryption.
2
axihack 15 hours ago 1 reply      
Can anyone point me to where WhatsApp actually confirms they are implementing E2E encryption, and how?

I couldn't find anything on the official website/blog; the single mention of security is this FAQ[1], which is about server/device encryption.

A friend also told me E2E is only available for US users, but unfortunately I can't confirm this because of the lack of communication from WhatsApp.

[1] https://www.whatsapp.com/faq/en/general/21864047

Edit: fixed typos

3
rsync 8 hours ago 2 replies      
All computer code can be rephrased in common written (English, perhaps) language. I'm not talking about pseudocode; I mean an actual translation layer from, say, C to English phrasing that specifically describes the computer code to be written.

And at that point it's just speech. I don't mean "like speech", or "something sophisticated people should recognize as speech", or "code is speech" ... I mean, it's just plain old speech. Just very boring, long-winded (and extremely precise) descriptions of computer source code.

So perhaps there will be some pain and perhaps there will be some years before it finally gets to the Supreme Court, but in the end, it's just speech.

Will they change the 1A? Would they?

4
rmc 16 hours ago 0 replies      
If cases like this go in the US Gov's favour, it'll further damage the US tech industry. It's already illegal for EU orgs to use US tech companies for personal data!
5
ycmbntrthrwaway 16 hours ago 1 reply      
Please stop calling it crypto wars. Calling something a "war" justifies wartime measures, just as it happens with terrorism, drugs and things like that.
6
krylon 4 hours ago 1 reply      
I find it disturbing how the intelligence and law enforcement community seem to think there is some kind of natural right for them to snoop on people.

In the case of the good old phone system, the very way it worked made wiretapping very easy. The same was true for physical mail (one major reason why most states created and held on to the monopoly on mail for so long).

With email and IM this is - again due to the way this works - a lot more difficult. Artificially restricting encryption just so they can keep on doing things the way they're used to is a bad idea, and kind of naive, too.

7
lordnacho 16 hours ago 1 reply      
What does "undue burden" mean? Wouldn't it be very simple for WhatsApp to remove the encryption in the app? (Anyone can write an unencrypted app.) Could they be forced to do that?

IMO that would be awful.

8
trulyWasteful 14 hours ago 1 reply      
No one can stop me and my peers from sending meaningless garbage data to each other.

So, if it simply looks encrypted, but actually contains randomized meaningless shit, how can anyone prevent me from behaving in this manner, and claim that I've done harm?

I've paid for the service, and I can spam it with trash as I see fit.

9
alias240 13 hours ago 0 replies      
A part of me wonders if I should believe this, and the story about the Apple case. Or maybe it is all just a conspiracy to gain our trust.
10
venamresm__ 5 hours ago 0 replies      
There are alternatives to one-way encryption. Think of steganography, and of communication between the parties being indirect.
11
brudgers 12 hours ago 0 replies      
The important aspect of this and the Apple scenario is that the encryption requires a benevolent third party. Encryption that relies on Eve...well, she has three faces.
12
eyeareque 8 hours ago 0 replies      
The only reason the government "gave up" in the previous crypto war was because they decided to find ways to break or weaken crypto to suit their needs. I don't think this time will be any different; one way or another they will get access to our data.
13
1024core 14 hours ago 1 reply      
If the USG can force Apple/Whatsapp to decrypt some communication, what prevents the PRC from doing the same? Will we see Tim Cook arrested the next time he goes to China?
14
fweespee_ch 17 hours ago 1 reply      
http://www.nytimes.com/2016/03/13/us/politics/whatsapp-encry...

> The Justice Department and WhatsApp declined to comment. The government officials and others who discussed the dispute did so on condition of anonymity because the wiretap order and all the information associated with it were under seal. The nature of the case was not clear, except that officials said it was not a terrorism investigation. The location of the investigation was also unclear.

Just in case anyone was wondering if this was terrorism related, it is not. I suppose next is OpenWhisperSystems / Signal, etc.

I'm glad I've stuck with GnuPG for anything truly sensitive.

15
throwaway0209 17 hours ago 1 reply      
Here the local drug dealers encourage use of an app called Wickr. Does anyone know how the encryption compares to WhatsApp?
24
GNU Gneural Network gnu.org
278 points by jjuhl  20 hours ago   95 comments top 20
1
_delirium 18 hours ago 5 replies      
I agree with the general motivation that having too much AI research in the hands of software companies who keep it proprietary harms transparency and progress. But there is already a lot of neural-network free software, so why another package? For example, these widely used packages are free software, and seemingly more featureful: http://torch.ch/, http://www.deeplearning.net/software/theano/, http://pybrain.org/, https://www.tensorflow.org/, http://leenissen.dk/fann/wp/, http://chainer.org/
2
Aeolos 2 hours ago 0 replies      
http://cvs.savannah.gnu.org/viewvc/gneuralnetwork/gneuralnet...

Am I mistaken, or is the source repository for this project just tarballs checked into CVS?

3
rck 12 hours ago 0 replies      
The implementations look odd. A network consists of a collection of neurons, which are implemented individually as structs. The forward pass through the network is a series of nested loops, and the gradient descent implementation doesn't use backpropagation - it uses finite differences to approximate derivatives, which is known to be inefficient. Given the overall design of the library, it isn't really clear what you would use it for in practice.

I hope that future versions take inspiration from other open source machine learning libraries, which show how to use linear algebra and backpropagation and are much more effective.
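
To make the cost difference concrete, a toy comparison (illustrative only, not Gneural Network's actual code): finite differences needs an extra evaluation of the loss for every parameter, while the analytic, backprop-style gradient comes from a single pass.

  import numpy as np

  def loss(w, x, y):
      return ((x @ w - y) ** 2).sum()  # squared error of a linear model

  def fd_grad(w, x, y, eps=1e-6):
      # finite differences: one extra loss evaluation per parameter
      g = np.zeros_like(w)
      for i in range(w.size):
          wp = w.copy()
          wp[i] += eps
          g[i] = (loss(wp, x, y) - loss(w, x, y)) / eps
      return g

  def analytic_grad(w, x, y):
      return 2 * x.T @ (x @ w - y)  # exact, from one backward-style pass

  rng = np.random.default_rng(0)
  x, y, w = rng.normal(size=(5, 3)), rng.normal(size=5), rng.normal(size=3)
  print(np.allclose(fd_grad(w, x, y), analytic_grad(w, x, y), atol=1e-3))  # True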

4
arnorhs 17 hours ago 0 replies      
- It's nice that GNU is taking on such a project

- FANN seems like a pretty good alternative

- The value of the software at the big "monopolies" lies within the data, not necessarily the software

- This needs to be in some publicly accessible repo. Downloading a zip file and submitting patches? I thought we, as a society, were over that way of building OSS.

5
fche 17 hours ago 4 replies      
The "ethical motivations" section is out of place here. Its moaning about "money driven companies" (as though money were a bad thing), or "monopoly" (which does not exist in AI), just reflects badly upon the project.
6
mankash666 15 hours ago 0 replies      
This team should focus on a SPIR-V back-end and remove the NVIDIA/CUDA vendor lock-in in tensor AI software. A GPL-licensed AI library without GPU acceleration isn't attractive outside academia.
7
dcuthbertson 17 hours ago 1 reply      
Aw. It should have been named the GNU Gneural Gnetwork, gno?
8
tajen 3 hours ago 0 replies      
Talking about this, GNU/the FSF should start drafting an OSS license for neural networks. As with the AGPL for cloud services, the specific thing about neural networks is that the data is strategic.

GPL -> guarantee of OSS for the desktop

AGPL (Affero) -> guarantee of OSS for the cloud

??? -> guarantee of OSS for NNs

9
mmf 17 hours ago 0 replies      
At this stage of things, I think it's more forward-looking to open source trained models. Not only are they beginning to be the real core of future building blocks (see, e.g., trained word2vec vectors), but they also contain the real complexity of a NN, i.e., they are the "real function" you would want in a library.
10
latenightcoding 18 hours ago 2 replies      
Love it! If you want to play with state-of-the-art machine learning software, this is not for you. But if you want a clean implementation of neural networks in C that has a GPL license and no non-free components, this is a good start.
11
akhilcacharya 14 hours ago 1 reply      
Is there more being done to promote GPU acceleration on non-CUDA platforms? I feel like this would be more useful than yet another FOSS NN library.
12
walkingolof 2 hours ago 0 replies      
Isn't the problem that, in our age of supervised training, the algorithms are not the competitive advantage, but the data?
14
stevenaleach 15 hours ago 0 replies      
Funny... The majority of AI research is currently using open source libraries (Theano, Lasagne, Torch, Keras, Scikit-Learn, Nolearn, etc. etc. etc.)

Now Google does have access to a whole lot of data that the rest of the world doesn't, and FB, Google, etc. have more than a bit of a hardware advantage... for now, at least. Distribute a shared system over a P2P infrastructure, and you can change that. Perhaps rather significantly.

16
fnfhdjcnx 18 hours ago 1 reply      
I'm glad the FSF is finally getting concerned about proprietary AI, but it's going to take a lot more than a single neural network package to get caught up in this arms race.

I wish they had taken the initiative much sooner.

17
anonbanker 13 hours ago 1 reply      
If you were an AI (software), and you had to pick a license to release your source code under, one would assume you would pick the GPL, as it retains as many freedoms as a piece of software could ever expect in a world full of us.
18
jjawssd 18 hours ago 0 replies      
I wish them good luck
19
overmille 14 hours ago 0 replies      
freebase?
20
rand1012 17 hours ago 2 replies      
Anyone else notice how GNU's website is stuck in 1993?
25
Deep-Q Learning Pong with Tensorflow and PyGame danielslater.net
7 points by albertzeyer  5 hours ago   discuss
26
Small Memory Software: Patterns for systems with limited memory smallmemory.com
149 points by ingve  15 hours ago   29 comments top 5
1
userbinator 11 hours ago 3 replies      
The one thing I was strongly expecting but didn't find any mention of when quickly paging through is the idea of using simpler, constant-space algorithms (e.g. streaming style, keeping only what's needed in memory) and, in general, reducing the amount of code and data.

Likewise, the use of C++ and Java in a book about "limited memory" is a bit unusual.

Then again, I'm really not keen on the whole "patterns" thing, because from experience I've found it tends to replace careful thought with dogmatic application of rules that might not be relevant at all to the situation at hand.
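
As a concrete instance of the streaming style mentioned in the first paragraph: Welford's online algorithm computes the mean and variance of arbitrarily long input while holding just three numbers, never buffering the data.

  def running_stats(stream):
      # Welford's online algorithm: O(1) memory for any input length
      n, mean, m2 = 0, 0.0, 0.0
      for x in stream:
          n += 1
          delta = x - mean
          mean += delta / n
          m2 += delta * (x - mean)
      return mean, (m2 / (n - 1) if n > 1 else 0.0)

  print(running_stats(iter([2, 4, 4, 4, 5, 5, 7, 9])))  # (5.0, ~4.571)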

2
ChuckMcM 15 hours ago 2 replies      
I realize that small-memory discipline is not something that serves a modern programmer well. Spending time on optimizing memory usage is often completely useless, as a new version or new feature will invalidate the effort, and the overall effect on the system, compared to other work the programmer could be doing, would not be cost-effective.

That said, understanding how to design within tight memory constraints is useful. While typically only seriously practiced by embedded systems developers, having habits that minimize memory use can have a large impact at scale. The book gives some good reasons for the benefits of those habits, but I also think that, as a percentage of the total, programmers living in constrained memory spaces are a specialization, not the mainstream, any more.

3
fiatjaf 14 hours ago 1 reply      
At least this serves to remind people they must think about memory, at least sometimes. The rule today seems to be to use as much memory as possible, load gigantic frameworks, or use super-heavy languages for the simplest of tasks. I only notice because my computer does not have all the memory a normal modern computer has.
4
jdcarter 15 hours ago 0 replies      
I've owned the print version of this book for many years; it's like the GoF book for embedded systems. Solid material presented very well. Kudos to the authors for releasing the PDFs!
5
jschwartzi 11 hours ago 0 replies      
Chapter 2 is basically a textbook description of how Android handles memory.
27
ESA ExoMars launch livestream.com
20 points by leowinterde  4 hours ago   2 comments top 2
1
sathackr 40 minutes ago 0 replies      
It seems to me that they launched that with a much higher TWR than normal... the rocket seems like it accelerated much faster than I'm used to seeing. Is that just the Russian way?
2
kartikkumar 3 hours ago 0 replies      
28
I made my own clear plastic tooth aligners and they worked amosdudley.com
861 points by dezork  1 day ago   118 comments top 33
1
rl3 1 day ago 4 replies      
Not to be a downer, but was any thought given to the safety of the plastic(s) used?

This is something that's in your mouth a lot and constantly exposed to saliva.

The Dimension 1200es mentioned doesn't appear to be specific to medical applications.[0] The product page lists the only compatible thermoplastic being ABSplus-P430. The MSDS for that basically says the stuff is dangerous in molten form, and beyond that there's very little data.[1] The same company makes "Dental and Bio-Compatible" materials for use with their other products, and these appear to have considerably more safety data.[2]

>The aligner steps have been printed, in addition to a riser that I added in order to make sure the vacuum forming plastic (sourced from ebay) ...

As another commenter pointed out, the vacuum forming plastic is probably the primary concern because the 3D printer was just used to create the molds. The specific type of vacuum plastic isn't mentioned.

Regardless, very neat project.

[0] http://www.stratasys.com/3d-printers/design-series/dimension...

[1] http://www.stratasys.com/~/media/Main/Files/SDS/P430_ABS_M30...

[2] http://www.stratasys.com/materials/material-safety-data-shee...

2
jeffchuber 1 day ago 1 reply      
Awesome work!

The animation definitely seems the most difficult (and subjective), but also the most cool! Body hacking via computed geometry!

Invisalign (Align Technology) uses almost the same workflow. Market cap $5.89B.

If you could move the workflow over to something based on WebGL / three.js - you could make this accessible to dentists in developing countries. Could be an awesome open source project.

I think "allowing" it to be used in the US would open yourself up to too much liability though :(

3
loocsinus 1 day ago 1 reply      
It is smart that you designed the retainers based on the maximum tolerance of tooth movement, quoting from a textbook. I suggest you take an X-ray to make sure no root resorption has occurred. Also, for those who want to imitate this, measure the length of the teeth and compare it with the arch length to make sure the teeth can actually "fit" into the arch. I am a dental student.
4
percept 1 day ago 2 replies      
Now that is awesome--those things aren't cheap.

I'm going to send this to my dentist (who's cool enough to appreciate it).

5
forgotpasswd3x 1 day ago 1 reply      
This is really amazing, man. It's honestly the first 3D printing application I've seen that I can see quickly improving thousands of lives. Just to think of all the people who right now can't afford this procedure but soon will be able to... it's just really wonderful.
6
valine 1 day ago 0 replies      
He scans his teeth, animates how he wants them to move in blender, and then 3D prints each frame. That is absolutely brilliant.
7
wslh 22 hours ago 1 reply      
There is an important issue missing in the article (beyond the warning notice): the occlusion. The modification of the dental structure requires a whole functional analysis that goes beyond the teeth.

Anyway, the future is promising and the issues could be solved taking into account all the factors.

8
minsight 1 day ago 1 reply      
This is just amazing. I was waiting for how it might go horribly wrong, but the guy's mouth looks great.
9
rashkov 1 day ago 1 reply      
I came across an article here on HN about mail-order Invisalign companies at a fraction of the price. I'm about halfway through and very happy with the progress so far. Just thought I'd give a heads-up if anyone is interested.
10
CodeWriter23 1 day ago 1 reply      
The work he did with the impressions, to me, suggests he has experience as / knows someone who is a dental technician. If he didn't, wow, he independently figured out some of their key techniques.

My grandfather used to make dentures, and that casting in the 4th photo looks exactly like the impressions he would make. They also used these hinges so they could mate the upper to the lower, to adjust any collisions that occurred while opening and closing the mouth.

11
daveguy 22 hours ago 0 replies      
It looks like the author took into account the safety of the plastic in creating these, which is a good thing. Maybe more so than dentists. You know "silver" fillings, aka dental amalgam? They are 50% mercury by weight and are still being used. Supposedly safe because it is inhalation of mercury that is poisonous. Removal of those fillings with a drill can be dangerous. When some guy told me about this and was talking about it being the next asbestos/mesothelioma, I was thinking "sure! That sounds like conspiracy crap!" Then I looked it up on the FDA site like he suggested:

http://www.fda.gov/MedicalDevices/ProductsandMedicalProcedur...

Anti-vaxxers are idiots and it is obvious that vaccines don't cause autism (original study was a fraud). The health benefit of vaccines is as undeniable as the lack of correlation to autism.

That said, dental amalgam is a chunk of mercury in your mouth. The FDA says it is safe for people over 6 years old, but I personally will stay away from it for any future dental work.

12
KRuchan 6 hours ago 0 replies      
Kudos to him for doing this, but looking at the before and after pictures, I am slightly concerned that he has introduced an overbite [1] :(

[1] https://en.wikipedia.org/wiki/Overbite

13
teekert 1 day ago 2 replies      
This also seems to have whitened his teeth at the same time ;), typical "before, after".

But on a serious note: I had braces, and after they were removed a wire was placed behind my teeth to keep them in place. It didn't stick to one of the ceramic teeth I had from an accident in my youth. The wire was removed, and after some months my front two teeth were as far apart as ever. OK, the overbite didn't return, but things will move back at least to some degree over time.

As mentioned before, I myself would never just put any plastic material in my mouth, given all the bad things known about plasticisers, BPA/BPS, etc.

14
hellofunk 1 day ago 2 replies      
This is cool but I can't say I agree with actually doing it. Just because you can do something doesn't mean you should, particularly in matters of health. If you don't have the requisite experience and knowledge and training, it seems risky to go about something like this on your own.
15
racecar789 1 day ago 0 replies      
Another option: have a dentist bond composite material to the couple of teeth out of alignment.

Had two teeth done for under $500 10 years ago.

It's a stopgap until braces are an option financially.

16
zump 1 day ago 0 replies      
Now THIS is a hack.
17
z3t4 4 hours ago 0 replies      
Considering the opportunity cost of the 100+ hours that probably went into this, it would be cheaper to go to a dentist.

He might be able to come up with a better or cheaper method than the current industry standard, though...

18
yogipatel 13 hours ago 0 replies      
I'm not trying to downplay how much the hacker/geek in me loves this, however, as a former* dental student, I would highly suggest not trying to pull this off on your own.

First, teeth and their movement are more complicated than they might first seem. You have to think about the entire masticatory apparatus, for example:

There's more root than crown: how does the root move in relation to the tooth? Root resorption is a common problem in orthodontic treatment.

Is there / will there be enough bone surrounding the tooth to support the intended movement?

How will the patient's occlusion (how the teeth fit together) be affected? Part of the Invisalign process is to take a bite registration that shows the upper and lower teeth in relation to each other. This is important, and ignoring it can potentially lead to other complications:

- stress fractures

- supraeruption of opposite tooth

- TMJ pain

Does the patient display any parafunctional habits that will affect the new tooth positions? For example, do they grind, clench, or have abnormal chewing patterns?

Many Invisalign techniques require the placement of anchors, holds, and various other structures attached to the teeth themselves. They allow for more complex movement than the insert itself would be able to provide.

Adjustments are often required mid-treatment. Not everybodys anatomy and biology is exactly the same, so you have to adjust accordingly.

Now, does every general dentist take this into account 100% of the time? No, but theyre at least trained to recognize these situations and compensate for them.

That said, many simple patients dont require any more thought than the OP put in. Its a good thing he looked in a text book and realized that theres a limit to how much you should try to move a tooth at each step before youre likely to run into problems. And if you do run into problems do you think a professional is going to come anywhere near your case?

A few issues I have with his technique:

Unless he poured his stone model immediately after taking the impression, its likely there was a decent loss in accuracy. Alginate is very dimensionally precise, but only for about 30 minutes. The material that most dentists use, PVS, is dimensionally stable for much, much longer (not to mention digital impressions).

Vertical resolution of the 3D print does matter. You might be moving teeth in only two dimensions, but you're applying the aligner over three dimensions.

Again, I think it is awesome that someone gave this a shot, and did a fairly good job as well. I'm all for driving the cost of these types of treatments down, as well as promoting a more hacky/open approach to various treatments. Just know there's more than meets the eye.

* I decided to go back to tech; there's too little collaboration in dentistry for me to make a career out of it.

19
vaadu 20 hours ago 1 reply      
How soon before the FDA says this is illegal, or the medical-industrial complex lobbies Congress to make it illegal?
20
Tepix 20 hours ago 0 replies      
I love this project - well done, and the result speaks for itself! It's unfortunate that you were forced to go this somewhat dangerous route due to money. In some countries dental care like that would be paid for by the health insurance.
21
scep12 1 day ago 0 replies      
Awesome stuff Amos! It's always nice to see creativity and persistence rewarded with successful results. I really enjoy reading these types of posts on HN.
22
justinclift 19 hours ago 0 replies      
Cool, that's an idea I'd had in the back of my head for some time too. Good to see someone's gone ahead and done it, and proven the concept. :D
23
muniri 1 day ago 0 replies      
This is awesome! Definitely not the safest thing to do, but I'm glad that they worked.
24
semerda 1 day ago 0 replies      
Wow, this is awesome! Thank you for sharing. Retainers post-Invisalign cost between $400 and $900 for one set - a total ripoff. This looks like a far cheaper alternative.
25
vram22 1 day ago 3 replies      
Interesting article. Waterpik is a related product (as in, for teeth and gums) that a dentist recommended. Anyone have experience using it - pros, cons?
26
burgessaccount 1 day ago 0 replies      
This is awesome! Thanks for the detailed description.
27
mentos 1 day ago 1 reply      
Are you considering starting a business out of this?
28
pcurve 1 day ago 0 replies      
This is pretty amazing and daring.

I guess this would work better for those with gaps or very mildly crowded teeth.

Often, treating crowded teeth means pulling teeth to make room.

29
hamburglar 1 day ago 0 replies      
Having recently done invisalign, I think this is brilliant, but I would have had a really hard time sticking with it through the pain. I would worry too much that I was doing damage. My case was quite a bit more severe, however, so maybe it's less of a big deal if the movements are minor.
30
ck2 1 day ago 0 replies      
This is definitely for the brave, not me.

Not sure what I would do if we didn't have a dental school.

When I go there, I am always surprised to find people who actually have insurance yet still go there despite all the hassle.

31
transfire 1 day ago 3 replies      
Can you chew food with the aligner on?
32
peleroberts 23 hours ago 0 replies      
Direct leak into your gums...
33
brbsix 23 hours ago 0 replies      
Orthodontics is a field known for its protectionism. It'd be pretty foolish, but I wouldn't be surprised if you received a cease and desist.
29
What ISPs can see teamupturn.com
196 points by schoen  18 hours ago   71 comments top 9
1
mirimir 14 hours ago 3 replies      
This is a decent article. However, it's a bit vague on the vulnerabilities of VPN services. The major risk is probably traffic leakage after uplink interruptions, or changing WiFi APs. Once the VPN connection has failed, default routing must be restored in order for reconnection to occur. There must be firewall rules to prevent other traffic from using the physical NIC during reconnection. That is, you want the VPN to fail closed.

Another risk, which the article may vaguely allude to, is using ISP-associated DNS servers. Even if all traffic uses the VPN tunnel, DNS requests reveal sites being visited, and it's trivial for ISPs to correlate them with traffic.

IPv6 is a huge looming risk. Many VPN services don't route or block IPv6 traffic. As full IPv6 service becomes widespread, there will be major pwnage. However, this is easy to firewall, and good custom VPN clients do so.

Edit: For suggestions about leak-testing, see https://www.ivpn.net/privacy-guides/how-to-perform-a-vpn-lea...

Edit: Changed "URLs" to "sites".
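A minimal sketch of such a leak test, in Python (api.ipify.org is just one example of a public IP-echo service; a thorough test would also cover the DNS and IPv6 paths, as the guide linked above describes):

    # vpn_leak_check.py - run once with the VPN up and once with it
    # deliberately disconnected; with a fail-closed setup, the second
    # run should show no connectivity at all.
    import requests

    def public_ip(timeout=5):
        # Ask a public echo service which address our traffic exits from.
        try:
            return requests.get("https://api.ipify.org", timeout=timeout).text
        except requests.RequestException:
            return None  # no route out

    if __name__ == "__main__":
        ip = public_ip()
        if ip is None:
            print("No connectivity - traffic is failing closed.")
        else:
            print("Traffic exits from " + ip +
                  " - verify this is the VPN egress, not your ISP-assigned address.")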

2
pmoriarty 17 hours ago 4 replies      
One significant point that this analysis misses is that even when traffic is encrypted, it can be recorded and later decrypted by the ISP or those they give/sell/leak the recordings to if the encryption is ever compromised (which seems to happen on a pretty regular basis with SSL these days).

Never let yourself get lulled into a false sense of security just because the information you wish to keep private has been encrypted.

3
droopybuns 10 hours ago 1 reply      
This is a weird article that lacks important context.

When are we going to get an article on "what Google can see" from this team?

ISP snooping on network traffic really only happened after Google started getting into the ISP business.

Monetizing your traffic (beyond transport) only became a thing after Google Fiber fired a cannonball across the broadside of carriers. Carriers responded by offering ad networks around anonymized, aggregated data about customer behavior. It drives the value of Google AdWords down. It doesn't make carriers rich.

Maybe someday we'll need to worry about big-I innovation from carriers in this domain, but I don't see it happening soon.

I'm all in on tearing it all down. But focusing only on the companies that offer services that huge populations are willing to actually pay for is extremely dishonest.

4
pippy 16 hours ago 5 replies      
Google could easily take the initiative on the DNS query front and implement DNSCrypt by default in Chrome. It would bolster client privacy and also block ISPs from selling usage data. So it would be a win-win for Google.
5
sysret 8 hours ago 0 replies      
The fastest DNS lookups are ones that do not need to traverse the network.

A zone file of public DNS information can be served by a daemon bound to the loopback on the user's device, obviating the need for many (but not all) lookups sent over the network.

These local lookups are also more private than ones sent out over the private LAN or public internet.

Same goes for any type of data. It's not limited to DNS information.

If a user downloads publicly available data dumps from Wikipedia, and then serves them from a local database and httpd, the response time will be faster and the requested URLs more private than accessing the same content over the public internet. Not to mention the small benefits of reliability and reduced dependence on the network.

I know a user who does this and has automated the process of setting it up.

To use the examples in the article, the idea is that a user can periodically download bulk data, e.g., information on medical conditions, in an open format, load it into her database of choice and query to her heart's content, without any ISP or website knowing what she has queried.

Same with daily newspaper content, and even a catalog of toys. "Browsing" through the data remains private.

The alternative is to have this data served from third party computers and have the user send each and every request for each small item of information over an untrustworthy, public network (the Internet).

Despite ample, inexpensive local storage space for users to store data of any kind themselves, let us break up the data into little bits and make users request each and every bit individually. (Not only that, let's make them register for the ability to make numerous queries.) Then we can record all user requests for every item of data.

Metadata. Sell. Profit.
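A minimal sketch of the serving side of this idea, in Python (the ./dumps directory and port are hypothetical; it assumes the bulk data has already been downloaded there):

    # serve_local.py - serve previously downloaded public data from
    # loopback, so browsing it never touches the network.
    import functools
    import os
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    DUMP_DIR = os.path.abspath("dumps")  # hypothetical path to your local dumps

    # SimpleHTTPRequestHandler serves files from the given directory;
    # binding to 127.0.0.1 keeps every request on this machine.
    handler = functools.partial(SimpleHTTPRequestHandler, directory=DUMP_DIR)
    server = HTTPServer(("127.0.0.1", 8080), handler)
    print("Browse http://127.0.0.1:8080/ - requests never leave this machine.")
    server.serve_forever()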

6
Matt3o12_ 7 hours ago 1 reply      
It should also be mentioned that a VPN, even if correctly used (e.g. no DNS/IPv6/WebRTC leaks), simply shifts the trust to another provider. Now your ISP is not able to see your traffic, but your VPN provider potentially is (even if you self-host it on AWS or DigitalOcean, because they still have full access to the box), and so is their ISP. If you trust them more, use them; but if you don't, I see little reason to use a VPN unless you want to unblock geo-blocked services.

The only advantage is that your VPN provider (or their ISP) might have less reason to spy on your traffic than your regular ISP does.

7
q1t 17 hours ago 2 replies      
So how do you protect from such things? I mean, is there a way to analyze all your outgoing traffic (from a specific machine, for example) and route every connection (like DNS and similar stuff) through a desired endpoint?
8
newman314 11 hours ago 2 replies      
I guess this is as good a time as any to ask what people on HN use for a VPN provider.

I'll create an Ask HN post if there is interest...

9
snug 10 hours ago 1 reply      
The SNI header sent by the client can probably reveal even clearer patterns about the user.
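The SNI value is whatever hostname the client supplies when opening the connection, and it travels unencrypted in the TLS ClientHello. A minimal Python illustration (example.com is just a placeholder host):

    # sni_demo.py - the server_hostname argument below is the SNI value.
    # It is sent in cleartext before encryption begins, so anyone on-path
    # (e.g. an ISP) can read which site is being visited even though the
    # rest of the session is encrypted.
    import socket
    import ssl

    host = "example.com"  # placeholder - any HTTPS site works
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            print("Negotiated", tls.version(), "with", host)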
30
Work for only 3 hours a day, but everyday plumshell.com
450 points by NonUmemoto  1 day ago   134 comments top 26
1
err4nt 22 hours ago 4 replies      
I have experienced a similar thing while freelancing in design and web development. I used to work 16 hours some days and fewer hours on others, but then sometimes I would need to work and found it hard to kick it into gear.

I think creativity is like a well, and when you do creative work it's like drawing that water out. If you use too much water one day, the well runs dry. You have to wait for the groundwater to fill it up again.

Not only did I begin viewing creativity as a limited resource that I create and have access to over time, but I noticed that some activities, like reading about science, listening to music, and walking around town, actually increase the rate at which the well fills up.

So now I have made it a daily habit to do things that inspire me, and I also draw from the well daily, like the author said - but I'm more careful not to creatively 'overdo it' and leave myself unable to be creative the next day.

Viewing it this way has helped a lot, for all the same benefits the author listed. I'm in a rhythm where I don't feel I need a break on the weekend; I just still have energy.

2
JacobAldridge 1 day ago 5 replies      
If I told you that every car needed 8 gallons of gas to drive 100 miles, you'd point out I was wrong - so many different makes and models, not to mention variables from tire pressure to driving style.

Yet for the potentially even more complex range that is different people, it amazes me that so much of the advice is didactic - we all need 8 hours sleep, 8 glasses of water, and 8 hours of work with breaks is optimal.

The closest I get to advice is 'learn your body and what works for you'. Thanks to the OP for sharing what works for him.

3
wilblack 18 hours ago 3 replies      
I started contract work last fall. I set my rate assuming a 25-hour work week. At first I tried working ~4 hrs/day, every day. I quickly realized this did not work for me. Working every day, even just a little, is not sustainable for me. I have a family and they are still on the 9-to-5 schedule, so working even a few hours on weekends cut into my family time, which is important to me. So now I force myself to take at least one weekend day off with no programming. This is hard because I love to program. Also, I have a hard cutoff on weekdays at about 5:30pm, when my wife and kid get home. I usually feel like I want to keep working, but that forces me to stop (at least until my daughter goes to bed). So now I work 5 or 6 days a week but seldom exceed 6 hours/day. Most days are closer to 4 hrs. It's great at this pace because I almost always feel like I want to keep programming, so I don't get burnt out. And if I do have an off day, I just don't work.

The problem I am running into now is what to do with my spare time. All my hobbies are computer-based (video games and Raspberry Pi projects), but I am trying to minimize my screen time in my off hours. This will get better in the spring and summer as the weather improves, but during winter on the Oregon Coast, going outside is hit or miss.

And I hear you about not being able to go to bed until I solve a problem I am stuck on; that drives me crazy.

4
jiblylabs 23 hours ago 3 replies      
As a freelancer, I understand where some of the "as a freelancer this won't work" comments are coming from. However, over the last year I've flipped my freelancing model so that I offer a more productized service with a clearly defined scope and set price. Instead of doing design work for $XXX/h, I'll deliver A, B, C within Timeframe Y, for Price $XXXX. With clearly defined services, I've actually been working for the last 12 months using a similar model, usually constraining myself to 4h/day with weekends off. My productivity + revenue have increased dramatically. Productizing your service makes it easier to market and generate leads, while giving you the flexibility to work the way you want and actually free up time. Awesome post, OP!
5
susam 22 hours ago 3 replies      
I mostly agree with this article, although 3 hours a day might be too little for some people to make good progress with their work.

This article reminded me of my previous workplace (about 7 years ago), where my manager discouraged engineers from working more than 6 hours a day. He recommended 4 hours of work per day and requested that we not exceed 6. He believed working fewer hours a day would lead to higher code quality, fewer mistakes, and more robust software.

He never went around the floor ensuring that engineers did not exceed 6 hours of work a day, and some engineers did exceed it; however, in my opinion, his team was the most motivated team on the floor.

6
shin_lao 1 day ago 1 reply      
3 hours a day is just not enough for everyone.

For some projects it's perfectly fine, but some tasks can only be done if you focus on them for a large amount of time and work obsessively until you reach a milestone.

The greatest work I have ever done was always done when I retreated like a monk for several weeks, cutting myself off from the whole world and working almost non-stop on the task until I made a significant breakthrough.

Then I go back to the living and share the fruits of my work, and of course take a well-deserved rest for several days.

The trap most people fall into is confusing being active with working.

7
andretti1977 1 day ago 2 replies      
I agree with the author, with some exceptions: when you are working as a contractor or freelancer on someone else's project, maybe 3h/day is not acceptable. When you've got externally imposed deadlines, 3h/day may not be sufficient.

But I agree that working less than 8h/day could really be more productive. I also liked the "less stuck for coding" point: "...it is sometimes hard to go to bed without solving some unknown issues, and you don't want to stop coding in the middle of it..." - so maybe forcing yourself to stop could be a solution.

Anyway, I would really like to work 4 or 5 hours a day while keeping holidays and weekends free from work, and I think this can only be achieved if you can pay for your living with products of your own, such as your apps, and not by freelancing (I am a freelancer and I know it!).

But I enjoyed the idea behind the article and I will try to achieve it one day.

8
dkopi 1 day ago 1 reply      
I'm pretty sure this has worked for the author, and it will work for a lot of other people as well, but a lot of the benefits raised can still be achieved when working more than 3 hours a day.

A few points are raised in the post:

1. If you only work 3 hours, you're less tempted to go on Twitter/Facebook/Hacker News.

True - but that's really a question of discipline, work environment, and how excited you are about what you're working on. It's perfectly possible to perform for 10 hours straight without distractions; just make sure to take an occasional break for your physical health.

2. Better prioritization.

Treating your time as a scarce resource helps you focus on the core features. But your time is a scarce resource even if you work 12 hours a day. Programmers are in short supply. They cost a lot. And the time you're spending on building your own apps could have been spent freelancing, working on someone else's apps. Always stick a dollar figure on your working hours, even if you're working on your own projects. You should always prioritize your tasks, and always consider paying for something that might save you development time (a better computer, a better IDE, SaaS solutions, etc.).

3. Taking a long break can help you solve a problem you're stuck on.

Personally, I find that taking a short walk, rubber-duck debugging, or just switching to a different task for a while does the same. If I'm stuck on something, I don't need to stop working on it until tomorrow; I just need an hour or two away from it.

9
rmsaksida 1 day ago 4 replies      
I mostly agree with the author, but I don't see the point of stopping yourself when you're "in the zone". Why lose the flexibility?

What works for me is having a baseline of 3 or 4 hours of daily work, and not imposing any hard limits when I want or need to do extra hours. This works out great, because I have no excuses not to do the boring routine work as it's just a few hours, but I also have the liberty of doing obsessive 10h sessions when I'm trying to solve a tough problem or when I'm working on something fun.

10
jacquesm 20 hours ago 1 reply      
There is a much better alternative: work really hard for 2 to 3 months per year and then take the rest of the year off. If you're doing high value consulting you can easily do this. You may have to forego some luxury but that's a very small price to pay for the freedom you get in return.
11
jjoe 23 hours ago 0 replies      
It reads like someone who isn't doing much real-time support. This works great for projects that haven't been unveiled, or even ones that require little ongoing maintenance, like a game. But if I worked 3 hours a day, my clients would crucify me.

Sadly, it isn't always possible.

12
maxxxxx 16 hours ago 1 reply      
When I was freelancing there were a lot of days when I didn't do much, but then there were days when I got into the flow and worked almost straight through for 2 or 3 days. Most of the time this averaged out to around 40 hours/week, but in spurts. This was probably the best work environment I have ever been in.

What I hate about the corporate workplace is that it doesn't accept any kind of rhythm but treats you like a machine that performs exactly the same at all times. Nature is built around seasons, and so are humans. They are not machines.

I would much prefer to have a time sheet where I can do my 40 hours whenever I feel like it.

13
joeguilmette 11 hours ago 0 replies      
I work on a remote team and I am only accountable for my output. I end up working 15-25hrs a week. Sometimes more if something is on fire.

I usually work 7 days a week, but invariably a couple days a week I only work an hour, checking email and replying to people.

The work I do is of better quality, I'm happier, and I could easily work at this pace until the day I die.

14
LiweiZ 22 hours ago 0 replies      
I work 4-5 hours a day, but every day, on my own project. I wish I could spend more time on work, since most of the rest of my time is allocated to housework and taking care of two little ones. I guess the key is to control your work pace. When a sprint is needed and you are ready for it, two weeks at 90-100 hours each would not be a bad idea. Just like running: listen to your body, pick your pace, and keep going towards your goal.
15
Shorrock 19 hours ago 0 replies      
One size certainly does not fit all; however, my one takeaway is that there is huge benefit in paying close attention to what works best for you and optimizing your life around that. When you focus on productivity and happiness (often the two are linked) and ignore, when possible, schedules dictated to you, your quality of life will improve.
16
1123581321 18 hours ago 0 replies      
I read an essay several years ago that suggested working three focused hours a day. But, it suggested slowly increasing the hours worked while keeping the same level of focus, and doing restorative activities in the remaining time. The idea was that this would "triple" productivity.
17
a-saleh 14 hours ago 0 replies      
Nice!

I actually had a similar routine while at school, but it was 6 hours a day total: 3 hours in the evening, usually just before I went to sleep (might be 19-22, or 21-24), and 3 hours in the morning, starting when I woke up and continuing until I left for lectures.

I started doing this because I realized that I was no longer capable of pulling all-nighters. And it worked surprisingly well :-)

18
spajus 1 day ago 2 replies      
How do you pull this off when you are paid by the hour?
19
TensionBuoy 16 hours ago 2 replies      
3 hours is not enough time to get anything done. I'm self-employed. I go 12 hours straight before I realize I should probably eat something. I love what I'm doing, so I'm drawn to it all day, every day. At the end of the day I've hardly made a dent in my project, though. 3 hours is just getting warmed up.
20
abledon 14 hours ago 0 replies      
This is so true of people who give 100% every moment they work but can't work long hours without feeling drained, compared to someone who goes at 50% and can manage the 40-hour work week. I wish this method would become more recognized.
21
amelius 19 hours ago 1 reply      
> Making money on the App Store is really tough, and people dont care how many hours I spend on my apps. They only care if it is useful or not. This is a completely result oriented world, but personally, I like it.

I would guess that, if the OP had a competitor, then the OP would be easily forced out of the market if that competitor worked 4 hours a day :)

22
JoeAltmaier 15 hours ago 0 replies      
"Work for only 3 hours a day, but every day".

'everyday' is an adjective

23
mrottenkolber 20 hours ago 0 replies      
What about work 11 hours a week and be happy? Works for me, and I am a freelancer.

Edit: I usually do three blocks of three hours each and one two hour block each week. I find three hours perfect to tackle a problem, and a good chunk to be able to reflect upon afterwards.

24
jamesjyu 20 hours ago 0 replies      
Work hard. Not too much. Focus on what's important.
25
xg15 23 hours ago 1 reply      
So no going out for drinks where you might have a hangover the next day?
26
logicallee 1 day ago 7 replies      
Historically, working 24 hours a day (I include sleep, because after a certain number of hours you even dream of code or your business) for 1 year typically accomplishes more than working 3 hours per day for 8 years, or 1.5 hours per day for 16 years. There is just some kind of economy of scale.

---------

EDIT: I got downvoted. Come up with whatever standard of productivity you want (ANY standard) and adduce a single human who, in 16 years at 90 minutes per day, accomplished more than I can find a counterexample of someone in the same field doing in 1 year. One year of 24 hours a day strictly dominates 16 years of 90 minutes per day, and you cannot find a single counterexample in any field from any era of humanity. Go ahead and try.

Oh, and by the way: in 1905 Einstein published 1 paper on the photoelectric effect, for which he won his only Nobel Prize; 1 paper on Brownian motion, which convinced the last leading anti-atomic theorist of the existence of atoms; 1 paper on a little subject that "reconciles Maxwell's equations for electricity and magnetism with the laws of mechanics by introducing major changes to mechanics close to the speed of light. This later became known as Einstein's special theory of relativity"; and 1 paper on mass-energy equivalence, which might have remained obscure if he hadn't worked it into a catchy little ditty referring to an "mc". You might have heard of it? E = mc^2? Well, a hundred and ten years later all the physicists are still laying their beats on top.

https://en.wikipedia.org/wiki/Annus_Mirabilis_papers

Your turn. Point to someone who did as much in 16 years by working just 90 minutes per day.

Closer to our own field, Instagram was sold to Facebook for $1 billion about a year after its founding. Point out anyone who built $1 billion in value over 16 years working just 90 minutes per day.
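For what it's worth, the schedules being compared all add up to the same total; a quick check in Python:

    # All three schedules sum to 8,760 hours; the disagreement is about
    # distribution, not total effort.
    print(24 * 365)         # 1 year around the clock   -> 8760
    print(3 * 365 * 8)      # 3 h/day for 8 years       -> 8760
    print(1.5 * 365 * 16)   # 90 min/day for 16 years   -> 8760.0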

       cached 14 March 2016 13:02:01 GMT