Hacker News with inline top comments: Best, 25 Aug 2017
Feather: Open-source icons feathericons.com
807 points by mcone  1 day ago   132 comments top 26
huhtenberg 1 day ago 5 replies      
Seems like a good time to mention an absolutely awesome tool: https://icomoon.io/app

It allows cherry-picking icons from dozens of different SVG icon packs (including Feather) and packaging them into a custom (web)font. It can also be used to package your own SVG assets into a single font file and then use the resulting .woff as an SVG sprite sheet.

vforgione 22 hours ago 5 replies      
There's a Chrome icon, but no Firefox or Safari or Edge or any other browser. Not even a generic. Can we please stop with the implicit homogenization around Chrome?


jannes 1 day ago 0 replies      
I really like that these are SVG icons with a minimal amount of paths rather than an icon font.
vijaybritto 23 hours ago 2 replies      
The icons are beautiful. But one off-topic thing here: the webpage loads super fast on 3G in India. Can the site builders talk about the loading perf a bit?
pvinis 1 day ago 3 replies      
All these icon collections are great, but I always find missing stuff there, and I don't know how this can be fixed besides complaining (and I don't want to complain when they give it for free).

For example, this one looks really nice, but it only has 'battery' and 'battery-charging'. In an app I work on, I need battery with no charge, 25%, 50%, 75%, 100%, and charging.

Then I think that this would be small to add, but again I don't want to complain. Then maybe someone will need a battery with a sliding charge, so they can represent as many charge percentages as the pixels allow. And then this becomes a whole other thing, and not just an icon collection.

ISL 1 day ago 3 replies      
To pull criddell's important question to root-level:

How can the trademarked logos (Facebook, Instagram, Github, etc.) be provided without licensing terms?

koolba 23 hours ago 0 replies      
I don't think these icons are particularly beautiful. Not sure if it's because of the simplicity or anti-aliasing, but something just seems off.
SippinLean 22 hours ago 0 replies      
FYI, if you're concerned about accessibility: these SVGs lack any kind of descriptors; you'll need to add your own.


thinbeige 22 hours ago 6 replies      
Great site and there are also other quite helpful suggestions in the thread such as Icomoon.

Problem is that the next time I need SVG icons I will forget these sites, try to google them, and end up on those sites that perform well on Google's SERPs but want money for clean SVGs.

This happened to me two days ago and I just fired up Inkscape and remade those icons. However, this took too long.

How can I remember such sites the next time I need them?

totony 23 hours ago 1 reply      
What does the MIT license mean in this context? Let's say you use the icons, where should the notice be put?
bubaflub 1 day ago 1 reply      
From https://github.com/colebemis/feather/blob/master/README.md#l...:

 License: Feather is licensed under the MIT License.

tlb 22 hours ago 0 replies      
Those are nice. Appropriately clean and unflavored for an open-source visual resource.
triangleman 23 hours ago 2 replies      
Is this strange anti-aliasing normal? Is the octagon with an exclamation point in it supposed to look like that?


fwx 1 day ago 4 replies      
What benefits does this provide over using Font Awesome icons?
sailfast 23 hours ago 0 replies      
Cool site! Always a fan of getting more design options out there for icons / UX in applications.

Bit of a tangent for product folks:

A lot of this stuff actually does look beautiful, but the use of "beautiful" as a modifier has diluted the term for me. When I click on something labeled "beautiful" I almost always expect to see "meh", and most of the time, that's exactly what happens.

michaelbazos 1 day ago 0 replies      
Selfish plug here, but I'm maintaining a library of Feather icon components for Angular 2+ applications. The current version is 3.2.2, matching the Feather version: https://github.com/michaelbazos/angular-feather
zimpenfish 1 day ago 1 reply      
It mildly bothers me that the stroke for phone-off is the opposite direction to all the other *-off icons.
wolframhempel 1 day ago 2 replies      
They're beautiful, but I'd much rather use them as an icon font. Is there any way to do that?
ringaroundthetx 1 day ago 3 replies      
I understand this is not inherently presented here as any particular solution or competitor, although its presence here elevates it as such. So why use this, or do this, instead of using the library at thenounproject.com?
ape4 23 hours ago 1 reply      
Just wondering, is it legal to remake a company's logo (e.g. Instagram)?
nkristoffersen 23 hours ago 0 replies      
I pay for noun project but sometimes libraries like this are easier :-)
IgorPartola 23 hours ago 2 replies      
Sweet! What other great FOSS icon sets are out there that you love?
Tokkemon 22 hours ago 0 replies      
And look at me stuck with FontAwesome.
Froyoh 22 hours ago 1 reply      
The umbrella icon makes me feel uneasy.
hungerstrike 22 hours ago 4 replies      
I wouldn't call them "beautiful". They're OK. "Cute" maybe.

In any case, personally I'd rather wait for other people to call my work beautiful. Seems a bit pompous to self-apply it.

Learning How to Learn, the most popular course on Coursera nytimes.com
725 points by hvo  20 hours ago   168 comments top 38
sp527 19 hours ago 20 replies      
Took the course. A lot of it is cruft and motivation for the underlying core ideas. The techniques suggested are things many people are already familiar with: recall, deliberate practice, interleaving, spaced repetition, Einstellung, Pomodoro, Feynman Method, Cornell notes or similar (to force recall), exercise regularly, sleep well, focus on concepts not facts (chunking), etc. A composite of these dramatically enhances the learning process.

I can post some of the notes I took on the course if anyone is genuinely curious. The key premise of the course is that the brute force approach people usually take to learning is highly inefficient and ultimately ineffective (you'll forget).

EDIT: Notes https://pastebin.com/JNbGxvpQ
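The spaced-repetition idea in that list can be sketched in a few lines; this is a toy schedule (the doubling rule and one-day reset are illustrative assumptions, not the course's exact algorithm):

```python
def next_interval(interval_days: int, recalled: bool) -> int:
    """Toy spaced-repetition schedule: double the review interval after
    a successful recall, reset to one day after a failure."""
    return interval_days * 2 if recalled else 1

# A card recalled successfully four times in a row is next seen in 16 days:
interval = 1
for _ in range(4):
    interval = next_interval(interval, recalled=True)
```

The point of the growing gaps is to schedule each review just before forgetting would set in, which is what makes recall-based study so much more efficient than rereading.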

cpsempek 17 hours ago 3 replies      
I find this interesting for two reasons.

I think I recall that not too long ago, the most popular course on Coursera was Ng's ML course. It is ironic that people are now more interested in teaching themselves how to learn than in teaching a machine. This change could be attributed to other reasons, like a change in user demographics or market saturation, so that naturally popular courses will change once a large majority moves from one to the next. But I want to believe there is a more interesting phenomenon occurring, where reading about abstract notions of learning causes a person to question how they themselves learn, and whether the same abstract concepts apply. This is more a whimsical thought than a serious one.

The second reason this is interesting is that it could be surfacing a real issue with the way we have become accustomed to ingesting data. Could it be that we are becoming aware and fearful that the long-term effects of suckling the internet's spout of instant gratification are causing serious harm to our ability to "actually learn"?

Neither may be the case, but it seems like there is something interesting going on here.

tonyhb 19 hours ago 2 replies      
Dr Oakley also wrote "A Mind for Numbers", which is essentially this course in text form. The book is great as a basis for the theory of learning, and dives into the same content (diffuse vs focused thinking, skimming chapters before reading, etc.).

I find having a text reference with dedicated time makes me learn more, so if you're interested in the course you'd probably also love the book.

jkscm 18 hours ago 0 replies      
This is a pretty good summary of the core concepts: http://www.math.toronto.edu/nhoell/10rules-of-studying.pdf
cooervo 52 minutes ago 0 replies      
I took this course; it is wonderful. I also read her book.

I now always try to skim the index of a book or chapter before reading it. Also try to study in smaller sessions, every day, instead of cramming a ton of info in just one day.

HumbleGamer 16 hours ago 2 replies      
This course revolutionized my views on learning. After taking it and applying the suggested techniques I've seen an amazing increase not just in my competence but my confidence. It left me feeling empowered. I was almost a bit sad when I reached the end.
baron816 19 hours ago 1 reply      
CrashCourse has a study skills course: https://www.youtube.com/playlist?list=PL8dPuuaLjXtNcAJRf3bE1...

It's geared more toward a younger crowd, but it's still pretty good, at least so far.

cJ0th 2 hours ago 2 replies      
Can any of you report long-term benefits from these kinds of courses? Personally, I think those classes (haven't looked into the Coursera one) only present obvious stuff.

I once worked through "Make It Stick", a book that is often recommended when it comes to learning. What I found is that there is nothing wrong with the content, but it did not really help.

I imagine that most people who struggle with learning deal with some kind of psychological issues that need to get addressed. They need to learn how to deal with stuff like frustration, worries, perfectionism or self esteem.

tchaffee 15 hours ago 0 replies      
I'll report in too. I took the course and thought it was excellent. I love learning and have been learning new things for decades and thought my techniques were pretty good. I'm a very fast learner. The course helped me more than I was expecting and my learning speed and ability to memorize noticeably improved. Especially with the foreign language I'm studying. And the theories around how the brain works were interesting. And it's a pretty short course.
Yahivin 18 hours ago 0 replies      
Here's a great playlist featuring Richard Hamming with a CS focus on a similar topic: https://www.youtube.com/playlist?list=PL2FF649D0C4407B30
HateInAPuddle 18 hours ago 1 reply      
Am I the only one who thinks this is a cynical ploy to tap into the anxieties people have about not being successful?
maxwellfoley 19 hours ago 6 replies      
Is this any good? I've had it in my list of things to check out for a while but I suspect it might just be TED-talk pop-psych voodoo stuff.
sremani 19 hours ago 0 replies      
Quite an awesome course. I highly recommend it. In a day and age where we feel outdated by the minute, it sets the right perspective and gives a good system for knowledge workers of any domain.
CaRDiaK 8 hours ago 0 replies      
Took the course, loved it. Bought the book, loved it. Encouraged my partner to check it out, and she stuck with it. 3 years later she's about to graduate from college with her basic counselling education and experience behind her, having hit top of the class. She's about to set out on her own. This course was a massive driver, and I'm not sure she'd have gone this way this quickly without it.
baby 18 hours ago 2 replies      
I haven't followed a course on Coursera since the first iteration of Crypto I. I heard that it became really bad, asking you to pay for a lot of courses.
abhip 18 hours ago 0 replies      
I took the course as well and wrote a post about applying the lessons learned as a developer: https://medium.com/learn-love-code/learnings-from-learning-h...

Feedback welcome! Would love to learn what other techniques devs use to learn and level up

diegof79 13 hours ago 1 reply      
There is a book from the 80s with the same name, "Learning How to Learn", by Gowin & Novak. The book was very influential in the UX field. Concept maps, the technique presented in the book, are used a lot to understand user mental models. The book is 80% discussion about how to apply the technique in a classroom and 20% explaining the technique, but anyway it's worth the read.

Edit: small correction, according to google the book was published in 1984

Aron 13 hours ago 0 replies      
I find that learning how to learning how to learning how to learn is a good way to spend my time when I don't actually want to accomplish anything.
serkanh 17 hours ago 0 replies      
I highly recommend this course. One thing I learned and have been practicing is memorization and spaced-repetition practice of things I would like to learn and understand.
roceasta 18 hours ago 1 reply      
Fantastic that resources like this now exist. In some ways it seems to be reminding us about how we used to learn. Children spontaneously go back again and again to things that delight them (spaced repetition) and they switch activities when bored (Pomodoro). Unfortunately, perhaps as a result of schooling, or other hard knocks, the spontaneous impulse gets lost. Adults suffer from mixed motivations and seem to be fairly clueless about what they find genuinely interesting. It becomes difficult to approach topics playfully.
mypath 4 hours ago 0 replies      
I have heard that the way it is presented is very dry. I have read a summary on reddit and I think it is good, but I just don't have the time to spend on it.
hkon 18 hours ago 0 replies      
I love to learn how to learn. Using what I learnt to learn stuff is hard, so I don't do it.
misiti3780 17 hours ago 0 replies      
Interesting article, personally I think this is more useful: https://www.supermemo.com/en/articles/20rules
erikb 17 hours ago 0 replies      
I haven't done that course but I have to agree. Learning how to learn is vastly important and really hard to do on your own, because the requirement is the same as the result.
senatorobama 13 hours ago 0 replies      
What's the best MOOC on Compilers?
Sinidir 15 hours ago 0 replies      
Which I neatly tagged away in my bookmarks, feeling good about envisioning taking it some day in the future. :) :(
zafka 17 hours ago 0 replies      
This reminds me of a book I read years ago: Learning How to Learn by Idries Shah

It was an interesting look at Sufi thought.

weishigoname 9 hours ago 0 replies      
Took the course. I think it is pretty good; it follows the rhythm of how our brain remembers things.
deepnotderp 13 hours ago 0 replies      
I saw the title and initially thought this was about AutoML.
lettergram 15 hours ago 0 replies      
Wasn't this the course also provided by The Teaching Company?
Scarbutt 19 hours ago 3 replies      
For those who read her book and did the course, is there anything in the course that isn't covered by the book? What's the advantage of the course over the book?

As a side note, I have found that the most powerful technique for me is recalling.

aeorgnoieang 19 hours ago 6 replies      
The very first sentence:

> The studio for what is arguably the world's most successful online course is tucked into a corner of Barb and Phil Oakleys' basement, a converted TV room that smells faintly of cat urine.

I feel embarrassed on the Oakleys' behalf. But I'm not a cat owner, so maybe a room in one's home smelling (however faintly) of cat urine isn't particularly embarrassing.

Am I unreasonable in thinking that the author is an asshole?

Ask HN: What is your favorite CS paper?
715 points by lainon  1 day ago   234 comments top 115
joaobatalha 22 hours ago 5 replies      
"Reflections on Trusting Trust" by Ken Thompson is one of my favorites.

Most papers by Jon Bentley (e.g. A Sample of Brilliance) are also great reads.

I'm a frequent contributor to Fermat's Library, which posts an annotated paper (CS, Math and Physics mainly) every week. If you are looking for interesting papers to read, I would strongly recommend checking it out - http://fermatslibrary.com/

- Reflections on Trusting Trust (Annotated Version) - http://fermatslibrary.com/s/reflections-on-trusting-trust

- A Sample of Brilliance (Annotated Version) - http://fermatslibrary.com/s/a-sample-of-brilliance

mathgenius 10 minutes ago 0 replies      
"The Derivative of a Regular Type is its Type of One-Hole Contexts" - Conor McBride, http://strictlypositive.org/diff.pdf

This shows how you end up "differentiating" datatypes in the context of strict functional programming, in order to do things like "mutate" lists. It is essentially the same as what mathematicians call "combinatorial species".

cs702 22 hours ago 2 replies      
I would never call it my "all-time favorite" (no paper qualifies for that title in my book), but Satoshi Nakamoto's paper, "Bitcoin: A Peer-to-Peer Electronic Cash System" deserves a mention here, because it proposed the first-known solution to the double-spending problem in a masterless peer-to-peer network, with Byzantine fault tolerance (i.e., in a manner resistant to fraudulent nodes attempting to game the rules), via a clever application of proof-of-work:


Others in this thread have already mentioned papers or opinionated essays that quickly came to mind, including "Reflections on Trusting Trust" by Ken Thompson, "A Mathematical Theory of Communication" by Claude Shannon (incredibly well-written and easy-to-follow given the subject matter), and "Recursive Functions of Symbolic Expressions and Their Computation by Machine" by John McCarthy.

I would also mention "On Computable Numbers, with an Application to the Entscheidungsproblem" by Alan Turing, "On Formally Undecidable Propositions of Principia Mathematica and Related Systems" by Kurt Gödel, and "The Complexity of Theorem Proving Procedures" by Stephen Cook, but in my view these papers are 'unnecessarily' challenging or time-consuming to read, to the point that I think it's better to read textbooks (or popular works like "Gödel, Escher, Bach" by Douglas Hofstadter) covering the same topics instead of the original papers. Still, these papers are foundational.

Finally, I think "The Mythical Man-Month" by Fred Brooks, and "Worse is Better" by Richard Gabriel merit inclusion here, given their influence.

This is by no means an exhaustive list. Many -- many -- other worthy papers will surely come to mind over the course of the day that I won't have a chance to mention here.

There are many other good recommendations elsewhere in this thread, including papers/essays I have not yet read :-)
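The proof-of-work mechanism mentioned above is small enough to sketch; this toy version (the 8-byte nonce encoding and hex-prefix difficulty check are simplifications of Bitcoin's actual target arithmetic) shows the asymmetry that makes it work:

```python
import hashlib

def proof_of_work(block_data: bytes, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 digest of (data + nonce) starts
    with `difficulty` zero hex digits: finding one is costly,
    verifying it is a single hash."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = proof_of_work(b"prev_hash|transactions", difficulty=4)
```

Raising `difficulty` by one multiplies the expected search time by 16 in this hex-digit version, which is the knob a network can turn to keep block production steady.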

nikhizzle 1 day ago 3 replies      
Without a doubt.

Time, Clocks, and the Ordering of Events in a Distributed System. Leslie Lamport.


My first introduction to time scales as a partial ordering. Very mind opening.
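The paper's logical clocks are easy to sketch; this toy version (the class and method names are mine, not Lamport's notation) shows how the happened-before partial order is respected:

```python
class LamportClock:
    """Logical clock: increment on each local event or send; on receive,
    jump past the sender's timestamp so causality is never violated."""
    def __init__(self):
        self.time = 0

    def tick(self) -> int:
        self.time += 1
        return self.time

    def receive(self, msg_time: int) -> int:
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t_send = a.tick()           # a's send event
t_recv = b.receive(t_send)  # b's receive event always gets a later timestamp
```

Because a receive always exceeds the matching send, sorting events by timestamp gives a total order consistent with the causal partial order, which is the paper's key observation.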

0xf8 1 day ago 2 replies      
"A Mathematical Theory of Communication" - Claude E. Shannon


thristian 1 day ago 2 replies      
Out Of The Tarpit, by Moseley and Marks


The first half of the paper is a spot-on critique of so many things that go wrong in the process of designing and implementing large-scale software systems. The second half, where the authors propose a solution, kind of goes off the rails a bit into impracticality... but they definitely point in a promising direction, even if nobody ever uses their concrete suggestions.

akkartik 23 hours ago 1 reply      
Peter Naur, "Programming as theory building." (1985)

"...programming properly should be regarded as an activity by which the programmers form or achieve a certain kind of insight, a theory, of the matters at hand. This suggestion is in contrast to what appears to be a more common notion, that programming should be regarded as a production of a program and certain other texts."


KirinDave 21 hours ago 7 replies      
I've been trying to get it frontpaged because, despite its length, it's perhaps one of the most startling papers of this decade. Sadly, it seems like the HN voting gestalt hasn't decided to upvote a paper that's the CS equivalent of breaking the speed of light:

"Generic Top-down Discrimination for Sorting and Partitioning in Linear Time" ->


(if you're daunted by an 80 page paper as I am, there is also a talk on it: https://www.youtube.com/watch?v=sz9ZlZIRDAg)

It is possible, with some proper insight and approaches, to sort general data structures in linear time on modern computing hardware. The speed limit of sorting is O(n) with some extra constant cost (often accrued by allocation). It works by decomposing and generalizing something akin to radix sort, leveraging a composable pass of linear discriminators to do the work.
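The paper generalizes this far beyond integers, but the trick it builds on shows up already in plain LSD radix sort, which sorts n fixed-width keys in O(n) rather than O(n log n). This sketch handles non-negative 32-bit ints only, not the paper's generic discriminators:

```python
def radix_sort(xs, key_bytes=4):
    """LSD radix sort: one stable bucketing pass per byte, so total work
    is O(n * key_bytes) -- linear in n for fixed-width keys."""
    for shift in range(0, key_bytes * 8, 8):
        buckets = [[] for _ in range(256)]
        for x in xs:
            buckets[(x >> shift) & 0xFF].append(x)
        xs = [x for bucket in buckets for x in bucket]
    return xs
```

The comparison-sort lower bound of O(n log n) doesn't apply because nothing here ever compares two keys; it only inspects their representation, which is exactly the loophole the discrimination paper exploits for general data structures.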

There's a followup paper using this to make a very efficient in-memory database that one could easily generalize under something like Kademlia, and with care I suspect could make something like a better Spark core.


I keep submitting and talking about this but no one seems to pick up on it. This paper is crazy important and every runtime environment SHOULD be scrambling to get this entire approach well-integrated into their stdlib.

Unsurprisingly, Kmett has already implemented it in Haskell (it generalized neatly under the dual of the applicative+alternative functor): https://hackage.haskell.org/package/discrimination

flavio81 23 hours ago 2 replies      
Automated Distributed Execution of LLVM code using SQLJIT Compilation

As collected by the SIGBOVIK group:



"Following the popularity of MapReduce, a whole ecosystem of Apache Incubator Projects has emerged that all solve the same problem. Famous examples include Apache Hadoop, Apache Spark, Apache Pikachu, Apache Pig, German Spark and Apache Hive [1]. However, these have proven to be unusable because they require the user to write code in Java. Another solution to distributed programming has been proposed by Microsoft with their innovative Excel system. In large companies, distributed execution can be achieved using Microsoft Excel by having hundreds of people all sitting on their own machine working with Excel spreadsheets. These hundreds of people combined can easily do the work of a single database server."

PS: This thread is great; I'm bookmarking it because there are good (serious) papers here.

andars 22 hours ago 0 replies      
I'll take a broad interpretation of 'CS' and throw out a couple of personal highlights.

C. Shannon, "A Symbolic Analysis of Relay and Switching Circuits" (1940): https://dspace.mit.edu/bitstream/handle/1721.1/11173/3454142...

Shannon's master's thesis, which introduces boolean algebra to the field of digital circuit design.

R.W. Hamming, "Error Detecting and Error Correcting Codes" (1950): https://ia801903.us.archive.org/1/items/bstj29-2-147/bstj29-...

In Hamming's own words: "Damn it, if the machine can detect an error, why can't it locate the position of the error and correct it?"

J.T. Kajiya, "The Rendering Equation" (1986):http://cseweb.ucsd.edu/~ravir/274/15/papers/p143-kajiya.pdf

Kajiya introduces the integral rendering equation, which is the basis for most current techniques of physically based rendering.
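Hamming's "locate and correct" remark above is concrete enough to sketch. In a (7,4) Hamming code the syndrome (the XOR of the 1-indexed positions of all set bits) is exactly the position of a single flipped bit; this layout is the textbook presentation, not copied from the 1950 paper:

```python
def syndrome(bits):
    """XOR of the 1-indexed positions of all set bits; zero for a valid codeword."""
    s = 0
    for pos, bit in enumerate(bits, start=1):
        if bit:
            s ^= pos
    return s

def hamming_encode(d):
    """Place 4 data bits at positions 3, 5, 6, 7 and choose parity bits
    at positions 1, 2, 4 so the codeword's syndrome is zero."""
    bits = [0, 0, d[0], 0, d[1], d[2], d[3]]
    s = syndrome(bits)
    for p in (1, 2, 4):
        if s & p:
            bits[p - 1] = 1
    return bits

def hamming_correct(bits):
    """A nonzero syndrome names the flipped position; flip it back."""
    s = syndrome(bits)
    if s:
        bits[s - 1] ^= 1
    return bits
```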

jasode 1 day ago 2 replies      
"The Limits of Correctness" (1985) by Bryan Cantwell Smith: https://www.student.cs.uwaterloo.ca/~cs492/11public_html/p18...

I know Thompson's "Reflections on Trust" and Shannon's "Communication" papers are more famous but I believe BCS's "Correctness" paper has more immediate relevance to a wider population of programmers.

For example, I don't believe Ethereum's creator, Vitalik Buterin, is familiar with it because if he was, he would have realized that "code is law" is not possible and therefore he would have predicted the DAO hack and subsequent fork/reversal to undo the code.

Seriously, if you read BCS's paper and generalize its lessons, you will see the DAO hack and its reversal as inevitable.

gregors 1 day ago 2 replies      
"Reflections on Trusting Trust" - Ken Thompson


agentultra 23 hours ago 1 reply      
Most of my favourites have already been listed but one I found particularly interesting was Von Neumann's Theory of Self-Reproducing Automata [0].

[0] http://cba.mit.edu/events/03.11.ASE/docs/VonNeumann.pdf

tksfz 19 hours ago 0 replies      
Purely Functional Data Structures by Chris Okasaki - https://www.cs.cmu.edu/~rwh/theses/okasaki.pdf

Can programming be liberated from the von Neumann style? - John Backus's Turing lecture - http://dl.acm.org/citation.cfm?id=1283933

irfansharif 22 hours ago 0 replies      
Diego Ongaro's Raft paper[1]. Perhaps this only speaks to my experience as a student but having surveyed some of the other papers in the domain (paxos[2] in its many variants: generalized paxos[3], fpaxos[4], epaxos[5], qleases[6]), I'm glad the author expended the effort he did in making Raft as understandable (relatively) as it is.

[1]: https://raft.github.io/raft.pdf

[2]: https://www.microsoft.com/en-us/research/wp-content/uploads/...

[3]: https://www.microsoft.com/en-us/research/wp-content/uploads/...

[4]: https://www.microsoft.com/en-us/research/wp-content/uploads/...

[5]: https://www.cs.cmu.edu/~dga/papers/epaxos-sosp2013.pdf

[6]: https://www.cs.cmu.edu/~dga/papers/leases-socc2014.pdf

chadash 1 day ago 2 replies      
It might be a cliche one to pick, but I really really really enjoy Alan Turing's "Computing Machinery and Intelligence"[1]. This paper straddles the line between CS and philosophy, but I think it's an important read for anyone in either field. And a bonus is that it's very well-written and readable.

[1] https://www.csee.umbc.edu/courses/471/papers/turing.pdf

emidln 1 day ago 0 replies      
A bit cliche for HN, but I really enjoyed RECURSIVE FUNCTIONS OF SYMBOLIC EXPRESSIONS AND THEIR COMPUTATION BY MACHINE (Part I) by John McCarthy[0]. It was accessible to someone whose background at the time was not CS and convinced me of the beauty of CS -- and lisp.

[0] - http://www-formal.stanford.edu/jmc/recursive.html

twoodfin 23 hours ago 1 reply      
Cheating a little, but the collected Self papers are what I'd bring to a desert island:


brad0 1 day ago 1 reply      
Kademlia, a P2P distributed hash table. DHTs are very complex from the outside but very simple once you understand the building blocks.
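One of those building blocks is the XOR distance metric, which is simple enough to sketch (the 4-bit IDs here are toy values; real Kademlia uses 160-bit IDs and k-buckets):

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia's distance between node/key IDs: bitwise XOR. It is
    symmetric and obeys the triangle inequality, so lookups converge."""
    return a ^ b

def closest_nodes(target: int, node_ids, k: int = 2):
    """The core routing step: the k known nodes closest to the target."""
    return sorted(node_ids, key=lambda n: xor_distance(n, target))[:k]

peers = [0b1010, 0b1100, 0b0011, 0b1001]
nearest = closest_nodes(0b1000, peers)
```

Each lookup hop queries the closest known nodes for even closer ones, so the shared high-order bits grow by at least one per hop and a lookup finishes in O(log n) steps.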


btilly 20 hours ago 0 replies      
As We May Think: https://www.theatlantic.com/magazine/archive/1945/07/as-we-m...

This paper, written during WW II (!) by someone who had around 20 years of computing experience at that time (!!), introduced the world to ideas like hypertext and citation indexes. Google's PageRank algorithm can be seen as a recombining of ideas from this paper.

This is worth reading to see how much was understood how early.

CobrastanJorji 20 hours ago 0 replies      
Yao's minimax principle. It's not a very exciting read or a very exciting conclusion compared to some of these other papers, but it's still interesting, and the conclusion has been practically useful to me a small handful of times.

It concerns randomized algorithms, which are algorithms that try to overcome worst case performance by randomizing their behavior, so that a malicious user can't know which input will be the worst case input this time.

The principle states that the worst-case expected cost of any randomized algorithm is no better than the expected cost of the best deterministic algorithm against a worst-case distribution over the inputs.

Yao proves this is the case by constructing two zero sum games based around the algorithms' running times and then using game theory (specifically von Neumann's minimax theorem) to show that the two approaches are equivalent. It's a really neat approach!
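In symbols (my notation, not the paper's), with A ranging over deterministic algorithms, mu any distribution over inputs X, and R any randomized algorithm, the usual lower-bound form reads:

```latex
\min_{A \in \mathcal{A}} \; \mathbb{E}_{x \sim \mu}\!\left[\operatorname{cost}(A, x)\right]
\;\le\; \max_{x \in \mathcal{X}} \; \mathbb{E}\!\left[\operatorname{cost}(R, x)\right]
```

So exhibiting one input distribution that is hard for every deterministic algorithm immediately lower-bounds the worst-case expected cost of every randomized algorithm.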

dkamm 1 hour ago 0 replies      
"On non-computable functions" - Tibor Rado.

Proof that the busy beaver function is not computable.


dvirsky 22 hours ago 1 reply      
The Anatomy of a Large-Scale Hypertextual Web Search Engine, by Brin and Page.

Not only for the historical value of changing the world, and for the fact that it's very interesting and readable; it also has personal value to me: it was the first CS paper I ever read, and it inspired me and changed the course of my life, literally.

Also, it has some very amusingly naive (in hindsight) stuff in it, like: "Google does not have any optimizations such as query caching, subindices on common terms, and other common optimizations. We intend to speed up Google considerably through distribution and hardware, software, and algorithmic improvements. Our target is to be able to handle several hundred queries per second"
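The paper's central idea, PageRank, fits in a toy power iteration (the three-page graph and parameter values below are made up for illustration, and this ignores dangling pages):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank: each page's rank is a damped sum of the
    ranks of pages linking to it, split evenly over their out-links."""
    n = len(links)
    rank = {p: 1.0 / n for p in links}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in links}
        for page, outs in links.items():
            for out in outs:
                new_rank[out] += damping * rank[page] / len(outs)
        rank = new_rank
    return rank

graph = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank(graph)  # "a", with the most inbound rank, scores highest
```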


zzzcpan 21 hours ago 1 reply      
Worth mentioning Joe Armstrong's "Making reliable distributed systems in the presence of software errors" [1].

[1] http://erlang.org/download/armstrong_thesis_2003.pdf

romaniv 23 hours ago 0 replies      
I don't have a favorite research paper, but there is a long Ph.D. thesis I've recently read in its entirety and found a lot of interesting ideas:

Programming with Agents: http://alumni.media.mit.edu/~mt/thesis/mt-thesis-Contents.ht...

Here is a short paper with a clear description of an ingenious idea.

Engineered Robustness by Controlled Hallucination: http://web.mit.edu/jakebeal/www/Publications/NIAI-2008.pdf

I like the simplicity of it. Most CS researchers seem to be afraid of describing things that are simple, even if those things are non-obvious and valuable.

vaibhavsagar 22 hours ago 1 reply      
There are a ton of fantastic Haskell papers, but if I had to pick one this would be it. It reconciles the pure and lazy functional nature of Haskell with the strict and often messy demands of the real world:

State in Haskell. John Launchbury and Simon L. Peyton Jones


megahz 1 hour ago 0 replies      
Smashing the Stack for Fun and Profit: http://insecure.org/stf/smashstack.html
filereaper 20 hours ago 0 replies      
Some old ones:

Jeffrey Ullman & John Hopcroft: Formal languages and their relation to automata [0]

Ted Codd: A relational model of data for large shared data banks [1]

C.A.R Hoare: Communicating Sequential Processes [2]




microbie 1 day ago 0 replies      
Dijkstra's shortest path algorithm in "A Note on Two Problems in Connexion with Graphs": http://www-m3.ma.tum.de/foswiki/pub/MN0506/WebHome/dijkstra....

Pure and mathematical, with a great impact both on how to prove and define algorithms and on the problem itself.
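The algorithm itself is short. Here is a sketch with a binary heap (the paper predates heaps, so this is the modern formulation, and the example graph is made up):

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's shortest paths: repeatedly settle the unvisited node
    with the smallest tentative distance. `graph` maps each node to a
    list of (neighbor, weight) pairs with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
dijkstra(g, "a")  # {'a': 0, 'b': 1, 'c': 3}
```

The correctness argument is the part the paper makes elegant: because weights are non-negative, the node popped with the smallest distance can never be improved later.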
gens 21 hours ago 0 replies      
"Communicating Sequential Processes" by Tony Hoare


I read it multiple times and still don't quite understand it all.

There are more great papers I've read, but this one comes back to mind more often than the others.

1001101 1 day ago 0 replies      
New Directions in Cryptography - Diffie + Hellman


larkeith 22 hours ago 2 replies      
The Night Watch by James Mickens is always a good read:


ChuckMcM 22 hours ago 0 replies      
I've always enjoyed Finseth's thesis on text editing, "A Cookbook for an EMACS", which he turned into a book: https://www.finseth.com/craft/ (available in epub form for free).
mayank 22 hours ago 0 replies      
The Flajolet-Martin paper on counting unique items in an infinite stream with constant space [1]: a great, well-written introduction to streaming algorithms that triggered my first "aha" moment in the field. You never forget your first.

[1] http://algo.inria.fr/flajolet/Publications/FlMa85.pdf
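The trick is small enough to sketch; this is the single-hash variant (the 0.77351 correction factor is from the paper, while the SHA-256 hash choice is my own stand-in):

```python
import hashlib

def trailing_zeros(n: int, width: int = 32) -> int:
    """Number of trailing zero bits of n (width if n is 0)."""
    return (n & -n).bit_length() - 1 if n else width

def fm_estimate(stream) -> float:
    """Flajolet-Martin sketch: hash each item, remember only the largest
    trailing-zero count R seen, and estimate the number of distinct
    items as 2^R / 0.77351. Memory is one integer, however long the
    stream is."""
    r = 0
    for item in stream:
        h = int.from_bytes(hashlib.sha256(str(item).encode()).digest()[:4], "big")
        r = max(r, trailing_zeros(h))
    return 2 ** r / 0.77351
```

Duplicates cannot inflate the estimate, since repeated items hash identically; in practice many independent sketches are combined to cut the variance.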

wsxiaoys 1 day ago 1 reply      
Cheney on the MTA: http://home.pipeline.com/~hbaker1/CheneyMTA.html

A full tail-recursive Scheme implementation that works by never using "return" in C.

atilimcetin 21 hours ago 0 replies      
Not a single paper but Eric Veach's Ph.D. thesis 'Robust Monte Carlo Methods for Light Transport Simulation' - http://graphics.stanford.edu/papers/veach_thesis/
coherentpony 21 hours ago 1 reply      
An Algorithm for the Machine Calculation of Complex Fourier Series

James W. Cooley and John W. Tukey

Mathematics of Computation, Vol. 19, No. 90 (Apr. 1965), pp. 297-301


nrjames 23 hours ago 1 reply      
Mine is "Image Quilting for Texture Synthesis and Transfer" by Efros and Freeman. It's simple enough to implement as a personal project and has some nice visual output. Plus, Wang tiles are cool and it's fun to learn more about them.


random_comment 23 hours ago 0 replies      
Depixelizing Pixel Art


I think this paper is very cute and also technically interesting.

taeric 21 hours ago 0 replies      
"Dancing Links" by Knuth (http://www-cs-faculty.stanford.edu/~knuth/papers/dancing-col...) is still one of my favorites for my actually having understood it. :) I wish I had found it back in grade school. (Though I suspect I wouldn't have understood it, then.)
protomyth 16 hours ago 0 replies      
I would say "Agent-Oriented Programming" by Yoav Shoham. It certainly set my mind going and made me think about how programs could be organized. I still think agents, systems of agents, and mobile agent code have a place in computing. Even though some form of RPC over HTTP won over mobile code, I look at the spinning up of VMs and cannot help but think that agents have a place. Combined with the tuple space stuff from Yale, I still see a powerful way to go forward.

1) 1990 http://cife.stanford.edu/node/599

2) 1993 http://faculty.cs.tamu.edu/ioerger/cs631-fall05/AOP.pdf

arde 21 hours ago 1 reply      
All of the classic papers I can think of have already been mentioned, but even though it's too recent to pass judgment a new contender may well be "Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design" - https://arxiv.org/abs/1704.01552
bluedino 23 hours ago 0 replies      
"A Method for the Construction of Minimum-Redundancy Codes"


I'm not sure if it was the fact that I was just a kid when I read it, but it was just so obvious and simple but so complicated and amazing at the same time.
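That paper is Huffman's, and the construction is short enough to sketch. A minimal Python version (assumes single-character symbols; tie-breaking is arbitrary, so the exact codes vary, but the code lengths are optimal):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Minimum-redundancy (Huffman) codes for the symbols of `text`."""
    # Each heap entry: (weight, tiebreak index, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        # Merge the two lightest subtrees; prefix their codes with 0 and 1.
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
print(codes)  # the most frequent symbol, 'a', gets the shortest code
```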

bootsz 23 hours ago 0 replies      
lawn 23 hours ago 0 replies      
Bitcoin: A Peer-to-Peer Electronic Cash System


akkartik 23 hours ago 0 replies      
Hofstadter, D. R. and Mitchell, M. (1995). "The Copycat project: A model of mental fluidity and analogy-making." Chapter 5 in D. R. Hofstadter, Fluid Concepts and Creative Analogies.


"Copycat is a model of analogy making and human cognition based on the concept of the parallel terraced scan, developed in 1988 by Douglas Hofstadter, Melanie Mitchell, and others at the Center for Research on Concepts and Cognition, Indiana University Bloomington. Copycat produces answers to such problems as "abc is to abd as ijk is to what?" (abc:abd :: ijk:?). Hofstadter and Mitchell consider analogy making as the core of high-level cognition, or high-level perception, as Hofstadter calls it, basic to recognition and categorization. High-level perception emerges from the spreading activity of many independent processes, called codelets, running in parallel, competing or cooperating. They create and destroy temporary perceptual constructs, probabilistically trying out variations to eventually produce an answer. The codelets rely on an associative network, slipnet, built on pre-programmed concepts and their associations (a long-term memory). The changing activation levels of the concepts make a conceptual overlap with neighboring concepts." -- https://en.wikipedia.org/wiki/Copycat_(software)


otakucode 23 hours ago 0 replies      
Admittedly a good portion of my appreciation is due to the title alone, but the paper itself is very good as well:

'The Geometry of Innocent Flesh on the Bone: Return-into-libc without function calls' by Hovav Shacham


p4bl0 15 hours ago 0 replies      
It's not exactly a paper, but I really liked "The Limits of Mathematics" by Chaitin. I wrote a blogpost about it a few years back (https://shebang.ws/the-limits-of-mathematics.html); I've already submitted it to HN (https://news.ycombinator.com/item?id=1725936).
xixixao 1 day ago 1 reply      
Notation as a Tool of Thought, Kenneth E. Iverson


uvatbc 9 hours ago 0 replies      
One of my all time favorites has been the Jeffrey Mogul paper on receive livelock: https://pdos.csail.mit.edu/6.828/2008/readings/mogul96usenix...

I read it first as a normal CS paper, but later started seeing it as a commentary on an extremely busy work life.

Right there in the first paragraph: "... receive livelock, in which the system spends all its time processing interrupts, to the exclusion of other tasks..."

Does this remind you of anything?

kwindla 14 hours ago 0 replies      
Alexia Massalin's 1992 PhD thesis describing the Synthesis Operating System.

Here's Valerie Aurora's description of Synthesis:

... a completely lock-free operating system optimized using run-time code generation, written from scratch in assembly running on a homemade two-CPU SMP with a two-word compare-and-swap instruction... you know, nothing fancy.

Which (necessarily) undersells by a very large margin just how impressive, innovative, and interesting this thesis is.

If you're interested in operating systems, or compilers, or concurrency, or data structures, or real-time programming, or benchmarking, or optimization, you should read this thesis. Twenty-five years after it was published, it still provides a wealth of general inspiration and specific food for thought. It's also clearly and elegantly written. And, as a final bonus, it's a snapshot from an era in which Sony made workstations and shipped its own, proprietary, version of Unix. Good times.

beefman 22 hours ago 0 replies      
Backus - A Functional Style and Its Algebra of Programs


Phithagoras 1 day ago 0 replies      
Not exactly CS, but the Unreasonable Effectiveness of Mathematics in the Natural Sciences is one of my favourites.
mdhughes 23 hours ago 0 replies      
mooneater 21 hours ago 0 replies      
"On the criteria to be used in decomposing systems into modules" by David Parnas, 1972, the seminal paper where he brings forward the key ideas that would later be called cohesion and coupling.


Why it was important: you can't build big complex systems without these principles.

Some people say he was instrumental in stopping the Star Wars program; he argued it would be impossible to test outside of war (and therefore doomed).

ratsimihah 17 hours ago 0 replies      
Deepmind's first paper on deep reinforcement learning. The beginning of a new era towards AGI : )

Human-level control through deep reinforcement learning: http://www.nature.com/nature/journal/v518/n7540/full/nature1...

kwisatzh 23 hours ago 1 reply      
How to share a secret by Adi Shamir. Simple, elegant, short and highly impactful.
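The scheme really is short. A toy Python sketch over a prime field (illustrative only; a real implementation would need a cryptographic RNG and constant-time arithmetic):

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is over the field GF(P)

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

shares = make_shares(42, k=3, n=5)
print(reconstruct(shares[:3]))  # prints 42; any 3 of the 5 shares work
```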
Rickasaurus 15 hours ago 0 replies      
"NP-complete Problems and Physical Reality" by Scott Aaronson. It relates NP-complete problems to examples in nature. Excellent paper and a fun read.


GeorgeTirebiter 8 hours ago 0 replies      
"Hints for Computer System Design" by Butler Lampson


Frogolocalypse 1 hour ago 0 replies      
The bitcoin whitepaper.


dansto 8 hours ago 0 replies      
PCP theorem as explained by Bernard Chazelle, 2001: https://www.cs.princeton.edu/~chazelle/pubs/bourbaki.pdf

Great writing style!

vsrinivas 19 hours ago 0 replies      
From the perspective of - 'take a fresh look at something we take for granted' - "A Preliminary Architecture for a Basic Data-Flow Processor" (Dennis & Misunas 1975)

Focusing on the flow of data between operators, rather than greedily executing a linear program, is what an out-of-order processor does.

pradn 1 day ago 0 replies      
My favorite paper in computer systems is "Memory Resource Management in VMware ESX Server". It identifies a problem and devises several clever solutions to it. I love papers that make you go "AHA!".


nonsince 23 hours ago 0 replies      
Type Systems as Macros


It's not world-changing or even particularly novel, but it's such a simple concept, explained so well, that it really changes how you see the typed/dynamic language divide, as well as language design in general.

baddox 17 hours ago 0 replies      
Scott Aaronson's "Why Philosophers Should Care About Computational Complexity"


codelord 12 hours ago 0 replies      
ImageNet Classification with Deep Convolutional Neural Networks: https://papers.nips.cc/paper/4824-imagenet-classification-wi... If not evident already, time will tell that this paper brought us to a new era.
acoravos 21 hours ago 0 replies      
"The Moral Character of Cryptographic Work" by Phillip Rogaway

Background: http://web.cs.ucdavis.edu/~rogaway/papers/moral.html

Paper: web.cs.ucdavis.edu/~rogaway/papers/moral-fn.pdf

jhpriestley 23 hours ago 1 reply      
The Scheme papers are great: http://library.readscheme.org/page1.html

"On the Translation of Languages from Left to Right", by Knuth, I found much clearer and more illuminating than any of the secondary literature on LR(k) parsing.

cjbprime 11 hours ago 0 replies      
I'll go with an unconventional choice: Michael Kleber's _The Best Card Trick_: http://www.northeastern.edu/seigen/11Magic/Articles/Best%20C...
efferifick 1 day ago 2 replies      
Producing Wrong Data without Doing Anything Obviously Wrong.

Immediately useful for anyone measuring the performance of compiler transformations!

jules 23 hours ago 0 replies      
A play on regular expressions: https://sebfisch.github.io/haskell-regexp/regexp-play.pdf

This paper explains a beautiful algorithm for matching regular expressions with a Socratic dialogue.

mrlyc 17 hours ago 0 replies      
My favourite is "Targeting Safety-Related Errors During Software Requirements Analysis" by Robyn Lutz at the Jet Propulsion Laboratory. It's available at https://trs.jpl.nasa.gov/bitstream/handle/2014/35179/93-0749...

The article provides a safety checklist for use during the analysis of software requirements for spacecraft and other safety-critical, embedded systems.

chowells 18 hours ago 0 replies      
The Essence of the Iterator Pattern, by Gibbons and Oliveira.

This paper develops a precise model for internal iteration of a data structure, such that exactly the necessary information is exposed and no more.

It's a fantastic exploration of improving a well-known design space with justified removal of details. I keep its lessons in mind whenever I am facing code that seems to have a lot of incidental complexity.


Jtsummers 1 day ago 1 reply      
Not a paper, and not strictly CS, but Mythical Man-Month by Brooks. It solidified the connection in my mind between systems engineering and software engineering. Other readings since then have extended and changed this understanding, but this is where my approach to software development started to mature.
sova 21 hours ago 0 replies      
"Collaborative creation of communal hierarchical taxonomies in social tagging systems"


coldcode 20 hours ago 0 replies      
Royce 1970, of course: http://www.cs.umd.edu/class/spring2003/cmsc838p/Process/wate... wherein he did not introduce Waterfall, but for some reason the negative aspects of his article became the basis for Waterfall. The article from 1970 is surprisingly relevant, although archaic in language. It's worth reading to the end. He wrote it describing leading teams in the 1960s doing what I assume was actual "rocket" science.
wlesieutre 22 hours ago 1 reply      
"Interactive Indirect Illumination Using Voxel Cone Tracing" by Crassin et al.

As an architectural lighting guy, seeing realtime global illumination look this good in a game engine was fantastic. Parts of the algorithm I can understand, parts go over my head still, but the results are amazing.

A big part of what I do at work is radiosity simulations in AGI32, which is of course more accurate (because it's trying to accurately simulate real-world lighting results) but much, much slower.


surement 15 hours ago 0 replies      
The splay trees paper by Sleator and Tarjan (1985) https://www.cs.cmu.edu/~sleator/papers/self-adjusting.pdf

It's just such a cool result and the paper is very well written. Further, the dynamic optimality conjecture at the end is still an open problem.

jonbaer 16 hours ago 0 replies      
Anything dealing w/ "reversible computing", makes you ask "what-if" ...



tjr 1 day ago 1 reply      
Growing a Language
505 17 hours ago 0 replies      
I see some of my favourites among other replies. I don't think I see these:



phamilton 15 hours ago 0 replies      
The Dynamo Paper. http://www.allthingsdistributed.com/files/amazon-dynamo-sosp...

One of the best practical "How can this improve our business?" technical papers, and an excellent introduction to reading papers.

alok-g 1 day ago 0 replies      
Automated Theorem Proving, David Plaisted


kageneko 19 hours ago 0 replies      
Oh man... I don't know. There's so many.

I'll need to go with

Gilbert, E., & Karahalios, K. (2009, April). Predicting tie strength with social media. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 211-220). ACM.

In grad school, it was the paper that kept on giving. I think I cited it every semester for a paper or project. There's a lot of other papers and books that really inspired me, but this one was magic.

haihaibye 14 hours ago 0 replies      
Why it is Important that Software Projects Fail


whataretensors 22 hours ago 0 replies      
The original GAN paper was pretty big for me. https://arxiv.org/pdf/1406.2661.pdf
lukego 1 day ago 0 replies      
0xf8 23 hours ago 0 replies      
The Applications of Probability to Cryptography - Alan M. Turing


Vervious 21 hours ago 0 replies      
Paxos made simple. It's a very beautiful paper.
djhworld 21 hours ago 0 replies      
Not hardcore CS as some of the other papers on here, but I really enjoyed the BigTable paper https://static.googleusercontent.com/media/research.google.c...
donquichotte 1 day ago 0 replies      
Not CS, but control theory: "Guaranteed Margins for LQG Regulators" by John C. Doyle. The abstract is just three words: "There are none."


ddebernardy 19 hours ago 0 replies      
Knuth vs Email:


It's not technically a CS paper, but well worth the (very short) read regardless.

totalZero 19 hours ago 0 replies      
Intelligence without representation, by Rodney Brooks.


Hernanpm 23 hours ago 0 replies      
I still find this interesting, if you are familiar with Dijkstra's algorithm.

Finding the k Shortest Paths by D. Eppstein: https://www.ics.uci.edu/~eppstein/pubs/Epp-SJC-98.pdf

damontal 20 hours ago 0 replies      
The report on the Therac-25. A good warning that bugs can have very real consequences.


zachsnow 22 hours ago 1 reply      
Olin Shivers's work on various control flow analyses, in particular the paper "CFA2: a context-free approach to control-flow analysis", is a really cool static analysis via abstract interpretation. Matt Might had a bunch of papers in a similar vein.
hatred 20 hours ago 0 replies      
The Byzantine Generals Problem by Lamport et al is a must read for anyone interested in distributed systems.

Some of the others that have already been mentioned on this thread:

- Time, Clocks, and the Ordering of Events in a Distributed System

- Paxos Made Simple

chajath 16 hours ago 0 replies      
di4na 21 hours ago 0 replies      
"Programming with Abstract Data Types", B Liskov and S Zilles
johnny_1010 8 hours ago 0 replies      
Communicating Sequential Processes, C. A. R. Hoare: http://www.usingcsp.com/cspbook.pdf
wwarner 23 hours ago 0 replies      
morphle 22 hours ago 0 replies      
Scalability of Collaborative Environments: https://sci-hub.ac/10.1109/C5.2006.32#
notaharvardmba 23 hours ago 1 reply      
Andrew Tridgell's PhD Thesis: https://www.samba.org/~tridge/phd_thesis.pdf

It documents the invention of rsync. It's a good read.

nayuki 23 hours ago 0 replies      
"Bitcoin: A Peer-to-Peer Electronic Cash System" https://bitcoin.org/bitcoin.pdf
throwaway1e100 1 day ago 0 replies      
Real Programmers Don't Use Pascal
The Rise of Worse Is Better
AnimalMuppet 22 hours ago 0 replies      
Why Pascal Is Not My Favorite Programming Language, by Brian Kernighan http://www.cs.virginia.edu/~cs655/readings/bwk-on-pascal.htm...

No Silver Bullet, by Fred Brooks http://worrydream.com/refs/Brooks-NoSilverBullet.pdf

The original STL documentation https://www.sgi.com/tech/stl/table_of_contents.html

gnaritas 23 hours ago 0 replies      
Not a paper, but one of my favorite talks


Growing a Language by Guy Steele (co-inventor of Scheme). A brilliant speech about how to grow languages and why it's necessary. Languages that can be grown by the programmer, like Lisp or Smalltalk, are better than languages that are fixed, like most others; this is why.

megamindbrian 1 day ago 0 replies      
My favorite topic was from an advanced user interfaces class: describe 3 examples of a bad user experience where the input into the system does not give you the expected output. My poor example was a Kleenex box: I try to pull out one Kleenex and it tears, or two come out at a time.
probinso 16 hours ago 0 replies      
Relevance Vector Machines by Tipping


Homomorphic Encryption over the Integers

kruhft 15 hours ago 0 replies      
On Formally Undecidable Propositions... by Kurt Godel.

One might argue this is not CS, but it's something everyone should read and understand.

kendallpark 19 hours ago 0 replies      
I'm surprised no one has mentioned "The Cathedral and the Bazaar" yet. Admittedly it's more essay, less paper.


iOS 11 Safari will automatically strip AMP links from shared URLs twitter.com
528 points by OberstKrueger  1 day ago   417 comments top 2
cramforce 1 day ago 23 replies      
TL of AMP here. Just wanted to clarify that we specifically requested Apple (and other browser vendors) to do this. AMP's policy states that platforms should share the canonical URL of an article whenever technically possible. This browser change makes it technically possible in Safari. We cannot wait for other vendors to implement.

It appears Safari implemented a special case. We'd prefer a more generic solution where browsers would share the canonical link by default, but this works for us.

millstone 1 day ago 4 replies      
I hope the next step is a way to strip AMP links from all URLs, backfilling the "Disable AMP" setting that Google ought to have provided.

AMP has always worked poorly on iOS: it has different scrolling, it breaks reader mode, and it breaks status bar autohide and jump-to-top. Perhaps Apple would be less hostile to AMP if the implementation were better.

A Thorium-Salt Reactor Has Fired Up for the First Time in Four Decades thoriumenergyworld.com
470 points by jseliger  1 day ago   159 comments top 22
philipkglass 1 day ago 2 replies      
Better explanation here (linked from Technology Review): http://www.thoriumenergyworld.com/news/finally-worlds-first-...

More details on the experiment sequence: https://public.ornl.gov/conferences/MSR2016/docs/Presentatio...

This is not actually a reactor test because the thorium-bearing salt does not attain criticality. It's a sequence of materials tests using thorium-containing salt mixtures in small crucibles inside the conventionally fueled High Flux Reactor (https://ec.europa.eu/jrc/en/research-facility/high-flux-reac...).

The experiments rely on neutrons from the High Flux Reactor to induce nuclear reactions in the thorium-bearing salt mixtures. However, the experiments will be useful in validating materials behavior for possible future molten salt reactors because it combines realistic thermal, chemical, and radiation stresses.

Sukotto 1 day ago 8 replies      
I think we're making a serious PR mistake calling these "Thorium Reactors" even though the term is accurate.

"Reactor" evokes "Nuclear Reactor". For many people, "nuclear reactor" is a deeply loaded term. Likewise "Thorium" (and other words that end in "-ium") sounds dangerously like "plutonium" and "uranium".

It doesn't matter how much better/safer this technology is. Don't expect the public to respond positively when we use those words. There's too much knee jerk, "no nukes!" baggage.

We should start calling these "salt power stations" or something else accurate, yet non-threatening. Otherwise, IMHO, it will be a steep uphill battle getting public and legislative support for building these things, regardless of their many benefits.

ChuckMcM 1 day ago 1 reply      
A couple of comments:

First it is really awesome to see actual research experiments being done on the materials. This is a critical first step in understanding the underlying complexity of the problems and as the article points out it is really helpful to have a regulatory agency that is open to trying new things.

The second is this isn't a 'Thorium-Salt Reactor' it is 'parts that would go into parts that would make up such a reactor if the experiments indicate they will work.' A much less clickbaitey headline but such is 21st century journalism.

PaulHoule 1 day ago 4 replies      
I am surprised they are using stainless steel instead of Hastelloy-N


The Hastelloy family of super alloys is basically stainless steel without the steel and was proven in the Oak Ridge MSR experiment.

velodrome 1 day ago 1 reply      
This technology, if viable, could help solve our current nuclear waste problem. Valuable materials could be recycled (by separation) for additional use.



skybrian 1 day ago 0 replies      
This was apparently at the High Flux Reactor in Petten, Netherlands.


dabockster 1 day ago 6 replies      
> charged particles traveling faster than the speed of light in water

What did I just read?

jhallenworld 1 day ago 1 reply      
So there was a meltdown at a liquid sodium cooled reactor due to a materials problem:


I don't see a pump seal test in this experiment... does anyone know if a solution to the SRE meltdown problem is known at this point? Perhaps the LFT chemistry would not have the issue.

bhhaskin 1 day ago 0 replies      
Really happy to finally see some movement with Thorium. It might not be the magic silver bullet that some people hype it up to be, but it needs to be explored.
tfy11aro 1 day ago 1 reply      
acidburnNSA 1 day ago 0 replies      
Glad to see some thorium-bearing salt being irradiated in a conventionally-fueled test reactor. That's a big step to getting back on the road to fluid-fueled reactors.

Here are some reminders for everyone on the technical info about Thorium. First of all, Thorium is found in nature as a single isotope, Th-232, which is fertile like Uranium-238 (not fissile like U-235 or Plutonium-239). This means that you have to irradiate it first (using conventional fuel). Th-232 absorbs a neutron and becomes Protactinium-233, which naturally decays to Uranium-233, a fissile nuclide and good nuclear fuel. This is called breeding. Thorium is unique in that it can breed more fuel than it consumes using slow neutrons, whereas the Uranium-Plutonium breeder cycles require fast neutrons (which in turn require highly radiation-resistant materials, higher fissile inventory, and moderately exotic coolants like sodium metal or high-pressure gas). Any kind of breeder reactor (Th-U or U-Pu) can provide world-scale energy for hundreds of thousands of years using known resources and billions of years using uranium dissolved in seawater (not yet economical).

Great, so Thorium can do thermal breeding, so what? Well to actually breed in slow neutrons, you have to continuously remove neutron-absorbing fission products as they're created (lest they spoil the chain reaction), so you really can only do this with fluid fuel. This leads to an interesting reactor design called the Molten Salt Reactor (MSR). Fun facts about this kind of reactor are that it can run at high temperatures (for process heat/thermal efficiency), can run continuously (high capacity factor), is passively safe (can shut down and cool itself without external power or human intervention in accident scenarios), and doesn't require precision fuel fabrication. Downsides are that the radionuclides (including radioactive volatiles) are not contained in little pins and cans like in solid fueled reactors so you get radiation all over your pumps, your heat exchangers, and your reactor vessel. This is a solvable radiological containment issue (use good seals and double-walled vessels) but is a challenge (the MSRE in the 1960s lost almost half of its iodine; no one knows where it went!!)

U-Pu fuel can work in MSRs as well, getting those nice safety benefits, but it can't breed unless you have fast neutrons.

People on the internet may tell you that Thorium can't be used to make bombs and that it's extremely cheap, etc. These are not necessarily true. You can make bombs with a Th-U fuel cycle (just separate the Pa-233 before it decays), and nuclear system costs are unknown until you build and operate a few. There are reasons to hope it could be cheaper due to simplicity, but there are major additional complexities over traditional plants or other advanced reactors in the chemistry department that add a lot of uncertainty. Fluid fueled reactors are probably ~100x or more safer than traditional water-cooled reactors, on par with sodium-cooled fast reactors and other Gen-IV concepts with passive decay heat removal capabilities.
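To put a rough number on the Pa-233 step described above: Pa-233 decays to U-233 with a half-life of about 27 days (an assumed figure, not stated in the comment), so the fraction converted after a given holding time follows simple exponential decay:

```python
# Assumed figure (not from the comment above): Pa-233 half-life ~27 days.
PA233_HALF_LIFE_DAYS = 27.0

def fraction_decayed(days, half_life=PA233_HALF_LIFE_DAYS):
    """Fraction of an initial Pa-233 inventory that has decayed to U-233
    after `days` of sitting outside the neutron flux."""
    return 1.0 - 0.5 ** (days / half_life)

print(fraction_decayed(27.0))  # 0.5: one half-life converts half the inventory
```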

zython 1 day ago 2 replies      
I was under the impression that thorium-salt reactors had been tried in the past and not deemed worthwhile from a security and profitability point of view.

What has changed about that ?

xupybd 1 day ago 1 reply      
"The inside of the Petten test reactor where the thorium salt is being tested is shining due to charged particles traveling faster than the speed of light in water."

What? I thought that wasn't possible. Or is this just the speed of light in water, so the particles are still moving slower than the speed of light in a vacuum?

tim333 1 day ago 1 reply      
Glad to see they are resuming research even though there remain problems with it as a commercial technology.
nate908 1 day ago 4 replies      
What's up with this image caption?

"The inside of the Petten test reactor where the thorium salt is being tested is shining due to charged particles traveling faster than the speed of light in water."

As I understand it, nothing travels faster than the speed of light. The author is mistaken, right?

novalis78 1 day ago 0 replies      
Great that they are mentioning ThorCon's project in Indonesia. Too bad that they had to leave the US after trying really hard to find a way to build it here.
zmix 1 day ago 1 reply      
As far as I know the Chinese are also putting much effort into this type of reactor.
ece 1 day ago 2 replies      
India has the most thorium reserves, according to USGS: https://en.wikipedia.org/wiki/Occurrence_of_thorium

And they have had the plans and motivation to build domestic reactors for the past two decades: https://en.wikipedia.org/wiki/India%27s_three-stage_nuclear_...

NSG membership keeps getting held up by someone or the other and would provide more energy security for India.


genzoman 1 day ago 1 reply      
very excited about this tech, but i think it will be regulated to death.
MentallyRetired 1 day ago 0 replies      
How'd you like to be the guy pressing the button for 40 years?
unlmtd1 1 day ago 0 replies      
I have a better idea: horsecarts and sailships.
SubiculumCode 1 day ago 2 replies      
Does anyone have insight on this, which I read years ago: http://www.popularmechanics.com/science/energy/a11907/is-the...

I worry that if Thorium reactors become very, very common because they are thought to be very safe (e.g. behind-your-house common, as some have bragged), but they turn out to be dangerous... we will have a real problem.

Right to Privacy a Fundamental Right, Says Indian Supreme Court thewire.in
511 points by sandGorgon  1 day ago   199 comments top 37
dalbasal 1 day ago 6 replies      
A lot of conversations about expanding rights end up with a distinction between positive and negative rights. A right to water is different from free speech. For everyone to have a right to water, someone needs to provide the water or be held accountable for not providing it. A default culprit (the state) needs to be designated.

Privacy, though, is like speech, equality before the law, or the presumption of innocence: you have it by default, and if it's denied there is a culprit.

Even in this case, we seem to have a hard time expanding rights. I say expanding, but privacy is formally a right in many places. But the interpretation of that right is very weak.

Anyway, if we're to make privacy a right with serious intent, then there needs to be a willingness to break eggs and bear a cost. The rights to free speech, conscience, affiliation, assembly and other political freedoms mean we need to tolerate and protect the proverbial nazi's right to try and spread his politics. A bitter pill for anyone scared of a proverbial nazi takeover.

Are we willing to bear the (mostly fear-based) costs in the fight against the terrorism demon? The economic costs that will be claimed by companies relying on data? If we have a strong yes, I think we can start building the real framework of laws and conventions that will secure a right to privacy for the next few generations.

Absolute principles require some sacrifice.

msravi 1 day ago 1 reply      
A word of caution before being carried away by the words "fundamental right". According to the judgement, the right to privacy is fundamental (as an offshoot of the fundamental rights of freedom of life and personal liberty guaranteed under Article 21 of the Indian Constitution).

Asserting it as a "fundamental right" raises the bar on what restrictions can be put. But reasonable restrictions can still be put.

A separate bench of the SC is now going to test the validity of the Aadhaar Act on the basis of whether the restrictions are reasonable in the light of privacy being a fundamental right.

Edit: The Aadhaar Act is an act that allows for the government to collect biometric and other personal data that can be used to identify an individual for various services (including but not limited to governmental benefits).

ruytlm 1 day ago 2 replies      
As an aside, this is a decent example of why some people oppose bills of rights. As I understand it, the argument is that a bill of rights is considered exhaustive; if it's not in the bill, it's not protected.

By contrast, countries without a bill of rights are free to interpret their constitutions through implied rights, in ways that make sense in the context and allow the constitution to adapt to new or developing circumstances.

See for example, https://en.wikipedia.org/wiki/Australian_constitutional_law#...

Not entirely sure which side of that argument I'd sit on.

whack 1 day ago 2 replies      
"The petitioners, former Karnataka high court judge Justice K.S. Puttaswamy and others, had contended that the biometric data and iris scan that was being collected for issuing Aadhaar cards violated the citizens' fundamental right to privacy"

I find the above interpretation of privacy troubling. In order for the government to effectively distribute social services to the needy, cutting out corrupt/inefficient middlemen, it needs a way of effectively verifying someone's identity, so that people can't dual-enroll themselves. Having people provide biometric identifiers such as iris scans, if they want to qualify for government services, seems like a perfectly reasonable way to do this. I would also contend that when most people declare the importance of privacy, they are talking about their actions and lifestyle, not fingerprints or iris scans. It would be sad if this ruling prevented the government from efficiently providing social services to help the poor.

jawns 1 day ago 5 replies      
I was a bit surprised by the fact that all of the judges' opinions, which are embedded in the article, were in English.

But it turns out that English is the official language of the Indian Supreme Court, and the court has even gone so far as to rule that it need not provide a Hindi translation of its rulings:


It seems kind of bizarre to me that as an American English speaker half way across the world, I'm in a better position to read and comprehend the Indian Supreme Court's rulings than a great number of non-English-speaking Indians.

bhhaskin 1 day ago 1 reply      
As it should be. The world is quickly approaching a very Orwellian future unless we take steps to stop it. The right to privacy should be right up there with the right to free speech.
sreeni_ananth 1 day ago 0 replies      
The Indian government has been mandating that all utility providers link subscribers' biometric details to their accounts. With the rising number of criminal cases due to misuse of laws, such as those used in marital disputes, the government could easily control which services to provide or deny its citizens based on a centrally available biometric database, which it could not have done before. Of course this is just a crazy theory for now, one which hopefully can never happen thanks to this much-needed judgement.
Abishek_Muthian 1 day ago 0 replies      
To HN readers outside India, who are unaware of the background behind this historic judgement: it all started off with a series of litigations against the govt's mandate to link India's unique identity system - Aadhaar (which includes biometric data) - to existing Indian identification systems for different purposes.

In HN fashion, if you are interested in how the govt pulled off the huge technical overhead of storing a billion records, see this talk by its chief architect - https://www.youtube.com/watch?v=08sq0y8V1sE

j0hnM1st 1 day ago 0 replies      
This is such a relief ...

Also this is the man behind the fight


denzil_correa 1 day ago 4 replies      
Here's what the Attorney General - the government's counsel - argued:

> He said that in developing countries, something as amorphous as privacy could not be a fundamental right, that other fundamental rights such as food, clothing, shelter etc. override the right to privacy.

It's a question which has always bothered me. What happens when two fundamental rights clash with each other?

vkrm 1 day ago 1 reply      
I'll wait for the full text of the judgements before celebrating. The ruling apparently consists of 6 judgements and should be available shortly. Hope this reins in the Aadhaar monster without leaving any wiggle room for the government to exploit.

edit: The judgement is now available here: http://supremecourtofindia.nic.in/pdf/LU/ALL%20WP(C)%20No.49...

fwx 1 day ago 3 replies      
Can someone more well-versed in Indian law help me understand: does this ruling prevent collection of biometrics, or restrict it somehow under the Aadhaar system?

Also, I recall the Indian Government was pushing Aadhar Pay - a biometric fingerprint scan based PoS payment / verification system (likely it is already deployed, I don't live there so I don't know). What happens to that now?

ahamedirshad123 1 day ago 0 replies      
This is what Indian government said:

Citizens don't have absolute right on their body, privacy argument bogus, govt tells SC


Abishek_Muthian 1 day ago 0 replies      
"Privacy is about the freedom of thought, conscience and individual autonomy, and none of the fundamental rights can be exercised without assuming a certain sense of privacy," argued Mr. Subramaniam.

It would be interesting to see if there are more specifics w.r.t. data security in the judgements. I smell a criminal lawyer cooking up ways this judgement could aid clients who refuse to open their computers and smartphones to police!

The judgement will be widely shared worldwide; people outside India who aren't familiar with Supreme Court judgements from India should keep a Merriam-Webster nearby.

anuraj 1 day ago 0 replies      
After 70 years of independence, the right to privacy is recognised as a fundamental right by the Supreme Court of India. Individual rights, especially privacy, underlie the cornerstone of democracy - liberty. A small step for the Indian Supreme Court, a giant leap for 1/5th of mankind!

Welcome democracy. Bye bye mobocracy!

eklavya 1 day ago 1 reply      
Not sure what will happen, I want both this and Aadhar.
thrw000 1 day ago 0 replies      
On the one hand Aadhar is so convenient. If I want a phone number or a bank account, I can simply identify myself with my thumbprints and iris scans and get it activated immediately without paperwork. This has really made things easier for people. Using biometrics also reduces fraud when claiming benefits from the government and maybe makes the process easier as there is again no paperwork, and it is easy to make claims.

But on the other hand, all this makes it so easy to track anyone. All your bank accounts, cards, phone numbers, and internet connections would be linked to your Aadhaar number and would be centrally accessible. This is a privacy nightmare. I am already getting frequent messages from my phone company to link my phone number with my Aadhaar number, or let it be deactivated.

All this information would be in the hands of government officials. The Indian bureaucracy is notorious for corruption. What if you could purchase somebody's data through an "agent" - get access to everything they do, everything they buy, everyone they transact with, everyone they communicate with, the contents of every message they send to anyone at all - imagine the kind of possibilities this opens up for negative minds.

Besides this, someone could just hack the data and maybe leak all of it. Someone recently created an app that would let you get anyone's details including their phone number, address, etc. by typing in their Aadhar number. It was taken down a month ago. I'm not sure about the exploit but it was related to using plain http instead of https somewhere. I checked one of the Aadhar linked projects and found that they were using an open source library in the backend which wasn't up to date, and the version being used had some documented security vulnerabilities. I wonder how safe peoples' data really is.

A large number of Aadhaar numbers have already been leaked thanks to government websites. It is possible to extract a person's fingerprint or iris scan using photos of their hands or face under specific conditions. If a person has linked their bank account with Aadhaar, which is becoming compulsory, one could take money out of their account by impersonating their prints or iris scans. Fortunately there is an option to protect yourself from this - go to the Aadhaar website and lock your biometric data. If used regularly this can protect people from "biometrics theft", but the biometrics are unlocked by default, and for 99% of people they are going to stay that way.

ribasushi 1 day ago 0 replies      
An amazing talk from a couple of weeks ago on the details of Aadhaar, how it was "implemented", and why this court ruling is super important both in India and closer to the West: https://www.youtube.com/watch?v=iCkhupMROZU
vkrm 1 day ago 0 replies      
Full text of the judgement is available[0]

Fair warning: it's pretty long (547 pages!)


tim333 1 day ago 0 replies      
Privacy is a tricky thing to make a right, in terms of where you draw the line. Hidden cameras in the bedroom should be illegal, but a right to a private table at restaurants wouldn't be practical.

I'm not quite sure where you'd set the boundaries.

godzillabrennus 1 day ago 2 replies      
Will this help push more privacy centric companies toward operating out of India?
vsviridov 1 day ago 1 reply      
But they also prohibit use of strong encryption, so that internet traffic could be snooped upon...
abhi3 1 day ago 0 replies      
A lot of comments in this thread are misleading. What the Supreme Court has done is expand the interpretation of an existing fundamental right ('Right to life') to include a 'Right to Privacy'. This means that if any law is made that infringes on an individual's privacy, it'll be tested for reasonableness.

So before this judgment, the legislature could have for example made a law requiring all internet activity to be reported to the government or criminalized homosexuality (existing law) and anyone challenging the law could not claim that the law violated his privacy as such a right was not recognized.

After this judgment, such an argument could be made and the courts would test whether the violation of one's privacy is a reasonable restriction or not. So a law requiring you to have number plates on your car to be captured by traffic cams, or KYC norms for Bank accounts, reporting of your financial data to tax authorities could be held to be a reasonable restriction whereas laws such as criminalizing one's sexual orientation could be held unreasonable.

What prompted this constitutional reference was the government's 'Aadhaar Scheme', which compelled 1.2 billion citizens to hand over their private biometric data to the government if they wanted to claim any government services. This judgment provides the test to be used while deciding whether the law and its applications are constitutional. Most likely the scheme will not be struck down in total, but specific instances will be tested on a case by case basis. (e.g. Aadhaar can't be made compulsory for getting health services but can be made so for a gun licence, as the latter seems reasonable but the former may not.)

jpelecanos 1 day ago 0 replies      
Would this verdict affect Joint Cipher Bureau's SIGINT ops?
trhway 1 day ago 0 replies      
Seems like they had a good session - from other news, they also struck down instant Muslim divorce. As one can guess, that divorce procedure isn't gender-symmetric.


"In India, Muslim men have been able to end their marriages by saying the word talaq Arabic for divorce three times. They could do this in person, by letter or even over the phone."

ngold 1 day ago 0 replies      
Thank goodness someone did it.
modi15 1 day ago 1 reply      
I for one, have no idea what this really means. Can anyone explain why this ruling is not totally useless ?
srinathrajaram 1 day ago 1 reply      
It says a lot about where we are headed that the Supreme Court had to say this.
AmIFirstToThink 22 hours ago 0 replies      
Privacy: You have a right to try to keep things as private as you want. You should not be prosecuted for merely trying to keep things private.

Your responsibility:

1. Don't share things that you want to keep private.

2. Carefully weigh the trade offs when you agree to share things about you. There is no retroactive privacy on things that you yourself shared.

3. You can attempt to retract what was shared about you, but you can't hold society responsible for successful retraction of that piece of information, from media or minds. You can add addendum e.g. an apology from someone, you can claim damages, but we can't rewind time.

Government responsibility:

1. Don't criminalize people trying to keep things private. This would be similar to the USA's Fifth Amendment: do not force people to share what they don't want to share. Government can ask "What crimes have you committed in the privacy of your home?", but it can't force people to answer that question or punish them for not answering it.

2. You can't plead the Fifth and refuse to prove your identity when you want to take food stamps from the government, or when you get an unearned tax credit. Just like in any transaction, the government can ask you to prove who you are and may demand increasing levels of proof depending on the transaction. Your choice would be to not participate in such transactions; in certain situations you implicitly give the government permission to demand proof of identity from you, e.g. if you request a loan to dig a well, a subsidy to buy fertilizer, or an unemployment benefit. Security of the exchange of money from government to people is the government's responsibility, and it may demand increasing levels of identification depending on the nature of the transaction, as deemed appropriate given the abuse observed or the potential for abuse. In places with high corruption rates, strong identification would be required and would be appropriate. I don't think people would be OK if someone collected their pension using just a name, address and birth date, with the government throwing its hands in the air and accusing you of not protecting your name, address and birth date.

What you can't do:

1. Make the world forget what it already knows. Can't ask Google to delete a piece of information about you from entire internet, once you yourself post it on Blogger. You can delete the post from Google, you can delete your account, but you must realize that once something is not private, you have no control over who has seen it and how many formats/copies of that information got created.

2. Get into a contract to drop certain privacy and then deny fulfilling the contract because of privacy rights. E.g. a model can't say that she won't show her face on a fashion ramp because of privacy after taking payment. A storybook author can't say that she won't share her book with publisher because of privacy after taking payment.

3. Make a demand that a private entity, on its private premises, can't have monitoring equipment. A store may decide to have cameras at the self checkout lanes, and it may deny self checkout to folks with full face covering. Your choice would be to not shop at such places, you can't use law to shut down the business's ability to monitor their private premise as they wish. An employer may make alcohol breath analyzer test required e.g. for a surgeon before surgery, a pilot, air traffic control at the start of the duty, or a long distance train driver. The employees in this case can't claim privacy rights to deny such tests.

4. When you are in a public place, e.g. a sidewalk, you are participating in a public endeavor that comes with dropping some of the privacy protection you would get in, say, your bedroom. The rays of light that bounce off of you or your belongings are fair game to be captured. Photographers do not need your permission to capture rays of light travelling in their direction when they stand in a public place, a private place they own, or a private place where the owner has given them permission to capture the rays. Those photographs can only be used for personal consumption or for non-profit activities, e.g. an investigation or news reporting. Any commercial use of the photo, e.g. in a product advertisement, would require a release agreement from the person in the photo.

I think Strong Privacy and Strong Identification both are required, for some things they are mutually exclusive, in some parts you trade one for another. Authentication/Authorization/Encryption/Non-Repudiation is needed to deliver these rights.

Consider this: if privacy laws are absolute in every aspect of life, then you can't have antitrust laws that stop competitors from fixing prices or agreeing to anti-competitive behavior. If privacy laws are absolute, then smartphone apps that capture photo/video of a crime unfolding won't be allowed, due to privacy concerns of the criminal. If you can keep something private (lock the door to your room, your safe deposit box), no one will force you to expose it, but one can't demand privacy in situations that naturally expose information to others, unless you explicitly set the expectation of privacy (attorney-client, doctor-patient, a service provider) as part of a contract. Government may make laws to cover the most common situations, e.g. your real estate agent sharing your budget with the seller of the property, your medical records, etc.

Privacy law is natural. What I draw and erase on a doodle board in the privacy of my home is my business, you can't force me to divulge it. What I say in my head to myself is my business, there is no thought crime. What I sing when on a trail is my business, no one can force me to say which song I sung. When government or corporation tries to invade the natural privacy, it should be stopped. In that regard, privacy is a fundamental right. But, privacy can't be claimed to hide criminal record from your neighbors or employers.

More of me trying to sort it out in my own head.

Shivetya 23 hours ago 0 replies      
They also made sexual orientation a right as well. Read further into this ruling; it is about more than just privacy at the top level. They went on to make sure people/prosecutors understand what they really mean.
joering2 1 day ago 0 replies      
I'm not sure if I read it here, but there was a great example given of how to answer people with the "if you are not doing anything wrong, you have nothing to hide" approach.

Using this logic, just recently I somewhat won an argument with my fiancee. She always believed that I'm hiding something on my phone because she doesn't have the PIN to it, and because I'm unwilling to give it to her, she assumes the worst.

Therefore I made a bet. I asked her what she does in the bathroom. She answered that she does what everyone else does: #1, #2 or showering. I replied that she must be doing something wrong or maybe illegal, since she always not only closes the door but locks it as well! We had a short argument back and forth about how, obviously, it is not about hiding something but rather about enjoying your own time in privacy, and I think she kind of got it.

We have had a bet in place for 3 months now: if she leaves the door wide open while in the bathroom, I will give her the code to my cellphone. So far I haven't had the need to give it out just yet :)

arc_of_descent 1 day ago 3 replies      
I'm not sure what this has to do with Aadhaar, although the initial petition mentions it I think.

Aadhaar is set up as a proof of identity, not a proof of citizenship. I for one did not get an Aadhaar until a week ago! The only reason I had to was that a company I applied to asked that I use my full name as mentioned on my Aadhaar card.

India is a bureaucratic mess. And as UG Krishnamurti put it very succinctly, India is a failed country. As mentioned elsewhere in this thread, we desperately need to focus on poverty first.

christa30 1 day ago 1 reply      
The US Pentagon website was hacked quite recently, yet some in this government think they can make the Aadhaar data completely secure.
known 1 day ago 0 replies      
Can we expect such a judgement from the Chinese regime?
ap46 1 day ago 1 reply      
Finally some good news after the countless train pile-ups.
Water Found Deep Inside the Moon nationalgeographic.com
435 points by chenster  1 day ago   238 comments top 17
Iv 1 hour ago 0 replies      
Calm down.

From the article: "the glass beads contain only 0.05 percent water."

It is much drier than Sahara sand where soil moisture can reach 1%. Dune's scenario would be a piece of cake compared to what you would need to do to feed a colony off of this water. I suspect that energy-wise it would be cheaper to import water from Earth and recycle like crazy.
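To make the comparison concrete, here is a quick back-of-the-envelope sketch. The 0.05% water concentration is from the article quote above and the ~1% Sahara soil-moisture figure is from this comment; the rest is simple arithmetic, and it optimistically assumes 100% extraction.

```python
# Back-of-the-envelope: mass of source material you would need to process
# to yield one kilogram of water, at a given water concentration.

def mass_needed_kg(water_kg: float, water_fraction: float) -> float:
    """Source mass required for `water_kg` of water, assuming 100% extraction."""
    return water_kg / water_fraction

lunar_glass_per_kg_water = mass_needed_kg(1.0, 0.0005)  # 0.05% water in beads
sahara_sand_per_kg_water = mass_needed_kg(1.0, 0.01)    # ~1% soil moisture

print(lunar_glass_per_kg_water)  # ~2000 kg of glass beads per kg of water
print(sahara_sand_per_kg_water)  # ~100 kg of Sahara sand per kg of water
```

So even under ideal extraction, the lunar beads demand roughly twenty times the processed mass that Sahara sand would, which is the point about recycling being the cheaper option.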

tambourine_man 22 hours ago 19 replies      
I've always been more excited about a Moon colony than a Mars one. I mean, compared to Mars, the Moon is ridiculously close.

Many of the same benefits Musk seeks (backup for Civilization, excitement) for a fraction of the cost.

Low gravity and maintaining an atmosphere under a dome would be more complicated, but again, compared to Mars, it's right there.

It would be a more humble and realistic first step, IMO.

lovelearning 22 hours ago 1 reply      
This was last month. Another analysis this month by a different organization reached the opposite conclusion.

[1]: https://scripps.ucsd.edu/news/analysis-rusty-lunar-rock-sugg...

kin 22 hours ago 7 replies      
Unrelated to the article, but WOW did anyone else see the giant full view height ad video at the top? I have ad blocker off for NatGeo. It's followed by a giant banner. I can't imagine how much crazier ads have been getting and most of us don't even notice 'cause we have ad blocker.
Abishek_Muthian 12 hours ago 0 replies      
"In 2009, NASA crashed a rocket and a satellite into a crater on the moons south pole, in the hopes of picking up additional watery evidence. "

NO. In 2008, ISRO crashed the Moon Impact Probe (MIP) into the lunar surface as part of the Chandrayaan-1 mission. MIP didn't have any specific role other than a political agenda. NASA's Moon Mineralogy Mapper (M3), on board the Chandrayaan spacecraft, was instrumental in providing the first mineralogical map of the lunar surface.

Quoting S. Pete Worden, center director at NASA's Ames Research Center:

"NASA missions like Lunar Prospector and the Lunar Crater Observation and Sensing Satellite, and instruments like M3, have gathered crucial data that fundamentally changed our understanding of whether water exists on the surface of the moon."

National Geographic titling the article 'Get the facts' while blatantly ignoring ISRO's contribution is not healthy.

The reason I brought this up is not to score nationalist brownie points, but to give credit where it's due. In a country like India, where an organisation like ISRO instills scientific temperament in a large population, ignoring its contribution by global media focussed on science is an insult to the entire scientific community.

pushpen99 19 hours ago 0 replies      
Water molecules on the moon were confirmed in 2009 via the Chandrayaan-1 mission launched by ISRO/India. Strange that there's no mention of that in the article.
Pica_soO 2 hours ago 0 replies      
I wonder - if you warm it up and freeze it back down, wouldn't the water fracture the rock around it?
mjevans 17 hours ago 0 replies      
It wouldn't be as sexy as sending astronauts up, but I think we could get a lot of bang for the buck sending up generic robots and some scientific equipment that could also be used to boot-strap a local production environment for some resources. (solar oven/etc)

I imagine it would be useful to gather core samples from a variety of locations and catalog the location of resources. I wonder how much of that could be done with higher intensity radar aimed at the surface while in orbit...

bad_user 20 hours ago 0 replies      
"The Moon Is a Harsh Mistress" novel doesn't seem so impossible now :-)
grogenaut 22 hours ago 0 replies      
Is the process of drilling for it less intensive than just making it from the soil?
gene-h 17 hours ago 0 replies      
If this is really the case, it's a pretty big deal. While water is important in keeping humans alive, one big near term application is propellant. Being able to make H2-O2 propellant on the Moon could reduce the costs of lunar return missions. There is also something of a business opportunity in using propellant mined on the Moon to refuel satellites in Earth orbit. The delta V cost from Moon-GEO(or pretty much any other orbit) is lower than the cost of Earth to GEO. So if one can get the infrastructure set up it makes sense. But water is useful for other things too. Just having access to hydrogen on the Moon lets us do so many things. This makes it easier to do a number of extractive processes, we can use hydrogen to reduce metals in lunar soil[5], make acids like HCl and H2SO4, make silicone for seals, purify silicon for solar cells and electronics.

Of course, we already know the Moon has water, the problem is this water is located in permanently shadowed craters. It's hard to get power in a crater that's permanently in the dark. In addition, aside from crashing a spacecraft into one[0], we haven't explored these craters and don't know what the environment is like.

But this new discovery implies that certain surface regions of the Moon could have quite a bit of water. Sure, the water may not be as concentrated as in permanently shadowed regions, but we can get power more easily and we understand the lunar surface environment better. Not only that, if the water is in the form we expect, then we have ground truths from the Apollo missions from which we can start developing extraction processes[1]. The TRL for robots capable of operating on the lunar surface is relatively high[2][3]. Using specs from the top-performing team in NASA's lunar robotic mining challenge[] and the estimated concentration of water in regolith, it is reasonable to assume that a rover about the size of China's Chang'e rover could potentially mine enough regolith in about ~75 days to obtain enough water to equal its landed mass (assuming 100% extraction from regolith).
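The ~75-day estimate above can be sketched numerically. The 0.05% water fraction and 75-day target come from this thread; the rover mass is my assumption (China's Yutu rover is roughly 140 kg), and the implied mining rate is derived here, not a published spec.

```python
# Hedged sketch of the lunar mining estimate: how much regolith a small
# rover would need to process to recover water equal to its own landed mass.

def regolith_required_kg(water_kg: float, water_fraction: float) -> float:
    """Regolith to process for `water_kg` of water, assuming 100% extraction."""
    return water_kg / water_fraction

rover_mass_kg = 140.0    # assumption: Yutu-class rover
water_fraction = 0.0005  # 0.05% water locked in glass beads (per the article)
days = 75.0              # mining duration from the comment's estimate

total_regolith = regolith_required_kg(rover_mass_kg, water_fraction)
rate_kg_per_day = total_regolith / days

print(total_regolith)    # ~280,000 kg of regolith in total
print(rate_kg_per_day)   # ~3,733 kg/day, i.e. roughly 155 kg/hour sustained
```

The derived rate (a few tonnes of regolith per day) is the number to sanity-check against actual excavator performance; any extraction efficiency below 100% scales it up proportionally.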

Of course the problem is that the water is locked up in glass. We need to grind or melt the glass up to get the water out. Grinding has the potential to be more energy efficient, but I suspect we won't be able to extract very much and melting glass uses quite a bit of energy. We might be able to reduce the amount of energy by using a molten salt flux to reduce the melting temperature.

Of course, another option is to be smarter about how we use the energy we put into melting glass. We can recapture the heat and run a stirling engine for electrical power. It would be even smarter to put the spent glass itself to use since we have to melt glass anyway, we might as well form it into something useful. We might use the spent glass to make bricks for landing pads, roads, and other structures


Frogolocalypse 6 hours ago 0 replies      
Amazing. Stephen Baxter's book 'Space' continues to connect with reality.
pasbesoin 11 hours ago 0 replies      
Unless we settle our military confrontations (of varying degrees), any successful political entity is going to have to take a (sufficiently potent) position on the moon, first.

Unless it is willing to accept the circumstance of its sole continuation being off-Earth (Mars, in the near term). And has the belief that its off-Earth presence can continue and succeed in the absence of its erstwhile Earth-based counterpart.

0xdeadbeefbabe 22 hours ago 0 replies      
> From that point on, lunar water discoveries started gushing.

Edit: Just wait till the bottled water companies find out that moon water exists. (https://www.usatoday.com/story/money/2017/08/18/maine-poland...).

overcast 22 hours ago 0 replies      
That's no Moon!
idlewords 20 hours ago 0 replies      
I know a great place really close to the Moon where there's all the water you want, and you don't even have to dig for it.
Universities are broke so let's cut the pointless admin and get back to teaching theguardian.com
416 points by ryan_j_naughton  1 day ago   283 comments top 25
meri_dian 1 day ago 14 replies      
I work in the administration for a top public US research University. The increase in size of University administration and bureaucracy is due to a number of factors. One is certainly unnecessary employment and over-employment. Not only at high levels, with VP's, Assistant VP's, Assistant Vice VP's, Chancellors, Vice Chancellor's, Executive VP's, Directors of XYZ, etc, but also at low levels where the work done by 3 could realistically be done by 1.

However it's also important to recognize that not all of the runaway growth of University bureaucracy is due to poor management or redundant workers; expansion of IT infrastructure and increased regulatory requirements - especially for public institutions - demand more labor. These are the obvious culprits, but beyond these, because the modern University has become far more than just a place of higher education and has come to resemble a miniature city, it is expected to serve the diverse non-academic needs of tens of thousands of students, in addition to more traditional academic needs. Counseling and advisory services, recreational activities, food service, engagement and diversity programs, ubiquitous computing, etc. all add to the University's bottom line. Universities fear that if they were to stamp their feet and refuse to supply these amenities in the name of keeping down tuition, matriculation rates would decline as students would seek greener pastures elsewhere.

Add to this the fact that Universities receive no penalty from the market for continually increasing their prices. Because student loans are available to service ever increasing tuition costs, and students pretty much need to go to college to succeed in the 21st century, demand for college education is highly inelastic. What economic entity wouldn't raise its prices if it knew demand for its product wouldn't suffer?

In a traditional market, as one supplier increases price, competitors enter the market offering lower prices. This doesn't happen in the market for higher education because the value of a University is largely tied to its prestige, and prestige cannot be easily generated by competitors. We bemoan the high cost of University education then mock the University of Phoenix and similar offerings. Market dynamics are the guilty party here.

rfdub 1 day ago 11 replies      
I work in post-secondary administration, so I think I have some perspective here. Part of the problem, at least in the US & Canada (where I live), is that post-secondary institutions are positioning themselves less and less as places to get an education and more and more as places to go for an "experience." It's no longer enough to provide a quality education; universities now are selling themselves on their facilities, their "student life" and all the other intangibles that are secondary to actual education. This leads to all the administrative bloat we're seeing: now that many schools are functioning more like glorified 4-year spas, they have to have departments filled with staff to plan events, throw parties, Snapchat sports games, provide "safe spaces," etc.

I haven't been in the sector long enough to have a real handle on when or why this shift happened, but from my perspective it's the primary driver of the increasing administrative bloat. Schools are competing more on the intangibles, and so they need to invest more in these areas, which means more staff and more overhead.

Personally, I don't think the whole university model is long for this world, though, as there are plenty of ways competency can be signaled apart from a fancy foil-stamped piece of paper. Eventually, when the costs of a university education don't provide a positive return over any reasonable time horizon, students are going to start looking for alternatives en masse and the market will innovate to meet that demand.

mnm1 1 day ago 7 replies      
Yes. This is why I refuse to donate to my alma mater anymore. Tuition has nearly tripled in fourteen years while they are still teaching the same number of students with roughly the same or fewer full-time faculty. There's something seriously wrong with that, and this is a huge symptom of it. Until they get their act together, they need less money coming in, not more. This is supposed to be a nonprofit institution, but clearly many people are making big money in this business at the expense of students. The federal loan programs certainly don't help either. Allowing student loans to be discharged in bankruptcy would also lessen this money feast for universities. Alas, no solution looks in sight, so I do my part in keeping money away from these money furnaces.
Animats 1 day ago 6 replies      
Stanford is building a new "campus" in Redwood City. 35 acres. 2,700 people on site. None are students. None are faculty. No teaching or research will occur there. It's all administrators.[1] "School of Medicine administration; Stanford Libraries and University Archives; the major administrative units of Business Affairs; Land, Buildings and Real Estate; University Human Resources; Residential & Dining Enterprises; and the Office of Development", says Stanford's FAQ. ("Development" in university-speak means fund-raising, not building construction.)

Now that's management bloat.

Stanford has only 2,180 faculty members.

[1] https://redwoodcity.stanford.edu/

bluetwo 1 day ago 3 replies      
As an adjunct professor running one class per year, I ran the calculation of:

(Amount I'm paid per class / (students in class * cost per credit * credits for class) )

And found I'm paid about 10% of what the students pay for the experience of taking my class. I can't help wonder what happens to the rest of that money.
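The calculation in this comment is easy to sketch. All of the numbers below are hypothetical placeholders chosen only to illustrate the ~10% figure, not the commenter's actual pay or tuition rates.

```python
# Hedged illustration of the adjunct pay-fraction formula above:
# pay per class divided by the total tuition revenue that class generates.

def pay_fraction(pay_per_class: float, students: int,
                 cost_per_credit: float, credits: int) -> float:
    """Adjunct pay as a fraction of tuition revenue for one class."""
    return pay_per_class / (students * cost_per_credit * credits)

# Hypothetical example: $4,500 per class, 30 students, $500/credit, 3 credits.
fraction = pay_fraction(4500, 30, 500, 3)
print(f"{fraction:.0%}")  # → 10%
```

With these placeholder numbers, a single 30-student class generates $45,000 in tuition, of which the instructor sees $4,500 - the rest is the "where does the money go" question the comment is asking.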

slackstation 1 day ago 3 replies      
The pointless admin comes from services given to the students. Universities (those that aren't household brand names like the Ivies) compete on services and facilities. And because most students are young and using other people's money (their parents' or their future selves'), they will choose schools not because they offer the best deal educationally, but because they have beautiful grounds, newer, swankier dorms, and all the social clubs and facilities that go with them, like sports fields, etc.

It's a market problem with misaligned incentives and payment structures that has slowly grown worse over the past 40 years. No one actually says no because competition favors those that fatten themselves up with attractive but functionally useless things.

It's more like peacock feathers than malice or greed by administrators.

dmix 1 day ago 2 replies      

> Pournelle's Iron Law of Bureaucracy states that in any bureaucratic organization there will be two kinds of people:

> First, there will be those who are devoted to the goals of the organization. Examples are dedicated classroom teachers in an educational bureaucracy, many of the engineers and launch technicians and scientists at NASA, even some agricultural scientists and advisors in the former Soviet Union collective farming administration.

> Secondly, there will be those dedicated to the organization itself. Examples are many of the administrators in the education system, many professors of education, many teachers union officials, much of the NASA headquarters staff, etc.

> The Iron Law states that in every case the second group will gain and keep control of the organization. It will write the rules, and control promotions within the organization.


4bpp 1 day ago 0 replies      
People are happy to claim they want to downsize university administrations when the appeal is voiced like this, but when another kind of editorial pops up just as frequently in the opinion press, they will just as enthusiastically demand more counseling for students in emotional quandaries, more university-mediated internship opportunities, more officials providing sexual assault prevention training, more recourse to resolve student-advisor disputes in a way that shifts the power balance towards the student, more varied dining options, and a plethora of other goodies that can mostly only be realised with more and/or more powerful admin staff.
cperciva 1 day ago 0 replies      
About five years ago, the president of my university proudly announced at a Senate meeting that they had appointed a new Director of Sustainability, to help ensure that the university's operations were sustainable.

I asked if, in light of the increasing administration headcount, they thought appointing a Director of Sustainability was financially sustainable.

They clearly didn't get the message; while that particular Director has moved on to other things, there is now an Office of Sustainability with about a dozen people.

dmix 1 day ago 1 reply      
In an administration-heavy organization, every problem seems to be answered with "how can we add more administrative layers, processes, and backroom deals to satisfy group x (management, media, special interest groups, voters, etc.) that something appears to be getting done?" rather than with "looking at all available options, what is the best way to resolve this problem (inside or outside of government)?". Basically a "when all you have is a hammer, everything looks like a nail" type of thing.

This is the difference between group A (admins) and group B (engineers, teachers, etc) in Pournelle's Iron Law of Bureaucracy [1]. The former is often presented as a clever talented group of smart people who savvily work the system in TV shows like House of Cards but I question the utility it really offers the world when the ROI is so often questionable.

I'm curious how much of the modern dysfunction in governments (not just in the US) is due to the fact that politicians are now almost entirely career politicians who spend their formative years in this insular world, the majority coming from the same private schools and 90% of them with law degrees. In the past, figures such as the founding fathers were businessmen, writers, and intellectuals first, embedded in the real world, who then went into public service.

The same analogy applies to universities, with administrators raised within the system rather than drawn from the teaching staff intimately familiar with the front-line realities of the organization.

[1] https://www.jerrypournelle.com/ironlaw.htm

leggomylibro 1 day ago 1 reply      
Kind of unrelated, but I am extremely disappointed in how much 'giving back to local communities' has been abandoned by modern universities, in favor of their massive administrative payrolls.

What used to be a core tenet of thought is now a dollar figure. If you live in the state, you get a discount. End of story, obligations fulfilled, full stop.

But it's not a commitment or a central belief. No university will open its facilities to community members. No stage will be available for public performances, no instruments or machines will be made available for inquiring minds, no local organizations will spend more than a few hours on campus for a field trip.

Providing those things would cost money, sure enough. But they used to be part of a university's mission. Some of the older ones still enshrine the ideal in their mottos. That should mean something. It doesn't.

reaperducer 1 day ago 1 reply      
Whether it's a university or a government or any other bureaucracy, the money eventually flows to the top. And the top tier of employees have no incentive to remove their own livelihoods.

Thinning the waste at the top is a great idea, but never gets done unless a bigger bureaucracy makes it happen.

forsakenkraken 1 day ago 0 replies      
As one of the 'pointless' admin staff at a British Russell Group university: the article seems to be overly hammering home the author's point, which has to do partly with the book 'Bullshit Business' that he's trying to sell. I don't deny that there are issues in the university system here in the 'United' Kingdom, but I feel he's overhyping them to get clicks and sell his book.

As several previous commenters have noted, we have a somewhat dysfunctional market, where most universities charge the same fees. For students, you're still picking based on courses and reputation; you can't really pick a cheaper 'no-frills' university. This marketisation has leached into the central university services, where we're turning everything into a product to sell. Not much of a market when academics have to use us. We are very cheap; it's not like we've got to make money. Obviously if you got rid of that, you'd be able to remove some staff, but you'd still need quite a few to meet the legislative demands placed on universities these days.

There are definitely some truths in the article. Not everyone needs to go to university, and certainly some students are taking on massive amounts of debt for very little gain. Universities do chase various league tables and other such measures (some imposed centrally), which requires lots of data gathering, and large amounts of data require staff to deal with them. Students really do like shiny new buildings, so universities go on building sprees. Again, a poor NSS (National Student Survey) score can potentially reduce student numbers and thus income. But to be fair, you'd want things to be nice when you're paying £9k per year for your fees alone.

REF and TEF aren't well liked by academics, so I'm quite sure a lot of them would be happy if they were dropped. It's unsurprising that the article is penned by an academic and one of his main suggestions is the dropping of REF.

The USS pension scheme deficit is probably misleading. If you want some detailed reading around that, then Mike Otsuka's articles are good - https://medium.com/@mikeotsuka

norswap 1 day ago 0 replies      
It seems increasingly likely to me that the whole academic model is on the verge of existential crisis, and headed towards profound transformations. I'm curious to see where this leads. Hopefully someplace better than before. (I should mention I'm a PhD student.)
perseus323 1 day ago 0 replies      
I worked at my university for 2 months after graduation (as an employee, on a well-funded project). A project that should have had no more than 4 developers had 3x that many. It took 2.5x as long to finish (I left in the middle). Crippling, dysfunctional politics, lunch meetings with stakeholders, useless field trips. I think everyone involved knew what was going on and didn't care.
santaclaus 1 day ago 1 reply      
Is this true in the US, in general? My alma mater has a circa 9 billion dollar endowment... yet it still sends me sob story letters soliciting donations.
PeachPlum 1 day ago 0 replies      
My University, Huddersfield, isn't broke and is debt free to boot.

> In addition to the Creative Arts Building which opened in 2008, a new £17 million Business School opened in 2010, followed by the £3 million Buckley Innovation Centre in 2012, the £22.5 million Student Central building in 2014, and the £27.5 million Oastler Building for Law and the School of Music, Humanities and Media, in 2017. £5.5 million invested in University Campus Barnsley.

> The University now attracts students from more than 130 countries. With an annual turnover of approximately £150m each year, it estimates it is worth £300m annually to the local economy.

And yet there is rabble-rousing concerning the Vice Chancellor's £337,000 salary.


pragmar 1 day ago 0 replies      
Probably the best deep dive I've read on the cost of higher ed was published a few years ago by Robert Hiltonsmith. It addresses US universities and colleges as opposed to UK ones, so the issues may not map one-to-one, and the data is a little dated (published in 2015). Still, a worthwhile read.


forkLding 1 day ago 5 replies      
I have actually had an idea related to the sort of debt American university students have to take on for their academics.

Why not use a mixture of blended learning and Coursera, where students pay not just professors but also industry folk for cutting-edge knowledge, and also open up a marketplace for YouTube tutorial creators to run paid physical workshops in their cities? It could grow into a sort of Airbnb for education and workshops. I feel a lot of straight-up learning could be gained this way, and the 3% charged per workshop would go towards scholarships. For people who think you would be paying high prices per lecture/workshop in this model: a university student already pays about $50 or more per lecture.

Instead of the basically physical/manual admin systems in individual centralized universities, you would have a decentralized software system managing schedules and booking workshops/lectures.

Just an opinion and thought I've been having, any feedback welcome.

rahimnathwani 1 day ago 0 replies      
This article is purportedly about UK universities, but half of the evidence quoted is from studies of _US_ universities. So no matter how strong the evidence, it's irrelevant.

The second half of the article is prescriptive but short on details. What does it mean to require "people to fully carry out their own fanciful ideas"? Does it mean that someone who decides the trash should be collected every other day rather than every day, should actually collect the trash?

Like many people, I'm attracted by headlines which align with my own biases. But my level of bias here is unchanged after reading the article!

tonyedgecombe 1 day ago 1 reply      
The problem in the UK is there is no significant competitive pressure. What would help is separating the teaching from grading, just as it is in schools.

Right now an employer has to consider a degree from Oxford Brookes in a completely different light to one from Oxford University. If they knew the students were graded in exactly the same way then it could really shake the system up.

Fej 1 day ago 0 replies      
The university I go to is now run more like a business, with top-down leadership. Questioning the higher-ups is apparently grounds for firing.

The disgust with the administration has gotten to the point that few recent graduates plan to donate, ever. Tuition is so high that it feels like they already got all the money they deserve.

mrslave 1 day ago 0 replies      
In addition to this article, from the vlog "Plunging Enrollment at Mizzou" <https://www.youtube.com/watch?v=h7CHd-w02lc>:

> They can't pay all of the administrators that they need.

Also consider that the bennies at de facto government jobs are also higher than market rates (e.g. contribution to retirement schemes, insane types of paid leave) and it's funded with your taxes or government debt (future taxes).

The FA also addresses the politicization of college education which wasn't my point but interesting nonetheless.

tqi 1 day ago 1 reply      
"For instance, before introducing a new procedure they would need to eliminate an old one."

If someone wrote a tech article suggesting that companies should require removal of a line of code for every new line introduced, do you think that article would make it to front page of HN?

c517402 1 day ago 1 reply      
Administrative bloat isn't just a problem at the University level, but across all levels of education in the US.
Chrome Enterprise blog.google
415 points by pgrote  2 days ago   262 comments top 28
redm 2 days ago 20 replies      
I'm hesitant to invest anymore into the Google ecosystem after reading about how account termination can happen without detail, or recourse to resolve. [1] The last thing I need is more lock-in to a Google world.

[1] https://news.ycombinator.com/item?id=15065742

pducks32 2 days ago 7 replies      
I thought this was a special version of Chrome the browser, and I think many people will too. Especially someone like my brother, who works at a corporation. If they told him they're switching to Chrome Enterprise he'd be a tad confused.

Side note: the reading experience on this blog is one of the best I've seen on mobile. Love the text size, though the header animation was not the smoothest. Nonetheless, great job.

twotwotwo 2 days ago 2 replies      
One of my annoyances on consumer Chrome OS is that the built-in VPN support is tricky. There's a JSON format, ONC (https://chromium.googlesource.com/chromium/src/+/master/comp...), that maps to OpenVPN options. When I last used it, the documentation was a bit tricky (though it may have improved), I couldn't find ONC equivalents for some of my .ovpn options, and, most frustratingly, there was very little specific feedback if you tried to import a configuration that wasn't right. Because of all that, I wonder if it was developed so Google could support specific large customers' VPNs (think school districts or companies) and its public availability was mostly an afterthought.

If you leave the GUI, you can also run openvpn yourself on a good old .ovpn file, but you lose some of the nice security properties you get with the default Chrome OS setup, you have to do cros-specific hacks to make it work (https://github.com/dnschneid/crouton/issues/2215#issuecommen... plus switching back and forth between VPN and non-VPN DNS by hand), and last I checked it made ARC (Play Store) apps' networking stop working.

I would consider paying a premium just to get my Chromebook connecting to work's VPN smoothly, though of course I'd love it if improved VPN functionality were available to everyone by default.

At some point I'm probably also going to take a second look at the latest ONC docs. It looks like they've improved since I first looked at VPN setup a while back.
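To make the comment's description concrete, here is a sketch of the shape of an ONC payload for an OpenVPN connection. The structure follows my reading of the ONC docs, but every value (host, GUID, options) is invented, and a real config needs certificate settings this omits:

```javascript
// Hypothetical ONC (Open Network Configuration) object for an OpenVPN
// connection; all values are illustrative only.
const onc = {
  Type: "UnencryptedConfiguration",
  NetworkConfigurations: [
    {
      GUID: "example-vpn-guid",   // any unique identifier
      Name: "Work VPN",
      Type: "VPN",
      VPN: {
        Type: "OpenVPN",
        Host: "vpn.example.com",  // roughly the .ovpn 'remote' host
        OpenVPN: {
          Port: 1194,             // .ovpn 'port'
          Proto: "udp",           // .ovpn 'proto'
          CompLZO: "true"         // .ovpn 'comp-lzo'
        }
      }
    }
  ]
};

// The file imported in the Chrome OS UI is just this object as JSON:
const oncJson = JSON.stringify(onc, null, 2);
```

The frustration described above is that when a structure like this is slightly wrong, the importer gives little feedback about which field it rejected.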

jstewartmobile 2 days ago 5 replies      
Sounds great until they shut your shit down without explanation, and all you're left with is a support number that is about as helpful as a brick wall...
pat2man 2 days ago 0 replies      
This is probably the perfect OS for any shared terminal: libraries, internet cafes, etc. You don't need native apps, just a locked down browser that can keep your settings and bookmarks across devices.
open-source-ux 2 days ago 3 replies      
Not a popular opinion here I know, but I'll say it anyway. Not a single word in that blog post about privacy.

Chrome OS is already widely used in US schools (and tracks student online activities), now we have a 'business-friendly' version of Chrome OS.

What kind of analytics does a cloud OS like this record? What does Google do with that data? Even if that data is 'anonymised' (a pretty meaningless term nowadays), in aggregated form it gives Google staggering quantities of data that they can mine in the future. Why did Google not even mention the word privacy once in that blog post?

niftich 2 days ago 0 replies      
Notwithstanding the Active Directory integration, this is the clearest shot across the bow of Microsoft's on-prem management suite yet.

The naming is puzzling. But I'm sure MS shops are used to weird names, and aren't likely to get pedantic about whether or not there should be an "OS" in there. They likely went with the simpler name to build on mindshare among decision-makers, and to intentionally muddy the waters to their benefit.

tbyehl 2 days ago 2 replies      
Is this just a re-branding of "Chrome device management"?

I wish they'd come up with something family-oriented. I've got my mom, girlfriend, and girlfriend's children all using low-end Chromebooks / Chromebases as their primary computers, and I'm using one for about 80% of my computing. Chrome device management would be useful for us, but $50/year per device plus needing to buy G Suite per user is a bit much.

Havoc 2 days ago 2 replies      
A big chunk of business is dead in the water without Excel (and to a lesser extent Word/Powerpoint).

And no don't tell me google sheets. Great for sharing data...ultra crap for data manipulation.

solatic 2 days ago 4 replies      
Lots of enterprises out there with many users who need nothing more than a web browser, email, light word processing and maybe slideshow software. Active Directory integration makes the migration possible. Chrome OS provides it all in a way which dramatically reduces maintenance costs compared to Windows.

If Google starts showing some reduced TCO figures, they'll start to pull a lot of converts.

Multicomp 2 days ago 2 replies      
$50 bucks per device per year? For what, extra management frameworks on a chromebox? What a bargain /s
devrandomguy 2 days ago 1 reply      
On a related note, does anyone know how to bury a dead corporate user account? The company that gave it to me doesn't even exist anymore, but Google keeps insisting that "account action is required". The company terminated my login shortly before imploding, and I lost the associated phone number when I fled the country, so there is no way that I can get back in to shut it down myself.

I suppose I will eventually just buy a new phone in a few years, but I'm not thrilled about all that private work / business data that is sitting in limbo.

MBlume 2 days ago 1 reply      
I would really like to have a computer for use at work where my IT department could feel like they had assurance that it was secure/virus free/malware free but from which I could sign into my personal accounts without feeling like I'm opening them to my IT department. Right now I just carry two laptops in my bag and it's really annoying. Wondering if Chrome Enterprise will enable this sort of thing.
trequartista 2 days ago 3 replies      
While there is Google Play Integration, there is no word on how they plan to integrate the corporate intranet - which is littered with thousands of custom applications ranging from payroll to HR to ticket and incident management.
morpheus88 2 days ago 0 replies      
Created a throwaway for this. But Google has a reputation for shutting you down without any recourse, and I can attest to this personally. Hope I'm not off topic, but I had a successful Android app which was taken down from the Play Store because I used a single copyrighted keyword, even though it was really essential for the app and I had provided context for using it. It was a free app anyway and I was making no money from it (no ads either). Anyway, they removed my app from the store and I had no way to get it back up: all my ratings, downloads, and reviews were lost. The point here is that they didn't give me a chance to defend myself. One strike and you're out, never coming back again. Imagine enterprises using Google products with this sort of an attitude.
bbarn 2 days ago 1 reply      
I suspect Active Directory integration might make this actually have legs. Especially in the educational industry.
bedhead 2 days ago 0 replies      
Sounds great until you realize that their "Hate Algorithm" or whatever will end up erroneously shutting down your computer one day.
hiram112 2 days ago 1 reply      
I've always had the belief that the Microsoft juggernaut would continue its slow decline in relevance as mobile and web devices removed the need for Windows, and the improvement of apps like Google Docs, OpenOffice, etc. would eat away at Office from the other side.

But I really think we're now approaching the point where their fall might happen swiftly. Chromebooks are fine for the majority of corporate users. And if they catch on, there is no need for any of the Active Directory / Azure tie-ins that MS has been hoping would pull enterprise customers towards Azure, Office 365, and all the rest.

And even if Microsoft can convince customers to stay, they simply won't be able to charge the same prices they've enjoyed for decades now with the overpriced Office, Server, and Client access licenses.

And once an enterprise moves away from Active Directory and Office, I don't see any benefit of using the very expensive Sharepoint, Outlook, OneDrive, and other apps that have always been overpriced, but worth it as they integrated well together and saved companies more money via lower IT costs.

massar 2 days ago 1 reply      
I hope they finally acknowledge the Security Bypass they have in this "Enterprise" version... where it will be even more serious


It is fun to report those things to Google Project Zero and then find that people on that side obviously do not understand that security bypasses are... well... security issues.

full submission reproduced below, just in case they radar-disappear the item... duping items is apparently what Project Zero does so that the items disappear from Google results...


Thank you for an amazingly solid looking ChromeOS. Happy that I picked up a nice little Acer CB3-111, thought about plonking GalliumOS/QubesOS or heck OpenBSD on it, but with the TPM model and the disk wiping, not going to.

Just wanted to note this discovery so that you are aware of it and hopefully can address the problem as it would improve the status quo. Keep up the good work!

Greets, Jeroen Massar <jeroen@massar.ch>


By disabling Wireless on the login screen, or just not being connected, only a username and password are required to login to ChromeOS instead of the otherwise normally required 2FA token.

This design might be because some of the "Second Factors" (SMS/Voice) rely on network connectivity to work and/or token details not being cached locally?

But for FIDO U2F (eg Yubikeys aka "Security Key"[1]) and TOTP no connectivity is technically needed (outside of a reasonable time-sync). The ChromeOS host must have cached the authentication tokens/details though to know that they exist.

The article at [2] even mentions "No connection, no problem... It even works when your device has no phone or data connectivity."

[1] https://support.google.com/accounts/answer/6103523?hl=en
[2] https://www.google.com/intl/en/landing/2step/features.html


Chrome Version: 59.0.3071.35 dev
Operating System: ChromeOS 9460.23.0 (Official Build) dev-channel gnawty
Blink 537.36 V8


First, the normal edition:

- Take a ChromeOS based Chromebook (tested with version mentioned above)
- Have a "Security Key" (eg Yubikey NEO etc) enabled on the Google Account as one of the 2FA methods
- Have Wireless enabled
- Login with username, then enter password, then answer the FIDO U2F ("Security Key") token challenge

All good as it should be.

Now the bad edition:

- Logout & shutdown the machine
- Turn it on
- Disconnect the wireless from the menu (or just make connectivity otherwise unavailable)
- Login with username, then password
- Do NOT get a question about Second Factors, just see a ~5 second "Please wait..." that disappears
- Voila, logged in.

That is BAD, as you just logged in without 2FA while that is configured on the account.

Now the extra fun part:

- Turn on wireless
- Login to Gmail/GooglePlus etc, and all your credentials are there, as that machine is trusted and cookies etc are cached.

And just in case (we are now 'online' / wireless is active):

- Logout (no shutdown/reboot)
- Login with username, password.... and indeed asks for 2FA now.

Thus showing that toggling wireless affects the requirement for 2FA.... and that is bad.


The expected behavior: being asked for a Second Factor even though one is not "online".

As it is now, you could be walking through, say, an airport with no connectivity, and with the token at home, just the username and password would be sufficient to login.


For the Google Account (jeroen@massar.ch) I have configured:

- "strong" password

and as Second Factors:

- FIDO U2F: two separate Yubikeys configured
- TOTP ("Google Authenticator") configured
- SMS/Voice verification to cellphone
- Backup codes on a piece of paper in a secure place.

Normally, when connected to The Internet(tm), one will need username(email), password and one of the Second Factors. But disconnect and none of the Second Factors are needed anymore.


The Google Account password changer considers "GoogleChrome" a "strong" password.... you might want to check against a dictionary so that such simple things cannot be used, especially as 2FA can be bypassed this easily.

gangstead 2 days ago 0 replies      
I don't believe the checkmark indicating "Cloud & Native Print" support on Chrome OS. I've got two Chromebooks and have used Chromeboxes at work and have never gotten printing to work reliably.
chaudhary27 2 days ago 1 reply      
I don't like being locked into the Google ecosystem at work, but I also hate some Microsoft services at work.
booleandilemma 2 days ago 1 reply      
I assumed this was an enterprise version of Chrome, with the main difference being it doesn't auto update, thus being more friendly to the IT departments who administer a company's computers.
jdauriemma 1 day ago 0 replies      
Random observation: the font-background contrast ratio in this post makes it very hard to read comfortably.
jaypaulynice 2 days ago 0 replies      
$50/device?? With that said, I suspect Facebook is working on a browser...that could compete well with Chrome...any reason why Facebook hasn't developed a browser?
killjoywashere 2 days ago 0 replies      
David was working on the smart card authentication system for ChromeOS not too long ago. Glad to see this maturing.
demarq 2 days ago 1 reply      
That is a very compelling price point.
ben174 2 days ago 2 replies      
I've been seeing IT departments become increasingly frustrated at their inability to lock down security on macOS to the level they'd hoped. I wouldn't be surprised to see Silicon Valley startups issue Chromebooks as the default in 3-4 years' time, especially if Google gets this right.
mnd999 2 days ago 0 replies      
Is it April 1 already?
Disconnect. Offline only bolin.co
587 points by danmeade  2 days ago   197 comments top 44
mck- 1 day ago 6 replies      
Beauty. Almost a piece of art.

I was on a plane yesterday (literally on airplane mode) and I finished a book I've been working on for a month, and prepped/wrote half of a presentation. Quite often I produce much of my writing on a plane.

I find myself very productive on a plane. Especially on cheap flights that don't have in-flight entertainment. Literally no distractions for a preset amount of time. You're not only offline, you're also physically stuck. Best way to make the time fly by is by being productive.

beat 2 days ago 5 replies      
For those interested in managing online time and getting ourselves offline regularly, the book Deep Work, by Cal Newport, has some very useful ideas. One that I plan to start experimenting with is the idea of scheduled internet access - allow yourself to get online only at certain times of day. This isn't just for work. Even if you're, say, standing in line at the grocery store, you don't get to pull your phone out and check your email.

As the author points out, we've forgotten how to be bored. We need to learn to engage that part of our brain again.


gervase 1 day ago 3 replies      
> your ability to Google something

In my opinion, this actually is something that makes me valuable. It doesn't matter how well you can synthesize information if you can't find it in the first place.

Having the ability to take a problem, figure out what you don't know, reprocess those parts into a format that Googles well, filter out the noise from the results, and only then synthesize the information gathered is actually not as common as you might think.

nicklaf 1 day ago 1 reply      
The funny thing is, smartphones are all but useless for many tasks the minute you go into airplane mode. There are exceptions, but you're basically holding a client to a distributed operating system which has appropriated many of the promises of wearable personal computing for corporate profit.

So yes, if you've already ceded your right not to be interrupted by running apps like Twitter and Facebook in the background, then I can see the appeal of cord-cutting.

Of course there also exists the possibility of at least trying to use these devices without ceding this autonomy in the first place, but that requires admitting just how little today's social media offerings will have to do with this approach.

And no, a smartphone is not the right place to do research anyway. In fact, neither is the WWW using off-the-shelf browsers, but it's the version of Hypertext we're stuck with for now.

norswap 1 day ago 0 replies      
While I sympathize, that would be forgetting all that the net and online-ness has done for me. I would be a fundamentally different (and, in my current estimation, worse) person if I didn't have the net. It's been an engine of personal growth much more than one of distraction.

It might not be the same for everyone, of course. But I still think going offline is giving up too much.

The pendulum doesn't have to swing all the way to the other direction. Couldn't we just focus on being more responsible in our net consumption and promoting the good, beneficial stuff instead?

paperpunk 2 days ago 1 reply      
I got stuck in a bad habit of idly browsing the internet any time I'm at home and then having to rush to get to work, so I've created rules on my router to disable web access in the mornings before work and late at night before bed.

It is quite effective and I suddenly do other things but I do worry that it's a psychological crutch which is just going to make self-control even harder in the long term.

thinbeige 1 day ago 1 reply      
Super nice idea. I remember when I was very young and the Internet was also young, maybe just two or three years old, and I experienced something strange. At that time we still used US Robotics 56k modems to connect to the Internet. When I was offline my computer felt dead. Worthless. Useless. Only when I was online did my computer feel right, and I felt good.

You have to imagine that I loved my self-built PCs even before the Internet came. I spent so much time with them: upgrading them, spending night and day installing and trying new software, stuff like Sierra and Lucasfilm adventures, Clipper/dBase, Turbo Pascal, QuarkXPress, Corel Draw; saving for hardware such as PostScript laser printers, an AdLib and later a SoundBlaster sound card, SyQuest hard drives, flatbed scanners, all the typical stuff. And once the Internet came, an offline computer felt like a dead computer.

patatino 2 days ago 4 replies      
Well done, I had the urge to google the second most commonly spoken language while reading the article.

I turned my phone into some kind of "dumb phone":

- Deleted all games, news apps, basically all the apps I don't regularly need

- Turned off email. It's still configured, I turn it on if I need to read an email

- No push notifications at all

Next step: Turn off mobile data for browser and only activate it if I need to read something. I'm just not ready yet!

numbers 1 day ago 0 replies      
I love this line:

"Do your research online, but create offline."

A lot of times, I'm working on something and in the zone and then all of a sudden I see an iMessage notification and forget my thoughts almost instantly.

groundCode 1 day ago 0 replies      
Somewhat brilliant in that by forcing me offline I was distraction free in reading the piece. Basically it fostered an environment in which I was more likely to read to the end of the article
ptspts 1 day ago 1 reply      
How is this page implemented for Chrome? It looks like it is using service workers. Is there a tutorial?

EDIT: Tutorial for Chrome here: https://developers.google.com/web/fundamentals/getting-start...
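A service worker would be needed to keep the page usable while offline, but just detecting the transition (which is presumably what the dispatchEvent tricks elsewhere in this thread exploit) only needs the window `online`/`offline` events. A minimal sketch — assumed, not the site's actual code:

```javascript
// Pure decision logic: the article is revealed only while offline.
function shouldShowArticle(onLine) {
  return !onLine;
}

// Browser wiring (commented out so the sketch runs anywhere;
// '#article' is a hypothetical element id):
//   const update = () => {
//     document.querySelector('#article').hidden =
//       !shouldShowArticle(navigator.onLine);
//   };
//   window.addEventListener('offline', update);
//   window.addEventListener('online', update);
//   update();

console.log(shouldShowArticle(true));  // still online -> prints false (hidden)
console.log(shouldShowArticle(false)); // offline -> prints true (shown)
```

Dispatching `new Event("offline")` on `window` fires the same listeners, which is why the console one-liners below work without actually disconnecting.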

chrisbolin 1 day ago 1 reply      
y'all have been very kind. the productivity tips are very helpful. here's my system:

- sit down at my desk with laptop and phone

- disable wifi on my laptop

- turn my phone face-down on the desk, muted, with wifi/data still on

This lets me check if I have any new messages via my phone, but it is a polling system vs an interrupt system. I have to opt in to check. And I am very aware that using my phone looks and feels less productive, so I try to avoid using it too long.

I've been able to be pretty productive (as a software engineer) with this system. I find that I have to reconnect on my laptop about every 30 minutes to do something or other. Of course, every day varies.

Unbeliever69 2 days ago 3 replies      
I am probably one of the few techies in the world that does not own/use a smartphone. Since 2011 I have used a cheap Verizon flip phone for exactly this reason; I want more control over my life. NOTHING is that important that I need to be plugged in 24/7. My wife has a smart phone which is great for when we travel (maps, Yelp, Fandango, Uber, etc.) Don't get me wrong, I was standing in a long line outside the Apple store the day the first iPhone came out. However, over the years I came to realize that in order to have the amount/type of work/life balance I desired, technology would have to take a back seat to my relationships and interests outside of my career.

I haven't looked back.

5_minutes 2 days ago 0 replies      
On old Nokia phones you could create profiles, like "work" and "weekend", and configure each one's functions and also its distractions.

I figure there's something like that for Android, but on iOS you can only turn on Do Not Disturb. It works though.

Multicomp 2 days ago 0 replies      
Using a WP7 device has really forced me to come to terms with all the online cruft and clutter I look at all day. When I only have email, calls and text, I really do seem to see a lot more in life.
Nekobai 2 days ago 0 replies      
This article seems to discuss similar stuff to The Shallows: https://www.amazon.co.uk/Shallows-internet-changing-think-re...
binaryapparatus 2 days ago 4 replies      
Doesn't work on FreeBSD/Firefox? First I put the interface down, so no network access; on the second try I physically pulled the cable out. Nothing happens.
csomar 1 day ago 1 reply      
If you are, like me, interested in reading the post but do not want to get offline:

> window.dispatchEvent( new Event( "offline" ) );

tenkabuto 1 day ago 0 replies      
The point about chasing links in articles is interesting to me. One of my favorite activities is loading articles up in Pocket for all-but-offline reading and Pocket's Listen feature, which uses Android's text-to-speech (?), to listen to articles.

The point about articles being written differently according to whether the author expects that the article will be read offline or not interests me, though, especially if a decent amount of background/context provision is outsourced via providing a link to documents that cover such material.

kuschku 2 days ago 0 replies      
In Firefox, just use Alt-F to open the File menu, and check "work offline" to view this page.
bryananderson 1 day ago 0 replies      
I use an iOS app called Freedom to disconnect. I can block the entire Internet (excluding iMessage and FaceTime) or a list of sites (social media, news, etc) for a period of time. This way I can still contact people, but cannot browse idly.

Is it a crutch? Sure, but crutches work. If you were dealing with alcoholism, the best thing you could do would be to remove your ability to easily access alcohol.

lypextin 2 days ago 1 reply      
To me, using cron to disconnect my internet every half hour just to remind me to break the loop has been immensely helpful. And annoying. But mostly helpful.

With my brain switched to offline mode, I have about three times better focus.

Similarly to this, I'm using my browser in full-screen mode most of the time to eliminate distractions. It was very surprising to me, how big an effect it has, to not see the tabs.

cableshaft 2 days ago 0 replies      
My phone bricked a week and a half ago, and I've been using a phone I accidentally walked into a lake with two years ago. It still works completely except for cell service, so it only updates when I'm connected to Wifi now. I also have a Google Voice number, so my texting works from that phone as well, but again only where there's Wifi.

I'm going to make a claim, pay a deductible, and get a new proper phone at some point, but I've been a bit lazy and delaying it a bit because it hasn't been too bad going without.

Although I did have one bad experience since it happened (almost immediately after). My car's battery died and it required me walking for almost an hour next to a dangerous street to get to some place that had Wifi and sort out getting my car towed and being able to open Uber to have someone pick me up.

ohthehugemanate 1 day ago 0 replies      
It really does require a lot of discipline to stay focused nowadays. In my personal life I'm terrible at it. I wish I read more, like I did before the Internet ate my life.

But at work, I HAVE to be disciplined. I start my day with 1 hour of communications catch-up, including stand-ups and slack. Then I turn off slack, and get to work. My phone is set to do not disturb automatically starting at 10am. I check messages when I'm on a break, going to the bathroom, on lunch, etc... But the messages are never allowed to interrupt me.

Works for me, at least.

hondish 1 day ago 0 replies      
I just created a new location on my MacBook's network settings, called it 'getitdoneland', and removed all the network services from it. Now my laptop has an 'airplane mode'. Thankfully, it takes just enough seconds for connectivity to return when I switch back to my regular location that I think I'll be dissuaded from distraction. Friction is good sometimes. Back to work...
mistniim 1 day ago 0 replies      
Personally just knowing that wasn't of much help, so I made a simple application to help me stay disconnected https://github.com/mistnim/coin-op-web
myf01d 2 days ago 1 reply      
in your console put

> window.dispatchEvent(new Event("offline"))

viach 1 day ago 0 replies      
Console -> Network -> Offline [x] also works.
wenham 2 days ago 1 reply      
For those that don't want to (and would rather 'miss the point'): view source and go to the .js file.

The text of the off-line site is about a third of the way down.

fizixer 1 day ago 1 reply      
I would love to work offline. But my work involves constant use of a commercial software that won't run without a license check connection to its central license server.

edit: just occurred to me. I should try to script my connectivity, so the connection is established just before the software is used, and terminated soon after. Looking into it.

webXL 1 day ago 0 replies      
Au contraire, mon ami: javascript:window.dispatchEvent(new Event('offline'))
KirinDave 1 day ago 2 replies      
Doesn't work at all for me on Linux Chrome, Linux Firefox, Windows Firefox, or Windows Chrome.

Is it a joke? Or just poor tradecraft?

jancsika 1 day ago 0 replies      
With Firefox reader view do event listeners still work?

If so it'd be nice to have an extension or option that goes into offline mode when reader view is triggered.

schnevets 2 days ago 3 replies      
Totally agree. Unfortunately, 90% of my work is development in a SaaS platform, so getting anything done will require my device to remain online.

Does anyone know of a Chrome Plugin/hack that might block all but a few web pages? Then I can enjoy the silence of working without distractions while still plugging into the application that I'm working with.

hozae 1 day ago 0 replies      
What is the bounce rate on your page?
iapurv 1 day ago 0 replies      
The irony was that I was unable to upvote this well written post since I was in airplane mode.
ericfrederich 1 day ago 1 reply      
Ctrl+Shift+I on Chrome opens developer tools. There is an "offline" checkbox you can click.
hatsunearu 1 day ago 1 reply      
Would be great if the website worked. Went offline and nothing happened.
afshinmeh 1 day ago 0 replies      
Thanks for that `window.dispatchEvent(new Event('offline'))` option though.

I had to be online and read the article at the same time :P

amelius 2 days ago 4 replies      
Hmm, in Chromium developer tools, in the Network tab, I set throttling to "Offline", but nothing happened in the page.
willhackett 1 day ago 0 replies      
dispatchEvent(new Event('offline'))
nnd 1 day ago 0 replies      
Don't blame the internet, numerous sources of distractions existed long before. If you are easily distracted, the real cause lies elsewhere.
r0fl 2 days ago 1 reply      
The article won't load for me.
saikatsg 1 day ago 0 replies      
Very cool
Go 1.9 is released golang.org
365 points by techietim  16 hours ago   104 comments top 12
old-gregg 14 hours ago 2 replies      
In case someone cares about these things, I compared the build times and the binary sizes for 1.9 vs 1.8.3 using the open source project we maintain [1]. This is on a 6-core i7-5820K:

Build time with 1.8.3:

  real    0m7.533s
  user    0m36.913s
  sys     0m2.856s
Build time with 1.9:

  real    0m6.830s
  user    0m35.082s
  sys     0m2.384s
Binary size:

  1.8.3 : 19929736 bytes
  1.9   : 20004424 bytes
So... looks like the multi-threaded compilation indeed delivers better build times, but the binary size has increased slightly.

[1] You can git-clone and try yourself: https://github.com/gravitational/teleport

zkanda 8 hours ago 1 reply      
In case you guys didn't know, there are multiple release parties in different parts of the world: https://github.com/golang/cowg/blob/master/events/2017-08-go...

Come join if you're near the area.

alpb 16 hours ago 1 reply      
t.Helper() is certainly going to be very useful. I often implement functions like:

  func testServer(t *testing.T, port int) {
      // ...do stuff...
      if err != nil {
          t.Fatalf("failed to start server: %+v", err)
      }
  }
similarly you can have

 func assertMapEquals(t *testing.T, a, b map[string]int)
It lets you hide such helper methods from the test failure's stack trace (where t.Fatal is actually called), making test errors more readable.

tschellenbach 15 hours ago 1 reply      
Nice, can't wait to run some of our benchmarks against this. Go has the awesome property of always becoming a little bit faster every release. It's like your code becomes better without doing anything. Love it :)
Scorpiion 9 hours ago 2 replies      
In the release notes it says:

 "Mutex is now more fair."
Source: https://golang.org/doc/go1.9#sync

Does anyone know what that means?

cristaloleg 15 hours ago 3 replies      
Please remember that sync.Map isn't type safe and is more optimised for reads.
tmaly 16 hours ago 1 reply      
I am looking forward to

1. runtime/pprof package now include symbol information

2. Concurrent Map

3. Profiler Labels

4. database/sql reuse of cached statements

5. The os package now uses the internal runtime poller for file I/O.

e12e 15 hours ago 2 replies      
Does anyone know of compile-time benchmarks spanning 1.4 through 1.9, along the lines of [1]?

I see there's (more) parallel compilation in 1.9 - so that should improve elapsed time (but not reduce cpu time) of compilation.

Would be nice to know if 1.9 is (still) on track to catch up to/pass 1.4.

[1] https://dave.cheney.net/2016/11/19/go-1-8-toolchain-improvem...

riobard 6 hours ago 2 replies      
I'm wondering if we could abuse type alias to fake generics somehow? E.g.

  // file tree.go
  type T = YourConcreteType

  type TreeNode struct {
      Value T
  }

  // rest of tree implementation
Then you can just copy the file and replace YourConcreteType at the top and voila!

Seems simpler to use than the unicode hack here https://www.reddit.com/r/rust/comments/5penft/parallelizing_...

ivan4th 15 hours ago 2 replies      
I was looking forward to the fix for the dreaded Linux namespace handling problem: https://www.weave.works/blog/linux-namespaces-and-go-don-t-m... which kind of makes Go suck for many important container-related tasks. But apparently the effort has stalled... https://go-review.googlesource.com/c/go/+/46033
alphaalpha101 16 hours ago 5 replies      
So this new concurrent map? Am I right in understanding it's designed for cases where you have a map shared between goroutines but where each goroutine essentially owns some subset of the keys in the map?

So basically it's designed for cases like 'I have N goroutines and each one owns 1/N keys'?

kokwak 13 hours ago 1 reply      
Please add enum class
Windy.com windy.com
438 points by davesque  16 hours ago   90 comments top 37
PaulHoule 2 hours ago 1 reply      
What I can't get over is the speed.

Ever since tile maps have become the norm, most of the weather radar services are unbearably slow on my DS(Hel)L connection. This loads fast.

I wonder what they are doing right.

jackschultz 2 hours ago 2 replies      
Interesting note, I play a lot of golf competitively, and they've basically recently allowed players in tournaments to use phones (obviously players don't do that much or if at all, concentration and all that).

But the one specific rule is that players can't use their phones to check the weather, and even more specifically the wind direction. Wind makes a huge difference on the course, and being able to know the exact direction of the wind where the ball is flying would be really helpful. The other part is being able to know if the wind shifts during the round. Before you start you can check the wind direction, but if that changes, you could be out of luck. This seems like a perfect golf aide, so much so that it's a penalty in a tournament.

dejv 8 hours ago 3 replies      
Windy was coded by the billionaire founder and owner of Seznam, the Czech search engine (and media company), one of only three search engines in the world that still beat Google in their local market.
amai 3 hours ago 0 replies      
bhhaskin 15 hours ago 4 replies      
That is pretty cool! Although hijacking the back button is a bit annoying.
karboosx 1 hour ago 1 reply      
Very similar to: https://www.ventusky.com
ilblog 7 hours ago 0 replies      
We are happy that you like windy.com. If you want to help us with this project, then report all issues to community.windy.com. We love bug reports from programmers, with screenshots etc. (Ivo)
SeanDav 2 hours ago 1 reply      
This site just went straight to my bookmark list, brilliant.

Small criticism: every time you move, it creates a new entry in the back-button history, so once you have moved around a bit you can't use the browser back button to easily return to the previous website.

penagwin 15 hours ago 3 replies      
This looks incredibly similar to https://www.ventusky.com/ doesn't it?
crosbyar 49 minutes ago 0 replies      
Other than the whizbang interface there's nothing really innovative going on here as far as actual science... Same with all the other me-too sites that use that same streamline animation code. Some of the visualization is downright misleading, but whatever. The ventusky wave animation is awful and physically incorrect.
mourner 5 hours ago 1 reply      
Great project! I recently wrote a detailed technical post on how to implement a similar visualization with WebGL; check it out: https://blog.mapbox.com/how-i-built-a-wind-map-with-webgl-b6...
rthomas6 41 minutes ago 0 replies      
Looks like they are using http://leafletjs.com/
stanlarroque 7 hours ago 2 replies      
It reminded me of this awesome project: https://earth.nullschool.net/
CharlesDodgson 4 hours ago 0 replies      
As someone who works with mapping data and web based maps regularly, this website is excellent in terms of usability. The ease of switching overlays, adding symbols, saving selection, adjusting the map are all excellent and intuitive. The ability to drill down on symbols added in a smooth and sensible way is excellent. This is how you make web maps for specialist data!
StavrosK 5 hours ago 0 replies      
That's funny, as some friends of mine made the exact same thing years ago, and it even had a similar name (Weendy):


They have since pivoted to something similar, as AFAIK they didn't get enough traction.

rodolphoarruda 1 hour ago 0 replies      
Thanks to this website I discovered that procrastination can reach new levels... there's no limit to it.
malloryerik 15 hours ago 1 reply      
I wrote in their forums suggesting they add air quality/pollution info and greenhouse gas emissions to their maps and it was done in about three days. I was impressed.

Btw I think they use Riot.js on their front end?

JumpCrisscross 8 hours ago 0 replies      
You know what strikes me? Look at the overland place where the winds move quickly. Those are our cities. We're living off whiffs in the aerodynamic backwaters on a world of windy metropoli.
abtinf 8 hours ago 1 reply      
Is there a name for the hotspot wind system south of Africa? That looks intense.
needz 1 hour ago 0 replies      
Trying to back out of this website after zooming in is really frustrating.
sccxy 9 hours ago 1 reply      
They also provide free API (http://api.windytv.com/)

Which is very cool to track ocean sailing.

I have made several trackers to follow around the world sailing races/adventures.



Dayshine 3 hours ago 0 replies      
The UI is very small on a big screen. The entire right-hand legend and menu is taking up around 10% of my screen width, so the buttons are tiny!
ClassyJacket 8 hours ago 1 reply      
Holy hell please give me my back button back.
pmoleri 11 hours ago 0 replies      
Great website, has been around for some years as windity and windytv. I guess windy will be its final name. I usually find windguru.cz easier to read, but windy offers a cool visualization that I think gives more context. It's really cool to check it during hurricanes.
subroutine 3 hours ago 1 reply      
It's going to be hot in SF today...


sparrish 14 hours ago 1 reply      
Not sure where the data is coming from but my area is showing arrows to the north west. We nearly never get wind from the south east and looking outside this map isn't accurate.
amelius 5 hours ago 0 replies      
Is it just me or is this website really slow?

(Especially after pressing the "play" button in the lower left).

33a 14 hours ago 3 replies      
Looking at this made me realize how insanely huge storms are in the southern ocean. Hurricanes and typhoons up north have nothing on that.
loblollyboy 8 hours ago 0 replies      
This is pretty, but I don't think it is going to be 35 C in the North Atlantic any time soon.
staticelf 7 hours ago 0 replies      
My friend that works in the aerospace industry uses Windy all the time.
seltzered_ 13 hours ago 0 replies      
FWIW, I found hang gliders really enjoying this site (alongside some other obscure wind estimation sites) for planning whether to go flying.
djsumdog 10 hours ago 0 replies      
and if you want to see the windiest city in the world, it's in the South Pacific:


ge96 5 hours ago 1 reply      
Oh shit what's going on by Texas
bradb3030 13 hours ago 0 replies      
It reminds me of hint.fm/wind
nvahalik 15 hours ago 3 replies      
What model is this pulling from?
nvr219 15 hours ago 0 replies      
Thanks for correctly saying "these data" in the menu
I spent my career in tech, but wasn't prepared for its effect on my kids washingtonpost.com
365 points by ALee  20 hours ago   326 comments top 10
citizenkeen 20 hours ago 18 replies      
'One of my favorite things you can do is plan a device-free dinner.'

_A_ device free dinner? Why aren't all of them? These weird little "small gestures" make me feel like people have lost control of their homes.

We have a landline expressly to give out to emergency contacts. And dinner is tech free. Every night.

I think the main problem is tech has made everything _else_ easier, but parenting harder, and parents just aren't prepared to fight the battles / put in the work. A parent who is staring at Instagram when they're at the park shouldn't be surprised that their kids wants screens, too.

mcone 19 hours ago 1 reply      
This doesn't invalidate the article, but I feel the need to point out that the author lives in a $63 million house with $80,000 worth of TV screens lining the walls. "Anyone in the house can change the screens displays to their favorite painting or photograph, in effect personalizing the room (via lighting, temperature, and even decor) to the guests own flavor." [0]

They certainly didn't do themselves any favors when they built their house.

[0] http://www.therichest.com/rich-list/the-biggest/12-unbelieva...

luckydude 11 hours ago 4 replies      
I'm a retired geek but I do tractor work, I just pulled a 14 hour day and I'm having some wine so salt heavily.

In my opinion, the best thing you can do with kids is create boredom. If they have access to a shop, or some lumber, they will start to build stuff. If kids are bored and they turn that into building, that's the first step towards getting into a good undergrad school. Building stuff is good.

The hard part, as parents, is creating that boredom. It's so easy to give them the video game babysitter. I haven't done well at that. I wish I had some magic statement that made that part easy, but that part is super, super hard.

The other thing I'd say about kids, and I hate this, really hate this, is private school. It's better. My kids went and were on track to go to Los Gatos High School which is a pretty decent school. For various reasons we found kirby.org and both of my kids go there and it's a shit ton better. I hate private schools, I think kids should experience the full range of people, not just the rich kids that get into private schools, but wow, the private school was so much better. So much better. Hugely better. My younger son who hates that school, it's a nerd school and he's a jock, came to me and said "yeah, I want to go there, it's better than Los Gatos". My older kid is applying to schools and he has a shot at the ivys, that's all the private school.

I'm ashamed to admit that I like private schools, but wow, have they been good for my kids.

Noos 17 hours ago 6 replies      
"As its use has proliferated, so has its effects...

...the interviewer, apparently, asks young people (age 18 to 22 years old) the following: whether men with the (device hidden) are human or not; whether they are losing contact with reality; whether the relation between eyes and ears is changing radically, whether they are psychotic or schizophrenic; whether they are worried about the fate of humanity."

Can you guess the device? It's the Sony Walkman, and the time is the 1980s.


The arguments aren't new or novel. Hosoda was saying similar things in the 1980s, and the Walkman just as easily isolated people in the 1980s as phones do now. Yet somehow humanity survived.

Remember pagers? They were the device of choice in the 1990s for kids, and they were pilloried just as bad, and even linked to drugs. There's a pretty decent history of people panicking morally about new technologies and their dehumanizing effects, and generally people have adapted more or less fine.

telesilla 20 hours ago 7 replies      
The hardest thing for me has been young children begging, just begging, for screen time. It's heartbreaking and I know very few parents who have managed to make no mean no. I grew up watching TV but then, TV was mostly for adults and I only paid attention those few hours it was child-friendly. I spent almost the entire decade of the 90s without a television, only going to the movies or the occasional VHS. Even today, while I'm on my computer for 8 hours/5 days a week, I read books when I stop work or cook or just sit and talk. I turn off the screen, close the laptop and turn off wifi on my phone. I worry for the attention span our kids won't have.

When was the last time you let a child just get bored, so they might entertain themselves with their imagination?

On the other side, when we go out for walks or camping or away from tech, it really doesn't take long for the kids to adjust.

otakucode 18 hours ago 3 replies      
I got Internet access in 1990 when I was 12 years old. Completely unfettered, unmonitored, unlimited in any way. And I wouldn't trade it for anything. Certainly many, if not most, kids would end up obsessing over social status and gossip and similar things. That has nothing to do with technology and absolutely everything to do with how they deal with their life in general. They're encouraged to avoid taking an intellectual approach to life, to never question or doubt their emotional impulses (indeed taught that those impulses are more trustworthy and 'pure' than conclusions reached intellectually). They're the kids that have always been popular, the bullies and the kids that get DUIs before out of high school. You can't protect them from themselves through any means if you're not willing to address their way of living.

And for the kids that aren't destined to live an adolescence of bickering and strife, they will flourish with access to the whole of human knowledge and ability to interact with online communities as an equal, without anyone knowing their age unless they choose to reveal it.

SunnyCanuck 19 hours ago 2 replies      
Have two kids, 14/12. They've grown up as fully connected kids, have always had access to their own devices, never more than a year or two behind the Big Now. We have never imposed "limits for the sake of limits" on their screen time. I find the idea preposterous, frankly.

We have no issue connecting with them, or doing family things together, or etc etc etc. If you can't connect with your kids when they have iPhone in hand, you're not going to be able to connect with your kids even if you're a million miles from the nearest wireless cloud.

A lot of discussion around this issue is just tired rehashing of the same complaints every generation over the past 150 years has said about the incoming generation. Some of y'all in here already sound like grandparents, lol.

fzeroracer 18 hours ago 0 replies      
When I was a teenager living with my parents we used to have a 'devices free' dinner where we'd sit in silence and realize that an awkward teenager had little in common to talk about with his out-of-touch parents. This was the time where conversation was forced and often led to arguments. Though this says more about my upbringing than the use of technology.

I think my point is that people see others looking at their phones in restaurants, or sitting silently, and assume that this void means the social aspect of the human race is doomed for destruction. You can't force conversation, and I've found with friends and even family that the phone has replaced forced discussions with something else. And you don't always need to fill silence with conversation and nonsense.

Similarly I would say there's nothing wrong with lessening the use of technology your kids use, but at the same time you can't just outright cut them and yourself out from it. Instead you should adapt; give your kids positive uses of the TV or their phone. If you try to force devices-free meals your kid may think that bringing up issues of cyberbullying, facebook etc might not be appropriate.

bungie4 20 hours ago 5 replies      
I have a 14 yr old and a 16 yr old step kids. They both live with their phones in their hands.

I took them camping this past weekend. Fishing (caught 4 lake trout!), tubing, camp fire... it was a blast. Even though there was no cell and no wifi, neither of them could bear to put their phone down AND CHECK IT.

They had fun, but the second they weren't stimulated they were asking to take a 40 minute boat ride (45mph!) to 'buy some ice cream' and get a cell signal.


rayiner 20 hours ago 8 replies      
It's a level-headed article. Parenting, like almost nothing else, turns normal, intelligent, educated people into nutters. A rational, evidence-based conversation about screen time, breastfeeding, pregnancy, birth, education, and diet is increasingly impossible to have, especially with millennial parents.

I just got my four year old an iPhone (with a data plan, because she realized she didn't have wifi in the car). It's great! She can text me emojis and Digital Touch messages, and Facetime me at work whenever she wants. It's fine: https://www.theguardian.com/science/head-quarters/2017/jan/0....

The Interface font family rsms.me
406 points by glhaynes  1 day ago   86 comments top 23
DiabloD3 1 day ago 3 replies      
Dear font authors:

Please screenshot renderings in multiple important renderers, e.g.: Apple Safari on a Retina box (highlights weird over-bolding due to their hinting prefs), Chrome and Firefox on Windows (both use FreeType, but custom builds that don't quite match stock), and anything normal on a Linux that doesn't use a hacked-up FreeType (ergo Ubuntu is out, so are RHEL/CentOS and Fedora).

Also, in both white on black and black on white, because font rendering is non-linear in respect to the 2.2 gamma curve (fun fact: everybody still uses 1.8 gamma for font rendering).

jack_jennings 1 day ago 2 replies      
This is based on Roboto (reusing some outlines directly; not initially acknowledged by the designer on the marketing site), and arguably doesn't tread much new ground either in character or use-case or license. Not convinced there is anything this adds to an already crowded space.
thinbeige 1 day ago 5 replies      
A disproportionate sans serif without any letter spacing. It's free though, so better than nothing.

Edit: Dear downvoters, what I am saying is that you can take any random sans serif, reduce the letter-spacing, and you end up with a similar-looking face that might be even more balanced. Despite my criticism, I expressed my high appreciation that the creator offers his work for free. If you disagree, let me know why instead of downvoting; maybe I am wrong and missed something.

tgsovlerkhgsel 1 day ago 2 replies      
Sadly, digits have differing widths, so if you have e.g. a right-aligned, increasing numerical value in your UI, its left digits will "wiggle around" as the last digit iterates through 0-9, and numerical values won't align with each other.

This may be OK for text, but specifically for user interfaces, this is the very first thing I check when considering whether a font is usable.

With a good font, it will be immediately obvious which of these amounts is more, while this font would likely mislead you:

$ 100000

$ 111111

Ironically, Roboto seems to get it right.

AceJohnny2 1 day ago 4 replies      
Looking at Interface's glyph map, I see that letter-O (O) is slightly wider than number-zero (0). Capital-eye (I) is indistinguishable from small-ell (l), though number-one (1) is distinguishable from both.

What other glyph ambiguities do you look out for on new fonts?
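
A quick audit is to render one line that covers the usual suspects; the list below is a conventional starting set, not exhaustive:

```python
# Glyph groups that frequently collide in sans-serif designs.
CONFUSABLE = [
    "Il1|",   # capital eye, small ell, one, pipe
    "O0o",    # letter O, zero, small o
    "rn m",   # r+n masquerading as m at small sizes
    "vv w",   # double v vs. w
    "S5", "B8", "Z2", "g9", "DO0",
]

def test_string():
    """One line to paste into a type tester or terminal."""
    return "  ".join(CONFUSABLE)

print(test_string())
```

Pasting the output into a specimen page at the target size shows at a glance which pairs a font actually disambiguates.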

gjm11 1 day ago 1 reply      
> Since this font is very similar to Roboto, glyph outlines from Roboto are indeed being used, mainly as "placeholders" while the glyph set is expanded. The Roboto license can be found in the source directory.


If this is a deliberate near-clone of Roboto, then at the very least there should be some explanation of how it differs and why.

sagichmal 1 day ago 0 replies      
It's nearly identical to the new Mac OS system font San Francisco (SF) but with tighter spacing and (subjectively to me) nicer finials and terminals. Looks great.
richev 1 day ago 0 replies      
> Interface started out in late 2016 as an experiment to build a perfectly pixelfitting font at a specific small size (11px.)

Didn't Tahoma[1] achieve this back in 1995?

[1]: https://en.wikipedia.org/wiki/Tahoma_(typeface)

harrygeez 1 day ago 0 replies      
I've been waiting for a font like this for forever! Finally a good alternative to Apple's San Francisco font.

Amazing job to the author!

rcarmo 1 day ago 1 reply      
I actually came across this yesterday and set it as a system font on my Linux machine, which runs Elementary.

Although I don't have a HiDPI display, it is nicer and (subjectively) more readable than what I've tried before (I still use Fira Code for coding and Fira Mono inside the terminal, but for the UI I tried several variants of Fira, Roboto and other sans serif fonts, yet none of them stuck).

fairpx 1 day ago 1 reply      
Nice font. Would there be a way for you guys to incorporate the font on a platform like Google's webfonts? For our business (http://fairpixels.pro) we are constantly looking for great fonts to use in the UI work we do for software teams. Having a scattered landscape doesn't help.
j_s 1 day ago 1 reply      
I was interested to discover fonts with programming-specific ligatures when they were discussed last month. I haven't experimented enough yet to know how well they work out in the long term.


chrissnell 1 day ago 0 replies      
I created an AUR package for this if any Arch users want to install the font:


This package only installs the OTF version currently.

Let me know if you have any problems installing.

duncanmeech 1 day ago 0 replies      
The interweb works better when you use things like <P> not <IMG>
nkkollaw 1 day ago 0 replies      
Looks really good.

We really need good open source fonts.

BafS 1 day ago 0 replies      
Thanks for this font; I really appreciate the mix between Roboto and Helvetica or San Francisco. The cuts of Interface are more horizontal than Roboto's (look at the S, for example) and I find it more readable, rational and beautiful. Good job!
floatboth 22 hours ago 0 replies      
Nice. Honestly, it reminded me of Apple's San Francisco more than Roboto o_0
SquareBalls 1 day ago 0 replies      
Beautiful, will definitely consider this on our new web site.
fredsted 1 day ago 0 replies      
This font is really pretty, and the text very readable. Great job.
cratermoon 1 day ago 0 replies      
Upper case I and lower case l look too much alike. Letter O and number 0 need to be more distinguishable.
tudorw 1 day ago 1 reply      
system font for the win...
virtuexru 1 day ago 0 replies      
Very very clean. I love it. The easier read the better imho.
wyager 1 day ago 7 replies      
I'm sure this is like tabs vs spaces for typographers, but why the hell do people make and use sans-serif fonts? Even ignoring aesthetics, the fact that there are horrendous ambiguities (like between I and l) renders these fonts completely inappropriate for computational tasks like copying passwords or secret keys. I have run into this problem multiple times on OS X and iOS, where the password managers use sans serif fonts.
The Librem 5: A Matrix-Native FLOSS Smartphone matrix.org
400 points by Arathorn  1 day ago   146 comments top 24
jph 1 day ago 1 reply      
Massive props to you all -- thank you for this step for freedom.

Purism is doing some of the most exciting work in the field of personal privacy and consumer computing.

The tech blog is terrific, the laptops are coming along nicely, and the team is making inroads deeper into the chipsets and drivers.

Librem 5 crowdfunding: https://puri.sm/shop/librem-5/

Matrix on Patreon: https://www.patreon.com/matrixdotorg

Purism blog posts: https://puri.sm/posts/

Hasknewbie 26 minutes ago 0 replies      
> "The specifications are continuing to get pinned down, and will not be finalized until after the campaign ends"

Big red flag right there.

In my opinion it is unethical to ask for money via crowdfunding when you haven't even bothered with finalizing the spec first.

morganvachon 18 hours ago 1 reply      
If this campaign is as sketchy as their laptop was, no thanks. I already don't trust them because of their misdirection in that project; they have a ton of goodwill and trust to rebuild before they can be taken seriously.



mrhigat4 22 hours ago 0 replies      
I have doubts, but I've been waiting for a phone like this for a long time. I hope it works out and gets enough funding. I'm glad they didn't make compromises on the OS, hardware switches, etc. I wish more companies would cater to passionate yet non-mainstream markets. It's crazy a truly hackable linux phone doesn't exist today.
Accacin 1 day ago 1 reply      
That looks really interesting. Their funding target seems quite high so I'm not entirely convinced that they'll be able to meet it.

I will definitely try and support this project as I'd love to be able to move to a more open phone.

Link to their crowd funding page: https://puri.sm/shop/librem-5/

pfooti 22 hours ago 1 reply      
I am sort of troubled by the crowdfund page. It is obviously designed to look like a kickstarter page, but it is in fact not a kickstarter. Say what you will about kickstarter, but I have questions.

Will the campaign charge you even if it doesn't reach its goal? There is no indication one way or the other on this page.

What are the refund / delivery guarantee policies here? KS ones are potentially controversial, but they at least are written down.

Are those pictures of the phone actual pictures or renders / mockups? KS has explicit policies about that, but this context makes that unclear.

INTPenis 20 hours ago 3 replies      
I remember jumping on the N900 bandwagon.

Hardware wise, for its time maybe, it was perfect with its full qwerty keyboard.

Software was so bad it was tragic. You couldn't even trust phone calls to work.

Clearly I want a FOSS phone to work, but the market is minuscule and therefore the product quality will likely remain at a hobbyist level.

Furthermore I tried the Jolla phone and that wasn't exactly FOSS but the small team could be compared to what you might see in a FOSS phone. And again it was barely useful.

I also tried the ZTE Firefox OS phone, it felt like a toy and the OS was crippled.

mrhigat4 1 day ago 5 replies      
I support the whole idea, but I think it's premature.

The Riot app, in my experience, has a pretty unwelcoming UI/UX and is still insanely buggy. Things like Jitsi integration, widgets and a phone partnership should come after a solid, stable 1.0 MVP, IMHO. Encryption is still opt-in and beta.

So super supportive of the environment, the momentum and a native matrix phone partnership is the right move eventually, but please get it stable, fast and polished first before branching out too far.

AdmiralAsshat 1 day ago 4 replies      

Is this what the actual phone will look like? Some kind of context photo with a reference hand would be great, because the dimensions and shape otherwise suggest this would be comparable to holding a Kindle or Nexus 7 next to my head!

xoroshiro 23 hours ago 1 reply      
As much as I want a FLOSS phone to succeed, I have my doubts. Then again, I'm not sure how all the Matrix stuff works, but if I can't simply give someone my phone number and expect it to just work, I don't see how this can happen.

Can someone explain it more simply? Does it completely forego SIM cards? Will it 'just work', or is it more of a 'we made progress in this area, but not a lot of people are going to find it practical' thing like Replicant?

grizzles 1 day ago 7 replies      
I wish these guys would just use Android from the get-go. It's already running on i.MX6, so by switching now they would at least have a chance of making a decent product.

They are literally throwing away hundreds of millions of dollars of excellent work that has been put into power optimization, for no good reason. I've told the CEO there this at least 3x, but he is stubborn.

> Android is so frustrating! Trying to remove Google's privacy invasion bit-by-bit removes functionality bit-by-bit, and you end up with a non-working phone. Purism will solve this by putting your privacy protection and security first. - Zlatan Todorić, CTO

This quote is nonsense. There are at least three projects (Replicant, Copperhead & another) that are shipping trees like this.

It will be so much harder for them to create a phone that stays alive for more than an hour or two running a full linux desktop.

XorNot 1 day ago 1 reply      
Gotta say, I would buy this. Android is android, but a phone which ran Linux and Gnome I'd happily switch to at this point.
Nelkins 22 hours ago 0 replies      
Just pre-ordered one. These folks have done a great job on their Librem laptop line, and I have wanted a "FLOSS-ier" phone for a while now. And if it goes nowhere I get my money back! (allegedly)
skierscott 20 hours ago 1 reply      
The details: https://puri.sm/shop/librem-5/

> that can also become a full desktop computer with an option for a compatible keyboard, mouse, and monitor ... It can be a desktop computer and phone all-in-one.

I'm interested to see their solution. How useable will the interface be?

jancsika 23 hours ago 3 replies      
> The CPU will be an i.MX6/i.MX8, where we can separate the baseband modem from the main CPU, digging deeper and deeper to protect your privacy and isolate components for a strong security hardware stack.

I'd like to know more about this separation. For example--can the phone boot without the baseband being powered on?

swiley 1 day ago 1 reply      
If it really can run fully open GNU/Linux it's worth what they're asking.
theEXTORTCIST 23 hours ago 1 reply      
This seems like a really cool device. One that I would certainly purchase.

What weird stretch goals they have. I wonder if these are jokes?

"$8m = Signatures of entire team printed inside the phone case
$10m = Free encrypted VPN tunnel service for all backers for 1 year
$20m = Candy Crush (clone) available for free"

everheardofc 17 hours ago 1 reply      
>The whole idea of the phone is to provide unprecedented privacy, security and autonomy by running an entirely FOSS Debian-based GNU/Linux stack (even including CPU & GPU drivers!)

Call me pessimistic but I don't think we will see a FOSS GPU driver anytime soon.

akavel 1 day ago 1 reply      
Do I understand correctly that it would be purely VoIP over GSM/2G/.../LTE? Where the particular VoIP implementation of choice is this "Matrix" protocol/ecosystem? Which potentially at some point (with enough financing?) may get some gateways enabling calls between Matrix phones like this one, and regular GSM/2G/... mobile phones?
moosingin3space 1 day ago 2 replies      
Their UI looks like it's based on Gnome -- I wonder if they wrote some extensions to make Gnome more phone-friendly.

IMO great idea to use Matrix as the communication layer -- especially when double-ratchet is stable, it'll be able to provide the good UX of things like Signal on Android, iMessage, Google Duo, and FaceTime, but built on an open platform. Hope it does well!

unknown2374 22 hours ago 1 reply      
I couldn't find the single most important piece of information for me, the battery life. Is that not decided yet or has it been omitted for marketing convenience? I was thinking of signing up to get one but with that missing bit I don't want to take the risk.
white-flame 13 hours ago 1 reply      
This phone's software may not track you, but the cellular network itself still does.
toolibre 16 hours ago 1 reply      
The first shots of the phone in the video have it surrounded by logos for Fedora, Arch, Suse, PureOS, etc.

First sentence of second paragraph of text: "based on Debian"

Why are you promoting the phone with a bunch of unrelated logos?

ZenoArrow 20 hours ago 0 replies      
I can't really afford a new phone right now, but I really want this to be a thing, and I think we're running out of chances for a FLOSS phone to take off, so I ordered one. I really hope it meets its target.
Amazon cuts Whole Foods prices washingtonpost.com
318 points by kelukelugames  16 hours ago   226 comments top 24
sowbug 10 hours ago 0 replies      
Brick and mortar retailers finally got their way in 2012 when Amazon started collecting sales tax in states where it had no physical presence.

This removed the reason for Amazon to avoid that very same physical presence in so many states. Now we have local Amazon warehouses with one-day and same-day delivery, Amazon delivery lockers in convenience stores, Amazon-operated delivery vehicles, and soon Amazon grocery stores.

Is this the level playing field[1] that B&M retailers had in mind?

1. http://www.mercurynews.com/2012/09/13/mercury-news-editorial...

cgb223 14 hours ago 4 replies      
It's funny that a paper owned by Jeff Bezos is reporting on a company owned by Jeff Bezos cutting the prices of another company just bought by Jeff Bezos
makecheck 15 hours ago 4 replies      
Most Whole Foods stores I've seen aren't exactly hurting for business and the parking lot is basically full. If their stuff becomes cheaper, that'll drive demand way up, at which point they'll need to have more ways to buy. That might mean building more stores, but my guess is that Amazon is expecting online shopping to go up once it becomes a bit too crowded at the actual stores.
afpx 15 hours ago 5 replies      
I'll believe it when I see it.

"salmon, avocados, baby kale and almond butter" - sounds more like they're going to go the Trader Joe's route: have a few high-visibility loss leaders that give the appearance of generally low prices but higher prices overall.

That said, I'm looking forward to the 365 brand being available through amazon.com. But, like at Trader Joe's, I'll have to re-check packaging to see from where they're sourcing the food.

mack1001 16 hours ago 3 replies      
I feel this is the beginning of the end of Instacart. Amazon is getting into same-day delivery big time with this. Now they own the grocery warehouses to drive the end-to-end supply chain.
Fej 16 hours ago 2 replies      
"Your margin is my opportunity."

- Jeff Bezos

Seems like they are continuing to follow this ethos.

petraeus 45 minutes ago 0 replies      
It's hilarious to me to think of the type of people who buy "organic" while driving around in their V6s, living in their million-dollar homes and dressing in designer clothing.

It's a feel-good type of marketing intended for weak-minded, ignorant hypocrites.

You actually want to save the world? Live a minimalist lifestyle, stop over-consuming, and realize your wealth is actually destroying the planet.

nemo44x 13 hours ago 0 replies      
I believe that top-quality ingredients will be strictly reserved for better restaurants and home cooks who are willing to pay for them. There just isn't the production capacity for this level of ingredient that the common person can freely buy, which is what Amazon's strategy is here.

They can make a splash by lowering the price of a few ingredients with shocking price tags, like avocados. But it will be impossible, due to supply lines and production costs, for Amazon to turn Whole Foods into a high-end merchant of top-quality ingredients while maintaining any kind of margin.

If they want to play the Amazon loss game they can for a while, but eventually, when a financial crisis hits and they have to rely on cash reserves, this cash-poor company relative to its peers will be in trouble.

VC's subsidize Uber rides. Amazon investors subsidize these types of ventures. For now.

noobermin 16 hours ago 6 replies      
Will there be as sexy of a press release when he cuts wages and lays off workers?
dotnetisnotdead 16 hours ago 1 reply      
Incredibly smart. There's a bit of a loss after an acquisition for obvious reasons, which usually means cutbacks and trimming the fat, etc. Looks like Amazon will be doing this but found a way to create a small rush of customers to offset it a bit. Very smart.

I for one will be going in there just to see what has changed. I haven't been in a WF for two years (New Seasons girl here), mostly because of cost.

uniformlyrandom 16 hours ago 5 replies      
Good. Whole Foods is a nice idea gone horribly wrong (horribly expensive). I buy some stuff I cannot find anywhere else from WF, but I get the rest for half of what WF is asking next door.
odammit 9 hours ago 0 replies      
Cheaper avocados. Well, melt my California heart.

If you want an awesome combo on savings with Amazon in the meantime, sign up for their cash-back credit card, then buy all of your household items using it through Subscribe & Save.

It'll end up saving you ~15% (5% cash back; 10% off) on most of your household and pantry items.
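
The ~15% estimate compounds roughly like this, assuming the 10% discount applies before the 5% cash back:

```python
price = 100.00
discounted = price * (1 - 0.10)    # Subscribe & Save takes 10% off: 90.00
cash_back = discounted * 0.05      # card returns 5% of what you pay: 4.50
net_cost = discounted - cash_back  # 85.50
savings = 1 - net_cost / price     # 0.145, i.e. ~14.5% -- close to "~15%"
```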

fragsworth 16 hours ago 5 replies      
I'm willing to pay extra to eat ethical meats/dairy/eggs and products that generally avoid factory farming. And otherwise eat vegan. By the look of things, Amazon will eventually get rid of most of these things that Whole Foods made very easy for us.

Yes it's more expensive. That's because the cheapest foods that you buy at the cheapest supermarkets are fucking terrible for the livelihoods of animals.

Not a fan of this corporate buyout. Amazon clearly has a much different direction in mind for this chain. I wish they bought Kroger instead.

perseusprime11 1 hour ago 0 replies      
Something about reading this news in the Bezos-owned Washington Post feels weird to me.
sjg007 10 hours ago 0 replies      
Amazon wants to be the Ocado of the usa and will probably take out blue apron as well.
fullshark 15 hours ago 2 replies      
I'm pretty skeptical Amazon knows how to run a premium grocer. I hope they didn't do this only cause they saw Whole Foods as a way to super charge their prime pantry program.
gigatexal 11 hours ago 0 replies      
Sweet! We can go back to Whole Foods again. And the Amazon Prime members discount is rad.
johnhenry 13 hours ago 0 replies      
Probably not the right place to say it, but I really hope Amazon/Whole Foods pays attention to the quality of their hot bar food. In some locations it's consistently great, while in other locations, not so much...
TearsInTheRain 16 hours ago 1 reply      
I was surprised at how quickly this was approved by the FTC. I think they should be more active in preventing corporate consolidation.
ljf 16 hours ago 2 replies      
Just the pedant in me, but seems odd that the title doesn't read: 'Amazon cuts Whole Foods prices'

Surely the name of the company is 'Whole Foods'?

(just edited this comment for clarity and grammar)

ajaimk 16 hours ago 1 reply      
Seriously, why implement a paywall that can be dodged with Inspect?
dsfyu404ed 16 hours ago 1 reply      
Will shoppers now refer to it as "partial wallet" instead of "whole wallet"?
amelius 16 hours ago 1 reply      
After so many reports of fake items on Amazon, I wonder where this is going. Fake food?
randyrand 15 hours ago 0 replies      
>Everybody should be able to eat Whole Foods Market quality

Strange. Last I heard, Whole Foods has made no effort to get this quality of food into the hands of the 7.2 billion people who aren't Americans.

Or maybe it's okay that different groups of people around the world have access to different qualities of food.

But you can't have it both ways.

Introducing Network Service Tiers googleblog.com
344 points by ropiku  1 day ago   200 comments top 31
Veratyr 1 day ago 12 replies      
A long-standing complaint of mine is that cloud egress pricing severely limits the usefulness of compute. If I want to, say, process some visual effects on a large (1TB) ProRes video, I might spend $1 on the compute but $100 on the egress getting it back.

Unfortunately these changes don't really resolve that problem. "Standard" pricing is a paltry 20% less. That 1TB video egress still costs $80 and for that price I can rent a beefy server with a dedicated gigabit pipe for a month.

Why is "Cloud" bandwidth so damned expensive?

I'd love a "best effort" or "off peak" tier. I imagine Google's pipes are pretty empty when NA is asleep and my batch jobs aren't really going to care.
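
For a sense of scale, here is the complaint's arithmetic with a simplified tiered price table. The rates and tier boundaries are illustrative assumptions in the ballpark of GCP's then-published internet egress pricing, not a current price sheet:

```python
# Simplified tiered egress price table (USD per GiB, cumulative caps);
# treat these numbers as illustrative assumptions.
TIERS = [(1024, 0.12), (10240, 0.11), (float("inf"), 0.08)]

def egress_cost(gib):
    """Cost of `gib` GiB of egress under the tiered table above."""
    cost, prev_cap = 0.0, 0
    for cap, rate in TIERS:
        if gib <= prev_cap:
            break
        cost += (min(gib, cap) - prev_cap) * rate
        prev_cap = cap
    return cost

print(egress_cost(1024))  # 1 TiB of egress -> ~$123
```

At rates like these, shipping the 1TB render result out costs two orders of magnitude more than the compute that produced it, which is the commenter's point.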

pbbakkum 1 day ago 4 replies      
A few notes here:

- An unmentioned alternative to this pricing is that GCP has a deal with Cloudflare that gives you a 50% discount to what is now called Premium pricing for traffic that egresses GCP through Cloudflare. This is cheaper for Google because GCP and Cloudflare have a peering arrangement. Of course, you also have to pay Cloudflare for bandwidth.

- This announcement is actually a small price cut compared to existing network egress prices for the 1-10 TiB/month and 150+ TiB/month buckets.

- The biggest advantage of using private networks is often client latency, since packets avoid points of congestion on the open internet. They don't really highlight this, instead showing a chart of throughput to a single client, which only matters for a subset of GCP customers. The throughput chart is also a little bit deceptive because of the y-axis they've chosen.

- Other important things to consider if you're optimizing a website for latency are CDN and where SSL negotiation takes place. For a single small HTTPS request doing SSL negotiation on the network edge can make a pretty big latency difference.

- Interesting number: Google capex (excluding other Alphabet capex) in both 2015 and 2016 was around $10B, at least part of that going to the networking tech discussed in the post. I expect they're continuing to invest in this space.

- A common trend with GCP products is moving away from flat-rate pricing models to models which incentivize users in ways that reflect underlying costs. For example, BigQuery users are priced per-query, which is uncommon for analytical databases. It's possible that network pricing could reflect that in the future. For example, there is probably more slack network capacity at 3am than 8am.

brunoTbear 1 day ago 3 replies      
I quite like the way Google has drawn the map here: since no cables reach from India to Europe, they've split the map there, making the paths easier to trace between Asia and NA. https://2.bp.blogspot.com/-QvF57n-55Cs/WZypui8H8zI/AAAAAAAAE...

Compare with the difficulties of https://cloud.google.com/images/locations/edgepoint.png

Elegant and subtle work. Just like the networking.

jstapels 1 day ago 3 replies      
Egress pricing for Google and AWS (sans Lightsail) continues to be one of the biggest price differences between them and smaller hosts such as Linode and DigitalOcean.

I think Google missed an opportunity here. They should have cut the prices more significantly for standard tier (sacrificing performance) to make this more competitive.

Right now Linode's and DO's smallest $5 plan offers 1TB of transfer, which would cost $85.00 on Google's new standard plan.

always_good 1 day ago 3 replies      
This precedent (including CloudFlare's new private routing) doesn't bode well for the public internet.

Imagine the day when everyone has to use private routing and the public internet barely even gets maintained anymore.

Of course, public internet also suffers tragedy of the commons and not much is happening on that front. Like how most people are still behind ISPs that allow their customers to spoof IP addresses. And nobody has reason to give a shit. We're getting pinned between worst of both worlds. It's a shame.

idorosen 1 day ago 3 replies      
TL;DR: New Standard tier level is hot potato routing while existing (now called Premium) tier is cold potato routing.


breck 1 day ago 1 reply      
Seeing the map of Google's network makes me appreciate more the impact of undersea cables.

If you're interested in the history of earth-scale networks I recommend this free documentary on Cyrus Field and the heroic struggle to lay the first transatlantic cable: https://www.youtube.com/watch?v=cFKONUBBHQw

jerkstate 1 day ago 6 replies      
How is this different from paying more for a fast lane, which net neutrality is supposed to prevent?

Edit: there seems to be a bit of confusion what I'm referring to. I'm referring to the Open Internet Order of 2015 [1] which states:

18. No Paid Prioritization. Paid prioritization occurs when a broadband provider accepts payment (monetary or otherwise) to manage its network in a way that benefits particular content, applications, services, or devices. To protect against fast lanes, this Order adopts a rule that establishes that: A person engaged in the provision of broadband Internet access service, insofar as such person is so engaged, shall not engage in paid prioritization.

[1] https://transition.fcc.gov/Daily_Releases/Daily_Business/201...

sitepodmatt 1 day ago 1 reply      
I suppose this was inevitable; the costs of cold potato routing must be prohibitive, especially if you consider more exotic places. For example: riding the GCP network from just a few milliseconds away in Bangkok all the way to a tiny GCP compute instance in London, almost entirely on GCP's network (excluding the first three hops locally). The GCP network is awesome, so I am surprised we only see a small pricing reduction for the standard offering; perhaps the idea is to eventually make Premium 2-3x the price, a premium worth it IMO if you consider that one would push most bandwidth-heavy assets onto edge CDNs anyway.
jloveless 1 day ago 0 replies      
Google's network (especially w/ BBR[1]) is amazing and this makes the price point more approachable for other use cases (like running your own CDN[2]).

[1] https://cloudplatform.googleblog.com/2017/07/TCP-BBR-congest...
[2] https://blog.edgemesh.com/deploy-a-global-private-cdn-on-you...

xg15 1 day ago 3 replies      
Not that I'm really surprised, but does that map imply that Google has its own trans-atlantic undersea cables? Is there some more info about that?
kyledrake 1 day ago 0 replies      
Great idea, but it's still way too expensive. I pay between $0.01/GB (the fancy CDN stuff) and $0.0024/GB (from an IP transit provider) for Neocities. That's market rate. I would have given them a pass at $0.02-$0.03, but not for $0.085.

If you pay this "public internet" rate, you're paying essentially 2007 transit prices. I hope you don't need to ship a lot of traffic. I hope you don't need to compete with someone that's paying market rate.

I would love to use GCS for our infrastructure, but with rates like this, it's hard to imagine us ever switching.

heroic 1 day ago 3 replies      
> There are at least three independent paths (N+2 redundancy) between any two locations on the Google network, helping ensure that traffic continues to flow between these two locations even in the event of a disruption. As a result, with Premium Tier, your traffic is unaffected by a single fiber cut. In many situations, traffic can flow to and from your application without interruption even with two simultaneous fiber cuts.

What does this mean? N+2 redundancy should mean that even if two paths go down, service will not be affected at all, no?

jedberg 1 day ago 2 replies      
The most interesting thing to me here is that they can actually deliver a cheaper service by going over the public internet. I would think their private net would be cheaper because they don't have to pay for transit.

I guess transit is still cheaper than maintaining one's own lines...

cwt137 1 day ago 3 replies      
I thought I read an article about an online game company who was doing something similar with their users; trying to get their users on their private network as soon as possible. Does anyone else remember that article on HN?
ssijak 1 day ago 0 replies      
Reading this, I am just stunned by how many stacks, layers, pieces of hardware, and technologies, and how much knowledge, those bytes needed to travel through so I could read them on a laptop across the globe.
0x27081990 1 day ago 2 replies      
I thought they were firm supporters of Net Neutrality. Or is this somehow different case?
ksec 1 day ago 0 replies      
A naive question: when we say private networking, in Google or Amazon terms, does it actually mean Google buying / laying down fibre from DC to DC, much like OVH does? Or are they renting / buying dedicated links in multiple exchanges?
gigatexal 1 day ago 0 replies      
I just think the funny thing is that instead of taking the hit on price for the current level of service, their shiny new feature is standard networking! Woot!!
Animats 1 day ago 2 replies      
Don't sign up for a Google service unless you get contract terms which say they can't terminate you at will and have penalties if they do.
grandalf 1 day ago 1 reply      
Ironically, this offering is precisely the argument against network neutrality -- different customers need different QoS guarantees.
0xbear 1 day ago 1 reply      
So now we know why Google's egress was so expensive before. It was the premium offering, and standard wasn't quite ready yet.
josephv 1 day ago 0 replies      
Cloud neutrality now
benbro 1 day ago 1 reply      
Can I use the new standard tier with all services like cloud storage or only with compute instances?
hartator 1 day ago 0 replies      
It's kind of interesting that after being so against preferred networking via their net neutrality stance, they basically implemented it.
unethical_ban 1 day ago 0 replies      
The title should not have been changed - old version noted the product referenced, Google Cloud Compute.
CodeWriter23 1 day ago 2 replies      
It would be nice if they would say if their pricing is per GB or per TB.


christa30 1 day ago 0 replies      
Via the wonderful people at Google... Introducing Network Service Tiers: Your cloud network, your way
lowbloodsugar 1 day ago 2 replies      
So is this like the app engine price hike debacle a few years ago but with "better" messaging? So "Try Network Service Tiers Today" means "Migrate to Standard Tier today to avoid the massive price increases coming soon"?

But fundamentally they just massively underestimated costs and need to find a way to adjust pricing. With app engine it was very conveniently beta, so they used the end of beta for the price hike. For this, they're having to invent a "Premium" and a "Standard" Tier, and hey guess what, everyone has been using "Premium".

My experience so far with Google has been "Use this now, and we'll have a massive price hike later, if we keep it around at all."

Tepix 1 day ago 0 replies      
> "Over the last 18 years, we built the world's largest network, which by some accounts delivers 25-30% of all internet traffic"

I think that's way more than enough already, thank you.

arekkas 1 day ago 1 reply      
Ok so lobby against net neutrality but don't give a * in your own network. "Don't be evil", right?
Wall Street Banks Warn Downturn Is Coming bloomberg.com
320 points by champagnepapi  2 days ago   382 comments top 3
chatmasta 2 days ago 29 replies      
The pattern of boom/bust cycles over the past century is alarmingly consistent, especially for a field like economics that is famously unpredictable. Just look at the graph in Exhibit 7 of this article. It's almost perfectly periodic. According to investopedia [0], "there have been 11 business cycles from 1945 to 2009, with the average length of a cycle lasting about 69 months, or a little less than six years." By this logic, we're definitely "due" for a downturn very soon.

Does anyone who understands finance have any insight on why this pattern seems so predictable? Is it due to fundamental economic drivers, or is it merely correlated with major historical events (internet 2000s, globalization 1990s, deregulation 1980s, post-WW2 society 1950s, etc)?

If technological society does not continue to innovate at the pace of the last few decades, will boom/busts smooth out at a point of slower growth?

[0] http://www.investopedia.com/terms/b/businesscycle.asp

jseliger 2 days ago 2 replies      
The nice thing is that if you predict enough downturns you'll eventually be right.

The cliché goes, "Economists have predicted nine of the last seven recessions," but I think the numerator is actually higher.

This article: https://www.theatlantic.com/magazine/archive/2008/12/why-wal... was published in 2008 but is still underrated.

chollida1 2 days ago 7 replies      
What I'm most excited to see in the event of a market downturn is how well Betterment and Wealthfront hang onto their clients.

I'm guessing that the average client of those firms hasn't really lived with a significant stock market investment during a bear market. Will these clients keep their money invested in a larger percentage than the typical ETF investor?

If so then I think that's a huge bullish signal for these new types of wealth management firms.

If not, then those companies are going to have to go out and raise money in a downturn.

If you can hold people's money during a downturn, then I view that as a very positive investment signal; there has to be something more than the dollar, bitcoin, hedge funds, and gold that people can turn to in a downturn.

How ACH works: A developer perspective (2014) gusto.com
316 points by alpb  13 hours ago   195 comments top 31
mdip 12 hours ago 8 replies      
I noticed a few comments specifically referencing FTP (and who can blame them since the HN title as of this moment specifically references it). In the first post of the series, the author refers to the server as a "Secure FTP" server, which can be confusing to read[0]. In later parts (and a little googling of my own), it's clear that the server is actually an SFTP server, not a plain-old FTP server.

It's still plenty archaic, but takes the headline's shock value down a small peg[1].

[0] It adds a mental pause -- a Secure ... FTP server. It hints that, possibly, it's a reference to a different aspect of the server's security (a non-technical person might refer to a server as being a "secure" server simply because it's protected by an ID and password, for instance).

[1] Based on my personal interaction with banks and software, as well as several friends who had previously been members of a few banks' IT departments, my first -- very sarcastic thought -- was "of course it works that way!"

joshribakoff 13 hours ago 8 replies      
I had an integrator request this, so I stood up a NodeJS server that only implements upload, not download. That way, if they leaked their own password, a malicious actor would be limited to forging data, and no real data could be leaked. Because it didn't work in FileZilla, they didn't want to use it.

I worked at another company that shuffled data between big-name gyms & health insurance companies; it also used CSV files sent over FTP in all directions, to my dismay. CSV isn't even a well-defined format, and you get all kinds of impedance mismatches with different delimiter & escaping mechanisms, character encodings, BOM, etc. Other companies will just give you a SQL user & let you go to town mining their database directly.

I don't understand what's so difficult about making an API, but sometimes it seems like no one wants to do it. You can't push back too much or they will just see you as a problem & decide not to integrate with you.
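The CSV pain described above can be blunted somewhat with defensive parsing. Here is a minimal Python sketch (the feed bytes and field names are invented for illustration) that strips a UTF-8 BOM if present and lets `csv.Sniffer` guess the delimiter:

```python
# Hedged sketch: defensively reading a partner's CSV feed whose dialect,
# encoding, and BOM handling vary by sender.
import csv
import io

def read_flexible_csv(raw: bytes):
    # "utf-8-sig" transparently strips a UTF-8 BOM if one is present;
    # fall back to latin-1 (which never fails) for non-UTF-8 bytes.
    try:
        text = raw.decode("utf-8-sig")
    except UnicodeDecodeError:
        text = raw.decode("latin-1")
    # Let csv.Sniffer guess the delimiter among the usual suspects.
    sample = text[:4096]
    try:
        dialect = csv.Sniffer().sniff(sample, delimiters=",;|\t")
    except csv.Error:
        dialect = csv.excel  # default to comma-separated
    return list(csv.reader(io.StringIO(text), dialect))

rows = read_flexible_csv(b"\xef\xbb\xbfname;amount\r\nalice;10\r\n")
print(rows)  # [['name', 'amount'], ['alice', '10']]
```

This obviously can't fix a feed that is ambiguous or outright malformed, but it absorbs the most common variations (BOMs, semicolons vs. commas, CRLF line endings) without per-partner code.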
peterjlee 12 hours ago 0 replies      
Here's a good NPR Planet Money episode on how transferring money works.


Apparently, back in the days before ACH, banks met up at a parking lot in NYC every night and literally exchanged bags of paper checks.

There was a proposal to build something better than ACH, but it was denied because upgrading the infrastructure would cost too much for small banks and credit unions.

nickbauman 12 hours ago 2 replies      
I worked on the ACH system at the Federal Reserve Bank. When you're getting multi-gigabyte files from the Social Security Service daily that have many millions of transactions in them, you appreciate the NACHA format's compactness (~100 bytes each tx). We never transmitted files on insecure protocols like FTP, though.
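For the curious: NACHA files are fixed-width, 94 characters per record, which is where that compactness comes from. Below is a rough Python sketch of slicing an "Entry Detail" (type 6) record; the field positions follow the published NACHA layout, but the sample record values are invented for illustration:

```python
# Minimal sketch of parsing a NACHA "Entry Detail" record (record type 6).
# NACHA records are fixed-width, 94 characters each; the slice offsets
# below follow the standard Entry Detail layout. Sample values are made up.

def parse_entry_detail(record: str) -> dict:
    if len(record) != 94 or record[0] != "6":
        raise ValueError("not a 94-char NACHA entry detail record")
    return {
        "transaction_code": record[1:3],       # e.g. 22 = checking credit
        "receiving_dfi": record[3:11],         # receiving bank routing (8 digits)
        "check_digit": record[11],
        "account_number": record[12:29].strip(),
        "amount_cents": int(record[29:39]),    # zero-padded, in cents
        "individual_id": record[39:54].strip(),
        "individual_name": record[54:76].strip(),
        "discretionary": record[76:78],
        "addenda_indicator": record[78],
        "trace_number": record[79:94],
    }

# Build an illustrative record (all values are invented):
record = (
    "6" + "22" + "07640125" + "1"
    + "1234567890".ljust(17)
    + "0000012345"               # $123.45
    + "EMP001".ljust(15)
    + "JANE DOE".ljust(22)
    + "  " + "0" + "076401250000001"
)
entry = parse_entry_detail(record)
print(entry["individual_name"], entry["amount_cents"])  # JANE DOE 12345
```

Every transaction fitting in one 94-byte line is exactly why multi-million-transaction daily files stay manageable.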
derefr 13 hours ago 3 replies      
A good story on ACH: http://www.npr.org/templates/transcript/transcript.php?story...

They mention that other regions' inter-bank money-transfer systems (e.g. the EU's) have been sped up to be same-day, or in some cases nearly instantaneous. The US ACH system lags behind, due to the sheer number of institutions that would be involved in a modernization effort. (There are a lot more US banks than there are UK/French/Canadian/Australian/etc. banks; I think in part because a bank that operates in 50 states is technically, and legally, 50 banks, and each one maintains its own ACH infrastructure?)

TACIXAT 32 minutes ago 0 replies      
>At Gusto, we rely heavily on the ACH network. For example, when a company runs payroll, we'll use the ACH network to debit the company's account to fund their employees' pay. Once we've received these funds from the company, we'll again use the ACH network to initiate credits into each of the employees' accounts to pay them for their hard work.

Can you use ACH to initiate a transfer between two (third) parties (i.e. you not being one of them)? If not, what are the requirements to be a broker / escrow in between them?

manigandham 12 hours ago 0 replies      
As of September 15, same-day transfer will be possible as part of phase 2 of the modernization plan:


ryanackley 1 hour ago 1 reply      
There are several companies that provide an API on top of ACH. I work for one[1]. For high volume ACH (like a payroll company) it's usually cheaper to go through an API provider than it is to go directly through the bank. I'm not exactly sure why. Maybe because we handle technical support? We also have better reporting.

One of the challenges for banks is that there is an oligopoly on the software that runs the bank. There are 4 companies that provide the "core banking" software to most of the banks in the USA. The banks get stuck providing you with whatever services one of these four pieces of software is capable of.

[1] http://acheck21.com/api/

rollulus 6 hours ago 1 reply      
Having spent a few years at a large energy company, I got quite used to the use of FTP servers to exchange--what else--CSV files with data. And is uploading/downloading a file to/from some FTP server really that different from POSTing/GETting an object to/from some REST service?

Some major news/market information provider made their data available to us solely through FTP, and used Amazon SNS to push a notification that something new was available on that FTP server.

rayiner 11 hours ago 1 reply      
In 20 years FTP will still be a thing and whatever JS/Ajax/RPC/WSDL/JSON thing kids are using these days will be as dead as CORBA.
zie 11 hours ago 0 replies      
For us and our bank it's gpg encrypted and then transferred over SFTP.

Sure, the ACH file format is sort of sucky, but it's not like it's difficult. The lack of an ACK is super awful tho. Payroll has to call in 20m after sending to verify.

jackgavigan 12 hours ago 3 replies      
The UK's Faster Payments system does same-day (often less than an hour) inter-bank transfers for up to £250,000.


tannhaeuser 9 hours ago 1 reply      
FWIW, Swift (the company behind the interbank payment system) has been developing and pushing ISO 20022 as an XML-based long-term replacement for their Swift message format, though it's not designed as a replacement for ACH.

For that, there was HBCI years ago (also XML); don't know if it's used much still.

iokevins 13 hours ago 2 replies      
At least it's secure and usually FTPS... hopefully using the TLS 1.2+ cryptographic protocol.

Previous discussion:


SeoxyS 12 hours ago 0 replies      
I've implemented the ACH file format and, worse, FedWire BAI2 file parsing; it's absolutely archaic. The worst part is that various partner banks will have differently erroneous variations of their implementation of the BAI2 spec, so we had to intentionally code a buggy version to match the bugs they had on the other side. Ridiculous.
cperciva 11 hours ago 0 replies      
I'm disappointed that they use SFTP rather than UUCP. But I guess ACH isn't quite that old...
jlgaddis 11 hours ago 0 replies      
While "part 1" of this series says "FTP" (implying plain-text/unencrypted data), "part 2" [0] and "part 3" [1] both say "SFTP". This is "more correct", in my experience, as encryption is pretty much always used nowadays.

[0]: http://engineering.gusto.com/how-ach-works-a-developer-persp...

[1]: http://engineering.gusto.com/how-ach-works-a-developer-persp...

kar1181 4 hours ago 1 reply      
Going from the UK to the US was like stepping back in time when it came to banking. I remember complaining about some aspects of UK banking - it's going to take a day for my transfer to complete!?! Now we have faster payments in the UK which complete in hours at most.

Meanwhile in the US I still had to pay my rent with a physical check because that was easier than figuring out the weird 'pay anyone' implementation my bank had.

ufmace 10 hours ago 0 replies      
Yup, most of the modern business world works by FTPing CSV files all over the place. Usually there is security in there somewhere. Sometimes it's XML. JSON? Maybe in 2030 or so.
tomschlick 12 hours ago 4 replies      
The exciting thing about all these crypto currencies to me, is that banks could roll their own private "exchange" currency to do near real time transfers to any other bank / account.
meta_AU 9 hours ago 2 replies      
The entire Australian energy market communicates with XML over FTP. Electricity meter data is CSV in XML over FTP.

Sometimes simple just works.

animex 12 hours ago 0 replies      
XRP (Ripple) to the rescue?!
rodgerd 12 hours ago 0 replies      
I work in banking and it's always interesting to:

1/ See people encounter how these things work, because there's usually a sense of lost innocence about it. (If they stick around long enough they come to understand that dealing with hundreds of years of history is why glib "re-imagine everything" solutions tend to come a cropper).

2/ Continually discover that by the standards of the rest of the world, US banking is even more like banging rocks together.

designium 12 hours ago 1 reply      
It is good to mention that there is also a very similar system in Canada for EFT. I did an implementation of that.
chomok 10 hours ago 0 replies      
This is why we need better regulations for adopting blockchain technologies in various industry sectors, including banking. It's also another way of sharing and saving infrastructure costs for banks while providing better security than traditional banking systems like ACH. Perhaps ACH could be rewritten as a smart contract in a safer way.
dibujante 10 hours ago 0 replies      
Oh god this is actually my job. Well, part of it.
coding123 9 hours ago 0 replies      
I just moved 25% of my savings account into Ethereum yesterday... over ACH (DOH!)
horsecaptin 13 hours ago 2 replies      
Today my bank sent me a note saying that ACH payments for both credit and debit cards will only take 1 day to process.
k26dr 12 hours ago 1 reply      
Makes you pine for Bitcoin
DINKDINK 13 hours ago 1 reply      
Totally more secure, less error prone, and faster than a blockchain. /s
Apple Scales Back Its Ambitions for a Self-Driving Car nytimes.com
269 points by fmihaila  2 days ago   417 comments top 5
tambourine_man 2 days ago 13 replies      
From the beginning, the employees dedicated to Project Titan looked at a wide range of details. That included motorized doors that opened and closed silently. They also studied ways to redesign a car interior without a steering wheel or gas pedals, and they worked on adding virtual or augmented reality into interior displays.

The team also worked on a new light detection and ranging sensor, also known as lidar. Lidar sensors normally protrude from the top of a car like a spinning cone and are essential in driverless cars. Apple, as always focused on clean designs, wanted to do away with the awkward cone.

Apple even looked into reinventing the wheel. A team within Titan investigated the possibility of using spherical wheels, round like a globe, instead of the traditional round ones, because spherical wheels could allow the car better lateral movement.

Very interesting, and one heck of a leak if true.

iiiggglll 2 days ago 5 replies      
> Even though Apple had not ironed out many of the basics, like how the autonomous systems would work, a team had already started working on an operating system software called CarOS. There was fierce debate about whether it should be programmed using Swift, Apple's own programming language, or the industry standard, C++.

Wow. Few things guarantee success like starting off a project with a good old-fashioned language flamewar!

IBM 2 days ago 1 reply      
This is a weirdly titled report which implies it just happened. The "Apple scales back" part was already reported first by Bloomberg last year (which seems to be behind a paywall now) [1]. Bob Mansfield was brought on to refocus Project Titan on the fundamentals (being self-driving) rather than producing a car [2]. But both of these reports have the exact same hedging:

>Apple Inc. has drastically scaled back its automotive ambitions, leading to hundreds of job cuts and a new direction that, for now, no longer includes building its own car, according to people familiar with the project.

>Five people familiar with Apple's car project, code-named Titan, discussed with The New York Times the missteps that led the tech giant to move, at least for now, from creating a self-driving Apple car to creating technology for a car that someone else builds.

And that's because the idea that Apple is going to be an auto parts supplier like Delphi that sells middleware to car companies is completely laughable.

There isn't actually much news in this report. The tidbits that the reporter got clearly motivated writing this article but it doesn't actually live up to its premise. In fact, PAIL seems like an expansion of Apple's efforts from what was previously reported.

[1] https://www.macrumors.com/2016/07/28/apple-car-autonomous-dr...

[2] https://www.bloomberg.com/news/articles/2016-10-17/how-apple...

mypalmike 2 days ago 7 replies      
If Apple were truly serious about building self-driving cars, they would buy one of the big 3 US auto manufacturers. It could buy all 3 with cash and still have one of the largest hoards of cash ever accumulated.
1_2__4 2 days ago 3 replies      
Can the "mass production self driving cars are just a couple of years away" meme finally die yet? Are we ready to admit that maybe this is a harder thing to invent than we've been trying to make ourselves believe?
Epistle 3 marclaidlaw.com
358 points by verroq  6 hours ago   141 comments top 19
x775 2 hours ago 2 replies      
"Old friends have been silenced, or fallen by the wayside. I no longer know or recognize most members of the research team, though I believe the spirit of rebellion still persists. I expect you know better than I the appropriate course of action, and I leave you to it. Expect no further correspondence from me regarding these matters; this is my final epistle."

This feels a lot like Marc talking about Valve, no?

hacker_9 4 hours ago 7 replies      
The story actually sounds really fun to play, and fits the Half-Life universe perfectly. Funny that this script has reached the top of Reddit, HN and Twitter within hours of posting, and even crashed the authors site. Even to a blind man it's obvious the demand for the game is there, so it's amazing that Valve continues to ignore it and the fans, but I guess without a cash flow problem they really don't see the point in spending time developing it. A shame.

Edit, GitHub Mirror: https://github.com/Jackathan/MarcLaidlaw-Epistle3/blob/maste...

wbillingsley 2 hours ago 1 reply      
Trying to imagine playing this, it sounds like they were struggling to get it "right", and it may have kept feeling like a poor cousin of other games. In comparison to HL1 & 2, the plot seems a bit slow-starting, ends on an actual anti-climax instead of a cruelly-interrupted climax, and the game mechanics (snow/stealth, map phasing, time bubbles) seem to suffer from other recent games having done variations on these very well.

Add the inherent disappointment of not having the portal gun that everyone's expecting to be in there somewhere, and it could feel like it was bound to disappoint.

HL1 & HL2 did a very good job of switching genres and game mechanics from level to level, while still keeping everything clear and centred on a simple, familiar mystery plot. The levels were able to establish their genres very fast -- usually from the first scene you saw as the doors opened or you rounded a corner. Everything was clear, and in both cases the motivating story was very simple, and the "plot" was setting. You've got to get help; you've got to get to Lambda Complex; we've got to get you through the portal to shoot what's on the other side...

This HL3 plot seems to have got a bit "Lost" (sorry, tv series reference) as people's motivations are uncertain and there's exposition, and an attempt to partially unfold the mystery while always adding new ones... and still trying to make those bug-pod things work as a villain that didn't work in Ep1 or Ep2.

Still, the bones of a good game are there. From my amateur eyes, it just looks like it needed to stop trying to resist/subvert the viewer's expectations, and just hit a few of the notes the player's been waiting for so they can have a note of satisfaction on the way to the new mystery.

tomlong 5 hours ago 2 replies      
Other places[1] are referring to this as 'Half-Life 2: Episode 3 Plot', not HL3

[1] http://www.shacknews.com/article/101110/half-life-episode-3-...

trampi 2 hours ago 0 replies      
"This was the case until eighteen months ago, when I experienced a critical change in my circumstances, and was redeposited on these shores"

Marc Laidlaw left Valve in January 2016. The end of the post is also probably about Valve, as others have already figured out. I wonder what else there might be to discover.

olivierva 3 hours ago 1 reply      
I think a new Half Life game would be the perfect opportunity for Valve to showcase their virtual reality kit. So far there are no blockbuster VR games and the Half Life franchise (Portal included) has a history of being very innovative (e.g. HL2 using a physics engine for the narrative, HL1&2: story told through level design). There is a lot of potential to use immersive virtual reality to enhance the story telling.
klondike_ 4 hours ago 0 replies      
hasenj 4 hours ago 3 replies      
I'm not sure if I'm an idiot or if it's because English is not my native language or what, but I find it really difficult to follow this narration style.

Can someone give a short summary of what happened?

TD;CU (too dumb, can't understand).

97803459807 4 hours ago 0 replies      
My website's down for now. I guess fanfic is popular, even a genderswapped snapshot of a dream I had many years ago.


Pica_soO 3 hours ago 1 reply      
I'm waiting for the fans to pick up the pieces and make it real.
klondike_ 2 hours ago 0 replies      
It's interesting how much this differs from the storyline of the infamous leaked HL2 Beta [1], which was based on a rough version of what became HL2, Episode 1, and Episode 2.

[1] http://combineoverwiki.net/wiki/Half-Life_2_original_storyli...
JSONwebtoken 5 hours ago 4 replies      
Yeah, it never would have lived up to the hype. Explained nothing.
madspindel 5 hours ago 1 reply      
Half-Life 3 confirmed dead? :(
thearn4 3 hours ago 0 replies      
It's a good and short read, and I accept it as closure for what I always thought was a very well written series.
stupidcar 4 hours ago 2 replies      
Not much of a story, really; it seems like they had no intention of actually answering any of the mysteries around the G-Man and Alyx, but it sounds like it would have been a fun game.
throw2016 2 hours ago 0 replies      
This would have been a treat for those who had finished episode 2 and were waiting anxiously for the next step in the saga.

Sometimes it's simply not possible to do things, and fans understand, but Valve just shuttered the series and turned their back on fans. It's like Game of Thrones suddenly deciding to close down for no obvious reason and with no explanation to fans.

This reeks more than a little of the arrogance of success, and it's in some ways a betrayal of all the gamers who appreciated Half-Life for what it was and propelled Valve to its initial success.

grwthckrmstr 3 hours ago 0 replies      
Half Life 3 confirmed!
vectorEQ 2 hours ago 0 replies      
a lot of complaints about valve. u know what people also complain about a lot... companies milking their intellectual property... i think half-life so far has left a great legacy. if they ever decide to continue it, it would be sweet. i'd hope it would be in the same fashion, a shooter for pc, not VR bullshit. but hey... still enjoying half life 1 and 2+, so fuck all the whiners. be thankful for what you have got, not a needy little baby crying for more! maybe if u guys behaved thankful, people like gabe/marc and others involved with what we love would listen.
The Web in 2050 jacquesmattheij.com
316 points by darwhy  1 day ago   141 comments top 22
talyian 1 day ago 9 replies      
The year is 2050. You are reading this comment from a compatibility layer in your open-source browser that translates HTML from the 2010s into Thought-Interface Language 3.2, which was an open standard ratified in 2045 by a global consortium of content and browser developers.

Back in the 2010s, web access was peculiarly gated in a dendritic configuration as ISPs provided all the single-points-of-failure interconnections between end users (including both content providers as well as consumers) and the true "internet", a multiway resiliently-routed interconnect of servers. As we know now, extending the peer-to-peer core of the internet down to the consumer has had lasting impact, including breaking up the routing monopolies of the ISPs as well as making it possible for anyone willing to spend a few grand a year on server capacity to host a new peer-to-peer router for nearby Internet users.

Many of you may not remember the origins of Google as a "search engine", a monolithic index of "every reachable page on the internet." Such a quaint idea has long since joined even further historic concepts such as Yahoo's "human-curated list of pages on the Internet". Ever since the Searchtorrent protocol was introduced and consumer searches were conducted on one of several competing distributed hash tables across the internet, no one entity has had to shoulder the responsibility of storing all the web content on the internet. This author gladly pays a small monthly fee to a local search cache provider for reliably fast localized caching of search results.

The web is here to stay. Remember your history next time you visit the local Homo Sapiens preserve and give thanks to the carbon-based beings that invented the Internet.

triangleman 23 hours ago 4 replies      
>If youre over 50 you might just remember the birth of Google, with their famous motto Do No Evil.

I love how people misremember this motto. The original slogan was "Don't be evil" which is quite different and far more subjective to start with. Now they have updated it to "Do the right thing" and you can imagine how easy it is to dance around that.

But people seem to think Larry and Sergey were actually trying to be ethically meticulous. Nonsense--the slogan always had the subtle meaning of "Don't be Microsoft-level evil" and it turns out that was not an easy hurdle to clear.

csomar 1 day ago 4 replies      
I disagree. Cryptocurrencies have shown that the new generation (as well as the old one) can embrace new and decentralized technologies.

The decentralized web is already a "successful" idea. The correct implementation for its wide use is not there yet. But it will be there.

It is just a matter of time before we have a bigger "dark web", a decentralized web, decentralized payment networks, and still have Google, Facebook, and the likes.

As the internet population grows, and as people move to more digital lifestyles; the people won't be limited (or gravitate) to a single portal. Instead, they'll spread over different networks/infrastructures for their different needs. Facebook can still be successful and grow while the decentralized internet happen.

The Internet is growing both in number (population) and in use. People today use the Internet to surf, chat, read the news, buy stuff online, book flights and hotels, pay taxes, work, study, find partners, buy drugs, etc...

altotrees 1 day ago 2 replies      
I like the idea of "rebooting the web". If things continue in the direction they are going now, I could see many forms of the internet existing. Just as the Darkweb exists, I could see other splinter networks and technologies taking shape as the internet we know now becomes more homogenized, whether it is because of giants like Google and Facebook or government control (oh god pls no) or any other factor.

I still fondly remember looking at Nike's newest shoe offerings in 1997, waiting for the photos to download and listening to my dad complain about the phone line being tied up. I looked at my girlfriend the other day, in fact, and just went "god, think of how different the internet is now compared to when we were younger. What the hell will it look like in twenty years?" She called me a nerd, but still considered the question. Exciting and slightly terrifying thought to ponder, really.

AaronFriel 1 day ago 4 replies      
If I know anything about the future, it doesn't look like the present.

The web won't look like it does now in 2050, and neither will the internet.

But it might very well be built on webassembly on browsing engines cum operating systems on top of hypervisors on top of verified microkernels, and the web will probably be delivered on top of HTTP/2 on top of TCP/UDP and so on. The layers probably won't change that much.

hawkice 1 day ago 0 replies      
If you reduce the details of the story into the statement "the future of the web will be driven by anti-trust", I'd probably agree. The _present_ of the web is driven by anti-trust, and there's always more consolidation.

Where machine learning, social networks, and advertising have economies of scale, a tolerable future for the web would necessarily involve diseconomies of scale. Personal connection, concierge service, local long-term engagement with communities.

gaius 1 day ago 1 reply      
I remember in the late-90s portals were all the rage, everyone wanted to be the one-stop shop for all their users browsing needs.
freech 22 hours ago 1 reply      
Servers are only going to get cheaper. Programming is only going to get easier. If anything, things like search engines and social networks are going to become more competitive.

If someone has a genius idea for making a better engine, he won't work for google, he'll create his own.

Implicit in this fear of centralization is a kaczynskiist belief that "everything that can be invented has been invented".

People predicted some company taking over everything forever, and in fact even before the web existed sci-fi authors imagined a centralized network, where from the servers to the software everything is provided by the government. It's never going to happen.

obiefernandez 22 hours ago 2 replies      
"Zuckerberg running for president as a Republican candidate in the United States"

LOL... he would run as a Democrat, wouldn't he?

shubhamjain 1 day ago 2 replies      
The rebooted decentralised web sounds exciting, but it's hard to deny that there are a large number of projects that only Google can carry out. At what point does the dominance become irresponsibly large and require intervention?
TekMol 22 hours ago 2 replies      
If history repeats itself, then some new technology will take Google and Facebook by surprise. And let a new player rise to the top.

AI is the obvious elephant in the room here.

If in 10 years Apple, Amazon, Tesla or some new startup has the better AI, then this AI will search and present content better. And market it better. And monetize it better. It might also produce its own content. Perfectly customized interactive 3D surround sound content.

Maybe it will be some decentralized autonomous organization that lives on a blockchain. Driven by AI, doing its thing. Outside of what a human mind can understand.

rst 21 hours ago 0 replies      
Relics like this exist today -- there are still Gopher servers out there. Current browsers no longer support the protocol, but you can tour the relics through a proxy -- info here:


The links to gopherspace itself are on the upper right ("standard version"/no-javascript); I'm honoring their request not to link to the proxy itself directly.

kolbe 21 hours ago 0 replies      
The symbolic nature of WWW leading to WW3 would be a little to much for me to handle.
romaniv 23 hours ago 2 replies      
If I were to bet, I would bet that in 2050 the web will be mostly replaced by some kind of VR network with a lot of sound, 3D videos and interactive objects. The web as it is already decays due to tons of legacy cruft, insane complexity of doing trivial things, oceans of bad content and hyper-centralization. And all of these things are getting worse every year. VR is our best bet for a clean start.
zanybear 19 hours ago 1 reply      
So... everybody starts from the premise that we will still be here in 2050....
swiftting 1 day ago 2 replies      
"at first joining the AMP bandwagon not realising this was the trojan horse that led to their eventual demise"

Hopefully this will not be the result of AMP but interesting take nonetheless.

mdekkers 23 hours ago 1 reply      
An aging Richard Stallman throwing his towel into the ring

That's when I realised this article was Fake News!

aaron-lebo 1 day ago 1 reply      
If things really are so dire in...33 years, then it won't be Facebook or Google's fault, it'll be the fault of hundreds of thousands of hackers who had the technology available and did nothing because everyone knows those two are unbeatable, despite the fact that the tech gets cheaper and more accessible every single day.

We've got a long way to go. They're not unbeatable. They're massive goliaths, yes, but they're also bloated and slow to adapt, can't focus on any one thing, and don't have consumer loyalty. They can be beaten. Not saying they will be, but they can.

Side note, Halt and Catch Fire, which has always tried to be technically accurate starts focusing a lot on the early web in season 3 and 4. CERN, NeXTcubes, and related all make an appearance. It's a fun watch if you are interested in that stuff. The pilot starts with them reverse engineering an IBM PC.

cerealbad 20 hours ago 0 replies      
the internet will go the way of the telephone network.

what will replace it? a copy of all your favourite people stored on the implant in your brain. the interface will be a waking dream.

niftich 21 hours ago 1 reply      
This scenario contains a lot to unpack. Let's try to extract some of the claims:

1. Most websites will get little to no traffic.

2. Consolidation will eventually result in a mere handful of verticals remaining, in the author's opinion, solely Google and Facebook.

3. At first, content framing tactics, like FB Instant Articles and Google AMP, will result in these providers obviating the need for users to navigate outbound links; instead, the content will be surfaced from within the ecosystem.

4. Content providers (i.e. "publishers") go along with the above because in truth they are desperate for revenue. Giving away content for free in exchange for the potential of display ad revenue due to high volume is seen as their only realistic hope for survival, making this a coercive relationship.

5. Some strange political speculation, but, notably, the two giants banning people and services who have presence on the other. Also, independent newspapers get bought out and absorbed.

Out of this distillation of claims, #5 is complete baloney more egregious than an industrially-processed slice of knockoff Mortadella; beyond even a fanciful fantasy of how these companies work. Claims #1 through #4, on the other hand, are very astute predictions, or rather, observations, as they're already here.

The long tail of websites is pretty long, and most sites indeed get very little traffic even today. One need not look further than the power of communities like HN and Reddit to slashdot all sorts of sites by overwhelming them with legitimate traffic. This brittleness and inability of some sites to scale to momentary demand, along with ISPs forbidding home servers and the risk of malicious denial-of-service, means that the original way of self-hosting sites on the Web is largely dead [1], or at the very least a risky call. This unfortunate fact means you probably want to pay someone to host your site instead. Though there are thousands upon thousands of professional hosting providers, it's a dramatically smaller number than the number of websites; so we're slowly walking up the tree of vertical consolidation.

#3 is well-documented, and #4 follows naturally from the tribulations of finding business models that work on the web [2].

I stand by the view that #5 is too much of a leap; willingly excluding potential customers seems like an act of folly -- Home Depot doesn't ban anyone who shops at Lowe's, but instead would love to lure them away. Legacy airlines in the US at the time of writing might as well be regarded as an oligopoly of four: they have suspiciously similar ticket prices for most non-hub destinations, and they have semi-secret programs offering matching frequent-flier status to the topmost tier of most-profitable travellers who want to jump ship.

Nonetheless, there is in fact a real emergent phenomenon in the continued vertical consolidation of content silos. Apple -- mysteriously absent from the author's narrative -- is the exact sort of player whose excellent products, dedicated fanbase, and seeming benevolence will result in the sort of transformations that the author fears: Apple has doubled down on producing original content [3] for its captive ecosystem, following the tactics of Amazon and Netflix, but unlike them, Apple's presence does not extend horizontally to other platforms. In fact, we just had a trending article [4] which covered in-depth the different tactics companies use to achieve reach and retain customers.

It's more believable to envision a future similar to what happened to major US television networks: NBC, ABC, CBS, Fox, and Turner; lots of mergers and intrigue, phases of ownership by movie studios, phases of ownership by seemingly unrelated enterprises that pivoted to holding companies from something else, acquisitions in efforts to form new verticals; and yet despite all this, there's still several of them. They're all deeply vertical now, but their valuations and regulatory pressure keeps them existing side-by-side.

[1] https://news.ycombinator.com/item?id=14699084
[2] https://news.ycombinator.com/item?id=12299230
[3] https://techcrunch.com/2017/08/16/apple-said-to-be-spending-...
[4] https://news.ycombinator.com/item?id=15082966

hackertux 23 hours ago 0 replies      
>If you read this far you should probably follow me on twitter
nafey 1 day ago 0 replies      
So, who will you place your bets on? Personally I think Google will prevail over FB.
ThoughtWorks has been sold to private equity martinfowler.com
311 points by phosphate  1 day ago   181 comments top 34
Abishek_Muthian 1 day ago 2 replies      
ThoughtWorks is unlike any other IT services company I've seen. Their adoption of the latest programming paradigms and open-source software is not generally seen elsewhere, because most clients like to stick with "what has been working for everyone".

Also it's one of the few service oriented companies which actively focus on developer relations. They have several developer/techie oriented programs in every city they have a base.

e.g. in my small city, Coimbatore (which many Indians don't even know exists), ThoughtWorks has a small base that conducts a monthly open event called 'Geek Night' focusing on geek stuff.

Here, a developer speaks about Clojure functional programming and how they adapted it for their web application, and the dwarf who follows that talk with Android customisation is actually me :D - https://www.youtube.com/watch?v=R-VUlDgJ6aA#t=02h05m08s

obiefernandez 1 day ago 3 replies      
At least back in the early 2000s when I worked there, the expectation set by Roy was that once the legal issues with Schroder et al. [1] were settled, ThoughtWorks would someday become a public trust. It would be very surprising to learn that the company is now being sold for the profit of an individual.

[1] http://caselaw.findlaw.com/de-court-of-chancery/1138774.html

chiph 43 minutes ago 0 replies      
Sounds like everyone knew that a sale event would happen one day - why did they not focus on building a pile of cash that could be used to pay the taxes? (Martin implies a ~50% rate in his footnote.)
hoodoof 1 day ago 5 replies      
Consulting firms are a strange thing... they build up a reputation for being all about being the very best, etc., but in the end the work tends to be just ordinary development work that you get sent in to do at some big company or government department.

I guess it appeals to some but wouldn't be my choice of place to work.

Firegarden 1 day ago 3 replies      
You know, as a programmer myself for the last 15 years, I can see how the market has changed. Everything is becoming part of the global economy, which means rates are a lot lower, which means Ukraine and India are playing a bigger and bigger role. I could see private equity trying to lower costs by outsourcing.

I think there's a fundamental flaw in the idea that you can just pay for cheaper labor. As Steve Jobs said, the difference between a good programmer and an outstanding programmer can be 50 to 1 or 100 to 1; you can't capture that by trying to cut your cost from $100 an hour to $20 an hour.

Software is inherently one of the most scalable business models in the world: the cost of manufacturing is almost zero, and all of the cost is in the design. They should be able to make money. The mindset of being a consulting firm has to change.

jacknews 1 day ago 1 reply      
Well, I've heard of Martin Fowler, but not Roy Singham.

Let's face it, the company (and probably every company) has really been built by all its people, not just a single founder/owner.

So who's now getting what of the $600 mil? A cursory search on Roy turns up various phrases like "software socialist" etc., but honestly it would be hard to justify that, or claim success in the "social responsibility" part of the company's famous "3 pillars" values, if they're not even giving their own people a fair stake.

tyingq 1 day ago 3 replies      
Interesting. A company like ThoughtWorks doesn't seem to fit the typical private equity playbook. Assuming that most of the employees, for example, are assigned to consulting gigs at customer sites...you can't just start laying people off. What would drastic cost cutting look like at ThoughtWorks?

Edit: Or maybe there are a lot of people "on the bench" or middle management is overstaffed?

This discussion might be enlightening: https://news.ycombinator.com/item?id=12458683

nunez 1 day ago 3 replies      
I am surprised that they didn't sell to Accenture or IBM like I thought they were going to. This sale was pretty much a given. (I worked there for a year.)

I'm pretty sure that TW will cut TWU (onboarding for new starters in India) or scale it back massively, restart sales commissions, close a bunch of their intl offices, cut nearly anything that doesn't make money and build their India office super hard (body shop).

It's gonna be BAD.

hinkley 1 day ago 0 replies      

 ... His death would trigger a tax event that we could not pay from our own resources, forcing a fire sale. Despite several years of effort, we haven't found a way that would preserve the company in its current form. This further encouraged him to sell when there is no tax bill hanging over our head.

edpichler 1 day ago 1 reply      
My short story with TW: seven years ago, after several interviews, I was invited to move to and work at the TW office in Porto Alegre (Brazil). I asked for 2.5k USD a month, and they refused. It was a good salary at that time; I could live comfortably and travel home once a month. I was not willing to accept less.

I still like this company, but now I have my own small business, so I haven't applied again.

TheMagicHorsey 1 day ago 6 replies      
The funniest thing about this whole transaction is that Neville Roy Singham always talked a big game about socialism, Hugo Chavez, central planning, etc.

He's all about the government redistributing wealth. But when it comes to his own wealth, instead of gifting the company to the workers who built it, he has all kinds of excuses for why it needed to be sold to a private equity company.

It doesn't surprise me one bit. What individuals won't do voluntarily themselves, they want the state to force others to do.

Neville isn't offering to give a dividend distributing the sale's profit to all his workers. No no no ... he's going to use it himself for his charitable works. Much like Hugo Chavez's successor in Venezuela, Maduro, takes all the resources of the country "on behalf of the people" and distributes them himself. I'm sure the prestige and power associated with being the distributor is not a motivating factor at all.

abledon 1 day ago 1 reply      
Whatever they do, dear god I hope they keep publishing the Tech Radar. That thing is so great to read.
corpMaverick 1 day ago 0 replies      
To me, Martin Fowler has been the branding of ThoughtWorks. I am surprised he was not an owner. I wonder if he will remain.
brepl 1 day ago 3 replies      
A few questions:

* What's the likely outcome for employees?

* Who was the previous owner?

* How much did it sell for?

* Are operations in specific regions likely to be shut down or sold off?

clarkent 1 day ago 0 replies      
Presumably the PE value prop is that they charge over the odds for their consultancy while paying their employees under market rate. If they can maintain a public image of technical excellence and social responsibility for a couple more years while the equity firm milks the profits, Apax get their multiplier. At the end of that they can sell off what's left, and it's doubles all round.

Or am I getting cynical in my old age?

brepl 1 day ago 4 replies      
What did ThoughtWorks do with their profits before they were bought? If it was privately owned, does that mean the profits were paid out as dividends to the founder? Or were they all re-invested in the company to fund expansion?
faitswulff 1 day ago 1 reply      
Does anyone know what Roy Singham's activism consists of?
apapli 1 day ago 0 replies      
I can just hear the private equity firm claiming it's "business as usual".

My experience tells me when that phrase is uttered only a short time passes before lots changes!

elliotec 1 day ago 1 reply      
>>> Funds advised by Apax Partners (Apax Funds) have today announced a definitive agreement to acquire ThoughtWorks, Inc.

What does that mean? It's not literally Apax buying the company, but "Funds advised by Apax"... does that mean it's just the money that owns it, or child companies, or some sort of fancy tax/legal structure?

aryehof 1 day ago 0 replies      
I've always wondered what design methodology ThoughtWorks applies to most business systems. Is it the same use-case-driven, data-based design as used by most outsourced development companies?

Edit: grammar

Havoc 1 day ago 1 reply      
> His death would trigger a tax event that we could not pay from our own resources, forcing a fire sale.

This just screams sht tax laws. Good tax laws don't wreck their major economic contributors.

The sad thing is I knew this was an American company based on that line alone.

frenchman_in_ny 1 day ago 1 reply      
Either way a sale / transfer of ownership triggers tax consequences.

Couldn't this have been done as an ESOP?

worldwar 1 day ago 2 replies      
Really? I searched Google and Twitter, and all related posts' sources point to this page.
bill899 1 day ago 0 replies      
Bought by Apax
sidcool 1 day ago 0 replies      
What would it mean for the Devs, QAs etc of ThoughtWorks?
Analemma_ 1 day ago 5 replies      
Pretty much the only things I know about ThoughtWorks I got from Zed Shaw's Rails rant, which put them in a sharply negative light as a body shop of mostly-useless people. Was that an accurate characterization or is there more to them?
foxh0und 1 day ago 0 replies      
Recent CompSci grad here. I went down the interview path with both TW and my current employer, but ended up cutting the TW process short as I decided to accept the latter's offer. This is very interesting.
sidcool 1 day ago 1 reply      
So it's official.
desireco42 1 day ago 0 replies      
I always said that if I were younger, I'd wish a company like ThoughtWorks had taken me under its wing, sent me off to their bootcamp/university, and indoctrinated me into the right ways of doing things.

I had to learn the same things, the much harder way.

Weird Python Integers kate.io
343 points by luu  21 hours ago   116 comments top 28
adtac 19 hours ago 2 replies      

Plugging my near useless Python library that does this and a lot of other subtle, annoying things to break programs. The library is essentially a display of how much Python actually exposes to the user and how modifiable it is.

DonHopkins 3 hours ago 1 reply      
"Is," "is," "is" -- the idiocy of the word haunts me. If it were abolished, human thought might begin to make sense. I don't know what anything "is"; I only know how it seems to me at this moment.

Robert Anton Wilson, The Historical Illuminatus Chronicles, as spoken by Sigismundo Celine.


Kellogg and Bourland describe misuse of the verb to be as creating a "deity mode of speech", allowing "even the most ignorant to transform their opinions magically into god-like pronouncements on the nature of things".

Bourland and other advocates also suggest that use of E-Prime leads to a less dogmatic style of language that reduces the possibility of misunderstanding or conflict.

Alfred Korzybski justified the expression he coined "the map is not the territory" by saying that "the denial of identification (as in 'is not') has opposite neuro-linguistic effects on the brain from the assertion of identity (as in 'is')."

std_throwaway 20 hours ago 4 replies      
Summary: Integers in python are full blown objects. Small numbers are stored in a central preallocated table where each entry represents one number. Setting a variable to a small integer makes it point to an entry in that table. Multiple variables may point to the same small integer objects in that table. Fooling around with the table leads to funny results.
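The caching described in this summary can be observed directly. A minimal sketch (a CPython implementation detail; the ints are constructed from strings so the compiler can't merge equal literal constants, which would otherwise mask the effect):

```python
# CPython preallocates the integers -5..256. Any int() call that produces a
# value in that range hands back the shared cached object, while values
# outside the range get a fresh object each time.
a = int("100")
b = int("100")
print(a is b)   # True on CPython: both names point at the cached 100 object

c = int("257")
d = int("257")
print(c is d)   # False on CPython: 257 is outside the cache, two objects
```

Other Python implementations (PyPy, Jython) are free to behave differently here; only `==` is a reliable value comparison.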
woodrowbarlow 20 hours ago 1 reply      
the python documentation [1] says the following:

> The current implementation keeps an array of integer objects for all integers between -5 and 256, when you create an int in that range you actually just get back a reference to the existing object. So it should be possible to change the value of 1. I suspect the behaviour of Python in this case is undefined. :-)

does anyone have any idea how they chose that range? it's a 262-wide block starting at -5, which seems incredibly arbitrary.

[1] https://docs.python.org/2/c-api/int.html
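The range can also be checked empirically. A sketch (relies on the CPython cache; building each int from a string prevents the compiler from merging equal literal constants):

```python
def shares_cache(n):
    # Two independently constructed ints are the same object
    # only if CPython hands back a cached one.
    return int(str(n)) is int(str(n))

# Scan a window around the documented range.
cached = [n for n in range(-10, 300) if shares_cache(n)]
print(cached[0], cached[-1], len(cached))  # -5 256 262 on CPython
```

This confirms the 262-entry block from -5 through 256; the choice itself is just a heuristic baked into the CPython source, presumably tuned for the small values real programs use most.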

squeaky-clean 17 hours ago 2 replies      
I wrote a blog post about this in the past. It's really fun going through the oddities of the language like this.

It caches small integers, but also literals used in the same interpreter context (I'm probably getting that last term wrong). You'll get different results if you run these from the shell as opposed to executing a script; try it out!

Here's a fun example

  >>> x = 256; x is 256
  True
  >>> x = 257; x is 257
  True
  >>> x = 257
  >>> x is 257
  False
  >>> def wat():
  ...     x = 500
  ...     return x is 500
  >>> wat()
  True

mbell 19 hours ago 0 replies      
Ruby does something similar, but all Fixnum (native sized) values are 'fixed objects':

  a = 2**62 - 1
  b = 2**62 - 1
  a.object_id == b.object_id  # true

  a = 2**62
  b = 2**62
  a.object_id == b.object_id  # false
Ruby does automatic promotion from Fixnum (native size) to Bignum (arbitrarily large) and uses one bit of the native size as a flag to identify this which is why 2^62 - 1 is the max instead of 2^63 - 1. Though I think this is only true of MRI and other implementations handle it without the flag bit.

Perhaps one difference from Python is that in MRI Ruby Fixnum doesn't really even allocate an 'object', the object_id is the value in disguise. In fact all 'real' objects have even object_ids and all odd object_ids are integers:

  a = 123456789
  (a.object_id - 1) / 2  # 123456789

boramalper 19 hours ago 2 replies      
> We can use the Python built-in function id which returns a value you can think of as a memory address to investigate.

> [...]

> It looks like there is a table of tiny integers and each integer takes up 32 bytes.

It is the memory address but it's a "CPython implementation detail: This [return value of the id() function] is the address of the object in memory."[1]

Though you cannot use this to determine the size of an object, or rather you "shouldn't" because that assumes a very specific implementation detail, which isn't there.

If you'd like to get the size of an object, use sys.getsizeof().[2] Also keep in mind that containers in Python do not contain the objects themselves but references to them, so the returned size is the size of the object itself only, non-recursively. Read "Is Python call-by-value or call-by-reference? Neither."[3] for some more details.

[1]: https://docs.python.org/3.6/library/functions.html#id

[2]: http://docs.python.org/3.6/library/sys.html#sys.getsizeof

[3]: https://jeffknupp.com/blog/2012/11/13/is-python-callbyvalue-...
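A quick sketch of the sys.getsizeof() approach mentioned above (sizes are CPython implementation details and vary by version and platform, so only the relative comparison is meaningful):

```python
import sys

# sys.getsizeof reports the byte size of the int object itself.
# Python ints are arbitrary-precision, so the object grows with magnitude
# rather than being a fixed machine word.
small = sys.getsizeof(0)
big = sys.getsizeof(2**100)
print(small, big)  # e.g. something like 24 and 40 on a 64-bit CPython
```

This is also why guessing object size from the spacing of id() values, as the article does, is only a curiosity: id() happens to be an address in CPython, but nothing guarantees adjacent allocation.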

lispm 20 hours ago 1 reply      
Lisp systems also have fixnums and bignums. For example a 64bit Common Lisp:

  MOST-NEGATIVE-FIXNUM, value: -1152921504606846976
  MOST-POSITIVE-FIXNUM, value: 1152921504606846975
Fixnums are typically stored inline in data structures (like lists, arrays and CLOS objects). Bignums will be stored as a pointer to an heap-allocated large number. Data has tags and thus in a 64bit Lisp the fixnums will be slightly smaller than 64bit. Bignums can be 'arbitrary' larger and there is automatic switching between fixnums and bignums for numeric operations.

jedberg 18 hours ago 3 replies      

  In [7]: a = "foo"
  In [8]: b = "foo"
  In [9]: a is b
  Out[9]: True
  In [10]: b = "foobaljlajdfsklfjds l;kjsl;dfj ls;dfj l;skdj flsdjluejsklnm "
  In [11]: a = "foobaljlajdfsklfjds l;kjsl;dfj ls;dfj l;skdj flsdjluejsklnm "
  In [12]: a is b
  Out[12]: False
Seems to work with small and big strings too.

ghewgill 20 hours ago 0 replies      
Lots of good discussion at this Stack Overflow question (2008): https://stackoverflow.com/q/306313/893 (Python is operator behaves unexpectedly with integers)
rcthompson 18 hours ago 5 replies      
Is there any practical reason to use "is" to compare two ints (other than demonstrating integer interning)? Should doing so produce a warning?
timonoko 18 hours ago 0 replies      
I remember implementing this on a Nova 1200 too. When the address space is bigger than the memory, you can place those integers outside the memory; those objects do not actually exist, in other words. It saves you memory cycles too, because you can calculate the numeric value from the address.
avyfain 20 hours ago 0 replies      
The case of strings is also pretty interesting: http://guilload.com/python-string-interning/
knutae 5 hours ago 0 replies      
For comparison, integers in clisp:

  [1]> (eq (expt 2 47) (expt 2 47))
  T
  [2]> (eq (expt 2 48) (expt 2 48))
  NIL
Explanation here: https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node17.html

tomsmeding 19 hours ago 1 reply      
You can also do a similar thing in Java, as illustrated in this answer on CodeGolf stackexchange: https://codegolf.stackexchange.com/a/28818
throwaway613834 19 hours ago 3 replies      
Does anyone know why Python refcounts everything -- even small integers, True, False, None...? Why not avoid it?
wyldfire 19 hours ago 2 replies      
> That is surprising! It turns out that all small integers with the same value point to the same memory. We can use the Python built-in function id, which returns a value you can think of as a memory address, to investigate.

Unfortunately this blog post seems to miss a great opportunity to show you how you should compare integers for equality -- using the equality operator `==` and not the identity comparison `is`.

EDIT: odd, this post attracted a lot of downvotes. Please help me learn how this post could be improved.
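The point about `==` versus `is` can be sketched directly (again building ints from strings so the result doesn't depend on compile-time constant merging; the identity result is a CPython detail):

```python
# Value equality (==) is what you want when comparing numbers.
# Identity (is) only says whether two names refer to the same object,
# which for ints depends on interpreter caching and is not reliable.
x = int("500")
y = int("500")
print(x == y)  # True: the values are equal
print(x is y)  # False on CPython: two distinct objects outside the cache
```

Rule of thumb: use `is` only for singletons such as `None`, `True`, and `False`.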

jonbarker 16 hours ago 0 replies      
Cool examples, but I'm not super concerned about the problems arising from the ability to 'use ctypes to directly edit memory'. It's actually pointers to memory blocks, not the memory contents itself: https://docs.python.org/3/library/ctypes.html If you're advanced enough to need to handle pointers to memory blocks in your python program, you are probably good enough to know not to create problems with the behavior of iterators on ranges.
santiagobasulto 19 hours ago 1 reply      
Nice article. I wrote a similar piece some time ago related to booleans, in case anybody is interested: https://blog.rmotr.com/those-tricky-python-booleans-2100d5df...

And to avoid issues with is/==, we recommend our students to always use == (except for `is None`). Also related piece:https://blog.rmotr.com/avoiding-being-bitten-by-python-161b0...

wgrover 18 hours ago 0 replies      
You can use sys.getrefcount() to explore these "weird integers":https://news.ycombinator.com/item?id=15093897
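A minimal sketch of the sys.getrefcount() trick (CPython-specific; the exact numbers vary by version, so only the comparison matters):

```python
import sys

# Cached small ints are shared by the entire interpreter, so their
# reference counts are enormous (or a sentinel value on newer CPythons
# where small ints are immortal). A freshly created large int has only
# a handful of references.
big = int("10") ** 100
print(sys.getrefcount(1))    # very large
print(sys.getrefcount(big))  # small
```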
mattbillenstein 14 hours ago 0 replies      
Stumbled across this when debugging a reference leak with a C-extension once -- small integers didn't exercise the problem, but larger ones did...
supertramp_sid 8 hours ago 0 replies      
a = 1000

b = 1000

a is b

The output is False only if you use the REPL. Run the above code in a .py script and it returns True. I read about it on SO but can't find the link to it!
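The script-versus-REPL difference comes from how CPython compiles source: equal constants within a single code object are merged, so a whole script's `1000` literals share one object, while the REPL compiles each line as its own code object. A sketch:

```python
# Compile a two-line "script" as one code object. CPython folds the two
# 1000 literals into a single shared constant, so the names are identical.
code = compile("a = 1000\nb = 1000", "<demo>", "exec")
ns = {}
exec(code, ns)
print(ns["a"] is ns["b"])          # True: one constant object
print(code.co_consts.count(1000))  # 1: the literal is stored once
```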

sl4i6j3o4i98g 15 hours ago 0 replies      
Brilliant! Very interesting and an insight into the inner workings of python. Thank you for sharing Kate!
tosh 18 hours ago 0 replies      
related read:

Equal Rights for Functional Objects or, The More Things Change, The More They Are the Same (1993) by Henry Baker


gre 19 hours ago 0 replies      
It's not weird, it's pythonic.
dsfyu404ed 19 hours ago 1 reply      
Yup, 0day hunters have fun with this behavior from time to time.
zde 18 hours ago 0 replies      
That's not weird but pretty common knowledge. IIRC 1-char strings are interned too.
Billionaire Porn King Reinvents Himself as Japan's Startup Guru bloomberg.com
302 points by champagnepapi  1 day ago   97 comments top 14
patio11 1 day ago 8 replies      
A lot of DMM's other businesses are also seediness arbitrage. The FX exchange, for example, is a bucket shop.

(This is a shorthand criticism, but since many HNers might not know it already: pretend you have a country where gambling is mostly illegal. Wagering on the roll of a die is illegal, but people really want to wager, so, you give them "investment" options like "Is the yen going to trade up against the dollar?" While you swear blind that this is investing and that the proverbial Mrs. Watanabe is making informed investment decisions after deeply analyzing the latest market trends, Mrs. Watanabe is actually just depositing yen with you and withdrawing less yen later. She doesn't actually own dollars at any point; she is just "trading" with you. You might not even own dollars at any point. The only purpose of the exchange rate is to be a source of legal random numbers, since a die or deck of cards would be a source of illegal random numbers.)

I haven't used DMM's FX offering but out of morbid curiosity I looked at a Bitcoin exchange which is widely reported to have copied it. It was like Zynga had made a pachinko game with slightly more numbers and less chesty mermaids. There was even a "bloop" sound effect when other people's trades went through.

faitswulff 1 day ago 0 replies      
Wow, this guy is a hustler. I just got to the part of the article describing his entry into producing porn. Apparently, he mass-produced the VHS tapes using thousands of household video recorders, then used a clever pricing model ("here's 100 tapes, pay me only for what you sell") to get into video stores. The next bit really impressed me:

"The next big idea was a cash register Kameyama developed that looked like a tablet computer. He gave it to customers for free, in exchange for their sales records -- data that made him better than anyone at tracking the preferences of Japans porn consumers."

Abishek_Muthian 1 day ago 2 replies      
The writer says,

"Asked to explain his philosophy, he struggled for a tidy phrase and settled on this: 'I like to be able to think, I'm a little less flawed today than I was the day before.'"

But IMO, his philosophy is pretty clear with these quotes -

"To him, porn is the proverbial widget -- a thing to sell for more than it costs to make and market, no different from any other product."

"If my own daughter told me she wanted to be an adult film actress, I'd tell her: look, there are risks, but it's something for you to decide."

Another interesting anecdote: Mr. Kameyama is part of the growing list of billionaires who choose to keep their identity dark or go to great lengths to protect their privacy.

I wonder whether not running a publicly traded company gives these kinds of people an advantage in securing their privacy over counterparts who run businesses that fall under public scrutiny.

halflings 1 day ago 1 reply      
Reminiscent of Xavier Niel [0], who started with "Minitel rose" services offering phone sex, peep shows and sex shops (he was even arrested because one of his peep show businesses was a cover for a prostitution business)... then went on to found Free (and later Free Mobile), which truly disrupted telcos in France (and arguably everywhere else in Europe) with cheap broadband and mobile plans, acquired some of the largest French media, started a tech school in Paris, and recently opened the biggest startup campus in the world [1]. Now worth 9.6 billion USD.

[0] http://www.nytimes.com/2013/05/06/business/global/xavier-nie...

[1] https://stationf.co/

nayuki 1 day ago 1 reply      
> Some of his best ideas, including the one for the hit video game Fleet Collection,

The title translation was a bit too literal. It's actually https://en.wikipedia.org/wiki/Kantai_Collection .

rgrieselhuber 1 day ago 2 replies      
I shared a taxi with him a few years ago. Really cool dude and smart as hell.
colbyh 1 day ago 3 replies      
Porn has led the way on a number of technological innovations, I'm surprised there aren't more billionaire porn kings/queens.

edit - modifiers are hard

peteretep 1 day ago 1 reply      

  > "I hate it," he says. "But if it works, great. If it doesn't, we'll try something else."
Quite the mantra

akamaozu 23 hours ago 0 replies      
For those looking to hear more about DMM's work in Japan's tech community, I think you'll find this podcast episode interesting.


lowry 1 day ago 1 reply      
The founder of Pornhub is a startup guru in Belgium.
gfredtech 1 day ago 1 reply      
Wow, he's a billionaire and he's still talking about masking himself and his privacy
max_ 1 day ago 0 replies      
This interview he gave makes me think he has really unique ways of making decisions.


wprapido 1 day ago 1 reply      
jackie treehorn gone digital
TaylorGood 1 day ago 0 replies      
What a title.
Moving The New York Times Games Platform to Google App Engine nytimes.com
293 points by spyspy  1 day ago   140 comments top 15
obulpathi 1 day ago 2 replies      
Two things keep coming up while comparing GCP and AWS:

* This accomplishment would not have been possible for our three-person team of engineers without Google Cloud (AWS is too low level, hard to work with and does not scale well).

* Weve also managed to cut our infrastructure costs in half during this time period (Per minute billing, seamless autoscaling, performance, sustained usage discounts, ... )

ciguy 1 day ago 5 replies      
As a DevOps consultant I've actually worked with clients migrating stacks to and from GCE/AWS (Yeah, both ways, not the same client).

What I've found in aggregate is that GCE is a bit easier to use at first as AWS has a LOT of features and terminology to learn. When it comes down to it though, many GCE services felt really immature, particularly their CloudSQL offering.

One client recently moved from GCE to AWS simply because their CloudSQL (fully replicated with fail-over set up according to GCE recommendations) kept randomly dying for several minutes at a time. After a LOT of back and forth, Google finally admitted that they had updated the replica and the master at the same time, so when it failed over, the replica was also down.

There were other instances of unexplained downtime that were never adequately explained, but overall that experience was enough for me (And the client) to totally lose faith in the GCE teams competence. Even getting a serious investigation into intermittent downtime and an explanation took over a month. By that time our migration to AWS was in progress.

GCE never did explain why they would choose to apply updates to replica + master SQL at the same time and as far as I know they are still doing this. I asked if we could at least be notified of update events, was told that's not possible.

There were other issues as well that taken together just made GCE seem amateurish. I'm sure as they mature a bit things will get better, and it is cheaper, which is why I wouldn't necessarily recommend against them for startups just getting going today. By the time you are really scaling, it's likely they'll have more of the kinks worked out.

neya 1 day ago 2 replies      
Hey community, let me share my experience with AppEngine. I work in a small firm, where we've developed a massive software application comprising 12 medium-sized apps. I went with Phoenix 1.3 with the new umbrella architecture.

With AppEngine, the beauty is that you can have many custom named microservices under one AppEngine project and each microservices can have many versions. You can even decide how much percentage of traffic should be split between each of these microservices.

What's awesome is, in addition to the standard runtimes (Ruby, Python, Go, Java, etc.) Google also provides something called custom VMs for AppEngine, meaning you can push docker based setups into your AppEngine service, with basically any stack you want. This alone is a HUGE incentive to move to AppEngine because usually custom stack will require you to maintain the server side of things, but with Docker + AppEngine, zero devops. Their network panel is also very intuitive to add/delete rules to keep your app secured.

I've been using AppEngine for over 4 years now and every time I tried a competitive offering (such as AWS Beanstalk, for example) I've only been disappointed.

AppEngine is great for startups. For example, a lesser known feature within AppEngine is their real-time image processing service API. This allows you to scale/crop/resize images in real time and the service is offered free of charge (except for storage).

Works really well for web applications with basic image manipulation requirements.


The best part is, you call your image with specific parameters that'll do transformations on the fly. For example, <image url>/image.jpg?s=120 will return a 120px image. Appending -c will give you a cropped version, etc.
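If it helps to picture the convention being described, here's a tiny sketch. The helper name is made up, and the parameter syntax is just what the comment above describes, not taken from official Images API documentation.

```python
# Hypothetical helper mirroring the URL convention described above
# (`?s=120` for size, appending `-c` for crop). Function name and
# parameter syntax are illustrative assumptions.
def serving_url(base_url, size=None, crop=False):
    """Build a parameterized image serving URL."""
    if size is None:
        return base_url
    suffix = "?s=%d" % size
    if crop:
        suffix += "-c"
    return base_url + suffix

# serving_url("https://example.com/image.jpg", size=120, crop=True)
# -> "https://example.com/image.jpg?s=120-c"
```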

I really hope to see AppEngine get more love from startups as it's a brilliant platform, much more performant than its competitors' offerings. For example, I was previously a huge proponent of Heroku, and upon comparing numbers, I realized AppEngine is way more performant (in my use case). I'm so glad we made the switch.

If you're looking at or considering a move to AppEngine, let me know here and I'll try my best to answer your questions.

nrjames 1 day ago 1 reply      
I migrated a big data stack to GCP from AWS. Reasons: GCP has better documentation, the AWS console and various services confuse the heck out of me (I guess I'm getting too old), and the security integration between GCP services saves a huge amount of time. It's super easy and very fast to use the Google Compute Engine VMs. Given that the company I work for uses G Suite, it's a piece of cake to implement SSO and other integration pieces. It's also cheaper for us than AWS and more performant.
vs2 1 day ago 4 replies      
"Due to the inelastic architecture of our AWS system, we needed to have the systems scaled up to handle our peak traffic at 10PM when the daily puzzle is published."

WT... I had to reread this to make sure I didn't misunderstand... why not work on making the current architecture elastic?! #cloudPorn

pgrote 1 day ago 1 reply      
Interesting that they are using Medium instead of in-house publishing tools. It is the first time I've noticed the open.nytimes.com articles.
NightlyDev 1 day ago 0 replies      
This doesn't really make much sense to me. How many peak users are there? What's the number of requests per second?

I can't imagine that the load would be so high that it wouldn't be possible to do it without GCP with three developers.

It would be way more interesting with performance details. :)

Jedi72 1 day ago 6 replies      
Getting pro-GCP articles to the top of HN must no doubt be a high priority for the Google marketing team. This is the nature of modern advertising: sneakily trying to subvert your thinking by masquerading as something else.
bsaul 1 day ago 3 replies      
Has anybody had a successful experience deploying Docker containers on AppEngine? Last time I tried, I had such a bad experience in terms of deployment speed (time to build the image, then upload it, then waiting for the stuff to deploy) that I reverted to managing my own GCE instance.

But maybe I had bad luck..

foxylad 1 day ago 1 reply      
OT. How nice to not see a single "avoid all Google services because Reader" comment. Maybe we are finally moving on.
merb 1 day ago 0 replies      
Uh, thanks to this article I've seen that AppEngine now supports Java 8; this is really, really cool.
kennethh 1 day ago 1 reply      
Anyone know how much it costs to add a custom domain and SSL to AppEngine (Standard or Flexible)? I have been looking and have not been able to find out.
zitterbewegung 1 day ago 0 replies      
Nice advertisement that Google bought from the NyTimes.
revelation 1 day ago 2 replies      
I was sure this was about some multiplayer game thing, but no, it's a crossword. Not entirely sure what they are even scaling here, I was expecting an article about a CDN..
mbesto 1 day ago 3 replies      
This reads eerily like a press release for GCP...
HackerNews Grid hackernewsgrid.com
356 points by kubopaper  2 days ago   115 comments top 44
fairpx 2 days ago 11 replies      
Interesting experiment. My observation: with the thumbnails, the title of each post becomes a less important caption. In the case of HN, I think the text-only approach is far better. Product Hunt used to be text only, and frankly, it was a better experience. The moment you introduce images to these types of communities, people will start using that real estate to create flashy, attention-grabbing visuals. Over time, it'll be more about how good a thumbnail looks, rather than the curiosity of a title that lures you into the content.
helloworld 2 days ago 3 replies      
I'm hoping that this makes HN's front page just to see the fun recursion of the site displaying a screenshot of itself. (And I do appreciate the experiment in user experience design, too.)
nemoniac 2 days ago 1 reply      
A huge part of the appeal of the standard HN page for me is the simple, straightforward, sensible headline without the distraction of images. The title guidelines and the insistence on adhering to them are a big plus in this regard.
AriaMinaei 2 days ago 7 replies      
Whenever I see something like this, I sigh and wonder, "Why should it be so hard for the average internet user to create a live 'grid of thumbnails' for 'a list of links to webpages'? Why should it take a whole developer to code and deploy an entire website, just for this one use-case?"

Software today is not as "soft" as one would've hoped, fifty years ago. It's not malleable. It's not composable. It's barely reactive.

This is not how it was meant to be.

Jonas_ba 2 days ago 1 reply      
We have done something similar for the HN search at Algolia but flagged it under style -> experimental in the settings panel. It's not a grid layout, but more of a refresh of the current design. https://hn.algolia.com
grey-area 2 days ago 2 replies      
You really, really need to update the screenshot for hackernewsgrid.com for infinite recursion, now that you're on the HN home page the screenshot should include a picture of itself.
jacquesm 1 day ago 1 reply      
Nice example of the Droste Effect.


sidcool 2 days ago 0 replies      
The internal HN links are going with a double slash.

E.g. https://news.ycombinator.com//item?id=15080693

Resulting in Unknown page.

hiisukun 2 days ago 1 reply      
Thanks for posting this - I quite enjoy browsing hacker news using thumbnails from mobile after trying it. On my laptop, I think I prefer the original homepage. I like to check the news once a day, but some days I'm short of time and use an alternative that cuts down results shown [1].

Overall I'm very happy to have now three good options for checking out what I consider to be a very good source of fuss free tech news and discourse.

[1] hckrnews.com - hopefully it isn't a faux pas to mention a potential competitor to your site in this thread.

Bobbleoxs 2 days ago 0 replies      
I definitely clicked a couple more just by looking at the screenshots than by reading the plain text index. I wonder if there's a psychological lure in graphics. Thank you!
captainmuon 2 days ago 0 replies      
That's nice. I thought about using thumbnails to linked sites before, but I wonder about the legal dimension.

What if someone puts something illegal, or copyrighted on one of the linked pages? Does anybody have advice (internationally / US / Germany)?

I'm based in Germany, and here there is strong legal protection for "quoting" excerpts. However, it is often debatable what counts as a quote. German news sites often take a photograph of a screen, instead of a screenshot. There seems to be protection for search engines (e.g. Google Photo Search), but the situation is not clear. There is also no "fair use" or safe harbor like in the US.

I'm especially afraid of cease and desist letters (Abmahnungen) - there is an entire industry of people who crawl the web and find copyrighted images with image recognition. The mean thing is that they don't let you use their tool to check for compliance - I would gladly buy a license for images I accidentially use, or remove them - but it is more profitable for them to send you a letter.

(Rant: I once had a case where someone accidentally printed a copyrighted image on a document and put a scaled-down picture of that document on a domain managed by me. The copyrighted image was about 50x50 pixels, mirrored, and black and white, but they had me pay ~800 Euros for it. Funny thing is that they never ever contacted us via the contact email. They didn't care about their client's rights or about selling an image; they wanted to milk us. They sent physical letters to people they thought were related to the site, until they grabbed me (the Admin-C of the domain).

Now I heard they are going after people retweeting or liking copyrighted images - IMHO that is ridiculous; there should be a difference between "including" and "linking to" an image.)

thiht 2 days ago 0 replies      
I don't really see the point since most thumbnails are just a screenshot of illegible text. It doesn't help at all.

I think a better thumbnail system would be to use an actual image of the article (for example, the thumbnail for the page https://www.gobankingrates.com/retirement/1-3-americans-0-sa... would be a crop of https://cdn.gobankingrates.com/wp-content/uploads/2016/03/sh...), or simply a favicon in case there's no image available. Hell, you're experimenting so why not even a carousel of all the images in the article? (moving on mouse hover ideally)

Also the title should not be secondary, below the thumbnail. Maybe it should be over the image in some way?
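For what it's worth, pulling an og:image tag out of an article's HTML only takes a few lines of standard-library Python. A rough sketch (fetching and error handling omitted; the example HTML is made up):

```python
# Minimal sketch: extract the og:image URL from a page's HTML with
# the stdlib HTML parser, returning None when the tag is absent.
from html.parser import HTMLParser

class OGImageParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.og_image = None

    def handle_starttag(self, tag, attrs):
        # og:image is declared as <meta property="og:image" content="...">
        if tag == "meta" and self.og_image is None:
            d = dict(attrs)
            if d.get("property") == "og:image":
                self.og_image = d.get("content")

def og_image(html):
    p = OGImageParser()
    p.feed(html)
    return p.og_image
```

A site could fall back to the favicon (or a screenshot) whenever this returns None.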

aaronhoffman 2 days ago 1 reply      
I have a "preview" feature on https://www.sizzleanalytics.com/HackerNews that uses OG tags, but I'd much rather use these images. Any way we can work something out?
Waterluvian 2 days ago 0 replies      
If I were to add another feature to HN I would add color coding of how content dense a link is. Sometimes a link is a big long essay, and I might not want to click on it just yet.
owens99 2 days ago 1 reply      
What API did you use for the screenshots?
max23_ 2 days ago 1 reply      
Just noticed some thumbnails are showing a pop-up dialog instead of the site itself.

If Puppeteer[1] is used to screenshot the site, probably need to use the page.click API to close it.

But one problem with that is you need to know the exact selector name, which may not be a generic one.

[1] https://github.com/GoogleChrome/puppeteer

znpy 2 days ago 2 replies      
This is awesome, but I would really appreciate it if it used all of my screen real estate (I am using a 1920x1200 screen) instead of only three columns.
dredmorbius 4 hours ago 0 replies      
Ugh. No.

I keep seeing sites shifting to high-graphics, low-text content. HN is an interesting exception in that it's zero-graphics, low-text -- the only indication of content is a <80 char subject, the originating site, user, and current votes.

What I'd prefer is deeper textual context, a la Jakob Nielsen's long-standing microcontent guidelines: a 140-500 character introduction, a strong title, and maybe an accompanying thumbnail or avatar.

97-109-107 2 days ago 1 reply      
have_faith 2 days ago 1 reply      
Personally, I don't find the thumbnails add anything and also detract a little from reading the headlines. Not to knock on it as an experiment.

My main UX issue with HN is the comment nesting. Would much prefer less nesting and something akin to 4chan's backlink post referencing.

Vilkku 2 days ago 0 replies      
Nice. There's a bug, self posts have an extra slash in the url (for example "Show HN: How to discuss with opinionated people using the Socratic method[video]" which is currently on the front page for me).
Pavan_ 2 days ago 0 replies      
There is one more similar site, http://richhn.herokuapp.com/ which fetches meta tags of Hacker News links and shows a rich preview.
senectus1 2 days ago 1 reply      
looks great, but needs meta info... like post time/comment numbers/points etc...
hiven 2 days ago 0 replies      
When I clicked on a link it added an additional slash to the URL, i.e. https://news.ycombinator.com//item?id=15074526
djKianoosh 2 days ago 0 replies      
Usability question.

Is it easier to read, or just otherwise better, if the picture came after the link/heading?

I find it hard to visually read/parse the way they have it now with picture first then title.

technofide 2 days ago 0 replies      
Would love to know what are you using for the screenshots? Is it urlbox?
Meekro 2 days ago 1 reply      
Several people here are asking how you automatically screenshot websites. Look up PhantomJS -- you don't need to use someone else's API when you can make your own! =)
myth_buster 1 day ago 0 replies      
Opportunity for infinite recursion.


ankit84 2 days ago 0 replies      
Is this a side effect of Puppeteer lib?


mrleinad 2 days ago 0 replies      
Awesome, just found my new go-to HN site.
funvill 1 day ago 0 replies      
check out https://hckrnews.com/ - the "top 10", "top 20", and "top 50%" links are great features for me.
calcifer 1 day ago 0 replies      
Wow, looks and works perfectly without JS enabled! Really appreciated.
carapace 11 hours ago 0 replies      
Sort of off-topic, but uh, how to generate thumbnails of websites? Is there a service?

Apologies for the laziness.

sAbakumoff 1 day ago 0 replies      
Cool, is that thing made by using React?
lowkeyokay 1 day ago 0 replies      
For the love of God please upvote this (the OP) so that it will be in the top 9 posts. Then we can all watch a feedback loop in all its pixel glory!

edited to clarify that I'm not in any way asking anyone to upvote my comment - just the original post

williamle8300 2 days ago 0 replies      
How'd you get the thumbnails?
thinbeige 1 day ago 0 replies      
This is the wrong kind of presentation for the typical HN content. And somebody has already done the same thing before.
filipmares 1 day ago 0 replies      
That site loaded so fast!
iorekz 1 day ago 0 replies      
a bit higher and we can make a hackernews grid inception
justbaker 1 day ago 0 replies      
Nice! I like it :)
mindhash 2 days ago 0 replies      
Try pinterest style
popol12 2 days ago 0 replies      
I frankly prefer the original, it's more efficient to me.
sunilkumarc 2 days ago 0 replies      
How are you getting the thumbnail images?
allenleein 2 days ago 0 replies      
Fantastic one!
A history of branch prediction danluu.com
292 points by darwhy  2 days ago   65 comments top 14
userbinator 2 days ago 6 replies      
The use of previous branch history and branch address as a "context" for prediction reminds me of the very similar technique used for prediction in arithmetic compression as used in e.g. JBIG2, JPEG2000, etc. --- the goal being that, if an event X happens several times in context C, then whenever context C occurs, the probability of X is more likely.

Also, since modern CPUs internally have many functional units to which operations can be dispatched, I wonder if, in the case that the "confidence" of a branch prediction is not high, "splitting" the execution stream and executing both branches in parallel until the result is known (or one of the branches encounters another branch...), would yield much benefit over predicting and then having to re-execute the other path if the prediction is wrong. I.e. does it take longer to flush the pipeline and restart on the other path at full rate, or to run both paths in parallel at effectively 1/2 the rate until the prediction is known?
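A back-of-envelope model of that trade-off, where every number is an illustrative assumption rather than a measurement from any real core:

```python
# Toy cost model for predict-and-flush vs. running both paths at half
# rate. Assumptions: 1 cycle per instruction, a 20-cycle flush on a
# mispredict, and a 2x slowdown when both paths share the functional
# units until the branch resolves.
def predict_cost(accuracy, flush_penalty=20):
    # expected cycles per branch when predicting
    return accuracy * 1 + (1 - accuracy) * flush_penalty

def dual_path_cost(slowdown=2):
    # both paths run, each at roughly half the issue rate
    return slowdown * 1

# Under this model, splitting only wins for low-confidence branches:
# predict_cost(0.99) ~ 1.19 beats 2, but predict_cost(0.60) ~ 8.6 loses.
```

The crossover sits wherever accuracy makes the expected flush cost exceed the steady half-rate tax, which is presumably why real designs reserve eager execution for the least predictable branches.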

ramshorns 2 days ago 4 replies      
Very informative. I missed the part about 1500000 BC though: a time when our ancestors lived in the branches of trees?

Another beginner-friendly explanation of the effects of branch prediction is this Stack Overflow post which compares a processor to a train:https://stackoverflow.com/questions/11227809/why-is-it-faste...

ufo 2 days ago 3 replies      
One surprising thing that I discovered recently is that after Haswell, Intel processors got much, much better at predicting "interpreter loops", which are basically a while-true loop with a very large, seemingly unpredictable switch statement. It led to a dramatic improvement in micro-benchmarks and made some traditional optimizations involving computed goto and "indirect threading" obsolete.

Does anyone know how it achieved this?

Sniffnoy 2 days ago 1 reply      
> PA 8000 (1996): actually implemented as a 3-bit shift register with majority vote

This actually seems interestingly different from the two-bit saturating counter. Like, it's not just a different way of implementing it; you can't realize the saturating counter as a "quotient" of the shift/vote scheme.
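A quick sketch of why they really are different automata, using my own toy encodings of the two schemes (textbook conventions, assumed rather than taken from the article): after the outcome sequence taken, taken, not-taken, the saturating counter predicts not-taken while the majority vote still predicts taken.

```python
# Toy versions of the two state machines: a 2-bit saturating counter
# (predict taken in the upper half of its range) and the PA 8000-style
# 3-bit shift register with majority vote.
def counter_predict(outcomes):
    c = 0  # saturating counter in 0..3
    for taken in outcomes:
        c = min(3, c + 1) if taken else max(0, c - 1)
    return c >= 2  # predict taken?

def majority_predict(outcomes):
    hist = [False, False, False]  # last three outcomes
    for taken in outcomes:
        hist = hist[1:] + [taken]
    return sum(hist) >= 2  # predict taken?

# After [taken, taken, not-taken] the counter sits at 1 (predict not
# taken), while two of the last three outcomes were taken (predict taken).
```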

ajkjk 1 day ago 2 replies      
Is there any system out there that supports branch 'annotations', of a sort, so that the programmer or the compiler can just tell the CPU what the branch behavior is going to be?

Like -- it seems kinda silly for the CPU to do so much work to figure out if a loop is going to be repeated frequently, when the code could just explicitly say "fyi, this branch is going to be taken 99 times out of 100".

Or, if there's a loop that is always taken 3 times and then passed once, that could be expressed explicitly, with a "predict this branch if i%4 != 0" annotation.

irishsultan 2 days ago 1 reply      
I seem to be missing something. When the two-bit scheme is introduced, it's said that it's the same as the one-bit scheme except for storing two bits (seems logical), but then the index into the lookup table seems to involve both the branch address (already the case in the one-bit scheme) and the branch history (which, as far as I can see, was never introduced).
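As I read it, the plain two-bit scheme indexes its counter table with branch-address bits only; a history register only enters the index once the two-level adaptive schemes are introduced. A sketch of the two index computations (table and history widths here are arbitrary choices, not the article's):

```python
# Two ways to index a table of 2-bit counters. The plain two-bit
# predictor uses low branch-address bits alone; the two-level
# adaptive ("global") scheme folds a shift register of recent branch
# outcomes into the index as well.
TABLE_BITS = 10

def two_bit_index(pc):
    # address bits only
    return pc & ((1 << TABLE_BITS) - 1)

def two_level_index(pc, global_history, history_bits=4):
    # concatenate recent-outcome history with low address bits
    h = global_history & ((1 << history_bits) - 1)
    return ((pc << history_bits) | h) & ((1 << TABLE_BITS) - 1)
```

With history in the index, the same branch can train different counters depending on how recent branches went, which is the whole point of the two-level schemes.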
legulere 2 days ago 4 replies      
I really have problems reading this website. You don't have to make a website bloated to make it readable: http://bettermotherfuckingwebsite.com
filereaper 2 days ago 2 replies      
Ryzen has rolled out a Neural-Net based branch predictor, would be curious to see its accuracy compared to the listed approaches.
seedragons 1 day ago 2 replies      
Is this correct? "Without branch prediction, we then expect the average instruction to take branch_pct * 1 + non_branch_pct * 20 = 0.8 * 1 + 0.2 * 20 = 0.8 + 4 = 4.8 cycles"

other than branch_pct and non_branch_pct being reversed, this seems to assume that 100% of branches are guessed incorrectly. Shouldn't something like 50% be used, to assume a random guess? i.e. 0.8 * 1 + 0.2 * (0.5 * 20 + 0.8 * 1)=2.96
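For concreteness, here are both readings of that formula side by side, with the labels corrected so that 80% of instructions are non-branches. The "coin flip" variant weights hit and miss equally, which is one way to read the 50% suggestion:

```python
# Expected cycles per instruction under the quoted figures: 80% of
# instructions are non-branches costing 1 cycle; a mispredicted
# branch costs 20. "always_miss" matches the quoted passage (every
# branch mispredicted); "coin_flip" assumes a random guess that is
# right half the time (1 cycle on a hit, 20 on a miss).
non_branch, branch, penalty = 0.8, 0.2, 20

always_miss = non_branch * 1 + branch * penalty                  # 4.8
coin_flip = non_branch * 1 + branch * (0.5 * 1 + 0.5 * penalty)  # 2.9
```

So the quoted 4.8 is a worst-case "never predict correctly" baseline, while a fair coin gives roughly 2.9 cycles per instruction under the same numbers.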

lordnacho 2 days ago 2 replies      
Top quality article. Now we need one with specifics on how to write code that's aware of this. For instance, when to use which compiler hints. Anyone have links or books?
zaptheimpaler 2 days ago 0 replies      
I love your posts dan. High quality writing, no fluff and bullshit every time :)
unkown-unknowns 2 days ago 0 replies      
Figures 12 and 14 are the same but I think the figure used is only supposed to be like that for figure 14, not for figure 12.

The "two-bit" scheme that fig 12 is for does not have branch history, whereas "two-level adaptive, global" which has fig 14 fits the bill.

agumonkey 2 days ago 0 replies      
Beautiful article. The kind that makes you want to dig deeper in the whole field.
deepnotderp 1 day ago 1 reply      
TAGE and perceptron combined are the SOTA right now, right?
Amazon has as much office space in Seattle as the next 40 biggest employers seattletimes.com
232 points by eropple  1 day ago   191 comments top 8
Bucephalus355 1 day ago 9 replies      
Amazon has been pretty careful about their appearances in town. A good example that I saw when I went to Seattle is the subtle shame you experience as an employee if you drive yourself to work. You're supposed to carpool or do whatever else you can so that at least Amazon doesn't have to shoulder the blame for rising Seattle traffic. Probably a nice thing in the end.

Also their well-known frugality principle, while not really meaning a lot when you consider salaries, helps to stem the "entitled" image that say a company like Google gets more of.

barsonme 1 day ago 8 replies      
Obviously I don't foresee Amazon tanking any time soon, but if/when they do I can't imagine how it'd do anything other than ruin Seattle, just like the exodus of automobile manufacturers started the decline of Detroit in the '50s.
noodle 1 day ago 5 replies      
> Amazon now occupies a mind-boggling 19 percent of all prime office space in the city

I'd be more interested in how "prime" is defined here and what the percentage looks like if you include what they're presumably defining as secondary office space.

tristanj 1 day ago 2 replies      
In comparison, Apple occupies about 70 percent of Cupertino's commercial real estate. Though Cupertino isn't a "major U.S. city", so the Seattle Times' claim still stands.
njarboe 1 day ago 7 replies      
Of course the Seattle area is home to Microsoft and Boeing. Both have a huge amount of square footage. A single Boeing building in Everett, north of Seattle, is 4.3 million square feet. About half of Amazon's total.
southphillyman 23 hours ago 0 replies      
I know Amazon earned a very bad reputation recently with all the complaints about its working conditions and employee morale. Does anyone here know if there has been improvement in that area? It seems like Amazon is recruiting very aggressively nowadays and I'm thinking of giving them a chat.
ausjke 1 day ago 1 reply      
I was wondering why the two richest guys, Bill Gates and Jeff Bezos, are both from Seattle. Must be a super good location to start a business.

Anyway, with Amazon growing uncontrollably these days, a monopoly lawsuit is probably on the horizon somewhere soon.

banku_brougham 1 day ago 1 reply      
My favorite Seattle downtown office building occupant: Cray Supercomputers.
Why I haven't jumped ship from Common Lisp to Racket just yet fare.livejournal.com
267 points by networked  2 days ago   94 comments top 11
flavio81 2 days ago 6 replies      
The author, a famous and well-liked lisper, is not considering portability features. CL is an ANSI standard and code often runs with no changes in many distinct CL implementations/compilers/interpreters.

Also, related to that point: there are many different CL implementations out there that satisfy different use cases, like JVM deployment (ABCL), embedded systems (ECL), speed (SBCL), fast compile times (Clozure), pro-level support (LispWorks, ACL), etc. So the same code has a huge number of options for deployment. It really makes Common Lisp "write once, run anywhere".

Speed is also never mentioned. Lisp can be seriously fast; under SBCL it is generally 0.3x to 1x C speed; LispWorks might be faster, and there's a PDF out there called "How to make Lisp go faster than C", so that should give an idea of Lisp's speed potential.

CL wasn't created by philosophizing about what a programming language should be for many years; CL was basically created by merging Lisps that were already proven in industry (Maclisp, Zetalisp, etc.), already proven to be good for AI, heavy computation, symbolic computation, launching rockets, writing full operating systems, etc.

CL is a "you want it, you got it" programming language. You want to circumvent the garbage collector? Need to use GOTOs for a particular function? Want to produce better assembly output? Need side effects? Multiple inheritance? Want to write an OS? CL will deliver the goods.

In short, I would guess that from a computer scientist's or researcher's point of view, Racket is certainly more attractive, but for the engineer or start-up owner who wants killer production systems done in a short time, or to create really complex, innovative systems that can be deployed to the real world, Common Lisp ought to be the weapon of choice!

mjmein 2 days ago 2 replies      
Racket is a really exciting language, especially with its focus on building small languages to solve problems.

However, where it fails for me is in its lack of interactive development. When I investigated it, there seemed to be no way to actually connect a REPL to a running program.

Unlike with common lisp or clojure, with racket if you make changes to your code you have to restart the REPL, which destroys your state.

This was a big disappointment to me, because even python with ipython and autoreload allows for more interactive development.

I suspect that this decision was made because of racket's start as a teaching language, because it is simpler, but way less powerful.

jlarocco 2 days ago 1 reply      
I use Common Lisp quite a bit, and I'm just not interested in switching to another Lisp. I've looked at most of them, and haven't seen anything compelling. CL still wins on everything that I care about (performance, portability, libraries, ease of use, books/documentation, etc.).

Even the article's list of areas where Racket is "vastly superior" is questionable, IME. Granted, the author wrote ASDF, so he has a very different perspective than I do on the module system, but in practice nothing on that list has been a problem for me, and a few of them I'd actually consider to be anti-features (like a built-in GUI library).

peatmoss 2 days ago 0 replies      
I really like this article, because it manages to be a love letter to both Racket and Common Lisp.
farray 2 days ago 1 reply      
Interestingly, I just added a section on Gerbil, the Lisp dialect I have adopted instead of PLT, for many personal reasons.
bjoli 2 days ago 2 replies      
One thing that makes Racket shine is its macro facilities. Syntax-case is nice and all that, but Jesus Christ in a chicken basket, I wish Scheme would have standardised on syntax-parse.

Syntax-case vs. syntax-parse isn't, and will never be, close to a fair comparison. Not only is syntax-parse more powerful, it also provides the users of your macros with proper error messages. It blows both unhygienic and other hygienic macro systems out of the water for anything more complex than very basic macros.

laxentasken 2 days ago 1 reply      
If you do work in CL for a living, may I ask what kind of applications and in which area? The reason for my question is that CL (and Racket) seems like a very good thing to put some time into, but the market for such jobs is dead where I live (Sweden). Or those jobs might be held by lispers for a lifetime ...
i_feel_great 2 days ago 4 replies      
That Racket module functionality where you can add unit tests right alongside your code ("module+") and have them stripped when compiled - that thing is quite magical. Is there another system that has this?
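Python's doctest is a loose analogue, if that counts: the tests live in the docstring right next to the code and only run when something explicitly invokes doctest, not on ordinary import. (Not a perfect match for `module+`, which compiles the test submodule out of the deployed artifact entirely.)

```python
# Tests-next-to-code in Python: doctests sit in the docstring and are
# inert until doctest is explicitly invoked.
def gcd(a, b):
    """Greatest common divisor via Euclid's algorithm.

    >>> gcd(12, 18)
    6
    >>> gcd(7, 5)
    1
    """
    while b:
        a, b = b, a % b
    return a

if __name__ == "__main__":
    # Running the file directly checks the docstring examples;
    # importing it as a module skips them.
    import doctest
    doctest.testmod()
```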
myth_drannon 2 days ago 0 replies      
For anyone interested in Racket, an excellent book/online tutorial: http://beautifulracket.com/
zerr 2 days ago 4 replies      
Anyone using Racket in the wild? (Besides the Racket team)
lottin 2 days ago 3 replies      
> trivial utilities for interactive use from the shell command-line with an "instantaneous" feel

Last time I checked CL images were huge though. Something like 24MB for a "hello world" executable, even bigger with some compilers.

       cached 25 August 2017 15:11:01 GMT