hacker news with inline top comments - 14 Aug 2016
Zero-cost futures in Rust aturon.github.io
898 points by steveklabnik  2 days ago   334 comments top 38
AndyKelley 2 days ago 2 replies      
This is huge.

This allows one to express concurrency in a natural way, not prone to the typical errors of problems of this nature, with no runtime overhead, while competing with C in terms of runtime constraints.

Big kudos to Aaron Turon and Alex Crichton. You guys knocked it out of the park.

rdtsc 2 days ago 4 replies      
As mentioned in the post, given Rust wants to operate in the same space as C, this approach makes sense. However, at a higher level, when building more complex concurrent systems, dealing with futures/deferreds/promises and/or a central select/epoll/kqueue reactor loop gets daunting and doesn't mix with complex business rules.

The deferred-based approach has been around for many years. I experienced it by using Twisted (a Python framework) for 5 or so years, and early on it was great. However, when we switched to using green threads, the logic and amount of code were greatly simplified.

So I'm wondering if Rust provides any ability to add that kind of N:M threading approach, perhaps via an extension, macro or some other mechanism.

Note that in C, it being C, such things can be done with some low-level trickery. Here is a library that attempts that:


And there were a few others before, but none have taken off enough to become mainstream.

jaytaylor 2 days ago 3 replies      

I've claimed a few times that our futures library provides a zero-cost abstraction, in that it compiles to something very close to the state machine code you'd write by hand. To make that a bit more concrete:

- None of the future combinators impose any allocation. When we do things like chain uses of and_then, not only are we not allocating, we are in fact building up a big enum that represents the state machine. (There is one allocation needed per task, which usually works out to one per connection.)

- When an event arrives, only one dynamic dispatch is required.

- There are essentially no imposed synchronization costs; if you want to associate data that lives on your event loop and access it in a single-threaded way from futures, we give you the tools to do so.

This sounds quite badass and awesome. I'm not sure what other language implementations take this approach, but this is clearly an extremely beautiful, powerful, and novel (to me at least!) concept. Before reading this, I thought Rust was great. This takes it to the next level, though.
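The "big enum" point quoted above can be made concrete with a hand-rolled sketch: a hypothetical two-state future written the way a chain like `read().and_then(decode)` conceptually compiles down. Everything here (`GetRow`, its states, the stand-in bodies) is invented for illustration and is not the futures-rs API:

```rust
// A hypothetical state machine for "fetch a row, then decode it".
// Each combinator in a chain adds a variant/state; the whole machine
// is one enum advanced in place, with no per-step allocation.
enum GetRow {
    Waiting { id: i32 },      // first stage: pretend I/O
    Decoding { raw: String }, // second stage: the and_then body
    Done,
}

impl GetRow {
    // Poll-style step: returns Some(result) once the chain finishes.
    fn step(&mut self) -> Option<String> {
        loop {
            match self {
                GetRow::Waiting { id } => {
                    let raw = format!("row-{}", id); // stand-in for I/O
                    *self = GetRow::Decoding { raw };
                }
                GetRow::Decoding { raw } => {
                    let out = raw.to_uppercase(); // stand-in for and_then
                    *self = GetRow::Done;
                    return Some(out);
                }
                GetRow::Done => return None,
            }
        }
    }
}

fn main() {
    let mut f = GetRow::Waiting { id: 7 };
    assert_eq!(f.step(), Some("ROW-7".to_string()));
    assert_eq!(f.step(), None); // a finished future yields nothing further
}
```

The real library builds such an enum for you out of the combinator chain, which is why no allocation is needed per combinator.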

leovonl 2 days ago 0 replies      
In my opinion - as someone with some background in CS - the name "future" is a little too overloaded here. It is not only used for the deferred computation of a value; it also means the composition of computations. This is not wrong per se, but calling the result a "future" alone oversimplifies what's happening below and hides some properties of the combinations.

The first observation one can make - which is not mentioned anywhere in the article - is that the composition of futures here can be understood as a monadic composition. This by itself gives a big hint why this interface is so powerful. Second is that this library could be understood as an implementation of process and process combination from pi-calculus [1] - sequential combination, joining, selection, etc - so it could be formalized using its process algebra.

From the practical side, one example of a mature library that implements similar concepts is the LWT [2] library for OCaml, which has the same idea of deferred computation, joining and sequencing, but calls the computations "lightweight threads". One could also argue about naming in this case, but it seems to better reflect the idea of independent "processes" that are combined in the same address space.

Finally, as much as these concepts of futures and processes look similar on the surface, they each have their own properties - so it's always good to consider what better fits the model. By looking at the research and at other similar solutions, one can make more informed choices and have a better idea of what to expect from the implementation.

[1] http://www.cs.cmu.edu/~wing/publications/Wing02a.pdf

[2] http://ocsigen.org/lwt/manual/

nv-vn 2 days ago 1 reply      
Anyone else find the f.select(g)/f.join(g) syntax unintuitive/awkward? I'm confused as to why they wouldn't go with the (IMO) more logical select(f, g) and join(f, g) in this case (since neither Future is really the "subject" in these cases). Not that this is a major concern (it would take only a few lines of code to change within your own program using an alias for the functions), just interested in knowing the rationale behind the choice.
losvedir 2 days ago 6 replies      
I dabbled with rust in the past and was really fascinated with it, but haven't played around lately. One thing caught my eye in the post:

 fn get_row(id: i32) -> impl Future<Item = Row>;
That return type looks odd to me. What does it mean to return an "impl", and is that a new feature in rust, or just something advanced that I missed in my exploration before?
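`impl Trait` in return position was indeed new at the time: it says "this function returns some concrete type implementing this trait", without naming (or boxing) that type. A minimal sketch using `Iterator`, which behaves the same way as it would for `Future`; the function and values here are made up for illustration:

```rust
// The caller sees only "some Iterator<Item = i32>"; the concrete type
// (a Filter over a Range) stays hidden, and no heap allocation or
// dynamic dispatch is involved.
fn evens_up_to(n: i32) -> impl Iterator<Item = i32> {
    (0..n).filter(|x| x % 2 == 0)
}

fn main() {
    let v: Vec<i32> = evens_up_to(7).collect();
    assert_eq!(v, vec![0, 2, 4, 6]);
}
```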

cyber1 2 days ago 2 replies      
Little benchmark rs-futures vs lwan (https://lwan.ws) on my machine Core i5


 $ wrk -c 100 -t 2 -d 20
 Running 20s test @
   2 threads and 100 connections
   Thread Stats   Avg      Stdev     Max   +/- Stdev
     Latency   823.09us  449.37us   20.98ms   98.69%
     Req/Sec    62.15k    10.51k   105.24k    48.63%
   2479035 requests in 20.10s, 340.44MB read
 Requests/sec: 123335.77
 Transfer/sec:     16.94MB

 $ wrk -c 100 -t 2 -d 20
 Running 20s test @
   2 threads and 100 connections
   Thread Stats   Avg      Stdev     Max   +/- Stdev
     Latency   596.45us  573.31us   24.46ms   99.33%
     Req/Sec    86.17k    13.15k   119.71k    76.00%
   3429720 requests in 20.01s, 624.73MB read
 Requests/sec: 171404.15
 Transfer/sec:     31.22MB
For lwan I used the http server example from the lwan.ws main page.

As you can see, in this example the C http server is much faster than the simple Rust http server.

* futures-minihttp release build

* lwan -O3

soulbadguy 2 days ago 5 replies      
Finally, a nice async I/O interface for Rust; I always felt it was a big missing piece. A couple of questions for people familiar with async in other languages:

1 - Isn't the state machine approach the same as what C#/.NET async/await uses, but with the added convenience of syntactic sugar?

2 - No allocation: doesn't the lambda closure need to be allocated somewhere?

3 - I would have loved some comparison (both performance-wise and in theory) with C++'s upcoming coroutine work; from my understanding the C++ approach is even more efficient in terms of context switching and has the advantage of even less allocation.
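On question 2: a Rust closure is an anonymous struct holding its captures, so it lives wherever its owner puts it, typically the stack, and chaining combinators needn't heap-allocate it. A small sketch (the size check assumes a closure capturing a single `i32` by value):

```rust
fn main() {
    let offset: i32 = 10;
    // `move` copies `offset` into the closure. The closure's size is
    // exactly its captured state; it is a plain stack value here.
    let add = move |x: i32| x + offset;
    assert_eq!(std::mem::size_of_val(&add), std::mem::size_of::<i32>());
    assert_eq!(add(5), 15);
}
```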

Animats 2 days ago 4 replies      
This is cute. This is clever. Whether or not it's too clever, time will tell. A year ago, I noted that Rust was starting out at roughly the cruft level C++ took 20 years to reach. Rust is now well beyond that.

All this "futures" stuff strongly favors the main path over any other paths. You can't loop, retry, or easily branch on an error, other than bailing out. It's really a weird syntax for describing a limited type of state machine.

I'm not saying it's good or bad, but it seems a bit tortured.

thomasahle 2 days ago 4 replies      
I'm confused by

  .map(|row| { json::encode(row) })
  .map(|val| some_new_value(val))

versus

  .map(json::encode)
  .map(some_new_value)

Is the explicit extra layer of lambda generally preferred in Rust over just passing the functions?
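Both forms work; with plain iterators they are interchangeable, and the closure mostly appears in examples because it adapts more easily when argument types or borrows don't line up exactly (say, a function wanting `&T` where the combinator yields `T`). A sketch with invented names:

```rust
// A free function and two equivalent ways of passing it to map.
fn double(x: i32) -> i32 { x * 2 }

fn main() {
    let a: Vec<i32> = (1..4).map(|x| double(x)).collect(); // explicit closure
    let b: Vec<i32> = (1..4).map(double).collect();        // bare function path
    assert_eq!(a, b);
    assert_eq!(a, vec![2, 4, 6]);
}
```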

bfrog 2 days ago 1 reply      
I love the direction this is going, and the performance it achieves.

Debugging promises/deferreds in other languages has given me nightmares; compare that with Erlang/Go debugging, where you get a simple stack trace.

Does this provide some nice way of debugging complex future chains? Are there plans towards making it super easy to debug?


Manishearth 2 days ago 6 replies      
I'm rather surprised by the benchmark; I would expect the Go benchmark to be faster than Java (and the fact that it isn't may indicate some improvements that can be done to fasthttp by learning from rapidoid or minihttp). Then again, the difference isn't that much, so it just could be implementation details that would require a total refactor to fix.
skybrian 2 days ago 2 replies      
Is there any special handling for Futures that complete with an error?

Also, how do you debug code that's hung or taking too long? It might be useful to get a list of all the jobs (incomplete Futures) that are currently running, much like running 'ps'.

plesner 2 days ago 0 replies      
This looks really impressive. I'm curious what the story is around propagating errors through chains of futures. Traditionally, future libraries don't pay much attention to that, which can make debugging excruciating; it doesn't have to be. But then Rust does errors differently, so maybe it's less of an issue there?

About the naming though, I was a little disappointed. Out of future, deferred, and promise, "promise" is the better term. The two others imply that something will happen later which is misleading because it's fine to have promises stick around long after they're fulfilled.

bascule 2 days ago 1 reply      
While the benchmarks are looking a lot better than many other similar Rust libraries in this space, I'm not sure the code is in a state where they're actually meaningful yet: https://github.com/alexcrichton/futures-rs/blob/master/futur...
lossolo 2 days ago 1 reply      
Why didn't you compare it to C++ or C? If you want to compete with C/C++, it would be natural to compare against those in benchmarks. Java and Go have GC. It's like comparing a supercar with street cars when you should compare it to other supercars.
tomdale 2 days ago 1 reply      
The recent flurry of activity around async IO in Rust has been really exciting; to me, it indicates that the core team's decision to stabilize the language was a smart bet that is paying off in rapid ecosystem growth.

One quibble I have with this post is that it talks about futures as a zero-cost abstraction. That might be true (or close to true) from a performance perspective, but in my (admittedly inexperienced) opinion, it seems to have a significant ergonomic cost that is not accounted for.

While futures help us deal with multi-threaded coordination of data from multiple sources, that overhead isn't necessary for situations where you're running in a single thread dedicated to doing IO operations.

Dealing with futures in your code is non-trivial. Browsing through the futures version of the HTTP server, I had a hard time following along:


And it requires a bunch of helper code to go with it:


The blog post mentions Tokio, another high-level abstraction on top of mio (by the same author). Because it doesn't require the futures abstraction from top-to-bottom, it offers similar (maybe even a little better) performance with what, to my eyes, is far simpler code:


I'm still learning Rust and spend most of my time in JavaScript. The analogy I'd use is: imagine if in the Node programming model, every API required you to use JS Promises, even at the very lowest level. Even if you could reduce the cost of creating new Promise objects, interacting with them over simple values could make the code you write more verbose. In Rust, that problem is exacerbated by the much stricter type system and the fact that you have to do cross-thread coordination.

I'm a total beginner to systems programming, and a lot of this stuff is above my pay grade. However this shakes out in the community, I'm very happy to see Rust on the way to becoming the fastest, most productive way to write high-performance web services.

crudbug 2 days ago 0 replies      
One thing I have not seen in discussion is - Work vs. Worker abstraction.

Your application work - computation logic/business rules, should be decoupled from the type of worker.

The worker can be - blocking or non-blocking - Futures/Continuations/Co-routines.

vvanders 2 days ago 2 replies      
Not sure if I missed this in the post, does this depend on any unstabilized features or can we use this today on 1.10.0 stable?

Awesome stuff btw, love the iterator inspiration.

saynsedit 2 days ago 2 replies      
Big downside is now you will have a dichotomy of functions that block using futures and functions that block at the OS level and no sane way to intermix them. Rust essentially becomes two languages. Async/await sugar doesn't fix this.

Would be great if functions could be written in a general way for both IO models and users could select the implementation at their convenience.

cm3 2 days ago 0 replies      
This is cool and validates Rust, but I just want to add that even the 2 KB stacks mentioned in sibling comments are bigger than Erlang's process stacks. In Erlang 19.0.3, even with dirty schedulers enabled, a process's default size is 338 words.
the_mitsuhiko 2 days ago 1 reply      
Wohoo. I was waiting for this. I hope that at a later point this will also mean that we get some sort of syntax support for it once it's stable and entered std.
hinkley 2 days ago 0 replies      
Back when futures and promises were a new concept to most people, if someone asked me to explain why you would want to do such a thing, my favorite example was loading images in a web browser. You wouldn't want to load the same image four times just because it appears in four places on a page, would you? Yada yada promises etc etc.

Seeing articles like this makes me feel like a circle has finally been closed.

soulbadguy 1 day ago 0 replies      
For those who are curious about how that fares against a coroutine-based approach: https://www.youtube.com/watch?v=_fu0gx-xseY
kbenson 2 days ago 0 replies      
> a simple TCP echo server;

How convenient. I've been exploring/learning Rust, and writing a simple echo server and comparing it to a reference version I've written in Perl is my first semi-trivial program I wanted to do to compare.

michaelmior 2 days ago 1 reply      
Curious if someone has tried this and Eventual[0] with any thoughts on how they compare.


eggnet 2 days ago 1 reply      
How are futures handled for open() and disk i/o?
meneses 2 days ago 0 replies      
Awesome. So to cancel a future, I just drop it! Awesome.
matthewaveryusa 2 days ago 1 reply      
I'm genuinely interested in knowing what the problem is with an event loop using epoll and a threadpool for IO that blocks but epoll can't poll. I've used proprietary event loops at 2 giant companies, libuv with C, asio with C++, and Node's async I/O, and the async IO was never the problem in terms of performance or complexity. What is the problem that's trying to be solved?
ufo 2 days ago 0 replies      
Unfortunately, it seems that you still need to use callbacks and lots of and_thens to write this async code.

Wouldn't it be possible to add coroutines to Rust instead?

ridiculous_fish 2 days ago 4 replies      
How does the zero-cost abstraction work?

Say we make a Future<Int> and then chain `.map(|x| x+1)` onto it a dynamic number of times (N). Presumably this requires storing at least N function pointers.

How can we store these N function pointers with zero cost? If it only takes one allocation, where does the (N-1)th future store its function pointers?
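The distinction behind this question can be sketched with iterators, whose adapter design the futures library mimics: a chain whose length is fixed at compile time nests into one concrete stack-allocated type, while a runtime-determined N forces type erasure, which is where boxing (and allocation) comes in. This is a sketch of the general mechanism, not the futures API itself:

```rust
fn main() {
    // Static chain: the type is Map<Map<Range<i32>, _>, _>, built up
    // at compile time; no allocation, and the calls can inline.
    let st = (0..3).map(|x| x + 1).map(|x| x + 1);
    assert_eq!(st.collect::<Vec<_>>(), vec![2, 3, 4]);

    // Dynamic N: the nested type can't be named statically, so each
    // layer is erased behind a Box<dyn Iterator>, one allocation each.
    let n = 3;
    let mut dy: Box<dyn Iterator<Item = i32>> = Box::new(0..3);
    for _ in 0..n {
        dy = Box::new(dy.map(|x| x + 1));
    }
    assert_eq!(dy.collect::<Vec<_>>(), vec![3, 4, 5]);
}
```

So "zero cost" applies to statically-known combinator chains; a truly dynamic chain falls back to the boxed form.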

natrius 2 days ago 1 reply      
What would a rough sketch of async/await syntax sugar look like implemented with Rust macros?
shmerl 2 days ago 1 reply      
So will this become the official part of the language / standard library?
hackaflocka 2 days ago 2 replies      
What's the meaning of "zero cost future" in this context? I googled the phrase and got a bunch of irrelevant material.
b34r 2 days ago 1 reply      
select is an odd term choice for what is essentially a race condition. What's the thought behind the naming of that method?
pbarnes_1 2 days ago 4 replies      
This is awesome, but I have an off-topic rust question:

Why can't we have some syntactic sugar to get rid of .unwrap()?

ben0x539 2 days ago 0 replies      
I guess it's cool that Rust is getting zero-cost futures, but they have a long way to go to catch up to C++'s negative-overhead coroutines!
mike_hock 2 days ago 1 reply      
I suppose you could say, this way of programming is the future.
Indie Hackers: Learn how developers are making money indiehackers.com
919 points by csallen  2 days ago   181 comments top 42
radarsat1 2 days ago 3 replies      
Very cool, but also a bit misleading.. I was wondering how the hell wub machine actually makes $900/month, so I read it.

> Record high was $850/mo, record low was $40, with the average month bringing in around $300. Certainly not quit-my-job money, but it helps.

Posting the all-time high instead of the average is a bit off-putting imho, considering it's supposed to represent "paychecks." This is more like getting lucky once. Anyways, haven't read the rest of the stories, and I think it's pretty inspirational. I'd just appreciate more honest revenue reporting on the front page. (It actually says $900/mo, not just $900, so it's not like I'm reading this ambiguously..)

Edit: On second thought, it's not clear to me from the description whether he means that it made $900/mo for some kind of long streak or just one month.

rafapaez 2 days ago 10 replies      
This is very cool, but can I ask you a question?

Some days ago I posted a very similar website here (http://www.transparentstartups.com/) and it barely got any votes, whereas this one is going totally viral (385 votes and counting).

Please let me know what I'm doing wrong. The only thing I can think of is that I didn't mention key words like "developers" and "money", but rather "startups" and "transparency". Or is there anything else I'm missing here?

Thank you in advance.

UPDATE: I'm learning a lot today, thank you guys for the feedback.

ryandrake 2 days ago 2 replies      
I'd be interested in learning from founders of full-time businesses that started as side projects, how they made that transition from side project to full-time.

The amount of time you need to spend on a side project to grow probably increases faster than revenue. So inevitably there will be some point in time where your level of effort is much more than what you'd call a side project, yet your revenue is much less than what you'd call a business. I bet that's a frustrating point for many, as you can't just quit your job and stop paying your bills. I'd love to hear creative ways people have gotten past that hump that don't involve mortgaging the house, selling all your possessions and depleting your life savings.

inputcoffee 2 days ago 4 replies      
This is a very useful contribution.

The only thing I would add is how many man hours the project took over what period of time.

I have to click on them and sort of guess if that site took 5hr/week for 10 weeks, or if it took 10 people working 10 hour days for 2 years.

Since you already ask for the tech stack, this would also help launch a dozen "which tech stack is more productive?" studies.

On a final note, I appreciate that you also ask about marketing. Maybe the marketing efforts and man hours can be summarized too?

malcolmocean 2 days ago 2 replies      
Founder of Complice (https://complice.co/) here, one of the sites featured (https://indiehackers.com/businesses/complice).

I'm game to answer questions that people have that weren't answered in the interview!

whamlastxmas 2 days ago 2 replies      
If I was personally making thousands a month from a web app I made, I don't think I'd want to advertise that. Partially because it motivates competition, and partially because I feel like public information about my income could later be used against me (no concrete examples other than alimony/child support). I wonder why these people aren't bothered by this.
epalmer 2 days ago 5 replies      
So I went to the site in Chrome and tried to use the back arrow to return to HN, and the site had cleared out the history queue in Chrome. I stopped there. This is bad form, in my opinion.
cyberferret 2 days ago 1 reply      
Hi all, founder of HR Partner (http://hrpartner.io) as featured on Indie Hackers here. Happy to answer any questions that anyone has. As we are 'pre revenue', I would also appreciate tips and hints with respect to marketing and B2B sales from anyone who has been there, done that. :)
nodesocket 2 days ago 3 replies      
Founder of Commando.io here (https://indiehackers.com/businesses/commando-io). Let me know if you have any questions.
herbst 2 days ago 1 reply      
That you include actual numbers is awesome. Kudos
negrit 2 days ago 4 replies      
I'm very confused by this sentence:

 sideProject.generate(8500, 'dollars').per('month');
Those are not side projects

timbowhite 2 days ago 4 replies      
Great site, considering sharing some of my projects.

> Learn how developers are writing their own paychecks.

Would love to see a forum dedicated solely to "indie hacking" for developers. ie. threads related to all the ins-and-outs of independent product development, idea validation, market research, dealing with customers, marketing strategies, founder Q&A, etc.

avipars 6 hours ago 0 replies      
Great. I wish there were charts of the monthly cash rate; it would also be nice to mention how much maintenance time and the cost of servers and such...
mettamage 2 days ago 1 reply      
This is awesome. Soon, I'm going to devour all the stories. How do the companies get to know your site and share their story before you were on HN?
johnward 2 days ago 3 replies      
I'd like to be able to subscribe via RSS
stockkid 2 days ago 0 replies      
This actually motivates me a lot. Thanks for making this.

UI nitpick: when I navigate to a project's page, if the project name is long it goes off-screen on Nexus 5X + Chrome.

Silhouette 2 days ago 0 replies      
On very rare occasions, there's a post on HN that I wish I could upvote so much it would pin at the top of the home page until everyone had a chance to see it. This is one of those rare posts: fascinating, well presented, and I can see it being practically useful for a lot of people who aren't there yet as well.
redstripe 1 day ago 1 reply      
I'd like to see an additional question asked: How did you go about designing the interface of your app?

I'm blown away by how good these and so many "show HN" entries usually look. Everything I've put online is functional but looks so obviously "designed by a programmer" that it's embarrassing to mention.

Do people first work on a prototype and then run it by a graphic artist/UI designer to make it look decent? Where do you find these people?

danr4 2 days ago 0 replies      
It's not every day I'm happy to give out my email. very nice.
gricardo99 2 days ago 0 replies      
Cool project! One forward-looking idea for your site: You could start to incorporate a community platform where people can post side-project ideas, skills they're willing to contribute, skills they're looking for, etc..
pascalxus 1 day ago 0 replies      
I think this site is awesome! Indie devs love to find out what worked and what didn't. Can I make a suggestion? Perhaps you could add a section or another site that lists how many successful businesses got their start, especially with elaborate insights on distribution, as this is the number 1 barrier to entry for most startups. Thanks!
jackmaney 2 days ago 0 replies      
Nice site, but it hijacked my back button. There's no excuse for that.
imaginology 2 days ago 0 replies      
Nice site, I enjoyed reading it.

I like the effect when transitioning from grid view to list view. Is that just CSS or is there some Javascript magic doing that?

Sindrome 2 days ago 0 replies      
Was literally lying in bed sleepless last night thinking about which side project on my list to start. Was digging through "Ask HN: How do you make recurring revenue" posts. There's always someone asking in HN. Didn't even think about researching side projects as a side project.
palerdot 2 days ago 0 replies      
Great work. Please provide an easy way to clear all the filters in one click. Subscribed and will be eagerly following.
swah 2 days ago 0 replies      
Very motivational that some apps can make those numbers - thank you for making this! (subscribed!)
was_boring 2 days ago 0 replies      
I really like this idea, and even signed up for the mailing list. I've been scratching my head for years trying to crack income diversification without a lot of money to begin with. It's good to see some success stories.
andretti1977 2 days ago 1 reply      
Great job, subscribed! But here is a simple question: what is the business model of indiehackers.com?
vatotemking 2 days ago 0 replies      
A very important question that isn't mentioned much: how do you let other people know about your business after launch?
tummybug 2 days ago 0 replies      
Great site for inspiration. Gave me the same feels reading revenuenumbers.com (posted here a while ago) did.
NinjaTrappeur 2 days ago 0 replies      
Feature request: it would be nice to be able to communicate with the indie developer within your website. Maybe though a comment section at the bottom of the page or a community FAQ.

Anyways, nice website, I will come back! Looking forward to the RSS feed ;)

shellerik 2 days ago 1 reply      
Sorted by revenue, only seven make more than my Amazon affiliate site. I'm not sure if that type of site would fit in there, but I did develop quite a bit of code for it. It's not a review site but rather a searchable product catalog.
ktu100 2 days ago 1 reply      
It would be great if there is a comment section, or private Q&A, to ask founders questions.
augb 2 days ago 0 replies      
Having the month and year of the "interview" would be helpful. Down-the-road, it will help give context to the information. A neat idea would be to allow for follow-up "interviews" later.
otto_ortega 2 days ago 0 replies      
Cool idea, I hope to build something one day that I can publish on it.
augb 2 days ago 1 reply      
I like that you can filter on solo vs. multiple founders. Very cool.
tener 2 days ago 0 replies      
Really good read, the structured Q&A format is great.
dudeget 2 days ago 0 replies      
wow, very interesting reads. Many of them make me inspired and frustrated at the same time. Inspired because of how cool the ideas are, frustrated because "why didn't I think of that?!"
robotnoises 2 days ago 0 replies      
This is very cool and one of the most handsome designs I've seen in a while.
one_thawt 2 days ago 3 replies      
Cool. Although the site breaks my back button on Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36
okket 2 days ago 2 replies      
Evil site, blocks back button.
azernik 2 days ago 1 reply      
Could we get a less link-baity title? Something like "Viable Single-Developer Businesses" or the like?
What I learned as a hired consultant to autodidact physicists aeon.co
545 points by tbrownaw  3 days ago   303 comments top 38
Xcelerate 2 days ago 12 replies      
Whenever I talk about physics (to non-scientists), I notice that people have a tendency to start veering away from the math and onto irrelevant metaphysical tangents. For instance, I'll be trying to explain the history of renormalization in quantum field theory, and someone will suggest, "Well maybe we don't really understand infinity". No, we understand "infinity" just fine. It's a concept that's clearly defined using a set of axioms that have been around for thousands of years. "Well maybe the mathematicians are wrong." I start losing my patience pretty quickly at this point. The other big one that annoys me is "Well it's just a theory". Sure, and gravity is just a theory too. If you doubt it, you're free to go skydiving without a parachute, but personally, I'm not taking any chances.
j2kun 2 days ago 7 replies      
> A typical problem is that, in the absence of equations, they project literal meanings onto words such as grains of space-time or particles popping in and out of existence. Science writers should be more careful to point out when we are using metaphors. My clients read way too much into pictures, measuring every angle, scrutinising every colour, counting every dash. Illustrators should be more careful to point out what is relevant information and what is artistic freedom. But the most important lesson Ive learned is that journalists are so successful at making physics seem not so complicated that many readers come away with the impression that they can easily do it themselves. How can we blame them for not knowing what it takes if we never tell them?

Just the other day I went to a talk by a prestigious physicist who, on top of telling only half-truths _at best_, made all of these mistakes and more. And the audience ate it up! As a mathematician and guy-who-writes-about-math-online, it makes me feel very frustrated. I also realized the difference between a physicist and a mathematician: a physicist is openly willing to compromise their principles and stretch the truth for the sake of press, while a mathematician sticks to the truth and as a result nobody cares.

mindcrime 2 days ago 4 replies      
Some of these folks could probably benefit from reading a book that I just bought: The Theoretical Minimum: What You Need To Know To Start Doing Physics.


It's a cool book... written to be relatively accessible, but is actually grounded in the real principles and math used in physics. As somebody who considers himself an autodidact of sorts (in that I'm as much self-taught as formally educated), but who has some awareness of "what I don't know" (and therefore doesn't sit around coming up with crackpot theories about quantum mechanics and what-not), I love this kind of stuff.

One of the authors is Leonard Susskind who is pretty credible. This is a book that is serious, but succinct (as you might guess from the title). Note that there is also a companion volume that is specifically about Quantum Mechanics. https://www.amazon.com/Quantum-Mechanics-Theoretical-Leonard...

All of that said, I do think it's important to note (as others already have) that "autodidact != crank". Plenty of autodidacts are just people who study physics (or whatever) because they find it interesting, but they are aware of their limitations and don't pretend to have amazing new insights that have escaped physics for decades, etc. Likewise I'm pretty sure you can find cranks who have a formal education as well.

pklausler 2 days ago 1 reply      
A very long time ago, I worked for Seymour Cray. He received a surprising amount of crank mail (and back then, it was real postal mail, not e-mail). His secretary filtered out the crank mail, spared him from it, and was good enough to pass the best stuff on to some of the engineers that would appreciate it.

I still have some of it, including a long treatise from an inmate at the county jail who had a theory of interplanetary transportation involving kangaroos whose energy output would be measured in "gigahops".

EDIT: two minor typos

forgotpwtomain 2 days ago 1 reply      
> Sociologists have long tried and failed to draw a line between science and pseudoscience. In physics, though, that demarcation problem is a non-problem, solved by the pragmatic observation that we can reliably tell an outsider when we see one.

So generally for sciences (and for compsci cranks as well) we have a direct answer because either your theories can be experimentally verified or they cannot. This is normally a solid position but it puts for example the decades of work on string theory in a bind - since they haven't produced a single verifiable result either.

So the author offers a tangential and more broadly encompassing but subjectively experiential position:

> During a decade of education, we physicists learn more than the tools of the trade; we also learn the walk and talk of the community, shared through countless seminars and conferences, meetings, lectures and papers. After exchanging a few sentences, we can tell if youre one of us. You cant fake our community slang any more than you can fake a local accent in a foreign country.

Sure, that's great you've verified membership in a social group - but that's really insufficient when you are trying to identify crank science. This sentence can also be applied to all kinds of cults and secular belief systems, hell, I think most of academic humanities fall under this as well.

Anecdotal: I know someone who is a well-accomplished researcher in their respective experimental physics field (with numerous citations). As a hobby they also have an interest in theoretical physics, where they have published several papers entirely to no response (which to my understanding would be pretty awesome if they were not incorrect). So it's not just in and out of 'professional physics'; the number of people specializing in a particular area can be very small and closed off in an even more domain-particular kind of way.

epistasis 2 days ago 1 reply      
>My clients almost exclusively get their information from the popular science media. Often, they get something utterly wrong in the process. Once I hear their reading of an article about, say, space-time foam or black hole firewalls, I can see where their misunderstanding stems from. But they come up with interpretations that never would have crossed my mind when writing an article.

This isn't just physics articles, and isn't just cranks. Most of the popular reporting on science gets things very wrong. I can't say whether physics is more correct or less correct, but I think I notice less eye-rolling and complaints from physicists about popular news articles than I do from other fields.

Something to keep in mind for people that are getting their science news from the media.

schoen 2 days ago 2 replies      
I get the e-mail for the EFF Cooperative Computing Awards


so, despite having put lots of effort into not having people make spurious claims, I hear from a whole lot of math cranks.

Two things that I find striking are many people's level of confidence that they can personally "solve the problem" (in this case, by inventing some kind of "formula for primes" that has eluded the organized mathematics world for decades), and many people's lack of understanding of what a solution would consist of (in terms of knowing that mathematical proofs exist, and being able to understand whether they have a theorem or just a conjecture).

Our situation is especially tricky because we chose a problem that experts said would require lots of computational resources and couldn't be solved by new mathematical insight, but then we didn't outright forbid people from trying to solve it by insight. So a lot of people see an exciting challenge, like "they think it will take a lot of computer time, but if I can just see the pattern, I can skip all of that!". Also, we have a large monetary reward for solutions and so people are excited by the idea that they have them and are about to receive a bunch of money.
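For a sense of why the experts said "lots of computational resources" rather than insight: the record-size primes in question are found by tests like Lucas-Lehmer (the test behind the Mersenne-prime searches that have actually claimed these awards), where all the cost is raw arithmetic on enormous numbers. A minimal sketch, with tiny exponents purely for illustration:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, the Mersenne number 2**p - 1
    is prime iff s == 0 after p - 2 squarings of s = 4 modulo 2**p - 1."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print(lucas_lehmer(7))   # True: 2**7 - 1 = 127 is prime
print(lucas_lehmer(11))  # False: 2**11 - 1 = 2047 = 23 * 89
```

Record holders run this same loop with p in the tens of millions, where each squaring touches numbers millions of bits long - the expense is in the arithmetic, not in spotting a pattern.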

I think it's true that many of the people who contact me about this are excited about mathematics in the way that people who contacted Dr. Hossenfelder were excited about physics (and, as pdkl95 said, that Carl Sagan's cab driver was excited about science). But it's still frustrating that, after we've gone to some lengths to say that you need a proof and not just a guess, and that decades of research indicate that you can't find primes of this size without significant computer time, people are still so confident that their guesses are right and so resistant to accepting that they haven't met the awards criteria.

It would be interesting to see an equivalent service for talking to mathematicians and to see what some of the people who contact us might get out of it, and whether it might inspire them to pursue more constructive things. (I always wish that our awards would motivate someone to start doing Project Euler problems or something...) If someone set up that "talk to a mathematician" service, I would probably try to send lots of people their way.

api 2 days ago 1 reply      
I've been fascinated for a long time by just how much effective autodidacticism there is in software vs. other fields. There are tons of people who have made major contributions here that are completely self-taught.

Software is uniquely suited to autodidacticism for three reasons:

(1) The tools are easy to obtain and easy to start using. Capital cost is low to non-existent.

(2) The learning feedback loop is nearly instantaneous and the results are almost always perfectly objective. Things either work or they don't. There is not much room for delusional or wishful thinking.

(3) Resources for learning are readily available and are mostly written in a style that is utilitarian and straightforward rather than cliquish and arcane.

Theoretical physics passes on point #1 until you hit the need to do serious experimentation, but it fails on points #2 and #3. There is no command prompt that will tell you in 10ms if a theory is at least rational and internally consistent, and advanced mathematics has an arcane symbology and jargon that seems almost intentionally designed to resist penetration by those outside the academic circles where it is used and taught.

martincmartin 2 days ago 2 replies      
There are many people who decide not to go into research but enter industry instead. Some of them don't have the intellectual chops, but others are turned off by the politics, long hours that professors work, spending more time writing grant proposals and managing students than doing research, etc.

Some of them have spare time, or maybe will have spare time after their kids are grown, or will be able to retire early. Then they could become citizen scientists [1], independent scientists [2], etc.

I wonder how to organize and encourage them? How to redirect or weed out the cranks, and encourage those who are motivated and can look at things from a new perspective?

[1] https://en.wikipedia.org/wiki/Citizen_science

[2] https://en.wikipedia.org/wiki/Independent_scientist

panglott 2 days ago 2 replies      
This strikes me more as a success of science journalism (people are inspired to improve their understanding of physics) and a failure of science education (intelligent, motivated amateurs receive no support outside of formal education).

Along these lines, is there a good recommended contemporary popular work on quantum physics for non-physicists?

netcan 2 days ago 1 reply      
This is a fun idea for a service.

I happened to read 3 articles this week on labour productivity. My economics is undergraduate level with 15 years of rust, but I had an idea. I thought I was brilliant for the rest of the day. But I'd like to know if my idea is an existing theorem, wrong for some reason I don't understand, or (most likely) a novel, brilliant idea that economists just overlooked.

Dunno if I'd pay $50 to find out. $34.99 tops, maybe. :)

angrow 2 days ago 1 reply      
If we need a more neutral term than "crank", rather than "autodidact," why not "outsider"?

An outsider artist is anyone who creates art not easily dismissed, despite not participating in the social and academic communities of their medium, so why can't there be outsider scientists and engineers as well?

martincmartin 2 days ago 1 reply      
Einstein famously couldn't find a teaching position after graduation, spent 2 years unemployed, then worked as a patent clerk in a patent office. According to wikipedia: "Much of his work at the patent office related to questions about transmission of electric signals and electrical-mechanical synchronization of time, two technical problems that show up conspicuously in the thought experiments that eventually led Einstein to his radical conclusions about the nature of light and the fundamental connection between space and time."

So there are two lessons here, I think: 1. people outside academia can still make important contributions, and 2. spending a lot of time thinking about other people's proposals, separating the good from the bad, can inspire a new, fruitful way of looking at things, or at least help in overcoming standard mental traps.

sevenless 2 days ago 1 reply      
Seems to bear comparison to phone sex lines, in that you're satisfying a basic human need - in this case, to be listened to.

John Baez keeps a 'Crackpot Index' score at http://math.ucr.edu/home/baez/crackpot.html

mrcactu5 2 days ago 2 replies      

> Many base their theories on images, downloaded or drawn by hand, embedded in long pamphlets. A few use basic equations. Some add videos or applets. Some work with 3D models of Styrofoam, cardboard or wires.

Actually, these are perfectly good ways of communicating ideas and solving problems.

keithpeter 2 days ago 0 replies      
"They are driven by the same desire to understand nature and make a contribution to science as we are. They just weren't lucky enough to get the required education early in life, and now they have a hard time figuring out where to even begin."

Any chance of on-boarding via experimental work/data analysis in some way like in Astronomy?


bigger_cheese 2 days ago 0 replies      
"Many of them are retired or near retirement, typically with a background in engineering or a related industry...After exchanging a few sentences, we can tell if you're one of us. You can't fake our community slang any more than you can fake a local accent in a foreign country."

This matches my experience. During the final year of my engineering degree I decided to take a third-year particle physics elective because it sounded interesting. The course had no prerequisites, but it probably should have. I remember showing up to the first lecture and being one of the only non-science students in the theatre. The lecturer started talking about Hamiltonians, Fermi-Dirac statistics and wave functions, and it all just went completely over my head. There was a whole bunch of "foreign" concepts that were assumed. I ended up needing to check out a bunch of physics texts from the library, and over the next few weeks I had to teach myself the 2+ years of physics knowledge the rest of the class was familiar with. I passed the course, but it was a lot of work for what was supposed to be an elective.

TylerH 9 hours ago 0 replies      
"None of us makes good money"

$150 an hour is very good money, especially if you have interested parties sending e-mails that are "piling up" in your inbox.

Get outta here with that nonsense.

drauh 3 days ago 1 reply      
I've had my share of trying to convince someone that a perpetual motion system they described did not conserve energy or momentum. They refused to believe my assertion that momentum and energy are conserved quantities.
dandare 2 days ago 0 replies      
>My clients almost exclusively get their information from the popular science media. Often, they get something utterly wrong in the process. Once I hear their reading of an article about, say, space-time foam or black hole firewalls, I can see where their misunderstanding stems from. But they come up with interpretations that never would have crossed my mind when writing an article.

Can you see the parallel with democracy? Autodidacts cannot really harm the field of physics, no matter how naively wrong they are. But we have voting rights, and we believe we understand the complex issues in sociology, justice, economics... I am depressed.

erroneousfunk 2 days ago 5 replies      
I'm hardly a physicist, but I have a degree in "general engineering" (long story about how I managed to escape specialization there) and a master's in software engineering, and I've taken a few advanced math courses, including partial differential equations, computational theory, scientific computing, and a math-heavy course on relativity. I also spent a year and a half working for a Harvard physics professor, alongside his team of grad students. So, while I can't "do physics" I think I know enough to understand a little about what it takes to _be_ a physicist, and appreciate the work that they do.

My husband and I were listening to a radio program (http://www.thisamericanlife.org/radio-archives/episode/293/a...) about a man convinced that he had found a mistake in Einstein's theory of relativity, and trying to communicate this idea to a physicist (futilely, obviously). My husband, who was a music major for a year before dropping out of college, and I started getting in an argument about this episode that was so heated, I felt like we were listening to two completely different stories!

I kept insisting that the advanced math and education wasn't just some funsies shibboleth the physicists had to keep the hoi polloi out of physics -- the devil's in the details and the man in the story didn't even understand the big picture correctly. My husband was angry and insulted that the physicist dismissed the man's theory out of hand, and felt that anyone could make a contribution to physics with perhaps a little help from a calculus book -- just look at history! I thought the hero of the story (if there was one) was clearly the physicist, while my husband was solidly on the side of the electrician with a little learning, trying dangerous things.

I was really shocked. We don't usually fight like that, and especially over something so seemingly trivial, but, in retrospect, I thought that it displayed huge tension in society as a whole, between academics and non-academics. We hear so much about the "one percent" and income-based class distinctions, but relatively little about academic barriers in society, whether real or artificially imposed. Should physics open up in a real way (not just "pop science" articles and occasional books for laymen)? Should we put a stronger "academic" focus in early public and high school education? Should we provide more resources for "physicists who just need a little help with the math" whether they're right or wrong?

Although I was staunchly in favor of the hallowed halls of academe, and it still holds a special place in my heart, I suspect that the correct solution lies somewhere in the middle. Anyway, fantastic article, and it really brings up an important point that is too seldom addressed, by physicists, or society as a whole!

evanwolf 1 day ago 0 replies      
Heh. What I learned as a product manager by watching 100 hours of Hallmark movies. https://medium.com/product-hospice/100-hours-of-hallmark-mov...
dekhn 2 days ago 0 replies      
While I agree with plenty of what the author wrote, I have seen plenty of people who are great with math, can speak the language, know how to promote their results in pubs and conferences, and yet are still completely and totally wrong.

My best example is a smart physics graduate that I went to grad school (in biophysics) with. An open problem at the time was how motor proteins couple the energy in ATP hydrolysis to directed motion. I said one day, "hmm, maybe it works like this..." and she said, "oh no, my advisor and I proved that mechanism was impossible."

A few years go by, we're getting ready to graduate. I ask her, "so, since you spent the last 7 years studying motor proteins, how do they work?" And she told me it was the mechanism I had proposed. I said, "but you disproved that!". And she said, "well, then I collected data, and it turns out our assumptions are wrong."

I constantly end up arguing with quantitative people in my own old field. For example, I used to argue with people who did GWAS; they insisted all their stats were great and perfect, then Ioannidis and others showed their stats were abysmal and that they were massively overconfident in their results.

This is not to say all the cranks are right- they are almost certainly wrong. Anybody who attempts to get around the second law of thermodynamics is going to lose, unless there is something truly and fundamentally wrong with statistical mechanics.

lcvella 2 days ago 2 replies      
That is why I believe society is not getting what it is paying for in physics and mathematics. Much of the funding invested in physics comes from taxpayers' money, and all the knowledge produced is useless if we can't get it back and, with due dedication, understand it. I always felt most of modern physics knowledge is completely inaccessible to me.

The latest physics book I could read and understand was one by Einstein himself, "Relativity: The Special and General Theory", which is 100 years old.

When I set out to learn quantum mechanics, I couldn't find good, accessible (cost-wise) material, and the supposedly good book recommended by a physicist friend of mine cost more than $100 on Amazon (about 1/3 of the minimum wage in the country where I live). I ended up buying the much cheaper Indian print of the book. But there was no chance I could read it at the time because of my lack of a calculus foundation, which led me to watch the entire Udacity course on differential equations.

Thanks to that, I had the bare minimum to be accepted into a PhD program in mechanical engineering (I have an MSc in Computer Science) to work on computational fluid dynamics. Now, halfway through an engineering PhD, I believe I am (more) able to tackle the QM book (look at all it took me!)

That is why I deeply value the effort of Udacity, Coursera, Khan Academy and such, because without real efforts to bring actual knowledge to public, in an accessible way (both cost and didactic-wise), modern physics and mathematics are a waste of money on private clubs.

atemerev 2 days ago 3 replies      
This guy has secured himself so much good karma. Wow. Thank you, from all us autodidacts.

(I am trying to learn some astrophysics as a hobby. Amateur science is currently mostly frowned upon).

jxy 2 days ago 2 replies      
It is a good form of communicating science. Seriously, more professors should do it as a contribution to the community. Perhaps starting with more reddit AMAs?
fritzo 2 days ago 0 replies      
I wish there were services for this in other areas. I'd pay to ask embarrassingly naive questions in fields where my knowledge-to-interest level is near zero, and where reading the internet has proven ineffective.
paulcole 2 days ago 0 replies      
The comments here are full of people who should be paying $50 for 20 minutes of a physicist's time. Maybe we should start taking up a collection and seeing if we can get a bulk discount.
jsprogrammer 2 days ago 1 reply      
>Sociologists have long tried and failed to draw a line between science and pseudoscience. In physics, though, that demarcation problem is a non-problem, solved by the pragmatic observation that we can reliably tell an outsider when we see one.

Sorry, but this is an admission of pseudoscience. All apparently in the name of committing an ad hominem.

It is commendable that the author is helping others to get answers to their questions, but this article indicates that there are substantial issues to be dealt with.

Yenrabbit 2 days ago 0 replies      
Semi-relevant xkcd: http://xkcd.com/1486/
emmelaich 2 days ago 0 replies      
I think there is a place for speculation in science as long it is clearly understood as such.

There used to be a journal for it; it was fun reading.

Speculations in Science and Technology
http://link.springer.com/journal/volumesAndIssues/11216

It only ran for two years, 1997-1998.

hardlianotion 2 days ago 0 replies      
The bit that I take from this is a scientist who is taking on a mission to explain himself fully, to people who make an effort and give a damn. Very easy to call people like this cranks, and rather hard to make something positive come out of the engagement in many cases.

I suspect the fact that money is involved goes some way to making this initiative a success.

erdevs 2 days ago 1 reply      
I wonder if this is changing with things like online education and the wealth of real information and in-depth knowledge on the internet. It seems self-teaching is much, much more viable today and I imagine that younger generations of autodidacts might not be so ill-informed on the whole.
dredmorbius 2 days ago 0 replies      
This raises any number of points and questions.

How effectively can we communicate (and pass on to new generations) complex ideas at all? There's an essay on mathematics PhDs, noting that in any given seminar of a half-dozen or so people, you may well have the only six people in the world who can even understand the topic in the room. What does it mean to "know" something if only one billionth of the global population can even grasp it?

There's the question of how to assess the quality of knowledge within a field. How can laypeople, including the politicians, voters, and taxpayers who ultimately pay for research, education, and otherwise support many of these operations, even assess what it is they're paying for and receiving?

There's the matter of media quality and access. I make absolutely no bones that I'm a book thief, and praise Alexandra Elbakyan daily (along with Brewster Kahle of the Internet Archive, Reddit's /r/scholar and BookZZ.org) for providing access to the raw materials of my own research. (Also various libraries, though they're far less convenient or accessible.) Information is a public good. It has massive positive externalities, it's nonrivalrous, and it needs to be disseminated and accessible in order to be useful. And yet we lock it away. Which is among the reasons why autodidacts rely on such poor alternatives as the popular press.

Blaming journalists for poor descriptions of scientific concepts doesn't fly when it's scientists themselves who are kicking out these concepts. OK, to paraphrase other unfortunate social slogans, not all scientists. But many. And yes, reporters and editors should absolutely be called to task for sloppy writing and whitewashing crud.

(Ah. I'm remembering a panel, Christopher Hitchens was among the presenters, where a female writer mentioned her experience writing for a fashion magazine on social topics -- the original work was hers, but after it had been washed through many more hands, it was anything but. I think in here: https://m.youtube.com/watch?v=fkLX58ZWbWw Sorry, that's a 2 hour video, I'll see if I can't narrow down the timeframe. Probably Katha Pollitt. The relevant comments concern writing for Glamour, and occur at 14m30s.)

And finally, as I've raised the issue with @schoen below, there's the question of how best to filter out the sensible cranks from the nuts. Finding good new ideas amongst the many bad ones, and sorting out how to keep from having to relitigate bullshit, is a Very Hard Problem.

rupellohn 2 days ago 0 replies      
I recommend 'Physics on the Fringe' for anyone interested in this topic


lifeisstillgood 2 days ago 0 replies      
This is an awesome example of real down in the dirt science communication.

I'm impressed

Analemma_ 2 days ago 7 replies      
This is a fun article, and also a useful reminder that although Hacker News tends to mythologize autodidacts, the boring reality is that in disciplines other than programming, the overwhelming majority of them are useless cranks.
ARothfusz 2 days ago 0 replies      
This is exactly developer support for "open source" physics.
Machine Learning and Ketosis github.com
615 points by ddv  1 day ago   273 comments top 57
ghshephard 1 day ago 5 replies      
What's interesting about this post isn't the actual diet advice, but the guidance to track and see what works for you. Different people react to fasting differently - some continue to burn calories and drop weight, some just go into starvation mode, have their BMR crash through the floor, and end up exhausted all the time because their body is fighting like crazy to conserve energy.

Likewise - some people, when they start to eat carbs, see their BMR ramp up, are full of energy, and end up running 5-10 miles a day to burn that energy off, and are pumped the rest of the day.

Also, focusing on weight to the exclusion of everything else is really harmful. It's pretty easy to lose 40-50 pounds and end up much less healthy than you were before if you aren't careful. Having some sense of your VO2 max, your strength, endurance, flexibility, etc. is sometimes as important, if not more important, than what your weight is to +/- 20 pounds.

Everybody is somewhat different in how they'll react to various diet and exercise regimes. Understanding that, and taking a bit of time to watch how your body responds, is the important insight here.

onetwotree 1 day ago 5 replies      
I'd really like to see this kind of analysis applied to a larger sample.

If someone were to put together a little app to make this sort of data collection trivial and upload it to a central (preferably cheap) repository, would people be interested in contributing their data in exchange for analysis?

Also, since this is HN, is anyone interested in building something like this? I've got an hour or two a week of spare time to chip in.

Unrelated question, mostly for OP: do you know of any publicly available databases of GI/GL info? It's really important for people with type 1 diabetes (a subject rather dear to my heart), and could also be useful for OpenAPS stuff (https://github.com/openaps).

gtrubetskoy 1 day ago 6 replies      
I don't have a weight problem, I learned about ketosis researching natural ways for improving focus and concentration, and somehow came across "bulletproof coffee". (I'm only using BP for lack of a better name and I don't endorse the stuff they sell online).

To my utter surprise, something as stupidly simple as coffee blended with butter and coconut oil (or MCT oil) first thing in the morning did unbelievable things for mental productivity. After much googling I learned that this is not a fluke, and that there is real science behind it - beta-oxidation, etc., all that good stuff.

What's most incredible to me is that (1) I didn't know about it - I always thought glucose was the only fuel (and I consider myself fairly knowledgeable as far as basic physiology is concerned) - and (2) that sugar, especially the industrially produced kind, is about as close to being the root of all evil as it gets, and there is nothing wrong with fat at all.

benkuhn 1 day ago 2 replies      
It's astonishing (and really awesome) that they were able to extract such a strong signal from daily weight swings! This makes me a lot more optimistic about the possibilities of quantified-self-type stuff.

Three things I'm really curious about:

- How significant are the estimates of lifestyle factors? Do you have p-values? If you bootstrap resample, how much do the rankings change at the extremes?

- How much cognitive overhead did it impose to collect the data for this? Did you put a lot of effort into designing the tags beforehand or making sure you weighed yourself at a consistent time?

- It looks like the predicted delta from going from a "nosleep" day to a "sleep" day is about 1.4 pounds (sleep coef minus nosleep coef). That seems fishy, or at least like it will stop working fairly soon, because you can't actually lose 1.4 pounds/day sustainably. Is it possible there's something weird going on with the data, or that those variables don't have the obvious meanings?
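The bootstrap check asked about above can be sketched in a few lines. Everything here is invented for illustration - the tag, the toy data, the effect size - and it uses a simple mean-difference estimate per tag rather than whatever regression the post actually fits:

```python
import random
import statistics

random.seed(0)

# Toy data, invented for illustration: 200 "days", one binary lifestyle tag
# (say, "nosleep"), and each day's weight change in pounds. In this toy,
# tagged days are constructed to average about 1 lb lighter, plus noise.
n_days = 200
tag = [random.randint(0, 1) for _ in range(n_days)]
delta = [(-1.0 if t else 0.0) + random.gauss(0, 0.5) for t in tag]

def effect(tag, delta):
    """Mean weight change on tagged days minus untagged days."""
    on = [d for t, d in zip(tag, delta) if t]
    off = [d for t, d in zip(tag, delta) if not t]
    return statistics.mean(on) - statistics.mean(off)

# Bootstrap: resample whole days with replacement and re-estimate the
# effect; the spread of the re-estimates shows how stable a coefficient
# ranking would be under resampling.
estimates = []
for _ in range(1000):
    idx = [random.randrange(n_days) for _ in range(n_days)]
    estimates.append(effect([tag[i] for i in idx], [delta[i] for i in idx]))

estimates.sort()
lo, hi = estimates[25], estimates[974]  # rough 95% percentile interval
print(f"effect {effect(tag, delta):+.2f} lb/day, 95% CI [{lo:+.2f}, {hi:+.2f}]")
```

If an interval like this straddles zero for some tag, that tag's position in a ranked coefficient table shouldn't be trusted; with the real data you'd resample days and refit the full regression rather than this one-tag difference.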

tunnuz 1 day ago 8 replies      
This is all good if your goal is weight loss. However, weight loss doesn't necessarily mean higher fitness. Glycogen is fundamental if you do sports, and exercise is a major ingredient in getting fit. If I read correctly, exercise was not a big part of your experiment; how would you suggest modifying the experiment to accommodate one's exercising needs?

On a side note, I reached a similar conclusion on the role of "carbs at night", sleeping, and fats, and I read this interesting article https://aeon.co/essays/hunger-is-psychological-and-dieting-o... on the importance, for effective weight loss, of feeling satisfied (I believe there is also a reference to the relationship between eating fats and feeling satisfied).

fernly 1 day ago 3 replies      
Just for what it's worth, there are several Soylent-like meal replacement products that are explicitly formulated for ketogenesis. This could be an aid for a person who wants to try a keto diet without having to research a lot of new recipes.

Biolent has a keto variant: http://biolent.ca/

Keto Chow: https://www.thebairs.net/product-category/ketochow/

Keto Fuel: http://superbodyfuel.com/shop/keto-fuel/

KetoSoy: https://www.ketosoy.com/

PrimalKind is "paleo": http://primalkind.com/

Edit: added Keto Chow, which I forgot on first pass!

cel1ne 1 day ago 3 replies      
I lost 15 kg (33 lb) over a period of 7 months.

* Stopped drinking juice, coke etc. completely

* Ate sweets strictly only once a week (like one cake every Sunday)

* Ate carbs mainly at lunch when doing sport afterwards. In the evening I ate carbs too, but much less.

There's one thing that all these fasting-guides and tips fail to mention:

The fastest way to lose weight is to raise your resting energy consumption, and the fastest way to do that is to gain muscle mass by training.

frankus 1 day ago 3 replies      
Totally anecdotal, but I want to second the recommendation of a longer fasting period.

I switched to a strict-but-not-religious "no food between 7 pm and 11 am" system (with exceptions for weekends and social occasions).

Within a few months I was down 15 pounds (~182 to around ~167) and had shed 4 inches off my waist (~34 to ~30). I'm about 5'10" and 41 years old.

It's definitely helped with physical activity (mostly parkour/free running) and I look better. It's also more convenient than what I was doing before, since I don't have to cook breakfast.

The only negative side effect (possibly unrelated) has been that I need a much cooler sleeping environment to be comfortable.

The only thing I would add is that I'm starting to (upside-down) plateau at around 165 and (what my scale says is) 20% body fat. I would love to lose another 5-10 pounds but it'll probably be a slow process.

philip1209 1 day ago 2 replies      
I've been doing ketosis for about 6 months, after having done it twice in the past. From my heaviest to today, over the last 2 years, I'm down 50 pounds.

The data in this post was cool. I found Dr. Peter Attia's analysis to be one of my favorite resources. He's a medical doctor who was an overweight endurance athlete, then began doing ketosis. I appreciated an honest scientific analysis of the benefits and drawbacks of ketosis.

His blog:


As an aside, I think that our attitude toward insulin from a public health perspective is going to change a lot in the next few years.

davidf18 1 day ago 0 replies      
In males, testosterone decreases over time, which causes muscle mass to decrease. Building up the large muscles through resistance exercise or any exercise (eg, gluteus maximus -- thigh) increases muscle mass. Increased muscle mass increases the basal metabolic rate (BMR), which is the number of calories consumed even while at rest, even while sleeping.
ryeguy 1 day ago 3 replies      
This is delightfully nerdy and overly analytical, but I can't believe someone would go to the extent of creating this yet never think to look for counterarguments to their dieting regimen. Doing so would have given him the answer much sooner.

Trying to figure out how the consumption of certain foods correlates with weight gain/loss is a waste of time, because what matters is simply the caloric content, not the macronutritional content (e.g. protein vs. carbs vs. fats). There are dozens of studies showing that only calories matter for weight loss, including comparisons with low-carb, low-fat, and more [1].

The author is acting like each food has arbitrary properties that make it good or bad for weight loss. It's a good reason to play with ML, but it's easier to just count calories. I hate seeing people go down the "good food/bad food" path of thinking, because they end up overanalyzing the shit out of everything they eat for no good reason.

The glycemic index doesn't realistically matter either, for a ton of reasons. It hasn't been taken seriously as a useful marker when choosing foods for quite some time now. It's not a reliable indicator of anything. I posted a summary a few years back that debunks all the insulin/GI spike voodoo [2].

Low carb diets work, but they only work because they're a trick to get you to reduce calories. Look at what you're eating and remove all the carb foods. Notice how much your calorie intake dropped. It's a simple way of losing weight without counting calories, but that's it. The weight loss on keto, atkins, etc have nothing to do with carbs, and everything to do with calorie restriction.

It's worth noting that low carb diets have great health benefits, however [3].

[1]: http://examine.com/nutrition/what-should-i-eat-for-weight-lo...

[2]: https://reddit.com/r/Fitness/comments/j853z/insulin_an_undes...

[3]: http://examine.com/nutrition/are-there-health-benefits-of-a-...

sangd 1 day ago 0 replies      
I really like your write-up, because I went through a period when everything I read was so misleading, even words from doctors. That's the time when I tried very hard to improve my health (I had gained about 30 lbs since I came to this country 15 years ago). To do that, I ate less fat, avoided carbs, got a lot of exercise (1-3 hrs a day, at least 5 days a week), ate oatmeal, even took statins. And they all failed, one after another. I went through a lot of the things mentioned here. Finally I decided to forget about everything doctors and researchers said. I looked at my parents' diet and how I was raised, and started slowly from there. My health then improved a lot. I'm glad that you wrote something here for everybody to read. But there's one point I would like to add to the recipe: improve your mental state by listening to the body (meditation, yoga, and brisk walking are good methods), eat only when the body feels hungry and stop eating when it feels full, and eat the food that feels good after eating (you will develop your own list). I lost about 10 lbs and haven't gained or lost any weight for the last 3 years.
hoodwink 1 day ago 4 replies      
As a long-term ketogenic eater (10 years), here are my top simple tips. Unfortunately they were not gleaned using machine learning.

1. Watch your protein. Most people when first going keto will eat too much protein and not enough fat. Protein has an insulinogenic effect when eaten in quantity. Keep protein below 8 oz per meal. Don't be afraid to eat more fat.

2. Avoid cheese. Yes, it's technically low carb, but it repeatedly throws me and my girlfriend off (also a low carber).

3. Avoid nuts. Yes, like cheese, nuts are delicious. But they're a slippery slope. Life will be easier if you avoid them.

elo_ 1 day ago 1 reply      
I collect some of my own data in 1/0 form in regard to whether I was productive the day before as well as a few other things (I have a google form for myself that I fill out each morning). I only have a month of data but here is what it spat out:

 FeatureName  HashVal  MinVal  MaxVal   Weight  RelScore
 ^dreams        24546    0.00    1.00  +0.0705    47.11%
 ^shower       215555    0.00    1.00  -0.0239   -15.96%
 ^exercise     190069    0.00    1.00  -0.0350   -23.41%
 ^vitamins     252959    0.00    1.00  -0.0442   -29.56%
 ^write        129676    0.00    1.00  -0.0687   -45.90%
 ^publish       12600    0.00    1.00  -0.1496  -100.00%
Which is to say that when I write and publish I also have the willpower and combined other factors to lose weight.
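(For anyone puzzling over the RelScore column: it looks like each weight scaled by the largest absolute weight. A rough sketch under that assumption -- the small discrepancies against the table above come from the displayed weights themselves being rounded:)

```python
# Hypothetical reconstruction: RelScore appears to be weight / max|weight| * 100.
weights = {
    "^dreams": +0.0705, "^shower": -0.0239, "^exercise": -0.0350,
    "^vitamins": -0.0442, "^write": -0.0687, "^publish": -0.1496,
}

def rel_scores(weights):
    """Scale each weight by the largest absolute weight, as a percentage."""
    biggest = max(abs(w) for w in weights.values())
    return {name: 100.0 * w / biggest for name, w in weights.items()}

scores = rel_scores(weights)
# scores["^publish"] is exactly -100.0; scores["^dreams"] comes out near 47.1
```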

Interestingly I was tracking dreams because I made various changes all at once:

 - Philips Hue bulbs
 - new mattress
 - new exercise routine (up from nothing)
 - started taking vitamins
 - stricter on my diet
And was wondering what the cause might be.

When I say "dreams" I mean - "did I wake up remembering vivid dreams?". I wonder now if it's related to caloric surplus.

I also have minutised step data, nightly minutised sleep data and hourly mood self-reported data that I might try to throw in to the system and see what it says.

raffy 1 day ago 3 replies      
I guess if we're sharing fitness plots, here's 1300 measurements and 7 DXA scans:


bmarkovic 1 day ago 1 reply      
The author is wrong that sleeping more just gives less time for eating. Truth is that sleeping more reduces insulin resistance, increases testosterone production in males and puts leptin under control. All three reduce carb craving, push us towards meat and fat and increase natural fat burning.
samuelbrin 1 day ago 3 replies      
I might have missed it, but it doesn't look like ketone level was one of the factors being measured. You can actually buy ketone pee strips at any drugstore for cheap. It would have been an interesting thing to track, since conventional wisdom says that ketosis is binary (you're in it or you're not) and the actual ketone level doesn't affect the weight loss. I'm sure this has been put to the test in some experiments already, but maybe applying machine learning could teach us more and validate/invalidate this hypothesis.
Borkdude 1 day ago 3 replies      
Warning: unpopular opinion here. Long-term low-carb dieting and ketosis is not without risk. Yes, you will see weight loss, but what about other health markers? Check out some of the information here: http://nutritionfacts.org/topics/low-carb-diets/ What has worked for me was a plant based diet without added oil and processed foods: https://medium.com/@borkdude/tl-dr-of-my-long-term-weight-lo... Been doing that for five years now. Never hungry, still happy with it.
zzleeper 1 day ago 1 reply      
> The 'stayhome' lifestyle, which fell mostly on weekends, is a red herring; I simply slept longer when I didn't have to commute to work.

Is it?

First, whatever method you use should already account for the fact that sleep happens together with 'stayhome'. Even basic regressions account for that.

Second, staying at home means your eating binges are constrained by what's around you. If it is healthy stuff it might mean weight loss; if it is bread and chips, the opposite.

lucidguppy 1 day ago 4 replies      
Carbohydrates do not make you fat. I am eating a high carb vegan diet and have lost 63 pounds since last October.

My weight loss: http://imgur.com/a/cGb4X

I do not restrict the amount of food I eat. I snack and have big meals.

Carbs do not make you fat. Eating high caloric density foods makes you fat.

More resources:http://nutritionfacts.org/



iopq 1 day ago 1 reply      
Of course you're going to gain weight when you eat carbs. Stored glycogen is mostly water by weight, so if you store glycogen you gain weight, up to 20 lbs from completely depleted.

So let's say you eat low carb and train very hard for a week. Your muscles will be depleted, your liver will be depleted. You see that you lost 10 lbs! Great! But is it? Maybe you lost one pound of fat and 9 pounds of water. Then you eat NOTHING BUT carbs for three days and you gain 20 lbs. Oh no! But actually it's your muscles expanding their capacity to store glycogen (since you trained hard and depleted yourself they will regain even more glycogen than before). You may have not gained ANY FAT AT ALL!

I have had great success dieting with medium fat medium carb diets. I try to keep between 180lbs and 220lbs (I'm 6'4"). I'm saying that this kind of tracking is oversimplifying the issue of fat loss - you need to see that you're losing fat tissue which involves at least caliper measurements.

timeu 1 day ago 0 replies      
Sorry for the crosspost/copy-and-paste:

For me personally this worked quite well for reducing bodyfat:

Intermittent fasting[1], lifting weights 3 times a week[2], and being on a cutting regime[3] (cutting on the rest days with low carb, and loading on the workout days with more carbs).

In the beginning IF was quite difficult, but after a while the body gets used to it, and I also tend to have fewer cravings during the day. On the loading/workout days I often have a hard time getting enough calories because I feel full. I am using MyFitnessPal to track what I eat, but it's more about the macros and not so much about the exact calories (though it's also good for getting a feeling for how many calories different kinds of food have).

[1] http://www.leangains.com/2011/03/intermittent-fasting-for-we... [2] http://stronglifts.com/ [3] http://www.lgmacros.com/standard-leangains-macro-calculator/

sytelus 7 hours ago 1 reply      
The observation that "no breakfast" is the #1 way to lose weight, by extending the fasting period that includes sleep, is pretty astonishing. Everything I have read says skipping breakfast is one of the contributors to weight gain.
laretluval 1 day ago 1 reply      
Interestingly "gioza" and "gyoza" as factors come out with opposite signs. I guess that provides a heuristic confidence interval for the weights.
agentgt 1 day ago 0 replies      
It is funny but I had pretty much the exact weight loss as the OP had doing intermittent fasting. Sadly I wasn't as rigorous on the recording of weight loss but I went from 195 to 175 over roughly the same time period as the OP.

One thing I found is writing down a plan seems to really help you stick with it.

I finally wrote down my workout routine after decade of on and off training. I always logged my workouts but I never wrote the overall plan down. If you are interested in my routine it is here (it has very little to do with diet as I was going to write a follow up some day): https://gist.github.com/agentgt/f93b78dbe13870a6d0a1

I have never publicly posted the routine so if you feel the need to trash it I suppose you can do so in the comment area of the gist but routines are pretty personal anyway (to the OPs point).

clamprecht 1 day ago 0 replies      
This is awesome. I'd love to see something similar from someone who (intentionally) gained muscle mass. Is it really about high-protein + high-carbs + workout + sleep, or is there a more optimum diet for this.
escoz 1 day ago 1 reply      
If anybody is reading this and curious about Ketosis, I'd recommend Taubes book (https://www.amazon.com/Why-We-Get-Fat-About/dp/0307474259). It's a great review of scientific studies done over the years.

I read that 4 years ago, spent another 4 months reading the listed studies, convinced myself it was a good plan, and lost 40 pounds with no exercises. I still do LCHF after all these years, and likely will never go back to a traditional diet, it feels great.

thisisananth 1 day ago 3 replies      
This matches my experience of weight loss using a high fat diet. I started it after hearing Sarah Hallberg's TEDx talk: https://www.youtube.com/watch?v=da1vvigy5tQ&feature=youtu.be For a vegetarian, starting and continuing a high fat low carb diet is difficult. Are there any resources for more vegetarian recipes for high fat low carb food?

The article was very impressive. I liked the graphs and presentation.

cpplinuxdude 1 day ago 1 reply      
A word of advice to those attempting a Ketogenic diet: get your blood work done on a regular basis.

If possible, get heart scans done once a year.

It's easy to slip into a sloppy version of the ketogenic diet, at which point you're consuming lots of unhealthy fats and carbs; your triglycerides and cholesterol could shoot up and put you in shit street.

This is a strict diet.

notyourloops 20 hours ago 0 replies      
When I ditch the carbs and eliminate grain from my diet, there's no denying that I look better and feel better. I've cycled on and off several times now and there's no way to get around it: The carbs and grain have to go and they have to go forever.

If you can eat them, no problem. However, there are many of us who can't, and we're going to talk about how to operate with that as a baseline.

zachrose 1 day ago 1 reply      
Among words correlated to weight-gain, the least correlated of these is cheeseburgers! Be still my heart (not literally)!
vinnyp 1 day ago 0 replies      
The challenge for me is always knowing what to eat to help me accelerate my weight loss. I need structure. If I don't have structure, it's easy for me to eat things I shouldn't. I started a weight loss program 5 weeks ago called "Ideal Protein," which uses the Ketosis method. They supply breakfast, lunch, and a snack. I only need to make dinner (plus 2 cups of veggies for lunch). I love how easy it is.

I'm wrapping up week 5 tomorrow and I'm down 25lbs (~14% of my body weight)! They promise 2-5lbs of fat loss a week. I generally drop around a pound a day, but I'll get stuck a few days here and there. This method really works for me.

The best part is how fast you lose it. When I did South Beach years ago, it took me months to hit my goal. I have 14lbs left now and I should be able to hit my goal by Labor Day.

Give it a try.

smokedoutraider 22 hours ago 0 replies      
A couple of years ago I did keto for 8 months. During that time I lost 31 kg. While this diet was amazing for losing weight, it really affected my concentration negatively, to the point where I was forced to drop the diet because it was simply interfering with my performance at work. It's a real shame though, as I've never been able to find a diet as easy and effective since.
kensai 1 day ago 0 replies      
Remarkable work. I am really pleased about the contribution of "sleep" to the whole weight loss experience. I have reached similar anecdotal conclusions; I bet this is similarly unsurprising for many people.

I am not entirely sure sleep contributes only as "a fasting period". Sleep also means we are relaxed. All the other times of the day (when we are awake) we are relatively stressed, meaning more stress hormones (ie cortisol) which are known to be contributing to weight gain.

criddell 1 day ago 0 replies      
One thing I've never been able to get a clear answer on: when you "lose" fat, how long does it take for the number of fat cells to go down? As I understand it, at first fat cells just deflate (not the right word) and will easily plump up again, and that's one reason why it's hard to keep weight off. What's involved in actually ridding the body of the extra cells (outside of mechanical means)?
d23 1 day ago 1 reply      
I don't get it, and I've really tried. I'm referring to the keto diet (though this person's approach is obviously extreme overkill). I absolutely love meat, but when I go that low on carbs, I feel like crap. And sure, some people say if I just wait 4 weeks to 6 months my body will adjust, but... why bother?

Get a calorie tracker like my fitness pal, set a goal weight loss, stick to it, and 1 lb per week will be a breeze. You don't need a fad diet. For me, meticulously paying attention to what I was putting into my body and putting a number next to it made it a no brainer. It was almost gamified at that point.

raverbashing 1 day ago 2 replies      
It's a great experiment

However, I think the biggest issue with this is that it's considering only a 1-day window. Weight can vary a lot from day to day (bladder and bowel contents, muscular glycogen, etc.).

brandon 1 day ago 0 replies      
I appreciate that the author shared his data because it's interesting to compare notes. I embarked on a strict ketogenic diet in 2012 from a significantly higher initial weight and observed a very different pattern of weight loss: https://i.imgur.com/mOt6P.png

I hit a loss plateau around 180 lbs that I couldn't break through until I began eating on an 8-16 fasting schedule like Ariel mentions in his "Further progress" section.

I gave up on the diet during 2015 and have since regained a significant amount of weight, but I suppose that's just an opportunity to apply some of the tracking techniques in this article to my next foray.

emptyroads 1 day ago 0 replies      
I'm pretty sure this is just an example of a "placebo diet"[1] at work.

[1] http://www.placebodiet.org/

stickydink 1 day ago 1 reply      
Semi-related. For those who want to track their weight daily, efficiently, try Fitbit Aria (or one of the cheaper no-brand WiFi scales). Hook it up to Fitbit app, connect that to TrendWeight (https://trendweight.com/), then throw away the Fitbit app.

Whatever your goals are, however serious you are, I find it a great way to keep track. I just stand on this thing once a morning, and forget about it. And now I have a near-daily record, smoothed out with a weekly moving average.
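(To make the smoothing concrete, here's a minimal trailing moving average sketch. TrendWeight itself follows The Hacker's Diet's exponentially smoothed trend line, if I recall correctly; this is just the plain-window version:)

```python
def moving_average(weigh_ins, window=7):
    """Trailing mean over the last `window` daily weigh-ins.

    Early entries average however many days are available so far,
    so the output has the same length as the input.
    """
    out = []
    for i in range(len(weigh_ins)):
        chunk = weigh_ins[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# moving_average([180, 182, 181, 183], window=2)
#   -> [180.0, 181.0, 181.5, 182.0]
```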

ifemide06 1 day ago 0 replies      
If you're looking to extend this further, I'll be privileged to work on this with you. Another great opportunity to be part of something! yay!
bluetwo 1 day ago 0 replies      
The impact of sleep/no sleep. Wow.
mbrundle 1 day ago 2 replies      
Very interesting little study. The results seem to make sense, and I'm impressed that his model can learn what causes his weight swings given the low-resolution delta-weight data he collects.

The author uses vowpal-wabbit to train his regression model. Anybody know what learning algorithm it uses (e.g. random forest)? Here's the link: https://github.com/JohnLangford/vowpal_wabbit/wiki

ermterm 1 day ago 2 replies      
I'd love to see something like this applied to the goal of gaining muscle. I've been a twig my whole life. In college, I gained 20 lbs of "noob gains" and then tapered off severely. I've maintained that weight, but can't gain more with any reasonable effort.

Dear ddv,

You've got a potential million+ dollar business in the works. In fact, I'd quit my job to be a programmer on your team.

arisAlexis 22 hours ago 0 replies      
I understand the principles and find low-carb diets scientifically sound, but I can't understand how eating a lot of fruits and potatoes would be bad for you from an evolutionary perspective.
zelcon 1 day ago 0 replies      
Maybe your weight loss rate accelerated because you started watching what you eat more carefully, knowing that you want to provide the most sanitary input to your program. Anyway this is impressive for a n=1 study. Glad it worked for you.

Also, OP, what materials would you recommend for a machine learning newbie?

apineda 1 day ago 1 reply      
I just started my keto diet about 2 weeks ago and I feel great. Did have one rough night of "keto flu", but then I started supplementing with minerals and more water. I'm super low carb right now. Funny thing is, to get started (about 4 weeks ago) I would get a double Big Mac or a double Quarter Pounder, but no drink and no fries, and that was it till breakfast. That was fun.
WhitneyLand 1 day ago 2 replies      
He advocates a book "The Truth About Statins".

Is it helpful info on evidence based science or mostly zealotry and soapboxing?

etangent 1 day ago 1 reply      
One should be careful extrapolating this type of data to other individuals. For example, sleep on its own may not actually be the primary factor in weight-loss --- the weight gain during periods of lack of sleep may occur simply because you have a habit of snacking while staying up late.

That said, awesome work.

jalopy 1 day ago 0 replies      
This is freaking genius. Kudos. I wish I could upvote this 1000x.
tim333 1 day ago 0 replies      
I've had weight graphs like that and the trouble always is that if you plot further instead of it being a flat line at a constant weight it bounces back up again. It's maintaining that's the hard part.
quickpost 1 day ago 1 reply      
I've tried keto a couple times in the past, but struggled with serious acid reflux / heartburn from having to digest so much fat all the time. Any one else struggle with this and find a way to incorporate keto without negative digestive consequences?
bresc 1 day ago 0 replies      
I don't understand what exactly he did with the data and how the calculations indicate weight loss/gain.

Can someone explain, please?

catalystframe 22 hours ago 0 replies      
Lol bacon is associated with weight loss by 32%
swang 1 day ago 0 replies      
So I cannot give up rice. How much rice/carbs can I consume per day without messing up this balance?
valde 1 day ago 1 reply      
What about sex?
dschiptsov 1 day ago 0 replies      
Yeah. I was downvoted into oblivion for saying almost the same things without any machine learning, just by observing the eating habits and traditions of "village folks" in Nepal, India, Tibet and Sri Lanka, where I have spent the last few years.

The message was that the traditional (evolved according to local food sources and seasons) unprocessed foods of Asian tribes are the most natural and healthy. Economic and habitat selection pressures implicitly work the same way as machine learning, over hundreds of years.

Visit any rural area and try the simple folk's foods. They will be way healthier than any processed crap. Similar patterns can be noticed really quickly.

Machine Learning Exercises in Python, Part 1 johnwittenauer.net
467 points by jdwittenauer  1 day ago   58 comments top 11
jupiter90000 1 day ago 5 replies      
Often this sort of material is a collection of methods and explanations of them, which is obviously important to being able to use them. However, I usually feel like the example problems are much cleaner and simpler than those I've encountered in business. I feel like there's this missing link between learning the methods and doing something that actually adds significant value for a business using machine learning. Perhaps it's just me or my field, though.

I found that usually lots of work involved just transforming or examining data in relatively simple ways, or using human expert decisions as to important thresholds for outliers. For example, I could run an outlier algorithm on data, and either the returned outliers were very obvious and could have been found using a manual query by knowing the business context, or it returned a lot of false-positive outliers that were useless for the business. Other times, we'd have a predictive model that was good for 95% of cases but would make our company look ridiculous on predictions for the other 5%, so we couldn't use it in production -- and the nature of the data was such that we couldn't use the model for only certain value ranges.

Perhaps it was just the nature of our realm of business (telecom), and these approaches are more useful for others (advertising, stock trading, etc). Any experience with business fields where this stuff made a sizable impact for something they productionized in business they can share?

Animats 1 day ago 4 replies      
I took that course from the pre-Coursera Stanford videos, when someone from Black Rock Capital taught the course at Hacker Dojo. Did the homework in Octave, although it was intended to be done in Matlab.

It was painful. Those videos are just Ng at a physical chalkboard, with marginally legible writing. All math, little motivation, and, in particular, few graphics, although most of the concepts have a graphical representation.

fitzwatermellow 1 day ago 0 replies      
During the time of the original class, I don't think scikit-learn and Spark were quite as mature. But perhaps Octave still enjoys a certain prominence in academic machine learning research. Matlab was also used for the recent EdX SynthBio class. And it just feels a bit archaic now, doing science in a GUI on the desktop, instead of on a cloud server via CLI ;)
ivan_ah 1 day ago 0 replies      
Related, the demos from Kevin P. Murphy's excellent ML book implemented in Octave [1] and (partially) in Python[2].

[1] https://github.com/probml/pmtk3/tree/master/demos
[2] https://github.com/probml/pmtk3/tree/master/python/demos

jjallen 1 day ago 1 reply      
Seems like to compensate for day to day weight/water fluctuations one would need to track the trailing activity and food data for a period of days prior to the data analyzed. I'm thinking 3-5.

0.2 lbs/kg lost is mostly a rounding error. Our weight can fluctuate that much on a daily basis just from the amount of salt consumed.
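(A sketch of what that trailing window might look like as a feature-engineering step -- the field names are made up, and 3 lag days is just the guess from above:)

```python
def add_lagged_features(days, lags=(1, 2, 3)):
    """days: chronological list of dicts of per-day measurements.

    Returns rows where each day also carries the previous days' values,
    so a regression can see trailing food/activity, not just today's.
    The first max(lags) days are dropped since they lack full history.
    """
    rows = []
    for i in range(max(lags), len(days)):
        row = dict(days[i])
        for lag in lags:
            for key, value in days[i - lag].items():
                row[f"{key}_lag{lag}"] = value
        rows.append(row)
    return rows
```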

NelsonMinar 22 hours ago 0 replies      
Ng's machine learning class is excellent, but the main thing holding it back is its use of Matlab/Octave for the exercises. A Python version (with auto-grading of exercises) would be a huge improvement.
mark_l_watson 1 day ago 0 replies      
Very nice. I took the class twice and think it is easiest to use Octave, but after taking the class, these Python examples might help some people.
earthpalm 1 day ago 1 reply      
Let's talk about how much Michael I. Jordan taught Andrew Ng of what he knows about machine learning and AI.
motyar 1 day ago 1 reply      
Can I find same in R?
denfromufa 1 day ago 1 reply      
What is the best learning resource for gaussian process (kriging) using Python?
danjoc 1 day ago 3 replies      

I will not make solutions to homework, quizzes, exams, projects, and other assignments available to anyone else (except to the extent an assignment explicitly permits sharing solutions). This includes both solutions written by me, as well as any solutions provided by the course staff or others.

Lake Nyos suffocated over 1,700 people in one night atlasobscura.com
448 points by vmateixeira  2 days ago   101 comments top 17
bnjmn 1 day ago 7 replies      
A few years ago I wrote a poem loosely based on this event.

I know unsolicited poetry from strangers on the internet is almost always awful, but this poem still holds up for me, which is pretty unusual for anything I've ever written.

So I hope you enjoy it, too:

 NYOS

 You took me in on dusky breath,
 tasted me, tasted nothing,
 gathered by my easy take
 that I was oxygen enough for idle inspiration.

 How swiftly my lack became your lack;
 my misgiving, your mistake.

 Your eyes flashed a baffled petition
 as you fell limp in a thousand different
 doorways, cribs, embraces, fits and fields,

 yet I pressed after whatever it was
 I thought to find in the lowest parts of Cameroon,
 as foolish in love as a gas trapped in a lake.

Diederich 1 day ago 2 replies      
Horseshoe Lake, in Mammoth Lakes, California, had a substantial CO2 vent in the early 2000s: http://pubs.usgs.gov/fs/fs172-96/

300 tons per day at the time.

I visited in 2002. There were danger signs everywhere, advising people to stay away from the lake itself, and away from low-lying depressions.

In such forested areas, there is always a background of natural sounds, but that day, there was almost complete silence. Large swathes of trees, especially those nearer the lake itself, were standing, dead and dying. I found a couple of dead birds on the ground, completely undisturbed.

The whole situation was so eerie that I bugged out even without taking any pictures.

MollyR 1 day ago 4 replies      
Wow, the lake turning red and so many people dying sounds like a biblical style event. It's great to have a scientific explanation of what happened.
donretag 1 day ago 1 reply      
"Also worrisome is Lake Kivu, a lake over 1,000 times larger than Nyos and in a much more populous area."

I stayed on the Rwandan side of Lake Kivu and had a great relaxing time. Met many Peace Corps volunteers.

Lake Kivu is a major source of methane gas, which powers much of Rwanda. I am assuming they are very aware of the dangers. If something like that would happen at the scale of Lake Kivu, it would be a major catastrophe. That said, I encountered nothing regarding the potential of disaster. Nothing like tsunami/flood zone warnings. Rwanda tends to be a bit more western (aka not chaotic African) in its safety precautions. The DRC on the other side would definitely be more lax.

lanius 1 day ago 1 reply      
>As the CO2 settled, every flame and fire was immediately extinguished, a sign of the doom descending all around Lake Nyos

Despite being relatively enlightened due to living in a modern society, I would be freaked the fuck out if I witnessed this personally.

_audakel 1 day ago 3 replies      
bad choice of ads - at the end of the article there was a little date widget to "find a hotel near lake Nyos" and choose your departure date and return date.
ScottBurson 1 day ago 2 replies      
Given that CO2 levels in the lake have now built up again to be even higher than they were in 1986, according to this article, I wonder if they shouldn't evacuate the area and see if they can set the thing off intentionally. Seems like explosives might do it -- like shaking a soda can.
Animats 1 day ago 0 replies      
Maybe geologic sequestration of excess CO2 isn't a good idea.[1]

[1] https://www.undeerc.org/pcor/sequestration/whatissequestrati...

yincrash 1 day ago 0 replies      
This was an extremely carbonated lake. Seltzer water is about 4 volumes (4:1 CO2 to water measured in volume). The article places the lake at 5 volumes.
tantalor 1 day ago 3 replies      
"1.2 cubic kilometers" is a ridiculous unit, because it obscures the cubic power on the kilometer. "1.2 billion cubic meters" would be much better.
waqf 1 day ago 2 replies      
Isn't CO2 poisoning in humans supposed to cause a panic reaction which would serve as a defence? Why didn't that happen here?
teslaberry 1 day ago 0 replies      

Hydrogen sulfide caused the Permian extinction.

My favorite theory is that it was caused by massive increase in volcanic activity which was itself caused by a giant meteor hitting earth.

aandrewc 1 day ago 0 replies      
One of my favorites podcasts did a great episode on this! http://www.stuffyoushouldknow.com/podcasts/how-can-a-lake-ex...
Pinatubo 1 day ago 1 reply      
Did anyone else find the juxtaposition of "over" and a very precise number in the title a bit strange?
scarygliders 2 days ago 2 replies      
Scroll down to the bottom of that article and... "Find a hotel near lake Nyos".

Er, no, thank you.

wcummings 1 day ago 1 reply      
I like the ad for hotels "Near Lake Nyos" below the article about how dangerous it is.
nilved 1 day ago 5 replies      
This is hardly HN material, but this article included the best application of the Cloud To Butt browser extension I've seen yet.


CPUs are optimized for video games moderncrypto.org
427 points by zx2c4  3 days ago   334 comments top 21
sapphireblue 3 days ago 19 replies      
This may be an unpopular opinion, but I find it completely fine and reasonable that CPUs are optimized for games and weakly optimized for crypto, because games are what people want.

Sometimes I can't help but wonder what a world with no need to spend endless billions on "cybersecurity" and "infosec" would look like. Perhaps those billions would be used to create more value for people. I find it insane that so much money and manpower is spent on scrambling data to "secure" it from vandal-ish script kiddies (sometimes hired by governments); there is definitely something unhealthy about it.

pcwalton 3 days ago 4 replies      
Games are also representative of the apps that actually squeeze the performance out of CPUs. When you look at most desktop apps and Web servers, you see enormous wastes of CPU cycles. This is because development velocity, ease of development, and language ecosystems (Ruby on Rails, node.js, PHP, etc.) take priority over using the hardware efficiently in those domains. I don't think this is necessarily a huge problem; however, it does mean that CPU vendors are disincentivized to optimize for e.g. your startup's Ruby on Rails app, since the problem (if there is one) is that Ruby isn't using the functionality that already exists, not that the hardware doesn't have the right functionality available.
speeder 3 days ago 6 replies      
As a gamedev I found that... weird.

A CPU for games would have very fast cores, larger caches, faster (lower-latency) branch prediction, a fast FPU, and fast double-precision floating point.

Few games care about multicore, many "rules" are completely serial, and more cores doesn't help.

Also, gigantic SIMD is nice, but most games never use anything beyond the ancient versions, because compatibility with old machines is important for a wide market.

And again, many CPU-demanding games are running serial algorithms on serial data; matrices are usually only essential for the stuff that the GPU is doing anyway.

To me, CPUs are instead optimized for Intel's biggest clients (servers and office machines).

Narann 3 days ago 3 replies      
The real quote would have been:

> Do CPU designers spend area on niche operations such as _binary-field_ multiplication? Sometimes, yes, but not much area. Given how CPUs are actually used, CPU designers see vastly more benefit to spending area on, e.g., vectorized floating-point multipliers.

So, CPUs are not "optimized for video games", they are optimized for "vectorized floating-point multipliers". Something video games (and many other workloads) benefit from.

joseraul 3 days ago 0 replies      
TL;DR: To please the gaming market, CPUs grew large SIMD units. ChaCha uses SIMD, so it gets faster. AES needs array lookups (for its S-box) and gets stuck.
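The contrast is easy to make concrete. A ChaCha quarter-round is nothing but 32-bit adds, XORs, and fixed-distance rotations: no data-dependent branches and no table lookups, which is exactly why it maps cleanly onto SIMD units (and runs in constant time). A minimal sketch, checked against the quarter-round test vector from RFC 8439:

```python
def rotl32(v, n):
    """Rotate a 32-bit value left by n bits."""
    return ((v << n) | (v >> (32 - n))) & 0xFFFFFFFF

def quarter_round(a, b, c, d):
    """ChaCha quarter-round: only adds, XORs and fixed rotations,
    so the same instructions execute for every input -- no
    secret-dependent branches or memory lookups."""
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 16)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 12)
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 8)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 8439, section 2.1.1
print([hex(x) for x in
       quarter_round(0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567)])
# -> ['0xea2a92f4', '0xcb1cf8ce', '0x4581472e', '0x5881c4bb']
```

In a real implementation four quarter-rounds run in parallel across the 4x4 state, which is what the wide vector units accelerate; AES's S-box lookups have no such branch-free arithmetic form, hence the dedicated AES-NI instructions.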
nitwit005 3 days ago 1 reply      
Not really. Just look through the feature lists of some newer processors:

AES encryption support: https://en.wikipedia.org/wiki/AES_instruction_set

Hardware video encoding/decoding support (I presume for phones): https://en.wikipedia.org/wiki/Intel_Quick_Sync_Video

It's more that it's relatively easy to make some instruction useful to a variety of video game problems, but difficult to do the same for encryption or compression. You tend to end up with hardware support for specific standards.

wmf 3 days ago 0 replies      
Maybe a better headline would be something like "How software crypto can be as fast as hardware crypto". I was curious about this after the WireGuard announcement so thanks to DJB for the explanation.
magila 3 days ago 1 reply      
One important aspect DJB ignores is power efficiency. ChaCha achieves its high speed by using the CPU's vector units, which consume huge amounts of power when running at peak load. Dedicated AES-GCM hardware can achieve the same performance at a fraction of the power consumption, which is an important consideration for both mobile and datacenter applications.

Gamers generally don't care about power consumption. When you've spent $1000 on the hardware an extra dollar or two on your electricity bill is no big deal.

revelation 3 days ago 6 replies      
I thought modern video games are predominantly limited by GPU performance? Maybe the argument is that while usually CPU performance isn't the most important part of the equation, video gamers base their purchasing decision on misguided benchmarks that expose it.

The big CPU hog and prime candidate for these vector operations nowadays seems to be video encoding.

joaomacp 3 days ago 1 reply      
Of course. Gamers are the biggest consumers of new, top of the line PC hardware.
milesf 3 days ago 0 replies      
And because CPUs are optimized for both gamers and Windows, the world has access to lots of cheap, powerful hardware. I'm not a Microsoft fan, but I'm very appreciative to them for making this ecosystem possible.

In fact, games have always driven the modern computer industry. Even Unix started because of a game (http://www.unix.org/what_is_unix/history_timeline.html).

rdtsc 3 days ago 2 replies      
Wonder how a POWER8 CPU would handle it or if it is optimized differently. It obviously is not geared for the gaming market.
stephenr 3 days ago 0 replies      
Isn't this exactly why HSMs exist - to provide optimised hardware crypto functionality?

Honestly I would treat this the same as eg Ethernet - high end cards have hardware offload capabilities that the software stack can utilise to get better performance.

tgarma1234 3 days ago 2 replies      
I really find it hard to believe that people with such an interest in security at the CPU level would buy "retail" processors like the ones you and I have access to. I am no expert in the field, but it just seems weird that there isn't a market for, and producer of, specialized processors that are more militarized or something. Why does everyone have access to the same Intel chips? I doubt that's actually the case. Am I wrong?
Philipp__ 3 days ago 3 replies      
ARMA III could be the good example of CPU bottleneck. Or maybe it is badly optimized... Then we hit the hot topic of multicore vs singlecore performance.
rphlx 2 days ago 0 replies      
Perhaps that was true in the mid 90s, but today Intel optimizes x86_64 for its highest margin core business: server/datacenter workloads. Any resulting benefit to desktop PC gaming is appreciated, but it's a side effect rather than a primary design goal.
wangchow 3 days ago 1 reply      
The form factor of laptop screens is built for media consumption, even though a squarer form factor is superior for productivity (I found an old Sony Vaio and its screen form factor felt very pleasant). It seems the general consumption of media has dominated CPU design, in addition to everything else in our computers.
wscott 3 days ago 0 replies      
No, Intel CPUs are optimized to simulate CPUs

Some stories from back around 2000 when designing CPUs at Intel. Some people did bemoan the fact that little software actually needed the performance of the processors we were building. One of the benchmarks where the performance was actually needed was ripping DVDs. That led to the unofficial saying "The future of CPU performance is in copyright infringement." (Not seriously, mind you.)

However, here is a case where the CPUs were actually modified to improve one certain program.

From: https://www.cs.rice.edu/~vardi/comp607/bentley.pdf (section 2.3)

"We ran these simulation models on either interactive workstations or compute servers; initially these were legacy IBM RS6Ks running AIX, but over the course of the project we transitioned to using mostly Pentium III based systems running Linux. The full-chip model ran at speeds ranging from 0.5-0.6 Hz on the oldest RS6K machines to 3-5 Hz on the Pentium III based systems (we have recently started to deploy Pentium 4 based systems into our computing pool and are seeing full-chip SRTL model simulation speeds of around 15 Hz on these machines)"

You can see that the P6-based processors (PIII) were a lot faster than the RS6Ks and the Wmt version (P4) was faster still. That program is csim and it is a program that does a really dumb translation of the SRTL model of the chip (think Verilog) to C code that then gets compiled with GCC (the Intel compiler choked). That code was huge and it had loops with 2M basic blocks. It totally didn't fit in any instruction cache. Most processors assume they are running from the instruction cache and stall when reading from memory. Since running csim was one of the testcases we used when evaluating performance, the frontend was designed to execute directly from memory. The frontend would pipeline cacheline fetches from memory, which the decoders would unpack in parallel. It could execute at the memory read bandwidth. This was improved more on Wmt. This behavior probably helps some other real programs now, but at the time this was the only case we saw where it really mattered.

The end of the section is unrelated but fun:

"By tapeout we were averaging 5-6 billion cycles per week and had accumulated over 200 billion (to be precise, 2.384 * 10^11) SRTL simulation cycles of all types. This may sound like a lot, but to put it into perspective, it is roughly equivalent to 2 minutes on a single 1 GHz CPU!"

Games were important, but at the time most of the performance came from the graphics card. In recent years Intel has improved the on-chip graphics and offloaded some of the 3D work to the processor using these vector extensions. That is to reclaim the money going to the graphics card companies.

xenadu02 3 days ago 0 replies      
tl;dr: AES relies on data-dependent table lookups and is not amenable to vectorization. Other (newer) algorithms are designed with branchless vectorization in mind, which makes specialized hardware instructions unnecessary.
Philipp__ 3 days ago 2 replies      
And what if games are optimised (or deliberately not) for certain types of hardware, so that you buy a new Intel CPU every 3 years? That is, what if some games are badly optimised and run badly on certain hardware on purpose? Maybe it sounds like a conspiracy theory. But look: CPU sales are stalling and Intel wants to sell new chips every year. What if they come to developers and say, "Look, make your game run 10% better on our latest hardware and we'll give you money"?
DINKDINK 3 days ago 0 replies      
Off-topic: That's a great favicon
I'm a Judge and I Think Criminal Court Is Horrifying themarshallproject.org
409 points by juanplusjuan  19 hours ago   195 comments top 23
bleachedsleet 19 hours ago 1 reply      
Several years ago, I was arrested on a hacking charge and got to see first hand the appalling nature of our legal system in criminal courts.

Normally, after your arrest, you have to have a hearing within 24-48 hours, but if they arrest you on a Friday as they did me, they are allowed to detain you for an extra day because it's the weekend. I'm sure this is a tactic used often to frighten and goad people. My hearing was exactly as the judge in this article describes. There were lots of minor, non-violent offenders in the courtroom with me, mostly minorities, and many couldn't speak English well at all. The judge would openly mock them and condescend. One man obviously had no idea what he was even pleading to because his English was so poor.

Once I got to higher court for my actual sentencing, it was no different. The judge didn't even read my case and the clerk forgot to have it presented and available for the judge to review. My lawyer had to give him her own copy which he briefly skimmed without adjournment. I later discovered that the prosecuting attorney was good friends with the plaintiff and the investigating FBI officer assigned to my case.

The courts in America are a joke; the legal system is in bad need of an overhaul. I couldn't believe the level of incompetence, racism, bias and prejudice that existed there.

rayiner 18 hours ago 5 replies      
The criminal justice system can be jarring, even to those of us who are lawyers. I was a clerk for a judge in Philadelphia, and saw a ton of criminal convictions - usually drug cases - come up on appeal. Two things stuck out: (1) we jail a lot of people for stupid things; (2) there are a lot of bad people in these communities.[1]

I'd see someone appealing a sentence for selling some OxyContin and think, "geez, what a waste of money to put this guy in prison." But he's got a rap sheet a mile long - theft, robbery, assault, etc.

The treatment of the criminally-accused in this country is deplorable. But it's also the product of a society that got fed up with skyrocketing crime a few decades ago and responded in a harsh and heartless manner. Crime is a lot lower today than it was in the 1990s, but even in the safest American cities murder rates are 5 times higher than in big cities in Europe that aren't even considered that safe (like Berlin). And crime is heavily concentrated in poor places like the Bronx.

And the usual canard - for profit prisons - isn't even applicable here. Private prisons are illegal in New York. This isn't lobbying at work, this is purely a product of the democratic will reacting to devilish social issues. That's what makes it so hard to fix.

[1] I was also living in downtown Baltimore during the post-Freddie Gray unrest. I was disappointed to see the acquittals. At the same time, if I were in those cops' shoes, I'm not sure I'd have the moral strength to be any different. Society needs a certain amount of order to function. In much of Baltimore, that order doesn't exist. Gangs are in charge and the law-abiding people of those neighborhoods are the biggest victims of that.

twoquestions 18 hours ago 6 replies      
And I hear from family and friends about how we 'coddle' people in the system too much, and we're not 'tough' enough.

A large number of our people seem to think if we're sufficiently cruel and inhuman to people accused of breaking the law, then people will magically stop getting suspected of breaking the law. It's more than a little horrifying seeing people's eyes light up when they talk about how our cops aren't afraid to kill people and how merciless our prisons are.

This horrifying system is a symptom of our cruelty, and any move to make this more humane (or less of an atrocity) will face stiff resistance from people who get off on seeing people get punished.

I don't know what I can do, or what anyone can do.

chillwaves 19 hours ago 0 replies      
I had a friend who ended up with a drug related charge. When bailing them out, the bondsman, who was a conservative, ex cop, hardened and huge, told me how my friend was lucky to get the judge they got.

There were three judges in that court and one was known to be particularly rough. A defendant (drug charges, heroin) was asking for bail to be set, had secured a bed in a locked down rehab facility and the judge denied his bond. The bondsman said he had never seen someone leave the courtroom so broken.

Here is a case where a man acknowledges his crime, says he will do his time but wants more than anything to get clean. And securing a bed in a lock down rehab facility, besides being expensive, is not easy. Here is a case where the state had every interest in sending the man to rehab, even to save the cost of housing them in a jail, but the system doesn't care. The DA, the judge, don't care. Bail denied because the man had missed a hearing due to being in another jail after being picked up on the street with dope.

The bondsman said he sees this kind of thing every day. It was the rule, not the exception. People are just cycled in and out of the system. The addict will get locked up, released and locked up again.

ryanmarsh 18 hours ago 2 replies      
I've observed the criminal justice system at work in Texas a few times. A murder case, some misdemeanors, a few felonies. I've seen it from arrest to prison life.

It is a dark and deeply depressing thing. There is little compassion for victims, the accused, the convicted. I could write a thousand words but it pains me to think about it.

I pray I never get falsely accused of a crime and I'm so glad I'm not black. If all the boring white folk who couldn't fathom finding themselves so much as suspected of a crime had to go through the criminal justice system as the average YBM the system would be massively overhauled yesterday.

Most of the boring white folk I know think everyone in jail is a cold blooded psychopath who works out all day and dreams of stabbing people... "hardened criminal".

The system is just full of unlucky humans. God knows I did shit in my youth I could still be in prison for. The extremely violent type are actually quite rare.

foepys 19 hours ago 4 replies      
I never understood the concept of "plea bargain" in the USA that comes up in the article. Either somebody is guilty and should be convicted by a judge or they are not guilty and should have the chance for a fair trial. I can see that there are situations where they can be useful but they seem to be the norm instead of being used in certain, limited situations. Scaring somebody into pleading guilty for a lesser sentence is simply unfair and in my eyes undemocratic.
tbihl 19 hours ago 0 replies      
I have some family friends, and their son recently got arrested for a felony drug dealing charge at the end of his spring semester at college. From what I understand, the police built this whole sting to catch the guy at his school mailbox when he got his shipment from whatever Silk Road's current successor is. Anyway, he confessed everything and now he's just waiting around, unable to get a job because interviewers invariably ask why he stopped school. Apparently having a job really helps in these cases.

Anyway, they remarked one day on the phone, "we've never dealt with criminal Court, and we don't know anything about it, except the lawyer doesn't sound very optimistic. The only hope we have is because the system is so racist that being middle class and white might actually save us."

They also said something about how the police were pushing some angle about using Tor. I'm very concerned about that: I can just picture the ghost story that the prosecutors are going to tell some septuagenarian judge about this special internet for terrorists and arms dealers.

kafkaesq 19 hours ago 3 replies      
I was shocked at the casual racism emanating from the bench. The judge explained a stay away order to a Hispanic defendant by saying that "if the complainant calls and invites you over for rice and beans, you cannot go." She lectured some defendants that "most young men with names like yours have lengthy criminal records by the time they reach a certain age."

Thereby instantly disqualifying herself from the privilege of serving on the bench. Ample grounds for impeachment, by any reasonable standard.

will_brown 19 hours ago 6 replies      
>Even I, as a bankruptcy judge, know that the point of bail is supposed to be simply to ensure that a person will return to court.

That is the point of posting bail, but that is not the standard of granting a defendant the right to post bail/bond.

The standard is more along the lines of: a) is the defendant a flight risk; and b) does the defendant present a danger to the community.

As to b), it is not simply enough that the charges are non-violent, which seemed to shock this judge. For example, DUI, while not a violent crime and generally one eligible for bail/bond by default, may not be granted bail if, say, it is the 3rd or 4th DUI. Or if, say, the defendant was already out on bail/bond and picked up a new charge, a judge may revoke the bail/bond on the 1st crime. I'm not claiming this is always how it works and the decisions are always just, but I just want to give a little more perspective.

As to the shock of the state of the courthouse, only a Federal Judge would find that shocking. Don't get me wrong it's a pleasure to practice in a gorgeous Federal Courthouse, complete with grand marble accents, but I'm of the opinion those types of luxuries are a waste of tax payer dollars.

teddyknox 16 hours ago 1 reply      
There's a new HBO series called "The Night Of" that portrays the New York justice system this way.

Trailer: https://www.youtube.com/watch?v=556N5vojtp0

desdiv 16 hours ago 1 reply      
>Once the court officers caught their breath from laughing, they barked at him, Where is your belt? Of course, it was taken from him in the lockup, he said.

I don't like to watch crime/legal dramas and even I know this. How does someone who works in the legal system as a day job not notice this?

tomohawk 4 hours ago 0 replies      
And then you have politicians like Martin O'Malley using the system to further their careers at others' expense.


ak217 17 hours ago 1 reply      
We need a better accountability system for judges. Even for those judges who are elected, the public is usually not meaningfully informed about the judge's performance.
ChuckMcM 18 hours ago 1 reply      
Great, where is the follow up with the judge in chambers? Don't judges talk to each other? Standards and mores are upheld by peers not by individuals.
epynonymous 12 hours ago 0 replies      
i must say, though i was in people's court once trying to get my security deposit back from my landlady, which is by definition the lowest level court in the land, i too have many apprehensions about the "fairness" of the judicial system in america. that's not to say that my 1 incident should be representative of the entire judicial system, as i still believe that there are lots of excellent examples of upstanding individuals, probably more so than bad ones, but the judicial system is powered by people and their interpretation of the law, and though with some checks and balances, people having unconscious biases and tend to work these around the system, just like her example of guns and butter, no one's going to penalize that judge for this and she knows it.

my particular case was in quincy, mass. i remember the faces of the judge, deputy, and clerk that waited to put my case towards the very end, after about 10 cases and all other people had left, behind closed doors. i didn't understand much about the system back then, but this was an arbitration, under the guise of a court order to reappear for an appeal. the landlady, a chinese national, had lost the claim originally and had ninety days to appeal; she appealed after 138 days, and the court ordered me to show up with an official court summons. in looking back, i should not have showed up, i had no need to show up, this was over a measly 700-900 usd if i recall correctly. and the way the judge, clerk, and deputy nodded their heads in contempt of me as i spoke my case was so planned and orchestrated that in my mind i decided that this was useless and to go ahead and tear up the check that she was asked to pay me for my security deposit after the first decision, which i hadn't even cashed because it was never about the money.

if these are the shenanigans in the lowest of courts, i wonder what things are like in the upper courts. my lesson from that day was 2 fold, try your best never to go the legal route, and the american system, although perfect in many ways, still cannot provide perfect justice.

jostmey 19 hours ago 0 replies      
After reading this article, I have to wonder if the criminal court system is underfunded and overworked. I think America's core institutions have been under decay for some time now.
marklyon 15 hours ago 2 replies      
An actual criminal attorney in NYC did a far better job of deconstructing this judicial tourist's article than I can: http://blog.simplejustice.us/2016/08/13/the-dilettante-judge...
peter303 7 hours ago 0 replies      
Some of the modern courthouses reduce the courthouse violence problem with a "triple entrance" system: one for all the court personnel, a second for defendants briefly released from county jail, and a third for public witnesses and spectators. You don't have the hallway encounters between the three parties you had in older facilities.
Qantourisc 8 hours ago 0 replies      
A judge holding the court in contempt basically. And since the judge is part of the court, the court is holding the court in contempt.

Perhaps we need judges for judges.

shams93 15 hours ago 1 reply      
In Los Angeles it's quite a bit more professional. When I served on a jury in a criminal case the judge was very professional.
mLuby 10 hours ago 0 replies      
It's heartening to read so many thoughtful posts about our justice system. Gives me some small hope for the future. :)
jondubois 17 hours ago 1 reply      
I wonder if that has something to do with the different level of 'leniency' which is accorded in bankruptcy cases versus criminal cases.

My gut tells me that white-collar transgressions will always be punished less often (and less severely) than outright blue collar crime.

It sounds like the bankruptcy judge who wrote the article is surprised by the fact that the justice system actually punishes people for doing bad stuff.

Maybe it's not the criminal justice system which is harsh - maybe it's she who is too lenient with her own cases...

Or maybe both need some recalibration.

japhyr 18 hours ago 1 reply      
This is what bothers me so much about the Trump campaign. Even though he'll probably lose, he's helped legitimize the kind of speech and attitude this judge demonstrates.

I'm hoping there's a strong enough backlash that this kind of speech and attitude gets called out more often. I hope the long-term effect is not to make this judge's behavior even more accepted.

Ask HN: Is it possible to run your own mail server for personal use?
533 points by jdmoreira  21 hours ago   270 comments top 94
Nux 19 hours ago 12 replies      
It's absolutely possible to run an email server in 2016 and I encourage anyone capable to do so!

Email is one of the bastions of the decentralised Internet and we should hang on to it.

Every day more and more people are moving to Gmail/Hotmail/Outlook and while I do understand the reasons, it also puts more and more power into the hands of these providers and the little guy (us) gets more screwed (like marked as junk by default by them :< )

Having said that, here's my check list for successfully delivering email:

- make sure your IP (IPv4 and IPv6) is clean and not listed in any RBL, use e.g. http://multirbl.valli.org/ to check

- make sure you have a correct reverse dns (ptr) entry for said IP and that ptr/hostname's A record is also valid

- make sure your MTA does not append to the message headers your client's IP (ie x-originating-ip), messages can be blocked based only on "dodgy" x-originating-ip (see eg https://major.io/2013/04/14/remove-sensitive-information-fro... )

- set up SSL properly in your MTA, there are so many providers giving away free certs nowadays

- SPF, DKIM, DMARC - set them up, properly, this site can come in handy for checking yourself https://www.mail-tester.com/

- do not share the IP of your email server with a web server running any sort of scripting engine - if it gets exploited in any way usually sending spam is what the abusers will do

- last but not least - and while I loved qmail and vpopmail - use Postfix or Exim, they are both more fit for 2016, more configurable and with much, much larger user bases and as such bigger community and documentation.
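The first two items on the checklist above are easy to automate. A DNSBL lookup is just a DNS query for the reversed IP octets prepended to the blacklist zone, and forward-confirmed reverse DNS means the PTR hostname's A record must point back at the original IP. A rough sketch (the Spamhaus zone is one example of many; the FCrDNS check takes resolver functions as parameters so the logic can be tested without network access):

```python
import socket

def dnsbl_name(ip, zone):
    """Build the query name for a DNSBL check: reverse the octets
    of the IPv4 address and append the blacklist zone."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip, zone="zen.spamhaus.org"):
    """An IP is listed if the DNSBL name resolves at all."""
    try:
        socket.gethostbyname(dnsbl_name(ip, zone))
        return True
    except socket.gaierror:
        return False

def fcrdns_ok(ip, ptr_lookup, a_lookup):
    """Forward-confirmed reverse DNS: the PTR hostname for `ip`
    must have an A record pointing back at `ip`.  Resolvers are
    injected (e.g. wrappers around socket.gethostbyaddr and
    socket.gethostbyname_ex) so this is testable offline."""
    hostname = ptr_lookup(ip)
    return ip in a_lookup(hostname)

print(dnsbl_name("203.0.113.7", "zen.spamhaus.org"))
# -> 7.113.0.203.zen.spamhaus.org
```

Running checks like these from cron and alerting on a new listing catches reputation problems before your users notice bounced mail.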


mrb 19 hours ago 6 replies      
One little trick that I rarely see mentioned for working around the negative or neutral reputation your MTA's IP might have is that you can route your outgoing emails through another MTA that has a higher reputation. For example route them through smtp.gmail.com (or for other options see https://support.google.com/a/answer/176600?hl=en). It does not mean you have to use Gmail. It does not mean you have to change your MX records. It does not mean you have to use a @gmail.com address. None of that. Your recipients will not even notice you are routing through smtp.gmail.com (unless they inspect the detailed headers). All you need is a Google account and password to authenticate against smtp.gmail.com, and Google will happily route your email to wherever, to any external domains, etc.

Doing this makes you retain all the advantages of running your own MTA: none of your emails are hosted at a third party provider, no scanning of your emails to personalize ads, no government agency can knock at the door of an email provider and ask them for the content of your inbox, etc.

The only downside is that in theory Google can scan and block your outgoing emails (not incoming emails since these hit your MTA directly). But if you don't send spam, this should never happen.

Another option is to route your mail through your ISP's MTA. Yes ISPs usually offer SMTP relay service accessible only from their customer's IP addresses (eg. for Comcast it is "smtp.comcast.net" IIRC.) However the reputation factor of an ISP's MTA might be worse than Google's MTA.
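Mechanically, relaying through smtp.gmail.com from your own machinery is just an authenticated submission on port 587 with STARTTLS. A sketch using only the standard library (the addresses and credentials are placeholders, and Google accounts typically require an app-specific password for this; the actual send is left commented out):

```python
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    """Assemble a simple RFC 5322 message."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def relay_via_gmail(msg, user, app_password):
    """Submit the message through Google's relay on the standard
    submission port, upgrading the connection with STARTTLS."""
    with smtplib.SMTP("smtp.gmail.com", 587) as s:
        s.starttls()
        s.login(user, app_password)
        s.send_message(msg)

msg = build_message("me@mydomain.example", "you@example.org",
                    "Hello", "Sent from my own MTA, relayed through Gmail.")
# relay_via_gmail(msg, "me@mydomain.example", "app-password-here")
print(msg["Subject"])
# -> Hello
```

In practice you would configure this once in your MTA (e.g. Postfix's `relayhost` plus SASL credentials) rather than script it per message, but the wire-level steps are the same.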

walrus01 18 hours ago 1 reply      
Having a perfect smtpd that speaks TLS 1.2, has properly set up DKIM, SPF and DMARC records, working reverse DNS, etc. is sadly not enough these days if you use a commodity VPS/VM host. IP block reputation matters as well. Sadly, some other customers in your same /24 have been less clueful than you within the recent memory of major SMTP operators (Gmail, Office365/Microsoft, etc.), and your IP space probably has a bad reputation.

Reputation perception by opaque large SMTP operators will not show up in RBLs and other ways to check for blacklists. You cannot query your IP block's status unless you happen to personally know a senior sysadmin on their mail operations teams. They don't share this information because it would help spammers choose new "clean" places to spam from.

One solution is to colocate your own 1U system with an ISP that is known to have very stringent zero-tolerance abuse policies. Typically not one that is a commodity hoster.

ChuckMcM 19 hours ago 1 reply      
Absolutely possible but its a battlefield on the Internet so you have to understand the players. Two things I haven't seen mentioned in all the excellent advice:

1) Does your ISP let you send email? Some ISPs will not allow any outbound traffic to port 25 from a non-"business" connection. They force their users to send their email to their server, and then they forward it on to the Internet.

They do this with nominally good intentions (it is easier to control spam generated from their networks), but they also are financially motivated to do so.

2) Don't try to send mail from a dynamic IP address, you should have (and would probably pay extra for) a fixed static IP address (V4 and V6).

Dynamic IPs have two problems, one they change and mail receivers don't like that. Two, they carry with them the abuses people who had the IP address before committed. So your email may get delivered one day, and then poof you renew the lease on your IP and get one that is on a black list somewhere.
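Whether your ISP filters outbound port 25 is easy to test directly: try opening a TCP connection from your box to a known MX host. A small sketch (the Google MX hostname in the comment is just one convenient target; a timeout or refusal on port 25, when other ports work, suggests the ISP is blocking it):

```python
import socket

def can_reach(host, port=25, timeout=5):
    """Return True if an outbound TCP connection to host:port
    succeeds within the timeout, False on refusal or timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (needs network access):
# can_reach("gmail-smtp-in.l.google.com")  # False suggests port 25 is filtered
```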

hannob 20 hours ago 0 replies      
I'm running small scale mail servers. It's not nearly as difficult as the common folklore makes you believe.

Make sure you don't send spam and your server is properly configured. If you are sending mails to people that don't want it then it is spam. "They silently agreed to get our newsletter because it was listed in our ToS on page 357" is not acceptable. No other excuse for sending spam is acceptable. Whenever you send any automated mail there must be an easy way to unsubscribe.

A few more tips:

* Check your mail server on http://multirbl.valli.org/ - if it's in any blacklist try to find out why (there are a few rogue blacklists, ignore them).

* Hotmail allows you to receive a report for every mail that a hotmail user thinks is spam. Use that. Act on it.

* Check your logs for messages that indicate that others think you're spamming.

* E-Mail forwarding is a tricky business these days. Avoid it.

I occasionally get dubious spam rejections, but they don't come from the large hosts. They usually come from some small ISP using a proprietary antispam solution that gives you no insight what's going on.

My suspicion would be that qmail is your problem. There are a great many details that a mail software has to get right, qmail often doesn't do what the email ecosystem expects.

TheMog 20 hours ago 3 replies      
I've been running my own MTA for about 15 years now, so it's definitely possible without spending the majority of one's waking hours to do so.

Even when I switched mail server IPs twice over the last few years I didn't run into the issues you ran into. A large part of it depends on where you run it - if you, say, run it on your home Internet connection that's usually an immediate strike against you because of the insane number of spammers using backdoor'd PCs to do exactly that.

The only time I ran an MTA out of my home was when I was on a commercial ISP with a fixed IP address, that seemed to be good enough for most services including gmail and hotmail.

These days I run my MTA on a VPS with a reputable hosting provider and don't seem to have that many issues with outgoing mails marked as spam.

SPF and DKIM are pretty much a must these days, so that's a good starting point, as are the rest of the precautions you already took. I assume you're using your own domain; how "old" is that domain? That might also have an impact, given how many phishers and spammers register odd domains and use them for a short amount of time. I've used the same domain since about 1999, so that could make a difference.

I use postfix instead of qmail, but I've used qmail in the past. Both work well and are easier to configure than sendmail or exim IMHO. On top of that I do run amavisd/spamassassin/clamav for the incoming emails as well.

One more thing I've got set up that I didn't see in your list is that I've got TLS set up with a non-self signed certificate for both incoming and outgoing email. I suspect that this also makes a difference even if the other email server won't request a client certificate (most, if not all, won't). Certainly shows up when I send an email over to gmail.

My biggest issues these days are more with incoming email:

- You'll never get to the level of spam filtering that, say, gmail offers. To me, that's OK

- I use greylisting to weed out a lot of the spam that would normally make it through spamassassin, but unfortunately that's when you find out how many people have misconfigured servers that bounce emails when they encounter temporary failures
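Greylisting works by temp-failing the first delivery attempt from an unknown (client IP, sender, recipient) triplet and accepting a retry after a delay; real MTAs retry on a 4xx response, while most spam software fires once and moves on. A minimal in-memory sketch of the decision logic (the 300-second delay is a typical but illustrative value; production implementations also expire stale entries and auto-whitelist hosts that retry correctly):

```python
import time

class Greylist:
    """Temp-fail the first attempt from an unknown triplet; accept
    retries that arrive at least `delay` seconds later."""
    def __init__(self, delay=300):
        self.delay = delay
        self.first_seen = {}  # triplet -> timestamp of first attempt

    def check(self, client_ip, sender, recipient, now=None):
        now = time.time() if now is None else now
        key = (client_ip, sender, recipient)
        if key not in self.first_seen:
            self.first_seen[key] = now
            return "DEFER"      # respond with a 4xx temporary failure
        if now - self.first_seen[key] >= self.delay:
            return "ACCEPT"
        return "DEFER"          # retried too quickly

gl = Greylist(delay=300)
print(gl.check("203.0.113.7", "a@x.example", "b@y.example", now=0))    # DEFER
print(gl.check("203.0.113.7", "a@x.example", "b@y.example", now=600))  # ACCEPT
```

The misconfigured-server problem mentioned above shows up here: a sender that treats the 4xx DEFER as a permanent failure bounces the mail instead of retrying.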

pflanze 20 hours ago 1 reply      
I'm doing it, also using Qmail. I've felt the same pains as you (I even started to suspect that providers might detect mail being sent by Qmail and score it lower (perhaps (only) spammers are using Qmail today?), but more probably my network block (Hetzner.de) is the biggest reason for my difficulties).

Here's what I've done on top of your list:

- backscatter prevention (using my own https://github.com/pflanze/better-qmail-remote)

- do the Google domain verification dance (postmaster tools, configuring their entry in the DNS); still didn't prevent mails ending up in spam, but who knows whether it might still have helped.

- started running mailing lists on it anyway, in spite of me knowing that mails end up in people's spam folders, and simply tell all new subscribers that mails first end up in their spam folders and that once they mark them as non-spam the problem goes away. This seems to be working (people haven't complained), and will over time hopefully give my server the reputation I need.

(PS. I'm also still using DJBDNS, with a config generator written in Scheme, look out for tinydns-scm on my Github)

jwr 18 hours ago 0 replies      
Of course it is. I've been doing it since 2001 or so. It isn't as easy as it should be, but it isn't that hard, either.

I had problems with mail acceptance only once, when one of my ISPs got me an IP address that was either used by a spammer in the past, or was in the same subnet that the spammers used. Other than that, no problems over the past 15 years, and I switched providers and systems at least three times over that time.

I'd encourage everyone to go ahead and do it. It isn't very hard, costs on the order of a few dollars/euros a month, and you finally own your E-mail. I find it appalling that most people either use company E-mail (it isn't yours, anyone can read it, and if you part ways with the company you have a problem) or Google Gmail (Google does read it, trains its algorithms on it, and targets advertising based on that).

Don't worry too much about DKIM. It is no longer a good signal anyway, most spam gets it right.

So, if you're capable of it, go ahead and run your own mail server. I wish more people did it, so that we could avoid the "big guys" restricting E-mail. If more individuals ran their own servers, we could democratize E-mail again: it wouldn't be that easy to just reject E-mail for no good reason.

For the reference, the software I use right now is: Ubuntu LTS, postfix, postgrey, amavis, dovecot. I rent a virtual server at Hetzner.de.

Kadin 12 hours ago 0 replies      
You can do this, and I do this (although not for my personal email currently -- I have in the past -- but I do run one for a club and it works fine).

If you are not being blacklisted (check the common ones plus AOL, they run their own), and are using SPF and DKIM, you shouldn't be having problems with messages getting blocked. That's pretty unusual.

What could be happening is that you might be in an IP range that's residential; there are some operators who blackhole all messages originating from "residential" IPs, even if they are not specifically being blacklisted for bad conduct, and even if they have valid SPF/DKIM records. I think this is a pretty bullshitty thing to do, and completely out of the spirit of Postel's Law, to the point where I think anyone who configures a server this way ought to be forced into a lifetime of Windows XP helpdesk duty. But it's a thing that happens.

One solution that's worked for me is to get a cheap VPS and run my mailserver there. It's in an IP block that traces back to a big datacenter, and it seems to be much more acceptable to various overzealous spam filters than my home IP.

rahkiin 18 hours ago 0 replies      
I will not repeat what everyone else has already said, but I can add one thing. You need to 'warm up' your IP address by sending a lot of non-spam email. MTAs will A/B test it: mark some as spam, mark others as not spam. Then they see if they get user spam reports or non-junk reports. They don't know you. The more you send (successfully; if everything is marked as spam by receivers it won't work), the more they start trusting you.

SparkPost allows the use of dedicated IPs and has a warm-up process. They explain it all in [0].

[0] https://support.sparkpost.com/customer/portal/articles/19722...

tezza 20 hours ago 1 reply      
I've run Linode + Postfix successfully for years.

Reverse DNS is very important, as is SPF.

Linode have excellent setup documentation ( https://www.linode.com/docs/email/postfix/email-with-postfix... )

tristor 19 hours ago 0 replies      
Yes, it's absolutely possible. I wrote an extensive set of step-by-step instructions on how to deploy secure email services on top of Debian 7[0]. They still work but are no longer maintained because my current position is that services like Proton Mail [1] make running your own email services unnecessary. You're welcome to review and use them, and if anybody wants to update them to work under Debian 8, PRs are welcome on the Github repo [2]

[0] http://securemail.tristor.ro/#!index.md

[1] https://protonmail.com/

[2] https://github.com/Tristor/securemail.tristor.ro

soneil 19 hours ago 2 replies      
It's worth checking whether you're running IPv6. It becomes very relevant: e.g., your SPF records need to include it, it must also have good rDNS, etc.

In particular, I know gmail hold IPv6 to higher standards. Some things (e.g. rDNS) that we traditionally treat as 'should', gmail will treat as 'must' over IPv6 - it's being treated as a chance to drop a lot of legacy leeway.

I do run my own MTA. It's not high-maintenance, at all. Understand the pitfalls, iron them out, and then stick with it to build your reputation. There is no magic bullet - the big providers won't tell us how they measure us - the best we can do is be well-behaved, stay well-behaved, and adopt modern standards (TLS, SPF, DKIM, etc) as they're thrown at us.

My best advice is to choose a reputable host. There's a lot of race-to-the-bottom in the web hosting market, and VPS are turning out no different. Keeping a clean house is good for your reputation - but so is living in a nice neighbourhood. It's well worth a couple of bucks extra to find such a neighbourhood.

sigil 19 hours ago 1 reply      
Send a mail from your address to mailtest@unlocktheinbox.com. They send back an extensive "lint for smtp" report within minutes. I found it indispensable for debugging a DMARC issue recently.

tacon 10 hours ago 0 replies      
If you are having trouble with Google putting your emails into spam folders, send an email to yourself at Gmail. Then examine the original headers ("show original") and check the authentication header Google adds. It begins:

Authentication-Results: mx.google.com;

and details what it did and did not like about your message. For example, it let me know my mail server had suddenly started sending over IPv6 (actually, Google started accepting there and IPv6 had priority) and I only had SPF records for the IPv4 address. Google's authentication results are the friend of everyone with a personal mail server.
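
If you want to script that check, the stdlib email parser can pull the verdicts out of a saved raw message. A sketch in Python; the message below is a made-up illustration, not a real Gmail result:

```python
from email import message_from_string

# A pared-down raw message as "show original" would display it.
# The header values here are illustrative only.
raw = """\
Authentication-Results: mx.google.com;
       spf=pass (google.com: domain of me@example.com designates 203.0.113.25 as permitted sender) smtp.mailfrom=me@example.com;
       dkim=pass header.i=@example.com
From: me@example.com
To: me@gmail.com
Subject: delivery test

hello
"""

msg = message_from_string(raw)
header = msg["Authentication-Results"]

# Everything after the authserv-id is "mechanism=verdict ..." clauses,
# separated by semicolons; keep the first word of each verdict.
verdicts = {
    clause.split("=", 1)[0].strip(): clause.split("=", 1)[1].split()[0]
    for clause in header.split(";")[1:]
    if "=" in clause
}
print(verdicts)  # {'spf': 'pass', 'dkim': 'pass'}
```

A real Authentication-Results header carries more detail (dmarc, alignment, key sizes), but the same split works on it.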

acd 19 hours ago 0 replies      
Was a mail admin for quite a number of years; here are some tips. Check that the IP address you are running your own mail server from is not blacklisted. Nor can the IP hosting the mail server be in a home IP range, because there are spam blacklists that explicitly mark home user IP ranges as possibly spammy.

Check the reputation in the various anti-spam blacklists out there:

- Check the IP in multiple blacklists: http://multirbl.valli.org/

- Check the server IP on mxtoolbox: http://mxtoolbox.com/whatismyip/

- If you are using a home server IP, please consider using a VPS with a good IP reputation.

- Look up the IP at Cisco SenderBase and check its score; make sure it's considered good: https://www.senderbase.org

- Look up the IP in Barracuda reputation: http://www.barracudacentral.org/lookups

(Cisco and Barracuda because these are somewhat common antispam services at edges.)
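
The mechanics behind those lookup sites are simple enough to script yourself: a DNSBL query is just the IP with its octets reversed, prefixed to the list's zone, and the IP is listed if that name resolves. A sketch in Python (zen.spamhaus.org is a real zone; the IP is a documentation address):

```python
import socket

def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build a DNSBL lookup name: the IPv4 octets reversed, then the zone."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """An IP is listed if the query name resolves (typically to 127.0.0.x);
    unlisted IPs fail to resolve. This needs network access, of course."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except OSError:
        return False

print(dnsbl_query_name("203.0.113.25"))  # 25.113.0.203.zen.spamhaus.org
```

Handy for a cron job that warns you before your users find out the hard way.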

jjnoakes 18 hours ago 3 replies      
PSA: If you run your own mail server and use that email address for password resets, please use a reputable hosting provider and dns provider, and turn on 2FA.

Don't let that often overlooked weak point be the way every one of your accounts gets compromised. Once they have your email, they have everything that resets via that email domain.

FollowSteph3 16 hours ago 1 reply      
It's possible but it's not worth it unless you have a lot of knowledge about it or a lot of time and enjoy it. That's why most small to medium companies outsource this to services like sendgrid, mailchimp, etc.

You can't do everything so you have to pick your battles ;)

mehdym 18 hours ago 0 replies      
1. Creating a good IP reputation takes time; get a static IP from your ISP and gradually increase the number of emails sent.

2. Having multiple IP addresses and throttling helps if you send bulk emails.

3. Check email headers of spammed emails, they usually contain valuable info about the reason of being detected (SPF/DKIM ,...)

4. Check the contents of your emails and find out spam score of the content.

5. You can look into commercial solutions like: www.own-mailbox.com

6. Hillary, if it's you, don't do it again!

bensbox 20 hours ago 2 replies      
I am running my own mailserver for several years now and it is quite possible. The problem is that your IP is new to the other MTAs and you need a couple of months to build up the reputation. Services like the one from Microsoft have forms where you can delist yourself from their blacklists. Even if you are not blacklisted, it helps your reputation to fill out the forms with the MTAs you have problems with. Nevertheless it is constant work (not much... 1 hour per week) to keep up the reputation. Just make sure you are not losing the IP when it starts to work out ;)

galori 7 hours ago 0 replies      
If you look at the aggregate of the comments here, I think you get the idea. It's possible, but you have to constantly work at keeping it going with reputation, whitelisting, etc., and you'll never get to 100% deliverability, incoming or outgoing. Probably more like 70%-80%.

The big guys whitelist each other.

There are (very expensive) services for the medium guys, such as [Return Path](https://returnpath.com/), that help you keep a good relationship with the various ISPs. With many ISPs they even have a very specific whitelisting deal where you literally pay to be whitelisted... I think that's mostly the second-tier ones like AOL, Comcast, Yahoo. Gmail, for example, doesn't play ball with that - but they will whitelist you if you follow all the rules and have a good reputation, which someone like Return Path can help with.

Is the situation shady? Yes. But it rose out of a need for dealing with a real problem (spam), which you have to admit has gotten under control over the last 10 years.

Bottom line, I would use a 3rd party service.

waits 20 hours ago 1 reply      
I've been running my own mail server on AWS for about 3 years now. Postfix, Dovecot, and a Rails app I wrote for webmail. At first I had 0 deliverability but over time it's improved to near 100%. Just setting up SPF, DKIM, not being on a blacklist, and building up a reputation of good mail seems to have worked wonders. I've been wanting to move to DigitalOcean but I don't yet know if there will be a significant hit on my IP reputation.

Postfix occasionally drops a legitimate incoming email due to a misconfigured sender, usually from a domain that doesn't resolve to anything, but I just log those in case I miss something important.

zzzcpan 20 hours ago 1 reply      
> I don't know what else to do

These days you also have to get a bunch of different VPSs, test sending e-mails from their IPs, and choose the ones that work. That's what e-mail marketing companies do, because a lot of IPs and subnetworks out there have a poor reputation or are even completely blacklisted, and that reputation is generally not recoverable.

mifreewil 20 hours ago 1 reply      
It's been many years since I've run my own mail server, but especially if you are running your mail server on a public cloud, you should make sure you aren't on any blacklists like Spamhaus: https://www.spamhaus.org/lookup/

EDIT: Looks like this is one of the things https://glockapps.com/ checks.

pja 19 hours ago 0 replies      
Yes, absolutely. I do this & have done for a decade or more.

But that decade probably helps - my mail server has kept the same static IP the entire time, so has a pristine spam reputation.

I added SPF a couple of years ago as otherwise Google was starting to look askance at some of the emails sent from my server, but I haven't bothered with DKIM (apart from adding a DKIM policy that says I don't do DKIM, that is). No problems so far.

Where are you hosting your mailserver? If it's on a dynamic IP on your home internet, then you're on a hiding to nothing. A static IP on a home internet connection might be OK, if you can fix your reverse DNS to be something sane rather than the more usual adsl123455.isp.net or something. Google generally hates reverse DNS entries that look like consumer internet connections.
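
Nobody publishes the exact rules, but a rough guess at what such a filter looks for can be sketched as a heuristic. The token list and patterns below are my own assumptions, not any provider's actual policy:

```python
import re

# Consumer-looking rDNS hints: ISP-access tokens (adsl, dyn, pool, ...)
# or an embedded IPv4-looking fragment such as 73-120-4-1 or 1.2.3.4.
CONSUMER_HINTS = re.compile(
    r"(^|[.-])(adsl|dsl|dyn|dynamic|pool|ppp|cable|dhcp)([.-]|\d)"
    r"|(\d{1,3}[.-]){3}\d{1,3}",
    re.IGNORECASE,
)

def looks_consumer(rdns: str) -> bool:
    """True if a reverse-DNS name looks like a consumer connection."""
    return bool(CONSUMER_HINTS.search(rdns))

print(looks_consumer("adsl123455.isp.net"))         # True
print(looks_consumer("c-73-120-4-1.hsd1.isp.net"))  # True
print(looks_consumer("mail.example.com"))           # False
```

If your own rDNS trips a check like this, ask your ISP to delegate reverse DNS so you can set it to your mail hostname.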

If you're hosting with AWS or another cloud provider, then I believe the only way to get a server on an IP address that doesn't come with a terrible reputation for spam is to cycle through IP addresses until you find one that works - this is what the big mail delivery companies do, I believe.

pmlnr 6 hours ago 0 replies      
Of course it is. I've been running my own for ages; the current setup is postfix, dovecot, dspam, opendkim, opendmarc. The last IP change was ~1 year ago, the last domain change ~1 month ago, no issues.

A long while ago, when I wasn't running the mail server in lxc and it was a server which also hosted web frontends, I got it "hacked" once; a rogue perl script sent tens of thousands of emails within an hour. Thankfully this was in ~2007, so removing the node from blacklists wasn't impossible.

Anyway: add dmarc, and make sure you have TLS to send/receive. This latter is probably the most important bit.

lucb1e 20 hours ago 1 reply      
> is it possible to run your own MTA, for personal use in 2016? Who, here, is doing it successfully?

Hello, I run my own mail server, though admittedly with an attitude of "your spam filter thinks I'm spam? That filter is broken; have fun reading your spambox."

Company email addresses are actually never any trouble anyway; only the free personal ones (yahoo, gmail, hotmail) are where people run into broken filters, so I don't think I'm missing out on anything. And even those filters usually learn (after a few emails to different accounts on the service) that my IP address is nothing to fear.

I add SPF records but don't sign with DKIM (too much trouble; I set this up years ago when I didn't have much experience yet).

The last time I had trouble sending email was with (of course) google apps. Some company, whose product we were required to use by school, had no privacy policy, so I wanted to ask after it. When I sent them an email, google's mail server outright refused delivery from my IP address (this is extremely rare; usually it goes into a spambox). Google is the only one that can get away with this, given the near monopoly, without people thinking it's an issue on google's side. In the end I just didn't send them an email. Also quite ironic that google, of all companies, is the one standing in the way of my email asking after a company's privacy policy.

This was half a year ago. Before that I can't remember having issues with anyone or anything for another year or so. Given how much I use email and how little spam I receive (catch-all with a blacklist), owning a mail server is totally worth it. Also because I don't have to accept anyone else's privacy policy or consider how many people have access to my inbox (I host at home, not colocated nor VPS).

dbcurtis 14 hours ago 0 replies      
I've been doing it for ages, but I do it a very lazy way. It's so lazy that I highly recommend it.

Before I explain what I do, I'll just mention that if you run the server that is the destination of your MX record, then you probably also want to run a back-up spooler at a remote site for when the connection to the primary server, or the server itself, is down. Down for security patching, for instance. Running a rock-solid mail service is kind of a pain, because it never fails at a convenient time.

So... I let my ISPs handle the high up-time requirement stuff and let them be the mail spool as far as the outside world is concerned. Then I pop it down and requeue the mail on a machine that sits behind my firewall and isn't even on the front lines of the internet. I run an IMAP server on that. If it goes down, pfffft, the mail spools up at the ISP for a while and gets popped down when my server comes back up. It actually all works pretty well, but since I use my ISP's SMTP server for outgoing, all of my e-mail clients have a rather funky asymmetric set-up. The e-mail setup wizards just don't handle it. At. All. As long as you remember how to do old-school e-mail config settings, and can convince the new-fangled e-mail client to let you do a manual config, the asymmetric server is not much of an issue.

For remote access, I port-forward IMAP in my firewall.

So I should probably modernize this whole kit, but.... I think I mentioned above that I am lazy.

fusiongyro 9 hours ago 0 replies      
I just set up mail myself last week. Postfix on FreeBSD over at DigitalOcean. Did everything you did and was quite frustrated that my wife seemed not to be getting my email. The thing is, I sent a test email before adding SPF, and then one before adding DKIM, and Gmail figured out they were all part of the same "conversation" so because the first one seemed spammy the rest were penalized.

I made my own fresh Gmail account and messages go through fine. So I'd try that: make a fresh account, separate from the one you've been testing with, and see if mail goes through.

jrnichols 13 hours ago 0 replies      
"gmail and hotmail both mark my mails as spam."

And they will continue to do so for as long as it takes for your mail server to earn a positive reputation. I recently had this problem with gmail after moving mail servers. There's really nobody you can contact; gmail doesn't whitelist mail servers.

You're going to run into this problem with proofpoint, barracuda, postini, etc...

deftnerd 20 hours ago 1 reply      
Receiving email is easy. Sending it is much harder.

One of the things that is the most frustrating is that if you end up on a blacklist, or a large provider decides independently not to trust you, it's often completely silent when it blackholes all your outbound emails to that service.

I've just moved over to a hybrid: hosting my own MX servers for incoming email, and forwarding all my outgoing email to an email-as-a-service provider. Their trusted IPs usually help delivery, and they're actively paid by their users to have employees making sure their IP ranges are whitelisted.
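
For the common case of postfix as the local MTA, pointing all outbound mail at such a provider is only a few lines of main.cf. The relay hostname and credentials file below are placeholders; your provider's port and auth details may differ:

```
# /etc/postfix/main.cf -- relay all outbound mail through the provider
relayhost = [smtp.relay-provider.example]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

Incoming mail is unaffected: your MX records still point at your own servers.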

t3ra 3 hours ago 0 replies      
Related question: how do you get push notifications from a self-hosted MTA? The closest option I know of is an app called CloudMagic.

perakojotgenije 18 hours ago 0 replies      
Yes, it is possible. Here's a blog entry [1] I wrote some time ago on how to set up your own email server so it will be accepted by major MTAs. I've used my own mail server since 2004.


diego_moita 20 hours ago 1 reply      
Using Postfix & Ubuntu on a Linode server; they have a very good how-to[0]. The main problem I have is filtering incoming spam with spamassassin.

[0] https://www.linode.com/docs/email/postfix/email-with-postfix...

calpaterson 20 hours ago 1 reply      
I run my own mailserver and have done since ~2012. I don't have DKIM or DMARC set up, but I am using postfix and not qmail. I'm sorry I can't help you with your delivery problem - except to say that if you didn't have e.g. rDNS set up to begin with, gmail might have a negative cache of it. I haven't had problems with delivery of outgoing mail except to a couple of poorly administered exchange hosts run by recruitment agencies - which complain about my mail but confusingly do still deliver it.

Two real problems I have faced... I once (embarrassingly) created a unix user with test/test credentials for messing about and forgot that my postfix setup at the time reused unix credentials (ssh was locked down to only allow specific users to log in). I sent a few million spams an hour for a couple of weeks, getting my host into all the DNS blocklists. This took some time to fix (you have to apply to have your host removed from each blacklist). While I'm sure sending all that spam was annoying to many other people, it didn't actually affect delivery of my mail... so it seems other administrators aren't using DNS blocklists?

Second, after a while I started to get a lot of spam. Maybe 10 per day. I tried various things to handle this, including setting up a proper bayesian spam filter (amavis-new) and using DNS blocklists myself. None of this worked for me. Greylisting however worked great.

So my suggestions to you: use defence in depth for mail as well as ssh. That means fail2ban, different creds for both, unusual ports, user whitelists, high patchlevels (auto-patch and restart is great for a personal mail server), maybe client-side TLS certs... etc. If you're relying on a single layer of defence, eventually you'll make a mistake with it and then you're in trouble. I guess that really applies to anything you're trying to secure.

aabajian 14 hours ago 0 replies      
We've used our own mail server to send all email reminders from www.cronote.com for the past five years. I followed the following tutorial step-by-step. The hardest thing was understanding how to set up the DNS records correctly:


It'll take about three hours to get everything working right (and passing spam checks), but it's a great introduction to running your own mail server, and when you're done you can simply create an image of the machine to use in the future.

mrbill 16 hours ago 0 replies      
I've been doing so for years. Having spent almost a decade in the ISP industry, I don't trust other providers for anything but transit.

I have a 50/10 connection from Comcast Business with five static IPs. One of those hosts what used to be a colo box with an ISP in Austin (I'd worked for them years ago, and had a services-for-colo agreement that lasted until an ownership change).

For about five years now I've had no problems sending or receiving mail; just keep a common-sense best practices configuration and do regular checks to make sure that you're not relaying/sending spam and aren't on any RBLs. "Nux"'s comment is a good list.

Jaruzel 16 hours ago 0 replies      
Adding in my 2p's worth of advice here as well. The big thing I think is your IP. Even if it is 'static', if you are running the MTA from your house, your IP will be marked as residential, and probably also still 'dynamic' (just made sticky by your ISP to your Router/Modem).

The best way to give your outgoing mail 'authority' is to relay it through a smarthost. Some ISPs offer this - All my outgoing mail from the Exchange server in my Garage routes through my ISPs smarthost - it's the only way I can be sure that the big webmail hosts (hotmail/outlook, gmail, yahoo) actually get the mails. If I try to route direct, the mails get blocked.

hawat 17 hours ago 0 replies      
You can deploy an end-to-end solution like zimbra (the community version is open source and free). It has all the necessary features (DMARC, SSL, clamav, spamassassin, and S/MIME), a really good webmail interface, some "cloud" file storage features (briefcase!) and more. As more and more people depend on big providers like google and microsoft for their mail, we should advertise self-hosted mail solutions as much as possible and convert as many people as we can. Mail is the last bastion of free communication, and should stay "neutral" as long as possible. Big providers are bullying smaller ones, marking mail as spam/junk or blocking entire address spaces on a "we think that we should, and we do as we think" policy.
timdeneau 20 hours ago 1 reply      
You also need a DMARC record (https://dmarc.org) along with your DKIM and SPF records.

Be careful testing your configuration when sending emails to the large providers, you can inadvertently score negative marks against your own reputation, which is hard to recover from.

lisper 15 hours ago 0 replies      
I've run my own mail server for >10 years. Getting it set up the first time is a bit of a chore, but once it is running it requires nearly no attention. The hardest part is keeping your IP address off the spam blacklists.

My setup:

Debian Linux + Postfix + Dovecot + a little greylist milter I wrote myself in Python. Happy to release the code if there's any interest.

I also have a script that automates the process of setting up a mail server but it's not quite ready for prime time. If anyone is interested in being an early adopter let me know.
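
Not lisper's code, but the core decision of a greylisting milter is compact enough to sketch. The 300-second delay is an arbitrary choice, and a real implementation would also expire old entries:

```python
import time

class Greylist:
    """Temp-fail the first attempt from an unknown (ip, sender, recipient)
    triplet and accept retries after a delay. Legitimate MTAs retry;
    most spam cannons don't."""

    def __init__(self, delay=300.0):
        self.delay = delay        # seconds a new triplet must wait
        self.first_seen = {}      # triplet -> timestamp of first attempt

    def check(self, ip, sender, recipient, now=None):
        now = time.time() if now is None else now
        triplet = (ip, sender, recipient)
        if triplet not in self.first_seen:
            self.first_seen[triplet] = now
            return "451 4.7.1 greylisted, try again later"
        if now - self.first_seen[triplet] < self.delay:
            return "451 4.7.1 greylisted, try again later"
        return "250 ok"

gl = Greylist(delay=300)
attempt = ("203.0.113.9", "alice@example.org", "me@example.net")
print(gl.check(*attempt, now=0))    # 451 4.7.1 greylisted, try again later
print(gl.check(*attempt, now=60))   # 451 4.7.1 greylisted, try again later
print(gl.check(*attempt, now=400))  # 250 ok
```

The temporary-failure code is what exposes the misconfigured senders mentioned upthread: servers that treat a 451 as permanent and bounce the mail.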

dboreham 10 hours ago 0 replies      
What exactly is happening to your outbound test messages? Is the recipient MTA accepting delivery but filtering? Are you seeing rejects from the MTA? If so what's the error? Try sending messages that are ordinary looking (not "Testing, testing...").

How long has the sender domain been registered?

swenn 19 hours ago 0 replies      
When I was in the same situation some time ago, I used https://www.mail-tester.com/ and with a score around 7 or 8, Gmail didn't mark it as spam anymore.

jghn 17 hours ago 0 replies      
I used to, and stopped just because it was a pain in the butt to keep my emails from getting flagged as spam all over the place. Several years ago I switched to using Dreamhost, which was great for a while, but I'm running into the same issues again: an increasing number of popular mail hosts are flagging emails from me as spam. Likewise, Dreamhost has pretty much given up on providing reasonable spam-blocking tools.

I'm now considering something like a google organization account, or whatever it is called, where it's really gmail but with my domain name

vancan1ty 18 hours ago 1 reply      
Those of you who run your own email servers -- do any of you run it from a residential connection with a dynamic IP address? Or do you pay extra for a static IP address / host using a VPS?

bluejekyll 16 hours ago 1 reply      
I used to run my own MTA, and then I got fed up staying ahead of spammers.

I chose postfix which has a lot of off the shelf support for blacklisting and such, but even so I found that I couldn't stay ahead of spammers and their (new at the time) techniques like reflection attacks.

Anyway, while it's fun to play with this, unless you want to spend time every week keeping it up to date, etc., I found it mostly a drain on other better things I could be doing.

cdysthe 3 hours ago 0 replies      
Hillary Clinton is the expert here ¯\_(ツ)_/¯
wtbob 13 hours ago 0 replies      
I followed Ars Technica's article years ago, and modulo a few minor alterations, their instructions seemed to have worked pretty well. I will note that I mostly receive, rather than send email, but I've had no problems I'm aware of.
virtualio 18 hours ago 0 replies      
The problem is that your ISP has given your home connection a DNS name that probably doesn't match the domain name you're trying to receive and send mail for. There's a workaround by adding an SPF text record to your DNS, but that's no guarantee that all major mail providers will accept mail from your mailserver as unmarked (it may rather end up in the recipient's spam folder).
lazyant 19 hours ago 0 replies      
Yes, with the caveat of non-guaranteed deliverability (in other words: one day, out of the blue, gmail/hotmail/whoever will silently drop your emails)
saynsedit 19 hours ago 0 replies      
Gmail blocks based on IP. If you're running off of a dynamic IP at home you're going to need a smarthost.

If you're running from a VPS you may need a smarthost.

qwertyuiop924 11 hours ago 0 replies      
Running an MTA isn't too hard, but it's annoying. Postfix, qmail, and opensmtpd are the easiest to set up. I never actually got POP/IMAP working, so I have no advice there.

lrusnac 17 hours ago 0 replies      
You should think about security though. An interesting article you should read before deciding to use your own mail domain: https://medium.com/@N/how-i-lost-my-50-000-twitter-username-...
RIMR 13 hours ago 0 replies      
Yeah, all you need is a business-class Internet connection (residential Internet services block mail services), and a server.

I run my own servers both onsite and in the cloud. I have my own little personal "corporate" network.

bluedino 19 hours ago 0 replies      
Ars did a series of articles on running your own mail server: http://arstechnica.com/information-technology/2014/02/how-to...
gbuk2013 20 hours ago 0 replies      
I have been running an Exim4 email server serving several domains for many years, mostly without issue. I do have SPF configured, and it runs on a commercial VPS.

For spam defence I use grey-listing based on DNSBL lookups, some standards-enforcement ACLs, and then SpamAssassin via MailScanner on the messages that get through.

z3t4 19 hours ago 0 replies      
The spam filter is a product, and if it doesn't work - like legitimate mail getting marked as spam - they will lose business. Just look at Yahoo, where it's currently impossible to get white-listed and which is losing business because of this. (Yahoo mail was as big as Gmail is now about 15 or so years ago.)
darkhorn 18 hours ago 0 replies      
In addition to the other things; DNSSEC and then registering your domain for at least 3 years might help.
mverwijs 20 hours ago 3 replies      
Been running it for over 15 years. No SPF. No dkim. Never had complaints, though threads like these have me worried.
meej 12 hours ago 0 replies      
I have been running my own email server on a VPS running slackware and sendmail for over 3.5 years now and have not had any trouble sending mail.
efesak 19 hours ago 1 reply      
Sure! I run several servers (and develop my own solution, see https://poste.io). Watch DMARC reports and give it a little time; most of the time it will solve itself. You can also register for feedback loops...
wfunction 17 hours ago 0 replies      
I'm not sure it's a great idea from a privacy viewpoint.

Why should anyone who has ever received an email from you know your IP address? Especially if it's a home server, that will give them some idea of where you live.

jack9 17 hours ago 0 replies      
I run my own for my own personal use... not publicly available accounts. I've never even looked at SPF or DKIM, or cared if I was on blacklists. I rarely send mail and receive lots of it.
markvaneijk 19 hours ago 1 reply      
One important thing is that for Gmail you need to send your e-mail using TLS.
kazinator 19 hours ago 1 reply      
I run my own mail server on a dynamic IP. Not being marked as spam is mainly a matter of how you send mail. Make your next SMTP hop a server of decent repute. Don't send mail directly.
ams6110 20 hours ago 2 replies      
Are you running it out of your house (e.g. on a residential ISP)? If you are, you'll probably never have much success; all the major email providers block residential IP addresses.
omginternets 18 hours ago 0 replies      
Is there a comprehensive guide to running a homebrew end-to-end encrypted mail server? Something roughly analogous to tutanota or protonmail?
drudru11 20 hours ago 0 replies      
Yes - I just started doing it again. I'm glad I did. I just made sure that I could send/receive mail without issue from the large email systems.
janci 18 hours ago 0 replies      
Have a look at the message headers when it goes to Gmail or any other target. There will be a hint, why it gets marked as SPAM.
mindslight 19 hours ago 0 replies      
Postfix, Mail Avenger, Linode, unison, mutt, -spf, -dkim, -dmarc. Fine for >10yr, although I'm sure the age helps with rep.
dstjean 9 hours ago 0 replies      
Ask Hillary Clinton
ebbv 20 hours ago 2 replies      
It's gonna be really hard. I work at a hosting company and our staff has to work constantly to make sure people's servers get taken off of spam blacklists, or IP blocks of ours need to get removed, etc. There's a lot of stuff to navigate out there, basically kludges that have been put in place because email is just such a terrible, insecure system.

I'm sure if you're willing to put in the effort you can do it. But from my point of view, I'd probably just get a managed VPS with a hosting company who will take care of all the headaches of dealing with spam filters for me. They can be had pretty cheaply and the money is well worth it if you get good support.

cdnsteve 17 hours ago 0 replies      
Anyone aware of any newer MTAs built using Python, Go or Node? Maybe with nice JSON config.
igk 19 hours ago 0 replies      
Yes, am doing it successfully using the awesome iredmail.
ABorserker 5 hours ago 0 replies      
Check with mxtoolbox.com
meeper16 19 hours ago 0 replies      
Sendmail is your friend.
cookiecaper 20 hours ago 2 replies      
No, not really. It's a constant battle to get delisted from spam blacklists and your site keeps popping back up. Even most companies don't bother with it anymore.
yeukhon 11 hours ago 0 replies      
Some ISPs don't allow SMTP at all. FWIW, if you are hosting on Amazon, for example, you can't run a mail server on port 25 from an EC2 instance until you submit a support ticket.
SixSigma 17 hours ago 0 replies      
Your IP block assigned to domestic suppliers is probably the thing that kills you.

My VM in the ISP has no issues, but from home, blocked at lots of places.

What you can do is use your ISP as a mail sending relay.

sliverstorm 17 hours ago 0 replies      
Make sure you aren't delivering over IPv6. I had all my IPv4 rules set up, and mail worked fine - except delivering to gmail. Turns out I was delivering over IPv6. I wound up shutting off my IPv6 interface.

As a good netizen I should work to behave on IPv6, but it was just too much of a pita for my tiny, single-user server, which works fine on IPv4.

j45 19 hours ago 0 replies      
You can run something like Zimbra. I ran my own personal server for almost 15 years and intend on going back to it.

I trusted Microsoft once with my hotmail and they somehow deleted 16 years of my emails and correspondence with a few friends who passed away.

jwatte 20 hours ago 0 replies      
I do that, using postfix.

You need to also run spamassassin, with auto update, and check in with a number of rbl servers on the receiving side. (This is more important when you forward aliases to gmail and such)

cmdrfred 20 hours ago 1 reply      
I did almost exactly what you describe and it works flawlessly, but I only send a small volume of mail (my own). Digital ocean droplet $10 a month, but it would run on a $5 one if you don't also have caldav/sftp/kerberos/vpn/etc there like me.

Fun Fact: Microsoft Exchange has no native support for DKIM, and tons of businesses run it in house, often on 'business class' connections without correct reverse IP records.

notadoc 19 hours ago 0 replies      
Sure, but do you want another chore?
miguelrochefort 17 hours ago 0 replies      
People like you should be punished.
tacostakohashi 20 hours ago 0 replies      
Basically not, if you have other things you're trying to do with your life at the same time.

It's like trying to make a toaster from scratch, or growing your own wheat to make your own bread. It's possible, but it's also impractical and you'll end up with a worse result than you can get off the shelf.

tootie 17 hours ago 0 replies      
I don't know if "run your own" means your own everything, but Amazon SES is a pretty viable option. Even if it's not, they have a pretty good checklist for keeping yourself out of spam filters: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/deliver...
mindcrash 20 hours ago 1 reply      
Grab https://github.com/sovereign/sovereign, follow instructions, done.
Stop Treating Marijuana Like Heroin nytimes.com
326 points by okket  1 day ago   246 comments top 22
poof131 1 day ago 5 replies      
The DEA needs to go bye bye, and be replaced by the DRA, Drug Rehabilitation Agency. While I respect the risks operational agents take against crime syndicates, the war on drugs is a disaster. Even the government's own RAND Corporation concluded enforcement is far worse than rehabilitation, with rehab being 7 times more effective than policing, 400 times more effective than border interdiction, and 800 times more effective than actions in source countries.[1] Not to mention the privacy violations it's enabled. If the goal is to reduce drug use, the US policies are stupid. If the goal is for politicians to sound tough and maintain a never-ending drug war so they can keep sounding tough, then it's a success.

And letting the DEA decide what is Schedule 1 is like letting the fox guard the hen house. I've never heard of a government agency voluntarily reducing its scope and budget. Just more of our leadership deferring responsibility to so-called experts with a vested interest. A pretty rampant phenomenon these days, ranging from the banks to the military.

[1] http://www.rand.org/pubs/periodicals/rand-review/issues/RRR-...

martinald 1 day ago 33 replies      
While I'm totally for drug (all drugs - not just cannabis) legalisation/regulation, I do think the "dangers" of marijuana have been really downplayed over the last few years.

I know a lot of people at university who got into smoking it very often and basically lost 10+ years of their life to complete apathy to anything. Some have now stopped and are totally different people - just 10 years behind.

While it doesn't cause overdose, cirrhosis or criminal activity from the user, it does become very addictive for some people and causes them to be extremely unmotivated in their life.

Is this as bad as heroin or crack? No obviously not, but a lot of people end up really trapped by it.

contemporary2u 1 day ago 3 replies      
The big question is: why are drugs illegal at all?

Drugs are a multi trillion dollar business. 100's of billions of dollars turning the wheels of the shadow economy every day.

Who stands to lose all that money if drugs were legalized? Who is involved in drug business? think of all of the players in the chain..

We know examples of developed countries where drugs are legalized and everyone did not become a junkie over night, or next week, or next year.

Drug money fuels a lot of things in this world; entire countries are built and destroyed using drug money, new political systems are built and presidents elected using drug money. It is a convenient "invisible" hand that makes a lot of things happen in this world.

The drug problem is not about me or you becoming a junkie. It's about money, a lot of money; it's about the economic and political power that comes with it.

The day drugs are legalized, the convenient way to make lots of money out of "thin air" would all of a sudden be gone...

Hearings on the CIA and Drug Trafficking:


John Kerry committee:


Police Officer Mike Ruppert (dead now, shot in the head 2 years ago) Confronts CIA Director John Deutch on Drug Trafficking:


will_brown 1 day ago 1 reply      
>The D.E.A. and the F.D.A. insist that there is not enough scientific evidence to justify removing marijuana from Schedule 1. This is a disingenuous argument; the government itself has made it impossible to do the kinds of trials and studies that could produce the evidence that would justify changing the drugs classification.

This cannot be overstated. I recently discussed a prior client of mine on HN who is one of the few patients who receive federally authorized marijuana from the University of Mississippi (at a dosage of 360 joints/month). My client has a rare bone disease and has been in the federal program for over 30 years; one of his biggest complaints is that while he wanted studies to be conducted regarding the use/effects of marijuana, the government has refused.

thisjustinm 1 day ago 0 replies      
The biggest change I've seen since marijuana was legalized here in Colorado is that pot isn't just one type (just like beer isn't just lagers, etc). There's hundreds of strains and some are VERY different than the others (again think Guinness vs bud light vs your favorite IPA, etc).

You can walk into a dispensary here and choose your levels and mix of THC and CBD - want 20%+ THC for an intense high? Maybe 15% CBD with no THC for almost no psychoactive high but a focus on pain relief / relaxing? All that and everything in between is available.

As research into all the other cannabinoids increases I expect to see their breakdowns included on labels as well.

And that's where the analogy to alcohol breaks down - there's typically just one active ingredient in beer but dozens in pot. And the different strains can deliver various ratios of the cannabinoids to create very different effects.

Bottom line is that I could easily see how if you only ever experienced illegal pot i.e. no choice and low quality I think you are likely to hold a different view on it than if you'd experienced the variety and nuance of legal pot.

arcticbull 1 day ago 1 reply      
Stop treating addiction as a legal problem and treat it as a public health problem, like Portugal. Decriminalize all substances and push addicted to treatment programs. And just legalize pot entirely, one of these is not like the others (at least Canada is leading the way there).
Fifer82 1 day ago 0 replies      
I am in the UK and have been smoking cannabis daily now for 12 years. Legal or not.... does it look like I care?? At the end of the day, I am a nice guy, I help people out, I have held down a 10 year happy relationship and do pretty well at my job. I also keep fit and "healthy".

I don't care if I die, so take that away from me, and literally there is no valid argument to stop me enjoying a smoke.

Fuck the Police.

snarfy 1 day ago 3 replies      
I've been a daily user for about 30 years now. It's more like coffee and cigarettes than it is like heroin or alcohol.
god_bless_texas 1 day ago 0 replies      
So we're looking to an enforcement agency to make legal one of the very things that justifies its existence?
mercer 3 hours ago 0 replies      
Without arguing for anything in particular I'd just like to add the following:

If you feel or suspect that marijuana negatively affects your life, and if you want to quit (even temporarily), then I can strongly recommend visiting the 'leaves' subreddit (leaves.reddit.com).

In fact, even if you're just curious about how people compare living with and without weed as part of their lives, it could be interesting to read a bunch of the posts there.

It's a remarkably varied community; not everyone there sees weed as evil and quitting as a necessity.

And for alcohol there's a similar subreddit called stopdrinking. It's similarly open-minded.

Both have really helped me deal with (borderline) dependence.

aaron695 1 day ago 2 replies      
Stop demonizing heroin.

To be honest, not sure what is right here.

Is it ok to demonise interracial marriage while trying to get the minority the right to vote?

Does the end justify the means?

But if you think heroin is a demon and marijuana(or alcohol) is ok you are sadly mistaken.

mark_l_watson 23 hours ago 0 replies      
I like to listen to Catherine Austin Fitts who was the undersecretary of HUD in the George H. Bush administration. Catherine often says "follow the money."

Unfortunately there are a lot of vested interests, organizations that make a ton of money from the cruel marijuana laws: private corporations running prisons, police unions/organizations, and I would argue the big pharmaceutical companies (because marijuana is a good natural pain killer). The war on drugs is a high-profit business.

mahranch 23 hours ago 0 replies      
Stop treating Heroin like heroin. That's the very reason why we have so many opiate addicts. As a former opiate user who almost fell down the rabbit hole (unfortunately, my brother eventually did fall down that hole), by demonizing heroin, you set it apart from other opiates like Codeine, Vicodin, and Percocet/Oxycontin. It's not any different from those drugs. I found out the hard way. All of those drugs are opiates, and if you were to snort heroin and then snort Oxycontin, you would almost certainly prefer the Oxycontin. It's a much cleaner, longer-lasting high, while the heroin high kinda blows. Taking it in pill form isn't any different. Heroin is only sorta useful when it's taken IV or smoked, but heating it up destroys the compounds which get you high, so it's not a very good ingestion route.

Pure heroin is quite harmless by itself (provided you didn't take too much). People can live a full and complete life totally addicted to heroin with probably less risks than even pot. The problem of heroin is never direct -- it kills/harms via overdose (misjudging a dose, or doing the same amount after your tolerance drops), or, in the most common "overdose" we hear of, when Fentanyl gets cut into your heroin (yes, Fentanyl is way more potent than heroin and hospitals prescribe it all the time). The other way people die is by mixing heroin with other stuff, namely Xanax. You virtually never hear of someone just dying from heroin alone. And have you ever heard of someone getting lung cancer from their heroin?

The drug itself isn't like crack or meth where doing it for an extended period of time can permanently turn you crazy (Psychosis). Or kill you from exhaustion/sleep deprivation. People can take it for decades and show no obvious signs (functioning addict). The drug is always labeled "super hard" by the same people carrying around Percocets in their purse. But that's OK because they got it from a dentist/doctor. Percocet is oxycontin mixed with Acetaminophen and Oxy is almost as strong as heroin per mg. Demonizing heroin while you're taking vicodin or oxy is so hypocritical that there should be a new word invented for it, something that means "more than hypocritical".

Heroin's problem lies in that it really doesn't have a great medical application. What it does, other drugs now do literally (not figuratively) 1000x better. But heroin is incredibly cheap to make (relatively speaking) and provides a unique rush when taken IV. There are other drugs (like Demerol) which provide that same rush but hospitals instead inject it into your fatty tissue (ass) so it's slow to release (so you can't get the rush). So why don't drug kingpins make that stuff instead? Because those pharmaceuticals (like Vicodin, Demerol, and Fentanyl) require complex processes (many steps) and expensive labs to make well, and it's way more expensive than Heroin. Not just cost-wise but also time-wise. When you're out in some back-country in Afghanistan mixing your freshly dried Opium in a vat of Acetone to dissolve it, you're on the clock to get in and out with the product or the boss is going to be pissed.

I realize I'm starting to rant, but that's because I myself always thought "drugs are bad!" then I tried pot. Realized it wasn't that bad, safer than alcohol. That same "Well that's not so bad" happened to me with heroin. After snorting it I was actually disappointed and thought I was ripped off. I didn't realize it wasn't stronger than Oxycontin. I thought it was orders of magnitude stronger than these harmless little pills my dentist gave me when I got my wisdom teeth pulled. I was wrong. And that lack of understanding is what almost threw me down the rabbit hole.

davidf18 1 day ago 2 replies      
A problem is that women are smoking marijuana and may not yet know they are pregnant.

Medical marijuana laws and pregnancy: implications for public health policy. http://www.ncbi.nlm.nih.gov/pubmed/27422056

"Although there is much to learn yet about the effects of prenatal marijuana use on pregnancy and child outcome, there is enough evidence to suggest that marijuana, contrary to popular perception, is not a harmless drug, especially when used during pregnancy. Consequently, the public health system has a responsibility to educate physicians and the public about the impact of marijuana on pregnancy and to discourage the use of medical marijuana by pregnant women or women considering pregnancy." [emphasis added]

Long-term Marijuana Use and Cognitive Impairment in Middle Age. http://archinte.jamanetwork.com/article.aspx?articleid=24849...

Association Between Lifetime Marijuana Use and Cognitive Function in Middle Age: The Coronary Artery Risk Development in Young Adults (CARDIA) Study. http://archinte.jamanetwork.com/article.aspx?articleid=24849...

Marijuana use leads to increased stillbirth: (free download)

Association between stillbirth and illicit drug use and smoking during pregnancy. http://www.ncbi.nlm.nih.gov/pubmed/24463671

"CONCLUSION: Cannabis use, smoking, illicit drug use, and apparent exposure to second-hand smoke, separately or in combination, during pregnancy were associated with an increased risk of stillbirth. Because cannabis use may be increasing with increased legalization, the relevance of these findings may increase as well."

fithisux 1 day ago 2 replies      
And start treating nicotine like heroin.
kennell 1 day ago 0 replies      
More importantly: view drug use (and abuse) as a health issue, rather than a criminal justice issue.
viraptor 1 day ago 4 replies      
Do I understand it right? Even in states with legalised recreational use you can't do research on cannabis without DEA approval?
Mz 17 hours ago 0 replies      
My observation has been that drugs don't really fuck up people's lives. People with fucked-up lives turn to drugs and then, when they are ready to get their act together, they also get off the drugs. Then they blame the problems on the drug use.

I am pretty much a teetotaller. I am also allergic to marijuana, so I have no desire to be around it. But I have known people in person and read articles and the like. One example that comes to mind: a 16-year-old boy's brother is gruesomely murdered, he becomes an addict or alcoholic, and when he is 19 he decides to get clean and sober. So, he's had three years to process his grief and he is now a legal adult, not a helpless legal minor who can't do much about anything wrong in his life. Now, he wants to be sober and talks trash about what a bad person he was for using/drinking and blah blah blah.

My best guess: It is easier and/or more socially acceptable to blame drugs than to admit that life really shat on them, their parents are assholes, whatever. But, in most cases, it looks to me like they use for a reason and if that underlying reason gets better, then they will tend to stop using.

I am fond of the book "The truth about addiction and recovery" which basically takes this view.

caub 22 hours ago 0 replies      
just don't smoke it
alvarosm 21 hours ago 0 replies      
This is like lsd in the 60s... Hopefully in a couple of decades we'll realize marijuana is shit.
youngButEager 23 hours ago 4 replies      
INSIGHT: Being in a position to observe the behavior of users -- MANY users -- over a period of time, and non-users in the same settings.

Most of you probably don't have that experience.

- Parents do. Ask parents "how did your kid change after commencing use of pot?"

- Teachers do, if they know a student imbibes. These days teachers have a great chance to see the difference between regular students and those who smoke it.

- Property managers of apartment properties do.

I'm the latter. I made my Silicon Valley startup bucks and have been buying and operating apartment properties since 1993, 2 years out of college.

Here's what I experience:

1) my pot using tenants do not like following rules compared to other tenants.

2) they are defiant in their attitude to varying degrees, challenging things they initially agree to ("no smoking", "new occupants must pass the same tenant screening you did and must be added to the lease", "no guest parking", "no loud parties/noise after 10pm", etc).

RECENT EXPERIENCES: tenant moves in, it's a no-smoking building, they begin smoking pot in their unit EVEN THOUGH there are 'no smoking' signs everywhere and the lease clearly calls for 'no smoking cigars/cigarettes/marijuana'. THAT'S HAPPENED 18 TIMES (18 different tenants) IN THE PAST year and a half.

- tenant moves in, gets a warning for their car blocking other tenants in the parking lot, they kept blowing it off, parking and blocking others. FIVE TIMES over a 2 month period until they were evicted.

- tenant moves in someone without adding them to the lease (a requirement), we catch them, the added person does not pass the normal tenant screening process and they have to leave, the original tenant keeps them there anyway, we catch them, gave them a final warning, they ignored the final warning, and got evicted

Just a very small set of examples.

IF YOU SMOKE, you are the LAST person to know if your pot use has changed you, added some negatives to your behavior. "A doctor who treats himself has a fool for a patient."

I myself imbibed for 3 years as a teen. WHAT AN UNMITIGATED DISASTER. Normal recreation time was clouded by intoxication.

If you prefer being intoxicated in your leisure time, how would you feel telling people that?

"I like being intoxicated. It's my recreational activity."


"I like being intoxicated during my leisure time."


"When I spend free time with recreational pursuits, I like being intoxicated."

VERY FEW pot users will admit that to arbitrary others. Deep inside, we know to ourselves "I shouldn't need intoxication to enjoy myself."

You should not need to live in an altered state, intoxication, and if you are frequently choosing intoxication from pot as 'recreation', something is wrong.

sandworm101 1 day ago 0 replies      
The OP speaks of a lack of studies due to a lack of supply. That's incorrect. There are plenty of studies and plenty of research-grade marijuana out there. There is just very little American material. The plant has been studied in Canada and Europe with plenty of material available from a variety of producers.

Canadian government's list of sanctioned providers:


Their trade association:


The growing supply of research-grade material:


So stop harping on about a lack of research. The fact that a medical substance hasn't been studied within the magic bounds of a particular country should be irrelevant to any reasonable person.

AWS Application Load Balancer amazon.com
365 points by rjsamson  3 days ago   125 comments top 30
encoderer 2 days ago 6 replies      
We plan to do a blog post about this at some point, but we had the pleasure of seeing exactly how elastic the elb is when we switched Cronitor from linode to aws in February 2015. Requisite backstory: Our api traffic comes from jobs, daemons, etc, which tend to create huge hot spots at tops of each minute, quarter hour, hour and midnight of popular tz offsets like UTC, us eastern, etc. There is an emergent behavior to stacking these up and we hit peak traffic many many times our resting baseline. At the time, our median ping traffic was around 8 requests per second, with peaks around 25x that.

What's unfortunate is that in the first day after setting up the elb we didn't have problems, but soon after we started getting reports of intermittent downtime. On our end our metrics looked clean. The elb queue never backed up seriously according to cloud watch. But when we started running our own healthchecks against the elb we saw what our customers had been reporting: in the crush of traffic at the top of the hour connections to the elb were rejected despite the metrics never indicating a problem.

Once we saw the problem ourselves it seemed easy to understand. Amazon is provisioning that load balancer elastically, and our traffic was more power law than normal distribution. We didn't have high enough baseline traffic to earn enough resources to service peak load. So, cautionary tale: don't just trust the instruments in the tin when it comes to cloud IaaS -- you need your own. It's understandable that we ran into a product limitation, but unfortunate that we were not given enough visibility to see the obvious problem without our own testing rig.
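The top-of-minute hot-spot effect encoderer describes is easy to reproduce in a few lines. A sketch with made-up numbers (500 hypothetical cron clients firing at second 0 of each minute over a small uniform background), showing how far peak load sits above the median:

```python
# Sketch: why cron-driven API traffic has huge top-of-minute hot spots.
# Clients fire at fixed offsets (cron style) on top of steady background
# traffic; we compare peak to median requests per second over an hour.
import statistics

SECONDS = 3600
load = [0] * SECONDS
for s in range(SECONDS):
    if s % 60 == 0:
        load[s] += 500   # 500 hypothetical cron clients, top of each minute
    load[s] += 5         # steady background traffic

peak, median = max(load), statistics.median(load)
print(peak, median, peak / median)  # 505 5 101.0
```

A load balancer (or autoscaler) sized off the median sees a 100x spike once a minute, which is exactly the shape an elastically provisioned ELB tier handles worst.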

ihsw 2 days ago 1 reply      
Can we agree on the terminology for Application Load Balancer and Elastic Load Balancer?

* ALB: Application Load Balancer

* ELB: Elastic Load Balancer

I have seen Application Elastic Load Balancer/AELB, Classic Load Balancer/CLB, Elastic Load Balancer (Classic)/ELBC, Elastic Load Balancer (Application)/ELBA.

In any event, I think it is great that AWS is bringing WebSockets and HTTP/2 to the forefront of web technology.

tobz 2 days ago 0 replies      
The real question: does this provide a faster elasticity component than ELBs?

At a previous employer, we punted on ever using ELBs at the edge because our traffic was just too unpredictable.

Combining together all of the internet rumors, I've been led to believe that ELBs were/are custom software running on simple EC2 instances in an ASG or something, hence being relatively slow to respond to traffic spikes.

Given that ALBs are metered, it seems like this suggests shared infrastructure (bin-packing people's ALBs onto beefy machines), which makes me wonder if that is how it actually works now, because it would seem the region/AZ-level elasticity of ALBs could actually help the elasticity of a single ALB.

If you don't have to spin up a brand new machine, but simply configure another to start helping out, or spin up a container on another which launches faster than an EC2 instance... that'd be clutch.

Deep thoughts?

0xmohit 2 days ago 4 replies      
AWS still doesn't support IPv6. Good to see them talking about HTTP/2.

Waiting for AWS to embrace IPv6.

boundlessdreamz 2 days ago 2 replies      
So this is pretty much the same as Google HTTP load balancing https://cloud.google.com/compute/docs/load-balancing/http/ + websocket & http2?
fred256 2 days ago 1 reply      
+1 for CloudFormation support on launch day.
+1 for support for ECS services with dynamic ports (finally!)
-1 for no CloudFormation support for ECS.

(To configure an ECS service to use an ALB, you need to set a Target Group ARN in the ECS service, which is not exposed by CloudFormation)

cheald 2 days ago 1 reply      
Exciting! Disappointing that you can't route based on hostname yet, though. I've got 5 ELBs set up to route to different microservices for one app, and because we couldn't do path-based routing before, that's all segmented by hostname. As soon as ALB supports hostname routing, I can collapse those all into a single LB.
agwa 2 days ago 1 reply      
> 25 connections/second with a 2 KB certificate, 3,000 active connections, and 2.22 Mbps of data transfer, or

> 5 connections/second with a 4 KB certificate, 3,000 active connections, and 2.22 Mbps of data transfer.

"2KB certificate" and "4KB certificate"? Is this supposed to read "2048 bit RSA" and "4096 bit RSA"?

indale 2 days ago 1 reply      
This looks pretty sweet. The next big thing for API versioning would be header-based instead of URL-based routing; looking forward to "give you access to other routing methods".
rjsamson 3 days ago 2 replies      
They finally added support for websockets! Really looking forward to giving this a try with Phoenix.
daigoba66 2 days ago 1 reply      
These new features are cool... but they still pale in comparison to something like HAProxy.

I guess the tradeoff is that with ELB/ALB, like most PaaS, you don't have to "manage" your load balancer hosts. And it's probably cheaper than running an HAProxy cluster on EC2.

But for the power you get with HAProxy, is it worth it?

Does anyone have experience running HAProxy on EC2 at large scale?

erikcw 2 days ago 1 reply      
I'm curious if this will allow Convox to route to multiple services with just a single ALB instead of the historical default of one ELB per service. Would be a real cost savings for a micro-services architecture.
avitzurel 2 days ago 0 replies      
This is very good. Recently my workflow has been ELB -> NGINX -> Cluster.

Nginx was a cluster of machines that did routing based on rules into the EC2 machines. Now that the ALB has some of those capabilities, it's time to evaluate it.

archgrove 2 days ago 1 reply      
Any love for Elastic Beanstalk with these? They seem well matched. Though EB always feels a bit of a red-headed stepchild in the AWS portfolio.
renaudg 1 day ago 0 replies      
I'm in the process of containerizing an app that includes a Websockets service, and given ECS / ELB limitations we'd just decided to go for Kubernetes as the orchestration layer.

This ALB announcement + the nicer ECS integration could tip the balance though.

Any thoughts on how likely it is that Kubernetes can/will take advantage of ALBs (as Ingress objects I suppose) soon ?

shawn-butler 2 days ago 0 replies      
Anybody know whether the new ALB handles client TLS (SSL) certificates when operating in HTTP mode?

I was trying secure an API Gateway backend using a client certificate but found ELB doesn't currently support client side certificates when operating in http mode.

There was this complicated Lambda proxy workaround solution but I gave up halfway through...


dblooman 2 days ago 1 reply      
It seems that routing is done in the following way: /api/* goes to applications and expects :8080/api/ rather than the root. It would be nice to have the option to direct traffic to just :8080.
axelfontaine 2 days ago 2 replies      
It looks like the big missing piece is auto-scaling groups as target groups...
sturgill 2 days ago 1 reply      
This sentence sums up one of my main reasons for appreciating AWS:

The hourly rate for the use of an Application Load Balancer is 10% lower than the cost of a Classic Load Balancer.

They frequently introduce new features while cutting costs.

kookster 2 days ago 1 reply      
As a heavy ECS user, all I can say is thank you, finally!
nodesocket 2 days ago 3 replies      
Do ALBs support more than a single SSL certificate?
manishsharan 2 days ago 0 replies      
This is definitely nicer than having to create subdomains for microservices and mapping each subdomain url to its own Elastic Loaad Balancer + Elastic Beanstalk instance. But I have already gone down this path so I am unlikely to use AWS Application Load balancer. I wish I had this option a year ago.
nailer 2 days ago 1 reply      
Nice HAProxy/nginx alternative. It's got HTTP/2 support though, which puts it ahead of HAProxy.
DonFizachi 2 days ago 0 replies      
Any idea if sticky TCP sessions will be supported on ELB/ALB any time soon?
amasad 2 days ago 0 replies      
I wonder if they fixed the routing algorithm for TCP connections. It's round-robin on ELB, which performs terribly for long-lasting connections.
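A quick simulation of the effect amasad describes: with alternating long and short connections, round-robin piles all the long ones onto one backend, while a least-loaded policy keeps them even. The durations are made up, and "least connections" is approximated here as least accumulated work:

```python
# Sketch: round-robin vs. least-loaded assignment with long-lived
# connections. Durations (in arbitrary units) are fabricated.
from itertools import cycle

durations = [100, 1, 100, 1, 100, 1, 100, 1]  # alternating long/short

# Round-robin over two backends: backend 0 gets every long connection
rr = [0, 0]
for backend, d in zip(cycle([0, 1]), durations):
    rr[backend] += d

# Least-loaded: always assign to the backend with the least work so far
lc = [0, 0]
for d in durations:
    lc[lc.index(min(lc))] += d

print(rr, lc)  # [400, 4] [202, 202]
```

With the cron-style traffic pattern elsewhere in this thread, that 100:1 skew is exactly what makes round-robin painful for long-lasting TCP connections.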
joneholland 2 days ago 0 replies      
Disappointing. I was hoping they were launching a service discovery stack to compliment ECS.
nodesocket 2 days ago 3 replies      
So what would be a use case for using ELBs now? Seems like ALBs do everything ELBs do, but with websocket and HTTP/2 support.
bradavogel 2 days ago 2 replies      
Does anyone know if it (finally) supports sticky websocket sessions?
merb 2 days ago 0 replies      
Virtual Host Load Balancer would be great.
NeckBeardPrince 2 days ago 0 replies      
Any idea if it's HIPAA compliant?
Shape of errors to come rust-lang.org
421 points by runesoerensen  3 days ago   157 comments top 17
cfallin 3 days ago 8 replies      
I really appreciate the user-friendliness of Rust's error messages -- I can't remember seeing a compiler tell me "maybe try this instead?" before (perhaps something from Clang, but never with the specificity of, e.g., a suggested lifetime annotation). And from a parsing / compiler-hacking perspective, it seems really hard to get the heuristics good enough to produce the "right" root cause. Kudos to the Rust team for this continued focus!
justinsaccount 3 days ago 1 reply      
In my little rust experience the suggestions the error messages had, even in the longer explanations, were useless. Mostly it came down to me trying to do something that was simply not supported, but the compiler not knowing that and leading me on a wild goose chase.

From what I remember, I was trying to use the iterator trait, but return records that contained str pointers.. The goal being to parse a file and yield pointers to the strings from each line to avoid allocating memory and copying bytes around. Rust tries to tell you that you need lifetime specifiers, but if you try adding those nothing compiles anyway because the iterator stuff doesn't actually support that.

I eventually got it to work by returning copies of the strings.. maybe the unsafe stuff would have done what I wanted, that's what rust-csv seems to do at least.

I concluded that Rust is definitely not a language you can learn by chasing down error messages and incrementally fixing things. If you don't 100% understand the borrow and lifetime stuff, fixing one error is just going to cause two new ones.

dllthomas 3 days ago 1 reply      
Please put file locations (as many as might be relevant) at the start of a line in the standard format!

The other changes look valuable. Improving error reporting is great.

Edited to add emphasis: I really did mean a line, not every line.

waynenilsen 3 days ago 3 replies      
I would love to see an option for showing the error order forward or backward. My workflow is to start fixing compile time errors from the top of the `cargo` output but scrolling to the top can be fairly annoying when there are a lot of errors. Having the most relevant error at the bottom of the command line (where it automatically scrolls) would be useful as an option IMO. This probably causes some other unseen problems however
karysto 3 days ago 0 replies      
I always love it when we bring UX to the tooling we use, and not just to the end product in our users' hands. Everyone appreciates a delightful UX, including software engineers. I've been eyeing Rust for a while now; this just gives me another excuse to hack with it.
gnuvince 3 days ago 0 replies      
Very good! I always liked the content of Rust's error messages as it clearly explained the issue, but the form of those error messages was a bit problematic, they were very noisy and it wasn't easy to see the issue by simply glancing, you had to scroll up, find the beginning of the error and read carefully.
ColinDabritz 3 days ago 0 replies      
I love the clarity and readability of these errors. You can work on UX at lower levels, and it looks like this. Beautiful. I'm not even a Rust dev, I'm mostly in C# land these days, but I appreciate the effort this takes. Well done!
Animats 3 days ago 4 replies      
Imagine what you could do if error messages didn't have to appear as plain text. You could have arrows, lines, shaded areas, and even multiple fonts.
zalmoxes 3 days ago 3 replies      
Inspired by Elm?
IshKebab 3 days ago 0 replies      
This is great! Why keep the row/column numbers at the end of the source file though? Aren't they redundant (IDEs shouldn't be parsing command line output anyway so they don't need it).
nneonneo 3 days ago 0 replies      
This focus on compiler usability is really fantastic. C calls these "compiler diagnostics" for a reason - they should help the programmer diagnose and fix the problem. I loved it when Clang started making C errors sane (and getting GCC to introduce better messages too!), and I'm glad to see Rust take the next step. Since I'm stuck in C++ for the time being, I'm (selfishly) hoping that Clang takes a page from these new Rust errors - these sorts of diagnostics would look great on C++!
marsrover 3 days ago 3 replies      
Not related to this article, but I was looking through the Rust survey mentioned at the bottom of the article and was surprised at the amount of people using it for web development.

I'm not very knowledgeable about Rust but I guess I assumed it would not be the best technology to use in that space. Is Rust really that prevalent in web development?

mrich 3 days ago 1 reply      
Great to see an improvement over the already improved error reporting established by clang and later adopted by gcc.

However I don't understand why backticks are still being used - they tend to look ugly especially when pasting them into emails etc.

Symmetry 3 days ago 0 replies      
That looks really cool and I'll have to give learning Rust another try when this lands. Also, the title was pretty wonderful. I spent a bit thinking about it before looking at the domain and realizing what it had to be.
knodi 3 days ago 0 replies      
damn... that's nice. Can't wait to dive into it.
hyperpape 3 days ago 1 reply      
Do I understand correctly that since this is in the current nightlies, it's slated for 1.12? So, sometime in October?
miguelrochefort 3 days ago 0 replies      
Nothing that Visual Studio and C# doesn't do.
React Server react-server.io
397 points by uptown  2 days ago   137 comments top 30
pault 2 days ago 6 replies      
This is interesting, especially since I just spent the last two weeks setting up a boilerplate for a universal react/redux SPA on spec for a new client. I enjoy the flexibility but the need to develop a deep working knowledge of several independent libraries, transpilers, and build tool configuration files (each of which has several competing options with their own way of doing things) just to get to hello world is cost-prohibitive for most people, I'm sure. At the same time, I'm hesitant to go "all in" on a stack that I haven't heavily researched myself. If the developers are reading, can you go into some details about how you handle routing and data stores? Are you using off the shelf libraries or have you rolled your own?
android521 2 days ago 1 reply      
It doesn't seem like it is for beginners. The docs look like the author assumes everything is self-evident. Need better tutorials.
idbehold 2 days ago 4 replies      
How does it handle fetching data from a path on the same origin? For instance, I only need to fetch('/api/users.json') on a certain page (/users). This means that it can either be hydrated in the initial state when performing a full page load of /users or needs to be fetched (using xhr/fetch) when navigating to that page from another page on the site (which shouldn't require a full page load).

So how exactly does the server perform that same fetch when attempting to hydrate the full page without actually making an HTTP request against itself?

ken47 2 days ago 0 replies      
If this works as advertised, this may prove to be a very useful project, that can replace numerous homegrown, half-complete implementations strewn about the internet.
mattbroekhuis 2 days ago 2 replies      
I've been banging my head in this boilerplate for the last few weeks and it's been very interesting.


How does this compare to that?

underwater 2 days ago 1 reply      
Based on the documentation and the design principles this seems like a really promising framework.

The data hydration, incremental HTML delivery and incremental code loading are really, really important for creating web apps that aren't load time hogs. Great to see that they were unopinionated about data fetching, too. That's one of the things that has made it difficult to drop Relay into existing applications.

Is this used in production? Are there any performance numbers that you can share?

fdim 2 days ago 1 reply      
I see a bunch of failed attempts in console to connect to slack websocket. Is everything looking ok for you?

After blocking 2 iframes with adblocker I could finally inspect what was going on :)

Anyway, I can definitely feel that it is fast and seamless and worth a deeper look! In the meantime, prefetching all the content in docs or source views upon load generates quite a few requests and might explain your scaling issues. Would you mind sharing statistics for the number of users and the hardware behind it?

hughbetcha 2 days ago 2 replies      
Would be useful if this project explained the difference between React and ReactServer. It seems they are as similar as Java and Javascript.

Instead of the render() method in React to output JSX, it appears that ReactServer uses getElements() for a similar purpose. So the entire model and object lifecycle is probably different as well?

gedy 2 days ago 1 reply      
Humorously this looks a bit like Apache Wicket, a Java-based client/server UI framework which has been around for about 10 years: https://wicket.apache.org
kelvin0 2 days ago 3 replies      
I just recently made the switch into the Web Dev world (coming from C++/Python, desktop world). Since I knew of Django, I've started using it as the 'Backend/Server' part of my Web app dev stack. Basically using Django to render minimal React/HTML/JS/CSS to bootstrap my single page Web app. Wondering: what advantage would I get from using react-server instead of Django (aside from using JS across the board)?

Would appreciate feedback!

sergeym 2 days ago 0 replies      
there is a wealth of information in their documentation about making rendering performant on the server and client: https://react-server.io/docs/intro/why-react-server
I found it super interesting.
silasb 2 days ago 0 replies      
The company that I was consulting for tried to get Spring and React to play nice via Nashorn, but ended up scrapping the idea 6 weeks in because of performance issues and not enough developers knowing the stack. Nashorn was missing a lot of essentials to make this easy out of the box. So looking at this is a breath of fresh air.
mxuribe 1 day ago 0 replies      
Mad props to a crew from the real estate industry who just pumped out cool tech for universal utility! Kudos to the folks from Redfin!
mbreedlove 2 days ago 0 replies      
One of the great things about React is its support for server-side rendering

Cool, I never knew that.

crudbug 2 days ago 0 replies      
A simple model/flow of rendering pipeline would be helpful here based upon routing logic.

As I understand, Pages are composed of Components, Components provide HTML sections. HTML sections are loaded incrementally.

encoderer 2 days ago 1 reply      
An alternative here is Hypernova by Airbnb.
zinssmeister 2 days ago 0 replies      
React Server looks fantastic!

For some reason this week I stumbled over many great React.js articles, so I started a collection here http://deepreact.com mostly to save&share things with friends, since a lot of us are getting deeper into React now.

Swennemans 2 days ago 1 reply      
Are there any numbers comparing pre-rendered React versus React communicating with a JSON API? It seems to put a lot more stress on the server, which can negate the (theoretical) speed improvements.
bsimpson 2 days ago 2 replies      
I'd really love to see a demo of what they mean by "seamless transitions."
Amplifix 2 days ago 0 replies      
Looks very interesting, will give this a spin. I've been skimming the docs, but I assume you'll still have to install redux to handle state etc.
tboyd47 2 days ago 1 reply      
Looks fascinating, I will definitely be watching this project in the coming year.

Just curious, if it's been successfully running in production for over a year, why is it only being used to serve three pages?

deegles 2 days ago 0 replies      
Could react-server build in support for AMP? https://www.ampproject.org/
antar 2 days ago 1 reply      
You have a misspelling in your homepage: ne not ne.
oelmekki 2 days ago 0 replies      
It even includes a Dockerfile and a docker-compose.yml file to get quickly started, that's cool :)
krebby 2 days ago 1 reply      
This looks cool. Any way to make it easily work with Relay?
pseudointellect 2 days ago 0 replies      
> "Blazing fast page load and seamless transitions."

Click on Get Started and no seamless transition, but rather a very abrupt page load.

kra34 2 days ago 0 replies      
every time somebody makes a new react / angular derivative, a unicorn loses its horn.
rando832 2 days ago 1 reply      
wtf is ne?
charford 2 days ago 3 replies      
Getting random 504 timeout errors on this site. Is this hosted using react-server?
rajangdavis 2 days ago 1 reply      
Getting some weird issues, can only get to the docs by clicking on the link to the docs and then refreshing the page a couple of times...
What Danes consider healthy childrens television economist.com
345 points by CraneWorm  2 days ago   187 comments top 24
skrebbel 2 days ago 11 replies      
Kaj and Andrea is fantastic!

I'm Dutch but my wife is Danish. She does her utmost best confronting our kids with Danish TV (we live in Holland), so I understand a bit of what the author is on about. I was initially surprised as well.

Truth is, it _is_ entertaining! But most importantly, once you get past the absolutely amateurish way things are shot (hey, the kids don't care, why should you?), there's a lot of depth to a lot of it. You need to let that depth sink in a bit before you can see the value of it.

For example, the article about Kaj and Andrea:

Probably most striking, though, is another thing lacking: education. Quite simply, there is none, academic or moral. Kaj and Andrea, a pair of puppets, are sweet friends, but also goofily flawed: Kaj is terribly self-obsessed, Andrea is warbling and neurotic. When other characters do something wrong, there is little of the obvious consequence-and-lesson resolution of American shows; the results are usually left to speak for themselves.

The lovely thing here is that it is educative, but it's up to the watcher to draw the lesson. Kaj's self-obsession is usually very funny and rather ridiculous. It teaches kids "wow, it's pretty ridiculous to be so self-obsessed". And at the same time it teaches kids that even if you're flawed, that's ok - both Kaj and Andrea appreciate one another and are appreciated and respected by the human co-hosts of their TV shows. This is something that I've not really seen anywhere else.

I'm not convinced that kids learn lessons like these better when it's spelled out for them.

ups101 2 days ago 0 replies      
You missed Bamse ("Teddy"): A self-absorbed egocentric narcissist, taking advantage of his naive, little-minded friend Kylling ("chicken"), episode after episode with zero negative impact. After +20 years he's now joined by Bruno, also a teddy bear, sharing similar personal flaws except with much refined insults bordering on psychopathic abuse, bullying kids instead of chickens, believing firmly that the world exists solely to please him in every which way.

However, mostly you may have missed the subtle, educational point (yes, there is one) underpinning these characters: Critical thinking. Is it right to behave like this, even if there is no negative consequence (as is often the case in real life)? Do you empathize with someone taking advantage of their friends? In Denmark, we have this crazy idea that kids are interested in distinguishing right from wrong, even when the answer isn't spoon-fed by a morally correct television character - and maybe exactly because it isn't.

The kids don't like Ramasjang because it's a training ground for aspiring film makers. They like it because naughty is fun. And, if done right, it just may induce some critical thinking. It'll take more than a few days of watching, but just like the kids, you'll see the point :-)

Best regards from a grown-up, bottle-fed on Bamse, somewhat certain of right from wrong.

johnjuuljensen 2 days ago 0 replies      
As a Citizen of Denmark and a father of two (4 & 1), I absolutely love the danish stuff on Ramasjang and hate most of the American (foreign) stuff. The programs produced in DK are rude and fun and slow and scary, while the foreign stuff is mostly lame attempts at education (Dora the explorer is a particularly nasty example of this) or over the top animations with too much moral crap, too pretty characters and backed by toy franchises and whatnot.

It seems to me that the author actually enjoyed the shows and I mostly agree with his assessment, but he can't have been watching too much because he missed some important shows that are in fact educational.

Most prominent of the educational shows is Mr. Beard, who teaches numbers, counting, simple math, letters and simple spelling. The show is weird and quirky, but also features some great songs. The whole thing is backed by some pretty great apps, which expand on the educational stuff.

There's also a great show about kids helping animals in trouble. They don't pretend that the kids do most of the work, it's a professional that does the heavy lifting, but the kids get involved as much as possible.

I don't mind watching these shows with my kids, but I'll leave as soon as Dora or Thomas the tank engine or Chuggington starts.

If you're interested in watching some completely outrageous Danish children's TV, go watch "Carsten og Gittes vennevilla". It's created by the duo "Wulf Morgenthaler", who have produced some of the weirdest stuff on Danish TV. Particularly the one where Carsten creates a fox is a hoot.

aedron 2 days ago 6 replies      
The most astute observation is that characters on Danish kids TV are often completely, unapologetically flawed. This makes them relatable to kids, and kids enjoy watching them in all their dysfunction.

And I actually disagree with the premise that this contains no lessons or morals. Children are able to see clearly how ridiculous and ill-advised the behavior of these characters is, and so learn to recognize it in themselves and others.

Someone once pointed out that while Mickey Mouse is the mascot of Disney in the U.S. and the main character of the franchise, in Europe it was Donald Duck who became popular. Europeans like the flawed anti-hero, while moralizing America (sorry!) preferred the do-gooder know-it-all Mickey Mouse.

ulrikrasmussen 2 days ago 2 replies      
I'm a Dane, and grew up with Danish children's television, including the shows mentioned in the article. I just recently re-watched one of my favorite shows "Nanna", which you can now stream from the archives of DR. Even as an adult, I still enjoy watching it for its humor and creativity, and for the fact that it doesn't speak down to its audience. I don't think that can be said about most other children's shows.


Nanna: "What're you doing?"

Mother: "We're having sex, what're you doing?"

iamthepieman 2 days ago 0 replies      
I just turned off the television after my kids watched more television in one morning than they have in an entire month. The reason? Rio Olympics track and field. Finally had to turn it off because I wasn't getting any work done.

Other than that, in the summer time anyways, I'll usually toss them a book of matches and tell them to start a fire or go build a dam in the brook.

We usually have lofty educational goals at the start of summer but quite honestly I can't be arsed when the weather is warm and there's so much great romping to be had. When we watch TV it's usually YouTube for something we're curious about as a family. The "Fun" T.V. most always leaves my kids grumpy afterwards so we avoid it, not for moral or educational or child development reasons, but because I don't like grumpy kids.

After a couple kids and being the oldest of a large family I've realized from my individual scientific survey of one that kids will develop and learn in spite of their parents. We get too much credit and too much blame.

You gotta water if the sun's hot and it won't be raining for a while and you should try to get a load of fertilizer at least once a year. I guess you should weed after the first planting but you certainly can't make them grow.

If this comment is too rambling for you maybe you should go watch something with a moral in it[0]


tronje 2 days ago 1 reply      
> But until then, they seem utterly unharmed by a childhood of hearing about the queens bottom and watching grandma light some bodily gas on fire.

Is that really as surprising as is implied? To me, none of what the article describes seems like such a bad thing. Sure it's a little weird, but it seems right up there with the humor children enjoy. And not everything has to have a message or a moral at the end of it.

wodenokoto 2 days ago 4 replies      

> And God he says, lives in heaven with Santa Claus and their dog Marianne, implying that the Supreme Being is not only imaginary, but also gay.
That is like saying Ernie and Bert are gay. I'd appreciate it if the author tried a little harder at interpreting children's television from a child's perspective.

ganzuul 1 day ago 0 replies      
"And God he says, lives in heaven with Santa Claus and their dog Marianne, implying that the Supreme Being is not only imaginary, but also gay."

Or, you know, a woman.

TeMPOraL 2 days ago 5 replies      
At the risk of making a tu quoque argument - if you want to see unhealthy children television, look no further than Cartoon Network.
koolba 2 days ago 0 replies      
> Probably most striking, though, is another thing lacking: education. Quite simply, there is none, academic or moral.

I already love this!

If there's one thing that children need to experience at an early age, it's that the majority of what they'll deal with in life will have no redeeming qualities whatsoever. Life should be enjoyed for life ... it doesn't always have to be a backhanded way to learn to count.

silvestrov 2 days ago 1 reply      
This link to "Ramasjang" might (or might not) work from outside Denmark:


Most bakers sell "Kaj Cakes", we don't mind eating the cute puppets: https://scontent.cdninstagram.com/t51.2885-15/e35/c72.0.495....

jonah 1 day ago 0 replies      
I'm surprised no one has mentioned Mister Rogers' Neighborhood yet. One of the best, if not the very best, children's programs in the US. It was slow-paced and deliberate like the Danish shows are described, but instead of being crass and letting things play out, he dealt with serious topics in a simple, honest, and caring way. He also encouraged curiosity and investigation and broke gender and ethnic mores way before it was the in thing to do.

Well worth checking out.

gurkendoktor 1 day ago 0 replies      
I see a lot of fellow Europeans in this thread who prefer local TV series to preachy US cartoons (me too). This is slightly off-topic, do you have an opinion on the old Dutch show Alfred J. Kwak[1]? It was extremely political, but somehow I didn't mind when I was a kid. I wonder if it was simply surreal enough to make up for all the reality that it depicts? Has anyone re-watched this series again with kids, 25 years later?

[1] https://en.wikipedia.org/wiki/Alfred_J._Kwak

zeristor 1 day ago 0 replies      
Hi kids, don't forget we live in the future; seems odd to discuss something and not have a link to it.

Onkel Reje:


It may be in Danish, but it is for 4 year olds.

What would Rasmus Klump do?

goodJobWalrus 2 days ago 0 replies      
> Ramasjang is entertainment, not a replacement for parents or school. Parents are expected to know when to switch it off (but just in case, the characters go to bed at 8.00pm, and are shown sleeping until the morning) rather than pretend that it is self-improvement.

yes, I grew up watching mainly old American cartoons (like Tom and Jerry), and I don't remember them being particularly "educational", and it was well understood that we were watching them for entertainment, and the last one was at 7:15 pm on our television.

mankash666 1 day ago 0 replies      
Denmark - Crass, potty mouthed but entertaining kid shows. USA - Clean, preachy, educational kid shows.

Denmark - No gun crime, low homicide rate, high Human Development Index (HDI). USA - High gun crime, high homicide rate, lower HDI.

So - What exactly do kid shows have to do with long term success of the kid, or the general society they contribute to?

chvid 2 days ago 0 replies      
As a dane I was deeply traumatised by the show "Poul og Nulle i hullet".
zeristor 1 day ago 0 replies      
You can download the DR app on an Apple TV and watch Danish television.

Strangely a lot of reshot UK programmes.

My particular favourite was putting a camera on the front of a train: Helsingør to Helsingborg the long way round.

I'll get my coat.

galfarragem 1 day ago 0 replies      
They are using 'vaccination' logic: a small dose of reality makes kids healthier in the long term.
repler 1 day ago 0 replies      
95% of the population needs to be brainwashed means 95% of the children's shows are going to be garbage.

It's just math. Make sure your kids are in that 5%.

spullara 1 day ago 0 replies      
None of my kids watch traditional television at all, everything they are interested in is on YouTube, both for entertainment and education.
arethuza 2 days ago 6 replies      
"Supreme Being is not only imaginary"

I was immensely proud when my son at 4 years old decided to stand up in his pre-school class and point this out to everyone.

sixhobbits 1 day ago 1 reply      
Can we please edit the title to match the (correct) original "children's" instead of the modified (incorrect) "childrens'"
New Leaf Is More Efficient Than Natural Photosynthesis scientificamerican.com
327 points by jseip  2 days ago   129 comments top 18
ChuckMcM 2 days ago 4 replies      
I am surprised no one has mentioned replacing the adsorption CO2 scrubbers on spacecraft[1] with something that is powered by electricity. The article claims 130 grams of CO2 removed from the air per kilowatt-hour. Astronauts might expel 40 - 50 grams of CO2 per hour into the air, so ~500 Wh per astronaut per hour of power keeps the air breathable forever? That is a good deal. For reference the ISS has a crew of 6 and has 84 - 120kW of power capacity [2].

[1] Study problem on CO2 removal from NASA -- https://www.nasa.gov/pdf/519338main_AP_ED_Chem_CO2Removal_St...

[2] https://www.nasa.gov/mission_pages/station/structure/element...
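A quick sanity check of that arithmetic, using only the figures quoted in the comment above (not official mission numbers):

```python
# All inputs are the figures from the comment, not mission specs.
REMOVAL_G_PER_KWH = 130          # claimed CO2 removal per kWh of electricity
CO2_G_PER_ASTRONAUT_HOUR = 50    # upper end of the 40-50 g/hour range
CREW = 6                         # ISS crew size

# Power needed to keep up with one astronaut's exhaled CO2:
watts_per_astronaut = CO2_G_PER_ASTRONAUT_HOUR / REMOVAL_G_PER_KWH * 1000

# And for the whole ISS crew:
crew_watts = watts_per_astronaut * CREW

print(round(watts_per_astronaut))  # ~385 W, in the ballpark of the ~500 Wh/hour figure above
print(round(crew_watts))           # ~2300 W, a small slice of the ISS's 84-120 kW
```

So even at the high end of the exhalation range, scrubbing for the whole crew would take roughly 2-3% of the station's power budget.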

matt4077 2 days ago 8 replies      
This needs the additional information that photosynthesis is incredibly inefficient. It's <5% IIRC, so we already have solar panels almost an order of magnitude better than what nature did.

(RuBisCO as the protein at the center of the process is also quite strange: it's huge and slow. As in 'this ain't funny any more, start working' slow with about a reaction per second.)

Animats 2 days ago 3 replies      
The article seems to indicate this is electrically powered, not powered by light. Where's the photosynthesis? First they make electricity, then they crack water into oxygen and hydrogen, then they combine the hydrogen and C02 to make hydrocarbons. If you've got electricity, why make fuel? That's wasteful.
cmccart 2 days ago 2 replies      
This got me thinking: how much CO2 is emitted by different energy sources in generating 1 kilowatt-hour?

I came across this link: http://blueskymodel.org/kilowatt-hour

Seems like solar is more or less break-even, whereas nuclear/wind/geothermal/hydro are pure wins, which makes sense I suppose. Could you manufacture enough of these without generating more CO2 in the process than they would remove over their lifecycle?
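To make the break-even idea concrete, here's a rough net-balance sketch. The 130 g/kWh removal figure is the one from the article discussed upthread; the lifecycle-emission values are illustrative round numbers assumed for the exercise, not figures from the linked page, so the per-source verdicts may differ from its data:

```python
REMOVAL = 130  # g of CO2 the device removes per kWh it consumes (from the article)

# Illustrative lifecycle emissions in g CO2 per kWh generated (assumed values):
lifecycle_g_per_kwh = {
    "coal": 1000,
    "natural gas": 450,
    "solar PV": 45,
    "hydro": 24,
    "nuclear": 12,
    "wind": 11,
}

for source, emitted in sorted(lifecycle_g_per_kwh.items(), key=lambda kv: kv[1]):
    net = emitted - REMOVAL  # positive: still a net emitter; negative: net removal
    verdict = "net removal" if net < 0 else "net emission"
    print(f"{source:12s} {net:+5d} g CO2/kWh ({verdict})")
```

Under these assumed numbers even solar comes out slightly ahead rather than break-even; with the linked model's higher solar figure the balance would be tighter.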

eggy 2 days ago 0 replies      
Very cool. I thought it was going to be more 'biological' i.e. less about finding a catalyst, and more about microbes and chlorophyll somehow.

I think technology creates things, sometimes problems, and then newer technology sometimes addresses these problems. I like the concept of all of these CO2 extraction-for-energy technologies that seem to be popping up lately.

Now, let's hope they can scrub some CO2 from the atmosphere, but not too much! After all, the climate models have been proven to be underestimating the rise of temperature, so the models are deemed not reliable for prediction or forecasting.

Take too much CO2 out, and we're in for a Global Winter. Sort of like the old Twilight Zone episode on TV (Ok, I'm old) where a guy is feverish, and in the background the Earth is getting too hot because of the sun getting closer? He then wakes up and it is snowing outside, and just when you think it is fine and dandy, it is the beginning of an Apocalyptic Winter!

Osiris30 2 days ago 6 replies      
Previous discussions on 'artificial leaves':


fernly 2 days ago 1 reply      
Found the paper:


Found a nice quote in the LA Times coverage:


Stay tuned because Pam and I are on a path to do nitrogen fixation in the same sort of way weve just done water splitting, [Nocera said]

taneq 2 days ago 0 replies      
It seems disingenuous to say this is "more efficient than natural photosynthesis" when it's a bioreactor using photosynthesizing microbes (and thus presumably use the same chlorophyll-based chemistry as natural photosynthesis).

On the other hand, I wonder about the potential for evolution to occur inside reactors like these, essentially self-optimising them over time.

sleepychu 2 days ago 1 reply      
I read a science fiction story where they're on a colony on Venus where the big problem is "how do we make artificial photosynthesis? Plants are rubbish," but it was a free self-published novel and I now cannot find it. Anyone else read it?
_pmf_ 2 days ago 0 replies      
I cannot fathom why we have large scale, highly efficient meat production facilities (remember: animal rights are only relevant to hippie tree huggers), yet there are virtually no commercial efforts to harness plants. The technology exists [0]; I don't understand what the practical or political problems are. I think small scale plant reactors have the potential to be manufactured very cheaply; maintenance is probably the issue, but I'd like to hear from an expert in the field.

[0] https://en.wikipedia.org/wiki/Algae_bioreactor

lechevalierd3on 2 days ago 0 replies      
I was expecting some crazy analogy between a Nissan Leaf and natural photosynthesis.
mtgx 2 days ago 0 replies      
The question is if this is more efficient than just using solar panels + batteries to power something up. My guess is it's not, probably not even close. But perhaps there can be some niche uses for it where this system complexity and lower efficiency still makes more sense than using batteries.
frgewut 2 days ago 0 replies      
Does anyone know how the efficiency of an entire tree compares (i.e. a single leaf reflects some light, which then is absorbed by another leaf)?

I would guess that number should be higher than simple "photosynthesis efficiency".

fsiefken 2 days ago 0 replies      
So if you replace standard solar panels with this and you burn the alcohol in an engine each hour, does it produce more or less electricity each hour, if we know it's 10x more efficient than natural photosynthesis?
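A back-of-the-envelope answer is "less". Every efficiency below is an illustrative assumption (plants at ~1% sunlight-to-fuel with the leaf at 10x that, plus typical small-engine, PV, and battery figures); none of these numbers come from the article:

```python
# Assumed efficiencies -- illustrative only, not from the article:
LEAF_SUN_TO_FUEL = 0.10      # 10x an assumed ~1% for natural photosynthesis
ENGINE_FUEL_TO_ELEC = 0.25   # small combustion engine, fuel -> electricity
PV = 0.18                    # commodity solar panel
BATTERY_ROUND_TRIP = 0.90    # charge/discharge losses

leaf_path = LEAF_SUN_TO_FUEL * ENGINE_FUEL_TO_ELEC  # sunlight -> alcohol -> electricity
pv_path = PV * BATTERY_ROUND_TRIP                   # sunlight -> electricity -> battery

print(f"leaf + engine: {leaf_path:.1%}")  # 2.5%
print(f"PV + battery:  {pv_path:.1%}")    # 16.2%
```

Under these assumptions the fuel path's advantage would be storage density and portability, not hourly electricity output.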
naasking 2 days ago 0 replies      
Very neat technology for a possibly carbon neutral fossil fuel economy.
lwis 2 days ago 0 replies      
Does this present any potential improvements to solar panel efficiency?
mtgx 2 days ago 1 reply      
sanoy 2 days ago 0 replies      
This is insanely cool.
Unsafe levels of toxic chemicals found in drinking water for 6M Americans harvard.edu
314 points by upen  3 days ago   147 comments top 24
ianlevesque 3 days ago 3 replies      
Since people seem curious how to analyze this for their locale, the best I could come up with (using the raw data at https://www.epa.gov/sites/production/files/2015-09/ucmr-3-oc... ) was:

1. Search zipcode in UCMR3_ZipCodes.txt, obtain PWSID.

2. Filter by PWSID in UCMR3_All.txt.

3. Filter that result by rows containing "=" (which means at or above minimum reporting level)

4. Don't panic.

5. Compare AnalyticalResultsValue column to the Reference Concentration in ucmr3-data-summary-april-2016.pdf. If it's under the Reference Concentration then you're safe, within the limits of how incomplete their reference concentrations are. The document specifically states:

> The intent of the following table is to identify draft UCMR reference concentrations, where possible, to provide context around the detection of a particular UCMR contaminant above the MRL. The draft reference concentration does not represent an action level (EPA requires no particular action1,2 based simply on the fact that UCMR monitoring results exceed draft reference concentrations), nor should the draft reference concentration be interpreted as any indication of an Agency intent to establish a future drinking water regulation for the contaminant at this or any other level.

The minimal reporting level seems to be based on how small an amount is detectable, not harmful. The reference concentration appears to be a best guess at the moment for what a maximum safe amount is.

My zipcode for example came up with several of these above the MRL but below the reference concentration. Enjoy.

Edit: added link.
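
If you'd rather script steps 1-3 than grep by hand, here's a rough Python sketch. The tab-delimited layout and the column names (PWSID, ZIPCODE, Sign, AnalyticalResultValue) are my assumptions from the description above; verify them against the real files. The inline strings are made-up stand-ins for UCMR3_ZipCodes.txt and UCMR3_All.txt.

```python
import csv
import io

def pwsids_for_zip(zipcode_rows, zipcode):
    """Step 1: collect every PWSID (public water system ID) serving a ZIP."""
    return {row["PWSID"] for row in zipcode_rows if row["ZIPCODE"] == zipcode}

def detections_for(result_rows, pwsids):
    """Steps 2-3: keep results for those systems whose sign is '=',
    i.e. measured at or above the minimum reporting level."""
    return [row for row in result_rows
            if row["PWSID"] in pwsids and row["Sign"] == "="]

# Made-up stand-ins for UCMR3_ZipCodes.txt and UCMR3_All.txt.
zips_tsv = "PWSID\tZIPCODE\nNY0000001\t10001\nCA0000002\t94105\n"
results_tsv = (
    "PWSID\tContaminant\tSign\tAnalyticalResultValue\n"
    "NY0000001\tchlorate\t=\t190\n"
    "NY0000001\tPFOA\t<\t0.02\n"   # below reporting level, filtered out
    "CA0000002\tstrontium\t=\t960\n"
)

zip_rows = list(csv.DictReader(io.StringIO(zips_tsv), delimiter="\t"))
result_rows = list(csv.DictReader(io.StringIO(results_tsv), delimiter="\t"))

hits = detections_for(result_rows, pwsids_for_zip(zip_rows, "10001"))
for row in hits:
    # Step 5 would compare this value against the reference concentration.
    print(row["Contaminant"], row["AnalyticalResultValue"])
```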

mortehu 3 days ago 7 replies      
A small PSA: "Brita" brand water filters seem to be very popular in the US, but they're also some of the worst performers in tests. For example, in this test ...


... "Brita" filters were found to remove around 55% of PFOA, whereas "ZeroWater" filters remove more than 95%. I.e., "Brita" filters leave 9x more stuff in the water (45% remaining vs. 5%). The performance difference is similar for other unwanted stuff, like lead.

refurb 3 days ago 2 replies      
> had at least one water sample that measured at or above the EPA safety limit of 70 parts per trillion (ng/L)

Just as a comparison, the EPA safety limits for other compounds in drinking water is[1]:

Arsenic 10,000 parts per trillion

Cyanide 200,000 parts per trillion

Carbon tetrachloride 5,000 parts per trillion

All of those compounds are known to be very toxic. I'm curious why the safety limit is so low for these PFAs if there isn't much data on them.
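
For scale, a quick back-of-the-envelope division of each quoted limit by the 70 ppt PFA advisory level (numbers as quoted above):

```python
# All limits in parts per trillion, as quoted in the comment above.
pfa_advisory = 70
limits_ppt = {
    "arsenic": 10_000,
    "cyanide": 200_000,
    "carbon tetrachloride": 5_000,
}

for name, limit in limits_ppt.items():
    # How many times higher each limit is than the PFA advisory level.
    print(f"{name}: {limit / pfa_advisory:.0f}x the PFA advisory level")
```

So the 70 ppt advisory is roughly 70x to nearly 3000x stricter than the limits for these classically toxic compounds, which is exactly why the question above is worth asking.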


mox1 3 days ago 2 replies      
If you want a report like this for your own personal drinking supply check out http://www.karlabs.com/watertestkit/

I did it recently. I'm using the results to put some water filters into the house that are actually NSF [1] certified to remove the contaminants the test found.

[1] http://info.nsf.org/Certified/DWTU/

stokedmartin 3 days ago 4 replies      
To see specific zip codes, download zip[0]. UCMR3_ZipCodes.txt has all the zip codes and you can cross reference UCMR3_All.txt using PWSID to get facility ID; then cross reference UCMR3_DRT.txt using the facility ID to get the disinfection type. Details about the disinfection type can be found in a pdf within the zip file.
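
The cross-referencing described above is just two joins; here's a minimal Python sketch. The column names and sample values (including the disinfection code) are hypothetical; check them against the files in the zip.

```python
import csv
import io

def rows(tsv):
    """Parse one of the tab-delimited UCMR3 files."""
    return list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))

# Hypothetical stand-ins for the three files in the EPA zip.
zips = rows("PWSID\tZIPCODE\nTX0000001\t78701\n")
results = rows("PWSID\tFacilityID\tContaminant\nTX0000001\tF001\tchlorate\n")
drt = rows("PWSID\tFacilityID\tDisinfectantType\nTX0000001\tF001\tCLGA\n")

def disinfection_types(zipcode):
    """ZIP -> PWSID -> facility ID -> disinfection type, per the steps above."""
    pwsids = {z["PWSID"] for z in zips if z["ZIPCODE"] == zipcode}
    facilities = {(r["PWSID"], r["FacilityID"])
                  for r in results if r["PWSID"] in pwsids}
    return [d["DisinfectantType"] for d in drt
            if (d["PWSID"], d["FacilityID"]) in facilities]

print(disinfection_types("78701"))  # -> ['CLGA']
```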


vskarine 3 days ago 2 replies      
Original article has a generic map without much detail, but does anyone have a list of specific cities or zip codes that are affected? Original article with map: https://www.hsph.harvard.edu/news/press-releases/toxic-chemi...
Loic 3 days ago 0 replies      
This illustrates the future unknown issues with respect to chemicals and health/environment. At the moment, we have two schools to handle this:

1. not proved bad, we allow.

2. not proved good, we disallow.

The US is more on the first side and the EU on the second, of course, like for any complex issues, the right decision is context dependent and in the middle.

I am working in the area of chemical properties, this is a hard problem and I am sure that when this class of compounds arrived on the market, smart people tried their best to figure out the health implications. We have more tools and experience 50 years later, but we are not smarter. We need to move with caution while handling an increasing market pressure... this is challenging!

ars 3 days ago 1 reply      
A simple carbon filter, of any type, will take care of these.

Lead is a much bigger problem since it's much harder to filter out.

wfunction 3 days ago 1 reply      
I saw an article on this back in January. I remember reading that CA seems to have the highest number of people affected by PFOA.

Scary and worth reading: http://www.nytimes.com/2016/01/10/magazine/the-lawyer-who-be...

jonah 3 days ago 2 replies      
Ermm, my city's well closest to me has the following:

              MRL    Tested
  chromium    0.2    0.67
  vanadium    0.2    1.1
  strontium   0.3    960
  chromium-6  0.03   0.44
  chlorate    20     190
  molybdenum  1      8.7
[Edit] to change "Allowable" to "MRL":

"The lowest amount of an analyte in a sample that can be quantitatively determined with stated, acceptable precision and accuracy under stated analytical conditions (i.e. the lower limit of quantitation). Therefore, analyses are calibrated to the MRL, or lower. To take into account day-to-day fluctuations in instrument sensitivity, analyst performance, and other factors, the MRL is established at three times the MDL (or greater)."

[Edit 2] In my town's overall distribution system there are multiple samples above the allowable Reference Concentration. The highest were:

              Tested  Allowable
  strontium   1900    1500
  chlorate    410     210

lvs 3 days ago 0 replies      
I continue to be shocked that consumer beauty products containing polyfluorinated alkyl compounds are being sold without any significant attention or regulatory oversight [1].

[1] http://www.livingproof.com/buy/our-science

hiou 3 days ago 2 replies      
Good grief. There is very little data this stuff is harmful. Certainly not going to wipe out a town in a weekend. Can't wait for millennials to age a few more years and hopefully get a little context. Being wedged between baby boomers and millennials gets more uncomfortable by the day.
the_mitsuhiko 3 days ago 1 reply      
I'm not sure how it is in the US, but my biggest fear with water quality is never the water from the source but what happens on the way to your tap. In Vienna, Austria, for instance, they are super concerned about pH levels because some old houses still have lead piping, and if the pH drifts too far, the pipes can corrode and lead makes it into the water.

As another example the difference in water quality in the same district in Moscow and SPB from different taps is crazy.

Now that smart meters are a thing for electricity I really wonder if it would not start to make sense to work on basic water quality measurements in houses.

smaili 3 days ago 0 replies      
Anyone have personal experience with, or research on, reverse osmosis filters? I keep hearing it makes the water "better" but never any hard facts on how it performs statistically.
finid 3 days ago 1 reply      
> Drinking water samples near industrial sites, military fire training areas, wastewater treatment plants have highest levels of fluorinated compounds

Areas around fracking sites must be even worse.

emilong 3 days ago 0 replies      
This appears to me to be the raw sample data: https://www.epa.gov/sites/production/files/2015-09/ucmr-3-oc...

The UCMR3_All.txt file inside looks to have a fairly nice, denormalized set of samples with location names, dates, and detections.

bronz 3 days ago 2 replies      
I rent a room in a very old house and I wouldn't be surprised if there were higher than acceptable lead levels in the tap water here, so it got me thinking about water filtering. A reverse osmosis filter and a charcoal filter working in series seems to be the most thorough method. Does anyone have experience with filtering their water?
tdaltonc 3 days ago 0 replies      
Any tips on how to tell what I should actually be worried about from FUD or WOO wrt/ drinking water?
Aloha 3 days ago 2 replies      
Show me more on the science on them, and I might be concerned about this being a problem.
drsim 3 days ago 0 replies      

> ...from the Faroe Islands, an island country off the coast of Denmark

Not really off the coast of Denmark... https://www.google.co.uk/maps/place/Faroe+Islands

artur_makly 3 days ago 0 replies      
What are the stats for NYC?
azinman2 2 days ago 0 replies      
Being lazy here: anyone know the data for SF?
grandalf 3 days ago 5 replies      
Anything that is public health related is bound to be full of tradeoffs and "good enoughs". The water in Flint, MI was considered good enough by Flint's officials, as is the water in many other municipalities.

Just as a private monopoly would start to reduce quality once it cornered the market, public, regulated monopolies behave similarly. Why care about quality if there is no competition and it's just another year earning a pension for the bureaucrats involved?

Consider how the bidding process for building roads results in poorly built roads and no incentive to the winner of the contract to build something that will last. Nearly every piece of American infrastructure is riddled with corruption or incompetence (or both). Even basic construction is full of various rules intended to bolster union workers, etc. and adding enormous cost.

Bi-Directional Replication for PostgreSQL v1.0 2ndquadrant.com
292 points by eloycoto  2 days ago   89 comments top 16
sinatra 2 days ago 1 reply      
As the linked page doesn't describe what it is: Bi-Directional Replication for PostgreSQL (Postgres-BDR, or BDR) is an asynchronous multi-master replication system for PostgreSQL, specifically designed to allow geographically distributed clusters. Supporting up to 48 nodes, BDR is a low overhead, low maintenance technology for distributed databases. [0]

[0]: https://2ndquadrant.com/en/resources/bdr/

baq 2 days ago 1 reply      
It's not immediately clear to me when to use BDR and when I would choose XL. Can somebody do a quick comparison? I like this quote in the announcement email:



BDR is well suited for databases where:

- Data is distributed globally

- Majority of data is written to from only one node at a time (For example, the US node mostly writes changes to US customers, each branch office writes mostly to branch-office-specific data, and so on.)

- There is a need for SELECTs over complete data set (lookups and consolidation)

- There are OLTP workloads with many smaller transactions

- Transactions mostly touching non overlapping sets of data

- There is partition and latency tolerance

However, this is not a comprehensive list and use cases for BDR can vary based on database type and functionality.

In addition, BDR aids business continuity by providing increased availability during network faults. Applications can be closer to the data and more responsive for users, allowing for a much more satisfying end-user experience.


But I can't seem to find something similar for XL and especially a diff between the two.

Klathmon 2 days ago 5 replies      
So this might not be the right place for this, but I'm curious.

How do people deal with "eventual consistency"?

In my head once a transaction is done, it's everywhere and everyone has access to it.

What happens if 2 nodes try to modify the same data at the same time? Or what happens if you insert on one node, then query on another before it propagates? And if the answer to those questions are what I think they are (that bad stuff happens), how do you setup your application to avoid doing it?

jimktrains2 2 days ago 3 replies      
> I'm pleased to say that we've just released Postgres-BDR 1.0, based on PostgreSQL 9.4.9.

While I'm still excited to play with it, I can't wait until pglogical comes into mainline.

DelaneyM 2 days ago 1 reply      
I wonder if it's plausible for AWS to release a customized variant of this targeted to their data centers and designed for ease of multi-zone deployment? Something like Azure (which optimizes and facilitates MySQL).

I realize that redshift already loosely meets that definition, but it doesn't quite work as a globally distributed regionally clustered web service back end. This is ideal.

koolba 2 days ago 3 replies      
Anybody in the HN crowd have experience using this? How does it perform on a WAN?
looneysquash 1 day ago 2 replies      
What I want to know is the timeframe of supporting PostgreSQL 9.5 and/or 9.6. (Though 9.6 is still in beta.)

Also, my understanding is that they're feeding patches back to postgres, and want it to eventually run on stock postgres. But it's not clear to me how progress on that is going.

I was also surprised that UDR was removed, I didn't even realize it was deprecated.

I'm not actually using the product at all right now, but I've been watching on the website, because I want to use it eventually.

I'm kind of hoping it works with stock postgres before I jump in. But if not that, I think I at least want to wait for 9.6 support.

nubela 2 days ago 2 replies      
Any reasons why I should not use this? Any other (Postgres-esqe) alternatives for such a solution?
rattray 1 day ago 0 replies      
They claim performance comparable to Hot Standby:


and, in some cases, ~1.5x over Slony.

zzzeek 2 days ago 1 reply      
um, LICENSE? am I missing something


onderkalaci 1 day ago 1 reply      
Is there documentation on "how BDR works", or something similar?
brightball 2 days ago 1 reply      
Can't wait to try this out.

I wonder how long it will take Heroku, RDS or another PG provider to make this available?

therealmarv 2 days ago 3 replies      
When reading this I wonder how to do the following: OS or Postgres updates with one normal Postgres instance (or a typical master+slave setup), without downtime and without using Postgres-BDR. Does somebody know?
rattray 1 day ago 1 reply      
It seems like this is very much a (FOSS) product of 2nd Quadrant. Does anyone here have experience with them? What is their reputation like?
therealmarv 2 days ago 1 reply      
Would not this be an ideal Docker database?! But I could not find a good maintained Docker image in the hub: https://hub.docker.com/search/?isAutomated=0&isOfficial=0&pa...
petepete 1 day ago 0 replies      
Title makes it sound like people are still writing extensions for PostgreSQL 1.0
Why we won't be selling Genuino or Arduino any more pimoroni.com
378 points by whiskers  3 days ago   241 comments top 24
nathancahill 3 days ago 42 replies      
Serious question: what are people building with these boards? The recurring projects I've seen are controlling lights, doors/locks and monitoring water in plants. I don't find these particularly compelling, am I missing something? How is this the next big thing?
chappi42 2 days ago 3 replies      
If this [untold Arduino history](http://arduinohistory.github.io/) link is true, it seems like Massimo Banzi is not only difficult to do business with but that he had a very particular way to give credit.

Better buy clones...

snarfy 2 days ago 2 replies      
The whole thing is a sham really when you consider the heritage from Wiring/Processing.


Arduino is a commercial version of somebody else's academic work.

rocky1138 3 days ago 0 replies      
This is what happens when a company crosses the line from being a small, hacky, indie, startup shop to a corporation.

It's a line that can only be seen after you cross it, making it especially onerous.

SEJeff 3 days ago 5 replies      
I really don't see the point of the arduinos when you can buy an esp8266 that is arduino compatible (you can use the arduino IDE and sketches).

You can get the bare esp8266 chips for $1-$2 from aliexpress, or buy a tricked out one from adafruit (the huzzah) for $9. They have a full wifi stack, support micropython, etc.

I think we're in a post-arduino age for microcontrollers. There are simply better options out there, of which the esp8266 is just one of the current better choices.

rootbear 3 days ago 1 reply      
The local Microcenter sells Arduino Uno R3 clones under their Inland store brand for $10, currently and frequently on sale for $6. Ideally, I'd like to support Arduino LLC, but given how absurd this whole feud has been, I don't feel especially guilty when I buy a clone. I have bought a few Arduino brand 101 boards from Microcenter as well so I'm not a total sell out!
jklinger410 3 days ago 3 replies      
TLDR: Reseller is pissed that "Genuino" has to be sold as "Arduino" in the US and that they have separate SKUs. Also Genuino seems to have poor B2B/reseller support.

Is this post really that petty or am I missing something?

herbst 3 days ago 2 replies      
Cool. Now I can buy the cheap off-brand clones from AliExpress without thinking I should have supported the actual creators.
antoniuschan99 3 days ago 2 replies      
So basically the Arduino the Software Company (IDE), and Arduino the hardware supplier broke ties?

I use the ESP8266 which I do believe is an Arduino (Hardware) Killer. However, the Arduino IDE has amazing support for the ESP8266 among other hardware components.

This isn't a bad thing, at least for consumers, since I feel like Arduino (the hardware) is a few cycles behind (eg. Particle, CHIP, bbc micro, pi zero, nodemcu/esp8266... etc)

Marazan 3 days ago 1 reply      
My conclusion in all of this is to buy Teensy instead. Lovely thing.
triplesec 2 days ago 0 replies      
It appears that greed and control issues were present right from the beginning of the Arduino project, given the experiences of Wiring's developer, Hernando Barragan, upon whose work it seems most of Arduino was based, without recognition or recompense. I therefore have very little sympathy for Arduino LLC.

http://arduinohistory.github.io/ and HN: https://news.ycombinator.com/item?id=11212021

crumpled 2 days ago 0 replies      
This article acknowledges over and over that Arduino/Genuino is following common business practices. They are complaining that Arduino has become too big, and it's not nice for them, because Pimoroni is also big. Kind of ironic, really.

Yes, they want to cultivate a new brand while defending the old brand. It's awkward and I'm sure they aren't happy about it either.

falcolas 3 days ago 2 replies      
Dear Pimoroni,

Please add a link to your store from your blog.

Sincerely, everyone who might actually want to buy from you.

emptybits 3 days ago 0 replies      
So... asking on behalf of a friend ;-)... my local dealer stocks both Arduino and Genuino brand boards. Is this unusual or about to change?

I feel like a curtain of innocence has been torn away for me. Ignorance of the politics in the Arduino scene was bliss.

qz_ 2 days ago 0 replies      
I have a Teensy board, which is much cheaper and works pretty much exactly the same as Arduino and Genuino. The Teensy LC is great for small projects and is about 15 bucks IIRC. Never understood why you would stick to a more expensive brand just because of the name.
markhahn 2 days ago 1 reply      
Good response, and indeed the "maker vibe" seems to have been poisoned by commercial concerns...

Real makers tend to buy unbranded stuff from Chinese free-shipping places anyway. Branded Arduino is mostly a historic artifact...

rplnt 2 days ago 0 replies      
> Everyone sells worldwide now.

Yeah.. no. Not even close.

nfriedly 3 days ago 0 replies      
As far as cheap Arduino clones go, I've been pretty happy with the EDArduino ones from Electrodragon: http://www.electrodragon.com/product-category/arduino-2/ardu...
erichocean 2 days ago 3 replies      
Anyone know of any small, low-cost, battery powered programmable boards for Bluetooth LE? I've got the Pi 3 and CHIP is ordered, but I'd love something smaller if possible. Thanks!
dotBen 2 days ago 0 replies      
It's really not that unusual for resellers to be given geographical territories within which they can/can't sell. Try looking up your favorite electronics on Amazon.com and see if they will ship to Europe - most times the page will say 'for US delivery only'.

It is a shame that the Arduino guys split up but I would argue the situation would be even more of a mess if both new companies were able to sell into the same territories with the same name + similar product.

st3v3r 2 days ago 1 reply      
I'm sorry, but I don't buy it. Other stores, like Sparkfun, are quite able to do this.
xchip 3 days ago 3 replies      
Why do you want an Arduino when the ESP8266 is half the price and has WIFI?
dingo_bat 3 days ago 5 replies      
Bah! Arduino is way overpriced for the specs anyway. Raspberry pi and others are where it's at now. But I do think there is a massive power usage difference.
_pmf_ 3 days ago 1 reply      
> It's a real shame that Arduino LLC seem to have lost any of the Maker-vibe it had

What "vibe" would that be? Supply shortage with undependable lead times? Having to piece together documentation from malware-infected wikis that may or may not still exist 10 days from now?

How the Arab World Came Apart nytimes.com
293 points by s3b  3 days ago   326 comments top 23
6stringmerc 3 days ago 2 replies      
How did it come apart? Almost exactly like Dick Cheney thought it would 20 years before cheering for the invasion. From an episode of Meet The Press in 2014[1]:


All right, let me ask you a couple of quick questions. I want to play for you an interesting clip of you 20 years ago about Iraq and Saddam Hussein. Take a look.


That's a very volatile part of the world. And if you take down the central government in Iraq you can easily end up seeing pieces of Iraq fly off. Part of it the Syrians would like to have to the west, part of Eastern Iraq the Iranians would like to claim, fought over for eight years. In the north you've got the Kurds. And the Kurds spin loose and join with the Kurds in Turkey then you threaten the territorial integrity of Turkey. It's a quagmire if you go that far.

[1] http://www.nbcnews.com/meet-the-press/meet-press-transcript-...

sevenless 3 days ago 4 replies      
The NYTimes itself is also a considerable part of the reason, considering Judith Miller, Friedman, Krauthammer and many others propagandized hard for invading Iraq.
return0 3 days ago 4 replies      
Long piece, but it fails in that it does not give a coherent message. Pieces of stories here and there are not history. First of all, let's stop calling it the Arab Spring. If anything it's the Arab Autumn of civil wars, where all Arabs are united only in their resentment of the West. Secondly, let's talk about people, their souls, aspirations, and culture instead of political history. Given that these states were built as Western protectorates, their history should not be a good guide for their future. Clearly the approach to colonialism/interventionism that the US takes is a failure, compared to the colonialism of the British for example (e.g. Jordan). The Arab world has always been far too divided to be able to draw clear borders around its states. Their only hope for peace is long-term economic prosperity and a transition to secularism. Until then, tyranny worked.

This piece may be fun to get you through a flight, but it doesn't offer a holistic perspective.

anonu 3 days ago 2 replies      
My view on the Middle East: it has been a playground for the powers of the world to muck around. The fall of the Ottoman Empire at the end WWI allowed the region to be carved up between the French and the British. Secret agreements like Sykes-Picot cemented borders that should never have been there. After WWII a considerable amount of support was put behind Israel (and rightly so). However, the ongoing conflict and lack of a 2-state solution to this day continues to be a rallying call for millions of Arabs against the West. To keep the Arabs in check - the West continuously undermines their governments (which may be led by strongmen - but this is better than the alternative which we have seen). In addition, the West assumes it understands the intricate complexities of the demographics on the ground. Mucking around only exacerbates the problems.
DanielBMarkham 3 days ago 1 reply      
So far this is good. There's a lot of depth.

One nit:

"Much as the United States Army and white settlers did with Indian tribes in the conquest of the American West, so the British and French and Italians proved adept at pitting these groups against one another, bestowing favors (weapons or food or sinecures) to one faction in return for fighting another. The great difference, of course, is that in the American West, the settlers stayed and the tribal system was essentially destroyed."

I think it's a mistake to only back up to the end up WWI and start running the tape there. The Arab world has a rich and nuanced history full of the exact kinds of tribal tensions we see now going back hundreds of years. There's a reason the Ottomans were the way they were -- and it has nothing to do with Colonialism. There are also great parallels between what's happening with the Arab spring and what happened when other great powers consolidated their hold over the Arabs and then left. Just citing one example seems like a tremendous disservice to the history. Also the meme of "It was the SykesPicot Agreement" has some truth but is extremely easy to lean too much on. With this amount of verbiage being produced, I'm expecting some alternative lines of reasoning to be explored.

Looking forward to more of the series!

(Apologies -- looks like the entire thing is here? Wow! I've heard of long-format writing before, but this is kindle material. Tremendous amount of work here.)

hedonistbot 2 days ago 2 replies      
This piece was a huge waste of time. Following the personal stories of a number of individuals in most of the Middle East countries does not give the reader any information about the geopolitics or power plays in the region. It just leaves the reader confused and depressed that nothing can be done and that we should probably leave these matters to people more knowledgeable. Is this some new form of journalism? And to see this in a NYT publication...

Also, the NY Times has so much to answer for in its coverage of these events that it kind of makes sense that it is avoiding any real analysis of the issues.

not_a_terrorist 3 days ago 4 replies      
tl;dr of 100 pages:

1) People are fucking poor and hungry (extreme wealth inequalities)

2) Salafi/Wahhabi (Saudi) funding of Islamism

3) Antediluvian hatred between peoples (it goes way, way, way farther back than Sykes-Picot)

pipio21 3 days ago 2 replies      
For me it is extremely simple: they have an extraordinary amount of oil (Iraq, Iran, Saudi Arabia, Kuwait) or natural gas (Libya, Iran), or they sit in the middle of strategic routes for pipelines (Syria, Afghanistan).


The West needs much more energy than it has. It has industry, and without energy its society will collapse.

Anything else is secondary. Most of those places are desert, and they don't have enough technology to protect themselves from Western (or Eastern) plundering.

Those countries can only live in peace as protectorates of powerful industrialized countries, like Saudi Arabia (de facto protectorate of the USA; its oil can only be paid for in USD), Iran (protectorate of Russia and China), or Syria (Russia).

Libya itself had a lot of Chinese civilian presence, but no military presence. So the UK, USA and France thought it would be easy to take the country by force, which they did.

They also tried with Syria, but Russia had an army there. They tried hard; remember Assad supposedly having chemical weapons, so the West needed to "save" and "free" the country? Putin reacted fast to that. Then the push to create a no-fly zone (prior to an invasion, like in Libya); again Putin reacted faster, sending his own airplanes.

Salamat 3 days ago 2 replies      
This is just another propaganda piece to obscure what is really happening. To make sure that no Arab Spring takes place, the US has sold all its allies all the weapons they might need to crush any opposition to their fiefdoms. The New York Times never explains to us who those moderate rebels are.

> "The alliance says it is fighting terrorists, a name it uses for all of Mr. Assad's foes, from the extremists of the Islamic State to more moderate rebels who came out of the Arab Spring protest movement against his rule." http://www.nytimes.com/2015/10/16/world/middleeast/syrian-fo...

From "Donald Trump Praises Dictators, But Hillary Clinton Befriends Them":

> "Clinton has described former Egyptian dictator Hosni Mubarak and his wife as 'friends of my family.' Mubarak ruled Egypt under a perpetual state of emergency rule that involved disappearing and torturing dissidents, police killings, and persecution of LGBT people. The U.S. gave Mubarak $1.3 billion in military aid per year, and when Arab Spring protests threatened his grip on power, Clinton warned the administration not to 'push a longtime partner out the door,' according to her book Hard Choices. After Arab Spring protests unseated Mubarak and led to democratic elections, the Egyptian military, led by Abdel Fattah el-Sisi, staged a coup. El-Sisi suspended the country's 2012 Constitution, appointed officials from the former dictatorship, and moved to silence opposition. Sisi traveled to the U.S. in 2014 and met with Clinton and her husband, posing for a photo. The Obama administration last year lifted a hold on the transfer of weapons and cash to el-Sisi's government. ... Egypt is far from the only military dictatorship that Clinton has supported. During her tenure as secretary of state, Clinton approved tens of billions of dollars of weapons transfers to Saudi Arabia, including fighter jets now being used to bomb Yemen. Clinton played a central role in legitimizing a 2009 military coup in Honduras, and once called Syrian dictator Bashar al-Assad a 'reformer.' And in return for approving arms deals to gulf state monarchies, Clinton accepted tens of millions of dollars in donations to the Clinton Foundation. Clinton has also boasted about receiving advice from Secretary of State Henry Kissinger, who was notorious for his support of dictators. According to records from the National Security Archive, Kissinger oversaw a plot to assassinate the Chilean president Salvador Allende and install the brutal dictator Augusto Pinochet."
jomamaxx 3 days ago 5 replies      
Ha ha ha ... ha ...

The 'Arab World' was never together. Ever.

All of the 'Anti-American Imperialism' kids here should remember that the bulk of the 'Arab World' is 'Arab By The Sword'.

Arabic is spoken across North Africa, in particular because of Arab Colonialism of the 9th-12th centuries.

Not since then has the 'Arab World' been anything resembling 'together'.

The Turks kept them (and there was not much of them) under the thumb, after that the Europeans tried to maintain some degree of balance, now the Americans.

The most recent and damaging decision by the US was Obama's withdrawal of troops from Iraq. Of course, invading in the first place was worse, but Obama, simply by virtue of having 10K soldiers sitting on a base 'behind the wire' doing nothing, could have kept forcing Maliki to play nice with the Sunnis. The moment Obama withdrew, Maliki purged Iraq of Sunnis, and the Sunni tribes decided that ISIS was 'less worse' than their own government, and there you have it.

As far as Syria ... this is a function of the 'Arab Spring' more than anything, and I don't think anyone can say anyone else is directly responsible for that. Other than the standard: Assad, Saudis, Iran etc...

Once things stabilize in Syria, maybe things can start to settle down.

susi22 3 days ago 1 reply      
Slightly off topic:

Clicking "Simplify Page" in the Google Chrome printing dialog makes this a fantastically formatted PDF. I'm impressed (be it Chrome's doing or NYT's).

bogomipz 3 days ago 3 replies      
"The Arab World" - What does that even mean? Arab is a language distinction, a language of which there are many dialects. As Arabic is spoken from Western Sahara all the way east to Oman that pretty much disqualifies Arab World from having geographical significance. Arab also does not denote religious faith as there are Arab Jews, Arab Christians(Coptic) and of course Arab Muslims.

There was once briefly a concept of Pan-Arabism but that died when Gamal Abdel Nasser died in 1970.

Does a Muslim Arabic speaker from Morocco really have any sense of kinship with an Arabic-speaking Christian (a Copt) from Egypt? I am going to say probably not. Probably not any more than two Slavic-language speakers in different parts of Europe do. Have the Saudis taken in any Arab refugee "brothers" from Syria and Iraq? No. Have the Arab Emirates? Again, no.

So what is this "Arab World" that the NYTimes and the rest of the media are so fond of using as a point of reference? Countries carved up as part of the Sykes-Picot agreement? Can they not come up with a more meaningful distinction? This matters.

giis 3 days ago 1 reply      
Just last week, I watched this documentary called "Saudi Arabia Uncovered" to understand its current state. It's on YouTube.
punnerud 3 days ago 0 replies      
To get pictures in the article, remove the last part of the URL: http://www.nytimes.com/interactive/2016/08/11/magazine/isis-...
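That trimming step can be sketched generically (the NYT URL above is truncated, so a placeholder URL stands in here; this just assumes the image-friendly variant lives one path segment up):

```python
# Sketch: strip the last path segment from a URL. The real NYT URL in the
# comment is truncated, so a placeholder is used below.
from urllib.parse import urlsplit, urlunsplit

def drop_last_segment(url):
    """Return the URL with its final path segment removed."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/").rsplit("/", 1)[0]
    return urlunsplit((parts.scheme, parts.netloc, path, "", ""))

print(drop_last_segment("http://example.com/interactive/2016/magazine/story.html"))
# -> http://example.com/interactive/2016/magazine
```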
d23 2 days ago 0 replies      
I wish long form, informative articles like this had a way to donate a small amount of money to show appreciation. I don't read any particular news outlet enough to get an exclusive subscription with them, but I'd love to reward them for good individual pieces.
Hortinstein 3 days ago 0 replies      
I really hope this gets put into a podcast/audio format, would love to ingest it on a commute
nowey 2 days ago 0 replies      
I think it started even farther back, with the overthrow of the Shah in Iran.
Cyph0n 3 days ago 0 replies      
It's fun to see the armchair historians and political theorists pop up. HN is becoming more like Reddit by the day. Stick to discussing technology guys ;)
louwrentius 3 days ago 4 replies      
ishener 3 days ago 0 replies      
mms1973 3 days ago 4 replies      
I recommend reading Raphael Patai's classic "The Arab Mind" to try to understand. Don't drink the NYT koolaid.
transfire 3 days ago 2 replies      
Just ask Lawrence. Same shit, different day.
RedHat is hiring to make Linux run better on laptops gnome.org
235 points by soulbadguy  2 days ago   178 comments top 23
brightball 2 days ago 11 replies      
As much as I like Ubuntu, I have a significantly higher level of trust in Red Hat to get this right for some reason. Maybe it's just confidence in the company's core principles.

I've hit a point where I'm ready to move on from my Macbook Pro after about 10 years and I've been looking at Linux laptop options. It's mind boggling that it's so hard to find good options.

Everything seems to revolve around "get a Windows laptop and wipe it" or "buy some flavor of old Thinkpad", with warnings about EFI and compatibility. Then there are companies like System76 who have what looks like a good offering on the surface, but I keep seeing threads about bad experiences with them.

If I could order a laptop direct from Red Hat I'd do it without hesitation.

CptMauli 2 days ago 2 replies      
My ex-colleague is now at Red Hat, and he told me one anecdote. The card reader of his ThinkPad didn't work. He created a bug in the bug tracker, got personally contacted by the guy who apparently does card reader stuff, and a day later he had a kernel where the bug was fixed.
technofiend 2 days ago 0 replies      
RedHat seems to be trying to make their OS more accessible and mainstream friendly through their $0 developer license http://developers.redhat.com/blog/2016/03/31/no-cost-rhel-de.... Not that there's anything wrong with CentOS. So it's cool that effort extends to commodity hardware, too.

Sample size of one but as part of working down the Redhat certifications track I purchased an Intel NUC http://www.intel.com/content/www/us/en/nuc/nuc-kit-nuc6i5syk... for a low power lab machine; it's really just a laptop squeezed into a cube but everything I've needed to use it for works fine. Admittedly I have not tried wireless or bluetooth.

Hopefully that means laptops built off reference Intel designs will also Just Work. Interestingly NUCs are showing up now in the Hackintosh community because despite moderate specs next to a modern desktop, they're still competitive with Apple's current hardware. No doubt Apple will refresh this year; they seem to be overdue.

AdmiralAsshat 2 days ago 8 replies      
Fedora's bleeding-edge kernel support usually means better hardware support for newer laptops, which is great. One potential drawback, however, is the decision not to include any non-FOSS drivers in the installation package. I completely understand why they do it, but it throws a wrench into the idea of loading Linux onto a laptop and having everything "just work".

For instance, getting the Broadcom wifi card that comes with my Dell XPS 13 to work on Fedora has been such a pain in the ass (proprietary driver only) that most people recommend tearing it out and replacing it with an Intel card that has a more Linux-friendly driver.

If Redhat really wants better Linux penetration in the laptop world, at some point they're probably going to have to make a decision to either go the Linux Mint route and include proprietary drivers by default, or try to engage some of the hardware manufacturers to open-source their drivers.

dxxvi 2 days ago 2 replies      
I always had both Windows and Linux (Arch) on every computer in my house (now only a few of them have Windows, because I use Windows in VirtualBox in Linux). It seems to me that wifi speed in Linux is always slower than in Windows. Are drivers the culprit? If you want people to use Linux laptops, I think you should make them really, really fast (esp. at boot) so that everybody has to wow when they just try it. No need to use gnome or kde; openbox is fine as long as it's not ugly. Then wifi speed must be faster, or at least as fast as in Windows. Next is printing. In summary, most of the issues are related to drivers.
hinkley 1 day ago 0 replies      
I have a lot of mixed emotions about RedHat and to what degree they are a net positive or negative, but I'm glad someone is taking this on.

It was so frustrating for me that the 'Linux on the Desktop' effort started right after the numbers showed that everyone was trading in their desktops for laptops.

I wanted to program ON my laptop, not program my laptop.

After spending almost 18 months trying to get all of the hardware on my laptop to work with Linux (this included swapping out the wifi card, learning ACPI scripting so I could cobble together partial fixes from four other sources, and learning Crusoe CPU registers to contribute a power-saving fix back to Transmeta, all things I have absolutely no interest in whatsoever), I said screw it and bought a Macbook. I'm on my fourth now, and aside from some difficulty installing command line tools, it's entirely removed hardware as a source of stress and procrastination.

cm3 1 day ago 1 reply      
Please focus on reducing and avoiding regressions in the Intel gpu stack. It's gotten pretty bad in the last two years. Major regressions were introduced beginning with kernel 4.2's atomic modesetting changes, across the board.
Zenst 1 day ago 2 replies      
Wouldn't it be nice if there were a universal driver that you could use on any operating system that supported universal drivers?

Alas, I'm not aware of any initiative, or indeed of any way such drivers could exist; even as binary blobs, it would be a step forward.

satysin 1 day ago 1 reply      
My main machine is an oldish ThinkPad T420s. It runs Fedora 24 flawlessly. Sadly not many machines seem to run Linux (any distro) perfectly. Part of the reason I haven't upgraded to a newer machine is because this machine just works and I am very lazy so trying to get a newer machine to run as well is more work than I care for. It isn't like performance has improved massively since SandyBridge.
walterbell 2 days ago 0 replies      
Is RedHat interested in contributing to Qubes, which uses Fedora? This would help advance the state of the art in desktop security and seamless UI compositing.
soulbadguy 2 days ago 1 reply      
While this is indeed great news, are just two people enough for the wide range of laptops and devices out there in the wild?

What I would really love to see is a cross-distribution effort in the same direction: people from the main distributions coming together, identifying the main experience pain points and fixing them upstream.

I really think there is an underserved (by both hardware manufacturers and distros) market of people who want a better Linux desktop/laptop experience. But until someone figures out a way to monetize it (like Linux is monetized on the server side), it will be hard to build on the desktop side the same kind of momentum that Linux enjoys on servers.

dovdov 2 days ago 3 replies      
better (10 years) late than never, right? :D
arvinsim 2 days ago 0 replies      
Power Management should be top priority I think.
cs702 1 day ago 0 replies      
This is great, because all distributions will benefit from it eventually, not just RedHat/Fedora. The folks at Canonical, in particular, seem to be very adept at leveraging the work of others for the benefit of Ubuntu.

I would hope these kinds of efforts lead to better collaboration and coordination between the different distros for improving compatibility with desktop, laptop, tablet, and even phone hardware...

...but unfortunately I don't think we should expect better collaboration and coordination, due to the usual political and quasi-religious barriers between distros.

cpach 2 days ago 0 replies      
Neat! Munich seems like a nice city to live in.
asteriskdelete 1 day ago 0 replies      
Yeah, two people will fix it.
hyperion2010 1 day ago 0 replies      
I'm currently running Gentoo on a T30, T60p, and X1 Carbon (1st gen). For the X1 Carbon I actually switched because Windows 7 was causing periodic freezes. Power management and drivers are always what is missing, and getting drivers written in a timely fashion is hard. That said, if they focus on a subset of laptops then they could show some major improvements. I do wonder about the old M$ ACPI manoeuvring, though; if vendors still aren't documenting features then there will be problems.
yobo 2 days ago 0 replies      
the year of linux on the laptop they said.
crudbug 1 day ago 0 replies      
Please improve GNOME multi-display support anybody.

I have triple monitor setup, GNOME always crashes. I am using XFCE with some success on F22.

digi_owl 1 day ago 0 replies      
Welcome back, Red Hat Linux...


jamespo 2 days ago 1 reply      
I find the Arch derived distribution Apricity runs very well on my HP Spectre X360
known 1 day ago 0 replies      
Sounds like RedHat is after https://bugs.launchpad.net/ubuntu/+bug/1
boynamedsue 2 days ago 1 reply      
The problem with Linux on laptops is Linux itself. The notion of Linux on anything other than servers turns most consumers off. Imagine if Android were called Linux Phones instead. It would be a disaster.

Linux should be used incidentally to the device or laptop itself. Or it should be spared for those who really want to know and understand more about it.

I used Linux on a laptop for a couple of years and, mostly, loved it. I hope Red Hat doesn't brand it as such.

George Orwell, Politics and the English Language (1946) mtholyoke.edu
291 points by Tomte  2 days ago   145 comments top 27
Houshalter 2 days ago 4 replies      
I love this essay. The whole essay is good, but I really like this paragraph:

>In our time, political speech and writing are largely the defense of the indefensible. Things like the continuance of British rule in India, the Russian purges and deportations, the dropping of the atom bombs on Japan, can indeed be defended, but only by arguments which are too brutal for most people to face, and which do not square with the professed aims of the political parties. Thus political language has to consist largely of euphemism, question-begging and sheer cloudy vagueness. Defenseless villages are bombarded from the air, the inhabitants driven out into the countryside, the cattle machine-gunned, the huts set on fire with incendiary bullets: this is called pacification. Millions of peasants are robbed of their farms and sent trudging along the roads with no more than they can carry: this is called transfer of population or rectification of frontiers. People are imprisoned for years without trial, or shot in the back of the neck or sent to die of scurvy in Arctic lumber camps: this is called elimination of unreliable elements. Such phraseology is needed if one wants to name things without calling up mental pictures of them.

Animats 2 days ago 4 replies      
Ah, Orwell. This was one of his pet peeves. He spent much of WWII translating news into Basic English for transmission to British colonies. The evasions and hyperbole of political speech had to be expressed in the plain and practical words of Basic English. That's a political act. Newspeak in "1984" came from that experience.

His list of worn-out metaphors understood by few, "ring the changes on, take up the cudgel for, toe the line, ride roughshod over, stand shoulder to shoulder with, play into the hands of, no axe to grind, grist to the mill, fishing in troubled waters, on the order of the day, Achilles' heel, swan song, hotbed" is still apt. "Ring the changes" is misused in today's South China Morning Post.[1] My own favorite is "free rein", which is a horse term. (One not used by riders today; riders say "loose rein".) It often appears today as "free reign".

Today's metaphors come from popular culture rather than the classics, and age faster. This may not be an improvement.

[1] http://www.scmp.com/sport/rugby/article/2002459/mark-coeberg...

ajkjk 2 days ago 2 replies      
I consider David Foster Wallace's "Authority and the English Language" to be a spiritual successor to this piece (http://wilson.med.harvard.edu/nb204/AuthorityAndAmericanUsag...). I'd recommend it to anyone who likes Orwell's essay.
chrisdone 2 days ago 1 reply      
The below example is stunning. I feel sick to recognize the second paragraph (especially in academic writing), and I feel strong relief that the first paragraph can exist.

> Here is a well-known verse from Ecclesiastes:

> I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all.

> Here it is in modern English:

> Objective considerations of contemporary phenomena compel the conclusion that success or failure in competitive activities exhibits no tendency to be commensurate with innate capacity, but that a considerable element of the unpredictable must invariably be taken into account.

narrator 1 day ago 0 replies      
"Marxism and the Problem of Linguistics" (1950) by Stalin [1] is an interesting read to go alongside this. You've got Stalin saying that we shouldn't distort language so much so that it loses it's practical use in everyday matters simply because it is inherited from pre-communist ideologies. In a way he is saying that the slippery slope of language manipulation is useful for political purposes, but should not be followed all the way down into impracticality. This was a problem Stalin had as people were excessively fanatical to the point of absurdity to avoid being purged, so he had to give them the ability to limit their fanaticism by saying that some allowance for practicality in the use of language was part of the Stalinist orthodoxy and thus not "reactionary".

[1] https://www.marxists.org/reference/archive/stalin/works/1950...

charlesism 2 days ago 1 reply      
This essay has been on HN a few times before, but I'm upvoting this anyways. It's one of the greatest essays ever written. It will change how you write, and how you perceive the writing of others.
devishard 2 days ago 2 replies      
What's frustrating about Orwell's writing advice here is that it doesn't really improve society unless everyone does it. If I write using Orwell's rules and my political opponent doesn't, I'll likely lose.

If anything, I wish that the phenomenon Orwell is describing would be used by the people on the "right" side more often--but people who are doing what they think is right don't as often feel the need to obfuscate it for PR purposes.

Ultimately I think understanding the phenomenon Orwell describes is fundamental. You should be able to read "Two died after a shooting incident involving LAPD officers" and know to look further to discover that meant "LAPD shot and killed two unarmed black men". But actually following Orwell's advice on how to write puts you at a disadvantage. Playing by the rules when your opponents aren't is a fool's game.

cafard 2 days ago 0 replies      
On the other hand, there is Samuel Johnson's observation in his Life of Milton:

"No man forgets his original trade: the rights of nations, and of kings, sink into questions of grammar, if grammarians discuss them."

RodericDay 2 days ago 2 replies      
I used to like this essay, but the folks over at [UPenn's Language Log](http://languagelog.ldc.upenn.edu/nll/?p=992) do a great takedown of it, and I am inclined to agree with them. Orwell is extremely hypocritical (which many people try to claim is a "deliberate stroke of genius", with very little evidence to support it).

It's particularly a hit amongst center-left liberals who are emboldened into feeling like they are very righteous by not doing anything at all. The more accurate observation comes from commenter Mark F:

> The reason Orwell's essay makes some people angry is that it depicts violations of stylistic rules as moral violations. Use the passive, it says, and you are playing into the hands of the totalitarians. I think that's also why some people like it; people can feel like they're defending the cause of freedom by writing concisely.

> I tend to side with the former camp. I think people pick up on cant pretty well without his help, except when it's telling them something they already want to believe. And in the latter case his help is no use.

TheLarch 2 days ago 2 replies      
When I hear neologisms from the radical left or right this is the first thing that comes to mind. I don't take it as a writing style guide, rather as a guide to spotting tyranny.

The neologisms from FOX and social justice warriors are politics of language.

adrianratnapala 2 days ago 0 replies      
I am enjoying the essay, to the extent that it makes C++ and its compile times seem not so bad.

But how much of the badness is really unique to the English language, or to the modern world?

My guess is that all times and places have wallowed in mushy rhetoric, and always will. If we look back at Pericles or Cicero or Jefferson and see better verbiage, that's because we are selecting for it.

conistonwater 2 days ago 4 replies      
> If it is possible to cut a word out, always cut it out.

Does anybody know if this has been (properly) studied? It's not quite so obvious that succinctness is good for readability and comprehension, and I think it could also go the other way.

elmojenkins 1 day ago 1 reply      
I'm surrounded by people who speak 'politically'. Rather than making their statements clear and explicit, they structure their words in ways that allow them to conceal the true meaning of what they said. Using verbs to misrepresent meaning makes it tough to have good communication and solid understanding between group members.
Jach 2 days ago 0 replies      
Truly a great essay... When I reread it a couple years ago I made the semi-serious connection to bad but all too common object oriented programming practices as exemplified in Yegge's http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom... Orwell says:

> As I have tried to show, modern writing at its worst does not consist in picking out words for the sake of their meaning and inventing images in order to make the meaning clearer. It consists in gumming together long strips of words which have already been set in order by someone else, and making the results presentable by sheer humbug. The attraction of this way of writing is that it is easy. It is easier -- even quicker, once you have the habit -- to say "In my opinion it is not an unjustifiable assumption that" than to say "I think". If you use ready-made phrases, you not only don't have to hunt about for the words; you also don't have to bother with the rhythms of your sentences since these phrases are generally so arranged as to be more or less euphonious. When you are composing in a hurry -- when you are dictating to a stenographer, for instance, or making a public speech -- it is natural to fall into a pretentious, Latinized style. Tags like "a consideration which we should do well to bear in mind" or "a conclusion to which all of us would readily assent" will save many a sentence from coming down with a bump. By using stale metaphors, similes, and idioms, you save much mental effort, at the cost of leaving your meaning vague, not only for your reader but for yourself.

To me it's a funny, semi-serious connection. I see many Javalanders are very happy with Java, and have been for quite some time. For them it's easier, and quicker once they're in the habit, to fire up Eclipse (or IntelliJ), autocomplete and autorefactor and glue together this and that without having to think much (hey tests are green!), sling giant names and namespaces around dozens of files and directories up and down various call stacks in and out of giant systems like Spring and Hibernate, a few even try to match large programming patterns to everything... Yet still they frequently fail to convey in code just what exactly is actually happening. In many cases they just needed a few functions in a file or two with concise names that can be remembered and typed without assistance, even written faithfully on paper without having to use shorthand.

Jedd 2 days ago 0 replies      
I recall reading one of Christopher Hitchens' essays on language, where he referred back to this essay of Orwell's.

(I don't believe it was in his book 'Why Orwell Matters', but maybe it was.)

Hitch is possibly best known now as a vocal anti-theist, but his writings on language (mis-)use are delightful pieces.

mk89 1 day ago 0 replies      
Despite the fact it was meant for something probably different, this reminds me of what Umberto Eco wrote in his "Ur-Fascism": [...] We kids hurried to pick up the shells, precious items, but I had also learned that freedom of speech means freedom from rhetoric.

And, sadly enough, nowadays we are full of rhetoric.

compil3r 2 days ago 0 replies      
oppressive ideology in close association with bad prose, love this essay.
yks 2 days ago 2 replies      
Using the opulent language Orwell argues against in English language tests like IELTS is a sure way to increase your score.
tkfu 2 days ago 4 replies      
I detest this essay. I won't go too far into the reasons why, because David Beaver has already done an excellent job of that: http://languagelog.ldc.upenn.edu/nll/?p=992

I'd also agree with Geoff Pullum's characterization of it as "a smug, arrogant, dishonest tract full of posturing and pothering, and writing advice that ranges from idiosyncratic to irrational" (http://chronicle.com/blogs/linguafranca/2013/04/04/eliminati...)

But apart from it being a very poor source of writing advice, I don't believe it's accurate in its diagnosis of how language is deployed for political ends; the question is a much more complicated one than he claims here.

There exist actual good books that can teach you how to write better, instead of patting you on the back and allowing you to tut-tut at those plebeians who write things that are "outright barbarous". Pinker's The Sense of Style is one of those; Anne Lamott's Bird by Bird is another. There also exist much better (and more accurate and scientific) resources about how language actually affects the way we think about things. Benjamin Bergen's Louder than Words is an excellent start, and virtually anything from Lakoff's long list of publications is worth reading; Metaphors We Live By is the classic, but Women, Fire, and Dangerous Things in particular is excellent from his more recent work.

vivekd 1 day ago 0 replies      
I remember reading this in my high school English class, it was really the only substantive work that I ever come across on how not to write badly. I could never find another how-to work on how to keep your writing from being bad.
merkleme 2 days ago 0 replies      
Great essay, and a mantra to live by - "If it is possible to cut a word, cut it."
tomelders 2 days ago 4 replies      
I've often (but only casually) wondered why "Orwellianism" isn't a thing, like Marxism or Leninism or Reaganism/Thatcherism. I kind of understand why, as I think his worldview outs such things as inherently abhorrent, but since when has that stopped people making idols out of people and dogma out of the things they say?

But still, it would be nice if "Orwellianism" meant "adhering to the principles of George Orwell in encouraging critical thinking and considered, reasoned observation" and if it became a school of political thought... alas, all good ideas are corrupted in the end, but human progress walks on the stepping stones of ideologies, and it seems we've had nothing but nasty ones for a very long time.

oftenwrong 2 days ago 0 replies      
Is this essay still under US copyright?
igravious 2 days ago 0 replies      
What's the canonical way to cite this essay? Answers on the back of a bibtex napkin please.
elie_CH 2 days ago 0 replies      
The proposed French translation is awful :)
bitwize 2 days ago 1 reply      
People want to promote horrible policies without Trumping themselves.
grabcocque 2 days ago 4 replies      
Orwell's advice on how to write better English is at best naively harmful, and at worst cravenly hypocritical. He never followed his own "rules", so why in the hell should anyone else? Answer: because his rules are self-serving bullshit.

Case in point: Orwell uses passive constructions 40% more frequently than an average English corpus. This essay is full of them. Language Log did a brilliant analysis of the essay's towering inaccuracy and hypocrisy:



If it is possible to cut a word out, always cut it out.

And wouldn't you know it, the very first sentence of Orwell's essay runs:

Most people who bother with the matter at all would admit that the English language is in a bad way, but it is generally assumed that we cannot by conscious action do anything about it.


His rules are bullshit, and he knew it, which is why he was smart enough to ignore them completely.

The People's Code whitehouse.gov
273 points by jonbaer  3 days ago   60 comments top 15
thewopr 3 days ago 1 reply      
Full disclosure, I'm in a federal department that has been pushing for more open source for a while.

This is a great move by the White House. While there are a lot of groups that are trying to push for more openness and release of software, it can often be challenging. A lot of federal groups have been taught over the years to be very risk-averse, and open software is viewed by them as a risk. Probably one of the most common concerns is, "What happens if someone takes and misuses our software?" In a highly risk-averse federal environment, these can be challenging arguments to fight against.

If you like and support this kind of thing, one big thing you can do is to contribute and supply feedback. We frequently have to go to our superiors and justify what we are doing with regards to open source. We say things like, "this repository had X pull requests from non-federal contributors". Or, "We got Y comments and questions from non-federal users of our projects".

It could be as simple as an email saying "Hey thanks, I found this useful", to a full-on pull request fixing an issue or with a new feature request. The more fodder we have to say "open source increases engagement and creates positive feedback" the more you will see this kind of thing happening.

ideonexus 3 days ago 2 replies      
As a developer, I have been so impressed with the Obama Administration's efforts to put everything online. There is a .gov for anything now, and I've been watching organization after organization digitize our commons and put them online.

This "Federal Source Code" policy is a great extension of the project-open-data initiative released a few years back:


I found the db-to-api project in this repository incredibly useful for quickly and safely exposing data from one of my applications to clients.

The only failing I see to these many many initiatives is that so few people realize these powerful free resources are out there to be taken advantage of. I hope that changes in the future.

swalsh 3 days ago 0 replies      
The cool part of this is it seems like there are a bunch of APIs on here. Perhaps that means I can submit pull requests to get features that the government previously would not have had the resources to develop.

A good example, there's a recall API: https://github.com/GSA/recalls_api

this is cool, I want to use it... I have an eCommerce website where I sell food. It would be cool if I could be proactive in pulling items from my product catalog. The issue is there is no UPC in the API, so there's no easy way to correlate my products to recalled products. A cursory look at the source code shows me the source:


if you open that up, a lot of the items have the UPC codes in there. This gives me the ability to parse the details, and add the fields I need.
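The matching step described here could be sketched roughly like this (a hypothetical sketch: the real recalls feed's format may differ, so the UPC extraction below simply scans free text for 12-digit codes, and the catalog field names are made up):

```python
import re

# UPC-A codes are 12 digits; scan a recall entry's free-text details for them.
UPC_RE = re.compile(r"\b\d{12}\b")

def extract_upcs(recall_text):
    """Return the unique UPC-like codes found in a recall description."""
    return sorted(set(UPC_RE.findall(recall_text)))

def flag_recalled(catalog, recall_text):
    """Return catalog items whose UPC appears in the recall text."""
    recalled = set(extract_upcs(recall_text))
    return [item for item in catalog if item["upc"] in recalled]

if __name__ == "__main__":
    details = "Recalled lots include UPC 041331092609 and UPC 024000163190."
    catalog = [
        {"sku": "A1", "upc": "041331092609"},
        {"sku": "B2", "upc": "999999999999"},
    ]
    print(flag_recalled(catalog, details))  # only A1 matches
```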

Roboprog 2 days ago 2 replies      
I ran into this last year: the feds set up their own "Bootstrap" type project:



Declanomous 3 days ago 1 reply      
I think this is great. Maybe the code won't be reused, but it adds another dimension to governmental transparency, which is always a good thing. Furthermore, any code produced by the government is effectively being produced for the American people. We should have access to the code to use as we see fit.

I wonder how this will affect bids for government software projects? Will companies be upset that they have to open-source their software? Regardless of whether an individual agency will use it, I can see the initiative saving time and money, since programmers will know they can just find what they need in a repository. If there is one thing you can count on, it's programmers and government employees being lazy.

SmellTheGlove 3 days ago 2 replies      
Putting the code out there for other agencies to use is great, but getting them to actually use it will be another battle. Reuse is tantamount to taking away someone's budget, and therefore, status. The occurrence of two different groups implementing very similar things entirely separately happens more often than you'd think. And I'm not convinced they want to talk or work together, because bureaucrat/military officer X doesn't want to lose budget/people.
emilecantin 3 days ago 0 replies      
> This is, after all, the People's code.

I've been thinking this for a long time, and I'm pleasantly surprised that the US government now says this publicly. I hope this point of view becomes more prevalent in the near future.

qwertyuiop924 2 days ago 1 reply      
As a person outside the government, I'm concerned about some potential problems with this initiative:

For one, Pull Requests: If a government agency gets a PR on some code, I'm concerned there may be pressure not to accept it: auditing requirements that are so high that nobody wants to review PRs (not that audits are bad!), or policies that otherwise don't encourage PRs, meaning that improvements don't get back to the government.

Secondly, us winding up with a repeat of some of the problems that other previously proprietary projects (namely, OpenSolaris) encountered: the code that was open-sourced being dependent on code that, for whatever reason, couldn't be open-sourced, hampering forks and further development outside of the organization that developed the software in the first place.

Even if issues like this, or issues that I haven't even thought of, occur, this is a huge step forward.

fludlight 2 days ago 2 replies      
This is cool, but can we call it something else? The People's * is a prefix used by totalitarian governments.
rm_-rf_slash 3 days ago 1 reply      
As helpful as this may be, the real transformative code is often proprietary. As others have mentioned here, not everybody will use every reusable component, whether it be because of ignorance, larger system incompatibility, or simply turf protection.

We should look into solutions for intellectual property that are based on an information economy instead of an industrial economy.

Personally, I think it would make sense (perhaps more for pharmaceuticals than software) to significantly shorten the time a patent is valid and/or strip the protection of monopolistic production rights, and instead allow the free market to sell the product at the lowest cost it can be made at, as long as there is a royalty fee. How the fee is determined, I'm not sure yet.

Still, it's clear that our IP system is creaky, overcomplicated, and is tilted too far in the direction of big business, lawyers, and patent trolls, instead of the actual inventors and consumers.

clarkmoody 2 days ago 0 replies      
Avoiding duplication of source code across Federal agencies is nice, but it would be better to eliminate duplicate agencies and functions within the bureaucracy.
nsx147 2 days ago 1 reply      
Check out this guy's commit history: https://github.com/alex-perfilov-reisys?tab=overview
batbomb 2 days ago 0 replies      
I'm very familiar with the Department of Energy process, and there are a few considerations for every lab in the DOE.

TL;DR: The DOE encourages open source software, but it isn't default and there's some (low) barriers.

In general, though, what you can do with your (non-export-controlled) code consists of (in order of increasing difficulty):

1. Nothing. Keep your code private. If you'd like to stop maintaining code but want to make sure it sticks around, the DOE has a software library, the ESTSC, in Oak Ridge (a division of OSTI). It may also be the case that the entity running the lab wants to claim ownership.

2. Open Source. Due diligence is needed to ensure funding agencies and MOUs are respected. Copyright is typically assigned to the contractor running the lab in question (i.e. for Berkeley lab, -> Copyright goes to UC, SLAC -> Stanford, etc...). Major international collaborations can be a bit tricky because foreign countries have their own rules. I think more emphasis is needed on this front going forward.

The DOE also wants to track the popularity of Open Source software, namely downloads. GitHub has met their requirements for reporting.

The DOE discourages use of the GPL and similar licenses. The reason, as I understand it, is due to the fact that the Government (i.e. Defense) must be able to use and modify software (and give to contractors, etc...) without falling under any additional burden. I believe the BSD license is preferred most widely across the labs.

In some cases, people at labs do release software under GPL. If they didn't get special permission, they are likely violating their lab's contract with the DOE.

3. Commercialize. This is really hard. You have to first perform market research, establish the market, spin off, deal with SBIRs, etc... This is a high barrier.

I've been personally working on streamlining the process for (2) with legal for my lab, so that anybody can open source their software very easily, hopefully by just filling out a web page. I'm hoping the recent white house directives help eliminate some of the bureaucracy involved in the process. I've also been trying to reduce fragmentation across the lab. The lab has never offered an official SCM platform, and grad students/postdocs are notoriously bad at keeping important source code in their personal GitHub and then leaving after some time.

It should be noted that almost all national lab facilities are effectively run under contract, so nearly all national lab employees are not actually federal employees. So we do have a slightly different set of rules.

Finally, there is already a decent presence on github and bitbucket of labs, in case you are interested:

It should be noted this is an extremely, extremely small slice of the software that drives experiments, projects, and research in the lab. Many times software belongs to the project/research group, so there's likely a project github organization where the code naturally resides. This is sort of a consequence of labs becoming more and more multi-disciplined, i.e. the science missions of labs like SLAC and Fermilab are no longer aligned primarily around their accelerators.

OSTI is supposed to maintain an index of that software if it's reasonably important, but it's not really enforced.

PS: If someone from USDS/data.gov/18f can and would like to help out with this in any way, I'd be happy to collaborate!

afarrell 3 days ago 3 replies      
I wonder to what degree this applies to the DC city government and if it can be made useful for municipalities generally.
JimLaheyMD 2 days ago 1 reply      
How about we start with the source code for electronic voting machines?
Fuchsia, a new operating system github.com
346 points by helloworld517  2 days ago   144 comments top 26
c3534l 2 days ago 8 replies      
Fuchsia is not a combination of pink and purple. It is the color your brain comes up with when it sees contradictory color signals (such as very high and very low wavelengths without the appropriate middle stimulation). It's the only color not in the rainbow. As you can see from this additive color program (http://trycolors.com/?try=1&ffb5d9=0&c31cff=0), pink and purple create a lavender color. Whereas fuchsia is what happens when you combine colors in an unusual way (http://www.exploratorium.edu/sites/default/files/ColoredShad...). Normally I wouldn't be this pedantic, but this is Hacker News after all.
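For the curious, the mix the linked demo performs can be sketched in a few lines. The hex values below are the ones embedded in the trycolors URL (ffb5d9 and c31cff); simple per-channel averaging is an assumption about how that site combines colors, but it does land on a lavender rather than a fuchsia:

```python
# Naive additive mix: average each RGB channel of two colors.
# The input values are the pink/purple from the trycolors link;
# per-channel averaging is an illustrative assumption.
def mix(c1, c2):
    return tuple((a + b) // 2 for a, b in zip(c1, c2))

pink = (0xFF, 0xB5, 0xD9)    # ffb5d9
purple = (0xC3, 0x1C, 0xFF)  # c31cff

print('#%02x%02x%02x' % mix(pink, purple))  # → #e168ec, a lavender
```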
pavlov 2 days ago 2 replies      
The repo at https://fuchsia.googlesource.com reveals a rather interesting UI story for this new operating system.

It seems like the intention is to use Flutter [1] as the UI layer. Flutter uses the Dart language, so there's a Dart environment included in Fuchsia too [2].

For rendering, Fuchsia includes a project called Escher [3] which is described as a physically based renderer that supports soft shadows, light diffusion and other advanced effects. Looking at the source code, Escher is designed to use either OpenGL or Vulkan as the underlying graphics API. (There's an iOS example project included in Escher's source tree. Would be interesting to build that.)

It's not immediately obvious why a lightweight operating system would need a renderer that can do realtime soft shadows and light effects...! But I think the idea here is to build a UI layer that's designed from scratch for Google's Material design language. Shadows and subtle color reflections are a major part of that "layered paper" aesthetic.

So, the stack seems to be: Dart is the language for GUI apps, Flutter provides the widgets, and Escher renders the layers.

The underlying application framework is called Mojo [4]. It already offers bindings for Go, Java, JavaScript, Python and Rust in addition to Dart, but maybe those languages are meant for services rather than GUI apps. (At least I can't see an easy way to create Flutter widgets from something like Rust without loading the Dart VM.)

[1] https://flutter.io

[2] https://fuchsia.googlesource.com/dart_content_handler/

[3] https://fuchsia.googlesource.com/escher/

[4] https://fuchsia.googlesource.com/mojo/

ansible 2 days ago 2 replies      
I'm calling it now: this is for augmented reality displays and similar. You want an RTOS for low and predictable latency. And current GUIs aren't really suited to 3D environments you can walk around inside.

This is Google's next Android, with a low latency rendering pipeline for the next generation of mobile devices.

fixmycode 2 days ago 6 replies      
I remember a post earlier today about how open source projects needed better marketing. This is a prime example. I had to dive in to find out what the project was all about...
ocdtrekkie 2 days ago 3 replies      
Some useful bits from IRC:

[16:21] <ocdtrekkie_web> Why's it public (mirrored to GitHub even) but not announced or even documented what it's for?

[16:22] <@swetland> ocdtrekkie_web: the decision was made to build it open source, so might as well start there from the beginning

[16:22] <lanechr> ocdtrekkie_web: things will eventually be public, documented and announced, just not yet

[16:23] <@swetland> currently booting reasonably well on broadwell and skylake NUCs and the Acer Switch Alpha 12, though driver support is still a work in progress

[16:24] <@travisg> yeah and soon we'll have raspberry pi 3 support which should be interesting to some folk

Sidebar comment: I wonder how much more activity this thread would be getting if the subject line had "by Google" in it. LOL

pavlov 2 days ago 1 reply      
> Pink + Purple == Fuchsia (a new Operating System)

Pink [1] and Purple [2] were both Apple codenames for operating systems. Probably not a coincidence, but I don't see an obvious connection...

[1] https://en.wikipedia.org/wiki/Taligent#Pink_and_Blue

[2] http://www.phonearena.com/news/Did-you-know-that-the-codenam...

helloworld517 2 days ago 0 replies      
Hosted on https://fuchsia.googlesource.com/ is what looks like an operating system in early development.

Mirrored on Github where it's described as Pink + Purple == Fuchsia (a new Operating System)

The kernel component 'Magenta' reveals it "targets modern phones and modern personal computers with fast processors, non-trivial amounts of ram with arbitrary peripherals doing open ended computation." [1]

[1] https://github.com/fuchsia-mirror/magenta/blob/master/docs/m...

kevin_thibedeau 2 days ago 3 replies      
> It is good alternative to commercial offerings like FreeRTOS [1]

FreeRTOS is GPL with an exception for static linking making it effectively free if you make no modifications. There is, however, an onerous clause prohibiting the publication of comparative benchmarks. [2]

[1] https://fuchsia.googlesource.com/magenta/+/HEAD/docs/mg_and_...

[2] http://www.freertos.org/license.txt

mcirsta 2 days ago 1 reply      
The only thing that really bothers me with this new OS is that the kernel is no longer GPL. With a GPL kernel like Linux, we had a chance of getting the kernel source code for our devices (some companies don't care that it's GPL anyway), but if it's Apache or BSD, good luck with that.
fredgrott 2 days ago 1 reply      
It's a new RTOS...

Basically that means more than just cell phones as you have embedded systems in vehicles that use RTOS, watches, medical devices, etc.

It's the operating system that runs the baseband CPU/chip, otherwise known as the baseband processor.

lwis 2 days ago 0 replies      
There's a distinct lack of information in their repos' READMEs.
bobajeff 2 days ago 0 replies      
Does anyone have an idea of what kind of technical problems this is trying to solve?

It sounds like it's trying to be a RTOS for phones and modern hardware. But I know there has to be more to it than that.

merb 2 days ago 1 reply      
Actually, this compiles pretty easily. It only runs on the provided qemu, however, and without adding some user-space tools you can only use kilo, which is awkward to handle under a macOS terminal :( But it's really, really easy to set up and even to integrate user-space programs. It's extremely simple to do useful stuff on it.
vacri 2 days ago 1 reply      
Well, if it takes off, it'll have the side-effect of getting more people to be able to spell 'Fuchsia' correctly...
HoopleHead 1 day ago 0 replies      
Well, it may be difficult to fathom what this project is all about but, at least, I've learned one thing. I've been spelling FUCHSIA wrong all these years. I always thought it was FUSCHIA.

[As I have several growing in the garden, this is an important development]

hackaflocka 2 days ago 1 reply      
I consider myself fairly tech savvy, and have done a little programming here and there.

But from the linked page I couldn't figure out where any of the documentation of this "thing" is, nor how to install it, nor what platforms it's for.

Was it just me?

zfrenchee 2 days ago 3 replies      
Put Fuchsia and Android green on a color wheel. I dare you.
colemickens 2 days ago 4 replies      
Written in C. What a shame.

edit: Thank you to all of the repliers, I had no idea that most OSes were written in C. Er, actually, I'm more than well aware of that fact and I'm familiar with the number of CVEs that have occurred over the years because of the lack of memory safety involved in that C code.

Sorry, I simply don't get the appeal of writing more operating systems and network-exposed code that isn't written in a safer language. Say like Rust; see Redox.

united893 2 days ago 1 reply      
Would someone most kindly provide a VM?
merb 2 days ago 0 replies      
Apache2 and MIT. Sounds interesting
dart_user 2 days ago 1 reply      
>> It seems like the intention is to use Flutter [1] as the UI layer.

Is Flutter already ready for use? If Flutter is not ready for use, then how can it be used?

lotsoflumens 1 day ago 0 replies      
I'm (not) eagerly awaiting the day when some IoT thing "Fucks ya".
aezell 2 days ago 1 reply      
I'm working on a Twitter client that only runs on this OS. Wait a sec... Oh, I see. Time machine was NOT set to 2010.

Working on a Pokemon GO client that only runs on this OS.

whyagaindavid 2 days ago 1 reply      
No screenshots?
empressplay 2 days ago 1 reply      
thekevan 2 days ago 0 replies      
I have not looked into this and don't have expertise, but my first split-second thought was "how are they going to keep up with security exploits?" Sad that my first thought is that someone's going to try to steal from people using it.
135M messages a second between processes in Java (2013) psy-lob-saw.blogspot.com
222 points by matteuan  17 hours ago   133 comments top 8
carsongross 16 hours ago 9 replies      
The JVM is a treasure just sitting there waiting to be rediscovered.

It really is a shame that there is so much noise and unnecessary complexity around using it.

jpalomaki 5 hours ago 0 replies      
There are also more lightweight alternatives to "XML-heavy traditional frameworks" on the Java side. Check out, for example, Play [1], which makes the "write some code, test in browser, write more, test" cycle possible.

atemerev 16 hours ago 1 reply      
Java in HFT environment is wonderfully perverse and beautifully abused in so many ways.
option_greek 3 hours ago 0 replies      
With all those memory accesses marked as unsafe, I'm wondering why not just use C++.
srtjstjsj 15 hours ago 2 replies      
Is there any reason someone would want to run multiple Java processes communicating at high speed? It seems useless, since the massive overhead of each process makes IPC useless in a high-performance environment.
xchip 16 hours ago 1 reply      
Moving blocks of memory around is something I would expect Java to do quite fast, just because the VM has intrinsic instructions for it. You could do the same thing in any other language.
bogomipz 13 hours ago 2 replies      
Can someone elaborate on this:

"IPC, what's the problem? Inter Process Communication is an old problem and there are many ways to solve it (which I will not discuss here)."

How is IPC an old problem and how was it solved?
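
The "many ways" the article waves at are the classic mechanisms: pipes, Unix domain sockets, message queues, and shared memory. A minimal sketch of the simplest one, a pipe between two processes (this is not the article's Java/shared-memory approach — a pipe copies data through the kernel, which is exactly the overhead shared memory avoids):

```python
# One classic answer to "how was IPC solved": a pipe between a
# parent and a child process. Minimal illustrative sketch only.
from multiprocessing import Process, Pipe

def child(conn):
    msg = conn.recv()        # blocks until the parent sends
    conn.send(msg.upper())   # reply, transformed
    conn.close()

if __name__ == '__main__':
    parent_end, child_end = Pipe()
    p = Process(target=child, args=(child_end,))
    p.start()
    parent_end.send('ping')
    print(parent_end.recv())  # → PING
    p.join()
```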

leeoniya 16 hours ago 1 reply      
Automatically closing FIN_WAIT_2 is a violation of the TCP specification cloudflare.com
234 points by jgrahamc  2 days ago   30 comments top 8
dap 1 day ago 3 replies      
This is an interesting case, but I'm confused about some of the details.

> A little known fact is that it's not possible to have any packet loss or congestion on the loopback interface.

This seems a bit misleading, given the two counterexamples that the article describes after this.

> If you think about it - why exactly the socket can automatically expire the FIN_WAIT state, but can't move off from CLOSE_WAIT after some grace time. This is very confusing... And it should be!

On illumos, the FIN_WAIT_2 -> TIME_WAIT transition happens only after 60 seconds if the application has closed the socket file descriptor. In that case, by definition the application has no handle with which to perform operations on the socket. The resource belongs exclusively to the kernel. If the other system disappeared forever, and there were no timeout, that socket would be held open forever.

By comparison, in CLOSE_WAIT, the application still has a handle on the socket, and it's responsible for the resource. The application can even keep sending more data in this case (as part of a graceful termination of a higher-level protocol). Or it could enable keep-alive. It's able to respond to the case where the other system has gone away, and it could break the application if the kernel closed the socket on its behalf.

I think the behavior is non-obvious, but pretty reasonable.

to3m 2 days ago 1 reply      
Also in socket surprises, once you've moved on from CLOSE_WAIT to TIME_WAIT, and doing everything pretty much by the book, you can run out of ephemeral ports and hit this: https://goodenoughsoftware.net/2013/07/15/self-connects/

(One way I like to test multithreaded stuff is to have a test mode where it runs for a while and quits, then run N copies of it at once, repeatedly, and leave the whole thing running for an hour/until it goes wrong. So that's how I encountered this in my case.)
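
The telltale signature of the self-connect described in that link is a socket whose local and peer addresses are identical. A sketch of the check, shown here on a normal connection where the two differ (actually reproducing a self-connect requires exhausting the ephemeral range, as the linked post describes):

```python
# Detecting a TCP "self-connect": after connect(), compare the
# local and peer addresses. On a genuine self-connect they are
# equal; on a normal connection, as below, they differ.
import socket

srv = socket.socket()
srv.bind(('127.0.0.1', 0))    # let the kernel pick a free port
srv.listen(1)

cli = socket.socket()
cli.connect(srv.getsockname())

if cli.getsockname() == cli.getpeername():
    print('self-connect!')     # the pathological case from the link
else:
    print('normal connection')

cli.close()
srv.close()
```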

wyldfire 2 days ago 1 reply      
This is only really a problem when there's a collision of the quadruple identifying the connection. The article indicates as much - "If you want to reproduce this weird scenario consider running this script. It will greatly increase the probability of netcat hitting the conflicted socket".

I will admit that I probably wouldn't expect an ETIMEDOUT over a loopback, but I think I would very quickly assume that one side or the other has a bug regarding leaked resources.

> It seems that the design decisions made by the BSD Socket API have unexpected long lasting consequences. If you think about it - why exactly the socket can automatically expire the FIN_WAIT state, but can't move off from CLOSE_WAIT after some grace time. This is very confusing... And it should be!

One side-effect of having a timeout drive you from CLOSE_WAIT to LAST_ACK without a close() would be that the remote side would not be able to see the application-level reaction to its active close. Determining whether this was a graceful application-level closure would not be possible anymore (though I'll admit I'm not sure how critical that is to the protocol).

mcguire 2 days ago 1 reply      
Interesting. I would have thought that the server would have sent a reset when it received a SYN on a CLOSE_WAIT socket, not ignored it.
bluejekyll 1 day ago 1 reply      
I always thought CLOSE_WAIT was due to unsent bytes after a close(). This definitely differs from my understanding. I'm surprised the article doesn't discuss SO_LINGER at all. From the man pages:

> SO_LINGER controls the action taken when unsent messages are queued on socket and a close(2) is performed. If the socket promises reliable delivery of data and SO_LINGER is set, the system will block the process on the close attempt until it is able to transmit the data or until it decides it is unable to deliver the information (a timeout period, termed the linger interval, is specified in the setsockopt() call when SO_LINGER is requested). If SO_LINGER is disabled and a close is issued, the system will process the close in a manner that allows the process to continue as quickly as possible.

Am I wrong?

theptip 1 day ago 0 replies      
Props to CloudFlare for putting out high-quality case studies. I consistently learn at least one new flag/command to add to the quiver every time I read one of these.
coldcode 1 day ago 2 replies      
Even today something as basic as TCP can have interesting bugs.
Alphabet is still figuring out how to be a conglomerate backchannel.com
190 points by mirandak4  1 day ago   91 comments top 13
ChuckMcM 1 day ago 7 replies      
"...operating at a level of stealth unusual even for the normally secretive firm."

I chuckled at that. When I worked there I participated in no less than a dozen "super secret" projects. Stealth was the standard operating mode. In engineering I attributed it to the somewhat interesting way in which recognition and promotions were handed out, which was to say if someone else got word of your idea/project and executed on it more quickly they could declare victory and take the glory. So keeping things secret until success was assured was the "smart" play. And it didn't hurt that when things went south you didn't have to take the blame, since nobody knew you were working on it. A win-win situation, as they say.

Levy makes it sound like a big deal they were being super secret but really? I suspect it was keeping things that might, or might not, happen in the future from becoming part of the questions at a TGIF meeting.

What the re-organization has done has put a spotlight on how much Alphabet is a one trick pony. It's a good trick, and a strong pony, but it doesn't translate into other markets. And several other organizations have been training their ponies hard and are taking away that specialness.

The current road Alphabet is on doesn't seem to me to end in a happy place, so I continue to watch them to see if they will find a way to turn off it.

BinaryIdiot 1 day ago 4 replies      
Looks like the author wanted to write something, anything about Alphabet since today is the one year anniversary but there isn't much substance in here.

I would agree with some of the assertions that yes Google is still Google and Alphabet seems almost non-present but I don't know if they have finished their integration / breaking apart of groups or really much of anything. I still think Android and some of the other parts of Google should probably be split out but I have no idea if that's the best choice for them or not.

I would love some more information on the whole formation of Alphabet. This, unfortunately, is not that article.

dragonwriter 1 day ago 2 replies      
That's a whole lot of words to say almost nothing, and to confuse unrelated things (like market cap and revenue, as in: "Wall Street seems to appreciate the Alphabet structure. In the last year, the stock price has risen over a hundred points and reached a record high this year, at a market cap of almost $550 billion. These revenues virtually all come from the Google division, of course.")

[EDIT: The article has been edited to make the particular paragraph quoted above less nonsensical.]

aresant 1 day ago 1 reply      
Interesting that the first supporting paragraph on the "turmoil" this is causing is Tony Fadell & Bill Maris leaving.

IMO Those are red herrings:

1) Fadell was extremely unpopular at the time of his departure after several missteps and lack of product releases. He was going to be shown the door regardless.

2) Bill Maris explanation is straight forward - he's super rich, just started a family, is sick of being on airplanes and phone calls 24/7 and literally CRUSHED IT as a VC on the weight of his Uber investment alone.

Animats 1 day ago 2 replies      
Self-driving cars are a great concept and will make money, but reality is starting to set in at Alphabet. Their destiny is not to disrupt the auto industry. Their destiny is to be an auto parts supplier in Novi, MI. Google is going to be selling a set of components to auto manufacturers for about $1000-$3000 to make a car self-driving. Self-driving will be an option the customer orders at the dealership, along with other infotainment options.

This is kind of a letdown for Google's people. Less going to TED and SXSW, more going to the Automotive Parts Suppliers Conference at the Marriott in Troy, Michigan. This may explain some of the turnover.

profeta 1 day ago 1 reply      
Summary: Google is still Google. It still has tons of heads, and all the profit remained there (Android, YouTube, etc). Alphabet is just a place for the litigation-prone products, such as Nest and self-driving cars.
patmcguire 1 day ago 0 replies      
Lol'd at "(Alphabet better be carefulif things get even better, no one will work there anymore)"
geodel 1 day ago 0 replies      
Well, I am not sure how much change was expected in 1 year. Was it that all the other projects would start posting Google-level revenues or profits?
SmellTheGlove 1 day ago 0 replies      
I'm not really sure what the intent is here. Alphabet is a holding company and you rarely hear anything about them. Outside of the annual shareholders meeting, is anyone really writing about what Berkshire Hathaway is doing in Omaha day to day? No, they write about their businesses. There's really not much to add to the conversation regarding the holding company itself, since it's really a top level entity meant to compartmentalize business units, often for risk purposes.
swyman 1 day ago 0 replies      
I thought Fadell leaving was universally viewed as a positive...
dimino 1 day ago 1 reply      
I don't understand why there's so much hay being made over Alphabet's reliance on Google to fund its other ventures. Wasn't that the whole fucking point, to provide a way to separate the moonshots from Google, to ensure Google's profitability gets reported accurately, and not hit by the losses from the other projects?

Furthermore, aren't these project designed and expected to not be profitable in the medium term, with the hope/prayer of huge, life altering payoffs for humanity "eventually"?

adamb_ 1 day ago 0 replies      
I wonder if breaking up a monolithic company like Google has similarities to breaking up a monolithic legacy Java EE app :P
kctess5 1 day ago 0 replies      
They spelled "Verily" wrong in the 3rd paragraph (they wrote "Verity"). That's where I stopped reading.
Richard Feynman and the Connection Machine longnow.org
190 points by jonbaer  15 hours ago   26 comments top 6
danso 14 hours ago 4 replies      
> To find out how well this would work in practice, Feynman had to write a computer program for QCD. Since the only computer language Richard was really familiar with was Basic, he made up a parallel version of Basic in which he wrote the program and then simulated it by hand to estimate how fast it would run on the Connection Machine.

I have most of Feynman's memoirs and a few of his biographies, but I only see scant references to his work in computing, possibly because it seems so trivial compared to his other accomplishments. That said, it would be interesting to read more about his computational work given, as the OP says, that he was also very much a pencil-and-paper guy.

gavinpc 12 hours ago 0 replies      
Terrific writeup. Thanks for submitting this. Pieces about Feynman are seldom disappointing (it's as if he continues to live through these inspired stories), but this one had lots more to offer as well.

Note that the markup was not quite exported completely, so [links] and $math notation$ are still in a raw syntax, which is confusing at first.

bambax 1 hour ago 0 replies      
I re-read this text in full every time it comes back up on HN and never stop loving it.
eggy 13 hours ago 1 reply      
>The notion of cellular automata goes back to von Neumann and Ulam, whom Feynman had known at Los Alamos. Richard's recent interest in the subject was motivated by his friends Ed Fredkin and Stephen Wolfram, both of whom were fascinated by cellular automata models of physics. Feynman was always quick to point out to them that he considered their specific models "kooky," but like the Connection Machine, he considered the subject sufficiently crazy to put some energy into.

I am currently starting round four in my studies of CAs, and I thought this quote was interesting in bridging the earlier work by von Neumann and Ulam to Ed Fredkin and Stephen Wolfram, with Feynman in the middle spanning them.

The book, "Cellular Automata: A Discrete Universe" by Andrew Ilachinski, has had it critics, but it is an amazing compendium to read.

sverige 11 hours ago 0 replies      
TL;DR : Read the (fning) article! The details matter!
meeper16 12 hours ago 1 reply      
It's all about Biomimetic Cognition.
For a Better Economy, Add Commuter Rail? citylab.com
144 points by misnamed  19 hours ago   135 comments top 16
jswrenn 14 hours ago 1 reply      
Speaking as a Providence resident: Pawtucket begins where the comfortable-walking-distance-to-the-Providence-station ends. Pawtucket and Central Falls are languishing. Providence, generally speaking, is more affordable than Boston to live in (which drives a lot of commuter traffic to Boston), but the presence of Brown is steadily driving rents up in the suburb closest to the Providence station. Pawtucket and Central Falls should be the affordable residential suburbs of Providence, but they're not (or, at least, not so much as they should be). Commuter rail to these towns would help revitalize them and ease some of Providence's present growing pains.

The thrust of commuter rail into southern Rhode Island has failed because it has not responded to the needs of residents. In Wickford, where I grew up, very few residents take advantage of the weekday commuter lines running from the new Wickford Junction Station, since very few residents there commute to Boston. There are, however, plenty of retired folk who would love to take a weekend trip to Boston via train, but are stymied by the station being closed on weekends. If weekday commuter rail is to ever succeed in southern Rhode Island, it will not be in the short term.

niftich 19 hours ago 3 replies      
The point in this particular case is that Rhode Island has become a bedroom community for Boston, so improving transportation links between the two can make RI even more attractive to discretionary residents (who come here because they want to, not because they can't find anything better -- which is a byword for higher-income residents), who can stimulate both residential and commercial growth by their presence.

It's a reasonable suggestion. Another alternative is that RI could promote business growth to lure away some MA or CT talent. But whether they want to be a swankier, higher-income commuter suburb or a commercial-heavy exurb they really should do something.

honkhonkpants 17 hours ago 3 replies      
What a blessing of a problem to have a double-track high speed rail system passing through an underutilized station. For comparison to the Bay Area, the distance from Providence to Boston is about the same as the distance from Berkeley to San Jose. That route is also served by Amtrak, but on a neglected single-track, wooden-tie, local-stop service that's scheduled to take 1h33m, but almost always takes longer. Amtrak from Providence to Boston only takes 40 minutes and is generally reliable. Regional transportation in the northeast is so far beyond what we have in the Bay Area.
rchowe 18 hours ago 1 reply      
There are a few issues with getting these train stations built though:

A. The state of Rhode Island currently reimburses the MBTA for all operating expenses south of the RI/MA state line, and they just funded a commuter rail extension south of Providence to attract intra-RI commuters to take transit to Providence instead of driving. However, even with incentives such as free parking, ridership at these stations has pretty drastically missed expectations [1], and the trains are scheduled to take the same time as the bus takes in rush hour traffic. Unlike in Boston or New York, the commuting/parking situation in RI isn't bad enough to make the train obviously beneficial time- or money-wise, given that you lose the flexibility of choosing when you go to and leave work.

B. The site of the proposed train station only has two passenger tracks and is located in a high speed (125 MPH or 150 MPH) zone. Starting service to the station is not as simple as just refinishing it and having trains stop there: Amtrak (which owns the tracks) would probably insist that the state of Rhode Island quad-track through the station so that its trains can pass a stopped commuter train.

C. It's difficult to get transit projects funded near state borders, because of the mindset of "we paid for it and they all go work in the other state!".

Providence is a pretty fast-growing city, so it's possible that in 5 to 10 years the traffic situation makes a much more compelling case for people to make use of transit, but additional commuter rail service there is a kind of hard sell.

[1] http://wpri.com/2015/05/18/south-county-rail-ridership-far-s...

Doctor_Fegg 3 hours ago 0 replies      
$40m for one new station and associated signalling? That's not much less than the cost of reopening the entire 19-mile, 8-station Ebbw Vale line in Wales [1], even though British railway projects are notoriously expensive [2].

[1] https://en.wikipedia.org/wiki/Ebbw_Valley_Railway
[2] http://www.transportblog.com/archives/000492.html

Animats 18 hours ago 0 replies      
They already have the tracks and the trains; they just need a station. This is the easy case. This isn't about building a new line.

With jobs moving back to inner cities, the radial structure of commuter rail works again.

jbpetersen 16 hours ago 1 reply      
One thing I tripped over here is just what is a phototube in this context? https://cdn.theatlantic.com/assets/media/img/posts/2016/08/S... "Phototubes protrude from an abandoned building at the Conant Thread-Coats & Clark Mill Complex, in Pawtucket, Rhode Island. (AP Photo/Steven Senne)"

My own knowledge, Google, and Wikipedia have all failed me here. My best guess is it's old slang for pneumatic transport tubes, but I can't say I've ever seen anything quite like what's shown in the picture.

andys627 9 hours ago 0 replies      
Lots of very intelligent critical analysis gets applied to these and related issues. If only some of it were applied to roads and the encouragement of driving. Roads are showered with money with no thought of consequences. This is not in the slightest hyperbolic. Transit projects have to fight tooth and nail for scraps.
wtbob 14 hours ago 1 reply      
I love rail as a user, but the cost is a bit insane.

> In July, the feds awarded $13.1 million, just shy of the $14.5 million the state was seeking ... The grant application estimates it would serve 519 riders daily, within the range of other Boston-area commuter rail stations. But most riders would be drawn from busy stations nearby, resulting in a net gain of just 89 new passengers.

Surely we could just give $73,600 to each of the 89 people to pay for cab fare, and save the other half of the money?

nickbauman 15 hours ago 0 replies      
This should help a lot. When I worked in the rail automation biz (for a 100B multinational) the internal heuristic for people transport worldwide was two heavy commuter rail lines were the equivalent of a 24-lane highway all parameters being equal (which they never are: there were tons of planning formulae brought to bear when making projections).
coredog64 19 hours ago 1 reply      
If anyone is interested, I believe this [0] is the train station in question on Google Maps. I can't find the mentioned nearby mills though.

[0] https://goo.gl/maps/8UFfq737jgF2

planetmcd 17 hours ago 3 replies      
Yea, it is great for Rhode Island business to make it easy for talented people to commute to another city in another state to help their businesses grow. I bet they are thrilled their taxes fund that.
Noos 18 hours ago 0 replies      
It would probably descend into corruption. Rhode Island's governance is horrible, and I doubt publics works projects will solve that.
guard-of-terra 18 hours ago 1 reply      
It's a city of 100,000 people [1]; how come they don't have a station on a rail line that literally passes through? Of course they need to fix it right away.

[1] By European standards that could as well be a railway hub.

graycat 17 hours ago 1 reply      
Will it pay for itself or need subsidies?

Usually in the US, passenger rail needs significant subsidies. A bit tough to think that subsidies are a good path to "a better economy".

BuckRogers 18 hours ago 2 replies      
A better idea yet: require employers to let every employee who can do so work from home. Commuting is bad for the environment, wasteful of resources, and a source of additional stress (illness/cancer) for commuters.

Shifts the burden from the planet and people to the company as they learn to manage employees remotely. Which is where the balance should be set at.

Return True to Win alf.nu
223 points by moklick  2 days ago   116 comments top 28
fauria 2 days ago 2 replies      
This table is quite useful to solve some problems: https://dorey.github.io/JavaScript-Equality-Table/
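A few of the coercion surprises that table makes visible, runnable in any JS console:

```javascript
// Loose equality (==) converts operands before comparing:
console.log([] == false);        // true: [] -> "" -> 0, and false -> 0
console.log([] == ![]);          // true: ![] is false, then same as above
console.log("0" == false);       // true: both sides coerce to the number 0
console.log(null == undefined);  // true...
console.log(null == 0);          // ...but false: null only loosely equals undefined
console.log(NaN == NaN);         // false: NaN is never equal to anything
// Strict equality (===) skips coercion entirely:
console.log([] === false);       // false
```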
apricot 2 days ago 4 replies      
Find the exact combination of browser/OS that does more than show "return true to win" on a white background to win.
qwertyuiop924 2 days ago 1 reply      
I already saw this on lobste.rs, but it's still cool. I like ES just fine, but to think we could have had Scheme instead. Gee, thanks Netscape:

Every time you win, everybody loses.

nfriedly 2 days ago 0 replies      
It loaded for me eventually. First one is pretty straightforward. After that they seem to require knowledge of JavaScript quirks.
ninjakeyboard 2 days ago 1 reply      
I don't exactly know what I did, but it did say that I win.
idbehold 2 days ago 1 reply      
Let me document all the reasons I dislike the HTML spec: it defines a willful violation of the ECMAScript standard.
sirsuki 2 days ago 0 replies      
TypeError: NetworkError when attempting to fetch resource.

Maybe the challenge is to fix HTTP?

unsignedqword 2 days ago 3 replies      
Does anybody really know why JS's type system was designed the way it was? It seems so out of whack with what people want out of a language, dynamically typed or otherwise.
billpg 1 day ago 2 replies      
I typed "true", without the quotes.

Did I do it wrong?

xyclos 2 days ago 0 replies      
This is a nice challenge to start the morning.

However, on mobile the input loses focus after each character typed, which is quite frustrating.

CGamesPlay 1 day ago 0 replies      
My gosh, this is way harder than the alert(1) series. http://escape.alf.nu
dou4cc 2 days ago 2 replies      
The game shows us how ugly JS is.
nxzero 2 days ago 0 replies      
Reminds me of "The Little Lisper" and coding koans; honestly thought it'd be harder to "win" the challenge.
brudgers 2 days ago 0 replies      
Fri Aug 12 14:35:18 UTC 2016:

Page had message that the Hacker News 'hug' was likely to affect performance.

plank 2 days ago 5 replies      
Took me a couple of tries... (Spoiler): Anyone have anything shorter than 4 characters (two characters)?
curveship 2 days ago 2 replies      
For the first 5, I've got 2,3,8,14,7. Anybody got tighter solutions?
sladix 2 days ago 3 replies      
Does someone have a <52 char solution for the counter one ?
stoic 2 days ago 1 reply      
With games like these I feel like we all lose
agentgt 1 day ago 2 replies      
What is with the User/Score/Browser table? I only did like 4 of the tests and that thing popped up. I would finish it but I have some other things to do.
lalala1995 2 days ago 1 reply      
Please more levels
Zekio 2 days ago 0 replies      
I quite like this kind of challenge :)
conkrete 1 day ago 0 replies      
id(true) // (4 chars)


You win!

grabcocque 2 days ago 0 replies      
'Break JS to win!'
Tepix 2 days ago 2 replies      
It doesn't seem to work for me on Firefox and Chrome..?
okket 2 days ago 2 replies      
This page only shows "return true to win" on a white background in Chrome?
mk7 2 days ago 0 replies      
I am so unhappy, because I can not play such games with Java... ;-)
alexmorenodev 2 days ago 0 replies      
What the hell am I reading.
omaranto 1 day ago 1 reply      
The link points to a page that only contains the text "return true to win" in large type on a white background. As with most modern art, I don't get it.
The Beauty of Roots (2011) ucr.edu
194 points by goldenkey  1 day ago   17 comments top 6
rootdiver 8 hours ago 0 replies      
I have made a python implementation of this fractal if anyone is interested : https://github.com/Alexander-0x80/Beauty-of-roots
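The construction is simple enough to sketch even without the repo: enumerate every polynomial of some degree whose coefficients are all +1 or -1, find the complex roots numerically, and scatter-plot them. A rough JavaScript sketch (the Durand-Kerner root finder, starting points, and iteration count are my own choices, not taken from the linked implementation):

```javascript
// Complex numbers represented as [re, im] pairs.
const cadd = (a, b) => [a[0] + b[0], a[1] + b[1]];
const csub = (a, b) => [a[0] - b[0], a[1] - b[1]];
const cmul = (a, b) => [a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0]];
const cdiv = (a, b) => {
  const d = b[0] * b[0] + b[1] * b[1];
  return [(a[0] * b[0] + a[1] * b[1]) / d, (a[1] * b[0] - a[0] * b[1]) / d];
};

// Evaluate a polynomial (coefficients highest-degree first) at complex z.
function polyEval(coeffs, z) {
  let r = [0, 0];
  for (const c of coeffs) r = cadd(cmul(r, z), [c, 0]);
  return r;
}

// All roots at once via Durand-Kerner iteration.
function roots(coeffs, iters = 100) {
  const lead = coeffs[0];
  const monic = coeffs.map(c => c / lead);
  const n = monic.length - 1;
  // Spread the initial guesses around a circle, offset to break symmetry.
  const zs = [];
  for (let k = 0; k < n; k++) {
    const t = (2 * Math.PI * k) / n + 0.4;
    zs.push([0.9 * Math.cos(t), 0.9 * Math.sin(t)]);
  }
  for (let it = 0; it < iters; it++) {
    for (let i = 0; i < n; i++) {
      let denom = [1, 0];
      for (let j = 0; j < n; j++) {
        if (j !== i) denom = cmul(denom, csub(zs[i], zs[j]));
      }
      zs[i] = csub(zs[i], cdiv(polyEval(monic, zs[i]), denom));
    }
  }
  return zs;
}

// Roots of every degree-d polynomial with coefficients +/-1.
// Fixing the leading coefficient to +1 is harmless: p and -p share roots.
function littlewoodRoots(degree) {
  const pts = [];
  for (let mask = 0; mask < (1 << degree); mask++) {
    const coeffs = [1];
    for (let b = 0; b < degree; b++) coeffs.push((mask >> b) & 1 ? 1 : -1);
    for (const z of roots(coeffs)) pts.push(z);
  }
  return pts; // feed these [re, im] points to any scatter plot
}
```

Even a modest degree gives the fractal's shape: littlewoodRoots(12) produces 2^12 * 12 = 49,152 points.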
peter-slovak 17 hours ago 0 replies      
The title got my hopes up for some biology stuff, but this was so much better.
Yhippa 1 day ago 2 replies      
Someone smarter than I will likely know this but are probability and functions nature's compression techniques?
wyager 19 hours ago 0 replies      
Interesting to see Greg Egan's name come up! I am a huge fan of his books. I guess I shouldn't be surprised to see him involved in the math/physics research community.

Egan's stories are often of the "one big lie" variety. He makes up some fact (e.g. fundamental particles are composed of 12-dimensional wormholes, we live in an uncountably infinite multiverse that can be traversed, you can build a machine that combines a space and time dimension, etc.) and then follows the made-up fact to some fascinating conclusion. He is clearly very intelligent and has substantial background in many scientific fields, which makes his sci-fi books quite mind-bending.

lugus35 1 day ago 2 replies      
I just see a donut.
ttflee 1 day ago 3 replies      
The title is not very accurate. The article mostly talks about roots of polynomials in the field of complex numbers and shows some beautiful fractal images derived from the roots.
       cached 14 August 2016 15:11:01 GMT