hacker news with inline top comments · 28 Nov 2014 · Best
Gates Foundation to require immediate free access for journal articles
1075 points by philip1209  3 days ago   147 comments top 28
jonknee 3 days ago 5 replies      
Bravo. It would be really neat if the US Government could get on the same bandwagon. Our tax dollars being used to fund research that we can't access is insane.
philip1209 3 days ago 3 replies      
> "The Gates Foundation will also pay the author fees charged by many open-access journals."
bluehex 2 days ago 5 replies      
This got me wondering: What am I missing by not being in the habit of reading research papers?

I remember being in college and finding nearly every paper I wanted to reference was behind a paywall, to the point that I lost all interest in even trying to gain access. I'm sure my own laziness plays a role, but I feel like the restricted access trained me to think research papers are for academics and scientists and it's not worth the effort to try reading them.

Now I'm a professional Software Engineer and I can count the number of research papers I've read on my hands. I wonder how much better off the next generation of knowledge workers will be as access becomes more open.

noonespecial 3 days ago 0 replies      
It's not a perfect solution but it sure does head in the right direction. If the perception can be changed so that "serious" science with big donors is always published openly and only rinky-dink "school" science is published behind paywalls because it's just "publish or perish" schlock it will be a huge win.
kenshaw 3 days ago 3 replies      
Is it not possible to create a separate foundation / organization that manages peer-reviewed, open-access journals for multiple scientific fields? I realize the costs involved in asking academics to review submissions, etc., but couldn't a small $10-$15 million grant for an organization be enough to kickstart an open journal consortium? Even if they did require small review fees from submissions ($25-$50), I feel that with proper management, and digital distribution, that a project like this could 'disrupt' conventional journals.
Steko 3 days ago 2 replies      
Unrelated design gripe...

Someone paid someone else to make their website look like this:


A disturbing trend that seems to be increasing.

nopinsight 2 days ago 0 replies      
Since the prestige of journals in many fields is significantly affected by the names on their editorial boards, what if the Gates Foundation earmarked funds to lobby for, and even pay significant consulting fees to, top editors to move to open-access journals operated by PLoS or other non-profits?

(A highly successful example in the machine learning field is detailed in a comment by exgrv here.)

The foundation could also purchase a few smaller publishers which own good journals in a number of fields, especially fields in which immediate access is important to human well-being. Then, turn all those journals into open access with no or minimal author fees.

Bottom line: These strategies together will create big incentives for researchers to flock towards those journals since immediate and open access is a boon to citation counts and impact factor. Other journals will feel the heat and need to compete (like by reducing prices or time to open access) to gain back their market share.

If the foundation spends enough effort, it could also out-lobby Congress and/or funding agencies to change their policies on open access, as Elsevier and other publishers are significantly smaller than the Gates Foundation, and public perception among those with any opinion on this definitely sides with open-access policy.

markbao 3 days ago 3 replies      
This is really great, but to play devil's advocate, does this hurt the authors? That is, does this mean that they can't publish in Cell or another reputable journal, and would this discourage them from taking funding from the Gates Foundation?

As an article in The Winnower said, science is not disinterested and there are egos involved. "We got published in Cell" or "Our paper is in Science" still carries prestige.

leni536 2 days ago 1 reply      
And the underlying data must be freely available.

This is the best part. It's not even funny how often I have to extract data points from the plots in articles if I want to use them. Even from recent ones. Most often the raw measurement data is not available at all, not even in the form of plots.

tim333 3 days ago 0 replies      
Good on them. Maybe the various government funding agencies could do likewise?
return0 2 days ago 0 replies      
The question is what volume of papers is published with the foundation's funding, and whether it is enough impetus to cause a shift of mentality across academia.

Also, what will be the cost of this endeavor? It will certainly help to lower the price for open publishing among journals.

Hopefully this will call into question government funding agencies' hesitance to stop subsidizing ancient publishing conglomerates.

ujjwalg 3 days ago 1 reply      
I love what they are trying to do. The entire scientific publishing industry is messed up and can use any help it can get. I wrote a blog post sometime back about how terrible the scientific publishing system is.


carljoseph 3 days ago 1 reply      
Whilst I feel this is a step in the right direction, some have already argued[0] that it goes too far and sets up the wrong incentives.

[0] http://www.digitopoly.org/2014/11/24/the-gates-foundations-o...

htmcer 2 days ago 0 replies      
It is also interesting to note that Bill Gates is an investor in http://researchgate.net/, whose mission is to connect researchers and make it easy for them to share and access scientific output, knowledge, and expertise.
jeangenie 3 days ago 0 replies      
If most articles are private (due to added cost) but all GF articles are public then will there be any practical consequence?
droithomme 2 days ago 1 reply      
> The policy doesn't kick in until January 2017

Why not kick in several years earlier - like... now.

vixen99 2 days ago 1 reply      
Oh Bill Gates, I love you! This is a wonderful step forward and maybe the beginning of the end for avaricious publishers. As a taxpayer I'm asked to pay 20 to access a single article written in 1960 on work financed and reviewed courtesy of taxpayers. This is madness.
rotskoff 3 days ago 0 replies      
I'm a serious advocate of open access, but it should be noted that money which is otherwise ear-marked for research is now being poured into the publishing industry in the form of author fees. The last article that I published cost nearly $2000.
indymike 2 days ago 0 replies      
It's a little scary to see journals that are trusted to vet the papers they publish charging the authors substantial fees.
zkhalique 3 days ago 1 reply      
This is really cool. I wouldn't call 2017 "immediate" but you know :)
nemoniac 3 days ago 0 replies      
The best thing you've ever done, Bill! Thank you.
mattxxx 2 days ago 0 replies      
This is beautiful.
sre_ops 3 days ago 1 reply      
Oh this is going to be fantastic. I cannot wait until the public see what kind of junk is produced by these "researchers" in their original form.
quadrangle 3 days ago 0 replies      
transfire 3 days ago 0 replies      
DanKlinton 3 days ago 1 reply      
Looks like Bill really likes open source :)
hotgoldminer 3 days ago 2 replies      
Fence sitting. The journals need a way to cover operating expenses. On the other hand, if a more efficient and cost-effective model exists, this is good disruption. Nothing's free tho, right?
bennyg 3 days ago 4 replies      
Why don't they just pay for all of the major science publications to have free access for everyone - not just requiring it of the authors they subsidize?
God's Lonely Programmer
875 points by eli  2 days ago   317 comments top 43
GuiA 2 days ago 7 replies      
I'm a huge fan of Terry's work, because he works hard, has a clear vision for what he wants to do, and makes it happen. That's more than can be said of many self-professed hackers who never see a project to completion or are motivated solely by peer recognition.

You may disagree with the logical coherency of his goals - I for one think the "temple for God" thing is pure kookery - but he is a master craftsman, and in this context it's all that really matters. There are people who devote their entire lives to pointless things. There are people who work on supposedly pointless things, which later turn out to not be so pointless. There are people who are supposedly working on world-changing things, which really are completely pointless. In the end, you can't predict the future. Just do honest work that you feel is worthwhile, and see what happens.

(if you want to play the hypothetical game, who's to say that Terry Davis won't develop some extremely efficient low level algorithms/techniques never done before that will dramatically impact computing?)

Every time his story comes up, there are people who respond with answers along the lines of "what can we do for him? can we find him assisted employment? can we raise money for him?" etc. While these comments are surely well intentioned, they end up being mostly condescending and out of place, as if talking about a completely helpless being. But Terry strikes me as anything but helpless - based on the various interviews he's given and comments he's posted, he seems to be quite content with his situation, and that "assisted employment" is the last thing he'd want or need. If anything, this is just a further argument for reforms like basic income and better treatment of mental conditions (e.g. with better early detection).

There are also always comments expressing surprise at how one can suffer from such a mental condition and yet do complex intellectual work like low level programming. These questions are based on the premise that schizophrenia (or a similar condition) impacts your brain in such a way that would make logical reasoning impossible. But that's not how it operates - logical reasoning is largely unaffected. Like `daveloyall says in another comment in this thread [0], what's affected are a few key "first principles". For instance, if you're persuaded that you're being constantly tracked by the CIA, removing all your clothes or dismantling your car to make sure you aren't bugged are very logical, reasonable things to do. The problem is that when you're operating under premises that are shared with no one else, the resulting dissonance makes integration with the rest of society problematic. But with this in mind, it's easy to see why one could be schizophrenic and yet still be able to program or do math or weave baskets, especially if the skills were mastered before the condition developed.

Terry is a maker of miniatures [1], and I'm happy we have such people walking among us.

[0]: https://news.ycombinator.com/item?id=8658958

[1]: http://www.newyorker.com/magazine/2006/04/10/in-the-reign-of...

fuligo 2 days ago 3 replies      
I wonder if he would get this much positive attention if he wasn't a Christian extremist. Would the article have conveniently left out his racism if he was, say, a militant antisemite instead of being focused on black people as he is? Would people admire his worship of a random number generator if he used it to spit out slogans of a religious text that isn't as revered as the Bible?

While his schizophrenia probably comes with very bad episodes, there is absolutely nothing in his own words allowing for the conclusion that this view is just a Tourette-like symptom. On the contrary. His violence-laden hate speech is one of the constant factors defining him. Contrary to what has been suggested here, he doesn't use slurs randomly and generically, either. There's a story behind his views that is just as coherent as the cute "god's own programmer" schtick.

Where his illness clearly manifests is the interpretation of random events all having a specific meaning. The radio talking to him, all the events in his life being just so that an invisible power is communicating with him, all the way down to literally a random number generator whose nature he cannot grasp. That's schizophrenia. If there was an adequate cure or means of suppressing it, this world view would completely go away.

It's more complicated to separate the man from the illness when it comes to pretty much anything else. At the very least I would be extremely hesitant to call him, as people do in this thread, an "inspiration". I'm not even sure it's safe to be in the same room with him.

Closing with a quote straight from his most recent account:

  "I spend my days clubbing retard-n$ggers. CLUB! CLUB! DIE N$GGER! CLUB! RETARD! N$GGER! DIE!! CLUB! N$GGER!"

HN's majority opinion of this guy makes me more uncomfortable than his comments by themselves. I don't get how we can label this guy as being "high functioning" and at the same time sweep 99% of everything he ever says under the rug. It's either/or.

_exec 2 days ago 3 replies      
I stumbled upon Terry's work a long time ago... and suffice it to say he's been a major inspiration. It's a shame the majority of netizens (yes, HN and Reddit included) can't look beyond his eccentricity and mental health issues (then again, it takes some digging and google-fu to find out the full story / context behind the man... his posts don't exactly come with a disclaimer).

Going down the rabbit hole of Googling, Redditing, finding out more about him and his story (a "one-man novel, modern x64, almost-but-not-quite-entirely-unlike-retro OS" type of thing is catnip to my synapses), I've also come to understand a friend's schizophrenic relative a bit better. I've read accounts of schizophrenia and art intersecting, but did not understand what it is, what it's like, until I witnessed schizophrenia intersecting with IT, at which point the gates of empathy, admiration and fascination were flung wide open.

[/r/programming's 637 comments thread from 2010] http://www.reddit.com/r/programming/comments/e5d8e/demo_vide...

[another /r/programming thread] http://www.reddit.com/r/programming/comments/lhefd/losethos_...

His username on HN is / has been some variation on "TempleOS", "LoseThos", "SparrowOS", "TempleOSv2", etc. AFAIK all hellbanned due to un-PC comments posted in his, for lack of better phrasing, "Moments of Un-Clarity".

The irony (perhaps using the wrong word) of being hellbanned when the story of your life's work... your magnum opus (especially in this case, an objective article that places it in context and with some background)... is featured on HN's front page makes me sad.

I consider his work to be a prime example of "Outsider Art" (http://en.wikipedia.org/wiki/Outsider_art) in our field.

I wish to collaborate with him one day.

Edit: Formatting, more caffeine.

noname123 2 days ago 0 replies      
Although the guy lives arguably outside the fringe of society, I cannot question his commitment to his project.

We tend to celebrate the successes like Elon Musk and Mark Zuckerberg and then quietly go back to our daily lives, and compromise our creativity for family, wealth accumulation and professional advancement. The opinionated amongst us will debate which of their favorite values is the "right way" (if LoseThos got help and settled down with his family, a partner or friends; if LoseThos found a job at Red Hat and channeled his energy towards Linux kernel development...).

But choices and our commitment to these choices are what define us. TempleOS in this regard seems more like a work of art in the medium of code. Of course, the merit of that art is up to the beholder (and tbh, I don't think I'd have appreciated Van Gogh a priori). "The difference between genius and insanity is measured only by success," but is the guy who doesn't succeed in the conventional sense, yet finds his own meaning and keeps going, insane?

jrapdx3 2 days ago 1 reply      
Having treated thousands of individuals with schizophrenia, bipolar and other similar severe, chronic disorders I found the article to be fascinating. It succeeded in portraying the "mindset" of the creator of the OS and giving a glimpse into his experience. It made me believe Mr. Davis really did share his deep convictions just as he described.

I've always thought having a mental disorder did not mean a person did not also have talents, just like the rest of us. "I may be crazy, but I'm not stupid" is a phrase I heard many patients say. And of course they were right.

In reading the comments posted here, one thing may have been missed. TempleOS, for all its eccentricity, is also very beautiful and in its own way remarkably coherent. Mr. Davis demonstrates talent not only as a programmer, but also as an artist, points that should not be overlooked.

Ultimately, the OS is a freely-given contribution to our world and adds to the important body of work created by people with psychiatric disorders. I think it should be appreciated in that context.

dunstad 2 days ago 2 replies      
I find this guy's moments of clarity amazing. I'm not very familiar with mental illnesses in any form, but it surprises me how clearly he understands how the rest of the world views him. It makes me think about what the world might be like if we reverse the situation, and the article were about the only man who can't hear the voice of God.
andywood 2 days ago 2 replies      
To anyone brandishing the word 'racist' here: can you point us to any instance where what Terry wrote was not merely a rambling screed with the n-word in it, but actually racist thought? Because to me, those are different things - the former likely an excusable symptom, and the latter, y'know, actual racism.

Maybe you think it's the same. Maybe you're enforcing your zero-tolerance policy. If so, we disagree. To me it looks like a huge assumption, possibly more offensive than Terry's rants.

blt 2 days ago 2 replies      
I really like the "modern, souped-up, multi-tasking, cross between DOS and a Commodore 64" vision for TempleOS. The OS also has some innovative design choices like using the C compiler as the shell and compiling most programs on the fly. It reminds me of Lisp Machines in a way. A different vision of what computing could be like if closed-source binary-distributed software didn't exist.

I wish the source code (http://www.templeos.org/Wb/Accts/TS/Wb2/LineRep.html) were easier to read though. It's very dense with a lot of abbreviated variable names and no comments. It's the main reason why I haven't installed TempleOS yet, because of course I'd want to hack on it, and the code seems tough to learn.

seccess 2 days ago 1 reply      
Reading this, I'm reminded of Kary Mullis, inventor of PCR (polymerase chain reaction). PCR is widely considered to be one of the most important discoveries of modern biology, ushering in a new era of scientific research after he invented it in the 1980s. He won a Nobel Prize for it.

Kary was a bit off though (in this case due to intense drug usage, I think). Afterwards, he went on to deny that HIV is the cause of AIDS, tried to discredit global warming, and began believing in astrology and magic.

In the end, it's possible that a little bit of insanity is a good thing; that it can help someone make a profound discovery that no one else could see. Too much is obviously detrimental, however.

mblevin 2 days ago 2 replies      
Throughout the span of modern history, we've had gifted creators whose talent is at a minimum intertwined with, if not completely predicated on, their mental illness. Whether it was Van Gogh or Kurt Cobain, we've seen it consistently.

Terry is potentially one of the first if not the only creators of this type using a completely digital medium.

TempleOS is essentially a 10-year art project.

TeMPOraL 2 days ago 0 replies      
I really liked the article. It was very respectful towards Terry and showed a deeper understanding of his life and the issues he faced. It was explaining, not judging. I now have a much more complete picture of the man I know only from shadowbanned comments here on HN.
yzzxy 2 days ago 3 replies      
I wonder if there is something the programming community could do to help him. I'm not sure what this could entail, but I'd hope there are people who would be willing to talk with him about his interests or attempt to set up some kind of assisted employment situation - he is by all accounts a very talented engineer.
m0th87 2 days ago 0 replies      
What fascinates me about Terry is that his schizophrenia doesn't somehow prevent his ability to make something as complex as an entire operating system.

Maybe this suggests that while schizophrenics look discordant to the rest of us, there's an internal consistency that very much makes sense.

Gracana 2 days ago 3 replies      
> As an undergrad he'd been hired at Ticketmaster to program operating systems.

Ticketmaster, huh! Does anyone know anything about what they were working on?

anon4 2 days ago 0 replies      
Interesting that he says he talks to God through randomness. I heard an idea along these lines not long ago - the measure of entropy in the universe is constantly increasing. But entropy is also a measure of the amount of information stored in the system (if you take a file with low entropy, it will compress to a smaller file with higher entropy). So the weird world of quantum effects is basically caused by (or is the equivalent of) information being constantly poured in the universe. In a sense, God speaks new information and quantum effects happen.
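The compression claim above is easy to check empirically. A quick illustration (my own aside, not from the article) using Python's zlib: a highly repetitive file squeezes down to almost nothing, while pseudo-random bytes, already near-maximal entropy, barely compress at all.

```python
import os
import zlib

# Low-entropy input: 10,000 bytes of pure repetition.
low_entropy = b"ab" * 5000
# High-entropy input: 10,000 pseudo-random bytes.
high_entropy = os.urandom(10000)

compressed_low = zlib.compress(low_entropy)
compressed_high = zlib.compress(high_entropy)

print(len(compressed_low))   # tiny: the repetition carries almost no information
print(len(compressed_high))  # roughly 10,000: random data resists compression
```

The compressed version of the repetitive file is the "smaller file with higher entropy" described above: the same information packed into far fewer bytes.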
jpgvm 2 days ago 1 reply      
Sad that the debate got almost entirely derailed by people labeling Terry a racist.

The poor man probably doesn't even realise he is offending people's fragile sensibilities.

I think people need to cut him some slack when it comes to language. They are just words, there is no intent.

Torn 2 days ago 0 replies      
I remember the somethingawful.com thread where someone tried to install his operating system. It got real crazy real fast.

There was a HN thread here discussing it, and the top comments were people rightfully suggesting the guy needs serious mental help, not derision. The suggestion he had schizophrenia was thrown around a lot.

Following the rabbit hole from 2012, it seems he was banned from SA and shadowbanned here on HN, but his comments are visible if you turn on 'showdead' in your profile options: http://www.metafilter.com/119424/An-Operating-System-for-Son...

zafka 2 days ago 1 reply      
I really enjoyed this window into the life of "LoseThos". I am glad that he seems to be both relatively safe and happy. I think in the long run the value of his life's work will be comparable to or greater than the worth of most of his peers'.
icedchai 2 days ago 0 replies      
I ran into Losethos / TempleOS many years ago and installed it on a VM. I found it interesting. Something different that wasn't another spin on Unix.

Terry is very clever, creative, and very opinionated. (Sorry, an OS needs basic networking if people are going to use it.)

He has a mental illness... but, without it, would he have that creativity and focus to make something like his OS? I doubt it.

Do I agree with the outrageous things he says on forums? Absolutely not. But I can respect him and his creation.

euphemize 2 days ago 1 reply      
From the TempleOS: AfterEgypt video[1]:

"Now you're supposed to do an offering before you talk to God. [...] just get conversation, maybe being witty and charming - or praise - like 'praise god for sandcastles and snowmen and bubbles and popcorn'...hum, I'm gonna praise him for something new, which is isotopes! Like you have Carbon-14 and stuff like that. I think that's pretty cool. Didn't have to be that way, maybe...Ok here we go!"


[1] https://www.youtube.com/watch?v=RzhRYGm_b9A

firebones 1 day ago 0 replies      
We have the ability to filter. Due to his condition, he does not. We should embrace the good and productive that comes from his work, without judgment, and use our filters to deflect and ignore the bad.

(Note: this heuristic applies to almost any human, regardless of recognized DSM status. We'd be well-served to save our breath and embrace this outlook.)

Delmania 2 days ago 5 replies      
Terry is technically brilliant, but I just don't understand TempleOS. I can't tell if he's serious, or if it's a tongue-in-cheek concept.
jason_slack 2 days ago 1 reply      
Terry really is a gifted programmer.
kaffeemitsahne 2 days ago 0 replies      
Driving south with no clear destination, he says, "I was listening to the radio and it seemed like the radio was talking to me."

I had delusions of reference once, it's such a strange feeling. Not necessarily scary.

buovjaga 1 day ago 0 replies      
An article about TempleOS appeared in the Finnish computer magazine Skrolli earlier this year: http://www.skrolli.fi/2014.2.crt.pdf (page 56).
pjbrunet 2 days ago 0 replies      
Looks like "folk art" to me.

Like dropping an IBM PC on Gilligan's Island ;-)

zaius 2 days ago 0 replies      
If you'd like to see his comments on HN, enable showdead in your profile and then go here - https://news.ycombinator.com/threads?id=TempleOS
esaym 1 day ago 1 reply      
I have never heard of this guy. His work and commentary are totally awesome! I have heard more original things and phrases from him in 30 minutes than in the last 2 years!
trit 2 days ago 0 replies      
What's he up to now? Is he pushing out any new projects? I read that he was making miniatures, but haven't had a chance to read the article.
minusSeven 1 day ago 0 replies      
Meh, can't read it. The article won't display on Firefox. On IE the formatting isn't correct and half the article is unreadable. Can anyone post the contents of the article?
placebo 2 days ago 0 replies      
> If he won the lottery three times, he asks, would she believe?

I would... :)

danielweber 2 days ago 1 reply      
Something about that webpage is making my computer slow to a crawl.
cyphunk 2 days ago 0 replies      
pluma 2 days ago 1 reply      
"Weird guy makes an OS because God told him so" sounds less hilarious if you're aware of why the Perl programming language exists.
davidgerard 2 days ago 0 replies      
Still a better love story than Urbit.
lexcorvus 2 days ago 0 replies      
I'm reminded of this quote from Joseph Campbell: "The schizophrenic is drowning in the same waters in which the mystic swims with delight." Terry Davis seems to be treading water, stuck somewhere between revelation and oblivion.
MarkPNeyer 2 days ago 1 reply      
i have had a number of experiences very similar to what terry went through. connecting computing to religion, seeing 'larger patterns' that weren't there, believing in conspiracies directed at me... it was rough.

i was lucky enough to have a few people really step up and support me when i was at the edge. it's weird reading this and feeling like i know exactly what he means when he says this stuff that sounds crazy.

here's a description of one experience i had in april 2011, a day before joining uber:


this is something i submitted later to hn, while cto of a gaming company:


i made it out of all that, though. thanks almost entirely to daily love and support from my now wife. if you have friends in a position like this - you CAN help them, it's just a crazy amount of work.

jqm 2 days ago 0 replies      
I have a theory that sometimes mental illness is a manifestation of a naturally evolved attribute gone too far. Sort of a "cancer of thinking", if you will.

In this case I would say it is the ability to perceive and relate patterns. A powerful attribute of our species and one of the defining features of intelligence, I believe. But then it gets out of control and the pattern matching starts matching things that don't actually match.

Only a wild theory. I have no reference nor training in the subject. Just trying to match patterns:)

api 2 days ago 0 replies      
I consider this a pretty fascinating piece of art. It's like an artcar, but it's an artOS!

It reminds me of Salvation Mountain in Southern California, but with preemptive multitasking. :)


redthrowaway 2 days ago 3 replies      
Could we get TempleOS un-shadowbanned for this thread?
msie 2 days ago 0 replies      
God, I'm a lonely programmer.
MrBra 2 days ago 1 reply      
so, how 'bout that :
lmm 2 days ago 4 replies      
The lionization of this guy reminds me of the reporting of the Charge of the Light Brigade. All those noble British soldiers, following orders even unto death, what courage, what majesty... when what should really have been said was: what a bunch of bloody idiots, who got themselves killed for no reason. And so the same foolishness repeated throughout the first world war.

So let me say it: this guy, whether through his own fault or because of his problems, is acting like a dumbass. He's pouring a lot of talent into a pointless project that no-one will ever use or benefit from. He is not someone to look up to or admire, and certainly not to emulate. Do not romanticise his foolishness.

Youtube-dl
584 points by pmoriarty  4 days ago   169 comments top 37
rg3 4 days ago 10 replies      
It's very nice to see a project I started reach the front page of HN.

I remember starting the project around 2006. Back then, I had a dial-up connection and it wasn't easy for me to watch a video I liked a second time. It took ages. There were Greasemonkey scripts for Firefox that weren't working when I tried them, so I decided to start a new project in Python, using the standard urllib2. I made it command line because I thought it was a better approach for batch downloads and I had no experience writing GUI applications (and I still don't have much).

The first version was a pretty simple script that read the webpages and extracted the video URL from them. No objects or functions, just the straight work. I adapted the code for a few other websites and started adding some more features, giving birth to metacafe-dl and other projects.
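A script along those lines - fetch the page, pull the video URL out with a regex - might have looked roughly like this. This is purely an illustrative sketch, not the original code: the regex, URL pattern, and function names are invented, and the 2006 original used Python 2's urllib2 rather than urllib.request.

```python
import re
import urllib.request

def extract_video_url(html):
    """Pull the first video-file URL out of a page's HTML.

    The pattern here is a stand-in: each real site needed its own
    site-specific regex, which is how per-site extractors accumulated.
    """
    match = re.search(r'["\'](https?://[^"\']+\.(?:flv|mp4))["\']', html)
    return match.group(1) if match else None

def download(page_url, out_path):
    # Fetch the watch page, find the media URL, and save the file.
    html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
    video_url = extract_video_url(html)
    if video_url is None:
        raise ValueError("no video URL found on page")
    urllib.request.urlretrieve(video_url, out_path)

# Example on a canned snippet (no network needed):
sample = '<embed src="http://example.com/videos/clip123.flv">'
print(extract_video_url(sample))  # http://example.com/videos/clip123.flv
```

"No objects or functions, just the straight work" means even this much structure is more than the first version had.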

The rise in popularity came in 2008, when Joe Barr (RIP) wrote an article about it for Linux.com.[1] It suddenly became much more popular and people started to request more features and support for many more sites.

So in 2008 the program was rewritten from scratch with support for multiple video sites in mind, using a simple design (with some defects that I regret, but hey, it works anyway!) that more or less survives until now. Naturally, I didn't change the name of the program. It would lose the bit of popularity it had. I should have named it something else from the start, but I didn't expect it to be so popular. One of these days we're going to be sued for trademark infringement.

In 2011 I stepped down as the maintainer due to lack of time, and the project has since been maintained by the amazing youtube-dl team, which I always take the opportunity to thank for their great work.[2] The way I did this was simply by giving push access to my repository on GitHub. It's the best thing I did for the project, bar none. Philipp Hagemeister[3] has been the head of the maintainers since then, but the second contributor, for example, was Filippo Valsorda[4], of Heartbleed tester[5] fame and now working for Cloudflare.

[1] http://archive09.linux.com/articles/114161
[2] http://rg3.name/201408141628.html
[3] https://github.com/phihag
[4] https://github.com/filosottile
[5] https://filippo.io/Heartbleed/

anonova 4 days ago 3 replies      
Despite its name, youtube-dl doesn't just download from YouTube but from a ton of different sites as well [1]. The rate at which this project keeps up with changes is incredible.

[1]: https://github.com/rg3/youtube-dl/tree/master/youtube_dl/ext...

renekooi 4 days ago 6 replies      
YT videos lag a lot when I stream them directly in VLC 2.2.0 (`vlc <youtube link>`), and some protected videos don't play at all, so I often use this:

    ytplay() { youtube-dl "$1" -o - | vlc -; }
As a side benefit, it of course also allows you to instantly watch stuff from all the other sites YT-DL supports :)

d0ugie 4 days ago 3 replies      
Quick protip for those wondering, the simple command to download an entire youtube channel is like so:

$ youtube-dl -citw ytuser:LastWeekTonight

I downloaded a channel with 121 videos, 4.4 gigs, took 26 minutes, so 2.8MB/s average. Curious if the Youtube people will shrug it off and free the beer or rate limit or more aggressively combat this.

Also, to get the total number of supported sites:

$ youtube-dl --extractor-descriptions|wc -l

466 (wow)

As this can run on anything with Python, I guess that includes Android[0], iOS[1], Windows Phone[2], heck even Blackberry[3]??

[0] https://python-for-android.readthedocs.org/en/latest/

[1] https://code.msdn.microsoft.com/windowsapps/using-python-on-...

[2] http://pythonforios.com/

[3] http://forums.crackberry.com/blackberry-z10-f254/blackberry-...

Thanks pmoriarty for submitting this. Awesome and I'm just getting started poking around with it. Makes me really want to learn Python, seems that's what all the fun stuff[4] is coded in.

[4] http://motoma.io/pyloris/ :)

nklas 4 days ago 4 replies      
A very nice utility to have installed.

It can also convert a video to an mp3:

youtube-dl --extract-audio --audio-format mp3 https://www.youtube.com/watch?v=OKbtC223e30

dirkk0 4 days ago 0 replies      
This may sound surprising, but via youtube-dl I bought more music than before. If I find a new band that I might like, I search for YouTube videos first. The non-official videos often show just the cover of the CD or some useless slide show, so I extract the audio to have it in my playlist. Once I've decided that I like the music, I head over to Bandcamp or Amazon to buy the mp3s. As an example: I lately bought four digital CDs from prog-metal act Redemption because someone upped their CD 'This Mortal Coil' to YouTube.
saurabh 4 days ago 0 replies      
This is the best video downloader. It has downloaded videos from every site I've thrown at it. It even downloaded videos from Comedy Central!

Edit: https://rg3.github.io/youtube-dl/supportedsites.html

escherize 4 days ago 0 replies      
I use this all the time and it's really great.

I had messed around with one sketchy YouTube downloader or another for years until I found this.

Oh, and it's on brew!

JoshTriplett 4 days ago 1 reply      
I've used this quite extensively. It's less critical for YouTube now that almost all YouTube videos work with the HTML5 player, but it helped quite a bit when every other video required Flash. Still necessary for many third-party sites as well.

It'll also download an entire playlist, and add sequential numbers at the beginning (with the -A option).

zuck9 4 days ago 2 replies      
No one pointed out that it has a do-whatever-you-want license. That's what bothers me most: there are gazillions of shitty YouTube downloaders (paid, free, and adware-supported) out there that people still use, and they're powered by the work of open source developers.
xuhu 4 days ago 1 reply      
I use this to replace noisy audio on a smartphone recording of dancing lessons with a high-quality version from a youtube video, automatically: http://youtu.be/AVIHpaNQLS0
quoiquoi 3 days ago 0 replies      
There's also a GUI wrapper for it: https://github.com/MrS0m30n3/youtube-dl-gui/

And a php web app that uses youtube-dl as a backend: https://github.com/Rudloff/alltube/

rb2k_ 4 days ago 0 replies      
The amount of time it takes to keep up with all of the changes big sites make is impressive.

At some point I decided to write something similar in Ruby ( https://github.com/rb2k/viddl-rb ) and I'm kind of ashamed of how broken things are from time to time.

Video hosting sites don't have APIs and reverse engineering the sources for the videos is like shooting at a moving target.

So kudos for leading that project :)

noisy_boy 4 days ago 1 reply      
I've set up this function in my bashrc to check for the latest build if the current one is more than 24 hours old, and then run it with any arguments supplied:

    function youtube-dl {
        exe=$HOME/bin/youtube-dl
        link=https://rg3.github.io/youtube-dl/download.html
        url=$(curl -s $link |grep ">sig<" |head -1|sed -e 's/href="/|/g' -e 's/">/|/g'|cut -d"|" -f2)
        fetch=N
        if [ -s "$exe" ] ; then
            ts=$(date "+%s")
            yts=$(stat -c "%Y" $exe)
            [ $(( ($ts-$yts)/(60*60) )) -gt 24 ] && fetch=Y
        else
            fetch=Y
        fi
        if [ "$fetch" = "Y" ] ; then
            url=$(curl -s $url |grep ">sig<" |head -1|sed -e 's/href="/|/g' -e 's/">/|/g'|cut -d"|" -f2)
            echo "Fetching [$url] and deploying to [$exe]"
            curl -s $url -o $exe
            chmod a+x $exe
        fi
        [ -z "$@" ] && $exe --help || $exe $@
    }

derekp7 4 days ago 1 reply      
Is there a plugin version of this program, which would dynamically change any (supported) flash video reference to an HTML5 video tag? That way I can get rid of flash completely.
pavs 4 days ago 1 reply      
youtube-dl gets updated very frequently, and the version that comes with your distribution (e.g. Ubuntu) is usually out of date and often doesn't work on many sites. So it's better to download it from the project site and update it using "youtube-dl -U".
unicornporn 4 days ago 0 replies      
OK. I just realized this does something really cool. I've been troubled with 1080p videos as they no longer contain audio. The audio and video are separate streams, and YT uses DASH to join them.

youtube-dl seems to solve this: https://github.com/rg3/youtube-dl/issues/2165

shmerl 4 days ago 2 replies      
I just use quvi for that.


    quvi -vm --format $format "$url" --exec 'wget %u -O %t.%e'
Instead of $format, put any format that the video supports. To query them, use:

    quvi --query-formats "$url"
So it's going to be something like:

    quvi -vm --format fmt43_360p "$url" --exec 'wget %u -O %t.%e'
And to extract audio from the result you can use:

    avconv -i something.webm -vn -acodec copy something.ogg
YouTube however is switching away from fixed video files to separate streams to be used with MSE. Note that higher-resolution video is not available the old way, so downloading it won't be so straightforward.

doh 4 days ago 1 reply      
If you don't want to install anything, just use http://savedeo.com for almost any video, or http://auderio.com for music from YouTube.
ansgri 4 days ago 1 reply      
How does it compare to get_flash_videos, which is available in Ubuntu repository? https://code.google.com/p/get-flash-videos/
chris_wot 4 days ago 1 reply      
Is there a way of bypassing geoblocking via youtube-dl directly?
HaseebR7 4 days ago 1 reply      
I've used this one for so many things and it's awesome.

I've downloaded a whole youtube channel, downloaded videos as mp3's and what not.

I have couple of aliases set in my .zshrc too :)

mp3dl() { youtube-dl --extract-audio --audio-format mp3 "$1"; }

root@haseebr7 ~ mp3dl <youtube_video_url>

and bam, I have the mp3 downloaded. I no longer have to visit shitty ad-infested websites to do these kinds of things.

hit8run 4 days ago 0 replies      
Till now I didn't know that other sites are supported :D Wow, really impressive site support! Thanks for maintaining this nice tool :)
ketralnis 4 days ago 3 replies      
There are lots of projects to do this, and this one has been around for quite some time. Is there any context as to why to post this now?
sage_joch 4 days ago 0 replies      
It seems like this functionality should be part of YouTube itself, at least as an option settable by the uploader.
clarry 4 days ago 1 reply      
Yget is an alternative I've found to be more reliable. It's just for youtube though, and doesn't support all the things (such as bypassing age restrictions).


of 4 days ago 0 replies      
It would be nice if it supported Netflix. There's some difficulties with this, but a discussion about it here: https://github.com/rg3/youtube-dl/issues/1564
Nux 4 days ago 0 replies      
youtube-dl rocks. One of my favourite features: youtube-dl -F
RVuRnvbM2e 4 days ago 1 reply      
This appears to be the only option for Free Software access to soundcloud.com.
gprasanth 4 days ago 0 replies      
On OS X, you can use afplay (/usr/bin/afplay) to play the audio of those downloaded videos (a headless player, in effect). This is pretty useful if you listen to YouTube music at work.
pimlottc 4 days ago 1 reply      
If I can lazyweb on this for a moment, is there a good script out there that can use this to sync/download your entire watch later queue?
wonjun 4 days ago 0 replies      
I really like this tool and it has been very reliable for me so far. Thanks for sharing the code.
lelandbatey 4 days ago 1 reply      
Youtube-dl is the biggest pre-built thing I use in GifMachine[0] after ffmpeg, and I've used it in innumerable projects since then. I love youtube-dl, it's fantastic!

[0] - http://gifmachine.xwl.me/

soyuka 4 days ago 0 replies      
chris_wot 4 days ago 1 reply      
Oh man, I had that ages ago and used it frequently! It stopped working, if this is working again that's awesome :-)
MrBra 4 days ago 1 reply      
What's so special about this?
notastartup 4 days ago 0 replies      
I just use a Chrome extension.
JS1k demo: Highway at Night
520 points by practicalswift  4 days ago   98 comments top 18
sEEKz 4 days ago 1 reply      
anonfunction 4 days ago 1 reply      
This is so very impressive. I like how it uses canvas to make the moon:

  // Moon
  c.fillText("(",99,-99);

ape4 4 days ago 4 replies      
I saw the double tilde (~~) in the code and wondered what it was. Of course, single tilde is the bitwise not operator. Googling finds that double tilde is a faster Math.floor(). http://stackoverflow.com/questions/5971645/what-is-the-doubl...
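A small sketch makes the difference visible; note that `~~` truncates toward zero rather than flooring, so the two only agree for non-negative values, and `~~` is limited to the 32-bit integer range (the `truncate` name below is just for illustration):

```javascript
// ~~x applies bitwise NOT twice; each ~ converts its operand to a
// 32-bit integer, so the pair simply drops the fractional part.
const truncate = (x) => ~~x;

truncate(4.7);          // 4, same as Math.floor(4.7)
truncate(-4.7);         // -4, but Math.floor(-4.7) is -5
truncate(2147483648.5); // wraps to -2147483648: outside int32 range
```

So it's a fast floor only for small positive numbers, which is exactly what a demo animating pixel coordinates needs.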
dsl 4 days ago 2 replies      
You can see all the submissions at http://js1k.com/2014-dragons/demos

My favorite: http://js1k.com/2014-dragons/demo/1868

adamcanady 4 days ago 2 replies      
Wow. I can make sense of a little bit of the original source, but I wonder more how RegPack [0] works to compress it into that final submission!

As a side note: I'm in college now and looking to propose an independent study on compression. Any suggested readings or algorithms I should look into?

[0] https://github.com/Siorki/RegPack
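The core idea behind JSCrush-style packers can be sketched in a few lines. This is a toy illustration, not RegPack's actual algorithm (RegPack iterates greedy substring replacement, packs the chosen tokens into a regex, and emits a tiny self-decompressing eval loop); the `crushOnce`/`expand` names are made up for the example:

```javascript
// One round of greedy "crushing": find the repeated substring whose
// replacement by a single unused character saves the most bytes.
function crushOnce(src, token) {
  let best = null, bestGain = 0;
  for (let len = 2; len <= 20; len++) {
    for (let i = 0; i + len <= src.length; i++) {
      const sub = src.substr(i, len);
      const count = src.split(sub).length - 1;      // non-overlapping occurrences
      const gain = (len - 1) * (count - 1) - 2;     // rough net savings estimate
      if (count > 1 && gain > bestGain) { bestGain = gain; best = sub; }
    }
  }
  if (!best) return { src, map: null };
  return { src: src.split(best).join(token), map: [token, best] };
}

// Undo one round; a real unpacker replays all recorded rounds in reverse.
function expand(src, map) {
  return map ? src.split(map[0]).join(map[1]) : src;
}
```

Repeating this until no substring pays for itself, then shipping the map plus a split/join loop, is essentially how a 2 KB demo squeezes under the 1 KB limit.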

billybofh 4 days ago 1 reply      
Yikes - this took me right back to writing M68000 bootsector demos/loaders on my Atari ST in the '80s. Happy times. I sometimes wish there was still more of that creative free-for-all spirit in today's UIs. To think of the computing power we have now compared to then - it's slightly saddening to see how conservative and often ugly the interfaces we use are.
neals 4 days ago 4 replies      
How does this loop work? Does this create an array where B, F, Z, D and i are 0?


nailer 4 days ago 0 replies      
MrBra 4 days ago 4 replies      
Now, I remember that in a recent discussion there was some arguing about Firefox's new JS engine, whose results were benchmarked above Chrome's.

So please try this on Firefox and then on Chrome and see the difference. Firefox is not yet there, sadly.

thomasfl 4 days ago 2 replies      
This demo gives some ideas about how the human brain manages to remember events. With just a few lines of code, containing very little information, it generates something we perceive as a very detailed record of a highway trip at night. Even if we don't know exactly what tricks the human brain uses to compress information about places and events, this demo lets us imagine how some of it might be done.

The human brain is a master at recognising and codifying geometric 3d shapes. The developer who created this is a master at coding geometric shapes and transitions in javascript as well.

jacobsimon 4 days ago 0 replies      
Wait did anyone see this procedural Minecraft entry? http://js1k.com/2014-dragons/demo/1854

It's really impressive and he posted the explanation: http://birdgames.nl/2014/04/js1k-post-mortem-minecraft/

userbinator 4 days ago 1 reply      
Reminds me of this 4k JS demo which has a somewhat similar concept but is big enough to contain sound:


Making of: http://www.ylilammi.com/webgl/highway4k/Making%20of%20Highwa...

voltagex_ 4 days ago 0 replies      
Very consistent jittering/jankiness (about every 5 seconds) on FF 33.1 - is it worth a bug report?

Edit: much better in 35.0a2. I wonder what they did?

elwell 3 days ago 0 replies      
The impressive facet here is RegPack getting that source under 1024 B.
_jomo 4 days ago 1 reply      
I am impressed at how smoothly many of the demos run in Firefox for Android.
lelandbatey 4 days ago 1 reply      
For best musical accompaniment to this, I recommend Stage7's "8-bit Mentality"[0].

[0] - https://soundcloud.com/stage7/8-bit-mentality

bgar 4 days ago 0 replies      
Wow this is pretty awesome!
notastartup 4 days ago 4 replies      
this was really beautiful. I almost blurted out that it was art, but then I was forced to remember we can't call computer-generated things art, by some hidden rule.
How We Did It: SNL Title Sequence
456 points by shakes  1 day ago   56 comments top 17
brokentone 1 day ago 2 replies      
This is incredible. Normally just one or two of these techniques would represent a pretty impressive feat. This used 3d printing, freelensing, pixelstick lightwriting, and a custom bokeh cutout -- in addition to the cool, but more common helicopter shots, timelapse, tilt-shift, and steady cam work.

Being willing to (or maybe having the budget to) use all these techniques AND getting a consistent result is SUPER impressive.

ejdyksen 1 day ago 2 replies      
If you think this is interesting, there's an entire site dedicated to title sequences in film and television:


HorizonXP 1 day ago 1 reply      
This is a fantastic blog post. I love that we can get this behind-the-scenes look behind something so iconic.

The light painting and lens-whacking details were awesome to read about. I'll definitely have to give the lens whacking a try.

RyanCooley 1 day ago 0 replies      
As someone who enjoys both programming and video production, this is great to see on HN. In my experience, there is a lot of overlap between both skillsets. As the article makes clear, a lot of time goes into finding cool "hacks" to trick the lens into conveying a particular look via lighting, optical effects and more.

Post-production is also a very technical process that takes a lot of time and effort to get right and involves exploring the particular quirks of your editing software and tricking it to get it to do what you want. There are often little moments of discovery where you do something you weren't even sure was possible. Then there are those serendipitous moments where visuals and audio come together better than you were anticipating or could have ever planned. It's a great feeling.

I encourage any programmers out there who have even a modicum of interest in the subject to go out there and experiment. Video production can be a great creative outlet that uses a lot of the same talents and opens up new artistic pathways.

Volscio 1 day ago 0 replies      
npinguy 1 day ago 3 replies      
Is it just me or is stuff like this way harder to appreciate these days (with the ubiquity of CGI), unless you work in the industry, or see a behind-the-scenes look like this?

This is simply incredible, and yet I don't normally pay the title sequence any attention at all...

marcuskaz 1 day ago 0 replies      
Great post, really interesting to see how they did the shots and glad to see they opted for real footage and techniques most of the time rather than just post processing everything.

I used to shoot with free-form lenses; it was difficult to get a still from them, so shooting video would be a challenge. My (old) post on free-lens shooting: https://mkaz.com/2005/01/08/homemade-lenses/

jianshen 1 day ago 0 replies      
I'm really happy to see a post like this on HN. Pulling off creative in-camera shots like these is a million times more rewarding for some reason than creating/editing them in post. There's something visceral about getting the shot right in the moment.
function_seven 1 day ago 2 replies      
I usually fast-forward through the title sequence. Now I feel bad for doing so. Will take the time to watch it next SNL.
diggum 1 day ago 0 replies      
This summer I attended one of the editing workshops put on by Adam Epstein who edits all of the film unit productions. It's incredible how fast they write, produce, edit, and turn around these projects. They are literally working from Thursday afternoon until Saturday evening to build these from scratch.
jarnix 1 day ago 0 replies      
Wow, it's incredible how they combine so many techniques. The use of lenses is really innovative; I mean, people would assume it was made only with special effects, when in fact it's 3D printed or done entirely by hand! Congrats.
Pfiffer 1 day ago 0 replies      
robertfw 1 day ago 1 reply      
Was looking forward to watching the final product, but restricted due to being in Canada =/
dwynings 1 day ago 0 replies      
Now I feel bad for always fast-forwarding through this part of SNL.
joezydeco 1 day ago 1 reply      
What to Submit

On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.

It's really cool to see filmmakers doing their thing, even moreso if they're using really recent tech (3D printing, pixelbars, new camera stabilizers). Is that sufficient?

notastartup 1 day ago 1 reply      
this was a really cool effect... but I never found SNL to be funny. I've never even laughed at it once. I don't understand why people laugh... that, and Jimmy Fallon/Kimmel.
2 years with Angular
439 points by robin_reala  3 days ago   204 comments top 45
jasim 3 days ago 5 replies      
I recently wrote about my experience with Angular in a different forum. Sharing it here:

I worked on Angular last year building an app with a few complex views. The initial days were full of glory. Data-binding was new to me, which produced much goodwill towards the framework.

Things started falling apart as I inevitably had to understand the framework in a little more depth. They practically wrote a programming language in a bid to create declarative templates that know about the JavaScript objects they bind to. There is a hand-rolled expression parser (https://github.com/angular/angular.js/blob/v1.2.x/src/ng/par...), new scoping rules to learn, and words like transclusion and isolate scope, and stuff like $compile vs $link.

There is a small cottage industry of blogs explaining how Angular directives work (https://docs.angularjs.org/guide/directive). The unfortunate thing is that all of Angular is built on directives (ng-repeat, ng-model etc.); so till one understands it in depth, we remain ignorant consumers of the API with only a fuzzy idea of the magic beneath, which there is a lot of.

The worst however was when we started running into performance problems trying to render large tables. Angular runs a $digest cycle whenever anything interesting happens (mouse move, window scroll, ..). $digest runs a dirty check over all the data bound to $scope and updates views as necessary. Which means after about 8k-10k bindings, everything starts to crawl to a halt.

There is a definite cap on the number of bindings that you can use with Angular. The ways around it are to do one-time binding (the data won't be updated if it changes after the initial render), infinite scrolling and simply not rendering too much data. The problem is compounded by the fact that bindings are everywhere - even string interpolation like `{{startDate}} - {{endDate}}` produce two bindings.

Bindings are Angular's fundamental abstraction, and having to worry about its use due to performance issues seems quite limiting.
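The cost model described above is easy to see in a toy version of the digest loop. This is a sketch of the idea only, not Angular's actual implementation (which adds expression parsing, a TTL guard, $apply, and much more); the `Scope` class here is hypothetical:

```javascript
// Toy dirty-checking digest: every digest visits EVERY watcher, so the
// cost of each event grows linearly with the number of bindings, and the
// loop reruns whenever any watcher changed state during the pass.
function Scope() {
  this.watchers = [];
}
Scope.prototype.watch = function (getter, onChange) {
  this.watchers.push({ getter, onChange, last: undefined });
};
Scope.prototype.digest = function () {
  let dirty;
  do {
    dirty = false;
    for (const w of this.watchers) {      // full pass over all bindings
      const value = w.getter();
      if (value !== w.last) {
        w.onChange(value, w.last);
        w.last = value;
        dirty = true;                     // a watcher may have mutated state
      }
    }
  } while (dirty);
};

// Demo: one binding, two digests.
const scope = new Scope();
const model = { count: 0 };
const rendered = [];
scope.watch(() => model.count, (v) => rendered.push(v));
scope.digest();     // initial render: rendered = [0]
model.count = 1;
scope.digest();     // change detected: rendered = [0, 1]
```

With 10k watchers, every mouse move pays for 10k getter calls (several full passes, in the worst case), which is where the crawl comes from.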

Amidst all this, React feels like a breath of fresh air. I've written a post about what makes it attractive to me here: http://www.jasimabasheer.com/posts/on-react.html.

Compared to Ember, neither Angular nor React dictate as rigorous an organization of files and namespaces (routes, controllers, views), and have little mandatory conventions to follow. But React is as much a framework as Angular is. The event loop is controlled by the framework in the case of both, and they dictate a certain way of writing templates and building view objects. They can however be constrained to parts of the app, and so can play well with both SPA and non-SPA apps. The data models are plain Javascript objects in both (it is not in Ember), which is really nice.

Google recently released a new version of their developer console (https://console.developers.google.com) which is built on Angular. So the company is definitely putting their weight behind the framework. However, Angular 2 is not at all backwards compatible. That was quite unexpected. If I had known this going in, I would have never used it for the project. But it felt like such a good idea at the time...

jhpriestley 3 days ago 7 replies      
I find the rise of Angular kind of baffling.

Angular's scope system is exactly analogous to the scope system of a programming language. This is a solved problem! When you make a scope system, make it lexical, and require explicit declaration before use. If you're not making those choices, then at least acknowledge that these are the standard answers, with very clear advantages over other scoping systems, and explain why you are not using these answers. But with angular, we have a dynamic, implicit declaration scoping system. New scopes are introduced somewhat unpredictably, at the discretion of each directive. I thought that introducing dynamic, implicit-declaration, non-block-scoped variables in 2014 was like introducing a new car with a coal-burning engine, but no one even seems to remark on it.

Then there's the dirty-checking loop. After every event there is a digest; every digest runs every watch. To me, just reading this description makes a voice speak up in my head: "Uh-oh! That sounds like O(n^2)!" Now that angular is being widely used, people are noticing that it's slow as shit. But why did the framework get to this level without anyone remarking, "this dirty-checking algorithm is fundamentally, irremediably not scalable"? Do people not have a sense even for the most coarse performance characteristics of algorithms like this? Or do people simply think that nowadays "performance does not matter"?

Angular's "module" system is the strangest of all. It doesn't do namespacing or dependency tracking. What is even the point of it? What thought process led to this useless module system?

It's just strange. Hundreds of person-years of work are spent on something that the most cursory CS 101 analysis shows to be seriously flawed. Is analysis simply a lost art in this industry?

Oh well, people are finally realizing Angular has its faults, because they've seen them with their own eyes and now they believe them. It would be nice if we could learn from this, and maybe skip the next boondoggle (web components for instance), but I have no hope for it.

kuni-toko-tachi 3 days ago 1 reply      
AngularJS owes its success to easy onboarding that lets a user quickly create a gimmicky two-way binding demo. And then the pain begins.

It matters little whether some find it productive; what matters is that the engineering principles it is based upon are fundamentally unsound.

Control and conditionals in attributes are absurd. Especially when they require learning an expression language unique to that framework. Especially when they create side effects. Why should something as simple as a loop or if create a new controller and scope? This is absurd. The expression language is not statically analyzable to boot.

There is no reason for a framework to do anything beyond handling the last mile tranform between view model and DOM. Everything else can be done through JavaScript and modules.

JavaScript is a wonderfully expressive language, reinventing that through some hacked up expression language makes no sense and buys no advantage.

Bindings can be handled through a multitude of great npm modules.

Watch the video of the Google Analytics team explaining the contortions needed to make AngularJS performant. Watch the videos where the AngularJS 2 team discards nearly everything from 1.3 (and then adds their own comical nonsense).

Declarative DOM manipulation through a virtual DOM is the future - even more than web components will be. Why? Because instead of being another "web framework", it is sound computer science.

swombat 3 days ago 6 replies      
This seems to be the summary of every tech flame war ever, and applies rather well here:

A: I've used tech X in a lot of Y contexts, and I find it's not great. I will generalise slightly and imply that tech X is not the panacea it has been presented as.

B: Yeah? Well, I've used tech X in a lot of Z contexts, and I find it works fine! You're wrong! You're using it wrong! Maybe you're not wrong in context Y, but for most other contexts X is still the best tech!

C: I haven't used tech X at all, but here's my opinion on it anyway.

Nitramp 3 days ago 1 reply      
I work at Google, and have been using AngularJS in different projects for about three years. The OP raises a couple of good points (in particular his "The Bad Parts" are mostly valid), but I cannot understand some others, nor do I share his take away.

AngularJS is not a silver bullet or panacea. It has bad parts such as the directives API (making it hard to create reusable components), the global namespacing in the injector, and indeed, the number of watch expressions is an issue.

That being said, internally at Google:

- we do have well working, shared, reusable UI components based on directives. So it's quite possible to write usable AngularJS modules.

- There are multiple old (>3 years), large AngularJS apps that do not seem to have major maintenance issues. Maintenance of large code bases (>100k SLOC JS) is always an issue, but if you follow the style guide [0] at least it doesn't seem worse than with other JS frameworks

- Code is minified and compiled, using Closure Compiler's @ngInject and @export annotations as required.

OP's comments mostly sound like they were burned by not following software development best practices (e.g. throw the prototype away, make sure to properly design your domain model, have a qualified tech lead, have qualified engineers).

His "Lessons for framework (and metaframework) developers" seem generally useful, but unrelated to particular AngularJS shortcomings.

[0] http://google-styleguide.googlecode.com/svn/trunk/angularjs-...

pygy_ 3 days ago 1 reply      
This post reminds me of these other two [0, 1] that ultimately led Leo Horie to create Mithril [2], a tiny (5 KB down the line) but complete MVC framework that also eschews most of the criticism raised by the OP.

The Mithril blog is also worth a look; it addresses a lot of concrete scenarios with recipes for solving common front-end problems with the framework. For example, here's a post on asymmetrical data binding [3].

0. http://lhorie.blogspot.fr/2013/09/things-that-suck-in-angula...

1. http://lhorie.blogspot.fr/2013/10/things-that-suck-in-angula...

2. http://lhorie.github.io/mithril/

3. http://lhorie.github.io/mithril-blog/asymmetrical-data-bindi...

wldlyinaccurate 3 days ago 3 replies      
I've worked on Angular projects of varying sizes, some as large as 30 KLOC (products where every page has enough interaction to justify an Angular controller), and I can never find myself agreeing with these articles.

Have I just drunk too much kool-aid? Or is it possible that with the right team, the right architecture, Angular can actually be a really great framework to use? The common theme for every large Angular project I've worked on is that the teams have leaned towards a more functional design where state is rarely used. This has always seemed to encourage smaller, decoupled modules which don't suffer from many of the problems that the author mentions.

But hey, it's probably the kool-aid.

lhorie 3 days ago 1 reply      
disclaimer: I'm the author of Mithril.js

I've also used Angular for around 2 years for a large application. This article resonates pretty accurately with the problems we were running into before I decided to write Mithril.

It can certainly work well (heck, the mobile part of our app was doing just fine because it was specifically designed to be a trimmed down version of the much more powerful desktop app), but performance problems aren't necessarily because people don't know how to use Angular. In our case, performance problems usually became obvious when we had UIs for editing large volumes of information, and large volumes of information did appear on the page. Two of the examples that we were running into problems with were a work breakdown structure UI, and a scheduling UI, which are far from being things-you-should-not-be-doing.

The team scalability issue is real, but I think it's not entirely Angular's fault per se. My general experience w/ co-workers dabbling w/ Angular was that they were accustomed to jQuery in terms of discoverability (i.e. if you don't know jQuery, you can fake it w/ Google-fu until you make it). Getting into Angular is not like that at all. There are lots of places where you can shoot your foot if you don't do it the right way (tm), and deadlines will trump doing it the right way if the right way is sufficiently non-intuitive. You can blame that on teams not having good processes or good developers or what have you, but hey, that's the real world for ya.

My main problem with Angular is the error messages. Imagine writing this:

    $(".foo").each(function() {
      this.addClass("bar")
    })
But instead of throwing a familiar native js error on line 2, you get an asynchronous ReferenceUnboxingException on line 3475 of jquery.js and your code is nowhere in the stack trace. That's what a lot of Angular errors look like (when they do show up, because null refs from templates don't).

aikah 3 days ago 1 reply      
I'm a big angularjs fan but I agree with all the points made by the OP.

I will however stick with angularjs because frankly there is no better alternative.

the selling points for me are:

- Testing: Karma, Protractor, and dependency injection are fundamental when working with a team. Everything is so easy to test, so easy to mock.

- Speed: Sorry, but there is no other framework that makes front-end dev faster. I can come up with very complex apps within hours, fully tested.

- Resources: 20+ books, hundreds of blogs, 1000+ directives on the web.

- Easy to integrate with a legacy jQuery mess: since jQlite is compatible with jQuery, I can just drop a jQuery plugin in a directive, observe something with no effort, and have it rendered properly.

The main drawbacks:

- Don't expect to understand Angular without a serious understanding of JavaScript.

- Performance: yes, there are performance issues, but when they show up, one needs to work on them.

- Probably too much hype.

oinksoft 3 days ago 1 reply      
I've been using Angular on-and-off in professional settings since 2012. Angular is a framework obsessed with testability that treats usability as an afterthought. That said I've found Angular to be more than flexible enough to meet the needs of your typical CRUD apps, and generally enjoy working with it.

One thing I agree with the author about is the importance of expertise for a successful Angular project. Some specialized knowledge is needed to get a decent fit and finish, and the results can be horrible without that.

Strongly discouraging globals goes a long way towards improving code written by inexperienced engineers, but Angular's provider system is still not clearly documented with practical examples, which makes those engineers more likely to shove everything into the unavoidable Angular constructs (controllers, directives, $scope).

The middling quality and limited availability of third-party Angular libraries is a problem. I believe that greater awareness of and better tooling for ngDoc would be a tremendous help there. Best practices are not well-presented anywhere in the Angular world, particularly for designing reusable Angular libraries.

The other big problem is the project source code which I find poorly organized and documented. If you want to get into the guts of Angular for debugging purposes, good luck!

lingoberry 3 days ago 3 replies      
I'm late to the party, but I want to share an insight I've had regarding game development and UI applications. It baffled me for a long time why building UIs was such a pain, and doubly so in a web app. Why could I, and others, create such seemingly advanced graphics and interactions in a video game, but try to make a UI and you're stuck with thousands of difficult-to-discover bugs? After I started using React, I realised games are "easy" for the same reason React is a huge productivity multiplier: you re-render the game every single frame. You have the data that represents your game state, there's a game loop, and you render every single little damn thing, every damn frame. It's a one-way flow of information from your explicit state to the presentation layer. React works the same way, except it only re-renders if the state changed. That's it. Super simple, but it's a mind shift.
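The loop described above can be sketched in a few lines of plain JavaScript (no React here, and all names are illustrative). The view is a pure function of the state; after any change you simply render again:

```javascript
// The presentation layer is a pure function of state: no incremental
// DOM patching to get wrong, just re-render from scratch on change.
function render(state) {
  return '<ul>' + state.items.map(function (item) {
    return '<li>' + item + (item === state.selected ? ' *' : '') + '</li>';
  }).join('') + '</ul>';
}

var state = { items: ['a', 'b'], selected: 'a' };
var frame1 = render(state);   // <ul><li>a *</li><li>b</li></ul>

state.selected = 'b';         // mutate the state...
var frame2 = render(state);   // ...and just render again
```

React's contribution is making this model affordable: it diffs the rendered output so only the parts that actually changed touch the real DOM.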
hassanzaheer_ 3 days ago 4 replies      
While most of the arguments presented in this article are somewhat valid, I hope that with the release of Angular 2.0 the majority of the issues will be addressed (though whether it makes sense to make such drastic changes in the upcoming release is another debate, already taken care of at: https://news.ycombinator.com/item?id=8507632)

I'm currently working on a comparatively large webapp built in Angular, and it was about 7 months into the project that we started realising its pitfalls, and it was very difficult to abandon it then.

So we worked around it by:

1) using one-way binding (or bindonce to be exact) to reduce watches

2) avoiding unnecessary $apply() and using $digest() carefully if required

3) using ng-boilerplate for scaffolding

4) defining our own style guides/coding conventions/design patterns to overcome Angular's bad parts

5) frequent code reviews that made sure new team members are up to speed with the above techniques

Luckily we haven't run into many issues after that :)

BinaryIdiot 3 days ago 1 reply      
Maybe I'm old-fashioned, but Angular just does too much for me. In fact, many frontend frameworks simply do too much for me. I like to structure my web applications in a very minimal way.

I like having one layer that covers the UI display and UI events. This layer does nothing beyond styling, setting up the UI and using messages to pass back events in a generic way.

My business logic handles generic events. So say I have a button for saving: in my UI layer it registers the click event but then sends a generic message with a payload that is simply "Save". The business logic then saves it. This lets me drastically change any of the UI with zero effect on my business logic.

I wouldn't recommend it for production yet (it's very early), but I'm working on a small library that does much of this messaging and binding of messages directly to DOM objects. https://github.com/KrisSiegel/msngr.js
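As a rough sketch of the split described above (hypothetical code, not msngr.js's actual API), the whole idea fits in a tiny message bus: the UI layer only emits generic messages, and the business logic subscribes to them, so either side can change without touching the other.

```javascript
// Minimal pub/sub bus decoupling UI events from business logic.
function createBus() {
  var handlers = {};
  return {
    on: function (topic, fn) {
      (handlers[topic] = handlers[topic] || []).push(fn);
    },
    emit: function (topic, payload) {
      (handlers[topic] || []).forEach(function (fn) { fn(payload); });
    }
  };
}

var bus = createBus();
var saved = [];

// Business logic: knows nothing about buttons or the DOM.
bus.on('Save', function (doc) { saved.push(doc); });

// UI layer: a click handler would just emit the generic message.
bus.emit('Save', { id: 1, text: 'draft' });
```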

shubhamjain 3 days ago 1 reply      
I can't speak to Angular since I haven't used it, but one problem that recurs with frameworks in general is that getting used to thinking "their" way takes a significant amount of time, and seeing the continuous change of technology, I am not sure that time is justifiable in the longer run.

Take the example of Rails. I was trying to learn it some time ago and was really amazed at how it has a process for nearly everything: migrations, asset pipelines, generators, and a very extensive command line. Sure, it makes it seem like "once I learn it, it will be so much easier to make the next app", but it is easy to realize after some time that you have to cross the usual hurdles of Googling everything, learning these processes, facing issues, and digging out new ways of debugging to finally be good at it.

My idea is that frameworks should be minimal, ensuring only a basic working architecture, with everything else extensible (via packages).

tomelders 3 days ago 0 replies      
I'm currently enjoying Angular after having spent a year and a bit working with it exclusively. I am keen to try out Flux and Mithril, but I've not had the time nor the opportunity. But as it stands, we're deploying several large projects into very demanding organisations, projects that are stable, performant and easy to manage. We as a team owe a lot to Angular in terms of our productivity. We're also a great team, and that counts for a lot too.

The thing I would like to add to the debate is this: we've all learned that Angular is hard. It's a complex beast with its own nuances and idiosyncrasies. It also offers plenty of ways to do things you probably shouldn't do (I'm looking at you, expressions). But more than that, with Angular in the toolbox, people push themselves to deliver products vastly more complex than would be feasible without it. And these two issues collide all the time. Learning a framework + the desire to deliver more: one should follow the other, but people tend to attempt both at the same time.

I personally don't think there's anything "wrong" with Angular, but people have to acknowledge that despite the marketing hyperbole, learning Angular means setting out on a long and difficult journey that will require the developer to rethink a lot of what they know about building web stuff. But that's web development in a nutshell. It's a different gig every year, and within an alarmingly short amount of time, Angular will probably be replaced with something better suited to the things we're trying to accomplish with mere HTML, CSS and JavaScript.

There's also a lot to be said for how you organise your projects and what tools you use (e.g. Require or Browserify), but that's a very different kind of conversation.

debacle 3 days ago 0 replies      
Angular is a result of the over-engineering that is endemic to web development right now. Programmers are taking strategies designed by Google and Facebook and places that actually need the high level of convention prescribed by software like Angular, and applying them to their personal blog, their half-done github-only "startup," etc.

JavaScript isn't really a good place to adopt convention - you're dealing in a mixed-code environment almost from the start, speed is constantly an issue if you're doing something complex, and there's no such thing as "one size fits all."

I've looked at almost every JavaScript framework out there, and they really don't offer much more than what you would get out of a very lightweight jQuery (or your library of choice) abstraction. I want very much to find something whose usefulness outweighs the programming friction it introduces, but I haven't really found anything that meets that criterion yet. React seems to be very good at face value, but in general it isn't saving you nearly the amount of code that you might hope it does. Ember is probably the best at this, but it has its own tradeoffs (namely speed).

city41 3 days ago 1 reply      
I'm really disappointed that not only is this article on Hacker News, but it's currently at #1. The article contains almost no substance at all.

Angular's a controversial topic. So if you're going to write a long blog post picking a side, you really need to back it up with examples and offer alternatives. The six part "detail" posts aren't much better.

lucisferre 3 days ago 1 reply      
Sorry, did I miss the part where the author explained the "right (tm)" way to do things these days?

Seriously though, I've used Angular just as long as the author and for the most part I wholeheartedly agree with the complaints (and I have complained myself for some time). However, what is the "better" way? People keep throwing out things like React, but React solves much less for app developers. Also, that answer doesn't help the countless people who began app development more than a year or so before React was released.

The Javascript ecosystem is evolving constantly and yet in some ways not much at all. Throughout that time, I've found that just about everyone can find excellent reasons not to use the various frameworks and libraries but few offer concrete recommendations in exchange for these criticisms. It's disappointing.

At this point in our own project, like many others I assume, we are reconsidering Angular. Not simply because we don't like it, but because clearly the Angular team doesn't either. Angular 2.0, like Sproutcore 2.0 before it, appears to be a complete rewrite. (Rightfully so.) As a result, we plan to examine our other options in detail while our work is still mostly in prototype territory.

Right now however, I don't think I've seen anything yet, that makes sense for most people who've started out with Angular to do that re-write. I'm hoping as I spend more time examining this I'll find I'm wrong.

I've had many people ask me what framework they should use for new projects, and every time I've said: it probably doesn't matter what you use right now, but be prepared to fully rewrite things in a year or so. The JS ecosystem is in so much flux right now that you can't count on any of these choices being the right one in a couple of years. I've accepted that reality for now.

People hate this answer. They tell me that no PM/exec is going to want to hear that. Fine, don't tell them. The silver lining is that whatever does come to save us will hopefully be so much more productive than what you were doing before that you won't care about rewriting; you'll do it because it actually makes sense.

Let's all hope that's true.

xpto123 3 days ago 4 replies      
I confess I am an Angular fan.

But this article is not Angular-specific at all; it stays at a very high level. Replace the word Angular with any other web framework and the article would still make perfect sense.

Not that the article does not have some value, just that it has very little to do with its title.

hokkos 3 days ago 1 reply      
I've got to build an SPA and I'm trying to choose between Angular and React. Can you guide me a little? The app will:

- create a big form based on XML schemas; the form will be used to generate XML that is valid against the schemas

- some schemas can be really big, with more than 3000 elements; the whole thing won't be shown in full to the user directly but will probably be folded

- because it is based on XML Schema, it must have interactivity to make some elements repeatable, and groups of nested elements repeatable, some elements with bounds, some maybe draggable to reorder them, everything an XSD can do...

- it will also have some kind of polymorphism where you can choose the child element's type and have the corresponding schema shown

- it will also show a leaflet map, with some interaction between the form and the map

- there is also a rich text editor where you can arrange XML objects within formatted text

I fear that Angular won't be fast enough for that, but its support for forms seems better. I've tested JSON Schema form generators like https://github.com/Textalk/angular-schema-form and https://github.com/formly-js/angular-formly: the first one is slow when editing 3000 items; the second seems fast when editing but slow when it generates the JSON. I've done some Angular tutorials and their concepts don't stick in my head. I've tested React and its concepts stick easily in my head, but there is less native support for forms.

I had just decided to go with Angular, partly because of all the hype around it, but I see this article and others as a bad omen and I want to go with React now. Any advice?

maouida 3 days ago 0 replies      
I've been using Angular for a SaaS product for 7 months; the project is launching about a month from now.

It is not just a CRUD app. It has:

~170 views
~70 custom directives
~100 controllers

Many directives can execute on the same page.

A single page can have multiple tabs, forms, modals, charts.

I hit some situations where performance dropped a lot but if you take the time to benchmark and test you can fix it.

The key to keep it stable is to load the UI (directive) when you need it and destroy it when you are done.

Personally, I've not found any serious issue so far.

akamaka 3 days ago 0 replies      
I totally disagree.

I've spent the last year working on a complex and widely-used site that is built with Angular. It is maintainable, performant, has a smooth UX, and is mobile-friendly.

I'm usually extremely cautious about relying on frameworks for long projects, because easy setup doesn't matter after you've been working on something for a year. In our case, using Angular was the best choice we could have made. I would absolutely not replace Angular with my own in-house MVC, even if you gave me a year to develop it.

-The testing tools are some of the best I've used, and hugely contribute to making the app easier to maintain

-It's not so much a framework as a set of tools. Angular mostly stays out of the way and allows us to structure code to match our needs

-You absolutely need to have top-notch developers, and ideally someone experienced enough to mentor people on the team who are new to Angular. There are a lot of JS developers who are former Flash designers who learned how to use a few jQuery plugins. If they don't know the fundamentals of programming really well, they will make a huge mess of the project.

-We've definitely run into performance problems, but they're manageable. We've had to write code that bypasses Angular's digest cycle, but it feels similar to writing a bit of inline assembler in a C++ program. I wouldn't stop using C++ because of that.

esaym 3 days ago 1 reply      
I've actually never used any JavaScript framework. It is stuff like this that drives me away. If you pick any one framework, you get half the crowd telling you that it sucks, and then a year later your version is deprecated/replaced and you get to re-do everything again. I've attempted to avoid the whole web-app scene, but with the current job market, it looks like one has to know one of these frameworks...
jMyles 3 days ago 7 replies      
If Angular is not The Thing (a premise which I have no trouble believing), then what is a Good Thing to perform the task of, for example, consuming Django Rest Framework endpoints and making a frontend of them?
lbacaj 3 days ago 1 reply      
I think people are failing to see that not all apps are huge monolithic applications; for most of those apps Angular works just fine.

In fact we should be striving to get away from all of those monolithic code bases as much as we can. In the cases where we can't get away from that then we should be going with tried and trusted methods of building those apps and probably relying on the server a hell of a lot more for those kinds of really large/enterprise/corporate apps.

Most use cases for Angular are web apps that pull and push data from some RESTful service. Angular lets us take such a web app and, through Cordova/PhoneGap/etc., wrap it into a mobile-ready application that you can push to an app store.

What's wrong with that?

dynjo 3 days ago 2 replies      
We built a pretty complex app with Angular (https://slimwiki.com) and have had nothing but great experiences. The main issue is the lack of guidelines about the right/wrong way to do things; it needs to be more opinionated.
Bahamut 3 days ago 1 reply      
I also have been using Angular for my entire professional developer career, which in a few days will hit 2 years.

This article is pretty accurate for the most part, although some of the minor complaints are not quite so accurate.

Performance is something to be careful about, but the Angular team has worked hard at improving it, and it has improved immensely with 1.3: optimizations such as bind-once, $watchGroup, and work on the $digest cycle and $watch make it a huge improvement over 1.2. I want to say there is a chart floating around showing over 30% improvement.

As far as frameworks go, I believe Angular is the best we have currently. It does a lot for you without getting too opinionated in general, and some of its tooling is just flat out better than much of what you can find in the wild.

I have been experimenting with Polymer lately though with an eye towards web components - there is a lot of change coming in how we will have to structure our code. I suspect that those using React will also not be shielded from the pain of integration with ES6 and web components as well, and so I have been hesitant to recommend it in a core product. Ember claims they will make the breaking changes slower, but I also suspect that it will limit its growth as well.

Frontend seems to be rolling on as fast as ever - I don't see much of a way around everyone having to scrap their code regardless of the major library chosen for their projects. I'm hoping the pain dies down once ES6 and web components become the norm though.

prottmann 3 days ago 0 replies      
The problem is not Angular specific, every Framework is designed to solve a certain problem in a certain way.

But most developers think that once they learn a framework, they can use it for any kind of project.

When I read "xxx is really cool and fun" I am really careful. Most people create a "Hello World" and then THEIR favorite framework is the greatest thing in the universe, and they communicate it to others.

Take a framework and live with its mistakes until the next "better" framework appears... and it will appear, and the next, and .... ;)

datashovel 3 days ago 0 replies      
The thing I really like about Angular is that it makes composition of complex ideas relatively easy. The encapsulation and dependency injection are a perfect way to let you be as structured or unstructured as you want / need to be.

I can understand how someone coming from more traditional frameworks, and working in an environment where you are rarely or never required to think outside the box, will have difficulty making the transition.

Where I personally think Angular could be better (yet was state-of-the-art when it originally came out) is with directives. Now, I'm not talking about run-of-the-mill directives that are easy, that implement relatively straightforward concepts. I'm talking about highly complex functionality that you want to encapsulate into a single "thing" in your code. I think Polymer is going to fill that gap. That being said, Angular team has already (if it hasn't changed) decided they're going to be moving forward with Polymer.

Personally I think Angular + Polymer is going to be a hard combination to beat.

praetorian84 3 days ago 1 reply      
As someone who has thus far only used Angular for smaller projects, seeing performance raised as a concern makes me wary of ever using it in a serious project. I would still like to see some numbers to back up the anecdotal evidence.

It's also hard to justify starting a potentially large project in Angular right now, knowing that v2 is on the way and is basically a new framework.

gldalmaso 3 days ago 3 replies      
"And what are no-no factors for angular?

    Teams with varying experience.
    Projects which are intended to grow.
    Lack of a highly experienced frontend lead developer, who will look through the code all the time."
I am greatly interested in learning what alternative would be a 'yes-yes' on these bullet points.

EugeneOZ 3 days ago 0 replies      
"2 years" and "10 projects" - 2 months for each project? And he talks about "big enterprise apps"? lol.

Please link to examples of your code, author.

I wonder how people can't understand the power of the 'directives' approach. It's the MOST powerful thing in web development now, and the only advice I can give to future inventors of new frameworks is: implement the 'directives' concept, and then do everything else you want. That's my advice after 3 years with Angular, and counting ;)

Reusable code and TDD are the key for growing apps, and directives are the most successful way of following that path.

---/please news.ycombinator, treat new line symbols as new line symbols and use ANY modern framework to make this site less slow and more mobile friendly

rpocklin 3 days ago 0 replies      
It's fair to highlight the less-ideal parts of AngularJS, but IMO the ecosystem and testing integration are as important as the framework code itself. Most of the issues the author mentions can be mitigated (e.g. use ui-router).

The momentum behind AngularJS is huge, and with the 1.3 release I feel like 90% of webapps can be written well in Angular. Ionic is a great example of pushing AngularJS to the edge with mobile applications.

It really is up to the team to enforce good practices, pair or review code, and refactor and unit test components. There is no framework which can make this happen; you need to be disciplined and always look to learn more and improve the code you have written.

The author certainly does not recommend anything else, so where to now?

cbdileo 3 days ago 0 replies      
I feel somewhat conflicted about this blog post. I can agree with what others are saying in the comments: all frameworks have their pitfalls. A lot of development is dealing with trade-offs and your team's varying experience.

On the other hand, I agree with the author that there is a tipping point where a framework/tool becomes too much of a burden. Sure, we can all do it the "right way" but teams don't always have people with the experience to even know what the right way is.

We should think about the frameworks we use as tools. Make sure the tool is right for the problem and the team. Also, don't try to apply all your older experience to the new tool. Take time to learn about the thing you use.

jaunkst 3 days ago 0 replies      
All of the frameworks suffer from performance issues. Performance will get better, but we will always have to profile our applications. A slow web component used in an ng-repeat scenario will always bring the application to its knees. We can't just design a spaceship and expect an engineer to build a performant application. Designs need boundaries and guides, as performance is one of the most important factors of the UX, if not the most important. We also cannot reason with the jQuery spaghetti demon. Practice some feng shui; write better code. Understand what's going on in your framework. Work through the limitations with your designers. We are at the mercy of limited computation until our browsers give us more, and there is no magic bullet.
aaronem 3 days ago 2 replies      
You know, I'm just going to say it:

Angular is the Rails of Javascript.

That probably sounds like a derogation. But behold: I offer nuance!

They're both big and powerful, and capable of rewarding dedicated study with enormous power. Thus they develop a devoted following whose members often do things lesser mortals find little short of wizardry.

They're also both built to be friendly and welcoming to the newcomer, and offer a relatively short and comfortable path from zero to basic productivity. Thus they trigger the "I made a thing!" reward mechanism which excites newbies and leaves them thirsting for more.

They also, in order to go from newbie to wizard, involve a learning curve like the north face of K2.

In both cases, it's a necessary consequence of the design decisions on which the platform is based, and those decisions, by and large, have sensible reasons behind them -- not, I hasten to note, decisions with which everyone will (or should) agree, but decisions which can be reasonably defended.

But that doesn't make it a good thing. When people start off with "I made a thing!" and then run smack into a sheer wall of ice and granite, initial excitement very often turns into frustration and even rage, as on display in some comments here in this very thread.

(I hasten again to add that I'm not judging anyone for being frustrated and angry over hitting that wall -- indeed, to do so would make me a hypocrite, given my reaction to hitting that wall with Rails a year or so ago.)

Further compounding the issue is that, often enough, wizards who've forgotten the travails of their ascent will condescend to say things like "Well, what's so hard? Just read {this book,that blog post,&c.} and it's all right there." Well, sure, for wizards, who are well accustomed to interpreting one another's cryptic aides-memoire. For those of us still toiling our way up the hill, not so much.

I will note, though, that while I hit that wall (hard!) with Rails, and in the end couldn't make it up, I haven't had the same problem with Angular. The sole significant difference I can identify, between the two attempts, is this:

When I took on Rails, there was no one else in the organization who knew (or should've known) the first thing about the platform. When I had a problem with Rails, I faced it all alone, with only my Google-fu, my source-diving skills, and my perseverance on which to rely. For a while I did well, but in the long run, for all but the most exceptional engineers, such expenditure of personal resource without resupply becomes unsustainable.

When I take on Angular, I do so with the support of a large team, composed of the most brilliant and capable engineers among whom I have ever had the privilege of working. When I have a problem with Angular, I have a dozen people at my back, at least one of whom is all but guaranteed to have encountered the exact same situation previously -- or, if not this precise permutation, then something very like it, from which experience more often than not comes precisely the advice I need to hear, to guide me in the direction of a solution.

Of course, whether this is really useful to anyone is an open question; I think it's a little facile, at least, to say "Oh, if you're having Angular problems, all you have to do is find a team of amazing people who mostly all have years of Angular experience, and work with them!" But, at the very least, if you're going to be fighting through the whole thing all by your onesome, maybe think about picking up a less comprehensive but more comprehensible framework, instead.

sebastianconcpt 3 days ago 0 replies      
I really resonate with: "Do not make things easy to use, make your components and abstractions simple to understand."

And not only for AngularJS, but as a design principle.

cportela 3 days ago 0 replies      
I am not a lover of Angular, but the reason Angular is so popular is that it gets the prototype out there.

So many things and places are just doing things "lean" and "iterating" so angular makes that easy.

I'm not sure anyone could tell me Angular isn't very productive, or that it wouldn't be tempting to use it so you can get some fairly magical experiences for users and in demos.

limaoscarjuliet 3 days ago 0 replies      
So if not Angular then... what? If I wanted to do a single page app with REST backend (little to no db access), what would you recommend?
lcfcjs 3 days ago 0 replies      
Bizarrely enough, I've built about 4 web apps using Angular over the past 2 years also. However, I've found that scalability (mainly due to its reusability) is one of its strongest points. I've worked with enormous applications built entirely with jQuery.

I love Angular, but perhaps that's because I'd only worked with jQuery before.

username__ 3 days ago 1 reply      
I've been using Angular now for a year and a half, and a year professionally. The only issues I've run into are pages with large data bindings. I would love it if the Angular team could recommend a solution other than "don't do that." That answer is simply unacceptable in my opinion; their silence on this topic has been very frustrating.
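For context on why large data bindings hurt, here is a toy version of Angular-style dirty checking (illustrative code, not Angular's actual implementation): every digest re-runs every watcher until a full pass sees no changes, so cost grows with binding count. The usual 1.3-era mitigations, one-time bindings (`{{::item}}`) and `track by` in `ng-repeat`, work precisely by shrinking this watcher list.

```javascript
// Minimal dirty-checking loop: cost per digest is proportional
// to the number of live watchers (bindings).
function Scope() { this.watchers = []; }
Scope.prototype.watch = function (getter, onChange) {
  this.watchers.push({ getter: getter, last: undefined, onChange: onChange });
};
Scope.prototype.digest = function () {
  var checks = 0, dirty;
  do {
    dirty = false;
    this.watchers.forEach(function (w) {
      checks++;
      var value = w.getter();
      if (value !== w.last) {
        w.onChange(value, w.last);
        w.last = value;
        dirty = true;
      }
    });
  } while (dirty);               // loop until a clean pass
  return checks;                 // comparisons performed this digest
};

var model = { a: 1, b: 2 };
var scope = new Scope();
scope.watch(function () { return model.a; }, function () {});
scope.watch(function () { return model.b; }, function () {});

var first = scope.digest();   // 4: one dirty pass plus one clean pass
var stable = scope.digest();  // 2: a single clean pass over both watchers
```

With thousands of bindings on a page, even a no-op digest repeats this full sweep on every user event, which is the slowdown the comment describes.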
CmonDev 3 days ago 0 replies      
"5 star performance requirements" - Scala Play comes to mind rather than any JS MVC frankly speaking.
fndrplayer13 3 days ago 0 replies      
I don't mean to be a jerk, but this article is really poorly organized.
cturhan 3 days ago 0 replies      
As the author says in the comments, these are valid for Angular 1.x, so I'm hoping that Angular 2.x will be a more carefully designed framework.
waps 1 day ago 0 replies      
The real news should be: a JavaScript framework is actually still considered useful for something after 2 years.
What We Learned From 40 Female YC Founders
406 points by katm  3 days ago   277 comments top 26
tptacek 3 days ago 15 replies      
I've been coding since I was ~13. I can understand why people who haven't might have valid reasons to wish they'd started earlier. I'd just say: beware self-fulfilling prophecies and selection bias. Lots of really excellent software people I've worked with got late starts. Lots of people who started early coasted or are still coasting. In the 25 years I've been coding, only a few years worth of that time really grew me as a developer, so what you work on has just as much impact as how long you've been working on it.

Work with a bunch of different enterprise L.O.B. developers to get a sense of what I'm saying here. The average age of a backoffice developer is higher, meaning they have more experience. Hiring in enterprises is regimented, meaning that they tend to come from CS backgrounds. Are they uniformly high quality developers? No. In fact: there's a stigma attached to coming from a long stint in enterprise development.

As a lever for getting more women engaged with startups, the idea that an early start is important makes even less sense. Much of the day-to-day work that happens even at companies with difficult problem domains is rote and uncomplicated. A few years experience is more than enough to lead a typical web project, and, more importantly, to have a sense for whether a dev team is firing on all cylinders and to authoritatively manage it.

Obvious subtext/bias here: I do not believe that starting women in software development earlier is going to resolve the gender gap. By all means, start early; there's nothing wrong with that. It's just probably not the root of the problem.

eah13 3 days ago 5 replies      
This is a great story and project.

An apology (in the original sense) of Jessica's work for those who think that the experiences of these individuals don't matter: What I think people don't get about Jessica's interviews is that they're part of a scientific process of understanding what makes great founders and great companies. Many discredit qualitative, observational scientific data. But for new, rare, or poorly understood phenomena, observation is the only way to make scientific progress. In engineering the phenomena are often well understood, common, and within the discipline, familiar. In this case deductive logic, reasoning from known principles, is quite fruitful; but its success biases engineers against inductive reasoning. But for other subjects, such as what makes a great startup founder, or what makes a great female startup founder, the inductive method is much more fruitful.

This is not some anomaly: all sciences started with observation and the inductive method. These are the beginnings of insight, generating hypotheses to be tested. We're still quite early in our understanding of startups, and even more so in our understanding of female-founded startups, that this approach is not just warranted, it's the only way to make true progress.

Jessica is like the Jane Goodall of startup science. Even though she's studying individual founders she's ultimately helping us understand more about ourselves.

iufwe87 3 days ago 4 replies      
I am from India and on an H1B visa in the United States. I still don't understand why there has been so much fuss in recent years about women's participation? Be it playing games, developing games, women in tech, women in the NFL, or women in ______ (fill in the blank here).

I studied engineering in India at one of the premier universities, and 30% of my class were girls. The toppers of the class for all 4 years were girls. I know at least 50-80 girls from India and China in my LinkedIn contacts who are actively engaged in tech.

Setting aside the social problems faced by women in India for a bit (and excluding poor people), there is still very high participation from girls/women in India. Throughout my education from 1st to 12th grade there were more girls than boys in my class.

So my questions are:

1. Why is the US the only country facing this problem of "women in ....."?

2. Is this some political gimmick being played in preparation for 2016? (I ask with all seriousness, without affiliation to any party.)

I have always considered people in the US to be more vocal about their rights and responsibilities, and more aware of problems in general. Lately, though, I see a lot of thought policing happening and view manipulation going on at large.

My last and most important question is:

3. Since you folks are now actively advertising and creating social conditions for women's participation in tech, are you not depriving them of their freedom to choose whichever path they prefer? In an ideal scenario, tech would be one career choice among many for women, rather than manipulative messaging presenting tech as the only best career choice.

jandrewrogers 3 days ago 4 replies      
The importance of learning to program at an earlier age conflates two patterns in my opinion. I benefitted immensely from teaching myself how to program at a young age but that is not why I have the computer science and software skills I do today per se. It is an artifact, not a requirement.

Computer science skills roughly follow a sigmoidal curve over time, with long tails at the top and bottom. You really do not become useful as a programmer until you hit the hockey-stick part of that curve, and there is no substitute for time in the field to get there. The primary advantage of learning programming when you are much younger is that you essentially burn down some of that initial time investment before you are really paying attention to how long it actually takes to become an effective programmer. You do not hit the hockey stick faster; it just seems that way to other people because you started down the path earlier.

This is discouraging to people that start in college or later because there really is no shortcut to time spent doing it. The people that become good programmers faster usually just started earlier, it isn't necessarily that they are naturally more skilled. Nonetheless, the time required to become a good programmer is not that onerous in the big picture. The key is sticking with it even when the payoff seems distant.

As an added comment, the people who do well at the top of the hockey stick, where the return on additional investment is diminished, do tend to be the people who started much earlier. Again, this is not due to talent per se, but the same people sufficiently obsessed with computer science to teach themselves at a young age also have the obsession to learn and master the more esoteric parts after they've become excellent programmers, even though the utility is much less in practice.

ChuckMcM 3 days ago 0 replies      
My experience raising three daughters is that they were always very aware of what others were doing. Their male peers were pretty uninformed (as I expect I was as a teen). I observed that the men were much more inclined to pursue an "unusual" activity (i.e. not what other people are doing) than the women were. It seemed motivated not by a fear of seeming "weird" but rather by a reluctance to do something their friends were not interested in participating in. Out of a sense of inclusion, they didn't spend group time on activities that other members of the group were not interested in.

I worked with my middle daughter to build a knitting pattern illustrator in Perl[1]. She and her friends could talk for hours about knitting, which is essentially programming as Jacquard proved, because they all were interested in the ways to produce interesting weaves. My friends were interested in talking about computers when I was a teen because we were interested in machines that could 'compute'.

The question I wonder about is if the disparity goes away when women develop group activities around programming.

[1] I liked the pun of using Perl for a knitting application.

djb_hackernews 3 days ago 6 replies      
> One of the most consistent patterns is how many founders wished they'd learned to program when they were younger.

I wonder what reasons would lead them to have this wish. Is it a matter of a missing skillset that slowed down the growth of their startups, or did they later find that they really liked to write software and regret not finding out until later in life? Or possibly other reasons?

jkmcf 3 days ago 2 replies      
I wish my parents had let me take more shop classes, because right now I'm interested in home renovation.

The thing is, you don't know where your future interests will take you. Even if you are exposed to stuff when you are younger, you may hate it regardless of how great the presentation may be.

I personally think encouraging women, or anyone for that matter, to be programmers/scientists/mechanics is missing the point. You have to encourage people to find passions, be proactive, enjoy learning, and make these a habit. Life isn't static.

I also think the "gender gap" is a fallacy. It's true that all professions could be more welcoming to people of different persuasions, but it would be more interesting to know the gap between "people who want to do X but feel excluded" and the "people who are already doing X".

dmritard96 3 days ago 1 reply      
"In the most recent batch (W15), we asked about gender on the application form for the first time. The percentage of startups we accepted with female founders was identical to the percentage who applied."

There are application videos and have been for a while, and each founder has their name listed on the application. With https://gender-api.com/ you could probably figure out gender without asking explicitly, and could have done so acceptably well for previous classes. I would be curious what the stats look like back-tested against each class over time.
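
The back-testing idea here can be sketched in a few lines. This is a minimal illustration only: a tiny hard-coded name-to-gender lookup stands in for the real gender-api.com service, and all founder names and batch data below are made up.

```python
# Estimate the share of female founders per batch from founders' first
# names. A hard-coded lookup replaces the real gender-api.com service;
# the names and batches are fabricated for illustration.

NAME_GENDER = {
    "jessica": "female", "ann": "female", "maria": "female",
    "john": "male", "paul": "male", "trevor": "male",
}

def guess_gender(first_name):
    """Return 'female', 'male', or 'unknown' for a first name."""
    return NAME_GENDER.get(first_name.lower(), "unknown")

def female_share(founders):
    """Fraction of founders guessed female, ignoring unknown names."""
    guesses = [guess_gender(name.split()[0]) for name in founders]
    known = [g for g in guesses if g != "unknown"]
    if not known:
        return 0.0
    return known.count("female") / len(known)

# Hypothetical batches, to show the per-class back-test shape.
batches = {
    "S14": ["John Doe", "Maria Ruiz", "Paul Smith", "Trevor Jones"],
    "W15": ["Jessica Li", "Ann Park", "John Roe", "Paul Stone"],
}

for batch, founders in sorted(batches.items()):
    print(batch, round(female_share(founders), 2))
```

A real back-test would also have to handle ambiguous and non-Western names, which is where a service with a large name corpus earns its keep.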

timedoctor 3 days ago 1 reply      
For women who also want to have children at the same time as co-founding a startup, I think it's important not to underestimate how difficult this is.

When my wife was first breastfeeding I timed how long she spent breastfeeding and changing nappies and bathing the young baby. It was LITERALLY over 9 hours per day (timed to the minute). To think that it's possible to ALSO run a startup at the same time is in my opinion crazy. With older children it's a lot easier but still difficult.

I have several female friends who are also successful entrepreneurs. Some seem to make it work with their family life, but my experience with most is that they have a very hard time and that it often devastates their family life and relationships.

So yes, there are examples of women who run a company and also have young children, but I think they are the exception rather than the rule.

For women who do not want to have children, or who are not going to have children for many years in the future, no issue.

mikeleeorg 3 days ago 0 replies      
This is probably impossible to do, but it would be very interesting to contrast these anecdotes with anecdotes from women who could have become technology startup founders, but didn't for some reason.

YC's stories are awesome and these entrepreneurs will serve as important role models for young women. They unfortunately contain a "survivor bias" though, and there may be many other hidden factors preventing other potential female entrepreneurs from following in their footsteps.

I don't mean this as a criticism at all though. This effort by YC is a tremendous first step and for the sake of my daughter and young girls everywhere, I hope they continue.

EDIT: I'd love to hear the sentiment behind the downvotes. Hopefully I didn't come across critical of YC's effort here; that wasn't my intention at all. This is an important issue and I suspect the stories of those who were deterred are just as informative as those who have gotten this far.

jgdreyes 3 days ago 0 replies      
I think this is a great project. But one quote stood out:

> Interestingly, many said it got them attention for being unusual, and that they'd used this to their advantage.

This is what I have experienced as well. But do I want to be `unusual`?

Instead, let's strive for making it a norm that female developers are just as common and just as good as their male counterparts. Stop looking at me like I'm some freak for being a competent woman developer.

pskittle 3 days ago 2 replies      
"Not surprisingly, most of the women were domain experts solving a problem they themselves had. That's something that tends to be true of successful founders regardless of gender."

How much domain experience is necessary to solve the problems you have? Isn't that something you learn/pick up once you start solving them?

skywhopper 3 days ago 1 reply      
Interesting stuff. One detail I'd like to respond to. Jessica writes:

     And as YC has grown, so has the number of female partners. Now there
     are four of us and we are not tokens, or a female minority in a
     male-dominated firm. At the risk of offending my male colleagues, who
     will nevertheless understand what I mean, some would claim it's closer
     to the truth to say that we run the place.
Many times women find themselves in the situation where they are the more responsible employees, working harder than the men, taking care of the details many men overlook for far longer, and too often earning far less money and respect. So I just hope that Jessica and the other female partners are being compensated commensurate with their contribution to the success of YC.

mentos 3 days ago 3 replies      
Computer science is the language of the future.

It bothers me that I spent so many years learning Latin/French/Italian when their real-world applications are very limited relative to, say, C/Java/Python, which are much more important foreign languages to be learning in school.

d0m 3 days ago 1 reply      
One sure thing is that having a child and a startup at the same time is a hell of a lot more difficult. I applaud founder moms/dads.
cauterized 2 days ago 0 replies      
As a woman in software development (and a former founder) what I appreciate most about this collection of stories is that rather than men sitting around hypothesizing about why someone else whose experience probably doesn't match their own made different choices than they did, it tells actual women's stories. The ongoing discussion of gender in computing needs more of this.

Next, I would love to see (for contrast) the stories of some women who did drop out or who considered STEM majors/careers but ultimately chose other directions. Any takers?

logn 3 days ago 0 replies      
Generally, I think the value of programming at an early age is that you have the time and context to develop an earnest interest in programming. You're doing it for fun. And then when you proceed to take courses on it, you're genuinely excited to learn, and you're not just struggling for a passing grade.

Most of this is just the broken nature of schooling.

debacle 3 days ago 1 reply      
Not to be callous, but I'm not really interested in any of these founders as people. I would be much more interested in an aggregated feedback discussion about how (or if) startups with female founders are different, what YC did right for them, what it didn't, etc.
fillskills 3 days ago 0 replies      
Even as a male entrepreneur, this collection is pure gold. So much to learn. Thank you for doing this.

Any chance this series could be made as videos? I would like my daughter to see them as she grows up.

brlewis 3 days ago 0 replies      
I'm interested in reading more findings from the 40 stories in addition to the ones Jessica describes, but I'm too lazy to read all 40. If you've read them, what did you find interesting?
pskittle 3 days ago 1 reply      
Also, the link to the Female Founders Conference has last year's dates.


xchip 3 days ago 0 replies      
What they should do is to marry men that want to stay at home taking care of the kids.
StronglyTyped 3 days ago 0 replies      
I'm in a graduate CS program. Half, maybe more, of my cohort is female.
sama 3 days ago 3 replies      
this sentiment is the reason we need to publish things like this.

only about 11% of the founders we fund (and our applicants) are women (and this is a fair amount higher than the percentage most other firms fund). it's a big untapped pool of potential founders, and we'd like to continue to get the message out that women can start startups and YC can help.

klunger 3 days ago 2 replies      
If they wanted to back up this claim "...from the start I've made sure YC had an environment that is supportive of women," they would provide affordable childcare.
ddebernardy 3 days ago 2 replies      
The first paragraph, and the implicit message that females were in unusual need of help and support, struck me as belittling and patronizing... which is probably not the desired PR outcome. :-|

It seems to me though that a much better way to convey the right message would be to compile a "what we learned from 100 VC founders" and ensure that diversity is absolutely all over the sample: females, muslims, african americans, non-US natives, gays and lesbians, whatever. Doing so would convey the implicit message that diversity is normal.

Permissions asked for by Uber Android app
396 points by uptown  2 days ago   147 comments top 33
dmix 2 days ago 8 replies      
TLDR: Uber's Android app is literally malware

Since the website is currently down, this person reverse-engineered Uber's Android app and discovered it has code that will "call home" aka send data back to Uber with your:

- SMS list [edit: see other comments re SMSLog; the SMS permission is not currently requested]
- call history
- wifi connections
- GPS location
- every type of device fingerprint possible (device IDs)

It also checks whether your phone is rooted/jailbroken and whether it's vulnerable to Heartbleed... which it also reports home.

From what I understand (which the author somehow missed), it is using the http://www.inauth.com SDK, which provides 'malware detection'. This SDK is popular in the 'mobile finance industry' and the banking sector. Also notably, one of the founders is former DHS/FBI.

Two possible theories: it is being used for fraud detection and/or as an intelligence-gathering tool.

Edit: here is a copy of the decompiled source code http://www.gironsec.com/blog/wp-content/uploads/2014/11/InAu... note the name "package com.inauth.mme"

Edit #2: here is a screenshot of Uber's permission request https://i.imgur.com/4MmYrJH.png no SMS on the list

andymcsherry 2 days ago 7 replies      
There's a perfectly reasonable explanation for almost all of these permissions, and there's nothing in this analysis that suggests they're doing otherwise. The only one I couldn't think of a reason for was WRITE_SETTINGS.


ACCESS_COARSE_LOCATION & ACCESS_FINE_LOCATION: Fairly obvious, they need to figure out where to pick you up

ACCESS_NETWORK_STATE, ACCESS_WIFI_STATE, INTERNET: They need to figure out if you have internet access and use it

WAKE_LOCK: Keep the network running so you can get real-time updates about your driver


CAMERA: You can take a picture of your credit card for easier entry

CALL_PHONE: So you can call your driver

MANAGE_ACCOUNTS: So they can add your uber account to your phone

READ_CONTACTS: Probably for inviting friends or splitting ride costs

READ_PHONE_STATE: Legacy analytics reasons

WRITE_EXTERNAL_STORAGE: Probably unnecessary, but they are probably just storing data

VIBRATE: For notifications

The rest are for push notifications

As far as the roottools go, I know Crashlytics checks for root so they can provide that data in their console for crashes. It's a pretty useful thing to be able to weed out crashes from rooted devices. Those crashes usually make very little sense and violate the advertised behavior of the SDK.

declan 2 days ago 0 replies      
I'm not sure the criticism in the linked post is justified.

Here's what Uber says about its Android permissions -- the page isn't that difficult to find: https://m.uber.com/android-permissions

Uber says the camera permission is required to take a snapshot of your credit card. The phone call permission is required to call your driver. The get accounts permission is required to enable single sign-in (Google Sign-In, Google Wallet).

The Uber app doesn't, according to the gironsec.com post, request Android's READ_SMS permission, so pointing to a "sendSMSLog" code excerpt by itself doesn't mean much. And so on.

As <andymcsherry> pointed out elsewhere in this thread, there's a "perfectly reasonable explanation for almost all of these permissions" except WRITE_SETTINGS. Uber says in its Android permissions post that: "We use this permission to save data and cache mapping vectors."

It seems as though it would have been useful for the author of the gironsec.com post to read what Uber has to say -- or, better yet, contact the company before posting a critique. If Uber PR can't cough up a good explanation, it makes the final critique more powerful.

I've posted here on HN criticizing Uber before (https://news.ycombinator.com/item?id=8383854), but before rushing to judgment here let's check our facts first.

andrewvc 2 days ago 3 replies      
> Recall that PUT / DELETE aren't official HTTP requests, rather extensions implemented via WebDAV. Modern applications don't bother with these requests since it's easier / more secure to perform those same actions with a server side language.

Apparently the author has not ever heard of REST. I'm a little shocked by that.

krschultz 2 days ago 1 reply      
As an Android developer, I don't want to have to ask for as many permissions as I do. I have 1 button buried on 1 screen that allows you to call customer support. 99.9% of users never click the button. However, I have to make every single customer accept the CALL_PHONE permission.

There are a bunch of permissions required for basics like autocompleting the user's email for login, or checking the network state so you can adjust the app's behavior based on connectivity.

Not to mention the incentives are all wrong in the Play store. Changing permissions murders your update rate, so you want to do it as little as possible. So when you are forced to add a permission, you grab a bunch of extra ones you 'plan' to use later to avoid having to get over that hump again. It's really awful.

blhack 2 days ago 3 replies      
A LOT of this stuff is pretty easily explainable. They want access to SMS and phone calls because the Uber app uses those things.

Camera doesn't seem terribly implausible. It could be an incoming feature that allows you to take a photo of where you are so that your driver can find you more easily.

The WiFi stuff is probably related to location. Edit: as pointed out below, the camera permission is so that you can take a photo of your credit card so you don't have to type it in.

This seems like "hydrogen hydroxide KILLS" scare mongering.

BTW, this is all available in the app permissions: https://lh3.googleusercontent.com/-FVPu6x-F5SM/VHUZgU47m-I/A...

I don't see the big OMG SECRET MALWARE scariness.

monort 2 days ago 4 replies      
Android needs a sandbox that provides apps with empty contacts, an empty call history, a fake location, and so on.

Does such a sandbox exist?

georgeott 2 days ago 2 replies      
Pro Tip: Uninstall the Uber app, and use m dot uber dot com inside Chrome.
makeramen 2 days ago 1 reply      
Checking for root access is actually really useful from a developer standpoint. I've seen countless bugs on Crashlytics that occur 100% on rooted devices, which is often because the user has Xposed or some other system-level hacks that break my apps. This allows us developers to spend more time focusing on real bugs instead of chasing down these rooted-device problems.
dpiers 2 days ago 0 replies      
There's an article here explaining what the permissions are used for: https://m.uber.com/android-permissions
dang 2 days ago 1 reply      
We've attempted to change the baity title to something accurate and neutral, but if anyone can suggest a better title, please do.
rrrrr 11 hours ago 0 replies      
Isn't it possible to route all traffic to/from an Android device through a MITM proxy, run the Uber app, and then see exactly what data is being sent to Uber HQ?
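
That approach can be sketched as follows. In practice the request stream would come from an intercepting proxy such as mitmproxy; here it's a hard-coded list so the flagging logic can run on its own, and all hostnames, paths, and "suspicious" patterns are hypothetical examples, not actual Uber endpoints.

```python
# Sketch of inspecting proxied traffic for "phone home" requests.
# The patterns are illustrative heuristics, not a real detection list.

SUSPICIOUS_PATTERNS = ("sendsmslog", "devicefingerprint", "malware", "rooted")

def looks_like_phone_home(host, path):
    """Heuristic: does this request resemble telemetry/fingerprinting?"""
    needle = (host + path).lower()
    return any(p in needle for p in SUSPICIOUS_PATTERNS)

# Example captured requests (hosts and paths are made up).
captured = [
    ("api.example-rides.com", "/v1/trips/estimate"),
    ("telemetry.example-rides.com", "/collect/deviceFingerprint"),
    ("cdn.example.com", "/maps/tile/12/34.png"),
]

flagged = [(h, p) for h, p in captured if looks_like_phone_home(h, p)]
for h, p in flagged:
    print("FLAGGED:", h + p)
```

One caveat with the MITM approach on a real device: an app that pins its TLS certificates will refuse to talk through the proxy at all, so you may only see the connections it makes, not their contents.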
duncan_bayne 1 day ago 0 replies      
I've contacted Uber in Australia, and requested a copy of all personal data they have collected. This is my right under Australian law; it'll be interesting to see how they proceed.
click170 2 days ago 1 reply      
One way to deal with this is to filter all outbound requests and not let the requests that you've identified as "phoning home" complete. Then you test the app: if it still works, you can continue using it. If it doesn't, you find a different service or consider re-adjusting your restrictions.

Outbound filtering can quickly highlight any app that tries to call home. Luckily, many apps continue working if you block those calls. YMMV.
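
A crude version of this filtering can be modeled as a host blocklist, for example rendered as null-routed /etc/hosts entries. This is a simplified sketch: the hostnames are hypothetical, and a real setup would more likely use a firewall rule or DNS filter than a hosts file.

```python
# Model outbound filtering as a blocklist of "phone home" hosts.
# Hostnames below are made-up placeholders.

BLOCKED_HOSTS = {"telemetry.example-rides.com", "metrics.example-rides.com"}

def is_allowed(host):
    """True if outbound requests to this host should be permitted."""
    return host not in BLOCKED_HOSTS

def hosts_entries(blocked):
    """Render blocklist entries in /etc/hosts format (null-routed)."""
    return [f"0.0.0.0 {host}" for host in sorted(blocked)]

for line in hosts_entries(BLOCKED_HOSTS):
    print(line)
```

As the comment notes, the interesting experiment is behavioral: block the flagged hosts, then see whether the app keeps working without them.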

djabatt 10 hours ago 0 replies      
Clearly aggressive and not in a way that improves UX.
rubyn00bie 2 days ago 0 replies      
Definitely uncool and a great article.

I do find it funny that, despite all the other allegations, the absolutely reprehensible business practices, and the general malice they've put into the world, this is a surprise to anyone. I'm quite surprised that they still have so much business, but then again, morality isn't a one-size-fits-all sort of deal. What bothers me may not bother other folks, or may seem like smart business tactics ( :sadface: ).

To me, it's just more icing on the cake.

aikah 2 days ago 0 replies      
archive https://archive.today/CR3eW

I personally believe the Uber app on Android fits the definition of malware.

zeus180 1 day ago 0 replies      
I don't see that Uber has permission to read my SMS; however, after going through the list of other granted permissions, I went to the settings, modified the permissions, and also enabled Privacy Guard for the app. You can go to Settings -> Apps -> (scroll down) tap on Modify - screenshot http://i.imgur.com/AVXLqgh.png
bndw 2 days ago 0 replies      
If you're interested in exploring this and other Android apps' permissions, I built http://appethics.org last weekend.

The goal was to easily surface what permissions a given app requires, and what they mean.

api 2 days ago 3 replies      
There's a general trend of mobile apps that ask for everything: camera, microphone, sensors, access to local files, WiFi, etc. These are apps (like Uber) with no good reason to need access to such things.

In most cases I can think of no good reason for this except either a desire to surveil customers for indirect monetization, or participation in government or private surveillance grid efforts.

I've got Lyft on my Android phone, but not Uber. I look at its permissions and the only dubious looking one is "access to take photos / videos." Is this perhaps for signing up as a driver and photographing yourself and your car? I don't see anything else that doesn't make sense.

gasull 2 days ago 0 replies      
This is outrageous. Write a review at https://play.google.com/store/apps/details?id=com.ubercab and link to the article. I just did that.
chc 2 days ago 0 replies      
imperialdrive, if you happen to read this, you appear to have been hellbanned. Just FYI.
first_account 1 day ago 0 replies      
Uber doesn't know who does the ride of shame based on pickups and drop offs, they just monitor the phones of their customers!
stesch 2 days ago 1 reply      
I don't have Android 5.0. Is it now possible to block certain permissions per app?
aosmith 2 days ago 0 replies      
Looks like it's time to turn on memcached...
pkaye 2 days ago 0 replies      
Wonderful! Just last week, Samsung did an update on my phone software and installed the Uber app automatically. Glad I removed it after that.
thisjepisje 2 days ago 1 reply      
As a dumbphone user, why people even allow an app to get information like this is beyond me.
ape4 2 days ago 0 replies      
Uber: nice idea, evil company.
A Eulogy for RadioShack
381 points by Thevet  1 day ago   157 comments top 57
morganvachon 1 day ago 4 replies      
I'd be willing to bet that this is exactly the way it was at most, if not all, corporate RadioShack stores. On the other hand, the franchise stores (which mostly no longer exist) could be great places to shop and to work, because they were owned by actual breathing human beings, not faceless corporate overlords.

My first "real" job after high school was at the local RadioShack, which was owned by one of the most amazing people in my early adult life. I had shopped there for years as a teenager, as it was the only place I could find electronic parts without cracking the Mouser catalog. This was the mid 90s, and there was nearly no e-commerce yet. One day I was in there looking at the HTX-202, RadioShack's entry into 2-meter amateur radio handsets, as I had just been licensed as a Technician Class ham for the first time. Richard said "Hey, Morgan, you still looking for a job?" "Sure, but what happened to Dennis?" I replied. He said "Well, Dennis kinda died." Poor Dennis, his only employee and an elderly gentleman, had suffered an aneurysm and just like that, he was gone. Thus, thanks to the untimely death of a nice old man, I got my first and only RadioShack sales associate position.

Working there was an absolute blast; the pay sucked and the only commission to be had was on Primestar satellite sales, but Richard was like that cool uncle everyone has. I learned a lot about how to run a small business from him, and there was never a day that we didn't have fun. Christmas season could be hectic, but in a good way, with parents who were delighted to see that we carried batteries for all the toys their kids were getting (along with some nifty toys like R/C cars, too).

Even after getting a better full time job a couple of years later, I stayed on part time with Richard for over a year, until he could hire someone to replace me. I still drop in when I'm in my hometown to say hi, and we still keep in touch via email. It really saddens me to know that he'll likely have to drop the RadioShack name one day soon, but I know his store will continue operating for the foreseeable future. His kind of salesmanship stands far above what you get at the corporate stores.

chipotle_coyote 1 day ago 3 replies      
While it's glib to say that Radio Shack was lost for the want of a space, the change in their corporate identity from "Radio Shack" to "RadioShack" -- 1995 -- coincides with their slide to irrelevance nicely enough to seem kind of symbolic. We don't really know what the Internet is, but we've noticed that cool companies are using CamelCase.

I suspect it's hard for people who are under 30, maybe even 40, to believe how different RS once was from the sad sack we have today. I don't think they were ever really cool -- that air you now get from their '70s and '80s catalog of old white dudes trying to be hip is an air you got from those catalogs even when they were brand new, trust me -- but they had big selections of really interesting stuff, most of which was exclusive to them, and frequently had salespeople who were genuinely enthusiastic about electronics and computers. And the importance of the TRS-80 in the early computer scene seems to be vastly underestimated today, I suspect in part because of that perpetual lack of cool. You might dimly remember "Trash-80" jokes, but you may not remember that for a few years it was the best-selling computer in the world. And unless you followed the company, you almost certainly don't remember that they had separate "Tandy Computer Center" stores, or that they sold an IBM PCjr "clone" called the Tandy 1000 that fixed all of the PCjr's problems and became so popular that big name games had exclusive Tandy 1000 features, or that they actually shipped a frikkin' Xenix workstation (The TRS-80 Model 16) years before most people had any idea what that was or why they should care.

And, really, that last one sums up Radio Shack in a nutshell, especially with computers: always either a few years ahead of their time, or a few years behind it. As someone who grew up with the TRS-80 Model I and later Model 4 (and who did a lot of strange mad science stuff with it up even into the early '90s), I'll miss RS -- but the RS that I have fond memories of has been gone a long time.

IvyMike 1 day ago 0 replies      
Obligatory link to The Onion article "Even CEO Can't Figure Out How RadioShack Still In Business":


That's from 2007, making its continued existence today even more perplexing.

pud 1 day ago 3 replies      
I worked at Radio Shack.

I was in high school and lived with my folks. I had almost no expenses. It didn't occur to me until much later what a bummer it all was.

- You get paid commission, something like 1% per sale. Commission was pointless because almost everyone already knew what they needed (AA batteries or a headphone adapter...).

- If your commission didn't add up to minimum wage, you got minimum wage. Since it was nearly impossible to sell $800/hr at Radio Shack, everyone made minimum wage.

- There was a computer screen in the back room that showed a leaderboard of on-duty employees, and how much each sold for the day in real-time. So even though everyone was making minimum wage, the employees were pitted against each other. This made every employee your enemy.

- If somehow you made a commission for the day and the item was ever returned, they docked the commission from your next paycheck. So you ended up making less than minimum wage.

bane 1 day ago 0 replies      
I actually remember the point that RS became irrelevant to me. After spending lots of free-time hanging out there in the 80s at the local malls, and getting much of my early computing stuff from them, I ended up with a 386sx bought from another local computer store. It was my real first entry into IBM compatibles and I remember how easy it was to mix and match various parts and cards and extend your machine into something much better than what you bought without too much fuss or expense.

And I remember lots of those old early games had special settings for Tandy computers: special video modes, 3-channel sound, that sort of thing (I don't remember the specifics). I wondered what all the fuss was about, since you couldn't buy a Tandy video card or audio card and they weren't really "industry standards" the way a VGA graphics card or a Sound Blaster was.

I went back and visited my local RS to see one of their Tandy machines and came away singularly unimpressed. I realized then that there was really nothing Radio Shack could offer me that I couldn't get easier and cheaper from dozens of other, larger, stores with better selection.

Years later I stopped by to pick up a cuecat and I've never visited a RS outside of that.

I never understood why they didn't start aggregating their small stores into bigger stores. Heck, even Office Depot offered me more selection than my local Radio Shacks. With the coming and going of Circuit City, and the entrance of Frys, Best Buy, CompUSA, etc., I never understood why they stuck with the old '70s model of an electronics store tucked away in the corner of a strip mall next to the laundromat.

plg 1 day ago 3 replies      
I remember flipping through the Radio Shack catalogue as a kid in the late 70s, early 80s, and making my virtual xmas shopping list (I almost never got the things I actually wanted). radio-controlled cars, walkie-talkies, metal detectors, tape recorders, electronics kits.

I also remember being able to walk into a Radio Shack in my local mall and stroll over to the electronics section and pick out just the right resistor that I needed to complete my circuit project at home. I think I was 12 at the time.

vibrolax 1 day ago 0 replies      
Radio Shack had a lot in common with Sears, now also on death's doorstep. Sometimes the contract manufacturers making RS's house brand speakers or Sears "Craftsman" power tools or "Kenmore" appliances would turn out a top-performing product for a great price. As the actual manufacturer might change from year-to-year, one could never rely on the brand to deliver a good value. One had to read magazine or other reviews to discover where the diamonds were hidden.
brianberns 1 day ago 3 replies      
I learned to program in the early 1980s by playing with a TRS-80 floor model for an hour or two at a time while my mom shopped elsewhere in the mall. My first program animated a sine wave in BASIC by printing a single character one line at a time, and letting the lines scroll up the screen. I would've loved to own one of those computers, but my dad refused to buy one because he said it would be too tempting for me. He didn't even want me to have a hand calculator, and gave me his old slide rules instead (which were fun - not complaining!). I didn't have unfettered access to computers until I went to college a few years later.

I never made a big purchase at RS, but I always appreciated that I could pop in for electronic sundries as needed over the years. My opinion of them started to go south about a decade ago when they began insisting that they needed to know my zipcode every time I bought something. That was very fishy and heralded their sad decline, I think.

RS has been obsolete for years and I really don't know how they've managed to stay in business this long, but I'll miss the name when it's gone.

breadbox 1 day ago 2 replies      
It's awful to read that and compare it with what Radio Shack was for us geeklings in the 1980s. For those of us living far from big cities, Radio Shack was the source of electronic parts, IC chips, LEDs, not to mention hands-on access to actual computers.
mrbill 1 day ago 0 replies      
Some of the locations have started to carry a bigger electronic component selection again (instead of a single pull-out file of resistors and capacitors), along with 3D printing supplies, Arduinos and related parts (retail-packed from SparkFun), and so forth.

Unfortunately, they started doing so 4-5 years too late. I was shocked in 2007-2008 when I needed a couple of resistors to finish a project, the local Nerd Palace (Electronic Parts Outlet, in Houston) was closed on a Sunday, and the local Radio Shack actually had that pull-out file of components with what I needed.

adricnet 1 day ago 2 replies      
That's certainly an interesting read... and one whose anecdata I can't really argue with, given my own experiences. So I'll add one:

After it was too late by that author's reckoning, Radio Shack's parent company tried another venture, a "big box" store called The Incredible Universe. When that failed and they wrote it off, it wiped out the entire profit of Radio Shack's thousands of stores for that financial year.

I still think of it when I'm trying to understand the machinations of large companies.

And I, too, hope that the folks I worked with at RS in the 20C found better jobs, especially the poor store managers.

rbanffy 10 hours ago 0 replies      
What saddens me most is that RadioShack was one of the pioneers that sparked the personal computer revolution. I can easily remember my first interactions with a TRS-80 (a clone - I live in Brazil), avidly reading their manuals, understanding how their block graphics worked, and writing my own small programs that I would go to a computer store to test.

It's a little bit tragic, but we all know RadioShack's time has passed. We appear to be living in a time of rapid transformation. RadioShack will soon no longer exist. Like payphones and printed newspapers and magazines, and soon books and bookstores, it will follow the video rentals and record stores into history. Change can be difficult for those who live through it, but in return, we get stories to tell our grandkids.

It's not that bad a deal.

logn 20 hours ago 0 replies      
I wish RadioShack would transition to selling hackable electronics (Raspberry Pi, Arduino, stuff on SparkFun, Maker Bots, etc). That's how they started and now as they're nearing bankruptcy, that market is finally back and getting bigger. Further, most of these parts are made by small-time businesses with suboptimal distribution and RadioShack could be enormously helpful. And I see blogs all the time about how if you have the inside connections to factory owners in China you can get all sorts of cool stuff at very low prices--RadioShack could resell all this.

Edit: but I think RadioShack's future is in the hands of Wall Street types who only see it as an interesting financial instrument where they can cheat some fools out of money before totally liquidating.

realrocker 23 hours ago 2 replies      
So last month I was in the US for the first time, and I was dying to walk into a RadioShack, the temple of cool things I had been hearing about on obscure and popular electronics hobby forums since I was 16. After checking out 6 stores in 4 cities, I was really disappointed. No cool stuff. All I could find was phone cases, cheap earphones and a few ICs (mildly interesting). To be honest, I didn't know what I was expecting, but certainly not what I saw.
Shivetya 5 hours ago 0 replies      
I guess one of Radio Shack's biggest problems is the same as most brick-and-mortar stores': many of us just want to order it online and have it show up.

They could have tried/could try to become the home tech store. Take your home to the newest level: LED lighting all around, wireless integration of home automation and security, sharing video all around the home. Then show it off in store. Hell, sell solar panels or such, I don't care. Just give today's geek a reason to come in. I don't need a phone or a computer; but a specialized home automation turnkey system... well...

blhack 1 day ago 2 replies      
Radioshack, want to save your business? Turn every single one of your stores into a hackerspace.

You don't even need much. Get a laser cutter, a 3D printer, a drill press, free WiFi, some tables, and a stack of arduinos and you'll be doing awesome.

Then build an instructables-esque community website for people to show off their projects.

I'll even consult with you guys on this if you make a big enough donation to the hackerspace I help run in Phoenix.

leoc 1 day ago 1 reply      
> RadioShack is a company of massive real estate, and is peddling a business model that is completely unviable in 2014.

We have good evidence to suggest that the RS business model is indeed viable in 2014: it's Maplin (https://news.ycombinator.com/item?id=8328621 ; followup https://news.ycombinator.com/item?id=8329953 ).

bsder 22 hours ago 0 replies      
While Radio Shack deserves the scorn, it had two big problems at a crucial time.

1) Charles Tandy died in late 1978. This was right as the computer revolution was taking off and his loss of vision was bad news.

2) IIRC, one of the corporate officers embezzled an enormous amount of cash in the 1980's right as the IBM PC clone business was taking off. This stalled the company at a point when it needed to be pivoting. Unfortunately, I have only my memory to rely on as I can't seem to pull this out of any of the web search engines.

ryandrake 13 hours ago 1 reply      
This isn't really a story about Radio Shack. It's about how awful it is to be a retail employee. You could have written this about the working conditions inside any retail store in any dying shopping mall: Forced to work off the clock, no overtime pay, pay structure resulting in pay below minimum wage, capricious, last-minute schedule changes, do-it-or-leave ultimatums, callous, abusive management, exhausted and overworked management, management-by-screaming-at-people, nonsensical rules and policies from corporate, revolving door for employees and management, and widespread apathy. Nothing here specific to Radio Shack, folks, this is the standard retail work environment in the USA.
neurobro 5 hours ago 0 replies      
I shopped at RadioShack just one time in the '90s. I bought some walkie talkies, and the cashier insisted that I provide my name and address to make the purchase. I said I didn't want junk mail. He assured me that it was only for recordkeeping purposes and that my info wouldn't end up on a mailing list. Within weeks, I started receiving RadioShack catalogues and flyers. I'm pretty sure they spent more on postage, paper and ink over the next few years than I had originally spent on the walkie talkies.
steven2012 1 day ago 1 reply      
The Radio Shack where I grew up gave out free batteries once a month. I did a science fair project that compared how good Radio Shack batteries were to Duracell, etc, and they were superior by far. It's sad that they are basically dead, but it's the Circle of Life, I suppose, and a good reminder to never rest on your laurels or take it for granted.
jmspring 1 day ago 6 replies      
Radio Shack at this point is a sad caricature of itself. I remember being able to buy all sorts of ICs, analog parts, interesting radios and kits. Now it peddles cell phones and overprices things -- I recall needing a battery for my cordless phone recently, and they wanted $23 for a battery for a $50 phone? Uh, no.

Two other bay area places that have changed a lot are Frys and the now closed Quement Electronics. Fry's still has a few components, but nothing like the 80s and 90s.


conductr 9 hours ago 0 replies      
My SO worked at RS corporate for a few years during the late 2000s. The Internet is definitely what killed this company. They had loyalty with an older generation, and that's how they survived until now. They never figured out how to bring young people into the stores in an Internet age. That's what the whole "the Shack" thing was all about. It failed. Now the loyal customer segment they did have is 1) aging out of the market and 2) coming around to online shopping.

I can't really say much about how store ops ran, but I'm sure it was a result of the financial pressures the company was under just trying to survive.

softbuilder 1 day ago 1 reply      
Radio Shack was always two stores in my memory. There was the legit geeky side with electronic parts, and there was the Realistic/Tandy side hawking so-so, also-ran consumer electronics. The perspective always seemed to be that parts were sort of a loss leader for the consumables.

The employees almost never knew anything technical, and were often suspicious of nerdy kids like me lingering in the parts section. I remember one time I was building a circuit and needed a .22uF cap. I asked if a .22pF was the same (I was like 10 or 11). Instead of saying "I don't know" the sales guy just kind of stared at the package and then shrugged and said "Yep". My circuit did not work.
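For anyone rusty on the prefixes, the clerk's shrug was off by six orders of magnitude: micro (µ) is 10^-6 while pico (p) is 10^-12, so a .22pF part is a million times smaller than the .22µF cap the circuit needed. A quick illustrative sanity check (plain Python, not from the original comment):

```python
# Standard SI prefixes for capacitance, in farads.
MICRO = 1e-6   # µ (micro)
PICO = 1e-12   # p (pico)

needed = 0.22 * MICRO  # the .22uF cap the circuit called for
sold = 0.22 * PICO     # the .22pF part the clerk said was "the same"

ratio = needed / sold
print(f"off by a factor of {ratio:,.0f}")  # a million-fold difference
```

So "is .22pF the same as .22µF?" has a firmly negative answer, which explains the non-working circuit.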

I hear people floating the idea of RS reinventing itself as more of a hacker-friendly place, but from an investor's perspective you'd be switching from a giant consumer market to a much smaller niche market. Personally, I think that's a great idea, but I'm not sure you could ever convince investors of that.

msherry 1 day ago 0 replies      
I applied to work at a Radio Shack when I was 16. I didn't get a job there. The guy interviewing me spent most of the time making sure I knew the difference between being paid hourly, and being paid on commission, and making sure I really knew what a spiff was. In retrospect it was kind of weird. It was kind of weird at the time, too.

Reading these, I don't feel like I missed out on too much.

hagope 1 day ago 1 reply      
I actually think RadioShack can turn things around. "Hacking" electronics is gaining ground, esp. with Arduino, RPi, etc. Here's what I would do:

- Beef up stock of "maker" electronics: Arduinos, RPis, modules, kits, etc.
- Buy/build modules/kits; work with suppliers like SparkFun, Shmart, Adafruit for help in this area
- Create "recipes" for building electronics and guide customers through purchasing the materials
- Clear out all the rip-off electronics unless you are willing to match Amazon prices
- Get rid of smartphones, who buys these from Radio Shack???
- Most importantly, hold regular (weekly) free "maker sessions" for kids to learn principles of electronics and get them excited about it... similar to what Home Depot does

The more they focus on building/diy electronics the better, this is higher margin (and lower price) than simply retailing finished electronics, although they'll need to do some of that too I suppose.

ArtDev 8 hours ago 0 replies      
The RadioShack employee I spoke with had no idea there was a Mini-Maker Faire in Portland. RadioShack could have had a presence there. When the company finally dies, where can you go to grab some Arduino kits or components? I hope another company takes its place. Places like Best Buy don't even carry cables, let alone components.
vincentbarr 5 hours ago 0 replies      
"We all line up in expectation of hordes of customers. Six on one side of the store, six on the other side, pallbearers of an invisible casket." Lol.
jrapdx3 20 hours ago 0 replies      
Honestly, I don't know what to make of the "news" of RS' imminent demise. Just 2 days ago I visited a nearby RS store and bought stuff there. The store was well-stocked and I wasn't the only customer in the store. The employees didn't have that unhappy, "short-timer" look either, that is, didn't emit even a whiff that the store was closing down.

Besides, there are at least 6 RS stores in this city, the furthest I know of being maybe 10 miles away, so I'm guessing there must be a few RS stores out there that I don't know about. Since they've all stayed in business for many years, they have to be selling some items or services or something.

I can report at least two positive aspects of my RS visit. The guy at the cash register actually knew something about the gear I was interested in and could answer technical questions. Second, prices were low and the quality of items seemed pretty good.

So yeah, there are reasons I would hate to see them go...

taivare 12 hours ago 0 replies      
It saddens me to read this. I bought my first computer from Radio Shack, a Tandy 1000EX. It's now gutted and hanging on the wall above my desk; my nephew said, "Look mom, a computer with a typewriter built into it!" Upon discovering how I memorialized my first computer, my nephew went on a search for his first phone... rest in peace, RS.
Naga 8 hours ago 0 replies      
If you think working on Thanksgiving is bad, the crummy retail store I work at (a bookstore in Canada) is open New Year's Day. It's actually open every day of the year except Christmas Day.
wheaties 1 day ago 0 replies      
It's not just the company that is odd and off the mark. The bond holders are also screwy: http://mobile.bloomberg.com/news/2014-05-13/radioshack-lende...

The bond holders stopped them from closing unprofitable stores...

RevRal 1 day ago 0 replies      
Ah RadioShack, where I acquired Jazz Jackrabbit and a circuit board toy with my dad. It's sad that RadioShack isn't as fun as I remember it being, but last time I was in I saw a similar circuit board toy which is nice. It's always sad to hear that a company treats their employees as though they are no more than cattle tugging a plow.
beloch 23 hours ago 0 replies      
In Canada radioshack rebranded itself as "The Source". I used to pop into radioshack to find plugs, adapters, and the occasional electronics component. They stopped carrying those years ago and now, if I want anything like that, I either drive across town to a hobby shop or order online. Since "The Source" now seems to specialize in overpriced junk, I haven't set foot in one for years. This article makes me feel good about that decision. It would be better for ratshack's workers to be out of a job and then in a job that treats them as human beings.
amorphid 1 day ago 0 replies      
As of today (Nov 26), Yahoo Finance shows RadioShack as having 3-ish billion USD in revenue, and a market cap of 83 million USD. I guess there isn't a lot of faith on Wall St. that RadioShack is going to turn itself around.
chipsy 22 hours ago 0 replies      
My earliest memory of Radio Shack (which is probably circa 1990, so even before the name change) is that I walked in accompanied by my mother and attempted to play the display miniature pinball game, and then was yelled at by the manager for this crime.

I've had better experiences; I even got a microSD from them the other day (after being tipped off by a friend that they were cost-competitive on this one item). But it has never been warm fuzzies.

Elzair 15 hours ago 0 replies      
Here is a good blog post on the history of the TRS-80 that also discusses RadioShack's corporate culture. http://www.filfre.net/2011/06/the-trash-80-part-1/
mgirdley 1 day ago 1 reply      
Just tonight my father-in-law's new TV required an HDMI cable. He suggested RadioShack. I said "I'm going to Target. At RadioShack you have to talk to people."

The days of high-touch service are over.

Scuds 1 day ago 1 reply      
Christ, I can't imagine what their corporate IT must be like.

Probably the Windows XP syndrome of never ever ever never investing in your infrastructure, but stretched over forty years.

derekp7 1 day ago 0 replies      
Part of the fall of Radio Shack is that the market for discrete components isn't what it used to be. This is because not as many people take up electronics as a hobby. And that, in turn, is because a lot of people that would go into electronics, got sucked into computer programming instead. After all, the decline of Radio Shack coincides with the rise of affordable personal computers.
jdeibele 23 hours ago 0 replies      
Surprised that nobody has mentioned BatteriesPlus. Checking their website, they have at least 586 stores selling batteries - cell phone, laptop, car, etc. - and light bulbs.

Focusing on those areas seems exactly what RadioShack could and should have been doing.

zafka 1 day ago 1 reply      
I still have a cuecat!!! I never intended to use it for its intended purpose, but I thought it was great to get a free bar code reader.
foxhop 12 hours ago 0 replies      
This honestly reminds me of Sears Electronics Department, I worked there for 3 years.
bch 1 day ago 0 replies      
This reads like a revolting Douglas Coupland novel.

Well done.


sz4kerto 1 day ago 0 replies      
Maybe I have a sad life otherwise, but I was laughing so hard sometimes. Thanks for this.
GoldenHomer 1 day ago 0 replies      
Oh RadioShack, I hardly knew ye. I do regret getting that crappy Virgin Mobile smartphone, which was the only thing I ever bought from a RadioShack. RIP in pieces.
drdeadringer 1 day ago 0 replies      
I just read a description of a business calibrated to "Kafka Settings", or at least a feature biography in "Zombie Business Today".
mbubb 1 day ago 0 replies      
One thing the local radioshack is good for is to recycle old batteries - will have to find a new place to do that.
VLM 1 day ago 0 replies      
My Dad worked there in the 80s between contracts for a couple months. "Well this contract ends this month, and I've got an awesome database migration contract starting in 6 months, so I've gotta find something to do meanwhile (decades before FOSS type stuff), and I've spent tons of money at RS since it was allied radio and I was a kid ..." This was before they turned into Cell Phone Shack. About half the time they got well over minimum wage back then, even on low sales stores. Sounds like they earn even less now than in the 80s, not adjusted for inflation! One thing that didn't change was the Shack expanding to own your life, so it starts as part time and somehow 3 months later you're an asst mgr (why?) and "working" 90 hours and then a couple months later you're all WTF am I doing here at 100 hrs a week, my contract job starts next week, bye!

The article had a lot of weird stuff about not understanding why people wouldn't leave a crappy job. Well, perhaps 90% of the time the only crappy part was the paycheck, you just hung out and screwed around and occasionally sold stuff to people. So I'd take the city bus to his store after school and we'd play video/computer games for HOURS during the slow times. We had kind of a game on betting when the last customer would come in, maybe 6pm or so most days, then we had the store to ourselves till 10 or so, literally a kid in a toy store. It honestly was a lot of fun almost all the time, just not much pay. You could make a ton of money at lunch and after work rush, and saturdays, like $40/hr (which in the 80s was a lot), but you had to hang out all night long at minimum wage if you wanted the plum hours. Sales were extremely uneven over time. I still don't understand why they ever opened before noon or were open after 6pm or so.

I can prove he worked there... in the mid 80s each shipment contained a hopeless VCR tape of sales blather not for public consumption. Sell Sell Sell!

marincounty 1 day ago 5 replies      
I think Radio Shack could be reborn if it did a few things. Get rid of the cheap toys. Go back to just electronic parts and today's technology -- like having the Arduino and Raspberry Pi always in stock -- along with all the less-known electronic kits. Try to keep the prices down. Keep all the radios and cell phone stuff, but get rid of the advertisements (I don't like to walk into loud stores). I would also devote a small section of each store to used/recycled/surplus stuff. The workers shouldn't be required to wear ties.
api 1 day ago 1 reply      
Radio Shack was my toy store as a kid, as far back as I can remember. My favorite toys were volt meters, little electric motors, power adapters, LEDs, bread boards, and those little electronics kits with the spring terminals. Radio Shack deserves some of the blame for making me who I am today. :)
jumblesale 19 hours ago 0 replies      
Brum did not have a song.
squozzer 10 hours ago 0 replies      
No matter what happens to RadioShack, the stories were told masterfully. Stoned Craig, who reminds me of myself, is my hero. Hacking the merchandise in an anti-social way is what being a RadioShack customer is all about!
larrys 1 day ago 0 replies      
Nice bashing of a company fighting for its life. Point being, it's easy to do the right things if you are awash in profitability: hire the best people in management, pay for the top consultants. [1] And of course give people massages and free lunch and all sorts of bennies. Or if you have been funded with funny money that gives you "runway" to burn through.

The highest quality people don't generally decide they want to join a sinking ship (let's say a store manager or district manager). The quality people are either happy elsewhere already (and not looking) or they take advantage (as a general rule) of the better opportunities either because they can or because they are smart enough to recognize those opportunities and pursue them. At my first company there were people that didn't even show up for interviews. Maybe they saw the facility and didn't like the way that it looked.

Separately, presumably if the author had a better opportunity he wouldn't have suffered for "three and a half years as a RadioShack employee".

[1] But even then if your basic model is not viable you aren't going to stand a big chance of making it. This isn't something like "People still buy cars Chrysler just can't make a car that people want".

RickHull 1 day ago 1 reply      
Food for thought: any organization is vulnerable to such stagnation and dysfunction, particularly when it was born for and evolved with a state of affairs that no longer exists. Either the organization must adapt, or it should die. Thankfully, the death of dysfunctional organizations is a given in free markets -- perhaps even a reason for bittersweet celebration.

Governmental organizations, in contrast, have no incentives to adapt, and no "recycling" mechanism. Failure is emphatically not an option. Worse, bureaucracy inevitably takes on the primary mission of preserving itself.

Quantum OS - OS based on Linux which conforms to Material Design guidelines
388 points by turrini  5 days ago   163 comments top 34
DCKing 5 days ago 6 replies      
I'm just going to leave a comment to contrast the usual negativity.

What I like about this is:

- The developers are not attempting to reinvent the wheel.

- Material Design is well-regarded on mobile devices, and it's interesting to see what you can do with its ideas on the desktop.

- They clearly pick technologies that are the way forward on the Linux desktop: Wayland and Qt+QML. It's great that these things are maturing to the point that we can start leaving behind X and that it's clear that Qt has won the toolkit war.

- Focused approach of not (yet) supporting different distros, but (like Elementary OS) an attempt to focus on getting it to work well for one system configuration.

I hope these guys can put in the time and effort to make something out of this. It seems they are making a couple of good decisions already.

sz4kerto 5 days ago 5 replies      
Please no. Material design might work well for mobile devices, where you don't spend too much time in front of a single screen (by screen I mean app screen, not the physical screen). The screenshots show exactly what the problem is:

- really bright colors in the top bars: that's exactly what we _don't_ want in desktop apps, as the focus is on the (changing) content, not the title bars. Grab this http://quantum-os.github.io/images/desktop_layout_1.png, resize it to full screen, and you'll see what I mean.

- large empty spaces: for me it's too much even on touch-based devices, but on the desktop it's just utter waste of real estate.

I want the desktop to get less and less popular, so then the OS and app makers can start optimizing for people who create stuff (e.g. me), and it could look like http://i.imgur.com/7Tu2i6W.png or http://yxbenj.files.wordpress.com/2013/02/vs2012_colortheme_...

igammarays 5 days ago 5 replies      
Am I the only one who doesn't get this Material Design everywhere thing? Maybe this is why I'm not a designer, but do we really want all our UI experiences everywhere to look exactly the same (with perhaps color variance only)? I mean, I'm all for having design principles, like the timeless principles of typography, which have near-infinite variance but a powerful yet subtle underlying set of guidelines that make them beautiful and legible. Material Design seems to me like Google branding forced all over the place. Boring.
Spearchucker 5 days ago 1 reply      
I like this. I don't like it because I like Material or even Linux. I like it because I want to build a compelling user interface for my own app. The two best reference UIs I've seen (and yes, this is subjective) are Office and Visual Studio. Nothing else out there that's even vaguely mainstream and on a desktop pushes boundaries like those two. And yet I like neither of them. My own design (https://www.wittenburg.co.uk/Entry.aspx?id=bc4a9a14-cdd5-4c0...) reflects the best I can do. I'm not creative, nor am I an expert at UI or UX, so it's really useful seeing what others come up with.

And that is why I appreciate both the effort it took to put this together, and a new perspective that someone was generous enough to share.

ai_ja_nai 5 days ago 2 replies      
I'm not getting why someone has to declare "we are creating a new OS" instead of just settling on "we are creating a better UI". An OS is a very serious thing that has nothing to do with usability as it is perceived by grandma.
overgard 5 days ago 1 reply      
I hope they package the desktop environment in a way that it can also be used on other distros. Going to an entirely new OS seems a bit much to me, but I'd be way more willing to try it out if I could just install it next to KDE or whatnot.
imsofuture 5 days ago 2 replies      
Are we still confused about OS vs window manager?
marknadal 5 days ago 0 replies      
I've been asking a lot of people lately if they know of any "fresh start OSs that have modern minimalist UX" - and haven't gotten any good responses. So much so that I've been planning on maybe building my own soon. But THIS, this looks like a great beginning! I'm super excited to watch this make progress! Congrats, great job, and keep it up.
kagia 5 days ago 5 replies      
I for one think it's time we saw bold and ambitious attempts at changing the desktop.

Most desktops today are arguably variations of the desktops we were introduced to in Windows 95 and OS X (v10). The colours, placement and names change, but rarely does anything new show up (save for the Metro desktop).

With Wayland and Mutter/Qt+, this is a great time to try out wild and out-there concepts. It's the only way to break out and really change the desktop.

I can understand people's frustration; the desktop, after all, wraps up everything we experience when we use our machines. However, I will approach this with an open mind, and I certainly hope others will do the same.

jarcane 5 days ago 5 replies      
But ... why?

What does Material even offer to a Desktop OS?

And why a whole custom distribution, instead of a desktop environment?

mattd9 5 days ago 0 replies      
The new version can be found here: https://github.com/quantum-os You can read the reasons here: https://plus.google.com/113262712329378697012/posts/M1muF1f7...
2lphacod 5 days ago 0 replies      
Firstly, really great initiative. Looking forward to this. However, aren't these two objectives contradictory?

"The focus will be on creating a "stable" and easy-to-use operating system"


"Our goal is to base our work on the latest upstream versions available"

(instead of using more tested and reviewed versions). Unstable drivers are a very common issue, making the Linux experience hard.

livebeef 5 days ago 0 replies      
They should just write a gnome/metacity theme and call it a day. There is no point in having a new OS/distribution just to feature a new UI.
Vecrios 5 days ago 1 reply      
So much negativity in these comments. Instead of saying "why?" or "MD is for mobile screens," present your opinions in a factual manner to support your arguments.

I personally don't prefer the MD/Metro design philosophy. This stems from the fact that I believe in designs that help accomplish a task, not prettify it for the sole purpose of prettifying it.

sandGorgon 5 days ago 0 replies      
For people wondering about GTK,this was posted on Reddit


>Evolve OS reached out to them, we started with a chrome OS esque environment, and we started a material gtk theme. This uses qt, which is good for new apps, but not existing. Evolve is implementing a new lib for animations, that'll work with gtk, and existing apps. We've got dreams for it, check out the live stream on YouTube!

coleifer 5 days ago 0 replies      
Is this in any way related to this project?


It's based on Evolve OS.

Rapzid 5 days ago 0 replies      
I would be more interested in this as a desktop environment alternative, or a skin of an existing one. Not as a new distro though. The message is a bit mixed but I gather they want to control a new distro ecosystem and are starting with the UI as a way to draw in users.
knappador 5 days ago 0 replies      
Bells should be going off in a head somewhere about using Android alongside a desktop Linux home-rolled to become an amalgamated ecosystem of both and "just work" on a PC form factor but with LTE. Developers are demanding it so much that we're building it.
killercup 5 days ago 3 replies      
I generally like the look and feel of Material, but I'm not sure how well it works with mouse and keyboard.
riyadparvez 5 days ago 2 replies      
Link is broken!
desireco42 5 days ago 0 replies      
This is one of the best ideas I heard in years. To me this totally makes sense, plays on Linux strengths etc. I wish you best and intend to follow closely.
scotu 5 days ago 3 replies      
I really don't want to be snarky or downplay anyone's efforts, but I just wanted to point out that such an operating system already exists and is open source. You may have heard of it; it's called Android... I can't even think of a valid reason to "reproduce" it (aka make something that kind of looks like it but in the end is annoyingly different). For fun, maybe, is the only valid one...

Seriously though, if you delivered a nice installer for a really up-to-date Android on my desktop, with the thousands of apps in the Play Store, I would be really grateful...

Can someone think of any solid reason for why I should be interested?

MrBra 4 days ago 0 replies      
Why not ship this just as a theme for current Linux window managers? What is the reason for a whole OS behind it?
eklavya 5 days ago 0 replies      
If nothing else, a material UI widget set comes out of it which I can use to write android and ios apps. Thanks for this :)
augustk 5 days ago 0 replies      
If the special effects can be turned off and it is as efficient as the Blackbox window manager, I may give it a try.
zorbo 5 days ago 2 replies      
Pointless cached version without the pretty pictures: http://webcache.googleusercontent.com/search?q=cache:ypB2BA-...
rrggrr 5 days ago 0 replies      
Reminds me of BeOS. Not quite sure why. Great effort. Keep at it.
tormeh 5 days ago 0 replies      
Is it based on Unity? It looks really good.
lwelly 4 days ago 0 replies      
Where can I download it from?
kchoudhu 5 days ago 0 replies      
"Google, will you hire me? Pretty please?"
zenciadam 5 days ago 0 replies      
What's the ultimate tensile strength of the OS?
kolbe 5 days ago 0 replies      
Material Design aside, the development community should have somehow reserved the name "Quantum OS" for the first operating system to run on the first quantum computing device. I'm almost offended that some port of Linux is trying to bogart it.
gcb0 5 days ago 0 replies      
You know what would be a great idea? Take a window manager that everyone is hating because the usability and customization that took years to achieve are being thrown out the window by the new maintainers just to copy Apple's UI, add Google's latest UI mumbo jumbo on top, and instead of just releasing it as a window manager, call it a new distro.
Python idioms I wish I'd learned earlier
379 points by signa11  1 day ago   159 comments top 22
ims 1 day ago 2 replies      
I think the example in #4 misses the point of using a Counter. He could have done the very same for-loop business if mycounter was a defaultdict(int).

The nice thing about a Counter is that it will take a collection of things and... count them:

    >>> from random import randrange
    >>> from collections import Counter
    >>> mycounter = Counter(randrange(10) for _ in range(100))
    >>> mycounter
    Counter({1: 15, 5: 14, 3: 11, 4: 11, 6: 11, 7: 11, 9: 8, 8: 7, 0: 6, 2: 6})
Docs: https://docs.python.org/2/library/collections.html#counter-o...
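For comparison, here's a minimal sketch of that defaultdict(int) for-loop business (the sample data is made up), which Counter collapses into a single call:

```python
from collections import defaultdict

# Counting "by hand": missing keys automatically start at 0.
items = ['a', 'b', 'a', 'c', 'a', 'b']
counts = defaultdict(int)
for item in items:
    counts[item] += 1
# Counter(items) builds the same mapping in one call.
```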

shackenberg 12 hours ago 0 replies      
If you were underwhelmed by this blog post have a look at:

Transforming code into Beautiful, Idiomatic Python by Raymond Hettinger at PyCon 2013

https://speakerdeck.com/pyconslides/transforming-code-into-b... and https://www.youtube.com/watch?v=OSGv2VnC0go&noredirect=1

Topics include: 'looping' with iterators to avoid creating new lists, dictionaries, named tuples and more

dllthomas 21 hours ago 1 reply      
"Because I was so used to statically typed languages (where this idiom would be ambiguous), it never occurred to me to put two operators in the same expression. In many languages, 4 > 3 > 2 would return as False, because (4 > 3) would be evaluated as a boolean, and then True > 2 would be evaluated as False."

The second half of this is correct, but it has nothing to do with whether the language is statically or dynamically typed. It's a tweak to the parser, mostly.
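Concretely, the chained form desugars to pairwise comparisons joined by `and`, while the C-style left-to-right reading coerces the first result to a number:

```python
# Chained comparison: equivalent to (4 > 3) and (3 > 2), with the middle
# operand evaluated only once.
a = 4 > 3 > 2        # True
# The C-style reading: (4 > 3) -> True -> 1, and 1 > 2 is False.
b = (4 > 3) > 2      # False
```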

__luca 17 hours ago 1 reply      
Sincerely, Transforming Code into Beautiful, Idiomatic Python by Raymond Hettinger... http://youtu.be/OSGv2VnC0go
euphemize 1 day ago 1 reply      
One of my favorites:

    >>> print "* "* 50
to quickly print a separator on my terminal :)

Previous discussion on python idioms from 300 days ago: https://news.ycombinator.com/item?id=7151433

ghshephard 22 hours ago 4 replies      
Wow - that's really, really great list.

In particular, #7 is something that I didn't even know existed, and I've been hacking around for 2+ years.

Instead of:

   >>> mdict={'gordon':10,'tim':20}
   >>> print mdict.get('gordon',0)
   10
   >>> print mdict.get('tim',0)
   20
   >>> print mdict.get('george',0)
   0
I've always done the much more verbose:

   class defaultdict(dict):
       def __init__(self, default=None):
           dict.__init__(self)
           self.default = default
       def __getitem__(self, key):
           try:
               return dict.__getitem__(self, key)
           except KeyError:
               return self.default

   mdict=defaultdict(0)
   mdict['gordon']=10
   mdict['tim']=20
   print mdict['gordon']
   10
   print mdict['tim']
   20
   print mdict['george']
   0
I'll be sure to make great use of the dictionary get method - I'm embarrassed to admit how many thousands of times I could have used that, and didn't know it existed.

rnhmjoj 23 hours ago 2 replies      
This is something I do instead of writing a long if-else:

    opt = {0: do_a,
           1: do_b,
           3: do_b,
           4: do_c}
    opt[option]()

ckuehl 23 hours ago 1 reply      
> There is a solution: parentheses without commas. I don't know why this works, but I'm glad it does.

It's worth mentioning that this is a somewhat controversial practice. Guido has even discussed removing C-style string literal concatenation:


You may wish to consult your project's style guide and linter settings before using it.

desdiv 1 day ago 7 replies      
I'm not much of a Python guy, but that chained comparison operator is sweet!

Sure, it's just syntax sugar, but it saves a lot of keystrokes, especially if the variable name is long.

Is Python the only language with this feature?

RyanMcGreal 17 hours ago 1 reply      
I came across this when I was first learning Python and it has always impressed me:

    from random import shuffle
    deck = ['%s of %s' % (number, suit)
            for number in '2 3 4 5 6 7 8 9 10 Jack Queen King Ace'.split(' ')
            for suit in 'Hearts Clubs Diamonds Spades'.split(' ')]
    shuffle(deck)

wodenokoto 1 day ago 3 replies      
Can someone direct me to a comparision of subprocess and os? I keep hearing subprocess is better, but have not really read any explanation as to why or when it is better.

(I'm glad I'm not the only one who was thrilled to discover enumerate()!)
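For what it's worth, the usual argument sketched in code (the command here is just an illustration): os.system hands the whole string to a shell, so metacharacters get interpreted and output goes straight to the terminal, while subprocess takes an argument list (no quoting pitfalls) and can capture the output for you.

```python
import subprocess
import sys

# Argument list: no shell involved, no quoting/injection pitfalls,
# and the output comes back to the caller instead of the terminal.
out = subprocess.check_output([sys.executable, '-c', 'print("hello")'])
```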

tomp 19 hours ago 6 replies      
Some comments:

1. Am I the only one that really loves that `print` is a statement and not a function? Call me lazy, but I don't mind not having to type additional parentheses.

5. Dict comprehensions can be dangerous, as keys that appear twice will be silently overridden:

  elements = [('a', 1), ('b', 2), ('a', 3)]
  {key: value for key, value in elements} == {'a': 3, 'b': 2}
  # same happens with the dict() constructor
  dict(elements) == {'a': 3, 'b': 2}
7. I see

  D.get(key, None)
way too often.

8. Unpacking works in many situations, basically whenever a new variable is introduced.

  for i, el in enumerate(['a', 'b']):
      print i, el
  {key: value for (key, value) in [('a', 1), ('b', 2), ('a', 3)]}
  map(lambda (x, y): x + y, [(1, 2), (5, -1)])
Note: the last example (`lambda`) requires parentheses in `(x, y)`, as `lambda x, y:` would declare a two-argument function, whereas `lambda (x, y):` is a one-argument function, that expects the argument to be a 2-tuple.

rectangletangle 20 hours ago 4 replies      
I'm a fan of Python's conditional expressions.

    foo = bar if qux is None else baz
They're particularly interesting when combined with comprehensions.

    ['a' if i % 2 == 0 else 'b' for i in range(10)]
Though this particular example can be expressed much more concisely.

    ['a', 'b'] * 5

leephillips 23 hours ago 0 replies      
I was grateful for the example of multiline strings, mysterious as it is. The lack of an obvious way to do this has been an annoyance of mine for quite some time.
polemic 23 hours ago 0 replies      
I work with Python full time, and the last (#10, string chaining) is one of the few times the syntax has caused me grief, due to missed commas in what were supposed to be tuples of strings. The chaining rules are one of the few sources of apparent ambiguity in the syntax, especially when you include the multiline versions.
TheLoneWolfling 15 hours ago 1 reply      
I wish there was an interval set in Python's builtins.

I also wish that ranges were an actual proper set implementation - so you could, for example, take intersection and union of ranges.

And I wish that Python had an explicit concatenation operator.
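For what it's worth, intersection of plain step-1 ranges can be sketched arithmetically (hypothetical helper; Python 3 range objects, which expose .start/.stop):

```python
def range_intersection(r1, r2):
    # The intersection of two step-1 ranges is just the overlap of their
    # intervals; range() is already empty when stop <= start.
    return range(max(r1.start, r2.start), min(r1.stop, r2.stop))
```

e.g. range_intersection(range(0, 10), range(5, 15)) gives range(5, 10).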

tinkerdol 19 hours ago 1 reply      
"Missing from this list are some idioms such as list comprehensions and lambda functions, which are very Pythonesque and very efficient and very cool, but also very difficult to miss because they're mentioned on StackOverflow every other answer!"

Can anyone link to good explanations of list comprehensions and lambda functions?
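The two in a nutshell (toy data, obviously):

```python
# List comprehension: build a list from an iterable in a single expression.
squares = [x * x for x in range(5)]

# lambda: a small anonymous function, often used as a key= argument.
words = sorted(['pear', 'fig', 'banana'], key=lambda w: len(w))
```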

jemfinch 19 hours ago 2 replies      
Most of these idioms actually make me sad.

When I first started using Python around 1999, it didn't even have list comprehensions. Code was extremely consistent across projects and programmers because there really was only one way to do things. It was refreshing, especially compared to Perl. It was radical simplicity.

Over the decade and a half since then, the Python maintainers have lost sight of the language's original elegance, and instead have pursued syntactical performance optimizations and sugar. It turns out that Python has been following the very same trail blazed by C++ and Perl, just a few years behind.

(At this point Python (especially with the 2 vs. 3 debacle) has become so complex, so rife with multiple ways to do even simple things that for a small increase in complexity, I can just use C++ and solve bigger problems faster.)

panzi 14 hours ago 2 replies      
Oh, wow. I didn't know the dict comprehensions. Since when do they exist? I always used:

    d = dict((key(x), value(x)) for x in xs)

retroencabulato 1 day ago 0 replies      
Nice list, but I was confused by the arguments to the dict .get() example until I looked up the definition.
sherjilozair 16 hours ago 1 reply      
Is there any such collection of advanced Python patterns, aimed at Python programmers with more than 2-3 years of experience?
flares 17 hours ago 0 replies      
haha.. in #1, the easter egg "not a chance" :) :)
WiFried: iOS 8 WiFi Issue
345 points by ValentineC  3 days ago   68 comments top 25
conradev 3 days ago 1 reply      
I was curious as to how Apple implemented AirDrop discovery (particularly the Contacts Only feature) because their documentation is helpful, but vague[1]. I spent a night reverse engineering it, and this is the process:

1. Alice opens the AirDrop sheet.

2. Alice begins advertising her short "proximity"[2] hashes over BTLE.

3. Bob, continually scanning in the background, sees Alice and her hashes.

4. If Bob either a) has the "Everyone" setting enabled, or b) has a match to one of Alice's proximity hashes in his contacts, Bob connects to Alice over AWDL.

5. Bob starts an HTTP server.

6. Bob advertises a Bonjour service for his HTTP server.

7. Alice sends a discovery request.

8. If the request is valid, Bob sends a discovery response (including device model, name and icon).

Also to note, the author, Mario Ciabarra is the (co?)-founder of Rock Your Phone, the alternative to Cydia until the two merged in 2010[3].

[1] https://www.apple.com/privacy/docs/iOS_Security_Guide_Oct_20... (page 23)

[2] Each device hashes every phone number and email in its address book with SHA256 after normalizing them. These hashes are referred to as full hashes. The "proximity" hashes are the first two bytes of the long hash. The full hashes are not broadcasted, but they are verified later over AWDL.

[3] http://www.tuaw.com/2010/09/11/alliance-of-the-jailbreakers-...
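The hashing in [2] can be sketched like this (the normalization step is a guess, since Apple's exact rules aren't public):

```python
import hashlib

def proximity_hash(identifier):
    # Hypothetical sketch of the scheme in [2]; strip/lowercase is only
    # a guess at the normalization Apple applies.
    normalized = identifier.strip().lower()
    full_hash = hashlib.sha256(normalized.encode('utf-8')).digest()
    return full_hash[:2]  # only the first two bytes are advertised over BTLE
```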

PakG1 2 days ago 1 reply      
I gotta say that this is annoying. I currently work for a place where we're all Macs and iOS. Many, many devices all over the place, all broadcasting for AirDrop and AirPlay. We're a school, they are useful features. Everything connects to a mesh wifi network. Neighbouring APs are of course on different channels.

It's like we're the perfect environment to experience this problem on a massive scale. We've tested and replicated OP's test results. So now we have a choice between good wifi and AirDrop/AirPlay? This sucks.

Osiris 2 days ago 3 replies      
I have been suffering some really bizarre WiFi issues on my rMBP since upgrading to 10.10. I'll suddenly get latencies of a second or more. Switching WiFi networks or even turning WiFi off and back on again will solve the problem for a short period of time.

I just tested this fix, sudo ifconfig awdl0 down, and my ping times are consistently low.

jlarocco 2 days ago 4 replies      
Seems Apple's quality control has really been screwing up the last few releases. I haven't updated to iOS 8 or OSX 10.10, and the way things are going I'm not going to any time soon.

If they keep this up, I'm switching back to Android and Linux.

rafeed 2 days ago 0 replies      
Neat. This is very helpful. Hopefully it actually fixes the WiFi performance issues. I created an Alfred workflow for myself based on the terminal commands listed in the article to quickly enable/disable WiFried on Yosemite. It basically runs the terminal command (after authenticating) when you enter a trigger keyword. Quick and easy, just like the iOS tweak.

Edit: Here's a link to the Alfred workflows to disable/enable AirDrop if you're interested: https://dl.dropboxusercontent.com/u/534072/WiFried.zip

canadev 2 days ago 1 reply      
In a related but different issue, my iPhone 4S has not been able to connect to wifi since I upgraded to 8.0. 8.1 didn't fix it. It notified me of some patch today; I hope it works.

I tried a few different tricks I found by Googling, like "Reset Network Settings", but they didn't work. It's weird, after upgrading to 8.1, I had Wifi for a short period, then the next time I came back to my phone it was back to being completely gone. As in the Settings app won't even let me turn it on.

Pretty frustrating.

danielhunt 3 days ago 0 replies      
Very interesting and detailed post

Since upgrading to Yosemite, I've effectively had to stop using Airdrop, as it is now completely unreliable.

The sooner this is sorted, the better.

gaza3g 3 days ago 0 replies      
I had this same issue on my 5S, and a majority of my friends with that particular model seem to be experiencing it as well.

In any case, after browsing the Apple support forums, I decided to try a few of the workarounds there, and the only one which worked was setting my 2.4GHz interface to b/g (legacy)-only mode.

It was an acceptable fix since all my other devices were on the 5GHz band and I don't really have any other option (7.1.2 was great for me; I shouldn't have upgraded).

cobralibre 2 days ago 0 replies      
I see many comments criticizing Apple's QA and wondering how this defect was not discovered prior to release. The bulk of the comments seem to adopt a tone of disbelief. I find that hard to believe.

It would be illuminating to know more details about how development and testing worked for the Yosemite project. Was this a late change? Were there systematic blind spots in the test environments and test processes that made defects like this difficult to discover internally? Are we even sure that this issue was not identified internally? Etc.

72deluxe 2 days ago 3 replies      
I have noticed since iOS8 on an iPad3 and my 2012 MBP (non-retina) that Safari is insanely slow to resolve sites. It gets to about 15% of the progress bar and will just sit there.

Has anyone else noticed this?

mike-cardwell 2 days ago 0 replies      
I thought one of the benefits of the Apple ecosystem was that they controlled both the software and the hardware, so they should be able to catch these things...
twsted 2 days ago 0 replies      
The 'sudo ifconfig awdl0 down' trick does not seem to solve the issues I have on Yosemite (described here, BTW: http://markmaunder.com/2014/11/13/os-x-10-10-yosemite-wifi-p...): TCP retransmissions and TCP Dup ACKs.
apenwarr 1 day ago 0 replies      
Here's my tool for measuring latency glitches in real time on any device with a javascript-capable web browser. Maybe it will help some people narrow down the nature of their wifi slowness problems. In short, if the latency is always high, that's one thing. If it is usually low but then jumps, that's a different thing. http://gfblip.appspot.com/
dennish00a 2 days ago 0 replies      
I've had WiFi problems for many moons on 10.9: long latencies when pinging the router and complete dropouts for periods of 2-10 seconds every 3 minutes or so. The problem seems only to happen with certain routers. I was hopeful that my problems could be explained by the AirDrop issue (i.e., maybe I was only having problems when in proximity to certain computers using those troublesome routers).

Sadly, I don't even have the awdl0 interface. I tried taking down p2p0 but that didn't help either. Any other ideas?

I am really desperate!!!

mrgordon 2 days ago 1 reply      
Yeah, when are they going to fix these issues? Everything was great until iOS 8 & Yosemite. My Mac with Yosemite can no longer use AirPlay, and my AirPort Express no longer shows up as a supported Apple device in AirPort Utility (despite the wifi still working).
tdicola 2 days ago 1 reply      
I'm surprised this issue wasn't caught at Apple. You would think they must have a huge network of Macs and iPhones; surely someone should have noticed the poor WiFi performance.
pkaye 3 days ago 1 reply      
What is the impact on Yosemite if I don't actively use AirDrop. Does it still cause WiFi issues? I'm not clear on the severity of the issue.
newman314 2 days ago 0 replies      
Ever since I upgraded to iOS 8.1, I've seen random wifi drops where wifi just disappears completely and I've got to go back into Settings to reactivate wifi.

As you can imagine, it's exceedingly annoying when trying to watch a video.

acdha 3 days ago 0 replies      
At least in previous releases, you could disable AirDrop on OS X:

defaults write com.apple.NetworkBrowser DisableAirDrop -boolean YES

I don't have any Bluetooth-LE hardware to confirm whether this affects the newer-style AirDrop reported as the problem.

b2themax 3 days ago 0 replies      
On a related note, I believe there is also a hardware problem with the iPhone 6 Plus's 4G LTE radio on AT&T. Same network, but my old Nokia 1020 had a much faster connection. This is anecdotal, but there are reports in other forums as well.
lyinsteve 3 days ago 1 reply      
Did he file a bug report?
yalogin 3 days ago 1 reply      
AirDrop uses Bluetooth for discovery (and maybe even transfer?). Why does the author say it uses WiFi for discovery?
slothbury 2 days ago 0 replies      
Great article! I hope Apple takes the hint from this call to action.
fit2rule 2 days ago 2 replies      
Bonjour over WiFi is used for more than AirDrop: it's also used for virtualized CoreMIDI endpoints, and audio too. Maybe there's something to be said for paring down the Bonjour refresh interval or something, but then we'd all be complaining about how long it takes to connect our studio gear to the fancy set of iPads whose sole purpose is to rock and roll...
ape4 2 days ago 0 replies      
Must be a real problem since its got an icon.
Ask HN: Can we talk about FreeBSD vs. Linux?
344 points by bjackman  5 days ago   210 comments top 50
byuu 5 days ago 10 replies      
Like others have said, being unbiased is difficult, but I'll try. First, Linux v FreeBSD is really tough, so I will instead approach this from Debian v FreeBSD.

I really like how easy Debian on the desktop is: install it, apt-get install xfce, and I have a nice desktop. It's very easy to add Adobe Flash, Steam, Skype, etc.

The FreeBSD desktop isn't as nice. You can add things like Flash and Skype on FreeBSD, but you have to fight harder and often use the Linux emulator. We're missing nouveau, I've had some kernel panics with the nvidia binary drivers (caused by nvidia's own shoddy code, not FreeBSD's fault), and there are a lot of missing and unstable features due to developers primarily targeting Linux these days (Thunar's file refresh is glitchy and often fails to update, Thunar volman only really works with udev/Linux, mousepad crashes when you open a file whose size is an even multiple of 4KiB due to a bug in their code and a quirk of Linux mmap, libvte-based terminals tend to crash sometimes when you open them due to a bug somewhere between libvte and FreeBSD's /bin/sh, file-roller explodes when you try to extract large archives, Firefox has freezing issues with loading gigantic images unless you set MOZ_DISABLE_IMAGE_OPTIMIZE=1 in your environment, on and on.)

And it's also not really configured well out of the box for the desktop. I have to make this org.freedesktop.consolekit.pkla file and add entries to it in order to get the restart and shutdown buttons in Xfce to work. I have to create a fontconfig/fonts.conf file and substitute Helvetica with Sans in order to get Firefox to anti-alias text on web pages. And so on.
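For reference, the .pkla I mean looks roughly like this (the action names are the standard ConsoleKit ones; the section title and group are just examples):

```ini
[Allow shutdown and restart for local users]
Identity=unix-group:wheel
Action=org.freedesktop.consolekit.system.stop;org.freedesktop.consolekit.system.restart
ResultAny=yes
```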

You are also doing all the setup from scratch. You install xorg, you install your video drivers, you set up xorg.conf, you create .xinitrc, you install a display manager if you want one, etc. This is both good and bad. It's great if you love tweaking your system, it's bad if you just want to throw it on a box and run it.

Moving on ... I really, really appreciate Debian's branches. If you install Wheezy, you can get security updates for packages, but not get version bumps. With FreeBSD, you have to choose between "the packages made at release time", or "the absolute bleeding edge." These updates can and do break workflows, especially on the desktop (Firefox pushed Australis on me, ibus moved to this braindead, slow-as-molasses super+space IME changer, etc.) The actual package installs are about the same for binary (apt-get vs pkg), but I much prefer FreeBSD's ports for building software (which is great when you need to patch software bugs.)

But if you're patient and good at fixing problems, you can end up with a rock-solid desktop. And hey, maybe PC-BSD will save you all of the above steps, too. I kind of look at FreeBSD vs PC-BSD as I do Debian vs Ubuntu: I'd rather know how things work than have it all done for me.

In terms of features, I really like FreeBSD's ZFS, even though it does eat a lot of RAM. Snapshots, whole-disk encryption on root, encrypted swap, mirroring/striping/RAID even across different disks, easy resilvering, etc. I also really like pf a whole lot more than iptables, as I find the syntax a whole lot more readable and flexible, although I will lament that FreeBSD's pf ships with ALTQ (QoS) off by default. I prefer the base system being maintained by the FreeBSD team. I like the consistency, the minimalism, and the great documentation.

Whenever I spot a difference between FreeBSD and Linux, I almost always favor the former's design: /dev/random behavior, jails vs cgroups, SO_NOSIGPIPE socket opt instead of needing the MSG_NOSIGNAL flag, etc.

I like that FreeBSD avoids a lot of the 'licensing wars' BS of Linux. I have Firefox instead of Iceweasel, I have cdrtools instead of cdrkit (I continue to this day to have issues with burning on Linux), I have ZFS instead of btrfs, we had sound mixing in an OSS fork instead of ALSA, and just in general I favor the BSD/MIT/ISC licenses to the GPL.

I very much like that FreeBSD is much more conservative about major changes, and open to choice, so we don't get things like systemd, Pulseaudio, etc pushed on us before they are ready for production. I used to love this about Debian as well, but they've really lost their way as of late in pushing systemd on everyone before it was ready. I greatly value stability over whiz-bang bleeding edge features. I like that FreeBSD is run by a democratically elected core team instead of by a benevolent dictator (I don't have to read what crazy thing Linus or Theo said today in the news.) I like that FreeBSD isn't balkanized into hundreds of different distros. I love not having people telling me to say "GNU/FreeBSD" whenever I mention FreeBSD. I like that third-parties like Redhat don't exert so much control over upstream.

I like the community of FreeBSD more, as it feels that most of the members are more technically oriented. Distros like Ubuntu have brought in a lot of users with little to no experience nor interest in learning the unix way of doing things. I'm not saying this is a bad thing, or that I think less of eg Ubuntu users, just that I prefer the company of sysadmins and developers over gamers and web surfers. I dislike that I can't really discuss the OS with anyone in real life, because it's too niche.

lifeforms 5 days ago 0 replies      
I have introduced FreeBSD at a number of companies, and I'm a big fan of it, but also like Debian and Ubuntu a lot. Here are some things I like about FreeBSD:

* Since it's all developed in a smaller community, and FreeBSD hackers/users tend to be pretty opinionated about many things, most subsystems have a consistent feel and are very unsurprising. This means that you can often guess and be right. It also means you don't get some of the hyped trends. (I have the same feeling about the Go language and community, by the way.)

* This might be subjective, but I feel that many administration tasks are very transparent, orthogonal and simple. Much system configuration, like enabling services or network settings, can be changed with a simple setting in the /etc/rc.conf shell-variables file, with defaults in /etc/defaults/rc.conf. Stuff ends up in understandable text config files. Comments and options tend to be simple and don't require --lots-of-long-options --which-i-cant-ever-remember, which GNU utils seem to really love :(

* The base system is being developed in a conservative way. We never reinstall servers for a major upgrade, we just upgrade running systems forever. We have some boxes that come from FreeBSD 6.0 (2005) and were gradually bumped up without reinstalling. There is little switching around of subsystems, so you don't often have to learn replacements (like ipfwadm/ipchains/iptables). The OS is just a vehicle to me, please don't complicate my life needlessly!

* The storage options are pretty nice. ZFS is now a common buzzword so I won't go into it, also Linux is now getting a good replacement in btrfs. But I also like FreeBSD's GEOM, which has a lot of 'building blocks' that you can stack on eachother. For instance, you can use gmirror to create a RAID1 mirror, then use geli to create an encrypted volume on top of that.

* The ports/packages collection is a rolling release and generally has very new versions of apps from OpenJDK to Node to PHP etc. In my mind, it's best to keep your app stack continuously fresh, compared with lagging and doing large upgrades every few years. That way you get security updates from the app authors constantly, instead of from maintainers who backport it. I actually prefer this, since upgrading apps creates tangible value for our developers and customers, but it's a tradeoff. If you want to be hands-off, set-and-forget, no-budget-for-maintenance, then you might be better off with an Ubuntu LTS or RHEL strategy of locking apps into old versions with backported fixes. But damn, I cringe whenever I see a box with PHP 5.3...

* If you compile ports yourself, the port maintainers often create configurable options so you can build stuff with your own preferences. This is very useful for instance if you need ffmpeg with some specific codecs. It can be a real drag on Linux to get it just right. Creating your own binary package repository with your favorite custom options is super easy.

* We distribute that stuff like crazy (e.g. on appliances), and it's nice to be able to do that without having too much fear about the GPL(v3).

That said, there's some positive points about Linux (mostly Ubuntu/Debian here):

* Desktop support is undoubtedly better on Linux right now. FreeBSD desktops seem a bit like Linux desktops were in the '00s: manual twiddling, lots of tweaking, having to really watch what video and wireless hardware you get. PC-BSD should be better in this regard, but I haven't tried BSD on the desktop for years now. But I'm tempted to give it another try.

* I think AppArmor is a really cool way to constrain applications without going the whole container way, for which FreeBSD doesn't have an equivalent right now.

* While I'm skeptical about systemd's overreaching goals, I do agree that it is time to centralize service and event management. It's not a critical issue to me however, I'm not too bothered by the old inits for instance. So hopefully FreeBSD doesn't have to have a big controversy and pick the 'good parts'.

* Obviously Linux has a lot of momentum right now, so using it might be less of a long term risk in terms of support, hiring, etc.

jocmeh 5 days ago 3 replies      
I started using FreeBSD (version 4.8 at the time) in 2003 and I began a switch to Ubuntu in around 2011.

Disclaimer: the following is all very opinionated, and 8.1 is the last version of FreeBSD I actively used. Actually, it's still running without any problems on a colocated machine, but it's a bit of a "don't touch it" situation, because upgrading even a single piece of software would probably cause a cataclysm of struggle. So I don't touch it (except for the occasional move of something to an Ubuntu VPS), but it works like a charm.

I've tried version 10 in a virtual machine because it had a new installer, but it was not the best experience ever (FreeBSD's x.0 versions have always been better to skip; the x.1s were fine though). FreeBSD really needed a new installer and they did some great work, but it did not yet feel very solid. On the other hand, I've had to restart the Ubuntu installer often enough as well, and I have a love/hate relationship with both of them.

Okay, let's hit it.

FreeBSD: it's clean, it's minimal. It's very good for learning *NIX stuff because you will have to do a lot of configuration yourself. What they do is very well done and I'd say it's hard to break it. Jails are very cool (OS-level virtualization), but like everything else in the FreeBSD world, they're quite some work to maintain. If it runs, it runs and will keep on running basically forever. If you want to upgrade stuff, brace, brace.

The ports tree was amazing back in the day, but has been overtaken by stuff like apt-get. Installation of bash used to be like this: cd /usr/ports/shells/bash && make install clean. And then, depending on your hardware and the port's dependencies, minutes or hours of waiting on the compilation. You can upgrade your base to any version without having to fear. You fetch a specific version, compile all the things (again, that is a lot of waiting), run mergemaster to fix up your configurations, reboot and you're back.

FreeBSD isn't very cutting edge on the level of hardware support, but it comes with OpenBSD's packet filter, which is the best firewall ever: very powerful yet easy to set up. Also, the ZFS support is awesome. That file system is truly amazing, very flexible and serious about data integrity. It's cool until it breaks though, 'cause you'll be diving deep into obscure Solaris documentation and just praying to get your data back. But I had ZFS on an external USB drive, and you just shouldn't do that. With built-in drives you'll probably be very safe with ZFS.

To sum it up: if you're patient, care about technology and have a lot of time, FreeBSD is truly a marvellous operating system that will never let you down. It's solid stuff and it's serious, but it is an investment.

Ubuntu: as someone who started with FreeBSD, Linux always felt a bit messy to me. We, the FreeBSD users at the time, used to make fun of Linux guys, saying they have scripts for everything. And in some way I still think that's true. ;) The scripts in init.d very often have 'issues'. Ubuntu does a lot of stuff for you automatically. Installation of software couldn't be much easier, and many times there's not much to configure. A 'disadvantage' is that you don't have to spend a lot of time figuring out how the software works and what all its options are; it just works. The way configuration files are organised is pretty neat, with almost everything in its own dir in /etc/ and foo.d dirs for adding extra options in small config files. (This is very contrary to FreeBSD, where you have one big /etc/rc.conf to configure the stuff that starts when booting and where you can define extra parameters for the software. There's /etc/ for configuring the base system and /usr/local/etc/ for config files of software that's not in base.) When you're used to pf, iptables is just shit (but ufw helps a lot). It's such a struggle to configure.
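To make the pf point concrete: here is a minimal pf.conf sketch (the interface name and port list are made up for illustration) showing how readable the syntax is compared to iptables chains:

```
# /etc/pf.conf -- hypothetical minimal ruleset
ext_if = "em0"                 # external interface (assumed name)
set skip on lo0                # don't filter loopback
block in all                   # default deny for inbound traffic
pass out all keep state        # allow all outbound, statefully
pass in on $ext_if proto tcp to port { 22 80 443 } keep state
```

Each rule reads roughly like an English sentence, which is a big part of pf's appeal.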

So why did I switch from FreeBSD to Ubuntu? Reason number one is because the place where I started working had done that. I really loved FreeBSD and I didn't really have that feeling towards Linux, but using Linux in day-to-day life is just a lot easier. It's more common; most people use Linux, so you don't have to do 'BSD-specific stuff'. Also, I loved doing all the configuration stuff years back, but I know how it's done now and I mostly just want my stuff to work without spending hours on configuration and maintenance. You can do that with one FreeBSD server without any problem, but when you have to manage around 50 servers, it becomes quite a thing. Installing a new kernel on an Ubuntu server or upgrading software is super easy, and you don't have to wait for hours because you don't have to compile all the stuff.

To conclude, I'd say: just give FreeBSD a try, it's never a bad idea to have a look at what others do. I just wouldn't advise rolling it out to lots of servers, unless you have a very specific reason to do so.

Random_BSD_Geek 5 days ago 2 replies      
Long time reader, first time poster.

I've run FreeBSD on laptops, desktops and servers since 2.2.7. Laptops are not its strong suit. It works great for me as a desktop, but I've been tinkering with X desktop configurations for a long time, and don't mind doing some work to have a desktop that functions precisely the way I want it to. Also, it is not my only desktop. (/usr/ports/sysutils/synergy FTW!)

Servers and network appliances are where FreeBSD really shines. The ports tree can be updated independent from the base OS and the base OS can be updated independent of ports. No upgrading to a new OS just to get a security fix for your web server.

ZFS is brilliant. ~7 years of FreeBSD/ZFS and no issues.

PF makes every other firewall I've run (including every commercial option) look silly. PF is like the Python of firewalls: optimized for readability. Add in CARP and PFsync for easy fault tolerance.

The project is managed by a democratically elected core group. Like a real democracy, sometimes this means change happens slowly. But the deliberate approach to change is part of what makes FreeBSD great. It's stable, predictable, and reliable. It functions as a well-engineered, well-documented whole.

FreeBSD's biggest fault is how little attention it draws to itself. It is quietly brilliant. It just works. It doesn't try to do everything. It's just good, reliable infrastructure.

wyc 5 days ago 1 reply      
This is a brief (probably incomplete) summary of my understanding (many points also supported by the essays in your included link):



FreeBSD has the concept of a base system: a set of tools intended to work together harmoniously, maintained by a core group of people. You can easily find evidence of this by looking at the source code; the userspace tools sit right next to the kernel[0]. This is in contrast to _GNU_/Linux, where everything (including coreutils) is pulled in from various sources. Many Linux distributions emulate a base system by including utilities that transform the kernel into a complete standard system (e.g. Debian[1]).

[0]: https://github.com/freebsd/freebsd/tree/master/usr.bin

[1]: https://www.debian.org/doc/debian-policy/ch-binary.html#s3.7



Linux has a benevolent dictator who decides project direction[0], while FreeBSD has a core group of contributors who decide the future of the project. However, I'm not sure that Cathedral vs. Bazaar is a fair comparison to impose on these projects[1]. In any case, both projects seem to have been getting things done, and unfortunately (or maybe fortunately), I'm not too savvy on internal managerial disputes or issues.

[0]: http://www.softpanorama.org/People/Torvalds/index.shtml

[1]: https://www.freebsd.org/advocacy/myths.html#closed-model


Package Management:

The closest Linux distribution to FreeBSD is most likely Gentoo Linux, as its Portage system is very heavily inspired by the FreeBSD Ports system, in which all "packages" are simply recipes to build from source. You can even run the Gentoo project on a BSD kernel[0], although this sickens most FreeBSD users for some reason. Most other Linux distributions default to installing binary packages, which is also possible, but not traditional in FreeBSD[1].

[0]: http://en.wikipedia.org/wiki/Gentoo/Alt#Gentoo.2FFreeBSD

[1]: https://www.freebsd.org/doc/handbook/pkgng-intro.html


Process Management:

Linux has recently added LXC, while FreeBSD has had Jails for a while now[0]. LXC is much better marketed than BSD Jails through Docker, but Absolute FreeBSD has an excellent section that describes how to do isolated deployments via Jails[1]. FreeBSD also has the Linuxulator[2], which emulates 32-bit Linux system calls via FreeBSD system calls, allowing users to seamlessly run Linux binaries on FreeBSD. The FreeBSD startup system, however, has stayed more or less the same for the past few decades, revolving around an rc.conf file and init scripts. Linux has seen many more efforts in this area, including systemd and initramfs.

[0]: https://www.freebsd.org/doc/handbook/jails.html

[1]: http://www.amazon.com/Absolute-FreeBSD-Complete-Guide-Editio...

[2]: https://www.freebsd.org/doc/handbook/linuxemu.html
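For anyone curious what a jail definition actually looks like, here is a rough sketch in the jail.conf format (the name, path, and address are hypothetical):

```
# /etc/jail.conf -- hypothetical definition of a single web jail
www {
    path = "/usr/jails/www";               # jail root filesystem
    host.hostname = "www.example.org";
    ip4.addr = "192.0.2.10";               # documentation-range address
    exec.start = "/bin/sh /etc/rc";        # normal rc boot inside the jail
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;                           # provide /dev inside the jail
}
```

It would then typically be started with something like jail -c www.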



BSD projects use a BSD license, which many businesses prefer over the GNU license used by Linux. However, this is a discussion that deserves more than a small summary.



Linux is most likely to support recent hardware because of extensive userbase and industry support. For example, NVidia's latest CUDA SDKs always have Linux bindings, but not BSD ones.

The BSDs have great reputations for killer implementations of TCP/IP.

The BSDs have been using the GEOM[0] disk management system for a long time, which is one of my personal favorites in terms of features. It allows you to treat character and block devices as pipes, so for example, adding encryption is simply "piping" a bare disk through an encryption layer, resulting in a new device. You can even "pipe" things across the network. Linux has somewhat caught up via device-mapper, so this is not a huge deal if you're trying to choose which one to use. Both are great operating systems. Just use whatever works.

[0] https://www.freebsd.org/doc/handbook/geom.html

It's likely that you know things that I don't, so please feel free to correct me if I'm wrong.

_paulc 5 days ago 0 replies      
FreeBSD user since 2.0.5 (1995) - Linux since 0.95

I have always found that FreeBSD has had a much cleaner and more orthogonal feel as a system than any of the Linux distros and was always much more familiar for UNIX 'old hands'. If you come from the school where UNIX systems shouldn't have displays or the frippery associated with PCs and should be interacted with from a terminal you will probably be comfortable with FreeBSD. It shines as a rock-stable server O/S and in most cases trying to use it as a desktop is going to be fairly frustrating - the easiest way round this is not to bother and buy a MacBook Pro.

My view is that in the late 90s early 2000s adoption was impacted fairly significantly by two rather flaky major releases (3.0 & 5.0) where major bits of the system (SMP/Giant lock) were upgraded and took a long time to stabilise. These felt like a step back from the previous rock solid releases (2.2.8 & 4.11). Realistically the next really good release was the 8.0 series but since then the pace of development has really taken off and the 9.0 and 10.0 series are outstanding.

My view is that it is a great choice as a server O/S - with the significant commercial backing as an embedded O/S looks to have a strong future. I know that there will continue to be an interest in getting the desktop bits working but to be honest I think that this is a lost cause and should be dropped (though I acknowledge that the PC-BSD team doesn't agree)

One other point is that there is probably a chunk of the Linux userbase who probably shouldn't try FreeBSD as it really isn't aimed at them.

morganvachon 5 days ago 1 reply      
My personal (limited) experience with FreeBSD vs my (extensive) experience with GNU/Linux:

FreeBSD pros:

Very stable

Excellent network support

Friendly, knowledgeable devs and tightly knit community

Runs most GNU/Linux apps via ports or jails, sometimes better than on Linux

Easy to learn given prior 'nix experience

FreeBSD cons:

Difficult to learn if you're new to the 'nix world

Smaller pool of compatible hardware[1]

The above has been my personal experience and obviously won't be the same for everyone. Also, I'm most comfortable with Slackware Linux, which is very BSD-like compared to other Linuxes, so that probably influences my point of view. Generally speaking, I like FreeBSD but I don't run it as a production machine (yet) since I'm happy with Slackware. Should that ever change (and there's only one reason it would, and that doesn't need to be rehashed here) I'd be able to switch to FreeBSD relatively easily.

Something else you might want to explore, as an easy introduction to FreeBSD, is PC-BSD. It takes FreeBSD and makes it much more user friendly, with a focus on being a GUI based desktop OS (though they do offer an alternate server installation as well).

[1] While the official list of compatible hardware is extensive, I've found in practice that certain COTS hardware simply doesn't work well with FreeBSD. I've even had professional workstations like a Lenovo ThinkCentre refuse to boot the installation media, throwing a kernel panic instead. I've also had poor luck with cheap motherboards. Generally, my best experiences with installing and running FreeBSD have been on Dell and HP workstations, and on quality motherboards from companies like Gigabyte and ECS.

emcrazyone 5 days ago 1 reply      
These discussions usually amount to flame wars, but I will offer some real-world usage perspective. I used to work for a Fortune 5 company that standardized on Linux. We compiled our own Linux kernel and we used the SUSE distribution from Novell. The main driver behind this was twofold: (1) Novell indemnifies us from a legal perspective, and (2) they support us on IBM hardware. Any problem, from drivers to userspace apps (CUPS, syslog, etc.), they support us. These are pretty much required by large companies.

I'm still in the automotive field, but now I work on embedded stuff. I'm one of the software developers behind CUE (Cadillac User Experience), and Linux is the go-to de facto standard pretty much because all the BSPs (Board Support Packages) run Linux. For example, Freescale iMX processors and their demo kits are all Linux based, and so bringing up drivers for iMX ethernet, SPI, GPIO, I2S and I2C has some sort of vendor support.

The large Fortune 5 companies indirectly support Linux by entering multimillion-dollar support contracts with companies like Novell and Red Hat. To give an example: we once had an issue with CUPS where the root cause was a software bug. Novell, under the support agreement, fixed the issue and then submitted the patch back to the open source community.

So from my perspective, I can see how Linux seems to have more traction than OpenBSD. Linux seems to have a larger following in the automotive sector, but I'm not sure if Linux's success is because of these factors I'm pointing to, or if these factors exist because a lot of people just know of or about Linux more so than OpenBSD.

There also seem to be more company-backed open source projects that support Linux before OpenBSD. An example is Yocto, which advertises itself as an open source Linux build system. And recently Freescale has been moving their LTIB BSP tools to Yocto.

I would be interested in hearing how others industries are using Linux outside automotive.

dschiptsov 5 days ago 1 reply      
Why, FreeBSD is an advanced UNIX-like system for servers, derived from the original BSD sources (I don't remember exactly when). My first experience was with FreeBSD 2.0.

It conforms to recent UNIX standards, such as the various POSIXes (pthreads, rt extensions), UNIX98 (I guess), etc., and obviously doesn't have any Linuxisms, like udev, systemd (thank god!), FUSE, you name it, which aren't that important for a server. So you can compile as a port or install as a pre-compiled package almost everything you want for a server.

It runs on par with Linux on network and application performance, in some cases even slightly better. Notably, nginx prior to version 1.0 was developed on FreeBSD.

Nowadays there are a few obvious disadvantages.

1. Driver support is fair - it runs on standard modern hardware, but cannot be compared to Linux with its tens of thousands of contributors; FreeBSD has a very small core team.

But it is considered much less "marginal" than, say, OpenBSD (which is very clever in its own way) or NetBSD.

2. Vendors don't support it, so basically you wouldn't run, say, Oracle or DB2 on FreeBSD (while there is a possibility to install some Linux binaries with emulation). Notably, there is no "certified, safe Oracle JDK" for FreeBSD, only an "unofficial" OpenJDK port.

Not long ago it had a reasonable share of all Internet servers, and the recent decline isn't due to any quality or reliability issues with FreeBSD, but because there are "too many Linuxes everywhere".

It has some clever technologies of its own, like netgraph, jails+unionfs (which you could call "chroot-based containers before Docker"), native ZFS support, etc.

Lots of sane people ran FreeBSD in production; Yahoo is the most notable example. Russians love FreeBSD too - many early ISPs and hosting services have been built on top of it. Lots of developers and contributors are from Russia.

So it is modern, reliable UNIX for servers. But not that popular, of course.

wslh 5 days ago 1 reply      
Facebook Seeks Devs To Make Linux Network Stack As Good As FreeBSD's: http://bsd-beta.slashdot.org/story/14/08/06/1731218/facebook...

(I don't want to start a flamewar, just add more stuff for a good discussion)

_asummers 5 days ago 1 reply      
This book came out a few months ago, may be a good springboard if you're interested in learning how FreeBSD works under the hood. From what I understand, the first edition of this book is highly regarded.


arh68 5 days ago 2 replies      
BSD is often just a little bit different. Hard to be unbiased. Some people prefer GNU make, some like bsdmake.

- kqueue is a very powerful event loop similar to epoll [1]

- the FreeBSD ports collection is very simple to use as far as compiling from source goes. I only really prefer Debian's apt-get and Gentoo's emerge

- the FreeBSD Handbook is a very well-maintained text [2]. I freely admit OpenBSD has the best man pages ever written, but the Handbook is good too. Not bleeding-edge talk like the gentoo-wiki or archwiki, just reliable information.

[1] https://www.freebsd.org/cgi/man.cgi?query=kqueue&sektion=2

[2] https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/
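A side note on the kqueue point: portable event-loop wrappers mostly paper over the kqueue/epoll split these days. Python's selectors module, for instance, resolves DefaultSelector to kqueue on the BSDs and epoll on Linux, so the same code runs on both. A small self-contained sketch:

```python
import selectors
import socket

# DefaultSelector is KqueueSelector on FreeBSD/macOS and
# EpollSelector on Linux -- identical code either way.
sel = selectors.DefaultSelector()

r, w = socket.socketpair()
sel.register(r, selectors.EVENT_READ)

w.send(b"ping")                   # make the read side ready
events = sel.select(timeout=1.0)  # list of (SelectorKey, event_mask)
ready = [key.fileobj for key, _ in events]
print(r in ready)                 # True: r has data waiting

sel.close()
r.close()
w.close()
```

If you want the kqueue-specific features (file/vnode events, process events, etc.) you have to drop down to the native API, which is where the two systems genuinely diverge.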

pellaeon 5 days ago 3 replies      
I have little experience with Linux distros other than Ubuntu, so I'm only comparing FreeBSD with Ubuntu. Linux distros are so diverse I don't think you can just compare all of them with FreeBSD anyway.

I have both FreeBSD servers and Ubuntu servers. The thing I like most about FreeBSD is its further separation (compared with Ubuntu) of the OS (kernel and world) from third-party software. In Ubuntu every package comes in multiple versions for different releases; in FreeBSD there's only one version.

There's no "because I want git 1.8 so I have to upgrade to Ubuntu 14.04", in FreeBSD you can install the latest version of git on all supported OS versions. (In my experience even a slightly outdated OS version can run latest third-party software quite smoothly)

Software in FreeBSD Ports (its system for getting third-party software) catches up with upstream releases very quickly. You might think it gets less testing than in Ubuntu/Debian, but I don't know if that's really the case. I rarely encounter bugs with this cutting-edge software in FreeBSD, though.

I speak mainly from the experience with Ubuntu, I'm not so familiar with other Linux distros.

justincormack 5 days ago 0 replies      
The culture is different. The most noticeable thing is of course simply size. FreeBSD is by far the largest of the BSDs, but it is a much smaller community than Linux. That means that it is easier to get involved, easier to follow what is going on - eg look at the commit log[1] - you can follow it pretty well, vs the Linux commits where each is a large merge of a nested set of commits[2].

Sometimes small size means things can change faster; e.g. NetBSD has had 64-bit time_t since 2012, while 32-bit Linux still has no roadmap to fix this (the BSDs also have a different compatibility model). Other times it of course means that less is done and things take longer. You will often hear people say the BSDs are better designed, as perhaps prioritising limited resources leads to more design, or maybe there are just fewer people who need to solve a problem fast but not so well.

Including (some) userspace means that the tools are less bloated than the GNU ones, which were designed pre-Linux as portable tools, before the GNU project had a kernel - remember, GNU was the cathedral in The Cathedral and the Bazaar.

[1] https://github.com/freebsd/freebsd/commits/master

[2] https://github.com/torvalds/linux/commits/master

xenophonf 5 days ago 0 replies      
_The_ thing that brought me to FreeBSD is the base system. A full installation is about 2.5 GB including source code (which is about 900 MB for the FreeBSD sources plus about another 800 MB for the Ports tree). Installation is extremely simple. Third-party software is cleanly separated from the base system. If you're coming from Linux, FreeBSD now has great binary package management tools in the form of pkgng, which is every bit as good and easy to use as APT or yum. I love the FreeBSD documentation. There are useful manual pages even for kernel internals and device drivers. I think that PF is vastly easier to understand and configure than iptables; the same goes for FreeBSD's IPsec implementation. Everything's laid out in a very logical, consistent manner, with lots of comments in config files and whatnot.

What's kept me on FreeBSD is the Ports tree (the third-party package build infrastructure). I love how easily I can build customized packages for my computers, especially now with pkgng and tools like poudriere (refer to this great tutorial at http://www.bsdnow.tv/tutorials/poudriere). I can very easily set up my own custom package repository that either supplements or wholly replaces FreeBSD's. I've tried to do similar things with Linux, but it definitely isn't as easy. The ports tree committers are very responsive, and creating (and submitting!) one's own packages is both well documented and very easy.

I like how much of the system configuration is done in /etc/rc.conf. I like how the various system and ports tree build-time knobs are all in /etc/make.conf. I like how daily maintenance scripts/health checks both run by default and are all configured in /etc/periodic.conf. I like how understandable the base system is, kernel internals included. I'm no expert developer (and believe me, there's plenty of advanced Unix hacker wizardry in the FreeBSD sources), but things are accessible enough to even one such as me that I successfully modified the ciss driver this year to work around a weird bug in some old server gear I was experimenting with.
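To give a flavor of that single-file style, a hypothetical /etc/rc.conf for a small server might be nothing more than:

```
# /etc/rc.conf -- hypothetical example: services and knobs in one place
hostname="web1.example.org"
ifconfig_em0="DHCP"        # configure the em0 NIC via DHCP
sshd_enable="YES"          # start OpenSSH at boot
nginx_enable="YES"         # start an nginx installed from ports/pkg
pf_enable="YES"            # load the pf firewall at boot
zfs_enable="YES"           # mount ZFS datasets at boot
```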

Don't get me wrong - I love me some Unix, but modern Linux distributions seem over-complicated in a lot of the ways I don't like about Solaris or AIX or Windows, even though there's a lot of nice stuff from the perspective of an end user. If you install Ubuntu or Fedora, a lot of stuff Just Works(tm), and that's great! I love Ubuntu and Fedora! But if anything breaks and I have to go digging, things get complicated so rapidly that it makes debugging more of an effort than it should be.

vitriol83 5 days ago 1 reply      
i use it mainly as a soho nas. the things i like about it are

1) userland and kernel owned by the same group. this lends consistency to the experience, that is absent in linux, where it's clear that it's an amalgamation of many different tools.

2) (largely) one way of doing things

3) package management system (pkg-ng) that is a cross between gentoo portage and debian apt-get

4) extremely good documentation for an open source project (https://www.freebsd.org/doc/handbook/)

5) configuration is very simple (mostly driven from rc.conf)

6) excellent full-disk encryption support (geli)

driver support always lags a bit behind linux, on the other hand the drivers that do exist i'm confident are stable. i make sure to buy hardware which i know is supported.

reidacdc 5 days ago 2 replies      
I think this would be a useful conversation.

The userlands are pretty similar, I think, they both support the GNU tools, as is attested to by the existence of Debian/kFreeBSD.

I don't know what the OP's motivation is, but I'm an enterprise server admin, with lots of Debian experience, but no real FreeBSD knowledge at all. I've been looking at Debian/kFreeBSD off and on for a while as a way to get at ZFS, and possibly better NFS performance. I was actually planning on deploying some experimental Debian/kFreeBSD systems when Jessie releases, but it now looks like that might not be such a good plan: http://lwn.net/Articles/614142/

Jukelox 5 days ago 0 replies      
In my experience it all comes down to performance and tunability. Our personal preferences are just that, personal.

Linux has vsyscalls which makes it performant for the typical LAMP stack. If you profile a PHP Zend framework "hello world" app it makes 20k+ calls to gettimeofday(), which is very slow on FreeBSD in comparison. Throw a couple hundred requests/s and you saturate the CPU for no good reason.
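You can get a rough feel for the cost side of this with a sketch like the following (Python rather than PHP, and the numbers are illustrative, not a benchmark). On Linux, time.time() typically goes through the vDSO without a real syscall, which is why 20k+ calls per request can be survivable there:

```python
import time

N = 20_000  # roughly the call count the Zend "hello world" profile shows

start = time.perf_counter()
for _ in range(N):
    time.time()  # gettimeofday()-style wall-clock read; vDSO fast path on Linux
elapsed = time.perf_counter() - start

# On a vDSO system this loop usually finishes in a few milliseconds;
# if every call were a real trap into the kernel it would cost far more.
print(f"{N} wall-clock reads took {elapsed * 1000:.2f} ms")
```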

FreeBSD needs business support in order to maintain funding, and be a relevant competitor to Linux. If a developer were to pick a popular application stack like node and identify significant performance gains it would make waves.

sikhnerd 5 days ago 0 replies      
Freebsd has this Quickstart Guide for Linux Users[1] which is somewhat helpful.

[1] https://www.freebsd.org/doc/en/articles/linux-users/

Rapzid 5 days ago 0 replies      
I'd really like to hear from someone about the network stack. I often see jabs at Linux's networking features in regards to FreeBSD, but Linux has a crazy comprehensive stack. Many network appliances are based on Linux kernels and I've personally worked on Linux routers. I also recall doing brief investigation into FreeBSD's networking features and came away with the impression that Linux is out ahead; is this not the case?

How does the storage stack compare? I really love ZFS and have used ZoL quite extensively in the past for a VPS/bare-metal backup system, and of course FreeBSD has rock-solid ZFS support. But Linux has bomber RAID, LVM, dm (which lets you write your own targets and do all kinds of awesome mapping), etc.

And then of course there are "containers". I'm aware of jails and they are well tested, but I saw the other day that they don't have cgroup-type features to allow controlling RAM and CPU usage? Namespaces? Is it the case that Linux containers are overtaking jails on the features front?

I really like the idea of FreeBSD and I have bought "The Design and Implementation of the FreeBSD Operating System" (which I ought to start reading), but I'll admit to not knowing a whole lot about it. Most of the topics that come up in comparison discussions seem a bit more superficial than what I'm interested in.

ilolu 5 days ago 9 replies      
I wanted to know about hosting solutions if I want to use FreeBSD. Linode, Digital Ocean don't support it.

AWS and GCE have support, but it seems the kernel is provided by a FreeBSD maintainer rather than AWS or GCE. How stable is FreeBSD on AWS or GCE? And are there any companies using FreeBSD on AWS/GCE?

louwrentius 5 days ago 4 replies      
For those of us that are more interested in FreeBSD due to the whole Systemd 'controversy', look at the other HN link about the 'FreeBSD: the next 10 years' presentation. They clearly state that FreeBSD is in the same position and it seems that they are not hostile towards the concept of Systemd. Personally I hope that FreeBSD will choose a solution that does not involve QR codes (not mandatory).
keithpeter 5 days ago 1 reply      
It might help to know your main interest: bare-metal server in a colo? Virtualised servers? Embedded? Desktops?

Do you manage thousands/hundreds/a handful of machines?

Or is your interest entirely academic and structural?

I agree that most of the obvious Google searches return superficial comparisons of the installation process or explanations of the packaging and upgrade processes.

eikenberry 5 days ago 0 replies      
They are both free *nix, so they are very similar. BSD prefers the simpler versions of many of the common command line tools where the Linuxes prefer the GNU tools which tend to have more features. There will also be other small tech differences that will come into play in special cases, like ZFS being nice for fileservers.

Other than the basic tech, there are two important differences. The first is that there is one FreeBSD but there are hundreds of distributions of Linux. And among them you'll find much larger differences than between FreeBSD and the popular Linux dists.

The other big difference is simply derived from the fact that Linux is more popular. It will tend to have better drivers, have more bugs worked out of the software, etc., just like any other free software out there. FreeBSD's conservative nature towards its core software more than mitigates this, but for all the peripheral software it will matter.

joshbaptiste 5 days ago 0 replies      
Start watching http://www.bsdnow.tv , great show focusing on the *BSD's.
sudioStudio64 5 days ago 0 replies      
It's going to be hard to find an unbiased identification of the strengths and weaknesses of both. These kinds of things tend to bring out the definition of bias.

Anyway, I like the first comment's idea about a wiki. That would be helpful.

octotoad 5 days ago 0 replies      
A lot of HN posts related to BSD release announcements end up with the inevitable "Can somebody explain the advantages of using xxxBSD?" comments. Might be worth looking at the comments section for some of those past submissions.
pjungwir 2 days ago 0 replies      
I run Linux on my desktop and servers, and I can't say much about FreeBSD, but if you want a taste, I recommend building a little NAS for yourself and installing FreeNAS. It's been fun to play around with jails and ports and learn some of the differences. I've found this to be a great way to get your feet wet without risking your productivity or server reliability. I'm surprised no one has mentioned this yet!
cperciva 5 days ago 0 replies      
I arrived at this thread about 6 hours late and people have said most of what I would say; but I'll add that we would like to see more startups using FreeBSD (a preference which predates Koum's donation!) so if you've considered FreeBSD but found that it was lacking some functionality which your startup needed I'd love to hear from you.
lelf 5 days ago 0 replies      
FreeBSD: ZFS. Jails. Dtrace. Greatly easier to grok

GNU/Linux: Desktop

feld 5 days ago 0 replies      
Maybe start a wiki page somewhere with what you know/want to know about Linux and then people can fill in the equivalent FreeBSD parts
calebm 5 days ago 1 reply      
Docker has a ton of momentum right now, which should play into this comparison (as it doesn't run on FreeBSD).
obviouslygreen 5 days ago 2 replies      
As a Debian user who started on Gentoo, I think the linked article was very interesting in terms of at least getting an idea of what the differences in design are.

I've never used a BSD or even looked into it... given what I read there, it seems like it'd be a lot nicer in at least one sense, since Debian releases die off and release-upgrade can be either perfect or very painful.

On the other hand, I do love how small and unassuming a basic Linux installation is, and -- as the author repeatedly and correctly stresses -- I'm used to doing things the way I currently do them. That's not good or bad, it's just momentum.

I do hope I'll get the chance to work with a BSD at some point, but much like my attempts to really get into Clojure... well, unlike the Stones, most of us do not have time on our side.

mmmatt1 5 days ago 0 replies      
Voxer is a really interesting case, as they have run a similar production load on Linux, then switched to SmartOS (Solaris), and now switched to FreeBSD.

Their staff are technically savvy, and I don't know of anyone else who has tested the options in this way, in production. They have a popular mobile app which turns your phone into a walkie talkie and more, and have millions of users.

I saw them interviewed about their move to FreeBSD recently: http://youtu.be/NGT3jpilYfE?t=15m

I like FreeBSD, but I have yet to run the same stack on multiple operating systems in production. That's a LOT of work!

debacle 5 days ago 1 reply      
I love the BSD ecosystem, but until FreeBSD is as easy to use and as well maintained as Debian, I can't see myself switching over. Debian is just far too convenient - not because it is better, but because it probably has a hundred times the maintainers that FreeBSD does.
gingersnaps 5 days ago 0 replies      
> I just want to know about differences in design and unbiased identification of strengths and weaknesses.

There's no such thing as an unbiased identification of strengths and weaknesses, especially when it comes to monoliths like operating systems.

You say you're not interested in arguing about which system is better -- and I believe you -- but you're also asking for argument about which system is better, in list form. Heck, even your title has a versus in it.

That said, I join you in looking for a concise source of information about FreeBSD's design and usage in the wild contrasted with Linux.

sjackso 5 days ago 0 replies      
Here [0] is a really extensive how-to for running FreeBSD as a desktop OS. It goes beyond the basics, including information about 3D graphics, printing, emulating Linux binaries, installing Flash, and running Wine. It's useful as a guide for someone starting out with FreeBSD, but it's also useful to skim over it in order to get a sense of the distance between the minimalist, base FreeBSD installation and a modern desktop environment.

[0]: https://cooltrainer.org/a-freebsd-desktop-howto/

sgt 5 days ago 1 reply      
We do quite a bit of Java development and particularly with regards to Java EE 7. In the past I've had bad experiences in terms of a stable Java environment on production servers on FreeBSD.

It appears that OpenJDK is the best option on FreeBSD right now - but is it something one can trust as much as Oracle's Java on Linux?

These are mission critical servers, so I would hesitate to take chances. To put it in perspective, even the latest Java 8 has a serious bug (on Linux) that we've recently reported to Oracle, so we'd like to be as risk averse as possible.

jtchang 5 days ago 1 reply      
Because I have a FreeBSD box with this:

  $ uptime
  11:23PM  up 835 days,  5:33, 1 user, load averages: 0.07, 0.02, 0.01

raphaelss 5 days ago 0 replies      
What has kept me from using FreeBSD (or other BSDs) in the past on my laptop was the lack of Nvidia Optimus support. Last time I checked, there still wasn't a bumblebee port. Nevertheless, I have always wanted to try it for longer periods. Perhaps I should try to contribute if it's feasible.
hiphopyo 5 days ago 1 reply      
There's the old "FreeBSD vs. Linux vs. Windows 2000" document which caused quite the stir:


anonbanker 5 days ago 0 replies      
Why aren't we talking about OpenBSD vs. Linux? I mean, it's the only OS I trust anymore.
sergiosgc 5 days ago 2 replies      
A side question somewhat related to the theme: What features of ZFS can't be achieved with ext4 on top of LVM? I see lots of people longing for full disk encryption, mirroring, snapshots or foreign hard disk consolidation, all of which seem doable with ext4+LVM.
oDot 5 days ago 1 reply      
I think an interesting question in this context is:

If you wanted to create Ubuntu today, would you use GNU/Linux or FreeBSD as base?

grigio 5 days ago 0 replies      
no, or your comments will be banned :/
pmoriarty 5 days ago 0 replies      
The worst part of FreeBSD is the license.
babo 5 days ago 0 replies      
If you can't decide yourself just stick with your current OS.
Introducing Driver Destination
337 points by kyleslattery  2 days ago   118 comments top 23
kalvin 2 days ago 2 replies      
I'm happy Lyft is demonstrating that it actually cares about its core/founding values of decreasing car usage.

They could have viewed this as a distraction from expanding their existing dedicated-driver model, in order to better compete against Uber. (Lyft Line is just multiple passengers; Sidecar already does "driver destination" aka real carpooling, but they're also playing a different game focusing on drivers in general, and don't have the scale Lyft does)

If they can get traction, they'll have cracked a problem that many people have tried to solve and failed at: how do you get Americans to carpool?

Context: Lyft's founders pivoted into Lyft from Zimride after five years of building white-labeled carpool sites for colleges/companies + a public long-distance carpool/rideshare board, and discovering that a) that's not a VC-scale business, and b) 90% of Americans don't "do" traditional carpooling and they're not about to start

Anyway, I use Lyft over Uber when possible for many reasons, but this is one-- they started out trying to improve society in a particular way, and still are, even as they've changed their approach. And I think they should be commended for setting an example for how to achieve an activist-y goal through a startup/company. Or since it's not achieved yet, at least trying.

(I have no affiliation with Lyft, see my profile.)

eli 2 days ago 6 replies      
Shared carpooling (called "slugging" here) has been a viable commuting option for people in the DC area for decades. No money changes hands; the point is that a car with 3+ people can use the much faster HOV lanes.


allendoerfer 2 days ago 0 replies      
Flinc [0], a start-up from Germany, does exactly this but in real time, too. It can hook up with the driver's navigation software so they are only asked to pick someone up when it does not change their own route.

I highly prefer this model over the classic Uber/Lyft service, because it actually delivers on the promises of the sharing economy. Resources are shared, from which both parties and the general public benefit. It's not just taxis with lower wages.

Of course, critical mass is a bigger problem with this model, but there is the obvious entry point of white-label ride sharing platforms for companies, which flinc does, too.

I think, it is about as cool as it gets, until I get my self-driving car, which transports passengers on-demand the whole day after it drove me to work, from time to time being recharged on inductive parking spots, powered by green energy.

[0]: https://flinc.org/

fraserharris 2 days ago 2 replies      
The messaging here is off. "Driver Destination" implies it has generalized applications -> be a Lyft driver wherever you are going. In practice, they need to get traction in the one big market for this: commuters. Better marketing would call it what it is: Lyft Commute - share a ride from home to work.

(claiming the product name 'Commute' also stops Uber from using it)

awwstn 2 days ago 4 replies      
This is really great though I wonder if Lyft runs the risk of people making the connection once (i.e. finding a person with similar commute times/locations who is willing to pay for rides) and deciding to arrange a paid carpool that doesn't go through Lyft.
DigitalSea 1 day ago 0 replies      
I am late to the Lyft party, but after recently leaving Uber because I did not agree with their questionable business ethics and attitude toward their customers, I must say I find the whole Lyft experience somewhat refreshing.

Funnily enough, my first Lyft driver I had a week or so ago was telling me he leaves home a couple of hours earlier, does a couple of trips close by to his work and then on the way home from work he turns the app on and most of the time he is fortunate to get a ride that is going the same direction, so it pays his way home.

Not an entirely new premise, car-pooling has always been a thing, but for my driver (his name was Brett) this is going to be an awesome feature, and I imagine for many other Lyft drivers too. I like the feature of the app that allows you to tip drivers a little something extra when a driver has gone beyond what you are paying them for. In my first experience, Brett offered me and my pregnant wife a muesli bar, bottled water and even held an umbrella for us while we got in and out of the car.

Seriously good job Lyft, you have a superior app/service and great drivers, you just need to get the numbers up and get some more brand awareness.

Rambition 2 days ago 2 replies      
20 years ago, I lived in a house that had a city implemented carpool pickup spot in the East Bay - every morning carpoolers would line up and wait for a ride into SF through the carpool lane, back when the toll was only $2.

Since it was city property, I don't believe my parents got anything out of it, and we couldn't park in front of our house Mon-Fri mornings. My brother did have the enterprising idea of selling coffee in the mornings, much like a lemonade stand with a captive audience.

Seems like there was a demand for this type of service 20 years back though, interesting to see it is still an issue that companies are looking to solve today.

smokey_the_bear 2 days ago 1 reply      
This could work really well for things like getting to Tahoe on the weekends too.
nostromo 2 days ago 1 reply      
The TechCrunch article is much more informative.


jrkelly 2 days ago 0 replies      
So lyft for the burbs and uber for the cities, then? Burbs require non-professional drivers (i.e. daily commuters) to become drivers since ride request density isn't high enough to support professional drivers.
tcdent 2 days ago 0 replies      
Having grown up with California's carpool lanes (a lane reserved for vehicles with two or more riders), I've always suspected the number of commuters able to take advantage of them must be minimal. In my experience, having that additional passenger only happens on special occasions; it never happens on the morning commute. This is what the carpool lane has always needed to be successful: networking.
mmanfrin 2 days ago 1 reply      
Man, I was thinking about this last night, how I could use my morning and evening commutes to take people home on my way home -- since I'd either stay in the city (killing time for traffic to die down), or be paid to sit in my normal evening traffic home across the bay. Only hiccup would be if I had a fare taking me to the south bay, but that seems a reasonable gamble.
kosei 2 days ago 0 replies      
Very interesting. The most important thing to me personally is convenience, which is an itch that I believe this scratches. If I'm able to input that I'm going somewhere and this will help automatically direct me to people I can pickup and drop off on the way, it avoids the huge hassle of coordination. Additionally, it (presumably) helps with the issue of liability?
michaelvkpdx 2 days ago 16 replies      
Trying to charge people for carpooling, something we've traditionally done for free. The "sharing economy" isn't about sharing- it's about monetizing free things and taking a cut.

This is not in any way improving humanity. It's a step backwards.

monksy 2 days ago 7 replies      
That seems like it's going after a very small segment of the market.
DINKDINK 2 days ago 1 reply      
Sidecar has had this functionality for a long time.
muzz 2 days ago 0 replies      
This is more in line with what the founders worked on originally, a ride-sharing service called Zimride.
belzebub 2 days ago 0 replies      
In a similar vein this could be used to replace restaurant delivery drivers.
zyxley 2 days ago 0 replies      
Procedural carpooling. Neat.
zan2434 2 days ago 0 replies      
This is brilliant. Lyft is doing everything right as of late.
unicornporn 2 days ago 1 reply      
So, will this work in Sweden? The Google Play page says nothing about geographical restrictions, but it mentions "DMV checks", which I have no idea what those are.

Seems Americans sometimes forget that the internet is a global network.

judk 2 days ago 0 replies      
I remember when this was called carpooling. Weird to put the branding on it. But these days nothing counts unless it's in service of some corporate brand.
pinaceae 2 days ago 5 replies      
ass backwards. you want the next bn dollar idea?

while I am at work, my car sits in the parking lot of the building for 8-10hours, unused.

i don't want to be a lyft/uber driver. i want MY CAR to work for uber/lyft in its off hours. have a driver pick it up in the morning at the office, then return it when i want to get home (full gas tank and cleaned car for an extra fee).

someone take this and get rich.

Git's initial commit
348 points by olalonde  4 days ago   120 comments top 22
jordigh 4 days ago 1 reply      
Well, while we're looking at FIRST POSTS, here's Mercurial's, self-hosting a month after git, and like git, also created to replace bitkeeper:


The revlog data structure from then is still around, slightly tweaked, but essentially unchanged in almost a decade.

brandonbloom 4 days ago 2 replies      
I love checking out very early versions of projects. You often get to see the essence before the real world came in and ruined the beauty of it.
afandian 3 days ago 1 reply      
My god... the comments. Looks like the reddit culture (i.e. fun for in-jokes, but not particularly professional).
jeffreyrogers 4 days ago 1 reply      
Interesting fact about Git is that it was self hosting in two weeks, IIRC.
stinos 3 days ago 0 replies      
Maybe I've been drilled too hard by a couple of programming gurus, but I immediately noticed there are quite a lot of repeated yet unnamed magic constants in the (otherwise pretty clean) code. According to wikipedia [1] the rule to not use them is even one of the oldest in programming. Curious what kind of profanity Linus would come up with when confronted with this :]

[1] https://en.wikipedia.org/wiki/Magic_number_%28programming%29...

hyp0 4 days ago 1 reply      
It's so short.

The readme is the best explanation of git I've seen.

d0m 4 days ago 1 reply      
I've read so many git tutorials, I wish I had seen that README file before.
fivedogit 4 days ago 1 reply      
zabcik 4 days ago 4 replies      
Why are there multiple main() functions? I've never seen this style before. Is it multi-process?
royragsdale 3 days ago 0 replies      

If you want to see the commits going forward from here.

DodgyEggplant 3 days ago 0 replies      
This is a great lesson in writing focused & succinct specs, when one clearly sees what his/her program is going to do.
hnmcs 3 days ago 0 replies      
Gotta love the fact that there are open pull requests.


Jackcor 3 days ago 1 reply      
Was all of the initial commit code written by Linus Torvalds?
hw 3 days ago 2 replies      
Does GitHub offer an easy way to get to the first commit of a project? Traveling page by page back in time is time-consuming (yeah, I did that).
benihana 4 days ago 7 replies      
Is there a reason there aren't any braces around single-line if statements? Is that a C thing? It seems kind of inviting to bugs to me.
Fizzadar 4 days ago 1 reply      
Great to see the original command set, and the title of course: "GIT - the stupid content tracker"
dirtyaura 3 days ago 0 replies      
I only realised reading the README that git is a great lesson in branding.
justintbassett 4 days ago 1 reply      
I wonder what the first commits for big sites/projects look like?
dbdr 1 day ago 0 replies      
Where are the tests?
EGreg 3 days ago 1 reply      
Linus wrote:

> Side note on trees: since a "tree" object is a sorted list of "filename+content", you can create a diff between two trees without actually having to unpack two trees. Just ignore all common parts, and your diff will look right. In other words, you can effectively (and efficiently) tell the difference between any two random trees by O(n) where "n" is the size of the difference, rather than the size of the tree.

Um, What?
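What Linus describes is essentially a merge-style walk over two sorted entry lists. A minimal sketch (hypothetical flat (name, hash) pairs, not git's actual on-disk tree format):

```python
# Diff two "trees", each a sorted list of (name, hash) entries, in one
# merge pass -- entries that match exactly are common parts and are simply
# skipped, so only the differing entries are ever materialized.
def tree_diff(a, b):
    diff, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:                 # identical name+hash: ignore
            i, j = i + 1, j + 1
        elif a[i][0] < b[j][0]:          # name only in a: deleted
            diff.append(("-", a[i])); i += 1
        elif a[i][0] > b[j][0]:          # name only in b: added
            diff.append(("+", b[j])); j += 1
        else:                            # same name, different hash: modified
            diff += [("-", a[i]), ("+", b[j])]
            i, j = i + 1, j + 1
    diff += [("-", e) for e in a[i:]] + [("+", e) for e in b[j:]]
    return diff

old = [("Makefile", "h1"), ("cache.h", "h2"), ("read-cache.c", "h3")]
new = [("Makefile", "h1"), ("cache.h", "h9"), ("show-diff.c", "h4")]
```

For flat lists this walk is O(len(a) + len(b)); the O(size of the difference) claim comes from real trees being recursive, where an identical subtree shares a hash and gets skipped whole without ever descending into it.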

byteCoder 4 days ago 4 replies      
Following the tradition of sports, I propose that commit id e83c5163316f89bfbde7d9ab23ca2e25604af290 be officially retired.
tempodox 3 days ago 0 replies      
Code comment about git:

  stupid. contemptible and despicable.
That sums it up quite well. Every day I pay thanks to The One Who Programmed Me that my workflow doesn't put me in need of that shitload of crap that is git. I pity those who do need git.

324 points by tempestn  5 days ago   208 comments top 43
jmadsen 5 days ago 2 replies      
I love how every Tom, Dick & Harry with a bit of math & engineering thinks he can "dispel the myth" of HN Idea X with two minutes of off-the-top-of-his-head equations.

Read about these two: http://www.gravitricity.com/#people

Do they seem like complete morons? Do you think they haven't studied this just a little bit more than you have? It doesn't mean their idea is good or will work - just that you aren't going to rip it apart with two hastily written paragraphs.

People who ask questions about the costs, process, etc. are in the right spirit. People who think they can show off how clever they are really make me /facepalm

andrewliebchen 5 days ago 3 replies      
Good thing HN wasn't around during the development of most of humanity's great inventions. "So you're telling me I'm going to have to hold my food over this fire for 15 minutes before I eat it? No thank you, I'll stick with my raw meat."
apsec112 5 days ago 14 replies      
Let's do out the math on this...

A subway tunnel might have a diameter of about 6 meters, so cross section = 3 * 3 * pi = 28 square meters. Digging subway tunnel through rock costs about $100M per kilometer. On the one hand, these holes would be vertical, which is harder than horizontal; on the other hand, they wouldn't need ventilation and train tracks and stuff. Let's handwave and say it's $100M for a 1 km deep hole.

Now, you can't fill the whole tunnel perfectly or the air can't escape, so our total volume of mass will be about 25 m^2 * 1000 m = 25,000 cubic meters. If the weights are made from lead, that's a total mass of ~280,000 tons or 2.8 * 10^8 kg, at a mean depth of 500 meters, so our total potential energy is 2.8 * 10^8 * 500 * 9.8 = 1.4 TJ, or 1.3 TJ net assuming you get the efficiency they claim. 1 kWh is 3.6 MJ, so you can store ~400,000 kWh at $100M capital cost (ignoring for the moment the cost of weights, generators, etc.), which is $250 per kWh installed capacity.

That's pretty good... but you also have to pay for weights and a bunch of other stuff. Bulk lead costs about $2,000 per ton on the current market, so that's $560M for the weights, which puts you back in the $2,000 per kWh range which doesn't beat lithium batteries. So you have to use iron or some cheaper material... but then you don't have as much storage capacity because the density is lower, and even with iron you're paying $400 per metric ton or $80M for all your weights. So this isn't obviously impossible like Solar Roadways, but even in the best case it won't make storage dramatically cheaper.
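For what it's worth, the arithmetic above checks out; a quick sketch using the comment's own assumptions ($100M per km of shaft, 25 m^2 usable cross-section, lead at ~11,340 kg/m^3 and $2,000/t, iron at ~7,870 kg/m^3 and $400/t):

```python
# Back-of-envelope cost of gravity storage in a 1 km shaft, reproducing the
# estimate in the comment above (all inputs are rough guesses, not data).
G = 9.8                # m/s^2
DEPTH = 1000.0         # m; mean drop height is DEPTH / 2
VOLUME = 25.0 * DEPTH  # 25 m^2 usable cross-section -> 25,000 m^3 of weights
HOLE_COST = 100e6      # $, the handwaved tunneling cost

def storage(density_kg_m3, price_per_ton):
    mass = VOLUME * density_kg_m3              # kg of weight material
    kwh = mass * G * (DEPTH / 2) / 3.6e6       # potential energy (1 kWh = 3.6 MJ)
    weight_cost = mass / 1000 * price_per_ton  # $
    return kwh, weight_cost, (HOLE_COST + weight_cost) / kwh

lead_kwh, lead_cost, lead_per_kwh = storage(11_340, 2_000)  # ~386 MWh, ~$1,700/kWh
iron_kwh, iron_cost, iron_per_kwh = storage(7_870, 400)     # ~268 MWh, ~$80M of iron
```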

elektropionir 5 days ago 7 replies      
The energy density for gravity is just immensely small, that's why you need dams holding back rivers to use them to generate electricity. For a 1km hole (that's in the middle of their 500m - 1500m range) you have an energy density of 10kJ/kg of the weight that stores the energy. The energy stored in a Tesla roadster battery pack is around 50kWh which is 180MJ. This means that you need a 18,000kg weight in a 1km deep (immovable) hole to have the equivalent of a Roadster battery pack. I'd go with the battery pack.
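These numbers check out; the equivalence works out to roughly an 18-tonne weight per battery pack:

```python
# Mass needed for a falling weight in a 1 km shaft to store as much energy
# as a ~50 kWh battery pack (the comment's Roadster figure).
g, h = 9.8, 1000.0                   # m/s^2, drop height in metres
density_j_per_kg = g * h             # ~9.8 kJ per kg of weight
pack_j = 50 * 3.6e6                  # 50 kWh in joules
mass_kg = pack_j / density_j_per_kg  # ~18.4 tonnes of dead weight per pack
```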
awjr 5 days ago 1 reply      
Isn't a 500m-1500m shaft pretty much going to fill with water? I could see a well-designed weight being able to work in water (although water turbulence would erode the shaft walls). I don't see how air compression would solve this.

The principle, however, of storing energy by raising a weight could also be used anywhere with a steep enough hill/cliff/mountain, and the weight could in theory be on a rail, not just suspended.

Efficiency would potentially not be on par, however linked into solar/wind systems this is less about efficiency and more about creating a 1MW long term battery with a lifetime of 50+ years.

Guessing the cost of digging and maintaining a hole is significantly higher than installing a guide rail.

Animats 5 days ago 1 reply      
OK, a typical mine hoist is about 10 metric tons. 10 metric tons descending at 1 m/s is very close to 100 kW. So a 1000 meter deep hole can deliver 100 kW for 1000 seconds, or 27 kWh. That's about $3 worth of electricity, and about 1/3 of the battery capacity of a Tesla Model S with the large battery option.

Numbers not looking reasonable for this concept.
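The hoist figures are easy to verify (m·g·v for power, m·g·h for the energy of the full drop):

```python
# Power and total energy of a 10-tonne hoist descending a 1 km shaft at 1 m/s.
m, g, v, depth = 10_000.0, 9.8, 1.0, 1000.0
power_w = m * g * v                 # ~98 kW while descending
energy_kwh = m * g * depth / 3.6e6  # ~27 kWh for the full 1 km drop
```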

ramses0 5 days ago 0 replies      
I think the coolest thing about this concept (for me) is viewing it as a "whole-system" energy storage procedure.

It is effectively 100% renewable, 100% distributable, using 100% commodities (ie: rocks in a hole).

As a thought experiment: if on average you can meet 110%+ daily power expenditure captured from renewables (solar, wind, whatever), and store it by lifting up these weights, then you've broken into the "free energy" loop.

More specifically, don't look at the power input or storage, look at the power output / usage. If your input + storage capacity is greater than your output rate then energy effectively becomes "free forever".

Simulate it on a small scale. Get a pinwheel to run a small motor that winds something up. Attach a small LED to it that you only run occasionally. Basically, just so long as you have a really small output draw compared to your input rate and storage capacity, this "battery" will give you energy when you want it with minimal maintenance costs and minimal consumables.

abdullahkhalids 5 days ago 2 replies      
The key figure of merit for comparing energy storage is not $/KWh, but rather $/KWh*Number of cycles. Li-ion only has about a 1000 cycles. Assuming one cycle per day (solar charge during day+discharge during night) in 50 years there are 18000 cycles.

The figure of merit for Li-ion is 250x1000=2.5e5. My estimate (and those of others) is that this costs up to about $2000/KWh. So the figure of merit for this is 1000x18000=1.8e7. Two orders of magnitude better than Li-ion.

Edit: I am ignoring the cost of capital, interest rates etc. Somebody should do this analysis.

sz4kerto 5 days ago 0 replies      
This came up a long time ago. It doesn't work well, the amount energy it stores is simply too low. I've seen a calculation somewhere, but it's too late here so can't reproduce - does anyone remember?


Here it is: https://news.ycombinator.com/item?id=6739349

michaelbuckbee 5 days ago 1 reply      
This is just rampant speculation - but what if they used depleted uranium as the weight? They could get paid to take the material off others' hands (instead of buying lead), and it's almost 70% more dense.
tasty_freeze 5 days ago 3 replies      
Although I think the idea isn't workable (energy density is too low, cost of boring the hole is tremendous), most of the other commenters here seem to think they'd just have one weight, whether it is 1000 KG or 50,000 KG.

Any sane plan would be to have more than one weight. When the first weight hits the bottom, it would release from the cable and another weight up top would grab the cable and start dropping. To store energy, the top weight would get winched up, and when it hit the top it would lock into place somehow and the next weight at the bottom of the shaft would engage the lifting cable, etc. The cable would have to follow a circular track, rather than having 1KM of cable for each weight.

madaxe_again 5 days ago 0 replies      
This is neat, but not novel. I used to live in a house (UK, middle of nowhere) that had a deep borehole that was used for this purpose 130 years ago. The weight and winding were long gone, but the dynamo was still sat there. Oh, and it wasn't raised by water, rather, servants, back in the day.
callmeed 4 days ago 0 replies      
Funny, I just finished Peter Thiel's book. In the final chapters, he discusses the cleantech flame-out companies of the 2000's and how many were run by old suits. He contrasts them with true technology innovators (like Elon) who are "t-shirt and jeans" people.

Interesting to scroll down and see suits at this site.

phacops 5 days ago 3 replies      
Seems like this would be more cost-effective as a component of skyscrapers. Generator on top, series of weights on rails down the sides.
grondilu 5 days ago 2 replies      
What determines the depth of the hole?

I mean, since E = mgh, you can get the same energy storage capacity with a less deep hole if you use a heavier weight. And you can get a heavier weight either by using more expensive material (why use a cheap one? it's not like it's going to wear or anything), or a larger hole.

I'm not sure what determines the cost of digging a hole, but I suspect depth matters more than area.

Also, does the shaft have to be vertical? You could dig it with a sharp slope and put your weight on rails. That would make the shaft longer for the same depth, but it would probably be easier to build and maintain.

gatehouse 5 days ago 2 replies      
Is 90% efficiency really feasible for electrical -> mechanical -> electrical?
transfire 4 days ago 0 replies      
I thought of this decades ago, and I am sure many others have too. And no the math does not come out favorably. While technically feasible, the cost is prohibitive. The reason is simple, you need LOTS OF WEIGHT to get any appreciable storage capacity.

Try it yourself: http://hyperphysics.phy-astr.gsu.edu/hbase/gpot.html

Note that 1 joule = 2.77777778 × 10^-7 kilowatt-hours
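Taking up the "try it yourself": a small E = mgh helper, with the result in kWh, makes the lots-of-weight point concrete.

```python
# Gravitational potential energy, in kWh, of a mass raised through a height.
G = 9.8  # m/s^2

def gpe_kwh(mass_kg, height_m):
    return mass_kg * G * height_m / 3.6e6  # 1 J = 1/3.6e6 kWh ~ 2.78e-7 kWh

one_ton_100m = gpe_kwh(1_000, 100)            # ~0.27 kWh
hundred_tons_1km = gpe_kwh(100_000, 1_000)    # ~272 kWh
```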

Pxtl 5 days ago 0 replies      
There's interesting variations here - at first glance a single big weight is the same as many small ones, but you have to think of the engineering challenges of cabling and motors for a 1 kilotonne mass. So you could use many smaller objects in a stack and move them one at a time, but then you've got the problem of decoupling/recoupling the cable and reaching through the ones that are at the top while lowering/raising the ones at the bottom.

Now, the next obvious consideration is using mountains instead of a pit. The rockies are full of 4000M peaks. But a mountain means suddenly we have so many new considerations - mountains aren't exactly a constant slope from peak to foot. But extreme loads on rails are a solved problem - the world's heaviest single fully-loaded rail-car was about a kilotonne (a special schnabel car carrying a reactor up to the Alberta tar-sands).

Of course, then you've got a new construction problem - building a train-track that's a near-straight-line up a mountain and can support incredibly heavy trucks.

It's probably not workable because of the energy-density concerns, but it sure is neat.

throwawayaway 5 days ago 1 reply      
> Most expensive component, the hole in the ground, can have a life of well over 50 years

Well well well, I must buy a well.

pm90 5 days ago 0 replies      
I really hope these guys succeed, even if they don't replace conventional batteries everywhere. One thing I've been thinking for a long while is that the future of humanity lies in how advanced we can make our drilling equipment. Think about it: asteroid mining, colonizing planets or even just conventional mining... all require drilling. If this technology catches on, there will be so much research done into finding better drilling techniques. In fact, there are so many resources to be found in our Earth itself, if we drill deep enough.

Also, imagine if we have a base on the moon powered by solar cells: having the technology to drill quickly and cheaply would be indispensable in storing energy captured during lunar "days". Although I imagine you would need deeper holes because of the smaller g.

pbhjpbhj 5 days ago 0 replies      
So basically clock-type counterweights. PSH (pumped-storage hydro) seems like it would be far more efficient - there are surely a lot of mechanical losses in the sort of system in the OP?

If the hole could be used for some sort of heatpump too then maybe that would weigh off [no pun intended!] some of the problems.

aetherspawn 4 days ago 1 reply      
If we had a dyson sphere around something heavy then the crank part could be in space with the weight dangling into the atmosphere: https://en.wikipedia.org/wiki/Dyson_sphere
ggchappell 4 days ago 1 reply      

> The key requirement is a deep hole in the ground; it could be a disused mineshaft brought back into use, or it could be a purpose drilled or sunk shaft.

So, apparently, "sinking" a shaft involves making a hole using some technique other than drilling. What technique is that?

Dictionaries are no help. Wikipedia[1] says:

> Shafts may be sunk by conventional drill and blast or mechanised means.

"Mechanized means" is pretty vague. Can anyone clarify?

[1] https://en.wikipedia.org/wiki/Shaft_mining

AndrewDucker 5 days ago 0 replies      
Okay, so energy density is low - but what are the costs like?

If you can build one of these cheaply, and the running costs are trivial, then it's worth doing.

How many of them would you need to smooth out the energy of a wind farm, for instance?

Stately 5 days ago 2 replies      
I'm far from being even slightly knowledgeable in this topic, but would it be possible to build this in very deep waters? Like a massive column containing a tunnel? Seems cheaper than digging a 1km hole.
aaron695 4 days ago 0 replies      
Good to see there's still a sucker born every minute.

Intelligent people on HN are pointing out huge holes in this idea, but some people are still defending their Nigerian princes, because... they want it to be true?

The warning signs here are huge. It's an incredibly simple idea, if it was possible it'd be already done. Nothing here really seems to rely on scale either.

KaiserPro 5 days ago 1 reply      
So how many joules can it contain?

I mean, technically I can get a supercap the size of a jam jar to kick out 1 kW, just not for very long.

A watt is a unit of how much energy is expended per second, not of how much energy is stored. There is a reason why hydro stations in Wales use lakes to store energy: you need a lot of mass at great height to be of any use.

dmritard96 5 days ago 0 replies      
Been attempted before. It's interesting that below ground is preferable to above ground. Water tables in many places will be a problem.
arfar 4 days ago 0 replies      
zaroth 5 days ago 0 replies      
I was really hoping it would keep scrolling down nearly forever, with something neat waiting at the bottom :-)
cjbenedikt 4 days ago 0 replies      
At least you'll have to appreciate that these guys previously started a business viable enough for Siemens to buy it. However, when they started out it appeared equally unfeasible at the time... "impossible is an opinion, not a fact"
blubbi2 4 days ago 0 replies      
I'm probably missing something, but what's the advantage of drilling a hole as opposed to "simply" building a kind of crane or skyscraper? I doubt it would be more expensive.

Sure, the pressure aspect would be gone, but besides that...

thisjepisje 4 days ago 0 replies      
What about springs for energy storage?


goodmachine 4 days ago 0 replies      
If this catches on everywhere we'll run out of gravity in no time.
krschultz 5 days ago 0 replies      
Keep in mind, we currently remove entire mountains to extract coal. If there is one thing we can do at scale it's dig holes, move concrete, and make steel cable.
Zikes 5 days ago 2 replies      
I look forward to Elon Musk's proposal for unconventional energy storage, because he wouldn't dare call it anything other than Eccentricity.
pizzashark 4 days ago 0 replies      
This sounded so much more interesting before I knew what it was.
robertmarley 4 days ago 0 replies      
Digging that hole seems expensive.
joering2 5 days ago 1 reply      
I have this idea in my mind for a while now and it seems great subject to share it with you.

Imagine an elevator going kilometers down into the ocean's depths. A huge tank is mounted on top of it. It gets filled with pressurized air. Because it's heavy, it sinks to the bottom. Then the air is released. The air travels toward the surface, but on its way it is captured in little traps. When enough air is trapped, the entire structure built from hundreds of traps is lifted to the surface, together with the tank. Of course, during this trip it turns the dynamo mounted at the surface, which translates this movement into electricity.

Once on the surface, the tank is filled with air again, and the process starts all over.

With a long enough elevator in a deep enough ocean, the electricity produced thanks to the travelling elevator would be greater than the electricity used to put air into the tank.

If you ignore the rules of gravity and the fact that trapped air travels to the surface of the water, this could be a perpetuum mobile.

What's wrong with my idea?
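What's wrong is the injection step: pushing a volume V of air into water at depth h costs at least (p0 + ρgh)·V of work against ambient pressure, which already exceeds the ρgh·V of buoyancy work recoverable on the way up. A sketch of the accounting (idealized: rigid traps, no air expansion, round-number density):

```python
# Energy ledger for the trapped-air elevator. Injecting volume V of air at
# depth h must displace water at ambient pressure (p0 + rho*g*h); the
# buoyancy work a rigid trap can recover on the ascent is only rho*g*h*V.
rho, g, p0 = 1000.0, 9.8, 101_325.0  # water density, gravity, 1 atm in Pa
V, h = 1.0, 1000.0                   # 1 m^3 of air released at 1 km depth

inject_work = (p0 + rho * g * h) * V  # J, minimum work to emplace the air
buoyancy_work = rho * g * h * V       # J, maximum recoverable on ascent
net = buoyancy_work - inject_work     # = -p0*V: a guaranteed loss, never a gain
```

Letting the air expand as it rises doesn't rescue the scheme either, since that expansion energy also has to be paid for during compression.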

rebootthesystem 4 days ago 0 replies      
They are using the wrong approach. It's the difference between Karate and Aikido.

Can one use potential energy to, well, store energy? Duh. This, however, is the wrong way to do it.

scottcanoni 5 days ago 0 replies      
Where can I buy one? I'll start digging in my backyard ;)
phragg 5 days ago 1 reply      
Drilling into the ground should almost never happen.
On Linux, 'less' can probably get you owned
298 points by adamnemecek  4 days ago   130 comments top 18
userbinator 4 days ago 3 replies      
I'm also not sure if the automation actually scratches any real itch - I doubt that people try to run 'less' on CD images or 'ar' archives when knowingly working with files of that sort.

This is a trend not uncommon in GNU software -- features added by someone who at some point thought it was a good idea, but probably didn't even bother using them much beyond an initial test to see that they are somewhat working. Most users likely think of 'less' as nothing more than a bidirectional version of 'more', and not as the "file viewer that attempts to do everything" that it seems to actually be. It's also a little reminiscent of ShellShock.

viraptor 3 days ago 2 replies      
This is probably a good reminder of something else: using selinux/apparmor/tomoyo/... can save you from many situations where you'd be exploited otherwise. For example as a response to this you can set a policy on lesspipe and all its children so that they cannot access the internet or write outside of temp directories.
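A minimal sketch of such a policy in AppArmor profile syntax (everything here is illustrative: the path, the rule set, and the exec modes are assumptions, and a real profile would pull in the distribution's usual abstractions and includes):

```
# Illustrative sketch only, not a complete or tested profile.
/usr/bin/lesspipe {
  deny network,        # the filter and anything it execs get no sockets

  /** r,               # read-only access to the files being previewed
  /tmp/** rw,          # writes confined to temp directories

  /usr/bin/* ix,       # helper tools inherit this same confinement
  /bin/* ix,
}
```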

Whatever library is used by lesspipe - you're safe as long as the output terminal and your kernel are safe.

Animats 3 days ago 1 reply      
Last month, it was "file", which turned out to have a parser for executables which can be exploited via a buffer overflow.

Probably the best exploit in this line is crafting JPEG files which cause buffer overflows in forensic tools and take over the machine being used for forensics.

We need an effort to convert Linux userspace tools likely to be invoked as root or during installs from C/C++ to something with subscript checking.

steakejjs 4 days ago 2 replies      
This research lcamtuf has been doing with AFL is really important.

One thing that it is proving (exactly as a lot of people expected) is that we don't have any idea where security bugs (think the next Heartbleed or Shellshock) are going to show up, we have no idea how good the software out there is (meaning: it is bad), and most of the time we don't even know what's running on our own boxes.

If these basic things we use hundreds of times a day (less, strings) have huge flaws, we have a lot of work ahead of us.

username223 3 days ago 4 replies      
The first time I accidentally ran "less" on a directory and it piped some version of "ls" into itself, I was mildly annoyed. The thing's supposed to page a text file on a terminal. Since then, I've had to think twice before invoking it to avoid this "helpful" behavior, and I'm not surprised that it came back to bite people.
fsniper 3 days ago 0 replies      
I think this is one side effect of development. We are trying to implement any feature that would be nice to have into every software.

LESSOPEN or LESSPIPE is a feature that is already achievable by manual means. But automation is king, so it's considered a nice feature to have implemented in the software itself.

If we could just stop and move on once software is capable of doing what it is intended for, as smoothly as possible, many of these issues would cease to exist.

mrmondo 4 days ago 0 replies      
That is bad default behaviour on Ubuntu's (and CentOS's?) part. I have confirmed this is not the case in Debian.
pmontra 3 days ago 0 replies      
I checked what I have on my Ubuntu 12.04.

  $ env | grep LESS
  LESSOPEN=| /usr/share/source-highlight/src-hilite-lesspipe.sh %s

Safe, as long as source-highlight isn't buggy.

I also checked my .bashrc and found this:

  # make less more friendly for non-text input files, see lesspipe(1)
  # NO! I don't want this!
  # [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"

So yes, lesspipe was the default and for some reason I commented it out. I vaguely remember being annoyed about less showing me something different from the actual binary content of the files.

michaelcampbell 3 days ago 0 replies      
> Many Linux distributions ship with the 'less' command automagically interfaced to 'lesspipe'-type scripts, usually invoked via LESSOPEN. This is certainly the case for CentOS and Ubuntu.

I just ran less on a dir in Ubuntu Trusty (latest LTS) and got the expected "<dir> is a directory" message.

reubenbond 3 days ago 3 replies      
Less really is more
vezzy-fnord 4 days ago 1 reply      
This was also mentioned in one of the pages of The Fuzzing Project, linked on HN just a short while ago as of this comment: https://fuzzing-project.org/background.html
fleitz 4 days ago 3 replies      
Am I missing something? Is lesspipe run as root? What could you execute via lesspipe that you couldn't from the command line?
guard-of-terra 3 days ago 1 reply      
That's why

  alias less=/usr/share/vim/macros/less.sh

Glyptodon 3 days ago 1 reply      
Should this show when I run env or not?
zobzu 3 days ago 0 replies      
But also vim. And irssi. And gpg. And a bunch of other day-to-day Linux programs nobody bothered to review very thoroughly.

Just in case anybody still believes it's just Java, Flash, OpenSSL and bash that suffer bad vulns (oh oh oh).

Sami_Lehtinen 2 days ago 0 replies      
Using cat in terminal can get you owned.
jacquesm 3 days ago 2 replies      
rm /bin/less

Problem solved.

Any binary utility that I haven't used in a 6 month period can get lost. The problem is that there are probably a hundred or so more issues like this hiding in /bin/* and /usr/bin/* and wherever else executables are hiding.

Is there a way to retrofit 'can shell out' as a capability flag not unlike the regular access permission bits?

upofadown 3 days ago 1 reply      
I checked on Debian Jessie and the /usr/bin/lesspipe script runs entirely off the file extension. So there is no issue with less itself. If someone sends me, say, a malicious doc file I would have to type "less blort.doc" to get owned by catdoc. The only time I would ever type that is if I knew that less would invoke catdoc, that I actually had catdoc installed on the machine, and for some reason I wanted to use catdoc to look at a doc file.
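The extension-only dispatch described above can be sketched like this (a hypothetical Python rendering; the real /usr/bin/lesspipe is a shell script, and catdoc/gzip are just the helpers mentioned in this thread):

```python
# Hypothetical sketch of extension-based dispatch, as the Debian script
# described above does: the file *name*, not its content, picks the helper.
def preview_command(filename):
    if filename.endswith(".doc"):
        return ["catdoc", filename]       # only reached if the file is named .doc
    if filename.endswith(".gz"):
        return ["gzip", "-dc", filename]
    return None                            # anything else: less shows it as-is

print(preview_command("blort.doc"))   # ['catdoc', 'blort.doc']
print(preview_command("notes.txt"))   # None
```

This is why a malicious file only reaches a vulnerable helper if you deliberately name it with that helper's extension.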

Less only installs a mailcap entry for "text/*". A mail reader that could not handle plain text itself would not be much of a mail reader.

That also means that it is kind of stupid to have less display non-text things. Still not a real security issue.

IE Web Development Support Moving to Stack Overflow
287 points by djug  3 days ago   200 comments top 21
kayman 3 days ago 4 replies      
Microsoft is making a lot of moves that are starting to surprise the tech community. It will take some time to convert the generation that grew up with the old, closed Microsoft, but slowly I believe they will win converts.
alexggordon 2 days ago 1 reply      
Firstly, bravo. I really applaud Microsoft for realizing what it wanted to have in a Q&A site, and for realizing that Stack Exchange was exactly that. Then realizing that people are probably going to go to SE, instead of a Microsoft forum or product.

That said, I think Microsoft is going to be in adapt-or-die mode for the next few months, and I think this is one of those decisions. They know they need to foster a community that anyone can be a part of.

Open Source move? Check. More Open Source? Check. Stack Exchange? Check.

The only thing this piece is missing is a better hardware platform, so I'm going to be very interested in seeing what Microsoft can produce in the coming months.

iancarroll 3 days ago 1 reply      
MSDN Social is probably the worst website I've ever used, honestly... The layout is confusing at best.

The answer is attached to the bottom (full length, no see more or anything) of the question and is then repeated again. Threaded comments are also very poorly formatted, and they hit a limit pretty soon from what I've seen.

hoodoof 3 days ago 4 replies      
Such a bad idea. The format of StackOverflow is very tightly defined. I bet all support questions don't fit that format. Then what?

StackOverflow should speak up against companies doing this.

ihaveajob 3 days ago 0 replies      
This is great news. In the past we worked with a great group of developer advocates, and I know they spent countless hours answering all sorts of technical questions on Windows development on StackOverflow. Making this official was only the next natural step.
andrewstuart 3 days ago 3 replies      
I created www.NotConstructive.com a few days ago in response to some of the challenges of posting to Stack Overflow.

Launch announcement: http://fourlightyears.blogspot.com.au/2014/11/notconstructiv...

meesterdude 2 days ago 1 reply      
This is great news, and this new Microsoft that seems to be emerging (albeit late to the party) is a welcome change of direction.

But let's not kid ourselves. It's still Microsoft. In my eyes, they have one of the worst track records, and it will take a monumental change for my perspective to turn positive; I do not know what it would truly take to change it.

They have played the evil-corporation card for so long that I worry if you give them an inch they'll take a mile. But I welcome hearing good news from Redmond, and hope it keeps coming.

mooreds 3 days ago 5 replies      
I guess you have to go where the people are, but man, I wouldn't want to outsource the knowledge base/forums for a major product to a different company. Seems like a valuable asset to give up.
px1999 3 days ago 2 replies      
Given the nature and content of the specific forum, and that SO is built on a largely Microsoft tech stack, moving it to SO makes sense.

Still an interesting move from Microsoft. Projects hosted on github, open source, versions of the .NET runtime for mac and linux, embracing docker, node.js and avoiding getting in fights over front end frameworks... this really seems like they're trying to turn a new page.

thepoet 2 days ago 0 replies      
LinkedIn API support moved to Stack Overflow too. The timing might not be a coincidence.

towelguy 2 days ago 0 replies      
With more and more teams moving their support to Stack Overflow, they should think about a way to show in a user's signature that they are an official support person, rather than them writing "disclaimer: I work at this company" every time.
BorisMelnik 2 days ago 1 reply      
Wow this is very surprising, glad to see Microsoft moving more into the commoners domain. Not a big SO user, but what would prevent them from getting IE.stackoverflow.com?
Animats 2 days ago 1 reply      
Does this mean Microsoft is terminating developer support for IE? All Stack Overflow can do is answer questions. They can't fix bugs in the product.
danabramov 3 days ago 0 replies      
Hopefully they'll do better than Facebook!
wcdolphin 3 days ago 0 replies      
That blog post does not look very "official". I had to double check to make sure it wasn't a joke.
arrowgunz 3 days ago 1 reply      
That is one smart move by Microsoft. Microsoft definitely seems like it is heading in the right direction. I'm quite excited to see what Microsoft has up its sleeve. Faster release cycles for Internet Explorer (like other popular browsers) would be a killer move by them, IMO.
mathattack 3 days ago 1 reply      
Wow - is this a massive pivot for Stack Overflow? A new revenue model?
jessedhillon 3 days ago 4 replies      
I don't understand why Microsoft continues to make its own rendering engine.

I get that they need a web browser they can control and distribute with their OS, but why a rendering engine? Especially one that, in my region (SF bay area), runs natively on fewer than 1% of the dev machines I see out there. If somebody were inclined to test their site for IE they'd most likely have to go through the hassle of setting up a VM running Windows just to do it. More and more, I'm hearing people just say "I don't test my work in IE" and being fine with it.

Is there a good argument for developing their own rendering engine, given the existence of two really good open source engines?

eximius 2 days ago 1 reply      
This is... not what Stack Overflow is for...
tkubacki 3 days ago 1 reply      
Asked my first IE question there - got downvotes with "it's not an IE support page" comment
dfar1 3 days ago 3 replies      
"With over 40-thousand questions tagged "Internet Explorer", and dozens more asked every day, it has proven to be a great place to find reliable help."

The announcement makes it seem that they are proud of how many questions are out there about IE issues.

Kill init by touching a bunch of files
253 points by omnibrain  3 days ago   121 comments top 12
pilif 3 days ago 7 replies      
There are so many things in here that tempt me to comment about, so here goes:

1) For me, this is a prime example of why I personally like programming environments with exceptions. If libnih could throw an exception (I know it can't), it could do so here, which would allow the caller to at least deal with the exception and not bring the system down. If callers don't handle the exception, well, we're where we are today; but as it stands now, fixing this will require somebody to actually patch libnih.

Yes. libnih could also handle that error by returning an error code itself, but the library developers clearly didn't want to bother with that in other callers of the affected function.

By using exceptions, for the same amount of work it took to add the assertion they could also have at least provided the option for the machine to not go down.

Also, I do understand the reservations against exceptions, but stuff like this is what makes me personally prefer having exceptions over not having them.

2) I read some passive aggressive "this is what happens if your init system is too complicated" assertions between the lines.

Being able to quickly change something in /etc/init and then have the system react to that is actually very convenient (and a must if you are pid 1 and don't want to force restarts on users).

Yes, the system is not prepared to handle 10Ks of init scripts changing, but if you're root (which you have to be to trigger this), there are way more convenient (and quicker!) ways to bring down a machine (shutdown -h being one of them).

Just removing a convenient feature because of some risk that the feature could possibly be abused by an admin IMHO isn't the right thing to do.

3) I agree with not accepting the patch. You don't (ever! ever!) fix a problem by ignoring it somewhere down the stack. You also don't call exit() or an equivalent in a library either of course :-).

The correct fix would be to remove the assertion, to return an error code and to fix all call sites (good luck with the public API that you've just now changed).

Or to throw an exception which brings us back to point 1.

I'm not complaining, btw: Stuff has happened (great analysis of the issue, btw. Much appreciated as it allowed me to completely understand the issue and be able to write this comment without having to do the analysis myself). I see why and I also understand that fixing it isn't that easy. Software is complicated.

The one thing I heavily disagree with, though, is point 2) above. Being able to just edit a file is way more convenient than also having to restart some daemon (especially if that daemon has PID 1). The only fix from Upstart's perspective would be to forego the usage of libnih (where the bug lives), but that would mean a lot of additional maintenance work in order to protect against a totally theoretical issue, as this bug requires root rights to trigger.

angry_octet 3 days ago 5 replies      
This is a great example of how bad many open source projects are at accepting contributions from 'non core' developers. The patch is just rejected, when it actually looks pretty valid to handle all cases of return value from a kernel interface. While it might not be a perfect solution, accepting it with suggestions for additional improvements could have led to those improvements.
rikkus 3 days ago 4 replies      
.NET's FileSystemWatcher documentation says that there's an internal buffer of a finite size, and that if (when) it fills up, you're going to be told about it and must do a complete traversal of the directory you're watching. No-one has invented a better way to deal with this, so that's what you need to do.

Many developers ignore this, so it's not really surprising that this has happened with inotify too. It's mentioned that a patch wasn't accepted, but it was with good reason - it doesn't fix the problem (by traversing the directory).
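The recovery pattern described above (when the watcher reports an overflow, treat the state as unknown and re-traverse the directory) can be sketched in a library-agnostic way; OVERFLOW below is just a stand-in for inotify's IN_Q_OVERFLOW event or FileSystemWatcher's error notification:

```python
# Sketch of "handle queue overflow by rescanning". When events are dropped,
# the watcher's view of the directory is stale, so the only safe move is a
# full re-read of the watched directory, rather than crashing on an assert.
OVERFLOW = object()

def drain(events, rescan, handle):
    for ev in events:
        if ev is OVERFLOW:
            for path in rescan():   # re-read the directory; process everything
                handle(path)
        else:
            handle(ev)

seen = []
drain(["a.conf", OVERFLOW],
      rescan=lambda: ["a.conf", "b.conf"],
      handle=seen.append)
print(seen)  # ['a.conf', 'a.conf', 'b.conf']
```

Note the duplicate: after an overflow you may process some paths twice, so handlers in this pattern need to be idempotent.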

twic 3 days ago 0 replies      
Perhaps it would be a good move to extract the file-watching behaviour from the part of Upstart that runs as PID 1, and put it in a separate daemon that can notify Upstart when it's time to reload a configuration file. That way, only the daemon would crash in this situation, and Upstart could restart it as it would any other daemon.

Although then you have the problem of the notification channel between the daemon and Upstart overflowing, I suppose.

Havvy 3 days ago 1 reply      
"libnih" - I have no clue what this does, so I immediately read it as 'lib not invented here'. Otherwise, yay...more unstable software.
shaurz 3 days ago 1 reply      
It's pretty dumb that pid 1 has any assert()s in it at all. Or libraries for that matter.
nodata 3 days ago 2 replies      
Writing to /etc/init.d requires root access. If you are root, you can bring down the box as it is.
xfs 3 days ago 6 replies      
TL;DR: He overflowed the inotify queue of /etc/init, the Upstart configuration directory being monitored. Upstart doesn't deal with the overflow, exits, and causes a kernel panic.

The bug is not fixed because in order to trigger it you need root to spam file operations in /etc/init, which implies bigger problems elsewhere. If you have root and want to see panics, just echo c >/proc/sysrq-trigger.

uint32 2 days ago 0 replies      
What's the point of watching for init config changes? Who has init configs that change so often that this is useful?
weissadam 3 days ago 0 replies      
The good news is that Upstart is in the process of being phased out. The bad news is its replacement. : )
SwellJoe 3 days ago 3 replies      
It's worth noting that the root user has any number of pathological use cases that can bring down the system. This is but one of them. Interesting, but not particularly dangerous or likely to be triggered in any normal circumstance.
emeidi 3 days ago 4 replies      
As far as I understand, you need to be root (or another privileged user) who has write access to /etc/init. Conclusion: You can bring down a machine with superuser privileges. Breaking.
What I Learned from Building an App for Low-Income Americans
266 points by prostoalex  2 days ago   91 comments top 11
soneca 2 days ago 2 replies      
> The best thing about my time at Significance Labs was meeting incredible people like Jason and Angel. The most fun I had last summer was sitting in a room chatting to housecleaners.

This ending shows how hard it is to escape the "feeling/looking good rather than doing good" trap. This motivation about meeting and relating to people and realities outside of your usual social network is legitimate, but dangerous. If you don't evolve past this phase, you will end up getting used to this alien world, and it will become routine. And you will see the problematic and boring issues about chatting with housecleaners, and also the flaws of Jason and Angel.

This, in itself, is enlightening. You stop considering yourself disconnected from your users or the people you are trying to help. The time has come when you might actually create some impact. But you must find your motivation above this fugacious desire to be contrarian among your peers. If your motivation ends when you feel you already have enough stories to share with your friends, and the "I am good people" stamp is already attached to your personality in your own and others' view, then you quit. And make no impact. Muhammad Yunus took decades to create impact.

I don't want to doubt the author's motivation, or predict that she won't cause any impact. Just that defining meeting interesting people (who are just boring people you don't know well enough, like the rest of us) as the best thing in a "what I learned" post is definitely a red flag.

anigbrowl 2 days ago 2 replies      
> Nevertheless we often had trouble persuading housecleaners and other domestic workers to come to interviews, even though we paid $25 per hour, which was higher than their regular hourly rate. They didn't know us and it looked too good to be true.

You could pay $25/hour for some product interviews, but how long does that last? Say someone does 4 interviews with you for a total of $200. Housecleaning for a new client might pay less per hour, but it also has the potential to be a steady gig which might yield a few thousand a year in additional income whereas this app probably won't. One also has to factor in the non-negligible time cost of getting to and from the interview. It's not really enough for someone to get excited about unless they have nothing else going on; people who work hard place a high value on their limited leisure time, and giving product design interviews sounds more like a way for the interviewer to pad a resume than anything that will deliver a long-term benefit to the interviewee.

As someone who spent quite a lot of time at the bottom of the economic pyramid, doing similarly casual labor, there's probably not a lot of value that an app cobbled together in a few months can deliver. The most useful things you can do with a smartphone are:

1. receive calls, email and text messages

2. Maps app and public transit scheduling to get you to your gigs on time

3. web browsing to check Craigslist

Those three things deliver a lot of value because they save a lot of time. It's unlikely that a shallow app is going to deliver anything like as much utility, and the utility it does add has to be weighed against the time it takes to use.

After that your income comes down to knowing whatever job it is that you do and being able to set prices and stick to them. The latter is quite important; some people are a pleasure to do business with, others will try and beat you down or trick you out of money. I've had people that owed $150 for furniture removal lie about the rate they negotiated and try to pay only $100, and when called out on it they shrugged and said 'no harm in trying.' That's a lot harder to deal with for someone who doesn't have English as a first language or the confidence to call someone's bluff.

The best way to empower a poor person is to teach them something that allows them to charge more money, which is typically a different skill as low-paid laborers tend to be price takers more than price makers and competition is friendly but intense. Short of a new work skill, the other thing a lot of poor people could benefit from is a basic course in microeconomics from a labor perspective - not so they can quote Adam Smith but so they have a consistent way to model things like opportunity cost and production possibilities and make decisions in less time instead of puzzling over things and wondering if they could have made a better deal.

Not everyone can do this, but basic micro does not involve a lot of math and gives people a systematic way to think about the economic problems that they already face on a regular basis. Developing a course based on the practical problems like deciding which of two mutually exclusive jobs to take or how to maximize profit would have enormous value. For a housecleaner, say, you could figure out that offering 'natural cleaning products' might have a cost in both consumables and additional cleaning time amounting to $4/hour, but clients who care about that may be willing to pay an extra $5/hour (or more), yielding greater profits.
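The "natural cleaning products" example above works out as simple arithmetic (a sketch; the per-hour numbers are the illustrative ones from that paragraph, and the job length is an assumption):

```python
# The pricing example above as plain arithmetic (illustrative numbers only).
hours = 4                     # assumed length of one cleaning job
extra_cost_per_hour = 4.0     # consumables plus additional cleaning time
extra_price_per_hour = 5.0    # premium that green-minded clients will pay

extra_profit = hours * (extra_price_per_hour - extra_cost_per_hour)
print(extra_profit)  # 4.0 extra per job; a negative value would mean drop it
```

The point of the exercise is the structure, not the numbers: compare the marginal cost of an offering to the marginal price it commands before taking it on.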

roneesh 2 days ago 1 reply      
I encounter these "civic engagement" apps every week. Honestly, unless the author plans to actually work hard on this, I find these kinds of apps more damaging than helpful.

No one benefits from a half-baked, half-implemented app, which these kinds almost always are. If they are fully implemented and maintained, then it's a different story: apps can change lives, but it's hard, long work to change a life. But if the app is abandoned or half-done, then really the client will end up viewing the developer more or less like the other scam artists they have to fend off.

In my opinion, the approach itself is pretty flawed. You as a talented person come help those less fortunate than you for 12 weeks; it seems like a win-win, but really I think most sides just end up frustrated. Off the top of my head, a better solution might be to actually try to build your own cleaning business with that community and then see what tools you need. Software is software, cleaning is cleaning; you can't be a domain expert in one and then just know what will work in the other. I think you need deep expertise in both.

cmarschner 2 days ago 3 replies      
Thanks for sharing. In the upcoming years a billion people will come online that have less than $20000, $5000 or $2000 per year. They will have less education and less foreign language proficiency than the current internet users. At the same time the internet will provide the opportunity for them to make more informed decisions and become more productive. The sets of problems might differ vastly from the rich people's problems that are solved by the current internet ecosystem. I would love to hear some more perspectives on this matter.
hereonbusiness 2 days ago 4 replies      
> In fact, a higher proportion of low-income Americans rely on their smartphone for Internet access than the population as a whole. A 2013 Pew research survey showed that 45% of users living in households with an annual income of less than $30,000 mostly use their phone to go online, compared with 27% of those living in households with an annual income of $75,000 or more.

If the situation is similar in the rest of the world, could this then be the driving force behind the trend of ever increasing screen size (phablets)?

I mean, if a smartphone was my only means of accessing the internet you can bet I would want it to have a big screen.

tedchs 2 days ago 0 replies      
I was really hoping for some concrete app-development lessons learned. For example, was it important to target old/low-end versions of phones? The article both says SMS is a lowest-common-denominator, but also that smartphones are more prevalent than one might assume; what was the outcome from those considerations? I noticed the food stamps app presents one question at a time, with a "Next" button after each one. Was that approach discovered after user testing, or was there just an assumption that that would be more successful for the user base than showing all the prompts on one screen?

Also, I am excited the IT industry has evolved to where these point-solution apps are both easy to build and plentiful. This is the "long tail" in action.

There is a lot of backend infrastructure that enables what I call "UX apps", where 90%+ of the effort is on frontend design and functional requirements, and very little consideration needs to be paid to infrastructure issues like OSes, networking, high availability, etc. The enablers here are things like PaaS, powerful mobile and browser clients, and reliable and prevalent communications (cellular / WiFi).

comrade1 2 days ago 1 reply      
This is a good example of trying to create demand where there was none before. This can work if you have a really great salesperson, but it's hard.

Also, I see your survey results on smartphone usage but I'm not convinced. In my experience poor people have spotty data plans - either no data or very limited, and only use their phone for sms and voice.

I'm making some decent money - not huge amounts - providing a service to low-income Americans. I have a server application that is used by 211 organizations, suicide hotlines, government groups, etc to communicate over sms with mostly low-income Americans.

kokey 2 days ago 2 replies      
It's really nice to read that this sort of thing is being done. I believe technology can be a great enabler to create new low- and semi-skilled jobs, or improve current ones. House cleaning and related duties, I have always felt, have great potential when combined with technology. You could have a person come in to capture your requirements regarding cleaning, packing, and organising parts of the house, including handling the laundry with a laundry service, or you could capture this information yourself with an app. Then you could have casual or full-time workers use an app to guide them through the house and the requirements, even if they haven't been to the particular house before. It could organise and schedule rounds of these people serving multiple households and schedule laundry pickups and drop-offs around it. It could allow people to pick their 'shifts' and move them in advance. It could encourage more people to take up shifts when there's a spike in demand.
username__ 2 days ago 1 reply      
I like this article.

Most of my friends and family are the working poor, and for years I have been struggling to find a way to give back in a meaningful way. I would love to help them technology-wise, but what I've come to realize is that they are not facing a technology problem -- they are facing a problem of opportunity, or lack thereof.

Instead, I've been offering them free training and equipment to learn what I do (software engineering), as I know there is a demand in the area that has not been filled and they can get those jobs if they put in a few months of time and dedicate themselves. Even if they couldn't get those jobs, they could get an office job and 'automate it away' to impress their bosses (this is exactly how I started).

So far, I've had surprisingly little interest. That said, they are not ones to take handouts...

jroseattle 1 day ago 0 replies      
> housecleaners prefer to be paid in cash (so mobile payments were out), mainly use text messaging, and sometimes don't want to reveal professional information online, especially if they were undocumented.

We have many users who fall into the lower-income bracket, and understanding their needs at a design level is very challenging. Many functions that we might take for granted within an application -- logging in, downloading, installing from the app store, etc. -- can be foreign if not altogether new concepts. We have to deliver success, error and information messages to users who fundamentally won't read or anticipate what is happening.

The SMS/text scenario is a big deal for us. We have users who interact with our service solely through their text message application -- nothing installed from our service on their end. When we offer a chance to use our mobile app, we have been asked "what's the app store?"

That said, it's rewarding to build something that makes someone's life easier for them. We are certainly falling in the do good camp.

VLM 2 days ago 0 replies      
"What he really needed was a steady job which would provide him with an income for his family. No mobile app I could build in three months was going to deliver that."

In summary, our economy is too small for our population, so minorities and others defined as undesirables get heartlessly kicked out of the economy. I don't like it, but that's the facts of our ever shrinking economic system and ever expanding population. So their plan is to use technology designed for the people still inside the system, on people outside the system, because by being insiders they are morally superior and should culturally imperialize the folks kicked out of the economy. And if it works inside the system, being very provincial, they think that all that exists is the inside of their system, so obviously we need to re-educate those in another system to see things the one true insider way. And despite having been kicked out of the economy, they are assumed to still have something that can be harvested from them that insiders value that isn't already being harvested by efficient megacorps (walmart has all their money, you aren't getting it). And somehow its assumed imperialism is not only a good idea, but it should work if they just wish hard enough, despite centuries of past experience showing imperialism mostly just screws things up. The whole thing is just nonsense on a big picture systemic level.

People kicked out of our economy already use logic and reasoning and technology, at least as much as our educational system permits them, to solve their own real problems, and this isn't one of them. In an era of hyper pervasive media they don't need to be shown how the remaining rich/lucky people live, they're drowning in it from the media and it causes little other than resentment. If the square peg doesn't fit in the round hole, working smarter not harder fails just as bad as getting a bigger hammer or pretending there is no fundamental as designed by our oligarchs mismatch.

A Minecraft world that has been played for 3.5 years
258 points by rocky1138  4 days ago   104 comments top 26
zalzane 4 days ago 2 replies      
Pretty big world, but it doesn't even compare to 2b2t in scale.

For those who haven't heard of it, 2b2t is an anarchy survival server that's been around for about the same period of time, 3-4 years, with no resets. Virtually the entire map from the spawn point to 5km from spawn is a desolate wasteland littered with ruins, griefed bases, castles, and megastructures.

With the introduction of the hunger system everything got a lot more interesting, requiring new players to make a mad scramble from spawn and try to find some source of sustenance. It's not uncommon for new players searching for food to duck into a 2-3 year old base that's been long abandoned but has a few precious pieces of bread left in a chest.

Typically players will build their bases anywhere between 10-500km away from spawn, and when they do, they build some of the most impressive bases I've seen in the game. One favored hobby of many regulars is to go hunting for these gems that have usually been abandoned years past.

Google it and you can get a good idea of exactly how old the map is, but the pictures really don't do justice to the absolute carnage of spawn.

stevebmark 4 days ago 9 replies      
Seeing things like this, and others in the thread, honestly depresses me. I've wasted months of my life building in Minecraft, huge structures that I can do nothing but look at, alone. Months where I lost social interaction and self maintenance. Nothing good ever comes of these worlds, and they are self destructive to their creators.

Minecraft, Reddit, Imgur, Facebook, all things you will eventually have to block from your life if you want to achieve anything real. Don't let it consume you. The only winning move is not to play.

ykl 4 days ago 4 replies      
Way awesome! It's really amazing what people can build in Minecraft even in a short period of time, let alone a long stretch.

Here's a map from a server I play on. This map represents only 6 months of work (we reset our map every 3-6 months) and was done 100% in survival mode: http://nerd.nu/maps/pve11/#/61/64/-42/-2/0/0

Macha 4 days ago 2 replies      
Server worlds can get impressive pretty quickly. One of my wallpapers is this render from one of the old reddit server worlds: http://i5.minus.com/im5gOe.jpg

On the other hand, my single player world which is about 4 years old isn't nearly as impressive.

ashark 4 days ago 2 replies      
The ring roads are the most impressive part, IMO.

Between material-gathering, going to the work site, clearing the path ahead (largely avoided here by elevation, but still) and actually laying down the road one... block... at... a... time... roads are very slow to build in Minecraft. They take a large amount of investment before they start to pay off, unlike most buildings, and they can't really be fully appreciated except from map views like this site, so they're kind of low-reward for the people doing the work, too.

I only run a vanilla server, so I don't know how mods might affect that. Were these tool-assisted in some way? I just can't wrap my head around that time involved if not.

cdr 4 days ago 1 reply      
Stuff like this is pretty fascinating. I've only really played Minecraft singleplayer and intermittently. I almost always end up starting a new world with each update, too, since it bugs me not having the new blocks/structures/etc in already created chunks. I've never even made it as far as doing the enderdragon, usually getting caught up building some ill-advised structure or clearing some endless cave network. I wish the devs would slow down with the new features, the complexity is kind of out of control at this point.
Hawkee 4 days ago 0 replies      
Here is a world I've been running for 2.5 years. Spawn has been moved several times, so the world spans tens of thousands of blocks. If you look carefully you can find large settlements from continent to continent, http://treestop.com:8123
robinhoodexe 4 days ago 1 reply      
Very impressive. How many days in total have been spent playing this world?

Also, some gems in there[1].

[1] https://i.imgur.com/RFDD5lZ.png

rakoo 4 days ago 1 reply      
I have a really hard time believing you built those concentric pathways by hand and not through some level editor...

Very nice world otherwise.

Mandatum 4 days ago 2 replies      
Wow, the performance of this website is really good. How is this so fast?
nacs 4 days ago 0 replies      
Another large server map that's from a 1+ year old map that's 12000 blocks wide. Players' towns are marked:


jostmey 4 days ago 1 reply      
I know almost nothing about Minecraft. How many people played in that arena? I ask because it would be kind of sad if one person did all of that. However, it says something pretty cool about humanity if it was constructed by random people passing through, which is that people will naturally work together. In other words, no centralized set of rules is required in order for people to collaborate to build a (virtual) city.
agmcleod 3 days ago 0 replies      
Wanna play minecraft now. I have a save that I continually work on. Nothing super impressive, still don't have the enchanting table surrounded by book shelves. But it's been a fun project slowly growing.
edem 4 days ago 1 reply      
The server has enormous lag because of this post. I managed to find a castle full of melons just a minute before I would have starved. I guess I'm lucky.
jchonphoenix 4 days ago 1 reply      
As someone who's never played minecraft, my first response is "I wonder if I can build Jurassic Park in this thing?"
rocky1138 4 days ago 2 replies      
Neat bit of trivia: Notch's wife, ez, was kind enough to visit once and leave a sign. "ez was here" :)
emilioolivares 4 days ago 0 replies      
There are some very impressive things created in Minecraft. This is one of them: Westeros (Game of Thrones) built in Minecraft. Simply amazing, I bought the game just to check it out:


NAFV_P 4 days ago 1 reply      
This could be an extreme version of "Where's Wally".
Narishma 4 days ago 3 replies      
A black page? The website doesn't seem to work in Firefox.
rocky1138 4 days ago 0 replies      
Another bit of trivia: I set the map to do a full render a day or so ago and it's still going. The map is at 1657200 tiles and counting.
mkaroumi 4 days ago 1 reply      
My question:

How many hours/day did you sit with Minecraft? (or whoever has made this)

edem 4 days ago 1 reply      
The link is not working for me :(
Immortalin 4 days ago 2 replies      
Doesn't anyone on HN play feed the beast modded minecraft?
notastartup 4 days ago 1 reply      
I wonder, wouldn't such a map invite trolls and griefers who would just come to destroy everything in sight?
alexperezpaya 4 days ago 0 replies      
I can't even imagine how many penises they had built in this time
timetraveller 4 days ago 1 reply      
What a waste of time.
The $12 Gongkai Phone
259 points by megablast  5 days ago   62 comments top 22
sah88 5 days ago 5 replies      
There is a link in the comments to a 7 dollar phone that looks way more polished. Crazy really, a fast food meal will cost you more than that here in Canada.


pronoiac 5 days ago 0 replies      
Ah, this dates from a year or two ago. Here's a previous discussion: https://news.ycombinator.com/item?id=5703946
Animats 5 days ago 0 replies      
It's been possible to get a reasonably good Android tablet for $30 in Shenzhen for two years or so. Now those are down to $20 in quantity.


drivingmenuts 5 days ago 1 reply      
So, what are the rules of gongkai? That might have been useful to include in the article.
codinghorror 5 days ago 1 reply      
If any article needed a date in the title this one does. Seen it on HN a few times now.

(can HN have code that checks for dupe links from the past and forces the year in the title, if so, just as an additional signal to long term readers? That'd help a lot.)

Aqwis 4 days ago 2 replies      
Is this really that extraordinary? I can buy a Samsung GT-E1200 candybar phone for $15 in Norway. Presumably similarly priced phones are sold in the US. That's a considerably nicer phone too, with a bigger screen and more features.
guylhem 5 days ago 1 reply      
I used that exact phone (well, it had a different plastic shell and was called "cardphone" - no bluetooth) for a year as my main cell phone.

It worked very well, had about 3 days of battery time (given my uses). The best part is that I never forgot it, given how small it was - always there with my credit cards

Unfortunately, the headphone died after a bad fall (I could still use it in speaker mode).

I'll be very happy to buy a new one, especially since it has bluetooth now. (if there's someone from China I would be interested, it usually sells for 3 times as much on ebay :-)

Immortalin 5 days ago 0 replies      
Not completely true. An Arduino Uno bought in China costs less than 5 bucks, especially if you buy it online
ww520 5 days ago 1 reply      
"Gongkai" seems to be between public domain and open source. Interesting to see a sharing ecosystem developing without a firm legal framework.
jarcane 4 days ago 0 replies      
This reminds me of the magazine phone. http://mashable.com/2012/10/02/ew-has-smartphone-inside/
chdir 4 days ago 0 replies      
Under $12, with camera, bluetooth, FM, radio, flashlight and dual SIM


There are more available in B&M stores for $6-$7 with a reasonable lifespan (easily greater than 1-2 years)

ja27 4 days ago 3 replies      
It's pretty amazing to me that there are pretty regular sales of crappy no-contract Android 4.X phones for under $10. Here's a current one:


mey 5 days ago 0 replies      
Title should be updated to (2013)
sturmeh 5 days ago 1 reply      
That looks like a Sansa Clip+ display, hmm.
mintplant 5 days ago 1 reply      
Does anyone have a link to this particular phone on the Digital Mall site? I've been looking for a low-cost, 'disposable' MP3 player for a while now.
moe 5 days ago 0 replies      
I got one of these (or a very similar model), they are sold as "Cardphones" on eBay.

Really nice backup device, battery lasts weeks, reception and call quality is surprisingly good.

I only wish there was an easy way to sync the phonebook to Android/iOS because the interface is really, well, basic.

krisgenre 4 days ago 0 replies      
GuiA 5 days ago 1 reply      
Related: David Mellis (one of the Arduino founders) has an Arduino-based phone project as part of his Phd research: http://web.media.mit.edu/~mellis/cellphone/index.html
stormpat 4 days ago 0 replies      
For a real low budget phone you could get a Nokia 105 for 18 dollars.
wyager 5 days ago 0 replies      
How hard is it to re-program these? If you needed a device with GSM and just a few GPIO, this makes more sense than an Arduino + GSM shield. In fact, the entire phone is cheaper than a GSM shield. It would even make sense to use this phone solely for GSM comms and connect to some other c over the UART.
frozenport 5 days ago 0 replies      
3rd world folks want Internet, and many will ration food spending to buy a phone capable of such.
millionairegonk 5 days ago 3 replies      

1.) Complete failure of security in 'convergence' to the smartphone. Too much centralization and the firmware has plenty of rootkits.

2.) Most USA citizens own their home and Hurricane Katrina - New Orleans turned their major assets into WEALTH DESTRUCTION.

3.) Yes, I carry around a 'personal book' on lightweight paper that could easily be 'obliterated.' WHY CARRY an apple phone, flash it and the ARMED ROBBERS and 'predators' follow you?

4.) Sure I am a bit extreme. Sure I made money the HARD WAY. Paid my way thru school by part time work, scholarship, etc.

5.) Hey MILLIONAIRE - I PAY CASH to local merchants/friends/room-mates. Plug in LIVE CD - Gentoo preferred or openbsd and THE CLOUD TAKES CARE OF YOU.

6.) OK. so I wear the tux and the bicycle is 50US$ used. Upgrade it 100 $US. Paying for an expensive car and parking it in New York City?

7.) RUSSIAN JOKE - ancient - So, the scientist wears the supercomputer the size of a wristwatch on wrist. BUT the catch is he has to carry a suitcase full of heavy lead batteries.

8.) OH, ahhhh. some of the equipment is hammmraddddio using the newwwer open source software. So, the 911 service CRASHED IN Seattle, WA? AGAIN?? AGAIN?? not a problem for the


riding a bicycle. making a chinese food delivery and traveling by private plane arrangements means the SILLY VALLY Vulture Capital guys have a harder time to track MR. Mobius of Templeton or Mr. gongkai.

LOL, suckers enjoy your aaaple dumb phone.

PS. moving most of the 'infrastructure' to world diversified VPS. Even CLOUDFLARE is a single point of failure.

Magnus Carlsen Repeats at World Chess Championship
260 points by sethbannon  4 days ago   95 comments top 12
dmourati 4 days ago 6 replies      
Carlsen came to play an exhibition at my previous company. He played against 8 players simultaneously. Two had been chess-club players and competitively ranked. One employee played Carlsen to a draw. This earned several gasps from his handlers who had never seen such a thing. They asked what she was ranked and she said she wasn't with a wry smile. Truly something special to behold.
sethbannon 4 days ago 1 reply      
For those that weren't following closely, this rematch was nothing like the last world championship between Carlsen and Anand, when Carlsen won in a blowout. Carlsen didn't seem to be in top form this time, and Anand clearly came better prepared than the last match. Two games stood out as being particularly fascinating for me:

Game 3, which Anand won: http://en.chessbase.com/post/sochi-wch-g3-the-tiger-roars

And game 11, in which Carlsen clinched the title: http://en.chessbase.com/post/sochi-g11-in-dramatic-finale-ca...

pk2200 4 days ago 0 replies      
Game 6 was the turning point of the match. Carlsen made a horrific blunder, and if Anand had noticed it, he very likely would have won and taken a 1-game lead in the match. Instead he lost, giving Carlsen a 1-game lead.


ramkalari 3 days ago 0 replies      
Anand doesn't like playing long games. Carlsen doesn't give an inch even when he is down. Regardless of rating differences, Anand's game was in bad shape in 2013 and it just continued in the world championship. He is playing much better these days. It is just that his game doesn't match up very well to Carlsen's relentless style. That said, he had his chances this time like how he predicted before the match. He just couldn't take them. I'm always reminded of Federer-Nadal rivalry when I watch these two play.
mbell 4 days ago 4 replies      
I found it interesting that there was a stream on twitch.tv that had over 10,000 viewers for this. It's how I found it, not being much of a chess follower but still finding the game interesting. I was very surprised to see so many viewers.
cheepin 4 days ago 3 replies      

But seriously, I was following this like a TV show, kind of hoping Anand would win for the extra drama it would create.

Congrats to Carlsen, clearly an amazing player but...


sharkweek 4 days ago 0 replies      
It's been a real treat watching Carlsen play these past few years in his rise to fame - He has a fundamental understanding of endgame far above anyone else playing at his level. It's fun watching the latest generation of players add to the skill of the game, with Carlsen atop that list.
linuxfan 3 days ago 0 replies      
Anand's play improved vastly during this WCC. In retrospect, he should've drawn the 11th game and pressed Carlsen with white pieces in game 12. Magnus played well but it appeared that he wasn't as prepared as Vishy for this match. Every champion eventually gets dethroned by a youngster. The new generation of players including Caruana, Karjakin, Nakamura etc. will pose a bigger threat to Carlsen than old-timers such as Vishy, Gelfand and Kramnik.
Jakeimo 4 days ago 0 replies      
mhartl 4 days ago 3 replies      
It's fascinating to read about the blunders both competitors made. One of the many strengths of computer chess programs is that they are virtually guaranteed never to make such mistakes.
KedarMhaswade 3 days ago 0 replies      
This is a great achievement. Though I am a Vishy Anand fan and I really liked the way the contest was fought, I do think that Carlsen deserved to win. World chess championships, with their rich history are our treasure!
known 3 days ago 2 replies      
Bell Curve for Anand.

Anand should gracefully retire from chess and pass on the baton to the younger generation.

FreeBSD: the next 10 years
247 points by danieldk  5 days ago   170 comments top 18
DCKing 5 days ago 4 replies      
The most intriguing part of FreeBSD (any BSD really), from an outsider's perspective, is the fact that "the FreeBSD project" has a bigger, more directed scope than "the Linux project" or "the GNU project". It's a kernel and userland all in one, and they can actually decide to focus more on unity of configuration files and mobility. I get the impression that you cannot decide that as efficiently on Linux at all.

People often say "Linux is all about choice" as if it's a good thing [1]. I think this overwhelming focus on choice really is what's so frustrating about Linux and its community. If needs aren't being catered to, or if there are disagreements, the amount of vitriol that gets thrown around is despicable. The systemd controversy is so terribly shameful, but lo and behold: FreeBSD now seems to be envious of it (or at least some of its aspects). Gnome 3 is widely regarded [2] as the best, well-integrated desktop environment Linux has ever had, and look at the amount of vitriol that got for having a direction and making choices for the user. I personally think that the level of integration systemd and Gnome 3 are attempting to pioneer make Linux far more attractive than ever, but the Linux community really alienates me with its attitude towards that. With this attitude, desktop Linux mostly remains a patched together collection of software. The rough edges of this patchwork are still far more apparent on even the best regarded distros when compared to OS X or Windows or even Android.

It's a shame FreeBSD doesn't maintain an integrated or official graphical interface. Since it uses the same not-so-well integrated desktops as Linux does, it unfortunately means that using FreeBSD is only a minor improvement over Linux for me in daily use [3]. That means I'll just leave it to Apple to build me a well-working and well-integrated operating system. If an operating system project or vendor makes choices for me, I consider that a big advantage most of the time. If a project can actually take on a proper direction like this presentation suggests, that is a big selling point to me.

[1]: I know what the good things about choice are, I'm trying to make a point.

[2]: Widely regarded does not mean "universally regarded" or even "regarded as such by the majority".

[3]: Ignoring the problem that FreeBSD doesn't have the level of driver or application support that Linux has.

fdsary 5 days ago 11 replies      
Choosing a unified format for configurations is an interesting task, because they all suck a lot (hehe). XML is too verbose to be nice to work with. Plain text files with config flags delimited by newlines lead to the program in the end implementing a small scripting language for config files.

JSON is pretty nice, but also a bit clunky. A lot of {:} all the time.

Personally, I think the nicest and most expressive way is S-expressions. I'm no lisper, but you have to admit sexprs are expressive, easy to read, and can be run as functions if the program knows lisp.

    {
      "configFiles": "in JSON",
      "wouldLook": {"like": "this"}
    }

    (while sexpr
      (could look)
      (even nicer))

nkuttler 5 days ago 3 replies      
I find it irritating to see cheap jabs at Linux/GNU/GPL in most BSD presentation I check out. It doesn't prevent me from using it, but it's just childish. Focus on your strength, not what you perceive as weakness elsewhere.
justincormack 5 days ago 0 replies      
I didn't find this talk very visionary (I saw the similar one at EuroBSDcon). Power management needs to be better, add something like systemd that's not like that. Vision in (existing) operating system development in terms of ten year projects is actually quite rare; mostly there are incremental changes. OSX is maybe an example, go for usability, and ZFS is another, make a radically different file system. Windows NT perhaps as well.
andrewflnr 5 days ago 2 replies      
This may be a stupid question, but what's wrong with shell scripts as config? Yeah, they're Turing-complete, but they're necessarily trusted, and a lot of times you end up wanting that anyway. But in the simplest case, they're almost the platonic ideal of a config language: name=value, repeat. If your objection is that shell languages have lots of nasty warts, then I'd agree, but you should be fixing that separately anyway. I sort of encountered this idea in FreeBSD init scripts, so I don't see why they don't just run with that.
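(The "name=value, repeat" shape is trivial to consume without any shell at all; a minimal sketch, where the '#'-comment and whitespace conventions are my assumptions:)

```python
def parse_simple_config(text):
    """Parse 'name=value' lines; '#' starts a comment, blanks are skipped."""
    config = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()
        if not line:
            continue
        name, sep, value = line.partition('=')
        if sep:
            config[name.strip()] = value.strip()
    return config

rc = parse_simple_config('sshd_enable="YES"\n# comment\nhostname = example\n')
```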
coldtea 5 days ago 0 replies      
For those that don't know it (there are some comments here but not very clear): the writer of this presentation, Jordan Hubbard was a head FreeBSD developer for many years, who then become a head developer of OS X at Apple. He left Apple last year, and is back into FreeBSD work.
nathell 5 days ago 1 reply      
"All OS and app configuration data in OS X and iOS are XML plist files, even GNU emacs and X11.org's preferences!"

Naming correctness aside (it's X.org), can this be backed somehow? I remember using Emacs on OS X and I very much was storing my configuration in ~/.emacs.d/, the way it should be. The idea to have a unified configuration format for the entire system is glorious in theory, but with a system as heterogeneous as FreeBSD (and Linux even more so), it seems next to impossible in practice.

sudioStudio64 5 days ago 1 reply      
I have to admit that I came to this thread to see if anyone accused Jordan Hubbard of not understanding the "Unix Way" when he mentioned that they need a subsystem like the one that must go unnamed on this and every other forum.

Gosh, I remember when you could get FreeBSD on floppies. I've always had a great deal of respect for the work that they do. 10.0 was awesome, but I have to admit that I don't use *NIX everyday anymore.

cwp 5 days ago 1 reply      
Can someone explain bit about "trying really hard not to suggest launchd?" He's right, it does seem like an obvious fit, but he takes it as obvious FreeBSD wouldn't use it. What am I missing?
josteink 5 days ago 0 replies      
I must admit I like some of what I see, but I'm not sure the people currently ditching Debian for FreeBSD are.

Hopefully FreeBSD's execution will be better than Debian's, but given their long standing track record I have little doubt they'll be able to make a future transition a million times smoother.

I'm still not sure I agree that all configuration has to be stored/processed in the same format (ref Apple plists).

I know this is the way things are done on some embedded platforms like OpenWRT and once you get used to it, it's OK, but it always means a feature needs to be doubly supported: first in the original service and its config-file and then in the translation layer between the config-file translated by the init-script into the "real" deal.

And will they be doing this for the 70k+ ports, or just for core services provided by the base OS?

porker 5 days ago 0 replies      
Let's hope they take these pointers to heart, discuss and move forward. It'd be great to have an even stronger FreeBSD in 10 years' time.
teddyh 5 days ago 2 replies      
Regarding A centralized event notification system, I predict that D-Bus will soon have this position in Linux, certainly when kdbus lands.
krick 5 days ago 2 replies      
> My dream laptop has evolved (BSD instead of Windows) (15th slide)

But, wait, isn't it Mac?

gtirloni 5 days ago 1 reply      
Regarding the centralized configuration data, I don't know how to feel about it. Sure, it would be good but most recent attempts at it usually involve going the Registry way.
Spidler 5 days ago 5 replies      
"Even the linux die-hards have essentially grasped the necessity of systemd (Even though they're going to hate on it for awhile longer)"
XERQ 5 days ago 1 reply      
I find it alarming that releases are EOL after 1 year, whereas RedHat Linux releases are supported for 10 years.
mp3geek 5 days ago 2 replies      
In 10 years time will they still be using CVS?
W3C HTML JSON form submission
254 points by ozcanesen  1 day ago   99 comments top 21
rspeer 1 day ago 3 replies      
Let me say first of all that I'm glad they're working on standardizing this. When making REST APIs, I find HTML form scaffolds incredibly useful, but it means that you probably have to accept both JSON (because JSON is reasonable) and occasional form-encoding (because forms), leading to subtle incompatibilities. Or you have to disregard HTML and turn your forms into JavaScript things that submit JSON. Either way, the current state is ugly.

Here's the part that I don't particularly like, speaking of subtle incompatibilities:

    EXAMPLE 2: Multiple Values

    <form enctype='application/json'>
      <input type='number' name='bottle-on-wall' value='1'>
      <input type='number' name='bottle-on-wall' value='2'>
      <input type='number' name='bottle-on-wall' value='3'>
    </form>

    // produces
    {
      "bottle-on-wall": [1, 2, 3]
    }
I've seen this ugly pattern before in things that map XML to JSON. Values spontaneously convert to lists when you have more than one of them. Here come some easily overlooked type errors.

I don't know of any common patterns for working with "a thing or a list of things" in JSON; that kind of type mixing is the thing you hope to get away from by defining a good API. But all code that handles HTML JSON is going to have to deal with these maybe-list-maybe-not values, in a repetitive and boilerplatey way.
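(A common consumer-side workaround for the maybe-list-maybe-not shape is a tiny normalizer; this is just a sketch of that boilerplate, not part of any standard:)

```python
def as_list(value):
    """Normalize a maybe-list JSON value: wrap a scalar in a list so
    downstream code can treat every field uniformly."""
    return value if isinstance(value, list) else [value]

# One bottle or three, the handler sees the same shape either way.
one = as_list(1)            # [1]
many = as_list([1, 2, 3])   # [1, 2, 3]
```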

I hope that a standard such as this will eventually be adopted by real-life frameworks such as Django REST Framework, but I also hope that they just reject the possibility of multiple fields with the same name.

bkardell 1 day ago 0 replies      
In fairness, you have to look at how standards get somewhere - this is an editor's draft which is a starting point of an idea rather than a done deal. Don't be surprised if the final product winds up being significantly different than this - even better, get involved in the conversation to make it what we need. That's not to pour cold water on it: It's good as it is, but there are changes which potentially help explain the magic of participating in form encoding and submission which may be better and allow more adaptation and experimentation over time.
luikore 1 day ago 0 replies      
I don't agree with Example 9, we should use data uri scheme for file content

    "files": [{
      "name": "dahut.txt",
      "src": "data:text/plain;base64,REFBQUFBQUFIVVVVVVVVVVVVVCEhIQo="
    }]
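(Building such a data URI from raw file bytes is a one-liner; a minimal sketch, with the text/plain default being my assumption:)

```python
import base64

def to_data_uri(data, mime='text/plain'):
    """Encode file content as an RFC 2397 data: URI with base64 payload."""
    return 'data:%s;base64,%s' % (mime, base64.b64encode(data).decode('ascii'))

uri = to_data_uri(b'hello')  # data:text/plain;base64,aGVsbG8=
```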

homakov 1 hour ago 0 replies      
hughes 1 day ago 7 replies      

    {
      "name":   "Bender"
    , "hind":   "Bitable"
    , "shiny":  true
    }
Who puts commas at the start of a continuing line? What good could that possibly do?

jimmcslim 23 hours ago 1 reply      
I'm not sure whether to be heartened or concerned that the W3C is referencing the doge meme in its specifications... see Example 6.
lorddoig 1 day ago 3 replies      
It amazes me that we're now at the point of standardizing sticking array references inside strings and yet we're still not having a serious discussion about what comes after HTML.
tomchristie 20 hours ago 0 replies      
Seems pretty decent. Also neat that the nesting style could be repurposed to support nested structures in regular form-encoded HTML forms.

Main limitation on actually being able to use this is that `GET` and `POST` continue to be the only supported methods in browser form submissions right now, so eg. you wouldn't be able to make JSON `PUT` requests with this style anytime soon.

Might be that adoption of this would swing the consensus on supporting other HTTP methods in HTML forms.

chronial 15 hours ago 2 replies      
Am I the only one who is worried about the fact that this is exponential in size?

  <input name="field[1000000]">
Will generate a request that is ~5MB.
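(A sketch of why, under the draft's array-padding semantics as I read them: the array is filled with nulls up to the written index, so the request grows with the numeric value of the index, i.e. exponentially in the number of digits typed into the name:)

```python
import json

def encode_indexed_field(index, value):
    """Sketch: 'field[N]' yields an array padded with nulls up to index N."""
    return json.dumps({"field": [None] * index + [value]})

small = encode_indexed_field(3, 1)            # {"field": [null, null, null, 1]}
big = len(encode_indexed_field(1000000, 1))   # a few megabytes of "null, "
```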

tootie 1 day ago 1 reply      
They're still working on XForms after 10 years http://www.w3.org/MarkUp/Forms/
jdp 19 hours ago 0 replies      
The latest release of my jarg[0] utility supports the HTML JSON form syntax. Writing out JSON at the command line is tedious, this makes it a little nicer. The examples from the draft are compatible with jarg:

    $ jarg wow[such][deep][3][much][power][!]=Amaze
    {"wow": {"such": {"deep": [null, null, null, {"much": {"power": {"!": "Amaze"}}}]}}}
[0]: http://jdp.github.io/jarg/
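(The bracket-path expansion behind examples like this can be sketched in a few lines; this is my own toy reimplementation, not jarg's code, and it assumes well-formed, non-conflicting paths:)

```python
import re

def parse_path(name):
    """Split a form key like 'a[b][3][c]' into ['a', 'b', 3, 'c']."""
    first = name.split('[', 1)[0]
    keys = [first] + re.findall(r'\[([^\]]*)\]', name)
    return [int(k) if k.isdigit() else k for k in keys]

def set_value(root, keys, value):
    """Assign value into nested dicts/lists along keys."""
    obj = root
    for key, nxt in zip(keys, keys[1:]):
        container = [] if isinstance(nxt, int) else {}
        if isinstance(key, int):
            while len(obj) <= key:       # pad sparse arrays with None/null
                obj.append(None)
            if obj[key] is None:
                obj[key] = container
            obj = obj[key]
        else:
            obj = obj.setdefault(key, container)
    last = keys[-1]
    if isinstance(last, int):
        while len(obj) <= last:
            obj.append(None)
        obj[last] = value
    else:
        obj[last] = value

form = {}
set_value(form, parse_path('wow[such][deep][3][much][power][!]'), 'Amaze')
```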

pmontra 18 hours ago 0 replies      
A discussion about the implementation of the spec in jquery. It started on June 21


techtalsky 1 day ago 1 reply      
Kind of nice, basically turns form submission into a bare-bones API call.
kijin 1 day ago 1 reply      
Why such an emphasis on "losing no information" when the form is obviously malformed?

You need only to look at the crazy ways in which MySQL mangles data to realize that silently "correcting" invalid input is not the way to go. The web has suffered enough of that bullshit, we seriously don't need another. Example 7 (mixing scalar and array types) gives me shudders. Example 10 (mismatched braces) seems to have a reasonable fallback behavior, though I'd prefer dropping the malformed field altogether.

If the form is obviously malformed, transmission should fail, and it should fail as loudly and catastrophically as possible, so that the developer is forced to correct the mistake before the code in question ever leaves the dev box.

Preferably, the form shouldn't even work if any part of it is malformed. If we're too timid to do that, at least we should leave out malformed fields instead of silently shuffling them around. Otherwise we'll end up with frameworks that check three different places and return the closest match, leaving the developer blissfully ignorant of his error.

While we're at it, we also need strict limits on valid paths (e.g. no mismatched braces, no braces inside braces) and nesting depth (most frameworks already enforce some sort of limit there), and what to do when such limits are violated. Again, the default should be a loud warning and obvious failure, not silent mangling to make the data fit.

This is supposed to be a new standard, there's no backward-compatibility baggage to carry. So let's make this as clean and unambiguous as possible!
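(The fail-loud approach for form keys could look something like this sketch — the exact grammar here is my assumption, not the draft's: a name followed by simple, balanced bracket groups, with anything else rejected outright instead of "corrected":)

```python
import re

# One name segment, then zero or more non-nested bracket groups.
FORM_KEY = re.compile(r'[^\[\]]+(\[[^\[\]]*\])*')

def validate_form_key(name):
    """Raise loudly on mismatched or nested braces rather than guessing."""
    if not FORM_KEY.fullmatch(name):
        raise ValueError('malformed form key: %r' % name)
    return name
```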

skratlo 14 hours ago 2 replies      
Wow, W3C at its best again. Non-modular, non-negotiable, JSON it is, take it or leave it. Well fuck you W3C. Base64 encoded files? Seriously? What if my app works better with msgpack encoded forms? Or with XML encoded? So you're going to support one particular serialization format, quite a horrible one, but that's subjective and that's the whole point. Every app has different needs and you should spec. out a system that is modular and leaves the choice to the user, even for the price of "complicating things".
billpg 19 hours ago 1 reply      
A new standard for referencing a point in a JSON object? I wonder if they considered RFC 6901 and rejected it.

I personally prefer this new square bracket notation, but being a standard already gets more points.

mnarayan01 1 day ago 1 reply      
The JSON-based file upload would be nice (AFAIK there's no great way to do this ATM, but I haven't looked in over a year). The rest seems pretty weak-tea though. I can see multiple issues with more defined types (e.g. numeric rather than string values, null rather than blank string), but without dealing with that stuff, this seems of extremely limited utility.
stu_k 1 day ago 3 replies      
Submitting files with this form encoding is of course going to have the base64 overhead, but otherwise this looks great!
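The base64 overhead mentioned here is easy to quantify: every 3 input bytes become 4 output characters, so files grow by roughly a third before any padding. A quick Node check:

```javascript
// Quick check of the base64 size overhead: base64 encodes each
// 3-byte group as 4 ASCII characters, i.e. ~33% growth.
const payload = Buffer.alloc(30000);   // 30 kB of zero bytes
const encoded = payload.toString('base64');
const overhead = encoded.length / payload.length;
console.log(overhead);                 // 4/3 ≈ 1.333
```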
edwinvdgraaf 1 day ago 0 replies      
Guessing that it's interesting when using a uniform endpoint for both forms and JS-driven requests.
Patrick_Devine 1 day ago 2 replies      
Can we just get rid of HTML and replace it with JSON while we're at it?
woutervdb 1 day ago 1 reply      
> wow[such][deep][3][much][power][!]

And there goes my interest in this submission. Don't use overused memes in a submission. I like the idea, though.

Hard disk hacking
237 points by dil8  1 day ago   44 comments top 17
userbinator 22 hours ago 1 reply      
I think it's rather unfortunate that the workings of modern HDDs (and other storage devices, like SSDs, microSD cards, etc.) are all hidden behind a wall of proprietariness, as this is mainly a form of security through obscurity; and government agencies probably know about such means of access already, while not many others do.

Although they're largely obsolete today, for many years the most well-documented and open storage device that could be connected to a standard PC was the floppy drive. The physical format was standardised by ECMA, the electrical interface to the drive was nothing more than analog read/write data and "dumb" head-positioning commands, the controller ICs (uPD765 and compatible) interfacing it to the PC were based on simple gate arrays (no need for any firmware), and all the processing was otherwise handled in software. The documentation for the earliest PCs included the schematics for the drive, and the ICs on it were documented elsewhere too - e.g. https://archive.org/details/bitsavers_westernDigorageManagem... A lot of the technical details of early HDDs were relatively open too. I've interfaced a floppy drive to a microcontroller before, and being able to see how the whole system works, to understand and control how data is read/written all the way down to the level of the magnetic pulses on the disk, is a very good feeling.

(Many earlier systems that came before the PC, like the C64, also had more-or-less completely open storage devices, enabling such interesting things as http://www.linusakesson.net/programming/gcr-decoding/index.p... )

mojoe 22 hours ago 4 replies      
I am very curious about how long this hack took to complete. I write firmware for SSD controllers for a living, and this would probably take me many months of full-time work to pull off with an unknown controller (granted, I generally work on algorithms at a slightly higher abstraction layer in the firmware, and some of my colleagues who are more focused on the hardware interfaces could figure something like this out much faster than me). I am incredibly impressed by this effort.

Also, I want to mention that it's common to have multiple processors in storage controllers. I can't talk about the specifics of the drives that I work on, but for SSDs at least there are several layers of abstraction: the host interface to receive the data, a middle layer to perform management of the data (SSDs require things like wear leveling, garbage collection etc in the background, to ensure long life and higher I/O speeds), and a low level media interface layer to actually write to the media. These tasks are often done by different processors (and custom ASICs).

jarek 19 hours ago 2 replies      
Also might be of interest: Bunnie's hack of SD cards last year http://www.bunniestudios.com/blog/?p=3554

"An Arduino, with its 8-bit 16 MHz microcontroller, will set you back around $20. A microSD card with several gigabytes of memory and a microcontroller with several times the performance could be purchased for a fraction of the price. While SD cards are admittedly I/O-limited, some clever hacking of the microcontroller in an SD card could make for a very economical and compact data logging solution for I2C or SPI-based sensors."

"The embedded microcontroller is typically a heavily modified 8051 or ARM CPU. In modern implementations, the microcontroller will approach 100 MHz performance levels, and also have several hardware accelerators on-die."

Was discussed on HN, but Algolia search looks to be down at the moment.

schoen 1 day ago 0 replies      
There were several amazing talks at hacker conferences last year about reprogramming storage devices so that they can tamper with their contents. This researcher's talk was one of those. Another significant one was


and I think there were at least two others that I can't find right now (plus recent stuff on USB devices that attack their hosts in various ways). In light of these and other firmware and hardware-borne threats, a good overview of the bigger verification and transparency problems is


teknotus 18 minutes ago 0 replies      
I really like the idea of using this as a defensive measure.
dsl 21 hours ago 3 replies      
Most people are surprised when I tell them that their computer is a lot of little computers working together on a sort of internal network.

This is why if your machine is compromised, and you have a threat model that involves serious (state or otherwise well funded) attackers, you really should just send it off to be recycled.

bajsejohannes 5 hours ago 0 replies      
This reminds me of a quite wonderful talk at OSCON earlier this year: http://www.oscon.com/oscon2014/public/schedule/detail/33943 (slides are available, but I don't recognize the file format)

The high point for me is where he installs Linux on the hard drive. In the sense that the hard drive itself is running Linux.

There are quite a few venues for attacks like these: A single computer is sprawling with processors.

pronoiac 23 hours ago 2 replies      
The server is overwhelmed. Coral cache: http://spritesmods.com.nyud.net/?art=hddhack&page=1
yoha 19 hours ago 0 replies      
Here is the previous discussion for those interested: https://news.ycombinator.com/item?id=6148347
rasz_pl 12 hours ago 0 replies      
kev009 1 day ago 1 reply      
This is really interesting stuff. Any pointers for getting into this kind of thing?
themoogle 6 hours ago 0 replies      
I want to take this and go further. Have a mini linux distro running on my drives :D
pingec 21 hours ago 0 replies      
I really like his article about dumb-to-managed switch conversion. I wonder if more projects like this exist, perhaps with some existing community. It would be really cool if one could buy a cheapo switch and hack it into a managed one, similar to how you can flash OpenWrt onto some cheap routers and make them 100x better.
larrys 11 hours ago 0 replies      
I learned today what a jellybean part was:


"cheap and multitudinous commodity parts, each with a processor, memory, and a fast communication interface"

This reminds me of when I first went into business and bought some machinery. It actually surprised me (at that young age) to learn that the production machine I bought used standard parts that I could buy anywhere (bolts, screws and the like) and that if I needed one I didn't have to order it from the company that I bought the machine from. That seems obvious to me today but it wasn't obvious back then ("back then" was way before the web of course where info was not readily available)

jeffhuys 18 hours ago 1 reply      
Aw... Was reading, clicked to page 5:

>Warning: mysql_connect(): Can't connect to MySQL server on '' (111) in /var/www/spritesmods/connectdb.php on line 2

Edit: seems to work again!

TheLoneWolfling 15 hours ago 3 replies      
So... what's the Cortex used for?
jrockway 19 hours ago 2 replies      
I wouldn't trust the data on a hard drive anyway, since the hard drive can be removed and the data changed. If you want to make sure you're reading _your_ /etc/shadow, it needs a message authentication code. If you want to prevent others from reading your disk, it needs to be encrypted.
Yahoo Mail moving to React
230 points by rfc791  1 day ago   287 comments top 31
DigitalSea 1 day ago 1 reply      
I am surprised at all of the hate that Yahoo! is receiving for this in the comments section of Hacker News (not that it's surprising). This is great in my opinion; I think React.js is definitely the future of SPAs, especially when combined with the Flux architecture. As someone who has been using React on a daily basis for the last few months, I have a severe man-crush on it. It just makes so much sense, combined with something like Browserify and working with an isomorphic workflow (shared codebase front and back end).

And those who say Yahoo! are just jumping onboard the React hype train or the Node.js hype train, you have it all wrong. Yahoo! have been using Node.js for the last few years; in fact, early 2010 is around the time Yahoo! engineers started playing with Node.js, long before it was considered mainstream cool or really used in any high-profile environment at scale.

It is rare that a company the size of Yahoo! truly ever embraces moving at this kind of pace and embracing new open source technologies, languages, frameworks and libraries. Now that Yahoo! have openly declared their use of React on such a large scale, expect it to explode even more so in 2015. For an open source project that is a little over a year old, React is getting the kind of user-base and adoption that most open source projects can only dream of having.

This news excites me. I honestly cannot wait to see how it all turns out.

PS. I have noticed a few people in the comments section getting confused. Yahoo! Mail is NOT using React just yet. The current mail product is still using YUI and plain HTML/Javascript. If you read through, it mentions 2015...

dschiptsov 1 day ago 8 replies      
Why on Earth would people who aren't merely enthusiasts of "cool async JavaScript with V8", or who began as webdevs and know nothing better than JS and PHP, choose a single-threaded solution, which blocks the whole app if a single function blocks, and which forces programmers to write spaghetti wrappers around asynchronous callback hells in a non-functional but GC'd language?

I am really too stupid to get it. "Single language for a whole stack" is a pretty stupid Java-esque argument, a naive assumption that one single language is good for all kinds of tasks.

cageface 1 day ago 6 replies      
I've been a Javascript and Node skeptic for years now but I think the tide is finally turning. So much time and energy has been poured into making JS better that it's finally starting to pay off. Javascript with all the ES6 enhancements is really not a bad language. The runtime performance is already very good and continually getting better. Tools like Typescript and Flow make dealing with larger code bases much easier.

I don't think the other "scripting" languages like Ruby, Python, and PHP stand much of a chance against it in the longer term. They just don't have the resources to compete.

mygreetings 1 day ago 8 replies      
Little bit off topic, but is there anyone like me in the community who has a problem with liking JavaScript?

I have worked with JavaScript for years, but it was always for DOM manipulation. When it comes to building an app with JavaScript, I feel like it is too fragile to depend on.

Anyone can help me to get rid of this feeling?

remon 1 day ago 10 replies      
What an odd trend. First Netflix moves part of their infrastructure to Node (https://news.ycombinator.com/item?id=8631022) and now Yahoo is doing something similar. Node.js is great but I don't think huge enterprise systems for some of the largest brands in the world are necessarily the best fit. I wish they'd provide some insights on why they're making that particular move.
mncolinlee 1 day ago 3 replies      
I see a ton of discussion about the NodeJS decision and almost nothing about the more interesting industry paradigm shift to reactive programming and using functional-style programming in an imperative language.

They're joining the likes of Facebook, Netflix, Square, Microsoft, Instagram, Khan Academy, SoundCloud, Trello, New York Times, and others in adopting reactive extensions.

glifchits 1 day ago 4 replies      
Yahoo's React/Flux projects mentioned at the end are intriguing. I have tried implementing Flux architecture in an app and found it really tedious and boilerplatey. I'm still looking out for well-implemented Flux dispatcher libraries.
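For a sense of how little a Flux dispatcher fundamentally is (and why so much of the boilerplate lives in the stores around it), here's a toy version. Real implementations such as Facebook's add waitFor(), reentrancy guards, and deregistration; this only shows the shape:

```javascript
// Toy Flux dispatcher: every registered store callback hears every
// dispatched action, and stores decide which actions to ignore.
class Dispatcher {
  constructor() {
    this.callbacks = [];
  }
  register(callback) {
    this.callbacks.push(callback);
    return this.callbacks.length - 1;  // a token, as in the real API
  }
  dispatch(action) {
    for (const cb of this.callbacks) cb(action);
  }
}
```

Usage follows the pattern from the slides: stores register at startup, views fire actions through `dispatch`, and each store updates its own state in response.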




jameswragg 1 day ago 0 replies      
Here's the associated Yahoo Engineering blog post that goes with the slides.http://yahooeng.tumblr.com/post/101682875656/evolving-yahoo-...
bsimpson 1 day ago 2 replies      
I was hoping to see a conversation around the points addressed in the slides (e.g. how to handle async data in Flux). Instead, it's a bunch of neckbeards whining about people using Node.
untilHellbanned 1 day ago 0 replies      
I can't make arguments about Node's technical abilities, but I like that big companies are throwing more of their weight behind Javascript.

I think Javascript is fun and easy to use. That's good enough for me.

That it helps preserve balance with iOS and Android in the "software eating the world" conversation, is a bonus.

z3t4 1 day ago 0 replies      
With the Opera browser it takes over a second to load the GUI whenever I click something, and the overall design looks like it was made by someone building his/her first homepage. No wonder they decided to use something that encourages inline HTML where code and design get entangled like a pile of spaghetti, making it almost impossible to maintain.

In the last few years JavaScript has exploded with new frameworks and "compile to JS" languages. But I have yet to see anything close to usable. Maybe it's because I've become "speed blind" after coding JS for over 15 years. I do not see all the problems people see in JavaScript, until I look at code written by beginners who seem to use every framework out there, over-complicate the code, and name everything with one-letter variables and the names of their favorite pizzas.

dyeje 1 day ago 2 replies      
It's funny that this showed up on the front page, because yesterday I logged into an old Yahoo email account for the first time in a few years and was absolutely astounded at how awful the UI was.
fideloper 1 day ago 1 reply      
The exciting part of this for me would be around improvements in Yahoo: removing old cruft code. There's a lot of strangeness and/or flat-out errors I run into on Yahoo regularly. Any improvement in that regard could be a big benefit.

On the topic of JavaScript: I love Node as glue. For larger applications, I just don't know how to structure (architect) a large application in a language like JS. Maybe that's just my ignorance.

ownedthx 1 day ago 0 replies      
Moving to node can be in part a decision to attract fresh talent, as well as keep the current team interested and motivated.
colinramsay 1 day ago 0 replies      
Yahoo seems to be championing React more than Facebook! With the flux-router-component they are trying to solve something that FB hasn't really addressed and it shows that they're actively trying to share the solutions to practical problems people come across when working with Flux.
tunesmith 1 day ago 1 reply      
Just curious, what are the reasons for 'isomorphic' apps these days? SEO used to be the big reason, but since Google and Bing can now render most JavaScript, and since you can use pushState for SEO, that's not really a sufficient reason anymore.
vamur 1 day ago 1 reply      
I wonder if the move is the reason Yahoo Mail has become even slower than before.
ComNik 1 day ago 1 reply      
Is finding Clojure people that hard (honest question)?

The Clojure + ClojureScript approach has many more batteries included. You get all the benefits of reusing the codebase on both sides of the fence while the language is solving the "Transactional Store", efficient dirty checks, nice server-side concurrency primitives and many unrelated problems for you.

somefoobar 1 day ago 1 reply      
Anyone have more insight on why TJ Holowaychuk left Node?


Anyone experience the same?

adamors 1 day ago 0 replies      
Considering how awful Yahoo Mail has become, I guess they're just throwing things at the wall and seeing what sticks.
wildpeaks 1 day ago 0 replies      
Yahoo also had slides about Isomorphic Flux / Dispatchr two weeks ago if you want more infos on how they're using React:https://speakerdeck.com/mridgway/isomorphic-flux
bitL 1 day ago 0 replies      
I find it funny as Yahoo has Netty which is far better performing and more capable than node.js...
dingdingdang 1 day ago 0 replies      
Node is a weird choice for Yahoo and Netflix; at least it might see the base Node package improved significantly, which is always nice :
serve_yay 1 day ago 0 replies      
"The age of large platform libraries is over."

Boy oh boy do I wish that were the case.

mhd 1 day ago 0 replies      
But it's still going to be fake desktop app, right? It seems Google is the only one to buck this trend.
callahad 1 day ago 3 replies      
Slide 11: even Yahoo's Engineering Manager for Mail crops the ads out of their own product screenshots. :(
__mrwhite__ 1 day ago 0 replies      
Would be interesting to see some performance benchmarks in comparison to previous iterations.
matobago 1 day ago 0 replies      
React is not even stable yet...
svs 1 day ago 0 replies      
Since when has what Yahoo Mail does become relevant again?
notastartup 1 day ago 0 replies      
Do you guys think that we are now moving into a new trend on the client side now? React + Flux vs. Angularjs, Backbone.js?

It seems that every job, even backend positions, requires Angularjs or Backbone.js knowledge. Having largely ignored the two and hoped they would die, I am now ready to learn React + Flux to accelerate this cause.

peterwwillis 1 day ago 0 replies      
Welp, time to find a more stable/sane mail host.
One guy's experience with programming
225 points by stephen_hazel  1 day ago   38 comments top 10
stephen_hazel 21 hours ago 8 replies      
glad yall found it interesting!

side note: if anyone has any ideas about finding beta testers for my baby (erm, pianocheetah) I'd be glad to hear 'em. Can't find beta testers to save my life.

Overall, I'd say I'm pretty freakin lucky. Spoiled even. Thanks for the nice replies.

lucio 1 day ago 2 replies      
I've also had a Spectra, and after that an 88-weighted-key Alesis QS8. Had the ZX Spectrum (I'm still using n as the default loop variable) and then got the marvelous Commodore 128. Good old days. I was born in '71.
digitalzombie 20 hours ago 1 reply      
> Kathy turned out to be a jerk.

I saw that one coming. She left him a few times, trying to get back with her ex.

Not good at all.

But overall, the dude seems well, and surprisingly his Oracle skills are the ones that turned out most useful...

tomcam 21 hours ago 0 replies      
Beautiful story in so many ways. Thank you. Lots of similarities here, though I was self-taught & ended up (very, very happily) at Microsoft for a few years. Your stories of family were enormously powerful & bittersweet. What a ride.
thorin 16 hours ago 1 reply      
Great story. I went Vic20->Spectrum->Amiga 500->windows PC->Linux.

Given that I studied electronic engineering, I also expected to be doing something low level, but ended up doing enterprise Oracle too for most of my career.

jayvanguard 13 hours ago 1 reply      
Great article. I love this style of free flowing retrospective with pictures of old technology. The Radio Shack 160-in-one electronics kit was awesome.
ronyeh 1 day ago 1 reply      
Nice story, Steve. It's cool to look back over a career / life and think about what you've done and where you're going still. I think I'm about 15 yrs from where you're at... also a fan of music, although due to coding and career and kid, I end up not doing any music practice. :-| But I code music-related apps! Hehe.
banku_brougham 23 hours ago 1 reply      
I enjoyed it Steve. Shared a lot of the same early experiences I think. Tape drives!
BorisMelnik 9 hours ago 0 replies      
Very cool. I think this hit at just the right time, when I am semi-nostalgic on Thanksgiving and tend to reflect on my own past with C64s, TRS-80s, etc.
shaurz 13 hours ago 3 replies      
Dude, never date single mothers.
After Threatening Hacker with 440 Years, Prosecutors Settle for a Misdemeanor
236 points by InternetGiant  1 day ago   113 comments top 24
tptacek 1 day ago 7 replies      


Most importantly: when you are charged with many counts of the same crime, the DOJ likes to write press releases suggesting your sentence is the product of the maximum sentence of each count. But that is not how sentencing works in reality. Reality works more like this: the judge uses sentencing guidelines to figure out a sentence for the "worst" single count you're charged with, and that's how much time you serve.

will_brown 1 day ago 2 replies      
The problem here is that the CFAA - especially the way it is being interpreted as of late - is a relatively new and undefined body of law. Therefore, there is not much case law on point.

As a result you have prosecutors doing what they do best, throwing everything at the wall and seeing what sticks. The problem with throwing everything at the wall is that defendants are more inclined to accept a reasonable plea rather than face the unknown.

For example, is it reasonable to charge someone with a count under the CFAA for each instance they try to access a system without permission? You can try to adapt existing case law from unrelated crimes, such as attempted murder: should someone be charged with a new count of attempted murder for each bullet shot at a given victim, or just a single count regardless of the number of shots fired? What if there are 2 potential victims the defendant wanted dead but only 1 bullet was shot near both of them? Is it reasonable to charge a count under the CFAA for running vulnerability-scanning software on a website? Is it reasonable to charge someone with breaking and entering if the defendant simply checks whether doors or windows are unlocked, and should we file an additional count for each door/window that was checked, or a new count for each time a door knob was turned?

These are all issues that are ripe for the courts to decide, but it will take a very long time before you have a defendant willing to take the risk. What is really troublesome is that in the meantime there are cases such as this where 44 felony counts can be reduced to a single misdemeanor in exchange for a change of plea. The fact that a prosecutor was willing to offer a deal like this means the original charges were improper even in the eyes of the prosecutor.

Expect things to only get worse in terms of prosecutorial discretion vis-a-vis charges under CFAA in the foreseeable future.

mbreedlove 1 day ago 2 replies      
"Eighteen of the 44 counts in Salinas indictment, for instance, were for cyberstalking an unnamed victim. But each of those charges was based on Salinas merely filling out a public contact form on the victims website with junk text. Every time he clicked submit had been counted as a separate case of cyberstalking."

This is obscene... There seems to be an utter lack of understanding by the prosecutors handling these cases.

hessenwolf 1 day ago 1 reply      
Plea-bargaining goes against the foundation of innocent until proven guilty. It puts the requirement on the accused to know for certain that they can prove their innocence, to avoid the ludicrous sentence.

That is, there may be a motivation for pleading guilty even when you are not.

talmand 1 day ago 1 reply      
A prosecutor using vaguely written laws to pile on charges, making the defense feel overwhelmed and forcing them into a plea deal for a lesser, but actually more accurate, charge? This is nothing new; it's been done for generations.

The sad thing is, this only truly affects the non-career criminals in our society. Career criminals, that these tactics are supposed to be for, will laugh in a prosecutor's face for suggesting such stupidity. It's how you give non-career criminals a new career option so they can laugh later.

andrewtbham 1 day ago 3 replies      
I believe there should be a movement to pass a law to stop all plea bargaining, and require all criminal proceedings to go to trial. It would force the criminal justice system to only pursue solid cases, prioritize for the worst crimes, and reduce our bloated prison system, and restore our right to a fair trial.
anigbrowl 1 day ago 0 replies      
After all, Ekeland argues, Salinas has already been pilloried in the local and national press, which touted the early charges against him, but ignored the fact that they were dropped.

This is what incentivizes the prosecutors to a large extent. It's a political office (even in the case of US attorneys who are appointees, they're appointed by the administration, and that job is often a stepping stone towards running for a state AG job or some other political office) and the sad fact is that in many parts of the country there are more people who want to throw the book at people they perceive as criminals than there are people concerned with proportionality or preserving the rights of defendants. In fact, most people are complete hypocrites about legal process and will cheerfully make completely opposite arguments depending on who is in the hot seat and why.

So there's a clear incentive for prosecutors to paint anyone they catch as some Moriarty-like crime lord and of course that makes great news copy - big number, cooperative prosecutor, astonished neighbors saying they never realized they were living next to a crime lord, all heavily edited for maximum emotional impact within the tight constraints of the 'Action News' format (http://en.wikipedia.org/wiki/Action_News - the reason local TV news in the US is so awful is because it's manufactured on a template rather than crafted in response to the facts of the story).

And of course, there's no requirement to report the much less interesting (to most people) outcome of someone having their charges downgraded to a few months in jail and a fine. Because of the first amendment it's difficult for defendants to keep their name out of the media pending the outcome of a trial (whenever courts put anything under seal, news organizations tend to file suit to gain access while mouthing platitudes about 'the public's right to know') and there's no way to compel the media to give equally prominent coverage to defendants who are acquitted, exonerated, have charges downgraded and so on.

goatforce5 1 day ago 0 replies      
What's a greater form of harassment?

a) Submitting garbage text via a Contact Us form up to (and including) 18 times, or

b) Threatening someone with 180 years in jail for those messages and then settling for a $10,000 fine?

Answers via my Contact Us form please!

Amorymeltzer 1 day ago 0 replies      
The power that prosecutors have is unbelievable, especially when it comes to using the plea bargain. Here's a fantastic look at the history of plea bargains and how they get used to bully people into pleading guilty when they aren't.


peter303 1 day ago 1 reply      
I wish Aaron Swartz had realized this. Prosecutors like to bluster and pile on charges. In the end the bargain is more reasonable. Professor Lessig said as much in the Swartz biopic earlier this year. Such a loss of talent.
tomiko_nakamura 1 day ago 0 replies      
This is the way prosecution works in the US - charging with heaps of bullshit felonies with the aim to scare the defendant, forcing him to plead guilty in exchange for minimum sentence. The defendants have to consider the risk that some of the felonies might stick (e.g. because of general ignorance of people to technology), and the expenses for the defence. And many actually choose to plead guilty despite being innocent ...

Just look at the percentage of "pleaded guilty" cases, that completely bypass the judicial system. The prosecutors can claim how they convicted another dangerous haxxxor, the general public applauds and the popularity helps them eventually get into Congress, important post or whatever.

This is not really all that different from how patent trolls work - they usually require payments that are slightly lower than the expected cost of defence (which may or may not be successful, and you'll have to pay for it no matter what the outcome is). So most companies do the math and simply pay to make them go away.

Also, it's exactly the issue that killed Aaron Swartz ...

kazinator 1 day ago 0 replies      
In Canada, you can kill someone and only get six years. You can easily find cases of this by searching CBC news stories for murder and "six years".

For instance, a few years ago, some dude in Alberta killed a foreign worker: a welder from Thailand. That guy's life was worth six years in jail.


I'm also appalled that someone who fills a form with garbage and clicks Submit is even called a "hacker", let alone charged with anything.

ZoFreX 1 day ago 0 replies      
> We've got enough on you right now to put you away for the rest of your life, plus 30 years

> Plus 30 years? That doesn't make any sense. Why not give me life plus a thousand years?

> Keep pushing.

- Dilbert S02E12 "The Virtual Employee"

spacemanmatt 1 day ago 0 replies      
When a serious legal topic comes up here on HN, I start missing Groklaw again.
wtf_us_ 1 day ago 0 replies      
when a prosecutor oversteps their authority like this, they should be punished in some way.
charonn0 1 day ago 0 replies      
The use of kitchen-sink charges and draconian sentences to coerce confessions has all the same moral and practical difficulties as the use of torture for the same ends.
jldugger 1 day ago 0 replies      
> "If filling a website submission form a lot of times is cyberstalking, about half of Twitter is going to jail," Ekeland says.

We can only hope!

steffenfrost 1 day ago 0 replies      
If you want to commit crimes with impunity, become a banker. Otherwise, you're just needed fodder for the corporate prison system.
Yadi 1 day ago 0 replies      
Ouch that is a lot of years! Kids don't do hacks.
bmmayer1 1 day ago 0 replies      
Salinas' defense attorney is named "Tor Ekeland."
pasbesoin 1 day ago 0 replies      
At some point, this abuse must cross some line which I will, and perhaps the law should, define as extortion.

If there is no consistency, comprehensibility, or predictability to the law, it is no longer law. It is merely capricious authoritarian behavior.

notastartup 1 day ago 0 replies      
what is the point of such a long sentencing? 440 years?
MatthewMcDonald 1 day ago 2 replies      
While I agree that the amount of discretion is too high, Darren Wilson was not "let go" by the prosecution; the grand jury decided that there was not enough evidence to indict him.
eyeareque 1 day ago 1 reply      
The article states that he was scanning the website for vulnerabilities. He wanted to do harm. My assumption is that he was looking for exploits, perhaps an XSS in the comments section (filling out comments with junk text), or he was just trying to DoS the site.

Had he found a vulnerability in the site, what do you think he would have done? He doesn't seem to be a white hat, but does the county have a vulnerability reporting policy? (my guess is no.)

I equate what he did with a burglar snooping around a house and checking for an open door or window to break in.

I think the laws they used were wrong, but it appears he was up to no good.

HSBC, Goldman Rigged Metals Prices for Years, Suit Says
229 points by MichaelCORS  1 day ago   122 comments top 16
acjohnson55 1 day ago 7 replies      
This is fucking ridiculous. Scandal after scandal. How long are we going to put up with this?

In America, the political climate disgusts me. When it comes to the poor and the front-line workers, we're all about the strictest forms of accountability. Drug tests for basic aid. Punitive oversight of educators. But our richest, most powerful institutions are robbing us blind on the regular and getting away with it.

Dwolb 1 day ago 2 replies      
It sounds like the actions went something like this: Modern Settings places a large order for 100k oz platinum from BASF. BASF buys mined platinum stock at price Y and takes 1 month to produce the refined platinum order. BASF doesn't want to be exposed to price movements with platinum that swing +/-20%. (if the price of platinum drops, BASF doesn't want to make thinner margins on a product they've already produced)

Therefore, BASF calls up their friendly bankers, Goldman and HSBC, to reduce BASF's exposure to platinum price swings through derivatives. BASF hears both banks' proposals over respective dinners. The conversations between the teams cover difficulties with the business, the client that BASF is dealing with, and what could be done to make everyone's job a little easier.

The relationships get cozy over time and BASF starts an open flow of information about who's purchasing what metals so Goldman and HSBC can 'just handle' the derivatives BASF should buy. Unscrupulously, the banks start trading on the information about when large orders will be placed on which metals markets, and eventually get caught insider trading.
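The hedge described above can be sketched with a toy calculation. All numbers here are hypothetical (the article gives no prices); the point is only to show why BASF, holding inventory bought at spot price Y, would want a short forward at Y to cancel out the +/-20% swings:

```python
# Toy sketch of the hedge described above, with made-up numbers.
# BASF holds 100k oz of platinum bought at spot price Y and shorts a
# forward at Y, so the forward's gain/loss offsets the inventory's.

OUNCES = 100_000
COST_BASIS = 1_200.0  # hypothetical purchase price Y, in USD/oz

def pnl(spot_at_delivery: float, hedged: bool) -> float:
    """P&L on the inventory, optionally offset by a short forward at Y."""
    inventory = (spot_at_delivery - COST_BASIS) * OUNCES
    forward = (COST_BASIS - spot_at_delivery) * OUNCES if hedged else 0.0
    return inventory + forward

for move in (-0.20, 0.0, 0.20):  # the +/-20% swings mentioned above
    spot = COST_BASIS * (1 + move)
    print(f"spot {spot:8.2f}: unhedged {pnl(spot, False):+14,.0f}  "
          f"hedged {pnl(spot, True):+6,.0f}")
```

Unhedged, a 20% drop costs BASF $24M on this order; with the forward, the P&L is zero either way, which is exactly the exposure the banks are being paid to absorb.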

bkeroack 1 day ago 1 reply      
This is a great example of what Nassim Taleb would call "lack of skin in the game"[1].

It's a wonder that we don't see more of these crimes, given that the rewards (if they don't get caught) accrue to the parties responsible, while the punishments (if caught) are suffered almost exclusively by everybody else. The people who made the criminal decisions do not suffer when the corporation is fined/sanctioned; it's shareholders, innocent employees and society at large who do. In other words, there's very little disincentive against this behavior.

What we really should consider is adding personal criminal liability to corporate officers who are found to either commit or condone financial crimes, as far up in the call stack as it can be proven.

1. http://www.amazon.com/Antifragile-Things-That-Disorder-Incer...

jim_greco 1 day ago 1 reply      
> According to the complaint, the four companies participated in twice-daily conference calls to set global price benchmarks for platinum and palladium, which also affected derivative products based on the precious metals.

I was a bond trader so I don't know the specifics of the platinum and palladium markets, but my guess is they are pretty illiquid. A significant portion is likely traded off-exchange, so discovery of the true value of the commodity is a huge issue. It sounds like these four companies see a lot of flow in one form or another and thus have a better idea of what the true market price is. They also probably have significant counterparty exposure to each other through derivative contracts, so they meet twice daily to compare notes and settle on a price to adjust margin, etc. It doesn't sound very nefarious. It just sounds like a bunch of market participants trying to figure out the right price. And of course they use this information to make money for themselves - that's how a market making operation works.

Now you could make the argument that regulation should push more of this type of trading onto an anonymous exchange where price discovery is available to everyone, which would erode the information advantage that insiders have. The government pushed that with swaps, to varying degrees of success. Hopefully they continue to do more of it.

datashovel 1 day ago 0 replies      
It's unfortunate, but these days when I hear about rapid price fluctuations my "knee jerk" reaction is:

1) Who HAS the market in 'X' cornered, and is working to drive prices up so they can unload their shares.

2) Who is WORKING to corner the market in 'X' by driving prices down.

The fortunate thing (I think) is that removing barriers from access to data will eventually make schemes like this all but extinct. It will no longer be how rich people get richer. It will be how crooks end up in jail.

pfortuny 1 day ago 1 reply      
These years are a boon for financial education: we are getting to learn everything: derivatives (including swaps, by the way), forex, Libor, the meaning of a Ponzi scheme (thanks to Madoff), insider trading, and now front running.

I say: these years will be taught in law schools and in finance masters for ever. History in the making.

adwf 1 day ago 3 replies      
I wonder whether this is just a speculative suit based on past performance with the LIBOR scandal, or whether they actually have some sort of smoking gun. The article doesn't really mention either way.
noipv4 1 day ago 2 replies      
There's something 'regular citizens' wrestle with that the elites never seem to: a sense of moral duty.
amalag 1 day ago 2 replies      
This doesn't even mention the issue with them delaying aluminum deliveries to raise those prices.
klunger 1 day ago 0 replies      
It's not just precious metals. Remember the aluminum price manipulation from these guys last year? http://thedailyshow.cc.com/videos/aa0rnb/john-oliver-s-arcan...
spacemanmatt 1 day ago 2 replies      
Shocked, I say. Shocked.
malloreon 1 day ago 0 replies      
Take every single penny back and 10 times more. Put everyone involved in prison.
senthil_rajasek 1 day ago 0 replies      
This is a lawsuit. It's unclear to me what's "unlawful" in this case. Did they violate any regulation of the market for these metals?
cryoshon 1 day ago 0 replies      
What kind of crazy hijinks are the banks going to get into next?

I expect no change from any elected official, even though my elected official is Liz Warren...

Every time some huge scandal like this is broken (LIBOR, aluminum fixing, mortgage default swaps) I openly wonder how long it's going to be before people start wrecking things out of anger.

known 1 day ago 6 replies      
Legalize insider trading. You can't prevent it.
arca_vorago 1 day ago 0 replies      
If the SEC weren't yellow-bellied cowards they might actually do something about this. Too bad regulatory capture and good old bribery and corruption are rampant.