Hacker News with inline top comments, 26 Jan 2016, Best
Amazon's customer service backdoor medium.com
1385 points by grapehut   ago   344 comments top 47
danneu 1 day ago 17 replies      
Whois is great for social engineering attackers. You get a name, email, address, and the first service to attack.

Meanwhile, ICANN is working around the clock to make it illegal for us to protect our personal information, and whois protection is becoming an increasingly niche service for registrars.

For example, gandi.net (and thus Amazon) doesn't hide your name when you have it turned on. By the time you find this out, it might occur to you to just type in a different name, but now you're violating ICANN policy. And it's already been scraped by any of those whois history websites.

grapehut 1 day ago 2 replies      
Worth checking out: someone reproduced the attack, using a fake address to get a real address.


(contains pretty great screencaptures)

_Codemonkeyism 20 hours ago 2 replies      
Amazon does not care. A fraudster used our startup's bank account to pay at Amazon. We told them, but they did not blacklist the user from using our account or take any action besides removing the bank account (ours) from his Amazon account.

The fraudster did this at least 3 times with increasing amounts of money. Amazon did not care. Only when we went to the police did this stop.

Amazon sold me a phone, and the box arrived empty (I wonder why they don't check the weight when it leaves their warehouse; DHL printed a weight on the box that was less than the phone alone). It took Amazon support months to solve this; in particular, they could not or would not cancel the attached mobile phone contract for months.

mrb 1 day ago 7 replies      
How to stop this:

1. Get a friend's permission to "hack" into his Amazon account (or "hack your own account").

2. Contact Amazon's customer service, try the same social engineering techniques that the OP documented.

3. Once you obtain some sensitive information from the account, scare the CS rep by saying: "Haha! I am actually not the customer. I am a journalist/hacker/whatever and wanted to see how easy it was to social engineer information out of your customer service department, and you failed. I would like to talk to your manager please."

Hopefully if enough people do this, it will get some internal attention at Amazon.

nmjohn 1 day ago 6 replies      
> services should allow me to easily create lots of aliases. Right now the best defense against social engineering seems to be my fastmail account which allows me to create 1 email address alias per service

What you may want is a catch-all email, which lets you do *@domain.com -> nmjohn@domain.com (where * matches everything besides already-defined addresses). That way you can make up emails on the fly without having to set up the alias beforehand.

I've had that setup for 5 or 6 years now, and it works extremely well. A handy side effect is that it makes it easy to see which companies sell your email address to spammers, since you include the name of the original company in the email you register with.
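The catch-all itself lives at the mail server (or in a hosted provider's control panel). As a rough sketch only, assuming a self-hosted Postfix server and placeholder domain/mailbox names, the virtual alias table might look like:

```
# /etc/postfix/virtual (hypothetical domain and mailbox)
# Exact-address entries are matched before the @domain catch-all,
# so already-defined addresses keep working as before.
postmaster@example.com   nmjohn@example.com
@example.com             nmjohn@example.com
```

After editing, run `postmap /etc/postfix/virtual` and make sure `virtual_alias_maps` in main.cf points at the table. Hosted providers (Fastmail, Google Workspace, etc.) expose the same idea as a "catch-all" setting, no server required.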

incarnate 1 day ago 1 reply      
This is exactly the same thing that let someone delete Mat Honan's (Wired author) accounts back in 2012:

Apple tech support gave the hackers access to my iCloud account. Amazon tech support gave them the ability to see a piece of information (a partial credit card number) that Apple used to release information.


paulcole 1 day ago 6 replies      
"The problem is, 9999 times out of 10000 support requests are legitimate, so agents get trained to assume they're legitimate. But in the 1 case they're not, you can completely fuck someone over."

That's why nothing will change if these estimates are even in the right universe. Nobody wants to inconvenience the vast majority of customers to prevent a minuscule number of issues.

AndrewUnmuted 14 hours ago 1 reply      
I worked for Amazon for four years. For nearly the entire time I worked there, I, as an engineer, had access to every customer's purchase history, contact information, email addresses, etc. The reason? On occasion, I'd need to get a user's email address to reach out to them if they reported bugs. The one service that offers employees this access is all or nothing. Either you get to see a customer's email, credit card number, and purchase history - or you get to see nothing at all.

Everyone knew that I had this access, and everyone knew that it was against Amazon's own policy to give me access. But to them, that was easier than fixing the service so that it was more useful.

Perhaps I'm just clueless, but something tells me that any relevant competitor to Amazon - say, I don't know, Google - would choose to fix the service instead.

Camillo 1 day ago 0 replies      
Yes, Amazon is doing it wrong. But the much bigger problem is that your bank lets fraudsters impersonate you using easily obtained information such as your name and address. It is completely backwards that you need an impenetrable wall and moat around the place where you buy books and groceries when, once you get past it, the place where you store all your money and get your mortgage is as easy to penetrate as a piece of tissue paper. The root cause of all identity theft is the incredibly lax security policies of the financial system.
turar 1 day ago 6 replies      
If you own a home in the U.S., anybody already can get your address legally and easily from your county or district property appraiser's/assessor's website. Along with how much you paid for it, and when you bought it. So calling Amazon CS rep is a hard way to go about it. :)
leeleelee 14 hours ago 0 replies      
I think the best solution, for now, is to just regularly check your full credit report for anything you don't recognize and watch your credit card, debit card statements for any purchases you don't recognize.

I've had credit cards get compromised in the past, and it was actually quite painless to have my bank (Chase) shut the card down and issue a new one.

Your information can be stolen from SO MANY sources and not just Amazon customer service. It's impossible to guarantee who sees any of your personal information once you share it with ANYONE on the internet (Amazon, Google, some random retailer, domain registrar, etc.).

The server at your local Applebees could steal your CC info.

Be sensible with where you share personal information, but don't be unreasonable. It's safe to use Amazon.

Just watch your credit report (regardless of whether you feel you're at high risk) and bank statements.

If/when a problem arises, then deal with it.

dannysu 1 day ago 0 replies      
Reading through this thread, I've now taken action to use a unique email for important accounts. I was already using [name of service]@[some other domain I only use for email].com. However, I just changed now to [random # and chars]@domain.com.

An additional thing I'm doing is reviewing what accounts have my credit card. One of the things I like about my Bank of America credit card is that I can use their ShopSafe feature to generate a card number for specific accounts.

So if I'm buying a transit pass on a website probably made by incompetent people, I generate a new credit card number and use it one time. Same with doctors who want me to write my credit card info on a piece of paper and mail it back to them.

adarsh_thampy 1 day ago 1 reply      
As someone who has trained customer support agents, I can attest to the fact that most agents have to be taught every scenario. If it slightly deviates from the one they have been trained on, they are clueless.

Not saying all customer support people are like this, but the majority are. They rely on pre-written scripts: when a question is asked, they search for the matching template question and its answer.

okigan 1 day ago 3 replies      
Any recommendations on what one (as a customer of Amazon) can do today?

2FA does not help here, since the attacker goes through the support channel, which appears to bypass 2FA.

Also concerned whether the same trick can be applied to Amazon's cloud services, since there one can also run up a big bill pretty quickly.

rplnt 18 hours ago 0 replies      
> migrating as much to Google services which seem significantly more robust at stopping these attacks.

Because they don't have customer support?

xenadu02 1 day ago 0 replies      
The vast majority of services use email address to identify you so diversifying your email addresses helps a lot. I've known about every hack/info leak ahead of everyone else for that reason - I use a unique email for every service.

I also use different cards for the major online retailers / tech giants so knowing the last four digits from my Amazon account is useless to validate anything else (though this does require having several credit cards or debit cards).

Whois privacy is absolutely required.

Unfortunately if someone is determined enough, almost all ISPs, cell companies, retailers, etc will happily give them control of your entire digital life. You can only minimize the risk somewhat.

bobby_9x 1 day ago 0 replies      
It's interesting how easy it is to do something like this, yet legitimate third-party sellers can't even talk to a live customer support rep when their account is suspended.
rogeryu 19 hours ago 0 replies      
We had our AWS account hijacked three years ago. Someone had taken over our admin email by hijacking the DNS. They had hacked into our DNS account (with another provider) and changed the MX for our domain. Then they contacted customer support and convinced them to disable two factor authentication. Then they started to play with our account, starting and stopping servers.

Taking back the DNS took time. Meanwhile the hijackers were logged in, and could not be logged out by Amazon. This took more than a day. It took us two full days to get all back to normal.

The good thing is that they could not log in to our servers. What they wanted is still not clear, nor who did it; we saw some suspicious traffic from Russia, but that's all.

vjvj 20 hours ago 0 replies      
Wow. I had a similar experience with Skype too. They couldn't care less that someone had got access to my account and made calls. The attacker even added his own mobile number (in a different country) but Skype wouldn't bother investigating or escalating...
tommoor 1 day ago 1 reply      
Damn, lucky they send out emails after a customer service interaction or you'd have never had any idea this even took place.
Animats 20 hours ago 0 replies      
This is Amazon's problem for using street address (!) as a password. If there's an authentication issue, they should at least email or call or SMS you.
fredwu 17 hours ago 1 reply      
It is rather unfortunate yet at the same time unsurprising. :(

Two years ago I found out that Amazon allows multiple accounts to be set up using the same email address with different passwords (!!!) - which means that the potential attack vector is larger for no good reason.

I don't recall how this happened but I can only assume at the time I signed up to AWS and I might have reset/changed the password somehow that resulted in the system creating another copy of my account.

So all the information (credit cards, addresses, etc.) of the "old" account still existed until I deleted it. But say someone has no idea that they have more than one account with Amazon: they could easily leave their information intact in their "old" accounts, which, if protected by weak passwords, can easily be compromised.

Unfortunately Amazon did not take this report seriously, and to this very day this issue still persists.

krampian 15 hours ago 0 replies      
I'm concerned that names and addresses alone seem to be enough for these guys to do meaningful ID theft with. The phone book is full of names and addresses anyone can get their hands on easily. Even these guys in India - there's whitepages.com. Not sure why they're going to all the trouble of trying to game Amazon's customer support.

On that note, I order a lot from Amazon and throw out their boxes in the trash outside all the time. Sometimes I notice that neighbors (presumably) take those boxes for their own use before trash pickup comes along. All of them have my name and mailing address on them...

ikeboy 1 day ago 0 replies      
>Email services should allow me to easily create lots of aliases

I use Blur from Abine.com; it gives me a new email that forwards to my main one, as many as I want, integrated with a browser plugin that barely adds time to signup.

ommunist 16 hours ago 1 reply      
Same sh#t happens with Apple Support all the time, and has for a few years in a row. Someone was after my last 4 digits, requesting password resets to my Apple ID, like 14 times a day, and then impersonating me, talking to support.
Nemant 19 hours ago 0 replies      
Somebody should try getting Jeff Bezos's address. I tried a couple of times and failed.
free2rhyme214 1 day ago 3 replies      
Someone hacked my Amazon account once. I'm surprised they don't have 2-step verification.
dannysu 1 day ago 1 reply      
There's also no way to separate an AWS account from an Amazon account, it seems: https://forums.aws.amazon.com/thread.jspa?threadID=85882

This is really bad. The security implications are different between the two.

jasonkostempski 1 day ago 1 reply      
Couldn't customer service just treat all sensitive information like they treated the last 4 digits of the CC in this scenario? Verify only, reveal nothing. I'm sure almost all legitimate customers don't have even 5 possible addresses they may have shipped to; make them say what they think it is.
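The verify-only idea above is easy to illustrate. Here is a minimal Python sketch of what a support tool could expose; the function name and normalization rules are assumptions for illustration, not anything Amazon actually does:

```python
def verify_address_claim(claimed: str, addresses_on_file: list[str]) -> bool:
    """Return only True/False; the agent's tool never displays stored values."""
    def norm(s: str) -> str:
        # Case-fold and collapse punctuation/whitespace so minor
        # formatting differences don't fail a legitimate customer.
        return " ".join(s.lower().replace(",", " ").replace(".", " ").split())
    return norm(claimed) in {norm(a) for a in addresses_on_file}
```

The point is the interface: the tool answers a yes/no question, so a social engineer learns nothing beyond whether their guess was right.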
anindyabd 1 day ago 2 replies      
The OP says he is "a security conscious user who follows the best practices like: using unique passwords, 2FA, only using a secure computer and being able to spot phishing attacks from a mile away..." yet I do not think he enabled 2FA on Amazon.com. If he had, customer service would not have helped the hacker pretending to be him. As their help page says, "If you need help from Customer Service after enabling Two-Step Verification, you'll need to provide a security code similar to when trying to sign in to your account." https://www.amazon.com/gp/help/customer/display.html?nodeId=...
ninjakeyboard 1 day ago 0 replies      
People will forever be the weakest link in a system's security.
nicksuperb 1 day ago 0 replies      
I've been using my local USPS PO box for domain registration for the past few years. It pays for itself when you figure in the cost of add-on services from registrars. I also use it as an address on similar in-person sign-up forms. Aliasing services like Fastmail's are also a solid part of the equation; use them as much and in as many places as possible. You could also try an IRL alias-type hack by giving out a slightly different name (middle name, title, etc.) when filling out your address.
BaNzounet 17 hours ago 0 replies      
A possible solution to stop people finding out the email you're using for a given service is to put a random word or phrase in your email address.

e.g. email+ifidontknowthisthisisnotme@youremail.com

Not sure how an agent would react to someone having part of the correct email, though.
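As a quick sketch (assuming a provider that supports plus-addressing, like Gmail or Fastmail), a random tag can be generated per signup so it can't be guessed from the service name:

```python
import secrets

def service_alias(user: str, domain: str) -> str:
    """Build a plus-addressed alias with a random, unguessable tag."""
    tag = secrets.token_hex(4)  # 8 hex characters, e.g. 'a3f91c02'
    return f"{user}+{tag}@{domain}"
```

You'd record the tag alongside the service in a password manager; mail to any `user+tag@domain` address still lands in the `user` inbox.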

joncp 23 hours ago 0 replies      
Working in the same neighborhood as Amazon's new headquarters, I've become convinced that not all is well with their security. All those blue badges with their employees' full names, dangling from their belts while they're in line at the local food trucks, are a social engineer's dream come true. Expect to see some high-profile breaches.
RainManDetroit 1 day ago 0 replies      
Simply create an LLC and have it manager-managed, as opposed to member-managed. As long as you do either no business or legitimate business, the owner's (member's) info is protected, and the whois record will list your registered agent and their office address as the site owner.
ubersync 1 day ago 1 reply      
If anyone wants to start a fund to sue Amazon for this, I am ready to pitch in $100.
gravypod 1 day ago 0 replies      
I am VERY interested in what you mentioned about fastmail. That seems like an amazing idea. I have never thought about it.

I think I need to make a script that can do that for me. A simple mail server to forward emails both ways.

yomly 18 hours ago 0 replies      
Someone's life in Amazon is about to become a world of pain. This is not going to be a fun Jeff B escalation...
pfarnsworth 1 day ago 1 reply      
The problem is Amazon has thousands of poorly trained first-level support staff with far too much power and information.

What we need is a global security standard for support staff, with a template as to what information is accessible by staff and what isn't. And what is available to better trained 2nd-level support, etc.

And then each company can say they are certified for this particular security standard, so you can't get social engineering attacks where you attack one large corporation, get partial information, and then feed that into another large organization to get other information. This was done previously using Amazon, again, to get enough information to take over someone's Twitter account, if I remember correctly.

purpled_haze 1 day ago 0 replies      
And now that this is public, we're all at more risk.
aaronbrethorst 1 day ago 0 replies      
"A chain is only as strong as its weakest link."
ronyeh 1 day ago 3 replies      
On your Amazon home page, go to:

Your Account > Change Account Settings > Advanced Security Settings

Turn on 2-step Verification.

It won't completely solve social engineering, but it can't hurt.

hagmonk 1 day ago 1 reply      
Why didn't the OP turn on two step verification? Amazon does support this.
facepalm 18 hours ago 0 replies      
I find it a bit weird that address and even credit card number are confidential information. Credit card numbers are not really secret, you hand them out to random waiters in random restaurants. Maybe part of the fault lies with the other companies who accept that information as ID?
EGreg 20 hours ago 0 replies      
This article has taught me a valuable lesson: I should be using the email+suffix@gmail.com feature for each service I sign up for. Seems like an easy enough change.

Ideally, the suffix would be some non-obvious function of the service name, which I can remember easily. Like taking the second letter of the service name and relating it to an object I encounter a lot in my life.
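A deterministic, non-obvious suffix can also be derived mechanically. Here is one possible sketch, using an HMAC of the service name under a personal secret (the user, domain, and secret below are placeholders, and this is one design among many, not a standard scheme):

```python
import hashlib
import hmac

def suffixed_address(user: str, domain: str, service: str, secret: str) -> str:
    # Derive a short tag from the service name keyed by a personal secret,
    # so the suffix is reproducible by you but unguessable by an attacker.
    tag = hmac.new(secret.encode(), service.lower().encode(),
                   hashlib.sha256).hexdigest()[:8]
    return f"{user}+{tag}@{domain}"
```

Unlike a mental rule, this can't be reverse-engineered from one leaked alias; the trade-off is that you need the secret (or the script) handy to recompute a suffix.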

muppetman 1 day ago 0 replies      
I read this as a guy who has serious amnesia.
swehner 1 day ago 0 replies      
Customer support is what Amazon adds to the otherwise simple service of operating an online catalogue, stocking products and sending them out when ordered.

As you can see here, they are not doing a good job even in that department. Taking huge profits for basically failing.

I have called this a lose-lose in the past.

So -- be good and stop using amazon!

Marvin Minsky dies at 88 nytimes.com
675 points by joelg   ago   93 comments top 28
gonzo 29 minutes ago 0 replies      
I met Minsky once, in February 1987, at a rally in Las Vegas to protest the Nevada Test Site. A lot of famous people (Carl Sagan, Barbara Boxer, Tom Downey, Ramsey Clark, Martin Sheen, Kris Kristofferson) were there, but I'd gone to meet Minsky.

I had taken along a (quite early) copy of the GNU Emacs manual. The FSF was selling them, but I'd put this one together myself. Running TeX on the texinfo source, converting the output for the Imagen printer, and then taking it to Kinko's to be spiral bound, including my imitation of the yellow cover that the FSF version had.

I asked Minsky for his autograph. He looked at what I presented, understood what it was, and autographed inside the front cover: "Marvin Minsky, friend of Stallman".

In April of 2011, in an airport in Honolulu, I presented that same manual to Richard Stallman for an autograph. He looked my manual over for a long time. IIRC, it documents Emacs version 16 or 17. Then he signed it, below Minsky's autograph: "Richard M. Stallman - Friend of Minsky".

RIP, Marvin.

Tossrock 5 hours ago 5 replies      
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

"What are you doing?", asked Minsky.

"I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.

"Why is the net wired randomly?", asked Minsky.

"I do not want it to have any preconceptions of how to play", Sussman said.

Minsky then shut his eyes.

"Why do you close your eyes?" Sussman asked his teacher.

"So that the room will be empty."

At that moment, Sussman was enlightened.


Eliezer 4 hours ago 2 replies      
I had the chance to walk with Marvin Minsky down a hallway once, and I asked him what he thought of Bayesian reasoning. He said that it seemed to him like it was still part of a general trend away from tackling the central problem in AI. I said I didn't think so, but he seemed tired so I didn't try to go into detail.

There's an urban legend that I once got into a fistfight with Marvin Minsky, which does about as well as anything to illustrate the crazy, crazy things that people have been known to believe about me.

We have temporarily misplaced a great mind. See you later, Professor Minsky.

ggreer 5 hours ago 3 replies      
Interesting fact: Minsky is an Alcor member[1], so he's probably being cryopreserved right now. Though if he died from a cerebral hemorrhage, I'm not sure how well they'll be able to preserve his brain.

1. https://en.wikipedia.org/wiki/Alcor_Life_Extension_Foundatio...

dmschulman 2 hours ago 0 replies      
Minsky helped design one of the coolest musical gadgets I've ever come across, the Triadex Muse. Being a sort of self-generative music box, Minsky imagined a future where families would gather around such musical machines instead of turning to boring old television for their entertainment and relaxation.


julianpye 5 hours ago 3 replies      
Isaac Asimov: "The only people I ever met whose intellects surpassed my own were Carl Sagan and Marvin Minsky."
daughart 5 hours ago 1 reply      
Besides his contributions to computer science, his invention of the confocal microscope profoundly affected biology research and is still in wide use.
chriskanan 4 hours ago 2 replies      
I think Minsky was the last living giant of AI who attended the 1956 Dartmouth Summer Research Project on Artificial Intelligence, which many cite as the beginning of NLP, computer vision, machine learning, etc.

Sad news...

saosebastiao 5 hours ago 2 replies      
RIP. I will always hold him as an inspiration.

Of interest: https://en.wikipedia.org/wiki/Neats_vs._scruffies

I find it interesting because Minsky did a lot of the foundational work in Neural Network research yet he philosophically identified as the opposite on the Neat/Scruffy spectrum of most NN researchers today. Much like Bayes, I think there is some immense wisdom from his research that will not even be acknowledged as wisdom for decades.

mindcrime 5 hours ago 0 replies      
Oh man... This is really sad news. I mean, don't get me wrong, ANY death is sad news, especially for that person's friends and family. But while I never knew Marvin Minsky personally, I've felt his influence on my life for a long time. AI has always been one of my favorite subjects, and he's one of the forefathers of AI research and his presence looms large in the life of anyone connected to the field. So this feels like losing an old friend.

Not to mention that he was a brilliant mind, and his loss is a loss for humanity at large.

Anyway, RIP Mr. Minsky.

hydandata 5 hours ago 0 replies      
What a sad day, he was truly one of the great minds of the twentieth century. Inspiration to generations.

P.S. Web of Stories has an extensive, autobiography style interview with Marvin Minsky [1].

[1] http://www.webofstories.com/play/marvin.minsky/1

vonnik 2 hours ago 0 replies      
Some great sentences from Minsky:

No computer has ever been designed that is ever aware of what it's doing; but most of the time, we aren't either.

In general we are least aware of what our minds do best.

ryanmarsh 3 hours ago 0 replies      
Favorite paper of his:

Why Programming is a Good Medium for Expressing Poorly Understood and Sloppily-Formulated Ideas


jonbaer 4 hours ago 1 reply      
"You don't understand anything until you learn it more than one way." ... RIP ... The Society of Mind one of the best books.
ehudla 3 hours ago 2 replies      
One of my earliest exposures to CS was his Computation: Finite and Infinite Machines.

"Communication with Alien Intelligence" is another favorite of mine. The idea of enumerating all possible Turing Machines and looking for ones that do something meaningful is brilliant.

reviseddamage 19 minutes ago 0 replies      
I'm trying to figure out who will play him in the Hollywood movie about him.
proc0 30 minutes ago 1 reply      
So sad. I wanted him to survive until a true AI breakthrough happens, which seems so close (granted for many decades now, but that's why).
rootbear 4 hours ago 0 replies      
Minsky used to give talks at the annual Boskone Science Fiction Convention. I heard several and enjoyed them greatly, he was an entertaining speaker.
jonbarker 3 hours ago 0 replies      
Glad I was able to get to meet Marvin back in 2014. On cognitive neuroscience he was pessimistic, he likened it to telling a chemist to try and discern what a computer was doing by looking at the machine without the monitor. Really enjoyed that analogy.
pyrrhotech 3 hours ago 0 replies      
Very sad and surreal, I happened to just be reading his Wikipedia page yesterday. I hope the cause of death does not prevent his cryopreservation. Say what you will about cryonics, but it definitely gives you a better chance of living again than internment or cremation. I hope the world will see his genius again.
morenoh149 2 hours ago 0 replies      
Thankfully, MIT OpenCourseWare recorded one of his classes, The Society of Mind (6.868): https://www.youtube.com/watch?v=-pb3z2w9gDg
jordhy 4 hours ago 0 replies      
This is sad beyond words. He was an admirable genius and a very candid person. May he rest in peace.
DonHopkins 1 hour ago 0 replies      
Here's a video image from the POV of a robotic Dakin Bear of Marvin Minsky's son, Henry Minsky, who had a look of trepidation at the idea of sacrificing his Dakin Bear to one of his dad's robotics experiments.


DonHopkins 1 hour ago 0 replies      
He was truly a brilliant and humble man, who wrote so much influential and interesting stuff! Here's one of my favorite papers by Marvin Minsky:

Jokes and their Relation to the Cognitive Unconscious

Marvin Minsky, MIT

Abstract: Freud's theory of jokes explains how they overcome the mental "censors" that make it hard for us to think "forbidden" thoughts. But his theory did not work so well for humorous nonsense as for other comical subjects. In this essay I argue that the different forms of humor can be seen as much more similar, once we recognize the importance of knowledge about knowledge and, particularly, aspects of thinking concerned with recognizing and suppressing bugs -- ineffective or destructive thought processes. When seen in this light, much humor that at first seems pointless, or mysterious, becomes more understandable.


endlessvoid94 2 hours ago 0 replies      
Can we get a black bar for this one?
ffk 5 hours ago 4 replies      
Admin, can we get a black bar for this? Marvin Minsky is widely referred to as the "Founding Father of AI."
_pius 4 hours ago 1 reply      
We just lost a giant.

Hard to imagine someone more black bar worthy for Hacker News, hope we have one up soon.

T-Shirts Unravelled threadbase.com
746 points by janzer   ago   154 comments top 41
nextos 2 days ago 6 replies      
It's pretty well known that tees made using 1920s-era loopwheeling machines don't suffer from size changes, and age really well. But sadly these are now super expensive, and only offered by Japanese niche brands who bought machinery from American corps.
blahedo 1 day ago 0 replies      
> While hot water may cause shrinkage in wool garments, for cotton and polyester t-shirts, the washer settings don't make a big difference.

This is also more or less true of wool, in fact. It's not the hot water that shrinks a wool sweater, it's getting it damp and agitating it. The hotter it is while agitating, the faster it will shrink; being fully wet (as opposed to damp) mitigates the felting process somewhat. It appears that cotton is the same way (although its shrinkage is less extreme and less permanent than shrinking a wool sweater!). You can certainly get a wool sweater wet with hot water without shrinking it at all. If you're careful, you can wash it in hot water and agitate it to wash it properly, as long as you don't overdo it, without shrinking it. With some treated wools you can tumble dry on no heat, although I wouldn't recommend it; but under no circumstances should you wash a wool garment (in whatever heat of water) and then put it in a dryer with any heat at all. That will shrink it.

I knit, so I have extra awareness of how many people just don't know what to do with wool these days, and I have to educate them if I want to give them something I made. :)

threadbase 2 days ago 8 replies      
I'm the founder of threadbase. Thanks everyone for your kind words. I'd love to hear any comments or suggestions for what you'd like to see next or how we can improve the user experience. We're also looking for front-end/design help, as well as help with computer vision tech. Feel free to email me chris@threadbase.com.
jakub_g 2 days ago 8 replies      
I feel obliged to share a lifehack my mother taught me regarding the laundry.

After removing the laundry, take your t-shirts and stretch them yourself, one by one, when they're still slightly wet. Grab them with two hands symmetrically, stretch horizontally, moving your hands down along the shirt. Do the same vertically, and with the sleeves.

Do not use a machine dryer, just a regular standing dryer like [1].

Put your t-shirts carefully, symmetrically on the dryer, and once dry, put them on a hanger. If you follow this, you will not have to iron them at all.

Source: I've been doing this for 4 years and haven't touched the iron since. All my t-shirts are 100% cotton (though I buy only high-grammage ones) and they all seem brand new and ironed (the only exception being one particular brand whose collar looks bad unless ironed; I stopped buying that brand). YMMV of course.

[1] http://ecx.images-amazon.com/images/I/41oWjx2Q-mL._SY300_.jp...

specialist 2 days ago 2 replies      
This is a great resource, thank you. Being tall with a long torso, trial and error buying t-shirts has been torturous and expensive.
Dan_in_Brighton 2 days ago 1 reply      
Great to have this data. But wouldn't the extent of dryer-induced shrinkage be driven by the amount of time in the dryer, as much or more than temperature?

While most modern dryers offer a choice of temperatures, the big knob mostly controls a humidistat-based target. I personally equate the "very dry" setting with "shrink beyond usability".

I'd expect that removing clothes while still damp would be more important to avoiding shrinkage than reducing the heat, but I'm no T-shirt scientist. (T-shirtician? T-shirtologist?)

tdaltonc 2 days ago 1 reply      
Man, so many things that I always wanted to know. Why didn't a marketing team at Tide make a infographic about this a decade ago?
jkereako 2 days ago 1 reply      
Good stuff. I'm interested to see data on other cotton garments, particularly buttondown shirts.

Cotton is just a lousy fiber. On the other hand, wool is a strong and resilient fiber. It also never needs to be washed provided it isn't stained.

My wife knit a wool sweater for a close friend of mine who spent 6 months as a bosun on the tallship the Lady Washington (the Interceptor in the Pirates of the Caribbean). Fresh water is scarce on a tallship so showers were infrequent. He came home during Christmas and I smelled the sweater which he claims he never washed and it smelled fresh. Surprisingly, it also kept him warm and dry on the open ocean. I later learned that Irish fishermen have been wearing wool sweaters at sea for generations.

Wool is the fiber of the past and future.

sschueller 2 days ago 0 replies      
Very cool but please add a metric measurements option to your search.
flormmm 2 days ago 0 replies      
This is data I'm grateful to have and, at the same time, can't believe someone went to all the trouble to get it!
boulos 2 days ago 1 reply      
How did you actually do the measuring?

The "manufacturing variance" chart jumped out at me as looking fairly unnatural: there's variation in width or variation in length, but very few points that mix the two. Then I noticed that we're talking about just over half an inch in each direction.

How much of this effect is variation in your measurement?

brad0 2 days ago 0 replies      
Brilliant post. This explains exactly why some T shirts I buy fit great in the chest after buying and are shorter and wider after one wash.
smcl 2 days ago 0 replies      
These sizing charts are incredible and must have taken a lot of effort to put together. I'm gonna come back to this page a lot.
chrismartin 2 days ago 1 reply      
This is excellent. Independent, consumer-empowering size analysis, especially measured over extended wear and washing. I'll be following you.

I wonder if there are any companies that sell inexpensive custom t-shirts? Provide your measurements, specify desired fit, neck type, color, and fabric, and order exactly what you want.

As someone very hard to fit for pants (28" waist and cyclist thighs), I would be thrilled if you also do this for jeans and shorts.

vanilla-almond 2 days ago 1 reply      
I don't have a dryer, but I've still had cotton shirts (not t-shirts) shrink in the washing machine. This usually happens the first time (or first few times) they are washed at the temperature recommended on the label: 40C (104F). However, at 30C (86F) I've never encountered any shrinkage. So this purely anecdotal experience makes me believe that the temperature of water can affect some cotton garments.
itchyouch 2 days ago 1 reply      
I've standardized my shirts on the Uniqlo Crew Shirts and they seem to have very little change from wash to wash compared to my Banana Shirts.

The Uniqlo shirts are a cotton/polyester blend while the BR shirts are 100% cotton, which explains the durability boost the synthetic fibers provide.

It would be great if we could get some data on which brands have the most and least variance and which brands expand and shrink the most over their lifetime.

carlob 2 days ago 2 replies      
Doesn't work on Safari with Ghostery and uBlock. No text loads, but the plots do.
mcv 20 hours ago 0 replies      
Interesting data! I remember the excessively short/wide t-shirts well from my early university days, when I still took my laundry to my parents who had a dryer. Now I dry my clothes on a line, and they don't seem to deform so much.
samstave 2 days ago 3 replies      
>What surprised us was that over the course of many wash cycles, the chest and waist will drift wider and the length will drift shorter.

What if the fabric was rotated 90 degrees upon manufacture, wouldn't this eliminate this problem?

The shrink pattern is related to the orientation of the thread build of the fabric used, is it not?

mojoe 2 days ago 1 reply      
This is a cool analysis, although I think every t-shirt I own is from Target (Merona and Mossimo brands) so I didn't have a single point of reference for the width and length charts. Those charts seemed like by far the most useful part of this post.

Edit: I'm curious about the downvotes. Are people appalled at my lack of taste in t-shirts? :)

nether 2 days ago 1 reply      
Awesome work. Small nit: wish the average values were shown as colored dashed vertical lines.
bravura 2 days ago 0 replies      
tldr Pro-tip: If you want to get long life out of clothes you like, don't put them in the dryer.
jwagenet 2 days ago 0 replies      
I would love to see the size dataset expand to casual and dress buttonups, and even jeans. A bit of data like this will greatly improve my shopping experience.
mc32 2 days ago 0 replies      
The most maddening thing is that even within a brand, sizes (S, M, L, etc.) are inconsistent, never mind hoping there would be consistency of measurement across brands.

Their charts expose this inconsistency. Some brands, like Mack Weldon, are more consistent than, say, American Apparel.

Even with objective measures like pants waist size in inches, a typical size 32" is actually 34" -- I guess to make people think they are thinner than they actually are.

wslh 2 days ago 1 reply      
side note to the web team: please add RSS to your blog.
graycat 2 days ago 0 replies      
How to make T-shirts longer:

Wash as usual and then hang on a plastic hanger until dry. So, don't use a dryer and just let them hang, starting when they are still wet from the washing.

Also works when you hand wash and rinse but don't squeeze out much of the water; that is, hang them while they are still wet enough to drip.

Also works with knit polo shirts.

jimbobimbo 2 days ago 0 replies      
I learned the dryer effect the hard way - so many good tees were destroyed. :( Nowadays I just leave my t-shirts and polos to dry.

The size charts are a real eye opener. I know that Abercrombie carries smalls that fit me fine, but Zara was an unknown to me. Apparently, their tees are also reasonably priced and look pretty good...

Thanks for the post!

BatFastard 2 days ago 1 reply      
I like the concept of the post, but found it difficult to read. Too much data, and not enough conclusions.
_greim_ 2 days ago 1 reply      
Is there a "grain" to the fabric or something? Why not turn it 90deg and have the shirts increase in length and decrease in the chest instead? I'd prefer size to stay the same over time, but if I had to choose I think I'd rather have that.
sehr 2 days ago 0 replies      
Probably one of the few times I've enjoyed an ad, useful and interesting! good job threadbase
mrbill 2 days ago 0 replies      
I still have a problem with my big and tall shirts - I have to hang-dry them to keep them from shrinking vertically when run through the dryer (due to my body type). This backs up what I've complained about for years :)
jedberg 2 days ago 0 replies      
The most important thing I learned from this page was that American Apparel sizes small. Since 3/4s of all my shirts are startup shirts and most of those are AA, this is good to know.
glossyscr 1 day ago 0 replies      
How did you get samples from all the manufacturers, did you buy them all? In particular for the variance test (20 pieces per manufacturer).
cakes 1 day ago 0 replies      
This is really interesting/useful for the next time I'm looking to make t-shirt purchases.
vyyvyyv 2 days ago 0 replies      
Are there plans to do this for women's clothing?
_greim_ 2 days ago 1 reply      
So is there no economic pressure to develop a fabric weave that's both efficient to manufacture and stable over time?
janzer 2 days ago 0 replies      
As a bit of an aside, I actually came across this from a tweet by Adam Savage of Mythbusters fame.
kelukelugames 2 days ago 0 replies      
The results are too important to accept without peer review, examining the methodology, etc.
MaxHalford 2 days ago 2 replies      
Does someone know what tools were used to make the graphs in this article?
godzillabrennus 2 days ago 0 replies      
Seems like you guys and http://Markable.com would have a natural symbiotic relationship for data sharing.
solotronics 2 days ago 0 replies      
This is awesome!
Google's Free Deep Learning Course udacity.com
611 points by olivercameron   ago   61 comments top 15
j2kun 3 days ago 3 replies      
I'm going through the course right now, and the instructor is saying some strange things, clearly (to me) ignoring that what he's saying is only true in very specific contexts.

For example, in the video I just watched he said "the natural way to compute the distance between two vectors is using cross entropy." And then he goes on to describe some unnatural features of cross entropy. The truly "natural" way to compute distances between vectors is the Euclidean distance, or at least any measure that has the properties of a metric.

I can understand this is a crash course and there isn't time to cover nuances, but I'd much rather the instructor say things like "one common/popular way to do X is..." rather than making blanket and misleading statements. Or else how can I trust his claims about deep learning?
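For what it's worth, the commenter's point is easy to demonstrate numerically. Below is a minimal sketch (using standard textbook definitions, not anything from the course) showing that Euclidean distance behaves like a metric while cross entropy is asymmetric and nonzero even for identical distributions:

```python
import numpy as np

def euclidean(a, b):
    # A true metric: symmetric, non-negative, and zero iff a == b.
    return np.sqrt(np.sum((a - b) ** 2))

def cross_entropy(p, q):
    # Defined for probability distributions, not arbitrary vectors.
    # Not a metric: asymmetric, and H(p, p) equals the entropy of p, not zero.
    return -np.sum(p * np.log(q))

p = np.array([0.9, 0.05, 0.05])
q = np.array([0.5, 0.3, 0.2])

print(euclidean(p, q), euclidean(q, p))          # equal: symmetric
print(cross_entropy(p, q), cross_entropy(q, p))  # different: asymmetric
print(euclidean(p, p), cross_entropy(p, p))      # 0.0 vs the entropy of p (> 0)
```

This is why cross entropy is usually described as a loss for scoring a predicted distribution against a target, rather than a general-purpose distance between vectors.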

it_learnses 3 days ago 7 replies      
Would it be beneficial for me as a developer to take these machine learning courses? I took a course in the uni a while back and know the general techniques, but I'm not sure how it would help me in my career unless I'm doing some cutting edge work in the field or focusing on a machine learning career, in which case wouldn't I need to be pursuing a postdoc or something in it?
imh 3 days ago 1 reply      
If you want more than a 4 lecture course, I recommend Nando de Freitas's course. It's very high quality and free.


stared 3 days ago 0 replies      
When it comes to the course itself (I've just started it) it looks nice, but the (initial) questions tend to be vague.

E.g. in the first question with code I had to reverse-engineer what they mean (including passing values in a format, which I consider non-standard (transpose!)). The first open-ended questions were entirely "ahh, you meant this aspect of the question".

Otherwise, the course (the general level, pace, overview) seems nice.


The IPython Notebook tasks (i.e. the core exercises) are nice.

ganeshkrishnan 3 days ago 1 reply      
I think intro to machine learning https://www.udacity.com/courses/ud120 is the prerequisite to this course
maurits 3 days ago 0 replies      
For people interested, Stanford has an excellent online course on deep-learning with an emphasis on convolutional networks. [1]

It comes with video, notes, all the math, cool ipython notebooks and will let you implement a deepish network from scratch. That includes doing backprop through the SVM, softmax, max-pool, conv and ReLU layers.

After that you should be more than capable to build a 'real' net using your favourite lib (Tensorflow, theano etc).

[1]: http://cs231n.stanford.edu/
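To give a flavor of what backprop "from scratch" looks like, here is a hypothetical numpy sketch of a ReLU layer's forward and backward passes (my own illustration of the pattern, not the course's actual code):

```python
import numpy as np

def relu_forward(x):
    # Forward pass: clamp negatives to zero; cache the input for backprop.
    out = np.maximum(0, x)
    cache = x
    return out, cache

def relu_backward(dout, cache):
    # Backward pass: the upstream gradient flows only where the input was positive.
    x = cache
    return dout * (x > 0)

x = np.array([[-1.0, 2.0],
              [3.0, -4.0]])
out, cache = relu_forward(x)
dx = relu_backward(np.ones_like(x), cache)
print(out)  # [[0. 2.] [3. 0.]]
print(dx)   # [[0. 1.] [1. 0.]]
```

The other layers (conv, max-pool, softmax) follow the same forward/backward shape, just with more bookkeeping stashed in the cache.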

stared 3 days ago 0 replies      
While TensorFlow may be not yet as mature as Theano or Torch, I love their tutorial: https://www.tensorflow.org/versions/master/tutorials/. It's clean, concrete, and more general than introduction to their API. (Before I couldn't find anything comparable in Theano or Torch.)

In any case, I regret waiting so long to learn deep learning. (I thought I needed many years of CUDA/C++ knowledge (I have none); but in fact, what I needed was to know the chain rule, convolutions, etc. - things I learnt a long time ago.)
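As an illustration of how modest those prerequisites are, the convolution at the heart of a convnet is just a sliding dot product. A rough sketch (valid-mode, 1D, and really cross-correlation, which is how deep learning libraries define "convolution"):

```python
import numpy as np

def conv1d(signal, kernel):
    # Slide the kernel over the signal and take a dot product at each offset.
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

s = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, 0.0, -1.0])  # a simple edge-detector-like filter
print(conv1d(s, k))  # [-2. -2.]
```

The same idea generalizes to 2D images by sliding a small matrix over the input, and the chain rule then tells you how to push gradients back through it.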

DrNuke 3 days ago 0 replies      
Yes! Andrew Ng's coursera + kaggle.com + this deep learning course by Google is a very nice -and free- foundation.
cpal90 1 day ago 1 reply      
Hi guys, one question, sorry if it's answered somewhere: why does the title say "Free" course? Is it free because of the trial period, or is the whole course free as of now?

If the whole course is free, are there more free courses on this site?

Thanks for the reply.

magicmu 3 days ago 4 replies      
How accessible is a course like this with no prior knowledge of linear algebra? I know it's listed in the pre-reqs, but with a good head for math and lots of calc, is it something that could be picked up along the way? I'm normally pretty bold about stuff like that, but I know it's a core part of deep learning / ML. If it's really necessary, if anyone has any resources for linear algebra run-throughs it would be greatly appreciated!!
alok-g 3 days ago 0 replies      
Will the projects/assignments be workable on Windows, or would I need Linux et al for these?

And if not natively (using docker/VMs), would they be able to use NVidia CUDA card on my system? And how much disk space would be needed.


wodenokoto 3 days ago 2 replies      
Does this use tensor flow?
ntnlabs 3 days ago 0 replies      
Yeap, it's dead :)
jorgecurio 3 days ago 0 replies      
I fucking love Google, it's the greatest company there is. Thank you for this free course, incredibly high quality and very enjoyable to watch.
fiatjaf 3 days ago 0 replies      
Udacity is kinda ridiculous, making us answer some stupid questions every 5 minutes. I'm not in school anymore (by the way: no one ever learns anything in school).
Why I quit my dream job at Ubisoft gingearstudio.com
594 points by Chico75   ago   191 comments top 35
lost_name 3 days ago 10 replies      
I once worked as a consultant to help users implement some software. I moved on to the development of that software, knowing the dozens of areas that could be improved to make life easier for the users, and honestly with little effort (it's a web app, I did that before the consultancy stuff). At that point in time, that was my dream -- I wanted to help make people's lives a little bit easier, and the people I helped would be those who used our software.

After around a year or so of implementing questionable features, I attempted to get approval for updates to old, well used features to improve them (stability and convenience focused, really), but was shot down. This wouldn't sell the software, because it worked well enough, and we needed more revenue more than retaining old customers. At that point I understood that after the software is sold the customer will be too ingrained into the product to leave without financial repercussions.

A while later, we got bought out by Big Company, so that strategy apparently worked. BC doesn't give half a shit about anything we ever did, and we piled on the features release after release with little concern about anything else. I tried a couple times after the buyout to get approved for existing product improvements, but always got shot down.

I continue to find it odd how the company can be so profit oriented, and yet so averse to improvements. I suppose I'm just wrong or don't actually understand. Either way, it makes it very hard to care about my work these days.

ransom1538 3 days ago 4 replies      
Why to not work at a video game company:

0) Abuse.

1) Executives cut projects: a lot. The budgets are so insane for games that executives need to constantly trim budgets and shift things around. It is common to walk over to an artist's desk and inform them the art they have worked on for 2 years won't be used. I am convinced telling a wife her husband has passed is the same feeling.

2) The budgets have exploded. My last project for an iPhone game was well over 4 million dollars.

3) Complexity is compounding. My last team (for a prototype) consisted of: AI guy, graphics/C++ guy(s), gameplay guy, Art TEAM (vector and raster) and project managers. The art pipelines alone will suck the budget dry.

4) Pay is low. Since you are starting fresh each project (see 5), your working knowledge of the system is similar to someone new. Promotions, salary increases, etc don't make any financial sense (see 1) unless you are a rockstar. The new kids walking in usually burn out and quit because they don't understand the massive shit show the industry is. EA's managers just grind people until they can't walk. Disney is a sweatshop.

5) NOTHING is reused. After your second project, you quickly realize the AI you created for fish has nothing to do with your AI for a 3D shooter. The asset pipeline you created for a soccer game doesn't translate over to a racing game. Game companies are full of dead code repos. People try to create/use repeatable platforms, but then the game designer guy will walk by and say "Hey is that the newest unreal engine?". In games: Anything reused is quickly spotted as reused. This is why games that have a good series going do really well financially. GTA is on what, like 15?

6) Success is low. After a few years into a project, someone will say: "But its not... fun". Welp, good luck fixing that. Or plan on having it rot in some terrible online store.

7) Rockstars. Executive: "OMG you wrote the AI for GTA2 in 1998??". Welp, this guy is now your boss. AND, because games are almost always a luck play - this "Rockstar" will teach you absolutely nothing.

My takeaway:

I have talked with guys in the game industry that have been in it 20+ years and asked WTF. Basically, lifers are like high school teachers. They are abused and underpaid: but they love what they do.

endymi0n 3 days ago 3 replies      
> Since my very beginnings at Ubisoft, I knew I wouldn't spend the rest of my days here. I already dreamt of starting my own indie company.

Well, then that was probably your dream job instead of Ubisoft.

chaostheory 3 days ago 8 replies      
> No matter what's your job, you don't have a significant contribution on the game. You're a drop in a glass of water, and as soon as you realize it, your ownership will evaporate in the sun. And without ownership, no motivation.

A good description of a lot of big corp projects. Do people working on large open source projects eventually feel the same way?

this_user 3 days ago 4 replies      
This phenomenon is not exclusive to game development. Lots of people want to work for large, prominent companies like Google or FB, dreaming of working on cool projects. But the reality usually turns out to be much less glamorous. Instead of being the guy who comes up with the next killer product or feature, you will likely end up as a small cog in a huge, well-oiled machine, optimising ads to increase some metric in the fifth decimal place.
mattmaroon 3 days ago 0 replies      
I hope this does not happen to him, but wait until he releases a game (that might even be a really good one) and gets nowhere because 1,000 other people released a game that week. It's a tough industry now! You probably appreciate the indie side of it when you're in AAA, but I can tell you from experience you appreciate the AAA side of it when you're in an indie!
lhnz 3 days ago 1 reply      
> "When your expertise is limited to, let's say, art, level design, performances or whatever, you'll eventually convince yourself that it's the most important thing in the game."

This is my experience, too. Without autonomy and ownership across a whole project it's very easy for people to get tunnel vision about what's valuable. This causes general harm to both the team and the outcome of its project.

I'm not sure how to lessen the effect other than perhaps by making projects small enough that they can be worked on by just a few people and using this phase to establish a kernel of good ideas and team cohesion.

Perhaps there might be another structure where the tools that are provided to the team are literally so good that the main project can be done by just a few people working on everything together. (Idealistic vision here.)

chris_wot 3 days ago 0 replies      
I can well imagine this can occur in larger, non-games software development projects. I wonder if it is the same?

I sort of suspect not. I am currently refactoring an (albeit important) part of the LibreOffice codebase - the VCL font subsystem. Mostly it's reading the code (in fact, 90% is reading and understanding the code), but it's kind of satisfying looking at how changes to the code make things better and... more elegant.

Perhaps this is just an Open Source thing. Or maybe I'm unusual in that I like to focus on smaller modules and make them really good, then move on to the next thing.

jnaour 3 days ago 2 replies      
Seems to be his first indie: http://openbargame.com/
ckarmann 3 days ago 0 replies      
Congratulations for pursuing your dreams! I also work at a big game company, not even on a game but on an internal technology: no player will ever see directly the result of my contribution. I still feel great about my work because of reasons not important here, but I totally understand what the author says.

But the feeling of being a little cog in the machine aside, some of what is said here is about failures of management: communication problems, useless meetings, bogus decision processes, lack of visibility into who is impacted by a decision, etc. It's true that big projects are more difficult to manage than small ones, but in truth bad management or bad coworker dynamics can destroy motivation in big and small companies alike. I have worked in a few startups and two indie game companies and all were plagued by mismanagement as much if not more than my other experiences at a bank and at a big cell-phone company. I may have been unlucky, but it may be a simple truth about the programmer's job: working with other people is hard and team dynamics are very important.

gnulnx 3 days ago 0 replies      
> No matter what's your job, you don't have a significant contribution on the game. You're a drop in a glass of water, and as soon as you realize it, your ownership will evaporate in the sun. And without ownership, no motivation.

This is why I left my 'dream job' of working on a AAA MMORPG. I came on board early on as the first member of a 'NetOps' team, a senior linux systems administrator, which later split off and grew into a number of very large, very specialized teams. My loose definition of 'dream job' at that time was 'large scale' and 'video games'. Cool!

It took a few years for me to redefine what a 'dream job' really meant, and being a drop in a bucket was not it, so I left and moved on (slowly) to freelancing, and haven't looked back.

martimoose 3 days ago 0 replies      
I don't work in the videogame industry, but I can totally relate. I work in a small website dev studio, and we interact with a lot of companies, both large (though not huge) and small.

As soon as you get people working on a project that are too specialized, no matter the size of the team, you inevitably get conflicting concerns. I think it's very important for managers to understand what those concerns are to be able to take the right decision.

I also think that even specialized people should have some knowledge of other specializations (e.g. designers that understand programming, and vice versa). On very large projects, this is impossible as there are just too many fields, but still I value very much "general knowledge" for that reason.

Anyway, good luck Maxime in your endeavors.

hacknat 2 days ago 0 replies      
Late to the party and this doesn't address what the OP said directly, but the state of the video game industry actually makes me quite sad.

The last AAA game I played was Oblivion, which I couldn't finish. I haven't really played a AAA game since, and have only played two video games all the way through since (Braid, and Monument Valley).

When the OP talks about working on a project so big that no one person really "groks" the whole thing I can relate, but I also want to say "it shows".

IMO, the current state of AAA games is shit. I think the reason they are this way has to do with what the OP is complaining about: the originating vision of the game comes from Marketing, not an artist, and no one person has a vision for the game. Maybe video games just have too many resources at their disposal.

I think I read somewhere that either Ocarina of Time or Mario 64 had double or triple the playable content of the released game and Miyamoto had a perfectionist eye for the game and was merciless in what made the cut.

Resource constraints are a good thing, IMO, as they force people to make a razor-focused product that trims the fat mercilessly.

Having unlimited resources is the enemy of good decision making, and it shows in the current state of video games (and film too). Games and movies are just too long/full these days.

cpsempek 3 days ago 1 reply      
I do not get why people, it seems, often use the "...or, how I learned to stop worrying and..." in their blog post titles. Are they doing it as an homage to the film Dr. Strangelove (I'm not sure if that was the originator of this alternate/sub title phrase), or, are they doing it because it has become a meme among bloggers?

If the latter, fine; at worst they are unoriginal. If the former, then they haven't ever seen the movie, or don't understand the movie and the absurdity of the title character nonetheless "loving the bomb".

Or, this phrase is common and I erroneously associate its origin with the film.

In every case but the last, it irks me, but for no good reason ultimately.

shmerl 3 days ago 0 replies      
The problem with companies like Ubisoft is the mass market approach. Big publishers prefer commercial mass market art to good art. As a result, more interesting games come out from independent studios like inXile, Obsidian, CD Projekt Red and others. Not sure how it looks from an insider's standpoint, but from a gamer's standpoint, big publishers like Ubisoft and EA are plain boring, and their games can be compared to pulp fiction; you don't expect to see masterpieces from them (coincidentally, they are also most often plagued by DRM, in contrast with games from independent studios).
ergothus 3 days ago 0 replies      
This quote stood out to me:

"On large scale projects, good communication is simply put just impossible. How do you get the right message to the right people? You can't communicate everything to everyone, there's just too much information. There are hundreds of decisions being taken every week. Inevitably, at some point, someone who should have been consulted before making a decision will be forgotten. This creates frustration over time."

This is an issue I've wrestled with over the years - too small a company and your resources are limited, too large and progress mires, and it mires because of communication.

dismal2 3 days ago 0 replies      
Fulfillment through structured 40hr+ a week labor is an illusion
the_common_man 3 days ago 1 reply      
Fantastic write up. I know this feeling all too well.

A bit related is when you work in big companies like Apple and Tesla. These guys have a "hero" at the top. There is nothing you can do but wait for the headline that credits a feature you made to Elon Musk's doing or Jobs' amazing leadership. I have nothing against these two, but it is very demotivating to work under.

davedx 3 days ago 0 replies      
I've worked for a couple of small games studios, and once for a big studio working on a AAA game. The headcount observations resonate.. I remember our teams growing, and growing, and growing, and each extra programmer detracted from the "community" feeling of being part of a studio, and added to the complexity of developing such a large code base with so many devs.

Compare that to small studios, where you can really feel like part of a family. It's very different, and all these kinds of feelings are more intense than other IT companies I've worked at. (Probably partly because of the extra time you tend to spend there when working in the games industry...)

Having said that -- some of my best friends were made when working at the big AAA studio! So it's not all bad.

jarjoura 3 days ago 1 reply      
So let me just throw this out there, we will always have to answer to someone. Whether it's our middle manager in a big organization, VCs telling us how fast we need to grow, or our demanding users because they are the only way to get revenue.

All software written at this stage is small cogs on a much bigger platform written by teams of brilliant people over the last 30-40 years.

I do think it's fair to say you want to work on actual interesting problems and being one of 20-40 people working on a game engine is probably very tedious. I imagine long code-review cycles since any tiny change could destabilize the entire system several layers up.

Some people need big organization structure to produce their best work while some people need the freedom to have infinite WFH days answering to users to produce their own best.

emehrkay 3 days ago 0 replies      
This was a great read. I worked at a large web agency once and did some pretty decent work. It is definitely rewarding to see people use something that you worked hard on and to see it on tv and in magazines, etc. But that yearning to do your own thing and blaze your own path is a feeling that I'm certain most people who work in creative fields go through.

Sidenote: before he said that the small projects were cancelled, I assumed that they were Evolve (https://evolvegame.com/agegate/) (I don't follow games close enough to know which studio makes which game).

I'm curious as to how he was able to, I assume, bootstrap a game company for a year before releasing an iOS game.

nshung 1 day ago 0 replies      
This reminds me of this TED talk.[1]

1. https://www.youtube.com/watch?v=5aH2Ppjpcho

dexwiz 3 days ago 1 reply      
Good luck to him. Going Indie is a bold choice, especially after Steam Greenlight.
richerlariviere 3 days ago 0 replies      
> The team spirit was sooo good! Our motto was "on est crinqués!", which more or less translates to "we're so hyped!". During our play sessions, we were so excited we were screaming and shouting all over the place. I think it bothered colleagues working next to us, but hell, we had so much fun. I didn't feel too guilty.

Wow. IMO A dream job is a balance between having fun like you described and working on complex problems. I love how you have written this paragraph.

ninjakeyboard 3 days ago 1 reply      
I went to business school instead of tech because I too dream of running the ship one day. Tech will always be an interest but I thirst for freedom.
robertndc 2 days ago 0 replies      
There are no dream jobs, just jobs or dreams:

. Build your own company and you will end up accepting profit as the flagship.

. Find a job where you lead the direction, and internal politics will make you adapt in ways that run against your life goals.

. Make an open source project that no one will use.

azraomega 3 days ago 0 replies      
tl;dr - feeling no ownership doing somebody else's big projects, he quit.
MollyR 3 days ago 0 replies      
Wow, cool stuff. I wish them the best.

I often had dreams of doing the same thing, especially inspired by this guy http://www.konjak.org/ .

It seemed like overkill for me as I could never get a team together.

Though with the rise of VR, I've been looking into Unity3D. How cool would it be to build your own world, then jump in and visit it?

Mendenhall 3 days ago 0 replies      
We are all just cogs. What I learned is no matter what sized cog I am compared to others, just make sure my interaction with the other cogs is as smooth as possible. I take pride in doing good work no matter how small or large.
durpleDrank 3 days ago 0 replies      
I used to work beside UBISOFT here in Montreal. I'd hear them talk about videogames during lunch time and it was pitiful. It seemed like having colored hair and geek-chic was more important than actually knowing anything about videogames.
listic 3 days ago 1 reply      
How large was the development team of Assassin's Creed Syndicate at its peak? Is the overall budget known as well?

I wonder how big "really big" is nowadays.

iolothebard 3 days ago 0 replies      
Put the guitar on ebay/reverb. Or just send it to me!

Best of luck :-)

workitout 3 days ago 0 replies      
For me my dream job can be writing CRUD web software as long as people need it and appreciate my work.
oDot 3 days ago 0 replies      
How does this compare to Valve? Maybe having no deadlines can ease the specialization issue
bronz 3 days ago 0 replies      
Make sure to check out his upcoming game, Openbar. It looks really, really good.
Privilege and Inequality in Silicon Valley medium.com
629 points by dtran   ago   292 comments top 42
rdlecler1 3 days ago 3 replies      
I had a similar experience starting life with opportunity debt. Single parent family whose mother had no high school education. Have ADD and dyslexia, moved around a lot with a good part of my life in subsidized housing, and never graduated from high school. No one teaches you the basics, so that when you do start coming into your own and taking control of your own life you are so incredibly behind your peers socially, politically, and intellectually. I eventually went to community college as a mature student, eventually made my way to university, did a masters and then a PhD at Yale. Through it all I was always one or two steps behind, and so many opportunities were missed because I didn't have money. Similarly, now as an entrepreneur I find myself being a little more conservative because I've been through a lot of bad times without a safety net.
yeureka 3 days ago 6 replies      
This sounds very familiar.

When I was in University I didn't understand why some people didn't care about grades and partied so much. When we left school and got into the real world I understood why: they had rich parents with contacts that could get them good jobs or seed capital for their own businesses.

I had lots of ideas and worked in a lot of startups for more than 10 years but now the following phrase from the article describes my situation very well:

"Most of the time, potential founders who share my background tend to work at lucrative jobs in finance or tech until they can take care of everyone in their families before they even dream about taking more risks - if they ever get there."

ginsurge 3 days ago 4 replies      
This really resonates with me. I was the first person in my family to go to university, and my grandparents had to work multiple jobs to survive when they migrated from Europe. My dad did slightly better, but both my parents had only high school educations and worked blue-collar jobs.

It does make it really hard to change your mindset when you come from this sort of background - when you've achieved more than anyone in your family and therefore can't really talk to them about your ambitions or career objectives.

It sounds awful, but sometimes I wish I had been born into a different family, with highly educated parents I could have amazing conversations with, who would encourage me to achieve and grow even more.

I find I constantly have a mindset of "I'm not good enough", and it's paralysing. I want to interview for the top tech jobs out there, like Google or Facebook, but my brain keeps telling me I'm not good enough. It's awful.

AlexB138 2 days ago 3 replies      
This was a bit of a tough read for me. My reaction is sort of selfish, but it was very visceral. I read this and had to come back later to respond, though I imagine the conversation's largely over at this point.

My family basically fell apart when I was around 11. My parents divorced. I stayed with my father, siblings went with my mother. My father turned into a drunk. I spent good nights carrying him from the couch to bed, and bad nights carrying him from the lawn, sometimes without clothes. I learned to drive bringing drunks home when I was about 13.

I had no social skills. I struggled in school and failed a grade, though I eventually made it up and graduated high school on time. No one ever even mentioned college to me. I never thought about it until everyone I knew was talking about where they were going. Toward the end of high school my father's alcohol habit turned into a hard drug addiction. About a week after my 18th birthday, we were kicked out of our house because he hadn't paid rent in months. He went to go live with a fellow addict and I became homeless.

I lived on friends' couches for a while. Around that time I realized that life could continue getting worse, or I could start fighting the tide. I got a job making pizzas, then did construction work, and then started a sub-contracting construction company when I was 19. When I was 22 I had 14 people working for me. I ended up shutting the business down, mostly due to mistakes I had made. After that, I got into tech.

I'm 30 now. I've got a family and don't have much of a relationship with my parents or siblings. I make a solid salary, and have done fairly well in my career, but I struggle with pretty severe imposter syndrome. I have trouble making lasting connections, and have failed entirely to find any mentorship. My wife hardly knows anything about my history, but she knows more than any of my friends.

All of this is a long winded setup to say, I didn't get that transformational experience that the writer here experienced at university. I didn't even know SAT classes existed until well after they would have helped, and had never heard of Stanford until I was into my tech career. I would have given quite a bit to trade my father for an immigrant who simply didn't work. I very much admire the writer's drive and results, and don't mean to detract from any of that, but I have a hard time fighting the urge to point out that he had more privileges than he probably realizes.

__jal 3 days ago 1 reply      
I found myself nodding along to the whole article.

First, let me say that I am happy where I ended up. I'm successful, enjoy my work, and when I compare my personal income with our family income when I was growing up, it is an absurd multiple.

We were a very poor family in a poor part of the South. I went to a top-10 small private university on a full ride, felt completely alienated and never quite figured out how to function in that environment. I dropped out and moved to San Francisco at what turned out to be a very good time (early 90's), and once Netscape dropped, discovered nobody else knew what they were doing with this web thing either, and more or less faked it until I made it.

At the same time, I have had and do have ideas that others have executed on, ideas that I know I could have made a go at, if only...

The "if only" list is long, and most of it comes back to self-imposed limitations that I can trace back to how I grew up. Frequently it relates to economic security, but there are other habits of thought that stop me from even getting to the point of worrying about that.

One big one is that I never learned to think about entrepreneurship. A big lesson hammered into me growing up was the importance of "finding a good job", not figuring out how to make my own.

I did start a company in my mid-30s, and we did OK, until we didn't. And that failure (I think) had nothing to do with the habits of thought of a poor kid. But failing in a similar way in my 20s would have left me in a position to learn from that and try again, something I'm unlikely to make a go at 10 years later. I do little things for side income, but those are hobbies.

So it ends up being this thing that doesn't really bother me at this point, but does leave me to wonder what would have happened if I had picked parents from a very different walk of life.

And I am quietly amused when people tell me how they built everything themselves "after a seed from Dad", or "with a great connection I made through a family friend" or similar. Those are impossible blockers for a lot of people, even if they get over some of the habits of mind better than I did.

nickbauman 3 days ago 6 replies      
There's a strong thread of meritocracy in the tech community, but there is no such thing. When you choose the clearly better developer over the other, you're often choosing the one who had better resources growing up, not just natural ability. The poorer developer may have had a natural advantage over the other one, but didn't have the money to develop it as much. So you're really just selecting for wealth all over again.

This is what's behind the achievement gap anxiety: Wise rich people don't want to perpetuate a world where only money selects success. It's wasteful and ultimately unsustainable.

Htsthbjig 3 days ago 2 replies      
I believe this man is confusing lots of things.

I have lived in China for more than 5 years, and in Boston, Japan, and Korea for more than 9 months each.

In my opinion, minimizing conflict has nothing to do with being poor, and a lot to do with being Chinese-educated.

On the contrary, I volunteer helping poor kids like Spanish gypsies or sub-Saharan Africans, and they (and their parents) are ultra confident and spontaneous. Being open is the default for them.

I managed Chinese people in China and there was a world of difference between natives and those Chinese educated overseas.

When living in the US, I was shocked to see parents cheering their kids for the most stupid things, when in Europe as a kid you are forced to put in 4x the effort without any rewards at all (like learning multiple languages). It is just what is expected of you.

In Asia, this pressure over kids is even higher than in Europe.

Family is very important for the Chinese, almost a religion. This has advantages and disadvantages. For innovation, it is a big disadvantage. Innovation means taking risks, and being close to your family means having to convince lots of people those risks are worth it. Most people won't understand you, and it is very hard.

In the US, everybody is on their own; basically, nobody gives a damn, which is great for changing the world.

mchu4545 3 days ago 0 replies      
Ricky mentions the guilt at not cashing out on his Stanford degree immediately and providing for his family.

Day-to-day, how do founders in similar positions coming from cultures with tight-knit families address this? Especially as parents age?

dtran 3 days ago 4 replies      
Has anyone done a survey of the family socio-economic status of startup founders and early employees? I'd be curious to see how many founders/early employees are from low-income families, whether their parents graduated college, etc. If not, I'd love to create one.
nish1500 3 days ago 1 reply      
I grew up on the other side of the world, amazed by what I heard in the news - that there existed a world beyond mine where smart people with smart ideas built great companies overnight. I am smart. I have merit. I dropped out at 19, taught myself how to code, and built a 6-figure business with my projects online. I want to learn more.

I got turned down by 15+ companies and startups in the past few weeks because they couldn't sponsor my work visa. This is Canada.

The USA? Being a dropout makes me ineligible for any US work visa.

So much for merit.

ianphughes 3 days ago 0 replies      
What was that axiom attributed to Red Auerbach? "You can't teach height." Ricky demonstrated he was hungry to learn and succeed at a very early age, a quality that will always bring some level of success through life: "I had to bring my dad to the office the next day and told him to pretend to say some words in Mandarin while I just demanded that I get put in an honors-level English class."

How do you identify those who are underprivileged, but carry that quality too? It can be very difficult to identify.

mempko 3 days ago 0 replies      
Excellent post. But I feel that we need to go beyond talking about what we can personally do to improve our situation. Either the vast majority of people are ill-adapted for success, or something else is going on. I think we should go beyond the classic arguments of "If we all just recycle, the world will..." or "If we all buy electric cars, global warming...".

This post had some of that individualistic attitude toward a much broader and obviously systemic problem.

forrestthewoods 3 days ago 0 replies      
As someone who grew up in the exceptionally poor, rural South I'm not sure what to take away. I don't know anyone who was able to go to Stanford despite bad grades in high school. That's an enviable luxury.
Karunamon 3 days ago 2 replies      
Great read.

I've become allergic to words like "privilege" as they usually are seen in the company of ill-thought-out and grandiose/insulting/wrong proclamations about How Things Should Be Done,

...but this is none of that - it's an honest look and deep analysis of someone's experience.

And knowing how important upbringing is, and the sheer (almost superhuman) tenacity the author had to go through to even partially overcome the (poisonous? non-optimum?) mindset that was completely a result of things out of their control...

what the heck is everyone else supposed to do? How does society do right by people like this? Overall, we're pretty horrible at dealing with things that are as subtle as mindset.

ryanwitt112 3 days ago 1 reply      
Interesting take. I'd like to hear what PG and others think. Coming from a middle-class background, I can relate a bit and see - observationally - the other components of what Ricky's calling "mindset inequality". It's almost like "new money" vs folks that had bigger dollars to spend growing up. I know a lot of friends with deeply entrenched psychological hurdles, ingrained by their upbringing, that they need to overcome before reaching that "next level". And, to Ricky's point, that's sometimes more of a challenge than the monetary differentials.
decisiveness 2 days ago 0 replies      
There might be many more success stories if children growing up closer to the poverty line were able to do so in more nourishing environments. However, discouragement, lack of confidence, and anxiety are not restricted to any racial or economic background. Not having a silver spoon is in many ways a better environment in which to be raised.

The OP does not say that his parents didn't show him any love, which is more important for the development of a person than any economic status. Many of the other struggles can be used as fuel for building positive character traits, if one lets them.

Having read through the post, I don't think he actually arrives at a valid point; he's just trying to brand himself as underprivileged through the telling of his life story, which has turned out to be successful by most standards. He argues that "mindset inequality" gave him a chip on his shoulder that let him succeed, and yet that others fail because of it, which seems contradictory.

bobby_9x 3 days ago 0 replies      
"but building and sustaining a company that is designed to grow fast is especially hard if you grew up desperately poor"

Most people don't have the money or resources to build a company like this, which is why we have VC. They know you are in a desperate situation and exchange the money that you need for a % of the company.

The better thing to do is choose a solid business idea that can be built slowly and at a certain point, put money you make from this venture into an idea that needs more capital to succeed.

tomcam 2 days ago 0 replies      
Made me cry. Much of it rang true. My story has similarities, except it's about 1/5th as traumatic, and I'm a white dude who grew up here. I've done well financially but have a compromised home situation traceable to some of the same causes.
dsfyu404ed 2 days ago 0 replies      
He conspicuously missed the part where time spent working a job, studying, and generally acting like a responsible adult is time not spent networking.

The "poor" kids also tend to find each other at college and over the first few semesters form separate networks from the rich kids. People tend to want to hang out with people who are similar to them. One group goes out partying together, the other sits in a dorm room listening to music and drinking a $15 handle. Their friend groups don't overlap over much.

The poor kids tend to build networks where the personal skills and resources members bring to the table in the present are what matter (or that was my observation). I guess when you can't throw money at a problem, knowing who's the IT guy and who's the car guy becomes more important.

miiiiiike 3 days ago 0 replies      
An old friend and I were talking a few weeks ago and I smiled when he said "We were so poor growing up we didn't even realize we were poor." And we didn't - we were so poor we couldn't even pay attention. It was good tho. It's still easy for me to live in a tiny apartment and exist on a steady diet of eggs'n'oatmeal, apples, and frozen chicken bought in bulk.
jmspring 2 days ago 0 replies      
I was soft in my earlier comment, but fine taking a rep hit.

This is not the only recent post where the topic is "oh golly gee, look at the hardship I went through to get through college and then founded something."

It's a millennial post, and there have been many of them.

Going through college is a challenge... having to work or be responsible while doing it sucks (I interned at Borland as well as worked for an astronomical research company).

Post college, more than a few have to deal with life obligations that come up.

Our profession certainly offers a bit of a cushion and flexibility, but we have to manage that and our obligations.

I don't see someone here whining about having to support their parents due to the last downturn or the many other personal decisions made.

The blog would have been better written as challenges met and overcome, leaving out the - for lack of tact - whiny bits...

Yes, coming from poverty has challenges, and friends in that situation stretched into their late 20s to complete a degree... but perspective and awareness of the wider world are needed... not another post about personal insecurities.

zanewill9 3 days ago 0 replies      
Excellent post. We like to think the underdog sometimes wins, but sadly, success typically goes to those who were born into it. The unfortunate part to me is the credit they are given, as if they were amazing rather than born lucky.
peter303 2 days ago 1 reply      
I'm sorry, but if you graduate from Stanford you start near the top of the opportunity heap. Maybe people aren't satisfied with what they have and want more.
jcoffland 2 days ago 1 reply      
My favorite line:

> Compare that level of confidence to a kid with successful parents who'd say something along the lines of "If you can believe it, you can achieve it!" Now imagine walking into a VC office having to compete with that kid. He's so convinced that he's going to change the world, and that's going to show in his pitch.

I enjoyed this article a lot but clearly this guy also made some of his own hardships. Going on ski trips just to fit in and then running out of money is incompatible with the image of a frugal poor kid.

bluishgreen 2 days ago 0 replies      
Paul Graham is one of my biggest programming heroes. He single-handedly changed the way I think about and do programming about a decade back, and I am eternally grateful for it. One of the biggest lessons I got from him is "succinctness is power". That essay was a game changer both for the math work and the programming work I do.

Here is one instance where that powerful way of thinking runs head-on into a stone wall. He said "few successful founders grew up desperately poor" and moved on. Succinct, yes, but not powerful. This piece took a couple thousand words to say the same succinct thing PG said, and it nails it in terms of the empathy it generates and the power with which it communicates, while PG's writing on this issue comes across as aspie. That is the lesson he needs to take from the latest article and the Internet's reaction - not "life is short" - or he'll totally miss the point.

Narrativity and Authenticity and Poetry and Verbosity are power! (when dealing with humans).

frodik 2 days ago 0 replies      
It is a good thing that privilege is becoming a topic in these circles. It's fascinating to see how many people still try to present it as looking for excuses. Perhaps because they just don't understand what it really is. Or they need to validate their success by convincing everyone that it is only their hard work that matters and nothing else.

Also, beware of survivorship bias. We don't exactly get to hear the stories without a happy ending here.
return0 3 days ago 0 replies      
> We think this is the reason why poor founders tend not to be successful.

The essay by PG actually meant that there are no poor founders at all. It would be interesting to have statistics on whether poor founders fail more, or don't even get a chance to try at all. I have reasons to believe that the rare poor person is more motivated and determined than the average groomed-to-be middle class entrepreneur, and there are plenty of cases of dirt-poor persons becoming millionaires.

dba7dba 2 days ago 0 replies      
This story really made me reflect on my own similar past. Growing up poor in the US as the son of an immigrant family and somehow getting into a nationally well-known college (a public one, though), I was shocked to see things that I had never known about.

The shock came from seeing how I lacked culture/experience/skills/confidence others had. And these others had grown up in more stable environments with either some or quite a bit of money.

I didn't know how to play any instrument. I wouldn't say everyone I knew in college played an instrument, since I wasn't at Stanford :) but still, it was obvious to me I LACKED the soft skills my peers had.

I had not done many things as a teenager that are possible only when you grow up in a family with some means. And this weakened my already not-so-robust self-confidence, resulting in a mostly downward spiral.

You see, growing up with money buys you a lot of soft skills that help you later.

I'm not bitter though. It is what it is. I try to be thankful for what I've had so far.

tn13 3 days ago 4 replies      
As an Indian immigrant, when I see people complaining about privilege and inequality in SV (and in America), I feel like laughing.

I lived in a society where everyone was almost the same - similar economic status, similar privilege, etc. Life sucked. I decided to move out to be among the top 10% instead of one of the 100%. I eventually ended up in SV.

This place is awesome, and the very reason I am here is that I can be in the top 10%. I don't want to be equal; I seek privilege, extraordinary wealth, and stuff that most others cannot afford. I think it is an amazing thing that places like SV exist. If you somehow take out that incentive, I think I will move somewhere else. Of course, I would be moving out of California sooner or later anyway, given the taxes.

codingsaints 2 days ago 0 replies      
This is a great article. I'm more of a reader than a contributor through these articles; I just had to comment on a great, positive post. It makes me want to provide more positive feedback to others, to hopefully keep them going.
zmitri 3 days ago 0 replies      
What an excellent post. Respect.
lifeisstillgood 2 days ago 0 replies      
One thing that struck me was how, as a child, the author had the "common" responsibility of dealing with landlords, bills, etc. for the family.

It may not be something a startup can solve, but "administrivia as a service" - some means of connecting families in need with someone who can actually advise them and not take advantage of them - could help.

In the UK we have a volunteer service called the Citizens Advice Bureau - I am thinking something like this on tap might be beneficial in ways that are hard to quantify.


timewarrior 3 days ago 1 reply      
tl;dr: In spite of motivation, talent, and hard work, your financial situation and immigration status (in my case) play a big role in your entrepreneurship journey.

Excellent article by the writer. Apologies for the long post; I hope it is helpful for someone in a similar situation. I can relate to many things he has faced, and I feel incredibly lucky not to have faced some of the things he had to.

I grew up in a small town in a poor family in India, the eldest of four siblings. Our monthly budget was 20 dollars and things were really tight. But my dad worked really hard, 16 hours every day, and made sure that my studies were not hindered. He told me every single day that with hard work I could achieve anything I could dream of.

I got into IIT Bombay (one of the most prestigious colleges in India). However, it was obvious to me that I needed a decent-paying job right after school to support my siblings and my dad, who couldn't do 16-hour days any more.

It took me the next 8 years working for others to save enough to pay for my and my siblings' studies and weddings, and to help my dad retire.

During these 8 years, I built and ran the biggest social network to come out of India. I also built something that is now the Twilio of India, and was part of the team that built the current mobile offering at LinkedIn.

If I had had financial stability, I would have started working on my own ventures 3 years into my career. Instead it took 5 more years. As soon as I had financial stability, I quit LinkedIn (with 2.5 years of stock unvested) to start a company.

I started a company where we had incredible opportunities. We built something like Slack for consumers, around the same time as Slack. However, being on an H1 visa, I was a minority stakeholder in the company, and that is a bad position to be in if your traction is not already proven. It made sense to exit, so we sold the company to Dropbox in an acqui-hire.

Dropbox treated me really well. I met some of the smartest people I have ever met there, and it can be a great place to work for many people. However, I soon realized that it wasn't a good fit for me. Such companies are very top-driven, there is little creative freedom, and most of the work is cleaning up tangled code developed over 7-8 years. So I quit Dropbox after a year.

Now I am in a job that gives me more creative freedom, and I am pretty happy on that front. Meanwhile, I have been the sole advisor for a few companies over the past 2-3 years; they are all profitable and didn't need to raise any money. The entrepreneur in me keeps me raring to go and start another company. However, because I am on an H1 visa, I do not want to build another company with a minority stake at formation (USCIS rules). To fix this, I would need to get a Green Card, and if you are from India, that will take 8-10 years in EB-2.

So the next steps are either to move out of the US, or to find a way to get a Green Card via EB-1. If anyone knows any good immigration lawyers, please introduce us.

Coming back to the original post: in spite of motivation, talent, and hard work, your financial situation and immigration status (in my case) play a big role in your entrepreneurship journey.

erichmond 3 days ago 0 replies      
Props for writing this. Oftentimes I want to tell my (different but similar) story, but never do. I don't know why. It probably has to do with a number of the points you make in the article, so you are a couple steps ahead of me.
jdenning 2 days ago 0 replies      
Forgive me for being a bit sappy here, but this post, and the discussions that it inspired here are absolute gold!

It's certainly not the first time I've thought about this topic, but for whatever reason, the OP and much of the discussion is resonating very deeply for me (and apparently for a lot of folks). IMHO, this is some of the most productive discussion about privilege and opportunity that's ever appeared on the internet; for the most part, this discussion has avoided the sort of aggravated competition (i.e. pissing contests) and judgements that generally arise out of internet discussions of privilege. In place of those nastier (albeit very human) responses, this thread is full of empathy, support, and offers of help.

I'm very proud of our little community here today.

I'm planning on writing a more detailed post in a few days after collecting my thoughts a bit more, but I'd like to share some half-formed ideas which this post has inspired (comments and criticisms are very welcome!):

1) Part of what's awesome about this discussion is that it seems to have enabled a bit of ad-hoc group therapy. I think it's very helpful for folks who are facing these hurdles to realize they are not alone; while everyone's situation is unique, it's great that people have been acknowledging similarities in their stories, rather than arguing about the differences. We should try to do more of this (with other contentious topics as well)!

2) As several people have suggested, I believe that collecting these stories could potentially help a lot of people. I'm totally down to build and host a site towards that end - would anyone be interested in sharing their stories in that sort of venue?

3) While the specific issues that people have had to deal with are different, there seem to be some common 'flavors' that many have experienced: a) Socio-economic disparity causing an aversion to risk later in life b) Lack of confidence in oneself which adds an additional handicap compared to more self-confident people, likely resulting in missed opportunities (you can't win if you don't play vs you can't lose if you don't play); impostor syndrome. c) Lack of connections, again likely resulting in missed opportunities and increased difficulty in building new things/finding a job/etc. d) Disparity in access to knowledge that greatly improves chances of success (e.g. importance of SAT scores to college admissions; efficient resource management; interview skills)

Improving the situation in (a) seems to be what the world at large is most interested in. Unfortunately, it's a difficult, heavily politicized, and therefore divisive issue. By contrast (b), (c), and (d) seem like problems that we could really improve, at least within our own community.

For example, someone might have a harder time getting the type of (tech) job that they want due to a lack of personal connections (it can be really hard to get your foot in the door), however, it's likely that the personal connections they need are actually visiting this site every day. While we obviously can't just start providing references for total strangers, how much effort would it be to spend a few hours corresponding with someone and vetting their skills to see if you feel comfortable in recommending them? (I'll put my money where my mouth is on this one - if anyone feels like they'd be a good fit at Cloudera, let's talk! EDIT: just to be clear, I don't really have any hiring authority, but I'm happy to talk to anyone, and potentially help with a recommendation)

Likewise, it seems that (b) could be improved for a lot of people with simple communication - impostor syndrome is very common in tech, so I assume that a lot of people here have advice on the subject, or just an empathetic/sympathetic ear.

Regarding (d), this type of information is all likely available already on the internet, but perhaps it could be more usefully compiled for this particular case, minimizing the number of unknown unknowns? What about a thread (like "Who's Hiring") listing offers for mentorship ("Who Needs a Mentor?") ?

I dunno, am I just being overly optimistic here? It seems to me there's a lot of low-hanging fruit here, if some of us are willing to dedicate a bit of time to it.

More ideas? Criticisms?

kkajanaku 2 days ago 0 replies      
This article was very real, and I can't help but identify with Ricky and the other stories I've read on here. But it's not just in SV - it's entrepreneurship in general. I thought I'd share my story as well:

I was born in Albania, a small, poor European country with a GDP comparable to Zimbabwe, Namibia, or Sudan. That same year marked the fall of its isolated strain of communism, and Albania's borders were opened for the first time since WW2. In the late 90s, after the collapse of its economy and the Ponzi schemes, social unrest reached its height following the violent murder of peaceful protesters by the government and police. This sparked an uprising and the government was toppled. The police and national guard deserted, leaving armories open to be looted by militias and criminal gangs, with factions fighting in the streets for control. My parents moved our beds to the hallway of our small apartment, as there were no windows there, and my little sister and I had to stay quiet so no one would hear we were home. After a UN operation, the government was restored and the situation was relatively calm. Sometime that following summer, my dad found out about a US green card lottery, filled out an application form, and, because he was in a hurry, handed it to a random stranger waiting in line to submit it for him. He then forgot about it until a year later, when we got a letter telling us that we had won. My parents weren't badly off in Albania: they were comfortable, their friends and families were there, they had great jobs, and the future looked promising. But having just gone through that rebellion, with the Yugoslav Wars to the north trickling across the border, and drawn by the allure of the American Dream, they decided it would be best for my sister and me.

We moved to Philadelphia in 2000, to a working-class neighborhood, with a few suitcases and not one word of English. My parents took on multiple jobs; their Albanian communist degrees were obviously not recognized in the US, so my dad, once a doctor, is still working maintenance and shoveling snow on the East Coast as I write this. Like Ricky said, and like all immigrant kids, my family depended on me to learn English and deal with translation and everything in between. 5 years later, when we became citizens and received our passports, my parents knew more about American history than was taught in my inner-city high school.

My parents are incredibly supportive, but they moved to the US in their 40s; they weren't familiar with the language, the culture, and even more importantly capitalism. Apart from the classic model of education, they weren't familiar with the tools required to be successful in such a strange place. But with their meager wages they were happy to support my hobbies, buying me lots of books and a computer with internet access, which taught me much more than my inner-city schools.

Eventually I got a college degree, then went on to do a dual master's in design and engineering at the Royal College of Art and Imperial College in London. I even got to go to Tokyo and work for Sony while studying there. I graduated this past summer, and then launched my final group project as a startup in London with my friends: two English, Oxford-educated engineers, and a Spanish designer/engineer whose father is the president of one of the largest companies in the world.

Then reality sank in. I had to leave; I can't be an entrepreneur just yet. I moved to SV to find a high-paying job in tech for the next 5-10 years, so that I can:

a. afford to pay rent
b. pay off my educational loans
c. pay off my parents' home
d. help my sister pay for her education
e. send some money home, because my dad is getting too old to shovel snow

marincounty 2 days ago 1 reply      
I looked through your past posts, and you are legit!

I liked that you went to a community college. I too screwed up in high school. I didn't even know why people were taking another test--the SAT. That said, I cleaned up my act in my senior year, but it was too late.

Everything, and a lot more, that I missed in high school, I made up for in two semesters at community college.

If anyone in high school is reading this, and thinking, "I wish I could do it over?". You can! I had a great time at my community college. I saved a lot of money, and met some really wonderful people. The teachers really seemed to care. I didn't find that at the four year school, or even my professional school.

Just make sure to transfer, and get that four-year degree. So many people don't transfer to a four-year university, or even get the associate degree. Yes, so much of college is absolute bullshit, but degrees are still valued in a lot of professions. It's changing though, and I couldn't be happier. British companies are taking the lead. I know that at Penguin Books, HR isn't even allowed to know whether you went to college or not. You are hired on your experience, and maybe a test? The way it should be.

namenotrequired 3 days ago 2 replies      
Finally, someone who talks about the consequences of economic inequality. PG seemed to think all that mattered was the causes.
ryandrake 3 days ago 3 replies      
"I'm a self made millionaire [0]"

[0] - Apart from the safe suburban upper class childhood, the prep school and Harvard education my parents paid for, the job at Goldman Sachs my uncle got me straight out of school, and the finance network from that experience that eventually helped me with my first funding rounds, but yea, besides all that I'm TOTALLY SELF MADE!

robgibbons 3 days ago 3 replies      
You obviously have no idea what it's like to grow up poor. The fear, the guilt, the frustration, and the exhaustion that you learn almost as if through sheer osmosis from your parents.

The author is not arguing that you literally cannot compete if you're poor. But it's the very mindset from growing up in poverty that, through almost every interaction you have in childhood, leads you to _believe_ that you cannot compete, which prevents you from even trying. And even if you overcome that feeling (through constant hard work and willpower, such as our author's), say you do try to compete with the rich kids, then your lack of inborn confidence is so obviously apparent that you come off as inexperienced, or insincere. This is perfectly accurate in my own experience.

Mindset inequality is actually an incredible way to describe it.

dang 2 days ago 0 replies      
> People feeling sorry for themselves because they're not male and white from a place with food are annoying.

This breaks the HN guidelines: it is both a personal attack (since you're talking about the OP) and gratuitously negative. Please do not post comments like this here.


BurningFrog 3 days ago 3 replies      
> No one teaches you the basics

Room for a startup/free service that does that!

Microsoft releases CNTK, its open source deep learning toolkit, on GitHub microsoft.com
499 points by fforflo   ago   46 comments top 10
x-sam 8 hours ago 2 replies      
Only Microsoft can open source documentation on GitHub in docx: https://github.com/Microsoft/CNTK/tree/master/Documentation/...
cs702 11 hours ago 2 replies      
I'm a bit surprised they decided on a CAFFE-like declarative language for specifying neural net architectures[1], instead of offering high-level software components that enable easy composition right from within a scripting language, e.g., like Python in TensorFlow's case.[2]

Is there anyone from the Microsoft team here that can explain this decision?


[1] See examples on https://github.com/Microsoft/CNTK/wiki/CNTK-usage-overview

[2] See examples on https://www.tensorflow.org/versions/0.6.0/get_started/index....
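To illustrate the distinction the parent draws, here is a toy sketch of the two styles in Python. None of this is the real CNTK or TensorFlow API; the config keys, the layer functions, and the tiny interpreter are all hypothetical, just to show how a declarative, config-like network description differs from composing layers directly in a scripting language.

```python
# Declarative style (roughly the Caffe/CNTK approach): the network is data,
# interpreted by the framework.
net_config = [
    {"layer": "dense", "units": 4},
    {"layer": "relu"},
    {"layer": "dense", "units": 1},
]

# Imperative/composable style (roughly the TensorFlow-in-Python approach):
# the network is ordinary code.
def dense(units):
    def layer(x):
        # stand-in for a real matrix multiply
        return [sum(x) * 0.1] * units
    return layer

def relu(x):
    return [max(0.0, v) for v in x]

def build(config):
    """Tiny interpreter for the declarative form."""
    layers = []
    for spec in config:
        if spec["layer"] == "dense":
            layers.append(dense(spec["units"]))
        elif spec["layer"] == "relu":
            layers.append(relu)
    def net(x):
        for f in layers:
            x = f(x)
        return x
    return net

net = build(net_config)
print(len(net([1.0, 2.0, 3.0])))  # 1 -- the final dense layer has 1 unit
```

The composable style keeps the full host language (loops, functions, conditionals) available while wiring up the net, which is roughly the flexibility the parent is asking Microsoft about giving up.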

csvan 13 hours ago 6 replies      
First TensorFlow, then Baidu's Warp-CTC, and now Microsoft's CNTK. It is a very, very exciting time for open source machine learning indeed.
sharms 14 hours ago 3 replies      
This is great news - I was looking at Tensorflow the other day, but on Windows / OSX it doesn't take advantage of my GPUs. For my desktop I was stuck as my new 3440x1440 monitor doesn't, for the time being, work with Xorg's intel driver.

Hopefully this is a viable alternative; I would love to see an online course in machine learning leveraging this. I found http://research.microsoft.com/en-us/um/people/dongyu/CNTK-Tu... on the homepage, which looks well put together, to start off with.

baq 14 hours ago 3 replies      
quoted performance numbers on multiple GPUs leave other frameworks in the dust. where's the catch?
shtangun 12 hours ago 1 reply      
How many deep learning frameworks are there now? I think DL frameworks are popping up like JavaScript frameworks.
mzahir 10 hours ago 0 replies      
An evaluation of the open source ML options - https://github.com/zer0n/deepframeworks
blazespin 2 hours ago 0 replies      
doczoidberg 12 hours ago 2 replies      
Is this used on Azure Machine Learning? Do they use GPU based machines for it on Azure?
tianlins 13 hours ago 1 reply      
They claimed to be "the only public toolkit that can scale beyond single machine". Well, mxnet (https://github.com/dmlc/mxnet) can scale across multiple CPUs and GPUs.
Server Retired After 18 Years Running on FreeBSD 2.2.1 and a Pentium 200MHZ theregister.co.uk
418 points by joshbaptiste   ago   160 comments top 26
krylon 3 days ago 2 replies      
I remember a discussion on a FreeBSD mailing list, around 2003-2004, where people bragged about the impressive (though in comparison to this headline, puny) uptimes of a few years.

One of the developers remarked that while he was proud the system he worked on could deliver such uptimes, having an uptime of, say, three years, on a server, also meant that a) its hardware was kind of dated and b) it had not received kernel updates (and probably no other updates, either) for as long. (Which might be okay, if your system is well tucked away behind a good firewall, but is kind of insane if it is directly reachable from the Internet.)

Still, that is really impressive.

ljosa 3 days ago 3 replies      
The authoritative DNS server for pvv.ntnu.no is still a MicroVAX II from the late 1980s. It runs an (up-to-date, I think) NetBSD. Logging in by SSH takes several minutes, even with SSH v1.
crishoj 3 days ago 1 reply      
In fairness, from the article it's not actually clear whether the server literally had an uptime (as reported by the OS) of 18 years, or whether it had simply been in constant service (modulo power cuts) for 18 years.
theandrewbailey 3 days ago 0 replies      
Ars Technica had an article a few years back about a machine that was up for 16 years. Had pics, too! http://arstechnica.com/information-technology/2013/03/epic-u...
keithpeter 3 days ago 0 replies      
Having read and enjoyed this thread and the later follow up thread on The Register, I was struck by the number of commenters who could not clearly remember the dates/machine types or who posted anachronistic descriptions.

People here forging ahead with innovative hardware, why not just record brief details of dates and setups in the back of a diary or something. In 30 years time, you'll be able to start threads like this!

geggam 3 days ago 1 reply      
I tossed out a similar system not too long ago

Pentium Pro 180Mhz running OpenBSD 64MB RAM with a perl BBS averaging around 10k hits / day.

Wasn't worth the electricity to run that thing. It still worked when I put it out on the corner.

SEJeff 3 days ago 1 reply      
This was an old RHEL4 external dns server I ran at $last_job:


I was sad to shut it down, but we were migrating our primary colo to another city and going to retire all of the hardware. I'd been manually backporting BIND fixes, building my own version, and had to do some config tweaks when Dan Kaminsky released his DNS vulns to the world.

It is always a sad day to retire an old server like that, but 18 years... What a winner!


But 1158 days for an old Dell 1750 running RHEL4 isn't too bad, considering it serviced all kinds of external DNS requests for the firm. Its secondary didn't have the uptime, due to constant power issues in the backup datacenter and incompetent people managing the UPS.

metaguri 1 day ago 0 replies      
I set up a FreeBSD box at a computer shop I worked at in high school. We had a T1 and static IP, so I set up some routes to make it internet accessible (my boss wanted to use it to host pictures for his eBay transactions).

I set it up, threw it under a table in the corner with nothing but a power and ethernet cable, and moved on.

I was surprised when 5 years later he called to ask why it had stopped working. I told him where it was, he rebooted it, and it came back.

(My memory is a little fuzzy, but I probably set it up in 2001-2002 and it ran until at least 2007-2008.)

rogeryu 3 days ago 2 replies      
Almost as impressive is the fact that in 18 years, the electricity had no downtime.
jedberg 3 days ago 1 reply      
I used to run the FreeBSD box for sendmail.org. When I left that job in 2001 it had already been running for 2+ years.

Considering that the datacenter it was in is now the Dropbox office, I'm guessing it had to be shut down and moved at some point, but 2+ years seemed like a really long time even then!

FreeBSD is just really good at lasting forever.

wazoox 3 days ago 0 replies      
I always had many Unix machines with high uptimes around. My home PC (Linux) typically reboots 2 or 3 times a year. My office DNS server has currently 411 days of uptime and is the best of my bunch ATM.

In 2002 I had installed on the machines under my guard some program that reported uptime to some website. One of my machines, an SGI Indy workstation, had a high uptime, about 2 years. Then a new intern came, and we installed him next to the Indy. Unfortunately, his feet under the desk pulled some cables and unplugged the Indy and broke my hopes of records :)

emcrazyone 3 days ago 1 reply      
oh man, I have them so beat! I have a Slackware Linux box with similar specs: 200 MHz Pentium, 32MB of RAM, and I think an old 10GB Barracuda 80-pin SCSI drive connected to an Adaptec PCI SCSI card. Every so often the hard disk starts making a high-pitched noise, but it throws no errors and the noise goes away after a few minutes. It sits on a UPS and probably has an uptime of a few years now. It has been running nearly 24/7 since 1996! Only powered off when I needed to move the box between a home office and a few rented offices over the years.

When it was in the basement of my home office, I would sometimes hear its disks whine as I was working out (lifting weights and such). It was even in my basement through parties in my early bootstrap years.

I originally bought it to run WinNT 4.0 for a new company a friend of mine and I bootstrapped. I would guess a couple of years later is when I put Slackware on it. It's running a 2.0 Linux kernel. It's not exposed to the public Internet.

It used to be a local Samba, DHCP, and DNS server for the company. I eventually upgraded to new hardware and left this server around for redundant backups. I develop software, so copies of my git repositories find their way onto this box each night. It is in no way relied upon, other than being called upon out of convenience if another server is down or being upgraded, etc.

At one point the box was in the basement of my home when a small amount of water got to the basement floor, and because the box sat just high enough on rubber feet, there was no damage. Occasionally I go back there and pull the cobwebs off it.

There is no SSL on it. We still telnet into it or access the SMB shares for nostalgia. It's sort of a joke in the office these days to see how long it will last or if it will simply out last us.

grabcocque 3 days ago 0 replies      
And now its watch is ended.
MikeNomad 3 days ago 1 reply      
Great run for all-original equipment. I worked at Shell's Westhollow Research Center in the mid-90s. We handled the nightmare of standardizing the desktop space (for the first time ever).

A lab was decommissioning an instrument controller that had been running non-stop since they had first spun it up, fresh out of the packing box, a decade previous.

And they had never backed up any of the data. Sure, the solution was the pretty straight forward use of a stack of floppies. It was still pretty nerve-wracking having a bunch of high-powered research scientists watching over my shoulder, "making sure" I got all their research data off the machine they were too smart to ever back up themselves. Good Times.

iolothebard 3 days ago 0 replies      
Anyone running old machinery that had DOS drivers would likely have older computers. I remember working on base, seeing 386/486s in an aircraft hangar area that were so covered in grime I was astounded they were still used.
ommunist 3 days ago 1 reply      
Well, I have TI-92 graphing calculator still working OK, since 1995.
gtrubetskoy 3 days ago 0 replies      
Funny, in today's world, the uptime on my Linux (virtual) box is several times greater than that of the macbook within which it's hosted.
koolba 3 days ago 0 replies      
Is there a way to track uptime across kexec[1] restarts? That way you could differentiate between a hard reboot and "soft" one (ex: automated kernel upgrade). Having a system like that working for a 18 years would be insane!

[1]: https://en.wikipedia.org/wiki/Kexec
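One hedged sketch of the idea (not an existing tool; the sampling scheme and the fold below are assumptions): a periodic job could record kernel uptime samples, and whenever a sample drops below its predecessor, a restart has happened and the predecessor gets banked into a running total.

```python
def accumulate(readings):
    """Fold a sequence of uptime samples (seconds) into total service time.

    A sample smaller than its predecessor means the kernel restarted, so the
    predecessor is added to the banked total before counting resumes.
    """
    banked = 0.0
    last = 0.0
    for r in readings:
        if r < last:        # kernel restarted (kexec or full reboot)
            banked += last
        last = r
    return banked + last

# Samples taken periodically; the drop from 500 to 10 marks a restart.
print(accumulate([100.0, 500.0, 10.0, 250.0]))  # 750.0
```

This alone can't tell a kexec from a hard reboot, though; you'd still need an extra marker (say, a flag written by whatever invokes kexec) to decide whether a given restart was "soft" and the streak should continue.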

webkike 3 days ago 0 replies      
It will be given the highest honor a sys admin can give a piece of hardware: casual reference to it as "what a box" in the future.
vondur 3 days ago 0 replies      
Reminds me of the old Netware servers we used to have running file services and print queues for a few computer labs at a University I worked at. Netware was really stable and we only restarted them when some of the hard disks in the raid array were dying.
deutronium 3 days ago 0 replies      
Impressive! I made a silly kernel module that 'fakes' your uptime, by patching the kernel.


meesterdude 3 days ago 0 replies      
This is beautiful to me; its ROI is off the charts by any kind of reasonable expectation. Keeping it cool certainly helped, and having it serve a role that could even exist for 18 years is another important factor.
NDizzle 3 days ago 0 replies      
18 years is a really good run. I had some white-box Cisco networking equipment that had 10 year uptime. I shut it down when we closed the office they were in.
bechampion 3 days ago 0 replies      
It would've been nice to have a photo of the uptime or something like that... I believe him, though.
Announcing Rust 1.6 rust-lang.org
443 points by steveklabnik   ago   216 comments top 20
haberman 4 days ago 3 replies      
Since this release is focused on stabilizing libcore for embedded use, it seems a good time to ask a question that's been on my mind.

C11 (and C++11) defined a memory model and atomic operations for shared-state lock-free concurrency. But that model and the atomic operations aren't being used by Linux, because they didn't match up with the semantics of the operations that Linux uses. (See https://lwn.net/Articles/586838/ and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p012...).

I'm curious what Rust says about this. Does Rust have a memory model like C11/C++11? I'm curious whether Rust (and C11/C++11 for that matter) will evolve to have primitives like what the Linux kernel currently defines and uses.

allan_s 4 days ago 8 replies      
I was wondering:

As the HN crowd seems to have quite a lot of Rust supporters, would it be a good selling point in a job description ?

i.e., if (it's currently just a personal hypothesis) a company were to consider (re)writing some part of its REST-ish microservices, and Rust was the chosen language, and it was looking for people to help with that, would that make for an `interesting++` in your mind? For real services used by real people, at a not-so-startup company, in Europe.

edit: I already deployed to production, for my previous company, some microservices in Rust (with a very strictly limited scope, and with everyone's approval), and it was quite a success. So I'm more and more convinced that Rust is now developed enough to fit the market of languages for microservices, as that work is more or less "understand HTTP, read/write from redis/postgresql/mysql/memcache, do some transformation in between", and Rust now supports these operations quite well.

Cshelton 4 days ago 3 replies      
This is relevant and cool, UPenn now has a course for Rust - http://cis198-2016s.github.io/

You can follow along, all slides and HW are on github.

valarauca1 4 days ago 3 replies      
Actually managed to sneak some Rust into production. Nobody cares what language you wrote the DLL in, just that it does what the docs say it should do.
VeilEm 4 days ago 2 replies      
Congratulations! I'm loving Rust, it's my go-to default language now. I'm going to start messing around with piston.rs to make some basic 2d games. I already wrote an irc bot with Rust.

For those wondering, "unstable" just meant that the APIs defined for interacting with some libraries were subject to change. It wasn't a problem with using the APIs; the developer just had to know that new releases might change how they worked, or whether they would even be available in the future.

djb_hackernews 4 days ago 2 replies      
I've poked at the language a bit over time and while I don't think I'll ever "get" rust, I can say the folks in #rust on irc.mozilla.org are friendly and helpful, which can't be said for all languages.
EugeneOZ 4 days ago 2 replies      
Keep going! Thanks to all contributors for their work on this awesome language and crates.

I use Rust for the web as a REST API and I'm absolutely in love with this language. Maybe the first steps were a little bit steep, but it's worth it :)

// cargo run!

zcdziura 4 days ago 1 reply      
What do you mean when stating that applications aren't supported for developing around libcore?
criddell 4 days ago 0 replies      
I have a question for Rust fans: how do you deal with user interaction? Do you have a favorite user interface library? Do you separate the UI from the program and communicate via IPC or HTTP+HTML? If you don't care about cross-platform capabilities, is there a great library on the platform you do care about?
nikolay 4 days ago 2 replies      
I think Rust badly needs an AWS SDK as a lot of systems software is being written now in Go...
sdegutis 4 days ago 7 replies      
After trying to understand some Rust, it seems to me that it's just as complicated as C++, from the programmer's perspective. Was I mistaken in thinking that being simpler to program in than C++ was one of the goals?
dikaiosune 4 days ago 1 reply      
May the libc event never occur again (at least, I think it was libc with the wildcard dependencies). That's great to hear.
wyldfire 4 days ago 1 reply      
W00t, I made the contributor list!
posborne 3 days ago 0 replies      
Is there a good way to look up no_std crates? For crates written with no_std, is there a keyword we should be tagging things with?

I have several crates providing access to GPIO/SPI/I2C under Linux and would like to put together a roadmap for having the interface to these types of devices that is portable to various platforms and devices (e.g. Linux as well as Zinc, ...).

imakesnowflakes 3 days ago 1 reply      
I have looked at both Rust and Go. What I have felt is that Rust is too restrictive. I mean, you have to fight the compiler a lot harder than you do in Go. Sometimes, for example if you are writing device drivers or real-time embedded programs, that is great.

But for web services? I think it is overkill. I think Go strikes a nice balance. I would love to be convinced otherwise though. So please tell me, what am I missing?

kevindeasis 3 days ago 1 reply      
So, I've recently tried to figure out why Rust has been getting lots of attention on HN.

So, it allows you to have safety and control. I thought that was very neat.

I've got three questions for experts. One, what type of applications is Rust intended for?

Two, I like JS because I can code in the client, and server in one language. Will there ever be a web server framework for rust and an api that allows me to modify the dom?

Three, what are your predictions for the future of rust?

jedisct1 3 days ago 0 replies      
I'm still looking for an _easy_ way to write servers that can handle many TCP connections with Rust.

The native thread/connection model quickly shows its limits, and is very slow on virtualized environments.

mio is as low-level as libevent and just as painful to use.

mioco and coio are very nice but blow the stack after 50k coroutines.

hokutosei 3 days ago 0 replies      
I'm not using Rust for heavy/intense DB/API stuff; currently that's Go. But in the future, I'm looking at using it as the main language for our APIs. I like how safe it is, and its explicitness.
jinst8gmi 3 days ago 0 replies      
Just waiting for official Jetbrains/IntelliJ support...
lasermike026 4 days ago 0 replies      
Very exciting!
Virginia Tech Professor Spent $147k to Help Uncover the Flint Water Scandal attn.com
377 points by rmason   ago   93 comments top 18
nemild 1 day ago 4 replies      
Is anyone else saddened that it takes a personal mortgage by a passionate professor and his team to figure this out? And that even after they proved it, a GoFundMe campaign is the only way to recoup the funds?

With the agencies involved dragging their feet, was there no other way to get someone else involved? Is there a whistleblower fund like the SEC has: https://www.sec.gov/whistleblower that gives them a portion of the impact they had (10-30% for amounts over $1 MM in the SEC's case)?

gjmulhol 1 day ago 5 replies      
Wow, I was under the impression that the Flint situation was primarily driven by bad policy put in place by people who didn't really know what they were doing, but it seems like there was a lot more nefarious action than that.
robotmlg 1 day ago 1 reply      
I met Marc Edwards in 2010, when I did a summer program at Virginia Tech. One of the most passionate people you'll ever meet. The big news then was water contamination in D.C. (https://en.wikipedia.org/wiki/Lead_contamination_in_Washingt...), another project that Edwards spent much of his personal funds on. Fortunately, his work earned him a MacArthur Genius Grant in 2007, helping him to pay back his debt.
AngrySkillzz 1 day ago 0 replies      
Interesting couple of Reddit comments on the relevant water chemistry in this situation. [1]

[1] https://www.reddit.com/r/NeutralPolitics/comments/41gqfe/is_...

blammail 1 day ago 0 replies      
This situation is horrifying. And if we're thinking of costs - those have only begun. I shudder to think the longterm health effects this will have for years to come.
kevindeasis 1 day ago 0 replies      
I've heard from my colleagues that there are leaked emails floating around about how they chose to switch suppliers even though they knew about the lead contamination.

Anyone have a link to the said email dump? The other links are broken.

discardorama 1 day ago 1 reply      
Side question: how expensive is it to get your water tested? I live in an older house, and all this talk has made me concerned about what's coming from my pipes, since they are > 100 yrs old.
dghughes 1 day ago 1 reply      
I was in Scranton, Pennsylvania, in 1999 and was horrified at the water there; it was yellowish orange.

My guess is many former industrial regions in the US will have heavily contaminated water.

melted 1 day ago 0 replies      
Seems like a hundred dollars should have been sufficient. It's a simple lab test. Lead in the water == bad news.
ck2 1 day ago 1 reply      
Remember, they switched to the Flint River to save a "whopping" $1 million per year.

It is probably costing $1 million per week now in bottled water.

Imagine what the special needs kids caused by this will cost taxpayers for the next several decades.

Even without the lead leeching issues, everyone knows the Flint river was an industrial dumping ground for decades and was very toxic - no amount of filtering would have made it safe for constant consumption.

olympus 1 day ago 8 replies      
Weird that nobody has mentioned this yet: The article claims that Prof. Edwards spent $147k from his discretionary research funds and personal funds. He's still a good guy and has probably used a non-trivial amount of his own money, but it's not like he had to pull all of it out of his bank account. Discretionary research funds are given to researchers as money for them to spend on whatever research they find interesting. It's kind of like 3M's 20% time. The article doesn't mention how much of the $147k came from his research money and how much came from his pocket book. Why? Because it's not as sensational to find out that this guy (who is a tenured professor at a big school) put up $125k of research money and $22k of his own. I don't know what the actual numbers are but I wouldn't be surprised if he put up less than $50k of his own money and had the vast majority come from research funds, considering his history (wikipedia claims he received a $450k grant in 2011 from the EPA to study lead and copper in the water).

Again, I'm not saying he's a fraud. He probably still put up a decent chunk himself. He did take out a mortgage on his house, but it doesn't mean that he put 100% of the value of his house towards this crisis. All I'm saying is don't get caught up in all the sensationalism.

jrjr 1 day ago 0 replies      
City of Detroit's FAULT. The attempted gouging of Flint for their water supply is the cause. Who was the person that allowed that attempt?

That is the cockroach that needs the light shone upon it.

follow the money.


droithomme 1 day ago 6 replies      
They claim the professor and his students did 6000 hours of work testing samples (that's an especially round number), which they valued at an average of $27.74/hr, even though everyone was a volunteer and no one asked to be paid. That is how they came up with an estimated labor value of $166,487 for this voluntary work.

It is extremely misleading and deceptive here to say that the professor spent $147,000 because he did not spend $147,000 at all. He spent 11200+3180+50 = $14,430 and he received 32843+200+500+200 = $33,743 in grants and fees he charged for speaking, for a surplus of $19,313.

Furthermore, the money being raised is going into an account supervised by the lead professor. It is not being used to pay the people who did the work for their time. It is especially outrageous that he is collecting money for their volunteer work and keeping it for his own use.
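For what it's worth, the sums quoted above do check out; a quick sanity check in Python, using the figures exactly as given in the comment:

```python
# Figures as quoted in the parent comment (all amounts in USD).
hours, rate = 6000, 27.74
labor_value = hours * rate           # 166,440 -- close to the quoted $166,487,
                                     # which implies a slightly more precise rate

spent = 11200 + 3180 + 50            # out-of-pocket expenses
received = 32843 + 200 + 500 + 200   # grants and speaking fees
surplus = received - spent

print(spent, received, surplus)  # 14430 33743 19313
```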

pfarnsworth 1 day ago 2 replies      
This is sickening. How this could occur in 2016 needs to be studied and eliminated with prejudice. To think that government officials look at that orange water and think it's normal. It reminds me of the Monsanto lobbyist who claimed that RoundUp is perfectly safe to drink, but then instantly refused to drink it when offered. I wish there was a special place in hell for people like that.
PythonicAlpha 1 day ago 0 replies      
The problem is that in a world where profits are the most important thing and everything else is secondary, human life does not count -- even destroying human health can be "good" if it increases the profit margins of some corporations.
crimsonalucard 1 day ago 1 reply      
>"[It's a] trivial cost compared to the damage we prevented," Edwards said in an email to ATTN:. "Best investment we could have made into society."

In the world we live in, people only invest in themselves and society may or may not benefit as a consequence. This type of selflessness is rare and unsustainable.

sabujp 1 day ago 0 replies      
Erin Brockovich II, directed by Michael Moore, as a Netflix special to raise money.
I'm going to slowly move on from Mercurial mercurial-scm.org
432 points by jordigh   ago   129 comments top 16
indygreg2 10 hours ago 0 replies      
10+ years is a long time to steward a project. Most projects don't even live that long!

Transitioning away after 10+ years is a sign of several positive things:

* Creating a successful project that lasts 10+ years

* Having the perseverance, patience, and willpower to guide that project for its lifetime

* Creating a healthy community whom you are able to transition day-to-day responsibility of that project to

Matt has accomplished something that most of us never will. I'm envious of what he has accomplished and that he is able to walk away from a healthy and successful project. Truly a remarkable accomplishment.

BuckRogers 11 hours ago 2 replies      
Matt has done a great job and I wish hg had taken off more than it did, though it would be disingenuous to suggest it hasn't been successful.

I used hg before moving to git, and always preferred the clean, single-language implementation vs. the scripts and generally hackish nature that git offered as an alternative. As a result, it was a nice perk for a long time that hg worked much better than git on Windows.

ChuckMcM 10 hours ago 0 replies      
This is a gutsy thing to do with a piece of software that has been part of your life for over a decade. Mad props to mpm.

Up until Blekko I had pretty much been a Perforce user (both at NetApp and at Google), and git was "that weird system you had to use to check in kernel changes". Blekko used Mercurial, and I had no opinion as I had no experience. But over the years, using hg at work and then starting up a GitHub account to learn git, I saw the writing on the wall. Hg was the Betamax of source code management systems. Clean, elegant, reliable, and not mainstream.

andrewla 11 hours ago 3 replies      
For context here, and for people like me who don't follow the development of Mercurial closely enough to realize, mpm is not just some random contributor to Mercurial, but is in fact Matt Mackall, the original author of Mercurial.
krupan 6 hours ago 3 replies      
I'm really surprised by the "git won" mentality that manifests in any discussion of Mercurial as of late. It makes no sense. We don't declare "winners" with other tools we use. Do we?

Why are we all playing with new programming languages when C won? Or maybe C++ won? Or Java? Did emacs or vim win? Speaking of text, did ASCII win? Which code review tool won? How about continuous integration tools? or build tools? I mean, make won, right?

None of us just pick the most widely used tool or programming language and say, "there can only be one" and then stop trying new things. Why are we doing this with version control?

pjmlp 10 hours ago 0 replies      
I loved Mercurial, but just like with Wirth languages, I seem to be on the wrong side of the mainstream flow.

Thanks for Mercurial!

dochtman 8 hours ago 1 reply      
I was a Mercurial crew member for a few years (2008 to I don't know) and was very happy to be able to learn from Matt in his role as BDFL. His principled stance against layering violations and his policy to require really small commits still influence my coding every day. I hope he finds something new to give him satisfaction/joy!
AndyMcConachie 11 hours ago 1 reply      
Thanks for all your contributions over these years.
nailer 11 hours ago 7 replies      
hg could be 20% better than git. For the sake of the argument, let's say it is.

git's network effects mean hg needs to be 1000% better than git to replace it (much like git replaced svn, svn replaced cvs, and cvs replaced rcs).

bedros 8 hours ago 0 replies      
I talked with an engineer from a Fortune 500 tech company, and they chose git over other VCSs because they could have permission control: they have a sensitive subproject that only a small set of engineers should have permission to access, but that subproject/module is part of a bigger project that everyone has access to.

I personally prefer hg over git and use it for my own personal projects, hosted on my own Redmine server. I tried Bitbucket and GitHub; the latter has a superior interface to Bitbucket, which I believe is why people picked GitHub over Bitbucket.

acveilleux 3 hours ago 0 replies      
I believe that Mercurial's greatest flaw is that it is too damn easy to use. Since you don't have to wrap your head around it so damn much to use it, it seems weak, as if it were only slightly better than Subversion.

The truth is different: Mercurial is roughly as capable as git, but it's easy to miss Mercurial's power, whereas git rubs your face in it (less so these days, but early on it was a brain f* to my CVS-trained brain).

dhimes 9 hours ago 0 replies      
Thank you, Matt. hg is still my choice in source control. It's the only one I really use.
stevelosh 7 hours ago 0 replies      
I've only contributed a few patches to Mercurial but meeting Matt and learning from him (mostly on the HG mailing lists) has taught me a lot. He's influenced how I think about and work with code at a really basic level. I wish him the best in whatever he decides to do.
oscarcardoso 10 hours ago 0 replies      
If you ever see this: thank you, Matt. It helped me a lot to get away from Subversion by working with hg at the same time.
22klinda 10 hours ago 0 replies      
I've used Mercurial on Bitbucket in the past and loved it. Thank you very much for your hard work, Matt. Too bad that git is everywhere now.
yeukhon 6 hours ago 1 reply      
I did a project on integrating Mercurial with Mongo before, and I have to say that, being a Python developer, I didn't have too much difficulty figuring out the right API, although the documentation is not so great. The code itself uses a lot of abbreviations, which you have to be patient with to get familiar. The Mercurial model [1] is a pleasant read for a couple of train rides (I encourage reading it a few times). Others have done amazing things with hg too, like making hg available in JavaScript (now an abandoned project).

Mercurial is actually very easy to use. However, it didn't have the exact lightweight branch concept of git until they implemented bookmarks a few years ago. I don't remember the limitations of bookmarks now; it has been a few years since I used Mercurial...

I too thank Matt and the core developers working on Mercurial. One thing I feel Matt and the developers need to address is hosting hg on PyPI infrastructure instead of redirecting to their own server. From time to time the server wouldn't deliver anything, and it was frustrating to see our builds fail (of course, some say you should have cached the build and not hit the Mercurial server every day).

[1]: https://www.mercurial-scm.org/wiki/Presentations?action=Atta...

ES6 Cheatsheet github.com
323 points by DrkSephy   ago   84 comments top 20
krisdol 3 days ago 3 replies      
Wow, var was so broken.

Anyway, we use as much ES6 as Node 4 allows at work. Transpiling on the server never made much sense to me. I also used to sprinkle the fat-arrow syntax everywhere just because it looked nicer than anonymous functions, until I realized it prevented V8 from doing optimization, so I went back to function until that's sorted out (I don't like writing code that refers to `this` and rarely need binding, so while the syntax of => is concise, I rarely used it as a Function.bind replacement). Pretty much went through the same experience with template strings. Generator functions are great.
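For readers who haven't hit the `this` distinction being discussed, here is a minimal sketch (the `counter` object and method names are just illustrative):

```javascript
const counter = {
  count: 0,
  // ES6 style: the arrow function inherits `this` from the enclosing method.
  addAllArrow(ns) {
    ns.forEach(n => { this.count += n; });
  },
  // ES5 style: a plain callback gets its own `this`, so it needs .bind(this).
  addAllBound(ns) {
    ns.forEach(function (n) { this.count += n; }.bind(this));
  },
};

counter.addAllArrow([1, 2]);
counter.addAllBound([3, 4]);
console.log(counter.count); // 10
```

Both methods end up doing the same thing; the arrow version just avoids the explicit bind.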

I'm not a fan of the class keyword either, but to each their own. I think it obscures understanding of modules and prototypes just so that ex-Class-based OOP programmers can feel comfortable in JS, and I fear the quagmire of excessive inheritance and class extension that will follow with their code.

pcwalton 3 days ago 1 reply      
> Unlike var, let and const statements are not hoisted to the top of their enclosing scope.

No, let is hoisted to the top of the enclosing scope [1] ("temporal dead zone" notwithstanding). let, however, is not hoisted to the top of the enclosing function.

[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
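A small sketch of the distinction pcwalton is making (function names are illustrative): `let` is hoisted within its block, but accessing it before the declaration line throws, while `var` is initialized to undefined on hoisting.

```javascript
function letAccess() {
  try {
    return x;  // ReferenceError: `x` is hoisted here but uninitialized (the TDZ)
  } catch (e) {
    return e.constructor.name;
  }
  let x = 1;   // unreachable, but its hoisting still creates the TDZ above
}

function varAccess() {
  return typeof y;  // "undefined": `var` is hoisted and initialized to undefined
  var y = 1;
}

console.log(letAccess()); // "ReferenceError"
console.log(varAccess()); // "undefined"
```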

Raphmedia 3 days ago 0 replies      
I would recommend taking a look at this page for a bigger "cheatsheet": https://github.com/lukehoban/es6features#readme
abustamam 2 days ago 0 replies      
I love how concise this is and how it handles a lot of "gotchas" when working with ES6, but can we call a spade a spade and NOT call this a "cheatsheet"?

I always imagine cheatsheets to be just that; something I can render on one sheet of paper. Printing the entire raw README text would take 4 pages (2 sheets, front and back).

I think it would be better titled, "ES6 best practices" since I think that's a more accurate description of what it is.

jcoffland 3 days ago 1 reply      
Great reference and overview of ES6.

One minor quibble: I was bothered by the misuse of the words "lexical" and "interpolate". The lexical value of the keyword "this" is the string "this". Likewise, you might translate between two technologies such as CommonJS and ES6, but interpolating between them implies filling in missing data by averaging known values. Granted, these words are commonly abused. Sorry, this is a bit pedantic, but these corrections would improve the document, IMO.

deckar01 3 days ago 1 reply      
Is "WeakMap" really the suggested way to implement private class properties?

Using "this" as a key into a Map of private variables looks bizarre. I would rather keep my code concise than create a layer of obfuscation.

igravious 3 days ago 4 replies      
The only thing from this list of new ES6 idioms that doesn't sit comfortably with me is the short-hand for creating classes. I remember being kind of blown away way back in the day with the prototypical/functional nature of Javascript and how you could wrangle something into being that behaved in an object-oriented manner just like other languages that had explicit class declaration and object instantiation.

Part of me feels that obscuring Javascript's roots in this respect is very un-Javascript-y. What think ye?

Coming from Ruby, loving template literals, feel right at home with them, I wish even C could have them (if that makes any sense!).
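To make the "obscuring the roots" point concrete: `class` is largely sugar over the same prototype machinery (it does add checks, e.g. requiring `new`, and makes methods non-enumerable, so it is not purely cosmetic). These two definitions behave the same for ordinary use; the names are illustrative:

```javascript
// Prototype style, as JS has always allowed.
function DogFn(name) { this.name = name; }
DogFn.prototype.speak = function () { return this.name + ' barks'; };

// ES6 class style (with a template literal, while we're at it).
class DogClass {
  constructor(name) { this.name = name; }
  speak() { return `${this.name} barks`; }
}

console.log(new DogFn('Rex').speak());    // "Rex barks"
console.log(new DogClass('Rex').speak()); // "Rex barks"
console.log(typeof DogClass);             // "function": still a function underneath
```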

banku_brougham 3 days ago 0 replies      
Much more than a cheat sheet, this is a revealing window into js development. Helpful!
edem 2 days ago 0 replies      
[This](https://ponyfoo.com/articles/es6) is also a very informative guide of ES6. I highly recommend perusing it.
TheAceOfHearts 3 days ago 0 replies      
This cheatsheet is wrong about ES2015 modules. They don't define how module loading works, that's still being worked on [0]. ES2015 just defined the syntax.

[0] https://github.com/whatwg/loader

shogun21 3 days ago 1 reply      
Two questions: what happens if you use ES6 standards in a browser that does not support it?

And would it be wise to hold off adopting until all browsers support it?

joshontheweb 3 days ago 1 reply      
Is there a resource that tells me which of these features are available in the latest stable Node version?
s84 2 days ago 0 replies      
Didn't realize arrow functions preserve `this`! Now using arrow functions.
kclay 3 days ago 0 replies      
This will come in handy, thanks.
overcast 3 days ago 0 replies      
String interpolation, classes, promises, and parameter stuffs. A tear rolls down my cheek.
lukasm 3 days ago 1 reply      
Is there a similar thing for CoffeeScript?
jbeja 3 days ago 0 replies      
Who is in charge of ES6 design? It's awful.
z3t4 3 days ago 1 reply      
"Require" is the reason why we now have a module for just about anything in Node.js. I even think Kevin Dangoor or whoever invented it should get the Nobel Prize. But then the ES committee chose to use paradigms from the year 1970. I cry every time someone uses import instead of require in JS, because they miss out on why globals are bad, why modularity is good, and the JS API philosophy (super simple objects with (prototype) methods).
mercer 3 days ago 9 replies      
I'm all in on ES6 when it's practical or allowed. Arrow functions are wonderful, I love destructuring assignment, const and let, and considering that some projects I work on involve a lot of async stuff, I'm close to just giving in and using ES7's async/await functionality.

But most of the time this is in the context of Node.js development, and in every case I use Babel.js to turn the end result into ES5 code.

I'm perfectly comfortable with using ES5, because as a freelancer/contractor I often have to do so. But I really miss the ES6 stuff and the more I use it, the more time it takes me to 'switch' to a mindset where I'm only allowed to use ES5 functionality.

Nonetheless, it strikes me as really odd to actively prefer ES5. Having worked with Ruby and Python (among others), ES5 feels limiting for no good reason. The only rationale I can think of for preferring ES5 is nostalgia.

Could you elaborate why you don't like the 'perl/python' style changes? Because I truly do not understand why one would choose to limit oneself to things like .bind(this) instead of the different forms of arrow functions that make functional-like programming so much easier. And I've found that the best part of JS is that it's decently functional.

Edit: I would agree when it comes to the new 'class' keyword though. I'm not a fan of that.
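A quick sketch of the destructuring and default-parameter features mentioned above, in case it helps make the case (the `config` object and `describe` function are just illustrative):

```javascript
const config = { host: 'example.org', port: 8080, opts: { tls: true } };

// Nested destructuring with a default; port's default is ignored since 8080 is present.
const { host, port = 80, opts: { tls } } = config;

// Destructuring in parameter position, combined with an arrow function and a
// template literal; the default kicks in when `port` is missing.
const describe = ({ host, port = 80 }) => `${host}:${port}`;

console.log(host, port, tls);         // example.org 8080 true
console.log(describe({ host: 'a' })); // "a:80"
```

The ES5 equivalent of `describe` would be several lines of property access and `typeof` checks, which is the kind of ceremony being argued against here.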

sectofloater 3 days ago 0 replies      
This will likely get downvoted - but I have just realized how much I was underestimating the privilege of developing apps in Dart instead of JavaScript. Dart had none of the mentioned idiosyncrasies from day one, all the features, and has a lot of other stuff (like async/await, yield, mixins, etc) to offer. Its tooling is very simple and powerful, and the overall experience is really nice - when there is a problem, it's always in the logic of my code, and not things like some weird implicit conversions that are so common in JS land. I almost forgot how terrible JS is...
Advice for companies with less than 1 year of runway themacro.com
288 points by loyalelectron   ago   95 comments top 13
danieltillett 3 days ago 3 replies      
There is a third way which is put the company into hibernation. I was faced with this with my startup a bit more than 10 years ago now. I ran out of runway so I laid everyone off, paid the bills, and got a job. I then bought out everyone else, worked part time on the business and built it back up over then next few years to the point where I could return full time. I could have started a new business, but I believed that there was a lot of value in the old business [1] which proved to be correct.

1. Some caveats here. Firstly, I did not have many people to buy out and they were willing to sell at a reasonable price. Secondly, my business is in biotech/bioinformatics and we had put a lot of resources into R&D. This R&D had real value that could be used to bring the business back to life.

Animats 3 days ago 6 replies      
The alternative is becoming profitable as soon as possible. Then you get run over by someone with better access to capital who can absorb losses. Example: Sidecar.[1]

[1] http://venturebeat.com/2016/01/20/sidecar-we-failed-because-...

nunobrito 3 days ago 7 replies      
Was laughing when reading "12 months of runway" as "low runaway". My bootstrapped company is "low runaway" by default since the past two years. Every. Single. Month.

Funny how your mind gets busier working to build revenue in Europe without a comfortable cushion like the SF guys seem to have.

AndrewKemendo 3 days ago 2 replies      
I think this article needs to be paired with an article about "When to shut your company down." If I recall, that article exists and it basically says: when you lose hope.

Maybe I am not looking at this right but this part doesn't make sense to me:

In many cases, <2 months is the point of no return. If you are in this state it is immediately necessary to lay off your employees and give them severance, pay down your obligations, and use your remaining cash for shutdown costs.

So is that for companies that had a year+ of runway at some point and are now down to 2 months? What about companies that never had 1 year of runway? The differences between those are pretty big.

For example if you have a 4 person startup and 2 months of runway after being on the market for only 4-6 months, you are supposed to just shut it down?

No, you take consulting jobs and do side work till you can get higher revenue or some financing.

I think, like most startup articles, this applies to companies who have already gotten past seed stage, initial traction and thus is not applicable for 90% of us.

gjmulhol 3 days ago 3 replies      
My friends, we are finally hitting the new economy where even startups are being asked to make money---maybe not to the point of profitability, but even a little revenue can make a big difference in a lean organization.
josh_carterPDX 3 days ago 0 replies      
I've never understood the psychology of those that do not fundamentally get this. If you just finished raising a seed or angel round, chances are you had less than 12 months of runway to begin with. Perhaps your personal savings was drying up or you were running out of friends and family resources that could help you run this out further. The sense of urgency and anxiety you felt while raising your seed round doesn't go away simply because you were able to raise some money. If anything, it would increase. So the fact that someone had to specifically cover this in a blog post seems really counterintuitive to me.
rl3 3 days ago 1 reply      
>The Fatal Pinch does not apply to me

Except, sometimes it doesn't? If you look at the notes[0] at the bottom of The Fatal Pinch:

>There are a handful of companies that can't reasonably expect to make money for the first year or two, because what they're building takes so long. For these companies substitute "progress" for "revenue growth." You're not one of these companies unless your initial investors agreed in advance that you were. And frankly even these companies wish they weren't, because the illiquidity of "progress" puts them at the mercy of investors.

What do you do if you're one of those companies? There's plenty of business models that could be attractive acquisition targets (read: billions), but otherwise can't monetize to save their souls.

Two pieces of advice often encountered (paraphrasing):

"Treat each funding round as if it's your last."

"VC money is like rocket fuel. It's intended to be burned at a high rate."

I imagine reconciling both is difficult at best.

[0] http://paulgraham.com/pinch.html

JarvisSong 3 days ago 0 replies      
Great analysis. I would add another chart -- likelihood of more investment / buyout -- both decrease as you get closer to zero runway.
bshimmin 3 days ago 3 replies      
I don't wish to be overly mean or uncharitable, but I don't really think anyone who is unable to figure out the advice offered in the section titled "Some tips on reducing burn" all by themselves is ever going to be able to run a successful business.
larakerns 3 days ago 2 replies      
It's discouraging when new employees expect a certain lifestyle on joining your startup but your runway is less than a year. Startups have been portrayed as having so many perks that there's an impossibly high standard to strive toward.
Kiro 2 days ago 0 replies      
> In especially messy scenarios you can end up with personal liability.

When can this happen?

aagha 3 days ago 1 reply      
I love how investors are always willing and wanting to show examples of how they have the upper hand and the entrepreneur has the lower one (chart in the article).
lotso 3 days ago 2 replies      
Could someone give a brief bio of Dalton Caldwell? I know he created Svbtle, but don't know much else about him.
Unikernels are unfit for production joyent.com
358 points by anujbahuguna   ago   310 comments top 42
nkurz 3 days ago 7 replies      
Bryan may certainly be right (I neither know him nor much about unikernels), but some parts of his argument seem incredibly weak.

 The primary reason to implement functionality in the operating system kernel is for performance...
OK, this seems like a promising start. Proponents say that unikernels offer better performance, and presumably he's going to demonstrate that in practice they have not yet managed to do so, and offer evidence that indicates they never will.

 But its not worth dwelling on performance too much; lets just say that the performance arguments to be made in favor of unikernels have some well-grounded counter-arguments and move on.
"Let's just say"? You start by saying that the "primary reason" for unikernels is performance, and finish the same paragraph with "its not worth dwelling on performance"? And this is because there are "well-grounded counter-arguments" that they cannot perform well?

No, either they are faster, or they are not. If someone has benchmarks showing they are faster, then I don't care about your counter-argument, because it must be wrong. If you believe there are no benchmarks showing unikernels to be faster, then make a falsifiable claim rather than claiming we should "move on".

Are they faster? I don't know, but there are papers out there with titles like "A Performance Evaluation of Unikernels" with conclusions like "OSv significantly exceeded the performance of Linux in every category" and "[Mirage OS's] DNS server was significantly higher than both Linux and OSv". http://media.taricorp.net/performance-evaluation-unikernels....

I would find the argument against unikernels to be more convincing if it addressed the benchmarks that do exist (even if they are flawed) rather than claiming that there is no need for benchmarks because theory precludes positive results.

Edit: I don't mean to be too harsh here. I'm bothered by the style of argument, but the article can still valuable even if just as expert opinion. Writing is hard, finding flaws is easy, and having an article to focus the discussion is better than not having an article at all.

vezzy-fnord 3 days ago 3 replies      
Bryan Cantrill seems to have some personal interest in denigrating OS research (defined as virtually everything post-Unix) as all being part of a misguided "anti-Unix Dark Ages of Operating Systems". He has expressed this sentiment multiple times before, and places a great deal of faith in Unix being a timeless edifice which needs only renovation. Naturally, he regards DTrace and MDB to be the pinnacles of OS design in the past 20 years and never stops yapping on about them, this article being no exception. It's his thought-terminating cliche.

He voiced all this here [1], and so I countered by listing stuck paradigms in traditional monolithic Unixes, as well as reopening my inquiry on Sun's Spring research system, which he seems to scoff at, but over which I am impressed by the academic research it yielded. He has yet to respond to my challenge.

[1] https://news.ycombinator.com/item?id=10324211

chubot 3 days ago 4 replies      
Big upvotes for this article. I'm glad it was written, because I've seen nothing but hype for Unikernels on Hacker News (and in ACM, etc.) for the last 2 years. It's great to see the other side of the story.

The biggest problem with Unikernels like Mirage is the single language constraint (mentioned in the article). I actually love OCaml, but it's only suitable for very specific things... e.g. I need to run linear algebra in production. I'm not going to rewrite everything in OCaml. That's a nonstarter.

And I entirely agree with the point that Unikernel simplicity is mostly a result of their immaturity. A kernel like seL4 is also simple, because like unikernels, it doesn't have that many features.

If you want secure foundations, something like seL4 might be better to start from than Unikernels. We should be looking at the fundamental architectural characteristics, which I think this post does a great job on.

It seems to me that unikernels are fundamentally MORE complex than containers with the Linux kernel. Because you can't run Xen by itself -- you run Xen along with Linux for its drivers.

The only thing I disagree with in the article is debugging vs. restarting. In the old model, where you have a sys admin per box, yes you might want to log in and manually tweak things. In big distributed systems, code should be designed to be restarted (i.e. prefer statelessness). That is your first line of defense, and a very effective one.

Hoff 3 days ago 1 reply      
Interesting article. Rather than arguing what can or cannot be done or what might or might not work, here's some code, and some history.

Here's full-mixed-language programmable, locally- and fully-remote-debuggable, mixed-user and inner-mode processing unikernel, and with various other features...

This from 1986...


FWIW, here's a unikernel thin client EWS application that can be downloaded into what was then an older system, to make it more useful for then-current X11 applications...

From 1992...


For anybody who wants to play and still has a compatible VAX, or who wants to try the VCB01/QVSS graphics support in some versions of the (free) SIMH VAX emulator, the VAX EWS code is now available here:


To get an OpenVMS system going to host all this, HPE has free OpenVMS hobbyist licenses and download images (VAX, Alpha, Itanium) available via registration at:


Yes, this stuff was used in production, too.

ChuckMcM 3 days ago 4 replies      
Well that is pretty provocative :-) Bryan might be surprised to learn that for its first 15 years of its existence NetApp filers were Unikernels in production. And they out performed NFS servers hosted on OSes quite handily throughout that entire time :-).

The trick though is they did only one thing (network attached storage) and they did it very well. That same technique works well for a variety of network protocols (DNS, SMTP, etc.). But you can do that badly too. We had an orientation session at NetApp for new employees which helped them understand the difference between a computer and an appliance; the latter had a computer inside of it but wasn't programmable.

derefr 3 days ago 5 replies      
> Unikernels are entirely undebuggable

I'm pretty sure you debug an Erlang-on-Xen node in the same way you debug a regular Erlang node. You use the (excellent) Erlang tooling to connect to it, and interrogate it/trace it/profile it/observe it/etc. The Erlang runtime is an OS, in every sense of the word; running Erlang on Linux is truly just redundant, since you've already got all the OS you need. That's what justifies making an Erlang app a unikernel.

But that's an argument coming from the perspective of someone tasked with maintaining persistent long-running instances. When you're in that sort of situation, you need the sort of things an OS provides. And that's actually rather rare.

The true "good fit" use-case of Unikernels is in immutable infrastructure. You don't debug a unikernel, mostly; you just kill and replace it (you "let it crash", in Erlang terms.) Unikernels are a formalization of the (already prevalent) use-case where you launch some ephemeral VMs or containers as a static, mostly-internally-stateless "release slug" of your application tier, and then roll out an upgrade by starting up new "slugs" and terminating old ones. You can't really "debug" those (except via instrumentation compiled into your app, ala NewRelic.) They're black boxes. A unikernel just statically links the whole black box together.

Keep in mind, "debugging" is two things: development-time debugging and production-time debugging. It's only the latter that unikernels are fundamentally bad at. For dev-time debugging, both MirageOS and Erlang-on-Xen come with ways to compile your app as an OS process rather than as a VM image. When you are trying to integration-test your app, you integration-test the process version of it. When you're trying to smoke-test your app, you can still use the process version, or you can launch (an instrumented copy of) the VM image. Either way, it's no harder than dev-time debugging of a regular non-unikernel app.

geofft 3 days ago 3 replies      
It may well be the case that unikernels as currently envisioned by unikernel proponents are impossible to make fit for production; it may also well be the case that there exists a product that is closer to a unikernel than current kernels, that is quite production-suitable, and unikernels are fruitful research to that point.

For instance, you could imagine a unikernel that did support fork() and preemptive multitasking, but took advantage of the fact that every process trusts every other one (no privilege boundaries) to avoid the overhead of a context switch. Scheduling one process over another would be no more expensive than jumping from one green (userspace) thread to another on regular OSes, which would be a huge change compared to current OSes, but isn't quite a unikernel, at least under the provided definition.

Along similar lines, I could imagine a lightweight strace that has basically the overhead of something like LD_PRELOAD (i.e., much lower overhead than traditional strace, which has to stop the process, schedule the tracer, and copy memory from the tracee to the tracer, all of which is slow if you care about process isolation). And as soon as you add lightweight processes, you get tcpdump and netstat and all that other fun stuff.

On another note, I'm curious if hypervisors are inherently easier to secure (not currently more secure in practice) than kernels. It certainly seems like your empirical intuition of the kernel's attack surface is going to be different if you spend your time worrying about deploying Linux (like most people in this discussion) vs. deploying Solaris (like the author).

bcg1 3 days ago 2 replies      
This article is mostly FUD I think.

It comes off as a slew of strawmen arguments ... for example the idea that unikernels are defined as applications that run in "ring 0" of the microprocessor... and that the primary reason is for performance...

All of the unikernel implementations he mentioned (mirageos, osv, rumpkernels) all run on top of some other hardware abstraction (xen, posix, etc) with perhaps the exception of a "bmk" rumpkernel.

We currently have a situation in "the cloud" where we have applications running on top of a hardware abstraction layer (a monolithic kernel) running on top of another hardware abstraction layer (a hypervisor). Unikernels provide a (currently niche) solution for eliminating some of the 1e6+ lines of monolithic kernel code that individual applications don't need and introduce performance and security problems. To dismiss this is as "unfit for production" is somewhat specious.

I wonder if Joyent might have a vested interest in spreading FUD around unikernels and their usefulness.

_wmd 3 days ago 1 reply      
I think the problems with this article are well covered already. Just a suggestion for Joyent: articles like this are damaging to your excellent reputation, would suggest a thin layer of review before hitting the post button!

Some additional meat:

- The complaint about Mirage being written in OCaml is nonsense, it's trivial to create bindings to other languages, and in 40 years this never stopped us interfacing our e.g. Python with C.

- A highly expressive type/memory safe language is not "security through obscurity", an SSL stack written in such a language is infinitely less likely to suffer from some of the worst kinds of bugs in recent memory (Heartbleed comes to mind)

- Removing layers of junk is already a great idea, whether or not MirageOS or Rump represent good attempts at that. It's worth remembering that SMM, EFI and microcode still exist on every motherboard, using some battle-tested middleware like Linux doesn't get you away from this.

- Can't comment on the vague performance counterarguments in general, but reducing accept() from a microseconds affair to a function call is a difficult benefit to refute in modern networking software.

ewindisch 3 days ago 0 replies      
I'm happy for this article because it does hit some points on the head. Other points are deeply entrenched in Bryan's biases, but I can't really fault him for that.

In particular, I am suspicious of the idea that unikernels are more secure. Linux containers make the application secure in several ways that neither unikernels nor hypervisors can really protect from. Point being a unikernel (as defined) can do anything it wishes to on the hardware. There is no principle of least-privilege. There are no unprivileged users unless you write them into the code. It's the same reason why containers are more secure than VMs.

Users are only now, and slowly, starting to understand the idea that containers can be more secure than a VM. False perspectives and promises of unikernel security only confuse this issue.

That said, I do think the problems with unikernels might eventually go away as they evolve. Libraries such as Capsicum could help, for instance. Language-specific or unikernel-as-a-vm might help. Frameworks to build secure unikernels will help. Whatever the case, the problems we have today are not solved or ready for protection -- yet.

This blog post was clearly spurred by the acquisition made by Docker (of which I am alumnus). I think it's a good move for them to be ahead of the technology, despite the immediate limitations of the approach.

pyritschard 3 days ago 1 reply      
The essential point the lengthy article makes revolves around debugging facilities for unikernels. While mostly true for MirageOS and the rest of the unikernel world today, OSv showed that it is quite possible to provide good instrumentation tooling for unikernels.

The smaller point about porting applications (whether targeting unikernels that are specific to a language runtime or more generic ones like OSv and rump kernels) is the most salient; it will probably restrict unikernel adoption.

For Docker, if only to provide a good substrate for dev environments for people running Windows or Mac computers, it is very promising.

uxcn 3 days ago 1 reply      
I think Bryan Cantrill and Joyent are doing a number of interesting things, but this reads more like an ad than a genuine critique of Unikernels.

 The primary reason to implement functionality in the operating system kernel is for performance: by avoiding a context switch across the user-kernel boundary, operations that rely upon transit across that boundary can be made faster.
I haven't heard this argument made once. There are performance benefits (smaller footprint, compiler optimization across system call boundaries, etc...). However, the primary benefit is not performance from eliminating the user/kernel boundary.

 Should you have apps that can be unikernel-borne, you arrive at the most profound reason that unikernels are unfit for production and the reason that (to me, anyway) strikes unikernels through the heart when it comes to deploying anything real in production: Unikernels are entirely undebuggable.
If this were true, and an issue, FPGAs would also be completely unusable in production.

seliopou 3 days ago 1 reply      
First, let's put aside the start of the blog post, which consists entirely of empirical questions. Each potential adopter of unikernels will have to figure out for themselves whether their specific use-case justifies the cost and benefit of this particular technology, just like all others.

Putting that aside, debuggability is an obvious and pressing issue to production use-cases. Any proponent of unikernels that denies that should be defenestrated. I haven't come across any that do.

How to go about debugging unikernels is unclear because it certainly is still early days. However, I don't think the lack of a command line in principle precludes debuggability, nor to my mind does it even preclude using some of the traditional tools that people use today. For example, I could imagine a unikernel library that you could link against that would allow for remote DTrace sessions. Once you have that, you can start rebuilding your toolchain.

P.S. Bryan, where's my t-shirt?

zobzu 3 days ago 1 reply      
From TFA: "At best, unikernels amount to security theater, and at worst, a security nightmare."

As a security engineer, that is a good one-sentence summary of my view of unikernels, and always has been.

I think the reason unikernels are being developed is mostly ignorance, and if any of them is successful, it will morph into an OS that is closer to Mesos, Singularity, or even Plan 9. That's faster, safer, more logical, etc.

ori_b 3 days ago 3 replies      
The key thing to realize, I think, is that if you're using virtualization, a unikernel is nothing more than a process that uses a very strange system call API.
pcwalton 3 days ago 1 reply      
It's not by any means the main point of the article, but: I'm not sure citing the Rust mailing list post on M:N scheduling is proof that it's a dead idea. The popularity of Go is a huge counterexample.
chris_wot 3 days ago 0 replies      
I think Cantrill is doing a massive favour for those who are pro-unikernel - he's essentially trolling them and will force them to come up with responses to some of the issues he's raising.

Given how invested Joyent is in their current positions, I can see why Unikernels may seem a threat, but none of the things Cantrill has raised as concerns seem insurmountable.

zmanian 3 days ago 3 replies      
Op seems to misunderstand the following:

1. Your hypervisor is the security boundary.

2. Unikernel design lets you maximize the security benefits of AppSec and LangSec by removing the large OS surface area.

toast0 3 days ago 0 replies      
I'm not likely to run a unikernel anytime soon, but I wanted to respond to this:

> And as shaky as they may be, these arguments are further undermined by the fact that unikernels very much rely on hardware virtualization to achieve any multi-tenancy whatsoever.

Multi-tenancy is needed in some cases, but I don't need it, we use the whole machine, and other than the one process that does all the work, we only have some related processes for async gethost, monitoring/system stats processes, ntpd, sshd, getty.

readams 3 days ago 0 replies      
One of the claims that seems to really fall flat is that security is bad for unikernels. The comparison point, though, is not a traditional OS running in a hypervisor but a container running on the host OS. In that comparison I think unikernels are emphatically more secure than what you get on Linux, and have essentially all of the same advantages of containers (plus a few extra ones).

For Joyent of course they have a book to talk up and they want to sell you their own solution which looks more like containers than a hypervisor. The Joyent solution is I think undoubtedly very interesting and well-considered but I have a suspicion that they've hitched their wagon to the wrong horse and Linux will keep winning.

PaulHoule 3 days ago 0 replies      
I dunno.

For a long time the dominant programming environment for IBM mainframes has been VM/CMS, where VM is something like VirtualBox and CMS is something a lot like the old MS-DOS, i.e. a single process operating system. Say what you like but it was a better environment than anything based on micros until you started seeing the more advanced IDEs on DOS circa 1987 or so.

Now the 360 was a machine designed to do everything, but it's clear the virtual memory in most machines is an issue in terms of die size, cost, power consumption and performance and I wonder if some different configuration in that department together with a new approach to the OS could make a difference.

Animats 3 days ago 1 reply      
Can you run the unikernel under a debugger when testing? Can you get crash dumps? Stack backtraces?

Unless you're running your unikernel on bare metal, it's still running under an OS. It's just that the OS is called a hypervisor and is less bloated than most OSs.

hughw 3 days ago 0 replies      
Isn't it a feature of (some) unikernels, that you can fire one up to respond to some request, and tear it down, in milliseconds? If so, running an AWS Lambda-like service with all the isolation you get in a HVM seems desirable for some situations. The isolation provided by a Docker container might not be good enough. It's a feature whose benefits, for some applications, might balance the debugging costs the article outlines.
patrickaljord 3 days ago 0 replies      
Nothing like a 15-paragraph corporate blog post explaining why a technology is bad and unfit for production in order to promote said company's own technology. Microsoft used to do the same with Linux, and now we have this http://blogs.technet.com/b/windowsserver/archive/2015/05/06/...
jorge-fundido 3 days ago 1 reply      
"Unikernels are entirely undebuggable."

I'm confident this will be addressed eventually. Anyone have a sense of what that will look like? Something like JMX? Something like dumping core, restarting, and analyzing later?

erichocean 2 days ago 0 replies      
Simple explanation for the article: Bryan is "talking his book".[0]

Joyent doesn't sell unikernel services, hence unikernels are bad. Color me shocked. Is it me, or has Joyent become less than upfront about their motives over the last few years? I don't require everyone to embrace "don't be evil" or whatever, but I always get a "righteous" vibe from Joyent employees that seems at odds with their actual behavior. Maybe they feel under siege or whatever, and are reacting to that? The whole thing is vaguely off somehow.

[0] http://www.investorwords.com/8436/talking_my_book.html

kriro 3 days ago 0 replies      
I think he brushes by the security argument too quickly. Unikernels are (typically) smaller with less attack surface and more importantly it's easier to reason about them. I'd argue that this ability to keep more of the entire OS in your head at any given time improves security on a high level of abstraction.
ccostes 3 days ago 1 reply      
Reading through the article I feel like the author and I are describing different things when we use the term unikernel, which is surprising because we both have experience with the same unikernel: QNX. I'm not very familiar with the other examples, but my QNX application definitely does have processes that I can see using top, htop, etc., and interfaces with system hardware using the QNX system calls; all things the article describes as not being features of unikernels.

Either the article is written in the context of writing kernel software, which wouldn't have much of an impact on my decision to run my application on a unikernel OS or not, or QNX is a far outlier from other unikernel OS's and that's why I'm so confused.

Philipp__ 3 days ago 1 reply      
Wow. This thread went just as I thought it would... The HN way.

I haven't had any experience with unikernels (still a student), but there are a few concerning things about them, and the main one is that those concerns are at the technology's core.

I have only respect and admiration for Mr. Cantrill, but this post felt kinda strange. After reading the last paragraph it sounded like an ad. Maybe they got scared of Docker possibly expanding and taking part of their cookie. I don't know, but these discussions were interesting to read at least...

jupp0r 3 days ago 0 replies      
The main use case for unikernel apps (the way I see it) is running language-specific VMs like BEAM, MRI or the JVM almost directly on bare metal and getting rid of all the complexity of OSes. The idea is to make it easier to debug, optimize and tune applications by removing traditional OSes' complex kernels from the equation. The real argument for security (that the author omits) is derived from that: 20 million fewer lines of code in the stack that you deploy.
kev009 3 days ago 0 replies      
I tweeted to him to research IBM's zTPF before writing this; I guess it conflicts with the narrative he's telling, though. In general, I agree with his sentiments, but there are no absolutes, only trade-offs here. You can, for instance, hook a debugger into the kernel or through the hypervisor. And debugging hardware looks a lot like debugging a unikernel in that sense.
woah 3 days ago 5 replies      
So can someone intelligently contrast a hypervisor with an OS? Both allow multiple applications to run on some hardware; what are the major differences?
cbd1984 3 days ago 0 replies      
How are unikernels unfit for production? CMS has been in production longer than most here have been alive.
Mojah 3 days ago 0 replies      
In case anyone is still struggling with the concept of a 'unikernel', I found this article helpful in clearing it all up: https://ma.ttias.be/what-is-a-unikernel/
dicroce 3 days ago 1 reply      
If the definition of unikernels doesn't include hypervisors, then arguments against unikernels that specifically attack hypervisors are only valid against unikernels running on hypervisors.
dustingetz 3 days ago 1 reply      
So you need good enough logs that you can debug production without ssh to production (since there is no ssh, bash, ps etc)? Don't we already have this?
jksmith 3 days ago 1 reply      
>"There are no processes, so of course there is no ps, no htop, no strace but there is also no netstat, no tcpdump, no ping! And these are just the crude, decades-old tools."

So does this mean something like a Symbolics machine or an Oberon machine can't be debugged, or does this mean that the unikernel has to be debugged at a higher level by the application(s) it's dedicated to?

nevir 3 days ago 2 replies      
TL;DR for those reacting to the title, but not reading the entire article:

Unikernels are young, and lack tooling/robustness that we have in more traditional approaches. They are not production ready yet, but will likely become a prominent way of building and deploying applications in the future.

0xdeadbeefbabe 3 days ago 1 reply      
> it is all OS, if a crude and anemic one.

Crude or anemic? The program does what you want or it doesn't. Quit trying to make it a human.

Edit: If the author can believe programs are crude or anemic, he clearly likes to look at them from a high level, but you need a low-level view to get excited about unikernels.

Edit: What?

dang 3 days ago 0 replies      
Personal attacks are not allowed on Hacker News.
MCRed 3 days ago 1 reply      
This article seems to miss the point. Note that unikernels are meant to run in VMs, not on bare metal. Thus you get the isolation of a true VM with container-like performance & resource usage.
Things every React.js beginner should know camjackson.net
377 points by Scriptor   ago   201 comments top 17
marknutter 8 hours ago 4 replies      
So here we see the culmination of the great Frameworks vs. Libraries divide. Frameworks alleviate the need for articles like the one linked here because they eliminate choice paralysis and imposter syndrome. Everyone is worried about whether or not they're doing things The Right Way, and so they either blaze ahead and hit the same pitfalls everyone else does (and then write blog posts to warn others) or they hold off on adopting the tech until they are shown The Right Way by someone else.

The truth is, libraries and frameworks both end up being equally complex to work with, precisely because the problem of building large applications is inherently difficult.

It all comes down to personal preference:

Are you the type of person who is more likely to believe you can do something better than everyone else, or are you the type who is more likely to defer to those you believe to be better than you?

Are you decisive or do you agonize over the smallest choices?

Do you feel a compelling need to understand how everything works, or are you willing to implicitly trust other people's systems?

I find it amusing that people who gravitate toward smaller libraries like Backbone.js and React.js rail against frameworks like Ember or Angular for being overly complex, heavy, and "magical", and then proceed to cobble together a Rube-Goldberg-esque system of disparate dependencies to achieve the same goals. When React first started getting popular all you read about was how simple the API was and how it was Easy to Reason About. Fast forward to today and you need to have working knowledge of WebPack, ES6, Redux, immutable.js, Babel, and a bevy of other dependencies to be aligned with the React ecosystem's ever-evolving best practices.

The exact same thing happened with Backbone.js and it will probably happen again with the next shiny new view-model library to ride the hype train.

It's important that I point out, however, that none of this is necessarily a bad thing. Smaller libraries like React.js and Backbone.js encourage a cavalcade of innovation from which awesome things like Redux are born. But let's not pretend that this doesn't result in a heckuva lot of churn for the average developer whose job is to simply get shit done.

pkrumins 15 hours ago 38 replies      
I'm not a React or Angular expert, but do you really put HTML inside of your JS code? It just bothers me too much. We spent years in early web days learning that code and templates should be separate, yet here we are putting HTML inside of code, which goes against years of practice. Can anyone share their professional thoughts about this?
diggan 16 hours ago 3 replies      
> 5. Use Redux.js

Well, that is not really something every React.js beginner should know. It feels weird to say that an X beginner should learn framework Y when he is probably just interested in X...

> 6. Always use propTypes

Worth mentioning that this slows down your application since React needs to check all the props. Don't forget to set NODE_ENV=production when compiling your production script.
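The comment's point about NODE_ENV can be sketched in plain JS. This is a minimal illustration of the dev-only validation pattern, not React's actual implementation; `checkProps` and `requiredString` are hypothetical names chosen for the example.

```javascript
// Sketch of dev-only prop checking: in production the whole validation
// walk is skipped, so setting NODE_ENV=production removes the cost.
function checkProps(propTypes, props, componentName) {
  if (process.env.NODE_ENV === 'production') return; // skipped in prod builds
  for (const name of Object.keys(propTypes)) {
    const error = propTypes[name](props, name, componentName);
    if (error) console.warn(error.message);
  }
}

// A hypothetical validator in the (props, name, component) shape:
const requiredString = (props, name, component) =>
  typeof props[name] === 'string'
    ? null
    : new Error(`Invalid prop \`${name}\` supplied to \`${component}\`.`);

// In development this warns; with NODE_ENV=production it does nothing.
checkProps({ title: requiredString }, { title: 42 }, 'Post');
```

Bundlers can even strip the guarded branch entirely once `process.env.NODE_ENV` is inlined as a constant, which is why the production build pays nothing for these checks.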

didgeoridoo 16 hours ago 3 replies      
#10: This guide will be out of date in 3 months.
tptacek 12 hours ago 6 replies      
Someone sell me on Webpack? We use Browserify for no other reason than that someone gave me a boilerplate Gulpfile that relied on it.

What part of my life would actually get better if I took the 4 hours of my life I will never get back to make this switch?


Ronsenshi 15 hours ago 1 reply      
#3 Write functional components

I'd say "Write functional components where it makes sense". For simple components that is reasonable advice; however, in cases when you need methods such as `componentDidUpdate` or generally any lifecycle methods, that won't do and you have to use classes.

k__ 15 hours ago 0 replies      
I tried a different approach here: https://github.com/kay-is/react-from-zero
jastanton 11 hours ago 0 replies      
Fantastic article, in the past 9 months you've hit on nearly all of my discoveries.

Something I do now every time I create a new React app is to create a base class that extends React.Component and uses `PureRenderMixin`. Doing this prevents unnecessary re-renders from prop changes. A killer of this is passing a prop to a child that changes identity on every render, like a function created with `bind`: because bind creates a new function each time it is called, the prop's identity always changes and the child will always re-render. Knowing gotchas like these can really help speed up apps!
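The identity gotcha described above can be seen without React at all. Here is a plain-JS sketch (with a hypothetical `Button` class) of why an inline `.bind()` defeats shallow prop comparison while binding once keeps a stable reference:

```javascript
// .bind() returns a brand-new function object on every call, so a
// shallow (identity) comparison of props will always see a change.
class Button {
  handleClick() { return 'clicked'; }
}

const button = new Button();

// What an inline `onClick={this.handleClick.bind(this)}` amounts to:
console.log(button.handleClick.bind(button) === button.handleClick.bind(button)); // false

// Binding once (e.g. in a constructor) keeps the prop's identity stable
// across renders, so a pure-render comparison can skip the re-render:
const boundOnce = button.handleClick.bind(button);
console.log(boundOnce === boundOnce); // true
```

The same reasoning applies to arrow functions created inline in render: each render produces a new function object, so identity-based comparisons always report a change.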

lukesan 13 hours ago 9 replies      
What really bothers me about React (and other frameworks) is that without JS you do not see anything. No fallback. No progressive enhancement. Is this really the way to go? Did JS replace HTML/CSS as the backbone of websites/applications?
mythz 14 hours ago 1 reply      
# 3. Write functional components

Functional components don't get Redux's connect performance optimizations, which is a good reason to avoid them: https://github.com/rackt/redux/issues/1176#issuecomment-1670...

joseraul 7 hours ago 1 reply      
How about TypeScript/TSX? React's components and "views" are easy to type (and you get completion in your IDE), but I still have to find a convenient way to type an Immutable map where each key/value pair has a different type (the equivalent of a TypeScript object). Any idea?
tzaman 15 hours ago 0 replies      
I've done an integration test example (no PhantomJS, just jsdom) if anyone is interested: https://gist.github.com/tomazzaman/790bc607eb7ca3fd347f
todd3834 12 hours ago 1 reply      
Great article! As we have been on-boarding developers with React, we noticed that all of the boilerplate needed to get Babel, Webpack, Redux, and a testing environment (we use Mocha, Karma, and Chai) was simply too much for most people to handle while beginning something new. There are lots of boilerplate projects but telling someone new to fork a project on GitHub and start building an app from there was raising some eyebrows as well.

That's not all, there are really amazing developer tools available for React development like the redux dev tools, and hot module replacement to name a couple. These tools are extremely helpful to beginners but a beginner is not going to enjoy the extra work of setting that up as well.

Imagine if you were going to build an app with Rails and the instructions were to follow a tutorial that had you manually hook up Active Record, create your bootstrap files by hand, write your own code to rebuild the server every time code changed, or even fork a boilerplate Rails project and go from there. I don't believe Rails would have become what it is today without the Rails CLI.

What happens when the boilerplate changes? What if you find an XSS vulnerability in the boilerplate project that you used in your last 10 projects? Rails developers have identified and quickly patched several security vulnerabilities over the years. It's usually as easy as updating a gem file to patch your app. With the boilerplate approach, you would have to manually update all of your apps or try to merge the update from the project you forked. That isn't going to be fun at all.

Finally, one of the coolest things you can do with React is server-side rendering to build universal apps. At this point, even if you know exactly what you are doing, setting up a new app is going to give you way too much work just to get started. So you'll need to find a boilerplate with server-side rendering and fork it. There are way more opportunities for security issues when you increase the surface area of your app's responsibilities. There will be updates to this boilerplate and you will have to merge them into all of your apps. I hope you like manually updating files and resolving merge conflicts on files you don't really understand.

We set out to resolve these problems when we built GlueStick (https://github.com/TrueCar/gluestick/). It's a CLI that allows you to quickly generate new apps using all of the tools we like to use on React applications. You have generators for creating containers (smart components hooked up to Redux), components, and reducers. Redux is set up out of the box. You have server-side rendering out of the box. We also push as much of the boilerplate code into the GlueStick module as we can. This lets you easily get performance and security updates as well as new features by simply updating the node module. You also get a sane folder structure so that all of your apps built with this tool will be easy to navigate.

We built this tool at TrueCar but we open sourced it. That means you can take it and make it your own, contribute back if you want to improve it and you can rest assured that it is backed by a big company that is heavily invested in seeing it succeed.

ktdrv 11 hours ago 1 reply      
Some very good advice here but after reading through the list I had a familiar feeling. Functional, stateless, typed, Redux? Yep, that's Elm.
Keats 12 hours ago 0 replies      
Good advice, but I would add TypeScript as well
baby 11 hours ago 1 reply      
```<section> <div><h1>Latest posts</h1></div>```

isn't that div unnecessary?

I Built Myself a 16x20-Inch Camera in 10 Hours petapixel.com
313 points by robin_reala   ago   30 comments top 8
matthewmcg 1 day ago 1 reply      
There was a retired physicist who refurbished an old aerial reconnaissance camera and painstakingly built a hybrid film/digital workflow that resulted in claimed resolution equivalent to several gigapixels. I can't find his original site anywhere, but here's a Popular Mechanics write-up of it:


It's too bad because the original site had lots of technical detail and was a good lesson in understanding resolution limiting factors at each step of the image chain (e.g. effect of air density changes on sharpness of distant objects when shooting landscapes).

Update: hooray for archive.org. Check it out here:


nickbauman 1 day ago 3 replies      
For the DSLR folks, the 16x20 Ambrotypes he's making are like a ~100+ MP image. If he had the lens quality and precision focal plane / standard mounting, he could project this onto a billboard at an unheard-of resolution.
adfm 1 day ago 0 replies      
Large format photography delivers amazing detail. Normally you'll see folks making their own plates past a certain point, but this DIY 14x32-inch camera on the Make Magazine site uses large format X-ray film, which you can get for cheap, since hospitals have been converting to digital equipment: http://makezine.com/2011/05/17/gigantic-diy-ultra-large-form...
aphrax 1 day ago 0 replies      
Inspiring me to dust off the darkroom equipment. I even got to love the smell of Fixer!
KaiserPro 1 day ago 1 reply      
Use gloves. If I remember correctly, part of the substrate is cadmium bromide.
mikeytown2 21 hours ago 0 replies      
Video about a guy who took this to the extreme and made a truck sized camera: https://vimeo.com/39578584 thumbnail is slightly nsfw
detaro 1 day ago 0 replies      
Related: here is a video showing how a tintype is prepared (basically the same process as his ambrotypes, but on a metal plate instead of glass)


imaginenore 1 day ago 0 replies      
I've always wanted to do that, but with a flatbed scanner in the back.
Launch, Land, Repeat blueorigin.com
276 points by Aaronn   ago   158 comments top 19
molyss 2 days ago 9 replies      
Am I the only one finding the last 2 BO videos highly disingenuous?

As mentioned numerous times, there's the whole "reaching space" vs "going into orbit" before landing.

More important to me is the fact that SpaceX streams its different tries _live_, taking the risk of crashing the rocket out in the open. How many vehicles did BO lose before achieving a vertical landing?

Oh, and what about the fact that they have total control over the location and time of the launch? It means they basically control weather conditions to a degree that no one launching anything useful into space has. For example, the last failed SpaceX landing was officially attributed to fog icing the leg locks. That's not going to happen if you launch on a clear day from the desert.

These are more comparable to the Grasshopper tries than to anything SpaceX has done recently: no horizontal speed, full weather control, no reporting on failed attempts, very limited weight. Even the last Grasshopper video seemed to have more side winds to counter than this 100k altitude video.

Even the format of the video itself screams "vaporware" to me. It looks like a trailer for a bad action movie, where some spacey something goes to space, separates and lands back in 15 seconds. Whereas the Grasshopper videos left me in awe, looping over them 5 times in a row, the BO ones just make me feel like they sh/could end with some sexual innuendo over their big rocket.

yborg 3 days ago 6 replies      
Honestly, I just roll my eyes now at these pissing-contest blog posts from Bezos. He does his team a disservice by suggesting that what they are achieving is more advanced than what SpaceX has done. It all looks like the approach the Soviets took in trumpeting various "firsts" in space in the 60s as the US methodically built capability far beyond what the Russians could sustain.

I am impressed by both companies' ambition, and SpaceX clearly has both the time and money advantage over Blue Origin. Let your accomplishments speak for themselves.

gus_massa 3 days ago 1 reply      
Impressive, but one important difference from the SpaceX rockets is that this rocket only goes up to space; it doesn't put satellites into orbit.

From: https://what-if.xkcd.com/58/

> The reason it's hard to get to orbit isn't that space is high up. It's hard to get to orbit because you have to go so fast.
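The quoted point can be made concrete with a back-of-the-envelope calculation using standard values for Earth's gravitational parameter and mean radius (a rough sketch that ignores drag and assumes a circular orbit):

```javascript
// Circular orbital speed at altitude h: v = sqrt(mu / (R + h)),
// with mu = G*M for Earth. Going this high is easy; going this fast is not.
const MU_EARTH = 3.986e14; // m^3/s^2, Earth's standard gravitational parameter
const R_EARTH = 6.371e6;   // m, mean Earth radius

const orbitalSpeed = (altitude) => Math.sqrt(MU_EARTH / (R_EARTH + altitude));

// At ~100 km, where New Shepard peaks with near-zero horizontal speed,
// staying in orbit would require roughly 7.8 km/s sideways.
console.log((orbitalSpeed(100e3) / 1000).toFixed(1) + ' km/s'); // "7.8 km/s"
```

That ~7.8 km/s of horizontal velocity, plus the heat shielding needed to shed it on the way back, is the gap between a suborbital hop and an orbital launcher.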

hayksaakian 3 days ago 1 reply      
I like it -- we're starting a 21st century space race between corporations rather than nations.
vonklaus 2 days ago 1 reply      
This is amazing, and a pretty remarkable feat that we are taking for granted. Space is super, super tough; the complex coordination of manufacturing something like this is being totally written off by many, but I assure you it is non-trivial.

A popular sentiment in that industry is that rocketry is like writing software composed of many modules, testing each module separately on Mac, then deploying the entire build on Linux. If it doesn't work, you don't just back out the conversion error or stray quotes you left in; your rocket explodes.

The engineering spend alone is massive, as is the damage to the company when a failure is syndicated across YouTube. Taking big risks is something we should be promoting.

We are in a technological renaissance and it starts with lowering launch costs to achieve realtime LEO satellite blanketing and distributed communication channels to connect to the other fucking 3 billion people without internet. Bezos is accomplishing something great, and we don't need to qualify that statement.

He and Musk are definitively the Jobs and Gates of the 21st century if you want to use the obvious cliche.

What Gates did. What Jobs accomplished. It was pretty fucking powerful. Musk and Bezos are sort of doing that, except both are working in at least 3 industries at that same scale.

I wish Blue Origin, Sierra Nevada, Firefly and all the other people in new space well. Nano-sats will provide realtime insight to the earth, people will be able to own a satellite in ~5-10 years because of these advancements.

this is good for all of us, and the only negative thing to say about it is that for god sakes Jeff, that rocket does look a bit like a stubby penis.

sandworm101 2 days ago 1 reply      
SpaceX = Space launch.

Blue Origin = Rollercoaster.

I really don't see why these companies are competing. They are in totally different markets. Sure, there is some technological crossover in that they both use rockets, but this is like comparing a Prius to a locomotive.

raldi 3 days ago 2 replies      
It sounds like Blue Origin rockets are only capable of sending payloads to space for just an instant before gravity pulls them back down to Earth. They're nowhere near capable of putting anything into orbit.

Is that correct? If so, what are they good for?

xgbi 2 days ago 1 reply      
"Our vision: millions of people working and living in space"

When you have only a few seconds above the "official" space altitude on a parabolic trajectory, I wouldn't say "working and living in space", and especially not "millions" at the same time.

Is it just me, or is this primarily a pitch video?

ChuckMcM 2 days ago 0 replies      
I think it's great that New Shepard is coming along, but I don't get how Bezos feels he is helping his cause when he says "people living and working in space" while he doesn't come close to reaching orbit. The difference between an orbital mission and a sounding rocket is enormous.

Now, the fact that he is getting closer than Virgin Galactic to having tourist flights outside the atmosphere? That is pretty cool and a fair comparison. Being able to out-execute Burt Rutan? That counts for a lot. But don't try to compare yourself to SpaceX until you're putting things into LEO and getting the hardware back to use again.

andrewtbham 2 days ago 0 replies      
Here is an animated video that shows what space tourism will be like. You will be in space for a few minutes. The view of the world from space will be amazing, plus you will be weightless. Not sure how long you will be up there or what the cost will be, but it looks awesome.


dba7dba 2 days ago 0 replies      
I wonder if SpaceX opening an office in Seattle was just to make it easier to hire engineers away from Bezos?
sailfast 2 days ago 0 replies      
This looks like a very complicated engineering achievement that undoubtedly will lead to advances in space travel and tourism.

That said, I didn't get to watch the launch and join in its success or failure, so I'm finding it difficult to actually care as much as other launches.

igravious 2 days ago 1 reply      
Is there a significance to ~100 km? Is this, roughly speaking, space -- where the atmosphere is so thin as to be almost negligible? Clearly the atmosphere thins gradually, so how do we define where space starts? Is the significance of ~100 km something to do with the effects of gravity at that altitude from the Earth's surface? Does ~100 km give you weightlessness? Or is Blue Origin going up to ~100 km because it's a nice round number that is roughly (whatever that means) in space? But aren't kilometres completely arbitrary?

Also, can people please stop knocking Blue Origin? We get it at this point, okay? I'm a huge fan of SpaceX and Elon Musk, but does Blue Origin have to lose for SpaceX to win? No. There's nothing in this post from Bezos bashing SpaceX as far as I can see. He's simply saying: look, we did it again with the same refurbished rocket. Good on them. May they do it again and again, and so may SpaceX. The next space race is on, happy days!

forrestthewoods 3 days ago 3 replies      
Is Blue Origin landing a booster that SpaceX is letting crash? Would their forces combined result in total re-use? I'm not sure what all the parts and roles are. Sorry for the dumb question.
iamcreasy 2 days ago 0 replies      
What I am really looking forward to is the bigger rocket from Blue Origin that will be able to achieve orbital velocity.
anjc 2 days ago 0 replies      
Article title sounds like somebody's attempt at a new MVP/PMF paradigm
sjg007 2 days ago 0 replies      
These guys played way too much lunar lander :)
jcoffland 2 days ago 2 replies      
This is awesome. A great complement to the work SpaceX is doing. To put it in perspective this rocket went about 10x as high as an average international airline flight but would still need to go about 4000x as far to reach the moon. Not sure about the 3 mile per hour impact with the ground on my way home from work. I suppose with a nice soft seat it would be fine but by the time United Airlines gets done with it you'll be packed in like an NYC cross town bus with a seat just as hard.

edit: got my facts straight

mchahn 3 days ago 3 replies      
One would think that using fuel to touch-down slowly is wasting fuel since they could use some kind of capture scheme with a parachute instead. I've read many times that the weight of the fuel is a big problem in spacecraft.
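For a sense of the trade-off, the propellant cost of a soft touchdown falls out of the Tsiolkovsky rocket equation; the delta-v (300 m/s) and specific impulse (260 s) below are illustrative guesses, not Blue Origin's actual figures:

```python
import math

G0 = 9.81    # standard gravity, m/s^2
ISP = 260.0  # sea-level specific impulse, s (illustrative guess)
DV = 300.0   # delta-v budget for the landing burn, m/s (illustrative guess)

# Tsiolkovsky: dv = ve * ln(m0 / m1)  =>  propellant fraction = 1 - e^(-dv/ve)
ve = ISP * G0
propellant_fraction = 1.0 - math.exp(-DV / ve)
print(f"{propellant_fraction:.1%}")  # 11.1% of the landed mass is propellant
```

A parachute trades that propellant mass for chute mass, recovery complexity, and far less landing precision, which is presumably part of why both Blue Origin and SpaceX went propulsive.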
CIA declassifies hundreds of UFO documents cia.gov
231 points by XzetaU8   ago   95 comments top 29
jimrandomh 5 hours ago 10 replies      
I'm surprised they're continuing the misdirection so many years later. "Unidentified flying object" means an aircraft, usually a military one (because civilian aircraft are easier to identify). A UFO over US soil would usually be a USAF plane, but the CIA ran (runs?) a reporting hotline because they might occasionally be foreign spy planes instead.

But of course, the Soviets were very interested (or the US believed they were very interested) in knowing about US military planes and their capabilities. There's an anecdote I recall reading - I apologize for not having the link handy - about how, at Area 51 (their airfield for testing experimental aircraft), they would leave big airplane-shaped objects on the tarmac to create cold spots for Soviets' spy satellites to photograph in infrared, corresponding to planes that didn't exist.

The reason I'm unable to find that anecdote is because the search term "Area 51", combined with any other search terms, brings up an enormous flood of incoherent crackpot ramblings. I'm pretty sure this is the result of a deliberate strategy: create a bunch of fake information, so that real information about US military capabilities will be harder to find. A few hints and a few taunts, and voila, every discussion about unidentified aircraft is guaranteed to have a schizophrenic walk in and start rambling about aliens!

ChuckMcM 5 hours ago 0 replies      
I really enjoyed those. I particularly like some of the descriptions.

In the early 80's I worked in the Image Processing Institute at USC where one of the UFO shows had us analyze some of the "best" photographs (at that time) of UFO sightings. We pulled all sorts of fun stuff out of the pictures, like the letters "9 oz" on the bottom of a pie plate type UFO, "old skool" photo doctoring where someone had taken the picture of a "ufo" and literally placed it on top of a picture of the sky, and then re-photographed the composite. As I recall there was only one picture we could not definitively rule out as a fake. But my best memory of that project was when the producer asked the camera man to get some footage of the "computer" and when he pointed it at the 11/55's front panel the producer said, "No, the computer!" and pointed at the tape drives. I knew we had top shelf folks on the team at that point :-)

jedberg 5 hours ago 0 replies      
You have to hand it to the CIA for taking advantage of the XFiles reboot to boost their social media a bit. Especially on 37 year old documents!
notthegov 5 hours ago 4 replies      
If you want to understand the UFO story better, you should watch Mirage Men about Air Force Office of Special Investigations Agent Richard Doty, a self-admitted UFO disinformation agent-


The intelligence agencies aren't hiding anything about UFOs from my experience. And I don't really think the USAF is either. But the USAF did tell various people that UFOs were real and they did run a psychological operation against Paul Bennewitz, destroying his life and forever obfuscating whatever truth there is about UFOs. (None in my opinion.)

Here's a good summary:


*Fixed link

berkeleynerd 54 minutes ago 0 replies      
If you're interested in the CIA's involvement with the UFO phenomena check out this article on the Robertson Panel.


(t)he "debunking" aim would result in reduction in public interest in "flying saucers" which today evokes a strong psychological reaction. This education could be accomplished by mass media such as television, motion pictures, and popular articles. Basis of such education would be actual case histories which had been puzzling at first but later explained. As in the case of conjuring tricks, there is much less stimulation if the "secret" is known. Such a program should tend to reduce the current gullibility of the public and consequently their susceptibility to clever hostile propaganda. The Panel noted that the general absence of Russian propaganda based on a subject with so many obvious possibilities for exploitation might indicate a possible Russian official policy.

The Panel took cognizance of the existence of such groups as the "Civilian Flying Saucer Investigators" (Los Angeles) and the "Aerial Phenomena Research Organization" (Wisconsin). It was believed that such organizations should be watched because of their potentially great influence on mass thinking if widespread sightings should occur. The apparent irresponsibility and the possible use of such groups for subversive purposes should be kept in mind.

Aqueous 5 hours ago 0 replies      
They declassified them in 1978. So these documents have been publicly available for a mere 37 years. Figures - what else are they hiding that they aren't hiding?
myth_buster 5 hours ago 0 replies      
I found the documentary "I know what I saw"[0][1] rather interesting.

In it, a Mercury 7 astronaut, former pilots, retired air force officers, Major Generals, etc. give accounts of UFO sightings.

Although produced by the History Channel, it features only the accounts of these distinguished people, with no spin that I recollect (if my memory serves me right).

[0] https://www.youtube.com/watch?v=jTEZeY-fNCU

[1] http://www.imdb.com/title/tt1579236/

sandworm101 3 hours ago 0 replies      
How much money does it take to get a US agency to issue a blog post in support of a TV series?

Granted, those in charge of the CIA blog are probably the right age to be die-hard X-Files fans, but after the Star Wars promotion at the White House I'm seeing lobbyists everywhere.

jonnyweirdteeth 6 hours ago 2 replies      
Good to see that the CIA has a little bit of a sense of humor about this kind of thing.

What a PR coup by the X-files reboot, as well.

Someone1234 5 hours ago 0 replies      
Honestly even if you don't believe in aliens visiting earth (and I do not), these documents are still an interesting read.

Just seeing how they investigated and the investigators opinions/thoughts/methods was quite interesting to me. Some more than others, some are very short and only contain a witness statement.

davidbanham 5 hours ago 0 replies      
All the documents that appear startling are actually just recounts of stories that appeared in a daily newspaper, marked as completely unsubstantiated.
isaiahg 2 hours ago 0 replies      
At this point we're pretty settled that alien life probably exists in some form out there. Whether that alien life could evolve to create technology and build spacecraft is another thing. But the even bigger thing is when you take into account just how vast the distances involved are. It's just so unlikely that alien life could exist, develop technology, and then be in the same neighborhood as us.
crapolasplatter 3 hours ago 0 replies      
"Below you will find five documents we think X-Files character Agent Fox Mulder would love to use to try and persuade others of the existence of extraterrestrial activity. We also pulled five documents we think his skeptical partner, Agent Dana Scully, could use to prove there is a scientific explanation for UFO sightings."

The most interesting thing about this is the lack of seriousness that the CIA puts into UFOs.

toupeira 5 hours ago 0 replies      
Title is misleading, text starts with "The CIA declassified hundreds of documents in 1978"
codenut 3 hours ago 0 replies      
Odd thing. I just watched the new X-Files last night and this kind of news is on HN frontpage the next morning.
partiallypro 1 hour ago 0 replies      
They had to have timed this with the X-Files premier on purpose
x3n0ph3n3 5 hours ago 0 replies      
Come on guys, stop conflating UFOs with alien spacecraft. UFOs are real, alien spacecraft are probably not.
samstave 4 hours ago 1 reply      
My first UFO story:

I was ~13 years old in the Civil Air Patrol; this occurred in Truckee, CA at the Truckee Airport.

We were doing standard marching exercises and our troop leader was a former SR-71 mechanic/maint person...

It was a full moon - and as our gatherings typically started after 5PM - it got dark fairly quickly.

We were marching and then looking to the mountain line in the horizon, we saw a light come up over the mountain line and fly vertically up at a steady clip as it was near the moon which had also just risen over the ridge.

We all watched, speculating what type of plane this was (Civil Air Patrol was like the cub scouts for the air force)

As we watched and guessed, this craft flew straight up from the mountain - then directly toward us.

It came to a complete hover just above us by about 300 feet. The craft was the classic triangular shape with large round white lights at each corner.

As it hovered - the troop leader said "OK KIDS - EVERYONE INSIDE NOW!!!" and he ushered us into the end of the hangar building that had our CAP office.

Ever since then.... I've had an interesting relationship with believing UFOs...

DanielBMarkham 4 hours ago 2 replies      
I love UFOs because UFOs tell us a lot about how we deal with non-reproducible observations. Walk outside and see a green, glowing light that hovers then disappears? You can walk outside again for the rest of your life and never see it again. Does that mean you didn't see anything? Of course not.

This really messes with people's heads. The rationalists will say that since we can't independently observe it, it might as well not exist. Folks who have a deep mythology as part of their worldview will simply incorporate their observation into their mythology.

Even cooler is the fact that something is obviously going on that we can't categorize or figure out. (See thunderstorm sprites and jets as a recent example). Is it one phenomenon? Highly unlikely. But beats me how many different and really cool things there are out there to discover. I'm sure some of these may take hundreds of years to nail down, if ever.

And then the government gets involved and -- get this -- starts deliberately fucking with people about it. Hey, that wasn't a new stealth bomber, that was probably aliens. While it may have confused the Soviets, it was also a deliberate attempt to mislead the voters. I'm not a constitutional expert but seems to me the one thing that ought to carry the severest penalty is government officials in a democracy deliberately trying to mislead the public about critical issues.

And then we have them pawning their B.S. off on mentally ill people. Nice. Very classy.

UFOs tell us a lot more about ourselves than they do anything from another planet. Unfortunately, much of what they tell us is rather unpleasant.

pjaytipp 5 hours ago 0 replies      
Reports of strange new energies & crafts also made science seem magical and helped fire the imagination of the next generation of scientists and engineers for the ongoing cold war. The UFO phenomenon was handy on a few fronts.
andrewclunn 1 hour ago 0 replies      
Just in time for the x files reboot.
joering2 2 hours ago 1 reply      
Is that really real?? [1]

Who on Earth would have such aircrafts in 1962 other than aliens??


jobu 5 hours ago 0 replies      
Obligatory xkcd link: https://m.xkcd.com/1235/

"In the past few years, with very little fanfare, we've conclusively settled the questions of flying saucers, lake monsters, ghosts, and bigfoot."

themagician 5 hours ago 0 replies      
The truth is still out there.
tannranger 5 hours ago 0 replies      
the silence is deafening
dangerboysteve 5 hours ago 0 replies      
Couldn't they have coordinated this with the launch of the new X-Files? Fox marketing sucks!
ilovefood 5 hours ago 0 replies      
I wonder why those numbers don't add up :>
Laaw 4 hours ago 0 replies      
Why does everyone always assume the government would hide aliens from the public?
Show HN: I made a site to catalogue 10,000 CC0-licensed stock photos finda.photo
310 points by davidbarker   ago   63 comments top 24
liamca 3 days ago 1 reply      
[Full disclosure, I work on a service called Azure Search]

Very nice site! Since your site is so much based around search, I thought I would pass on a few suggestions based on what I saw. If you happen to be using a search engine for your content such as Elasticsearch, Solr or maybe Azure Search :-), there are a few simple things you could add to make the experience a little smoother. Suggestions in the search box are nice, allowing people to quickly see results as they type. You could even add thumbnails of the images in the typeahead, such as you see with the Twitter Typeahead library (http://twitter.github.io/typeahead.js/). I also noticed that your search does not handle spelling mistakes or phonetic search (matching words that sound similar). Finally, through the use of stemming, search engines can often help you find additional relevant content. For example, if the person is looking for mice, but your content has the word mouse in it, this will bring back a match. Since you don't have a lot of content, this can really help people find relevant content.

Hope that helps.
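On the spelling-mistake point, here is a minimal sketch of what fuzzy matching looks like in Elasticsearch's query DSL; the field name `tags` is hypothetical:

```python
# A fuzzy match in Elasticsearch's query DSL: "fuzziness": "AUTO" tolerates
# edit-distance typos scaled to term length, so "wollf" can still match "wolf".
def fuzzy_query(field, text):
    return {
        "query": {
            "match": {
                field: {
                    "query": text,
                    "fuzziness": "AUTO",
                }
            }
        }
    }

q = fuzzy_query("tags", "wollf")
print(q["query"]["match"]["tags"]["fuzziness"])  # AUTO
```

The same dict would be sent as the body of a search request; Solr and Azure Search expose equivalent fuzziness options under different names.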

oliwarner 3 days ago 3 replies      
I like to think I am very conscious of copyright. I might not always adhere to it in my personal life (who can claim they do these days?!) but professionally, everything is done strictly legitimately. With that in mind... Am I the only person who is slightly uncomfortable with the phrasing around PD and CC0? With other copyright licenses there is somebody saying they own something.

I'm particularly uncomfortable with Flickr's "no known copyright restrictions". What if people infer PD from that and upload it somewhere else under CC0? Then it gets sucked into this finda.photo? Yuck.

As for finda.photo, why are you truncating the source down to just a domain name?! Many of the sources include proper uploader details so why aren't you copying those over and displaying them?

I know you're not required to, but attribution isn't a bad thing if you can give it. I for one would be much happier using a photo if I knew exactly where it came from.

BrunoJo 3 days ago 2 replies      
I always use https://pexels.com. They also have only CC0 images.
brandonheato 3 days ago 1 reply      
Why not just use flickr? A search for images with "No known copyright restrictions" returned 663,502 results.https://www.flickr.com/search/?license=7%2C9%2C10&text=&adva...
m-i-l 3 days ago 1 reply      
Looks good. Feedback from a designer I showed this to: it would be useful to search based on aspect ratio (landscape vs portrait at minimum).
vortico 3 days ago 0 replies      
I like the domain, the design is usable, and the database is great. This has it all.
lucaspiller 3 days ago 1 reply      
Very nice! What are you using to search the photos by colour and feature?
petecooper 3 days ago 1 reply      
Adding Alana to the list of CC0-only stock photos.

[1] http://alana.io/

[2] http://alana.io/about-us/

Flimm 3 days ago 0 replies      
One of the about pages says that the photos are in a GitHub repo, which sounds really cool, until you follow the link and find the repo hasn't been shared yet. Hopefully it's just a matter of time before it is shared.


j_lev 3 days ago 1 reply      
Hi - for some reason the search bar keeps changing my search terms eg Australia --> Australium
frantzmiccoli 3 days ago 0 replies      
It seems that you do have a valid SSL certificate but https://finda.photo/search?q=test is not working properly.
franciscop 3 days ago 0 replies      
Check also http://pixabay.com/ for Public Domain pictures, I've found many awesome gems there
trtmrt 3 days ago 1 reply      
Firstly, it is slow... Secondly, I typed "wolf" and I got: 3 foxes, 1 lion, 2 monkeys, 1 snow house and 2 wolves that do not look like wolves!?
chrxr 3 days ago 1 reply      
http://finda.photo/image/14847 - Tags are weird. This is not a dog, mouse, canine or feline. It's not sitting. It has 'eyes' but I think that might be irrelevant. Although I would agree that ferrets (not an included tag) are cute, I'm not sure I'd describe them as domestic. Otherwise, great!
hantusk 3 days ago 0 replies      
An idea: You could use this pretrained machine learning library to classify your images/improve search even more:https://www.reddit.com/r/MachineLearning/comments/3yt4o5/dee...
andreash 3 days ago 2 replies      
what is the diff between this and pixabay or pexels.com? Wish there was one meta-search engine to cover them all :)
uvesten 3 days ago 1 reply      
I really like both the selection and the color chooser! Did you do any manual selection of the photos?
quaffapint 3 days ago 0 replies      
Might be a good place for an infinite scroll when going through pages of image results - one less click they have to do.
_spoonman 3 days ago 0 replies      
Just a really great job on this. Love it.
edpichler 3 days ago 0 replies      
Just curious: where did the owner possibly find all these photos to fill up the database?
fareesh 3 days ago 0 replies      
I ran into a Laravel error on the homepage due to the server running out of memory.
jlis 3 days ago 0 replies      
nice one!
juiced 3 days ago 0 replies      
My first search on "new york" returned nothing...
thecodemonkey 3 days ago 0 replies      
If you're still a little concerned with licensing and copyrights, I would recommend taking a look at www.graphicstock.com - you just pay a flat monthly or yearly fee and you can download as much as you want.

Disclaimer: I work for the company behind GraphicStock. Oh, and we're hiring!

CA assembly member introduces encryption ban disguised as human trafficking bill asmdc.org
272 points by asimpletune   ago   91 comments top 22
AJ007 4 days ago 1 reply      
Here is the bottom line -- if smartphones can not be securely encrypted there are a lot of things we can't use them for:

- Phones aren't going to replace credit cards
- You will need to type in all your passwords each time you use them
- Two Factor authentication will need to be done with a different device
- Healthkit and other medical records will need to be moved elsewhere
- Any profession where there are very serious consequences for leaked communication will no longer be able to do it through their smartphone (lawyers, doctors, executives.)

Basically losing or having your mobile phone stolen will be equal to a burglar pulling up to your house or office and driving away with every sensitive document and record in the back of a van.

No tech company wants to see the end of the mobile revolution. Forget the national interest side to this, anyone supporting broken encryption basically looks like a total moron.

Steko 4 days ago 1 reply      
Assemblyman Jim Cooper represents Elk Grove a city of ~160K just South of Sacramento. Apple is the second largest employer in Elk Grove [1] and currently expanding their footprint there by several thousand jobs [2].

Hopefully there's a primary challenger or soon will be, I'll donate.

[1] https://en.wikipedia.org/wiki/Elk_Grove,_California#Top_empl...

[2] http://www.bizjournals.com/sacramento/news/2015/12/07/someth...

{note: [2] gives a significantly larger current headcount than [1]}

joshka 4 days ago 2 replies      
Same problem pretty much with the NY bill. Buy a phone that is unlocked / decrypted at the time of sale. The next step is for the user to login and encrypt. I don't see how this bill actually fixes that. I guess this hinges on the definition of authorized when it comes to encrypting something I own. I hope I don't require authorization to do this.

A few questions I posed to the NY senator earlier this week:

1. Would you use such a phone knowing that the government / Apple / seller of the phone could easily get into it?

2. Would it be legal for someone in the legal profession to use such a phone without being disbarred for negligence of the right to private communication?

3. If sold unlocked, and then later locked (i.e. every phone right now), where's the change?

4. Where does the 4th amendment fit in with this?

5. What should we do with old phones that don't support this? Dump them in the bay I guess?

6. Where are the technical experts that are telling you that this is actually feasible to do securely and safely? I'm looking hard, but only seeing negative responses from those that know what they're talking about.

7. Who's responsible for fixing the broken device once the master key gets leaked? The manufacturer? The state of {CA/NY}?

8. The list goes on.

fiatmoney 4 days ago 7 replies      
For a long time gun owners have had the singular pleasure of having massively intrusive, incoherent regulations written by people with no technical understanding of the subject matter. It's nice to finally have some company.
Dowwie 4 days ago 3 replies      
"Full-disk encrypted operating systems provide criminals an invaluable tool to prey on women, children, and threaten our freedoms while making the legal process of judicial court orders useless."
jonathaneunice 4 days ago 0 replies      
This is the game. There will never be a "Prohibiting Encryption and Preventing Privacy Act." It will always be a ostensible act of patriotism and protection. Combatting terrorists, child molesters, sex traffickers, drug cartels, money launderers, and other easy-to-demonize scary folk.
cmurf 4 days ago 1 reply      
Cryptography yields two components: encryption/decryption, and authentication. Break one of those, and they're both broken. And that's what really bothers me about all of these politicians who only fixate on the encryption part. They're oblivious to extreme risk introduced by breaking authentication.
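A stdlib-only sketch of the authentication half of that point: a MAC tag proves ciphertext hasn't been tampered with, and any backdoor that lets a third party forge valid tags breaks that guarantee for everyone (the key and messages below are purely illustrative):

```python
import hashlib
import hmac

key = b"illustrative-shared-secret"  # illustrative, not a real secret
msg = b"ciphertext bytes"

# Sender appends a MAC tag; receiver recomputes it and compares in constant time.
tag = hmac.new(key, msg, hashlib.sha256).digest()
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())

# Changing a single byte of the message makes verification fail -- that is the
# authentication guarantee any "lawful access" mechanism would have to weaken.
tampered = b"ciphertext byteZ"
assert not hmac.compare_digest(tag, hmac.new(key, tampered, hashlib.sha256).digest())
print("tag verifies; tampered copy does not")
```

Modern AEAD ciphers fuse the two components, which is exactly why you can't break decryption without also breaking integrity.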
trhway 4 days ago 1 reply      
>Full-disk encrypted operating systems provide criminals an invaluable tool to prey on women, children, and threaten our freedoms

"If You're Typing the Letters A-E-S Into Your Code You're Doing It Wrong"

mdip 4 days ago 0 replies      
It's a political tactic that's been used forever.

When the legislature wants to do something unpopular (or even stupid which is what this is), associate it with the "Evil Of The Era" and propose the bad legislation as the solution to said evil. These days, popular "Evils" are Human Trafficking, Child Porn, and "Terrorism". The first two evoke extreme emotion of crimes committed against the most innocent of victims, so they're the best choice in this scenario. In the 80s-90s it was anything to reduce "Crack Babies" or win "The War on Drugs".

It's an old trick -- when people talk about logical limits placed on the first amendment, you'll hear the phrase "Shouting Fire in a Crowded Theater". Most of those who utter it don't realize that this phrase originated as part of a ruling that had nothing to do with "fire" or a "crowded theater" but was made to curtail the dangerous speech of opposing the draft during World War I[1].

[1] https://en.wikipedia.org/wiki/Shouting_fire_in_a_crowded_the...

kabdib 4 days ago 1 reply      
The bill says "...shall be capable of being decrypted and unlocked by its manufacturer or its operating system provider."

It doesn't say how, and it doesn't give a time frame.

So: Provide an API to accept a key. Allow two key attempts per second. Start with key 0x0000..000, next try 0x000..0001. This is guaranteed to complete, you just have to be prepared to wait a while.

(Yes, I know that courts are unhappy with this kind of thing. But the bill is a crappy bill, in many regards).
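For scale on "prepared to wait a while": a sketch of the worst-case search time at kabdib's two attempts per second, assuming a 128-bit key (the key size is an assumption; the bill specifies none):

```python
# Exhaustive key search at 2 attempts/second over a 128-bit keyspace.
ATTEMPTS_PER_SEC = 2
KEYSPACE = 2 ** 128
SECONDS_PER_YEAR = 365.25 * 24 * 3600

worst_case_years = KEYSPACE / ATTEMPTS_PER_SEC / SECONDS_PER_YEAR
print(f"{worst_case_years:.2e} years")  # ~5.39e+30 years
```

"Guaranteed to complete" in roughly 10^20 times the current age of the universe, which is presumably the point.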

mmanfrin 4 days ago 0 replies      
What a severe irony for this idiot to be on the Privacy and Consumer Protection Committee.
pdkl95 4 days ago 0 replies      
So the 2nd crypto war has move beyond mere fighting words. The long term battlefield is usually the court of public opinion, so I hope Silicon Valley recognizes this challenge to their power. Tech firms should have been attacking this rhetoric hard when it started, but accusing politicians of not understanding math/crypto has been a common response.

Do you want crypto to work? Or do you want to be forced to replace crypto with security theater? Is your business actually willing to actively protect a free internet? Or is it easier to assume this is "someone else's problem"?

I guess we will see which companies defend themselves, and which companies think being a collaborator is more profitable?

trhway 4 days ago 1 reply      
so basically there should be 2 components sold separately - "dumb GSM connectivity module" and "smart OS module" (iPod basically). The latter not having cell phone connectivity wouldn't be subject to that law and thus can have FDE/whatever. The GSM module can just attach to the "iPod" back like external battery.
ianamartin 4 days ago 1 reply      
The "shall" wording is going to keep this in courts for years, even if it does pass.

Shall is the source of more litigation than any other single word in the English language. It can always be debated because no one knows if it reliably means "can", "must", "may", "might", "will", "should", "ought to", or "is allowed to".

All the above uses can be supported with evidence. Because language evolves.

It's a killer word for any law or contract and guaranteed to be disputed.

I am not a lawyer, btw.

But if this somehow passes, it will get tossed because of the wording.

ams6110 4 days ago 1 reply      
Would Apple have the balls to stop selling phones in California?
passwordreset 3 days ago 0 replies      
Where the hell is Anonymous in all this? Shouldn't they be out there doxxing and haxxoring and whatever it is that they do to these kinds of people? I'd figure if someone stands up and says "Encryption should be illegal", they probably don't encrypt jack shit themselves, and they're probably easy targets. They might even take the hint and say "shit, I should have encrypted my internetz" and change their stance. Eh, doubtful.
LinuxBender 4 days ago 1 reply      
What impact might this have on tax revenue from said controlled devices no longer being purchased in California?

Do we start referring to encrypted devices without back doors as contraband?

peteretep 3 days ago 1 reply      
Someone needs to make a big deal about how this is bad for business because it allows the Chinese/Russians/French/Welsh/whoever to steal American Innovations(TM) and then write to whomever this person will be challenged by in upcoming elections with "x is anti American Business" talking points. Both sides can play "Won't someone think of the children?"
tdkl 3 days ago 0 replies      
I'm waiting on the day something like this gets proposed in all of the EU states, for the same BS reasons.

As a matter of fact, I'm certain that the current leaders of the EU countries who publicly invited immigrants to their state (we all know the most prominent one) were considering this as an easy way to change the privacy laws - and be applauded for it.

jegutman 4 days ago 0 replies      
Might as well try to ban speaking pig latin in public.
rdudek 4 days ago 0 replies      
Fair question, if I wanted to buy a phone now, which manufacturer/OS comes with ability to do FDE and said party does not have a copy of the key?
Why the Sun 2 has a message Love your country, but never trust its government nohats.ca
255 points by longwave   ago   58 comments top 14
lewiscollard 16 hours ago 1 reply      
HN hug of death, so cached version (text-only):


_0ffh 16 hours ago 1 reply      
When digging inside my Acorn Risc-PC I once found a message along the lines of "Help! We are being kept in a cellar and forced to write software!"... :-)
Kristine1975 12 hours ago 3 replies      
Oracle did something similar with the network protocol of its database driver. Not to prevent copying, but to prevent others from writing their own drivers: http://dacut.blogspot.de/2008/03/oracle-poetry.html
jamescun 16 hours ago 1 reply      
With regards to the copy protection aspect, it was a pretty common practice back in the day (perhaps even still is).

In the Macintosh Classic's ROM, there were debug sequences that not only would display pictures of the development team, but write the text "STOLEN FROM APPLE COMPUTER" to the screen. http://appletothecore.me/files/mac_se_easter_egg.php

dsugarman 13 hours ago 3 replies      
If anyone was curious, this quote is credited to Robert A. Heinlein, a science fiction writer.
Hello71 15 hours ago 2 replies      
> NFS ran in plaintext and used the senders IP address for authentication

and it still does

tokenizerrr 17 hours ago 1 reply      
I may be missing something, but what is the link between the DES chip and the message being triggered? Because the message shouldn't be triggered without doing the convoluted key combination, right?
johngalt 12 hours ago 0 replies      
I had a Fluke network tester that would put "elvis lives" in the padding of its ping tests.
webXL 8 hours ago 1 reply      
Isn't there a name for this? Poison pill? Logic bomb? Honey token?


eternalban 14 hours ago 0 replies      
Sun Microsystems Founders Panel - CHM [1].

[1]: https://youtu.be/dkmzb904tG0?t=1h38m29s

United857 10 hours ago 0 replies      
Apple does something like this as well:


draw_down 16 hours ago 0 replies      
Nor its marketing industry.
alienjr 17 hours ago 0 replies      
Ask NSA...
lultimouomo 17 hours ago 3 replies      
I wonder if this trap street message, which was inserted in some code initializing a DES chip, was chosen because it was rumored that NSA had backdoored DES. The article doesn't mention it, but it would be quite a coincidental choice!
GoQt github.com
277 points by T-A   ago   97 comments top 11
zanny 1 day ago 5 replies      
Qt has evolved fantastically since Digia's acquisition. Just throughout the 5.0 series it has gained incredible feature support for almost anything from full multimedia faculties to websockets to a revitalized 3d rendering engine. And QML is such a breakout success in UI design and just general applications programming compared to the competition - if you have not, you must try coming up with a simple program idea and then try implementing it in QML. Compared to any other tech - GTK, HTML, even Android's ADT the mechanism behind QML is incredibly elegant.

I dream of seeing a day in ten years or so where everyone expects write once, run everywhere native look-and-feel applications out of Qt, a toolkit that provides you the mechanism to access the full power of your target devices without having to lock yourself into whatever the vendor's SDK is and thus being forced to rewrite the same program 5 times for 5 different target devices.

It's not there yet - just the other week I was arguing with someone on reddit about the usability of Qt on mobile and conceded it lacks a prebaked gesture support library to easily just do a two finger swipe rather than having to use the multitoucharea type to specify how two finger motion works - but it is getting so very close.

u14408885 1 day ago 1 reply      
Whenever I hear of a golang binding to a library implemented in C++ I'm always interested to see how they map the class hierarchy to golang's type model, which doesn't have traditional inheritance.

Does Qt's class hierarchy map cleanly to Go? Is it just interfaces everywhere?

(I'm not trying to start a language war, just genuinely curious.)

shmerl 1 day ago 4 replies      
How does it manage C++ bindings? Rust struggled with that, and it has a lot of issues because of compiler-specific name mangling, etc.


VeilEm 1 day ago 1 reply      
elcapitan 16 hours ago 0 replies      

And people make fun about Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch in Wales.

l1n 1 day ago 1 reply      
So does this mean you could write a QT application in Go on Linux and cross compile the whole mess for Windows?
josephcooney 1 day ago 1 reply      
I like LiteIDE, I will have to give this a shake.
rhodysurf 1 day ago 0 replies      
Whoa cool. I was working on a fork of his original goui library and updating it for Qt5. Awesome to see this and I'm pumped to not have to keep working on the other fork.
ozzmotik 1 day ago 0 replies      
my phone is showing there being a lot of empty comments from five decades ago. clearly signs consistent with time travel.
api 1 day ago 4 replies      
Awesome, but those are going to be some fat static binaries. Of course, I get the sense that shared libraries are being abandoned, so that seems to be how things are going. You have to ship Qt with any Qt app anyway.
malkia 1 day ago 3 replies      
While Qt is awesome, I'm not so sure about its recent license change. I know lots of people would love that, but for other programmers (stuck in big companies) this simply means that they can't use it, as money for commercial licenses won't always be approved, even if the project does have a big budget.
The New York Times Introduces a Web Site (1996) nytimes.com
218 points by danso   ago   94 comments top 15
danso 3 days ago 5 replies      
The oldest snapshot on Archive.org is from November 1996: http://web.archive.org/web/19961112181513/http://www.nytimes...

Back then, they even had a low-bandwidth version of the site: http://web.archive.org/web/19990117023050/http://www.nytimes...

The website included various tutorials on how to use it, including a guide that covers the different browsers. None of the browsers listed are actively developed today: http://web.archive.org/web/19961112182937/http://www.nytimes...

edit: A couple of other observations:

- How many other content websites have published for nearly this long and yet have their oldest articles remain on their original URLs? Most news sites can't even do a redesign without breaking all of their old article URLs.

- I like this Spiderbites feature -- a sitemap of a year's worth of articles (likely for webcrawlers at the time): http://spiderbites.nytimes.com/free_1996/

morgante 3 days ago 5 replies      
It's pretty impressive how good a job the NYT has done of maintaining historical articles.

You can even find quite old articles from key historical times and they're presented just like articles today. For example, the famous Crittenden Compromise is at http://www.nytimes.com/1861/02/06/news/the-crittenden-compro...

esaym 3 days ago 0 replies      
And thanks to them, we have the world's greatest Perl debugger/profiler: https://metacpan.org/pod/Devel::NYTProf
aaronbrethorst 3 days ago 2 replies      

 The electronic newspaper (address: http:/www.nytimes.com)
hilarious, but then again the colon-double-slash still isn't clear to most people.

jedberg 3 days ago 1 reply      
It's funny because this is the digital version of an article that was printed in a newspaper about that newspaper going digital.

If only they had any idea of the pain they were about to cause themselves. :)

ChrisArchitect 3 days ago 0 replies      
briantmaurer 3 days ago 1 reply      
I would love to see the complete evolution of the home page.
Haul4ss 3 days ago 0 replies      
> "The market is booming for newspapers on the World Wide Web," Mr. Kelsey said.

Not anymore. :)

jcoffland 2 days ago 0 replies      
I find it interesting that they explicitly called out reporting that does not appear in the newspaper. They were on the web early, but with caution.

> The New York Times on the Web, as the electronic publication is known, contains ..., reporting that does not appear in the newspaper, ..."

mc32 3 days ago 3 replies      
My impression is the Mercury News from San Jose had an earlier, if paid, presence.

Funny how the NYT wanted to charge nearly two dollars to allow you to print older articles. Asking people to pay for their own digitization.

spacefight 3 days ago 0 replies      
Who had their first website online back then in 1996 as well?

raises hand

Good memories... Claris Home Page!

hackuser 3 days ago 5 replies      
After 20 years, they still are adapting to the 'new' platform. Look through their site with fresh eyes: If you were designing a news website (rather than moving a newspaper to this new platform), how many design, UI and functionality choices would you make differently?

A quick start:

* The separation of different forms of content: They don't really mix text with video, images and graphics, even though most web-native bloggers will do it. They seem to lack fluency with mixing media; it's a project for them. They'll staple a video and decorate text with images and graphics, but they don't really communicate with it; they don't say, 'here's how Clinton responded to Sanders:' <video>, or, 'here was the scene when the earthquake struck' <video>, or even in a movie review, here's what the scene looks like: <video> or <image>. Instead, they try to describe the visual with text. Even explanatory graphics are a separate, special production, on a separate page.

* The font in their title: Back when printing fancy fonts was a technological feat, this font communicated that they were serious and sophisticated. Now, if you step back and ignore the history, it looks like a kid playing with fonts. (Look at it this way: would you ever use that font on a website you were designing?). It says, insists even: We're anchored to the paper age and will never let go. We're the old, dying generation. If you want something new, go elsewhere.

* The discoverability of content: Obviously mimicking a newspaper, but a bad choice for the web. How many links are on that home page (scroll down)? And even more content doesn't even appear there. All that hard work and content, unlikely ever to be found, buried and lost forever. It's tragic. But that's what they did in the hard copy newspaper so I guess it's ok.

* Also, where are stories updated since I visited a couple hours ago? Oh look, if I look at every link a red 'updated' indicator is next to some links (just like the web 20 years ago!), which I see if I examine every one of them (and how do I identify brand new links in this massive page of links?) - but where in this multi-page story are the new parts? I guess I'll just re-read the whole thing.

I say this all out of love. They are a very important institution. The news business is hard enough; stop handicapping yourselves! From the outside they look like they still, in 2015, haven't fully embraced the new technology. What would you say about another business' web team (that was not adapting a newspaper to the web) that produced a site that looked like this? Egads. [1]

EDIT: Some minor edits and additions

[1] I'm not blaming the web developers; I assume they are working within the general constraint of: Make it look like the newspaper.

plg 3 days ago 1 reply      
come on NYTimes, update the iOS app for iPad Pro --- it currently is gigantic (I presume because the screen design is just a scaled-up version of the iPad Air)
briandear 3 days ago 0 replies      
"..the Web as being similar to our traditional print role -- to act as a thoughtful, unbiased filter and to provide our customers with information they need and can trust."

Unbiased? Some quality reporting to be sure, especially when politics aren't involved, but they jumped the bias shark a long time ago.

VeilEm 3 days ago 4 replies      
I like to call the incognito window my nytimes reader. I paid for the nytimes for a bit but it costs more per week than a monthly netflix subscription and it feels stupid to pay for not knowing how to use the incognito window. It's like a "I don't know how to use software" tax.
Use JSON files as if they are Python modules github.com
266 points by bane   ago   62 comments top 18
jamesdutc 23 hours ago 5 replies      
The mechanism is called "import hooks":



You can see the import hook added here:


Import hooks are a great feature in Python.

Take the `import` mechanism in the language and reduce it to its barest theoretical formulation - what does it really do? (Name binding.) Think of all the other features in your application that can be reduced to this. (e.g., configuration management) Use the constraints of the import mechanism to guide the design of this feature, and use import hooks to implement it. You'll end up with an implementation that is very "close" to the core language. I would assert that this closeness strongly suggests correctness and composability.

Less philosophically, think of the Python virtual machine as a system with lots of "safety valves" and "escape hatches." Import hooks are one such safety valve. Think about all the things you could do easily by hooking into module importing. (Real use-cases: loading code from bitemporal object databases, deprecation of modules, relocation of modules, configuration management, &c.)

Of course, there are languages that are much more flexible than Python in this respect. Python aims for a practical flexibility, and I find that, in practice, Python strikes a nice balance.
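The mechanism the parent describes can be sketched in a few lines. Below is a minimal meta-path finder/loader of my own (an illustration of the technique, not jsonsempai's actual code) that exposes any `<name>.json` found on `sys.path` as an importable module:

```python
import importlib.abc
import importlib.util
import json
import os
import sys


class JsonFinder(importlib.abc.MetaPathFinder):
    """Offer any <name>.json found on sys.path as an importable module."""

    def find_spec(self, fullname, path=None, target=None):
        search = path if path is not None else sys.path
        for entry in search:
            candidate = os.path.join(entry or ".", fullname.rpartition(".")[2] + ".json")
            if os.path.isfile(candidate):
                return importlib.util.spec_from_loader(fullname, JsonLoader(candidate))
        return None  # defer to the normal import machinery


class JsonLoader(importlib.abc.Loader):
    def __init__(self, filename):
        self.filename = filename

    def create_module(self, spec):
        return None  # use the default module object

    def exec_module(self, module):
        with open(self.filename) as f:
            # expose each top-level JSON key as a module attribute
            module.__dict__.update(json.load(f))


sys.meta_path.insert(0, JsonFinder())
```

With a `settings.json` next to your script, `import settings; settings.debug` then works — which is exactly the "name binding" reduction above: the hook only decides what object the name gets bound to.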

jstoiko 1 day ago 1 reply      
My favorite is this issue: https://github.com/kragniz/json-sempai/issues/7

> Ever since I saw this I have been unable to sleep. Please fix.

vog 18 hours ago 2 replies      
From the examples:

 >>> from jsonsempai import magic
 >>> from python_package import file
 >>> file
 <module 'python_package.file' from 'python_package/file.json'>
I believe it is not a good idea to teach people to import something named "file", as that overrides Python's builtin class "file".

(On the other hand, that class is usually instantiated via calls to "open", so the class name "file" is unused in most programs dealing with files.)

hyperion2010 1 day ago 0 replies      
IIRC there is a moment of sheer terror in this talk by David Beazley [0] where he shows how to import an xml file as a python module.

0. https://www.youtube.com/watch?v=0oTh1CXRaQ0

nemothekid 1 day ago 0 replies      
Really love the fact the import is named `from jsonsempai import magic`.

If this were to be used seriously, it would make sense to other devs that the import named magic is doing some crazy stuff.

latenightcoding 1 day ago 2 replies      
I did the same with Perl in 10 minutes: https://git.io/vzKDs

it was fun I will check it tomorrow when I'm back from the pub

arthurcolle 1 day ago 2 replies      
Why do all the comments act like this is bad? Seems fine to me. Explanation would be appreciated :)
BinaryIdiot 1 day ago 1 reply      
As a JavaScript developer for the past 5 years or so, when jumping into Python I certainly miss the simple require(<json file>) which this seems to replicate into Python pretty well!

Having said that I don't think that I would use it for serious things since this isn't really the Python way and I would like my code to be most understood by others. Neat though!

amelius 15 hours ago 1 reply      
The enormous downside to this approach is that once you want to load a json file dynamically, e.g. with a filename that comes from a dynamic string, you're stuck and you have to come up with a completely different approach and rewrite your code.
StavrosK 13 hours ago 1 reply      
Excepting the cringemarsh that is "import <jsonfile>", this is actually very useful. This looks very unpythonic:

my_json["this"]["can"]["be"] == "nested"

and is impossible to write defensively and cleanly after the first call:

my_json.get("this", {}).get("no way to not crash here")

Something that could do:

my_json.this.can.be == "nested"

With a default value would be very useful. Does anyone know something like it, or should I whip a module up in an hour?
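A minimal sketch of the kind of wrapper the parent is asking for (the `Chain` name and its `.get` semantics are my own invention for illustration, not an existing library):

```python
class Chain:
    """Attribute-style access into nested dicts; a missing key yields an
    empty Chain instead of raising, so lookups never crash mid-path.
    (Limitation of this sketch: a value that is literally None looks the
    same as a missing key.)"""

    def __init__(self, data=None):
        self._data = data if isinstance(data, dict) else {}

    def __getattr__(self, name):
        value = self._data.get(name)
        if isinstance(value, dict):
            return Chain(value)
        return value if value is not None else Chain()

    def get(self, default=None):
        """Resolve a dead-end lookup to a default value."""
        return self._data if self._data else default


doc = Chain({"this": {"can": {"be": "nested"}}})
print(doc.this.can.be)               # -> nested
print(doc.no.such.path.get("n/a"))   # -> n/a
```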

zippoxer 10 hours ago 1 reply      
> Have you ever been kept awake at night, desperately feeling a burning desire to do nothing else but directly import JSON files as if they were python modules

I almost died laughing at this. Feels like the author is aware that this scenario is probably the only one where such a library can be mission-critical.

hobarrera 1 day ago 3 replies      
> Disclaimer: Only do this if you hate yourself and the rest of the world.

I'm curious why the author would do this. Just for the fun in it?

bsg75 8 hours ago 0 replies      
I'm not sure which is the better/worse idea:

"JSONx is an IBM standard format to represent JSON as XML. The appliance converts JSON messages that are specified as JSON message type to JSONx. The appliance provides a style sheet that you can use to convert JSONx to JSON."


kelvin0 15 hours ago 0 replies      
Awesome, although I don't quite have a use for it right now. Next we could:

* Import HTML/CSS/JS/DOM via a URL: https://news.ycombinator.com

* Import XML?

* Import any_structured_data!

moonpompey 1 day ago 0 replies      
Ah. kragniz must be the Damian Conway of Python.
PaulHoule 1 day ago 0 replies      
... and I was writing SPARQL queries against my Java source code yesterday
jm666 14 hours ago 0 replies      
These jokes are getting rather sophisticated. For a few (head-scratching) minutes, I thought this was meant seriously.
brhsiao 21 hours ago 1 reply      
Some people are saying JSON Python Object Notation. Here we go! https://github.com/brhs/pon

(I used OP's work as a reference. Amazing you can do this in so few lines of code.)

Worlds Oldest Torrent Is Still Being Shared After 4,419 Days torrentfreak.com
210 points by yabatopia   ago   87 comments top 10
cesarb 6 hours ago 2 replies      
Speaking of fan-made and Matrix, a long time ago there was a fan-made film in the Matrix universe called The Fanimatrix (https://en.wikipedia.org/wiki/The_Fanimatrix). Around a decade ago, I downloaded it through BitTorrent. Then a few years ago, I found it in my old files, still next to its original torrent file. Ever since then, I seed it on and off, in the vain hope that anyone knows it exists, has its torrent file (or magnet link), and wants to get a copy of the full file.

If somehow this torrent becomes live again, it's even older than the Matrix ASCII mentioned in this article, by a few months.

 Name: The-Fanimatrix-(DivX-5.1-HQ).avi
 Hash: 72c83366e95dd44cc85f26198ecc55f0f4576ad4
 Created by:
 Created on: Sat Sep 27 23:08:42 2003
 Piece Count: 516
 Piece Size: 256.0 KiB
 Total Size: 135.0 MB
 Privacy: Public torrent

deckar01 10 hours ago 4 replies      
I really like the idea of torrents being used to distribute anonymous derivative works of copyrighted media.

I recently created a fan edit of "Star Wars: Episode I - The Phantom Menace" for my dad for Christmas. I removed Jar Jar Binks and the entire Gungan subplot. I tried to keep the edits as subtle as possible, and I learned a lot about the editing techniques and transitions used in the film. I wanted to release it, but I decided that the risk outweighed my confidence in my anonymity skills.

airza 11 hours ago 1 reply      
if by "Warner Bros. is not known to go after this type of fan-art" you mean "Warner Bros went after someone extremely aggressively for this type of fan-art" https://en.wikipedia.org/wiki/Wizard_People,_Dear_Reader#Pre....
jedberg 11 hours ago 10 replies      
Slightly off topic, but I find the fascination with ASCII-arting things interesting.

I grew up in the days of 2400 baud modems and even ran a BBS briefly. At the time, ASCII art was the only thing you could do to differentiate yourself.

Nowadays I suppose it's a combination of nostalgia and ease of transferring since pretty much every system ever has a way of reading ASCII.

But I wonder how long the trend will last -- the majority of internet users don't have "nostalgia" for ASCII anymore [0] and there are at least a few image and video formats that are becoming almost as ubiquitous as ASCII readers.

[0] I was recently on a discussion with some folks I used to work with at university about how our old workplace was no longer offering shell accounts to the students because they weren't being used. This made us all sad since most of us learned all of our command line foo at that workplace.

grecy 11 hours ago 5 replies      
I'd be interested to learn about the software that created the ASCII version of the movie.

Is there a simple way to do this, or would they have made something custom?
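The core of most such converters is surprisingly simple: map each cell's brightness onto a ramp of increasingly dense glyphs. A minimal sketch (the ramp string and the fake frame are arbitrary choices of mine; a real movie converter would additionally decode and downscale each video frame to terminal resolution):

```python
RAMP = " .:-=+*#%@"   # densest glyph last; the exact ramp is a matter of taste

def frame_to_ascii(gray):
    """gray: 2D list of 0..255 brightness values, one entry per output cell."""
    lines = []
    for row in gray:
        # scale brightness 0..255 down to an index into the glyph ramp
        lines.append("".join(RAMP[min(v, 255) * (len(RAMP) - 1) // 255] for v in row))
    return "\n".join(lines)

# a fake 4x8 "frame": a bright diagonal on a dark background
frame = [[255 if abs(r * 2 - c) < 2 else 20 for c in range(8)] for r in range(4)]
print(frame_to_ascii(frame))
```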

petra 12 hours ago 3 replies      
Actually torrents are pretty bad for storing "rare" files. Maybe someone should offer a fix.
eximius 9 hours ago 1 reply      
For those of us who feel uncomfortable going to torrentfreak on a work computer, what is the torrent?
Zikes 11 hours ago 2 replies      
This reminds me of that ASCII Star Wars telnet thing. How long has that been around for?
LulzSect 8 hours ago 0 replies      
rocky1138 3 hours ago 0 replies      
I'll leave this on Deluge for a while to share :)
Deep Learning with Spark and TensorFlow databricks.com
193 points by mateiz   ago   23 comments top 5
elcct 9 hours ago 3 replies      
That article reminded me of this: http://i.imgur.com/XQJ3ACO.jpg
vonnik 7 hours ago 0 replies      
So the cool thing here is that you can use Spark and TF to find the best model like Microsoft Research did with Resnets.


They're showing you how to train different architectures simultaneously, and then compare their results in order to select the best one. That's great as far as it goes.

The drawback is that with this schema, you can't actually train a given network faster, which is what you want to do with Spark. What is the role of a distributed run-time in training artificial neural networks? It's easy. NNs are computationally intensive, so you want to share the work over many machines.

Spark can help you orchestrate that through data parallelism, parameter averaging and iterative reduce, which we do with Deeplearning4j.


Data parallelism is an approach Google uses to train neural networks on tons of data quickly. The idea is that you shard your data to a lot of equivalent models, have each of the models train on a separate machine, and then average their parameters. That works, it's fast, and it's how Spark can help you do deep learning better.
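Parameter averaging as described above can be sketched without Spark at all; the per-shard loop below is exactly the part a Spark job would distribute and reduce. (The function names and the toy linear model are illustrative only, not Deeplearning4j's or Spark's API.)

```python
import numpy as np

def sgd_worker(X, y, w0, lr=0.05, epochs=50):
    """One 'machine': plain SGD for least squares on its own data shard."""
    w = w0.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w -= lr * (xi @ w - yi) * xi
    return w

def parameter_averaging(shards, w0, rounds=5):
    """Iterative averaging: broadcast w, train each shard locally,
    then average the local weights (the step a Spark reduce would do)."""
    w = w0
    for _ in range(rounds):
        local_ws = [sgd_worker(X, y, w) for X, y in shards]
        w = np.mean(local_ws, axis=0)
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(200, 2))
y = X @ true_w                                     # noiseless toy regression
shards = [(X[i::4], y[i::4]) for i in range(4)]    # four "machines"
w = parameter_averaging(shards, np.zeros(2))
```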

hcrisp 11 hours ago 1 reply      
Impressive, but it seems an inversion of paradigms. Small data-to-compute ratios are usually associated with high performance computing (HPC). Why use Spark when the data is small and is broadcast to each worker? You have to pay the serialization-deserialization penalties of moving the data from Python to JVM and back again. In fact the JVM isn't really needed here at all, since all the computation is done in the pure-Python workers in an embarrassingly-parallel way. Seems to me that you would just move on to an HPC system and use TensorFlow within an IPython.parallel paradigm and be done much sooner.
amelius 6 hours ago 5 replies      
I have a question about neural networks.

Say, you are training a NN to recognize handwritten characters 0 and 1, and you have 1000 training images for each character (so 2000 images in total). All images are bitmaps with 0 for black and 1 for white.

Now, by accident, all the "0" training-images have an even number of black pixels, and all the "1" training-images have an odd number of black pixels.

How do you know that the NN really learns to recognize 0's and 1's, as opposed to recognizing whether the number of pixels in an image is even or odd?

tachim 2 hours ago 0 replies      
0.1% accuracy increments correspond to 10 images in the testing set; they should be reporting standard error bars with those numbers.
Number of legal Go positions computed tromp.github.io
213 points by tromp   ago   67 comments top 19
richard_todd 3 days ago 2 replies      
Does anyone try to count equivalence classes (after rotation or reflection) instead of raw board positions? To my mind, that would also be of interest if you want to know how many actually distinct game situations there are. I guess as a rough under-estimate you'd just divide the count by (4 rotations * 2 reflections)?
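On tiny boards both the raw counts and the symmetry-class counts are easy to brute-force. A sketch under the positional rule that every chain of stones must have a liberty; the 1x1, 2x2 and 3x3 counts it produces (1, 57, 12675) agree with the small-board values in the linked work, while the orbit count shows the parent's divide-by-8 estimate is only a lower bound (symmetric positions have smaller orbits):

```python
from itertools import product

def is_legal(board, n):
    """Positional legality: every chain of stones must touch an empty point."""
    seen = set()
    for start in range(n * n):
        if board[start] == 0 or start in seen:
            continue
        color, chain, frontier, liberty = board[start], {start}, [start], False
        while frontier:                 # flood-fill the chain
            p = frontier.pop()
            r, c = divmod(p, n)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < n:
                    q = rr * n + cc
                    if board[q] == 0:
                        liberty = True
                    elif board[q] == color and q not in chain:
                        chain.add(q)
                        frontier.append(q)
        if not liberty:
            return False
        seen |= chain
    return True

def legal_positions(n):
    return [b for b in product((0, 1, 2), repeat=n * n) if is_legal(b, n)]

def symmetry_classes(positions, n):
    """Count orbits under the 8 square symmetries via canonical representatives."""
    def rot(b):     # 90-degree clockwise rotation
        return tuple(b[(n - 1 - c) * n + r] for r in range(n) for c in range(n))
    def mirror(b):  # horizontal reflection
        return tuple(b[r * n + (n - 1 - c)] for r in range(n) for c in range(n))
    canon = set()
    for b in positions:
        images, x = [], b
        for _ in range(4):
            images += [x, mirror(x)]
            x = rot(x)
        canon.add(min(images))
    return len(canon)
```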
Fargren 3 days ago 1 reply      
I played very little Go a very long time ago, so I don't know this: could there be any legal positions that aren't reachable by any legal moves?
daveloyall 3 days ago 0 replies      
A note for the author and new readers:

L19 means "the number of legal positions for a 19x19 board".

cf. L18 which means "the number of legal positions for an 18x18 board".

L19 does NOT mean position L:19 on the Go board. :)

Alex3917 3 days ago 7 replies      
The fact that the number of valid positions, written out in base 3, fits on a 19 x 19 board is wild. You'd have to be almost dan-level to immediately recognize that the pic above isn't actually a real game.
tel 3 days ago 1 reply      
As I remarked elsewhere, what was more interesting to me was that the 2x2 board has hundreds of billions of games (assuming a superko, I suppose).

It's easy to recognize that there must be a lot of them, but hundreds of billions is absurdly fast growing. As another data point, the 2x1 board has 8 games.

sago 3 days ago 3 replies      
There are billions of valid games on a 2x2 board? Can anyone explain or link to something that explains how this is possible?
tromp 3 days ago 1 reply      
ultramancool 3 days ago 2 replies      
To make sense of big numbers like this where any state is valid, I find it good to compare with cryptographic key sizes, so in case anyone else is wondering:

log_2(L19) = 565 bits

cinquemb 3 days ago 0 replies      
Although the determinant of that matrix = 0, if its conjugate transpose has a non-zero determinant then I wonder if all valid possible configurations on this board can be represented by a complex Lie group?

Edit: did the work, it does, but too lazy to describe the group https://gist.github.com/cinquemb/18e494348045725e2b60

hendekagon 3 days ago 0 replies      
Tromp's website is a treasure-trove of really cool things
33a 3 days ago 1 reply      
Does this take into account the super ko rule? If so, seems a bit small.

EDIT: Never mind, no it doesn't.

schoen 3 days ago 1 reply      
Neat, is this sequence in OEIS?
horsecaptin 3 days ago 4 replies      
Does this mean that a computer will soon be beating people at Go?
pavel_lishin 3 days ago 1 reply      
Can someone explain why factoring L19 is relevant?
joeyh 3 days ago 2 replies      
I wonder what's the smallest Go board such that the number of legal positions is a legal position?

(Not expecting an answer anytime soon.)

tel 2 days ago 0 replies      
Maybe it's a legal position if the numbers spiral out from the center.

Or, more precisely, I'd be okay with any sort of "nice" arrangement of digits which made it work.

tetraodonpuffer 3 days ago 1 reply      
As a benchmark, it seems it would be interesting to port this to Java/Scala and run it on a Spark cluster; since it's map-reduce (from the post; I didn't look at the code), it should be possible, I would think.
fiatjaf 3 days ago 1 reply      
Why would anyone waste time calculating this? You may be curious, but you're not living to satisfy useless curiosity.
The Villain of CRISPR michaeleisen.org
216 points by texthompson   ago   85 comments top 11
junto 12 hours ago 8 replies      
I hate the fact that breakthroughs like this are patentable.

People need to follow Alexander Fleming's lead:

 The pharmacist Sir Alexander Fleming is revered not just because of his discovery of penicillin, the antibiotic that has saved millions of lives, but also due to his efforts to ensure that it was freely available to as much of the world's population as possible. Fleming could have become a hugely wealthy man if he had decided to control and license the substance, but he understood that penicillin's potential to overcome diseases such as syphilis, gangrene and tuberculosis meant it had to be released into the world to serve the greater good. On the eve of World War II, he transferred the patents to the US and UK governments, which were able to mass-produce penicillin in time to treat many of the wounded in that war. It has saved many millions of lives since.

jimrandomh 12 hours ago 4 replies      
This is part of an ongoing dispute between Jinek et al at Berkeley and Zhang et al at the Broad Institute. Both groups did important work on CRISPR-CAS9, and now they're fighting over credit, a patent, and (probably) a Nobel prize. Eric Lander, head of the Broad Institute, recently published an article "The Heroes of CRISPR" which emphasizes his own institution's role and downplays Berkeley's. Michael Eisen, a professor at Berkeley, wrote this article to emphasize Berkeley's role and downplay Broad's. Lander has, apparently, been in a fight like this before, with Craig Venter's group over credit for being first to sequence the human genome.

My own position is that in a sane world, there would be no patent and the groups would share the Nobel. The patent ownership dispute is the only reason there has to be a fight at all, and while patents on techniques in biology aren't nearly as absurd and destructive as patents on software, I think they're almost certainly net negative overall.

texthompson 13 hours ago 2 replies      
Michael Eisen is a professor at Berkeley, founder of the Public Library of Science and pioneered the use of microarrays for studying gene expression. This blog post is in response to the recent controversy about CRISPR, in particular Eric Lander's article called "The Heroes of CRISPR."
KasianFranks 12 hours ago 3 replies      
A Nobel Prize is now at stake. Lifespan, disease and the human race is at stake. The internal scientific politicking on both sides is classic. "by going into depth about the contributions of early CRISPR pioneers, Lander is able to almost literally write Doudna and Charpentier (and, for that matter, genome-editing pioneer George Church, whose CRISPR work has also been largely ignored) out of this history. They are mentioned, of course, but everything about the way they are mentioned is designed to minimize their contributions."

However, it's also clear that Doudna's work was central and a hub for overall advancement.

texthompson 13 hours ago 0 replies      
It looks like Professor Eisen's blog is down at the moment. Here's a link to the Google cache of that page: http://webcache.googleusercontent.com/search?q=cache:http://...
nfoz 13 hours ago 0 replies      
> CRISPR, for those of you who do not know, is an anti-viral immune system found in archaea and bacteria, that until a few years ago, was all but unknown outside the small group of scientists have been studying it since its discovery a quarter century ago. This all changed in 2012, when a paper from colleagues of mine at Berkeley and their collaborators in Europe described a simple way to repurpose components of the CRISPR system of the bacterium Streptococcus pyogenes to cut DNA in a easily programmable manner.
nycticorax 11 hours ago 0 replies      
I thought this article was interesting because it was that first think I've read that actually lays out a seemingly plausible case for why Doudna et al. deserve primary credit rather than Feng at al. The "Whig History of CRISPR" article was interesting, but it left me wanting to hear more about the biology.
nonbel 12 hours ago 6 replies      
I have never seen a paper on CRISPR that can distinguish between selecting pre-existing mutants and actually modifying genes. I have read probably a dozen or so at this point, and it is amazing that they always fail to address this either in citations or actual data.

At first I thought it was an honest mistake, but now it would not surprise me if some of the main players know that their experiments with CRISPR have been misinterpreted. They are then pushing the gene "modification" label anyway because it is sexier.

After all, CRISPR has received an extremely unusual amount of media coverage over the last year or so, which raises red flags. I suspect a marketing effort is being directly funded. That is not a honest use of funds meant for research, especially that which is not meeting minimum scientific standards (ruling out other explanations for the results rather than just a null hypothesis).

astazangasta 12 hours ago 5 replies      
As time goes on, I'm understanding more and more that academic science, which I had naively imagined to be a pure endeavor prosecuted by good-hearted individuals on humanity's behalf, is in fact dominated by powerful, acquisitive individuals who are more interested in advancing their own power than in human good, knowledge, etc. The pursuit of IP is taking over the university, much to its detriment.
bshanks 8 hours ago 0 replies      
The Villain of CRISPR is the Bayh-Dole Act.
RyanShook 12 hours ago 3 replies      
Can someone share a tl;dr version of this?
The China GPS shift problem wikipedia.org
201 points by ivank   ago   69 comments top 11
Animats 1 day ago 2 replies      
What you really get from GPS is a vector in earth-centered, earth fixed (ECEF) coordinates. The origin is the rotational center of the earth, the XY plane is the equator, and the XZ plane goes through the prime meridian at Greenwich. You run this through a simple standard formula which models the Earth's slightly elliptical shape, and get out latitude, longitude, and elevation relative to a defined sea level.

ECEF is based on physical reality. The choice of prime meridian is arbitrary, but everything else has a physical basis. GPS, GLONASS, and Galileo all generate (almost) the same ECEF coordinates. (GLONASS has a position for the center of the earth about 3mm from GPS. Galileo is also about 3mm different.) This reflects when measurements were made.

Latitude and longitude are computed by putting ECEF coordinates into a geoid model. All such formulas are approximations. WGS-84 is used by most of the world. GLONASS uses PZ-90, which is slightly different from WGS-84 and fits the earth's profile better in Russia. The Chinese "encrypted" geoid is WGS-84 with some junk offsets added for coordinates within China's area of interest.[1] There's an actual "out of China" test:

 outofChina <- function(lat, lon) {
   if (lon < 72.004 | lon > 137.8347) return(TRUE)
   if (lat < 0.8293 | lat > 55.8271) return(TRUE)
   return(FALSE)
 }
This is a rather expansive view of China; this goes almost to the equator, including all of the South China Sea and the Spratly Islands.

[1] https://github.com/caijun/geoChina/blob/master/R/cst.R
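The ECEF-to-latitude/longitude step described above is a short calculation. A sketch of the standard WGS-84 round trip (the fixed-point inversion is one of several textbook methods, not necessarily what any particular receiver implements):

```python
import math

# WGS-84 ellipsoid constants
A = 6378137.0                 # semi-major axis, metres
F = 1 / 298.257223563         # flattening
E2 = F * (2 - F)              # first eccentricity squared

def geodetic_to_ecef(lat, lon, h):
    """lat/lon in radians, h in metres above the ellipsoid."""
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

def ecef_to_geodetic(x, y, z, iterations=10):
    """Fixed-point inversion; converges very quickly away from the poles."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1 - E2))          # initial guess
    for _ in range(iterations):
        n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
        h = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1 - E2 * n / (n + h)))
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    h = p / math.cos(lat) - n
    return lat, lon, h
```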

acqq 1 day ago 1 reply      
This part of the article explains more:


"GCJ-02 (aka Mars Coordinates) is a geodetic datum formulated by the Chinese State Bureau of Surveying and Mapping, and based on WGS-84.[12] It uses an encryption algorithm[13] which adds apparently random offsets to both the latitude and longitude, with the alleged goal of improving national security.[14][15]"

"Despite the secrecy surrounding the GCJ-02 encryption, several open-source projects exist that provide conversions between GCJ-02 and WGS-84, for languages including C#,[20] C, Go, Java, JavaScript, PHP,[21] Python,[22] R,[14] and Ruby.[23][24] They appear to be based on leaked code.[25]"

matthewrudy 1 day ago 1 reply      
At work we've standardised on Baidu coordinates in China.

Even though they're "encrypted", the encryption is locally smooth.

Points may vary by 2km from their WGS-84 equivalent, but for two points ~2km apart, the haversine formula will still yield a distance of ~2km.

This means we can treat wgs84 and baidu-coordinates as equivalent, but not comparable, which makes a lot of things simpler.

Note: we don't do cross border orders, so we don't need to worry about cross border comparison.

Interestingly, Baidu uses its own additional encryption on top of GCJ-02, so we are very much locked in.
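The "locally smooth" claim is easy to sanity-check: shift both endpoints of a segment by the same small offset and the haversine distance barely moves. A sketch (the shift values below are illustrative, roughly the size of a GCJ-style offset near Beijing, not real Baidu offsets):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km on a spherical earth (R = 6371 km)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

# Same pair of points, with and without a uniform offset of a few hundred meters:
d_wgs = haversine_km(39.90, 116.40, 39.91, 116.41)
d_shifted = haversine_km(39.903, 116.406, 39.913, 116.416)
# The two distances agree to within a few meters.
```

So distances, bearings, and clustering computed entirely within one coordinate system come out right; only mixing systems (or crossing a border where the datum changes) breaks things, which matches the parent's "equivalent, but not comparable" framing.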


TazeTSchnitzel 1 day ago 4 replies      
China also makes it illegal to make maps of the country without authorisation. Don't get caught working on OpenStreetMap there.
kylehotchkiss 1 day ago 4 replies      
Just wondering, how does this help national security? When most of the planet is well covered by satellite imagery, isn't finding an installation despite a 500m offset just a matter of searching the images?
reion 19 hours ago 0 replies      
I ran into this problem when I was living in China and wanted to map a hiking trip with my friends in Hangzhou.

Wrote about it on my old blog in 2013: http://michalkow.tumblr.com/post/43214865761/compensate-for-...

ggambetta 20 hours ago 0 replies      
I experienced this myself and it was completely confusing!

Back in 2008 before I had a smartphone with GPS, I had a camera and a separate GPS device I would use to "tag" places where I took interesting pictures during a 6-month trip around the world.

When I got back home I ran some code to match picture timestamps with GPS timestamps, correcting for time zone offsets (GPS was GMT, pictures were local). All of the pictures of the rest of the countries matched just fine, except for China :) Since I had relatively few data points, it took me a while to realise there was a systematic error in the data, not something I was doing wrong with the timestamps.
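The matching step the parent describes can be sketched in a few lines — the function name, data shapes, and timezone handling here are illustrative, not the OP's actual code:

```python
from datetime import datetime, timedelta

def tag_photos(photos, track, tz_offset_hours):
    """Match each photo (camera local time) to the nearest GPS fix (UTC).

    photos: list of (name, datetime in camera local time)
    track:  list of (datetime in UTC, lat, lon)
    """
    tz = timedelta(hours=tz_offset_hours)
    tagged = []
    for name, local_ts in photos:
        utc_ts = local_ts - tz  # convert camera time to GPS (UTC) time
        fix = min(track, key=lambda f: abs(f[0] - utc_ts))
        tagged.append((name, fix[1], fix[2]))
    return tagged
```

With a correct timezone offset the nearest fix lands where the photo was taken — except in China, where the datum shift the article describes moves every matched point by a systematic few hundred meters, exactly the kind of error that looks like a timestamp bug at first.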

reustle 1 day ago 3 replies      
I was just in China, over the border from Hong Kong, and Google Maps seemed to be offset about 500m to the west and a bit north. Once you mentally adjusted for that, the maps still felt very accurate. Check out the border and look at how the roads are all cut off at it; very weird.
y04nn 14 hours ago 0 replies      
Now I understand why, when I was in South East Asia, there was sometimes a shift (about 10m); I thought it was a signal reception problem. I was using Symbian Maps (now Nokia Here), so the correction was not perfect.
Theodores 1 day ago 2 replies      
I don't think the 'National Security' aspect means Uncle Sam's finest B-52s would be using Chinese 'GPS' and dropping their bombs 500m off course every time.

That would make a good story though, a full on first strike that missed every target '500m to the East' due to some mapping oversight.

However, as for real 'National Security' use, you could hide an area by making it not exist on the map, with the surrounding area stretched a bit to cover the gap. With some natural features, or even a big forest, all kinds of things could be hidden from the general public (who are the primary 'targets' of 'National Security'), if not from Uncle Sam.

haosdent 17 hours ago 0 replies      
We use Baidu Maps.
Marvin Minsky's Society of Mind Lectures mit.edu
121 points by _pius   ago   2 comments top
vowelless 2 hours ago 1 reply      
Is this related to his book "Society of Mind"? If so, how accurate is this view of the mind today?
       cached 26 January 2016 05:11:01 GMT