Hacker News with inline top comments - 24 Jan 2016 (Best)
Google's Free Deep Learning Course udacity.com
589 points by olivercameron   ago   59 comments top 14
j2kun 1 day ago 3 replies      
I'm going through the course right now, and the instructor is saying some strange things, clearly (to me) ignoring that what he's saying is only true in very specific contexts.

For example, in the video I just watched he said "the natural way to compute the distance between two vectors is using cross entropy." And then he goes on to describe some unnatural features of cross entropy. The truly "natural" way to compute distances between vectors is the Euclidean distance, or at least any measure that has the properties of a metric.
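The commenter's point can be made concrete with a small sketch (my own illustration, not code from the course): cross entropy is asymmetric in its arguments, so it fails the symmetry property of a metric, while Euclidean distance is a true metric.

```python
import math

def euclidean(a, b):
    # Symmetric in a and b: a genuine metric on vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cross_entropy(p, q):
    # Asymmetric: swapping p and q changes the result,
    # so cross entropy lacks the properties of a metric
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.9, 0.1]
q = [0.6, 0.4]

print(euclidean(p, q) == euclidean(q, p))          # True: symmetric
print(cross_entropy(p, q) == cross_entropy(q, p))  # False: asymmetric
```

Cross entropy is still the standard training loss for classifiers, but calling it "the natural distance between vectors" glosses over exactly this.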

I can understand this is a crash course and there isn't time to cover nuances, but I'd much rather the instructor say things like "one common/popular way to do X is..." rather than making blanket and misleading statements. Or else how can I trust his claims about deep learning?

it_learnses 1 day ago 7 replies      
Would it be beneficial for me as a developer to take these machine learning courses? I took a course at uni a while back and know the general techniques, but I'm not sure how it would help me in my career unless I'm doing some cutting edge work in the field or focusing on a machine learning career - in which case, wouldn't I need to be pursuing a postdoc or something in it?
imh 1 day ago 1 reply      
If you want more than a 4 lecture course, I recommend Nando de Freitas's course. It's very high quality and free.


stared 1 day ago 0 replies      
When it comes to the course itself (I've just started it), it looks nice, but the (initial) questions tend to be vague.

E.g., in the first question with code I had to reverse-engineer what they meant (including passing values in a format I consider non-standard - transposed!). The first open-ended questions were entirely "ahh, you meant this aspect of the question".

Otherwise, the course (the general level, pace, overview) seems nice.


The IPython Notebook tasks (i.e. the core exercises) are nice.

ganeshkrishnan 1 day ago 1 reply      
I think Intro to Machine Learning (https://www.udacity.com/courses/ud120) is the prerequisite to this course.
maurits 1 day ago 0 replies      
For people interested, Stanford has an excellent online course on deep-learning with an emphasis on convolutional networks. [1]

It comes with video, notes, all the math, cool IPython notebooks, and will let you implement a deepish network from scratch. That includes doing backprop through the SVM, softmax, max-pool, conv and ReLU layers.

After that you should be more than capable of building a 'real' net using your favourite lib (TensorFlow, Theano, etc).

[1]: http://cs231n.stanford.edu/
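As a taste of what "backprop through layers from scratch" means, here is a minimal ReLU layer with forward and backward passes. This is my own illustrative sketch, not code from cs231n; the forward/backward function signatures are assumptions modeled on the common layer-API convention.

```python
import numpy as np

def relu_forward(x):
    # Clamp negatives to zero; cache the input because
    # the backward pass needs to know where x > 0
    return np.maximum(0, x), x

def relu_backward(dout, cache):
    # Gradient flows through only where the input was positive
    x = cache
    return dout * (x > 0)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
out, cache = relu_forward(x)
dx = relu_backward(np.ones_like(x), cache)
print(out)  # negatives and zero clamped to 0, positives unchanged
print(dx)   # 1 where the input was positive, 0 elsewhere
```

Every other layer in such a course (conv, max-pool, softmax) follows the same forward-with-cache / backward-with-upstream-gradient pattern.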

stared 1 day ago 0 replies      
While TensorFlow may not yet be as mature as Theano or Torch, I love their tutorial: https://www.tensorflow.org/versions/master/tutorials/. It's clean, concrete, and more general than an introduction to their API. (I couldn't find anything comparable in Theano or Torch before.)

In any case, I regret waiting so long to learn deep learning. (I thought I needed many years of CUDA/C++ knowledge (I have none); but in fact, what I needed was to know the chain rule, convolutions, etc - things I learned a long time ago.)
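The "all you really need is the chain rule" point can be checked numerically in a few lines (an illustrative sketch, unrelated to any particular framework): backpropagation is just the chain rule applied repeatedly, and a finite difference confirms the analytic derivative.

```python
import math

# f(x) = sin(x**2); by the chain rule, f'(x) = cos(x**2) * 2x
def f(x):
    return math.sin(x ** 2)

def f_prime(x):
    return math.cos(x ** 2) * 2 * x

# Compare the chain-rule derivative against a central finite difference
x, h = 0.7, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)
print(abs(numeric - f_prime(x)) < 1e-6)  # True
```

Automatic differentiation in TensorFlow and Theano is this same idea, applied mechanically across a whole computation graph.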

DrNuke 1 day ago 0 replies      
Yes! Andrew Ng's coursera + kaggle.com + this deep learning course by Google is a very nice -and free- foundation.
magicmu 1 day ago 4 replies      
How accessible is a course like this with no prior knowledge of linear algebra? I know it's listed in the pre-reqs, but with a good head for math and lots of calc, is it something that could be picked up along the way? I'm normally pretty bold about stuff like that, but I know it's a core part of deep learning / ML. If it is really necessary, any resources for linear algebra run-throughs would be greatly appreciated!!
alok-g 1 day ago 0 replies      
Will the projects/assignments be workable on Windows, or would I need Linux et al for these?

And if not natively (using Docker/VMs), would they be able to use the NVIDIA CUDA card on my system? And how much disk space would be needed?


wodenokoto 1 day ago 2 replies      
Does this use TensorFlow?
ntnlabs 1 day ago 0 replies      
Yeap, it's dead :)
jorgecurio 1 day ago 0 replies      
I fucking love Google, it's the greatest company there is. Thank you for this free course, incredibly high quality and very enjoyable to watch.
fiatjaf 1 day ago 0 replies      
Udacity is kinda ridiculous, making us answer some stupid questions every 5 minutes. I'm not in school anymore (by the way: no one ever learns anything in school).
T-Shirts Unravelled threadbase.com
611 points by janzer   ago   126 comments top 37
threadbase 7 hours ago 5 replies      
I'm the founder of threadbase. Thanks everyone for your kind words. I'd love to hear any comments or suggestions for what you'd like to see next or how we can improve the user experience. We're also looking for front-end/design help, as well as help with computer vision tech. Feel free to email me chris@threadbase.com.
nextos 10 hours ago 5 replies      
It's pretty well known that tees made using 1920s-era loopwheeling machines don't suffer from size changes, and age really well. But sadly these are now super expensive, and only offered by Japanese niche brands who bought machinery from American corps.
jakub_g 6 hours ago 8 replies      
I feel obliged to share a lifehack my mother taught me regarding the laundry.

After removing the laundry, take your t-shirts and stretch them yourself, one by one, when they're still slightly wet. Grab them with two hands symmetrically, stretch horizontally, moving your hands down along the shirt. Do the same vertically, and with the sleeves.

Do not use a machine dryer, just a regular standing dryer like [1].

Put your t-shirts carefully, symmetrically on the dryer, and once dry, put them on a hanger. If you follow this, you will not have to iron them at all.

Source: I've been doing this for 4 years and haven't touched the iron since. All my t-shirts are 100% cotton (though I buy only high-grammage ones) and they all seem brand new and ironed (the only exception being one particular brand whose collar looks bad unless ironed - I stopped buying that brand). YMMV of course.

[1] http://ecx.images-amazon.com/images/I/41oWjx2Q-mL._SY300_.jp...

jkereako 1 hour ago 0 replies      
Good stuff. I'm interested to see data on other cotton garments, particularly buttondown shirts.

Cotton is just a lousy fiber. On the other hand, wool is a strong and resilient fiber. It also never needs to be washed provided it isn't stained.

My wife knit a wool sweater for a close friend of mine who spent 6 months as a bosun on the tallship the Lady Washington (the Interceptor in the Pirates of the Caribbean). Fresh water is scarce on a tallship so showers were infrequent. He came home during Christmas and I smelled the sweater which he claims he never washed and it smelled fresh. Surprisingly, it also kept him warm and dry on the open ocean. I later learned that Irish fishermen have been wearing wool sweaters at sea for generations.

Wool is the fiber of the past and future.

specialist 9 hours ago 2 replies      
This is a great resource, thank you. Being tall with a long torso, trial and error buying t-shirts has been torturous and expensive.
Dan_in_Brighton 9 hours ago 1 reply      
Great to have this data. But wouldn't the extent of dryer-induced shrinkage be driven by the amount of time in the dryer as much as, or more than, by the temperature?

While most modern dryers offer a choice of temperatures, the big knob mostly controls a humidistat-based target. I personally equate the "very dry" setting with "shrink beyond usability".

I'd expect that removing clothes while still damp would be more important to avoiding shrinkage than reducing the heat, but I'm no T-shirt scientist. (T-shirtician? T-shirtologist?)

tdaltonc 10 hours ago 1 reply      
Man, so many things that I always wanted to know. Why didn't a marketing team at Tide make an infographic about this a decade ago?
sschueller 9 hours ago 0 replies      
Very cool but please add a metric measurements option to your search.
flormmm 6 hours ago 0 replies      
This is data I'm grateful to have and at the same time, can't believe someone went to all the trouble to get it !
chrismartin 6 hours ago 1 reply      
This is excellent. Independent, consumer-empowering size analysis, especially measured over extended wear and washing. I'll be following you.

I wonder if there are any companies that sell inexpensive custom t-shirts? Provide your measurements, specify desired fit, neck type, color, and fabric, and order exactly what you want.

As someone very hard to fit for pants (28" waist and cyclist thighs), I would be thrilled if you also do this for jeans and shorts.

smcl 10 hours ago 0 replies      
These sizing charts are incredible and must have taken a lot of effort to put together. I'm gonna come back to this page a lot.
brad0 10 hours ago 0 replies      
Brilliant post. This explains exactly why some T shirts I buy fit great in the chest after buying and are shorter and wider after one wash.
boulos 6 hours ago 1 reply      
How did you actually do the measuring?

The "manufacturing variance" chart jumped out at me as looking fairly unnatural: there's variation in width or variation in length, but very few points that mix. Then I noticed that we're talking about just over half an inch in each direction.

How much of this effect is variation in your measurement?

vanilla-almond 8 hours ago 1 reply      
I don't have a dryer, but I've still had cotton shirts (not t-shirts) shrink in the washing machine. This usually happens the first time (or first few times) they are washed at the temperature recommended on the label: 40C (104F). However, at 30C (86F) I've never encountered any shrinkage. So this purely anecdotal experience makes me believe that the temperature of water can affect some cotton garments.
nether 6 hours ago 1 reply      
Awesome work. Small nit: wish the average values were shown as colored dashed vertical lines.
itchyouch 10 hours ago 1 reply      
I've standardized my shirts on the Uniqlo crew shirts and they seem to have very little change from wash to wash compared to my Banana Republic shirts.

The Uniqlo shirts are a cotton/polyester blend while the BR shirts are 100% cotton, which explains the durability boost the synthetic material provides.

It would be great if we could get some data on which brands have the most and least variance and which brands expand and shrink the most over their lifetime.

carlob 10 hours ago 2 replies      
Doesn't work on Safari with Ghostery and uBlock. No text loads, but the plots do.
mojoe 10 hours ago 1 reply      
This is a cool analysis, although I think every t-shirt I own is from Target (Merona and Mossimo brands) so I didn't have a single point of reference for the width and length charts. Those charts seemed like by far the most useful part of this post.

Edit: I'm curious about the downvotes. Are people appalled at my lack of taste in t-shirts? :)

mrbill 5 hours ago 0 replies      
I still have a problem with my big and tall shirts - I have to hang-dry them to keep them from shrinking vertically when run through the dryer (due to my body type). This backs up what I've complained about for years :)
jwagenet 8 hours ago 0 replies      
I would love to see the size dataset expand to casual and dress buttonups, and even jeans. A bit of data like this will greatly improve my shopping experience.
mc32 8 hours ago 0 replies      
The most maddening thing is that even within a brand, sizes (S, M, L, etc.) are inconsistent, never mind hoping there would be consistency of measurement across brands.

Their charts expose this inconsistency. Some brands, like Mack Weldon, are more consistent than, say, American Apparel.

Even with objective measures like pants waist size in inches, a typical size 32" is actually 34" - I guess to make people think they are thinner than they actually are.

bravura 9 hours ago 0 replies      
tldr Pro-tip: If you want to get long life out of clothes you like, don't put them in the dryer.
wslh 9 hours ago 1 reply      
side note to the web team: please add RSS to your blog.
jedberg 6 hours ago 0 replies      
The most important thing I learned from this page was that American Apparel sizes small. Since 3/4s of all my shirts are startup shirts and most of those are AA, this is good to know.
jimbobimbo 9 hours ago 0 replies      
I learned the dryer effect the hard way - so many good tees were destroyed. :( Nowadays I just leave my t-shirts and polos to dry.

The size charts are a real eye opener. I know that Abercrombie carries smalls that fit me fine, but Zara was an unknown to me. Apparently, their tees are also reasonably priced and look pretty good...

Thanks for the post!

_greim_ 8 hours ago 1 reply      
Is there a "grain" to the fabric or something? Why not turn it 90deg and have the shirts increase in length and decrease in the chest instead? I'd prefer size to stay the same over time, but if I had to choose I think I'd rather have that.
sehr 10 hours ago 0 replies      
Probably one of the few times I've enjoyed an ad - useful and interesting! Good job, threadbase.
BatFastard 10 hours ago 1 reply      
I like the concept of the post, but found it difficult to read. Too much data, and not enough conclusions.
_greim_ 8 hours ago 1 reply      
So is there no economic pressure to develop a fabric weave that's both efficient to manufacture and stable over time?
solotronics 2 hours ago 0 replies      
This is awesome!
janzer 10 hours ago 0 replies      
As a bit of an aside, I actually came across this from a tweet by Adam Savage of Mythbusters fame.
MaxHalford 9 hours ago 2 replies      
Does someone know what tools were used to make the graphs in this article?
vyyvyyv 8 hours ago 0 replies      
Are there plans to do this for women's clothing?
godzillabrennus 10 hours ago 0 replies      
Seems like you guys and http://Markable.com would have a natural symbiotic relationship for data sharing.
graycat 3 hours ago 0 replies      
How to make T-shirts longer:

Wash as usual and then hang on a plastic hanger until dry. So, don't use a dryer and just let them hang, starting when they are still wet from the washing.

Also works when hand wash and rinse but don't squeeze out much of the water, that is, hang them while they are still wet enough to drip.

Also works with knit polo shirts.

samstave 8 hours ago 2 replies      
>What surprised us was that over the course of many wash cycles, the chest and waist will drift wider and the length will drift shorter.

What if the fabric was rotated 90 degrees during manufacture - wouldn't this eliminate the problem?

The shrink pattern is related to the orientation of the thread build of the fabric used, is it not?

kelukelugames 10 hours ago 0 replies      
The results are too important to accept without peer review, examining the methodology, etc.
Why I quit my dream job at Ubisoft gingearstudio.com
580 points by Chico75   ago   188 comments top 34
lost_name 1 day ago 10 replies      
I once worked as a consultant to help users implement some software. I moved on to the development of that software, knowing the dozens of areas that could be improved to make life easier for the users, and honestly with little effort (it's a web app, I did that before the consultancy stuff). At that point in time, that was my dream -- I wanted to help make people's lives a little bit easier, and the people I helped would be those who used our software.

After around a year or so of implementing questionable features, I attempted to get approval for updates to old, well used features to improve them (stability and convenience focused, really), but was shot down. This wouldn't sell the software, because it worked well enough, and we needed more revenue more than retaining old customers. At that point I understood that after the software is sold the customer will be too ingrained into the product to leave without financial repercussions.

A while later, we got bought out by Big Company, so that strategy apparently worked. BC doesn't give half a shit about anything we ever did, and we piled on the features release after release with little concern about anything else. I tried a couple times after the buyout to get approved for existing product improvements, but always got shot down.

I continue to find it odd how the company can be so profit oriented, and yet so averse to improvements. I suppose I'm just wrong or don't actually understand. Either way, it makes it very hard to care about my work these days.

ransom1538 1 day ago 3 replies      
Why to not work at a video game company:

0) Abuse.

1) Executives cut projects: a lot. The budgets for games are so insane that executives need to constantly trim them and shift things around. It is common to walk over to an artist's desk and inform them the art they have worked on for 2 years won't be used. I am convinced telling a wife her husband has passed is the same feeling.

2) The budgets have exploded. My last project for an iPhone game was well over 4 million dollars.

3) Complexity is compounding. My last team (for a prototype) consisted of: AI guy, graphics/C++ guy(s), gameplay guy, Art TEAM (vector and raster) and project managers. The art pipelines alone will suck the budget dry.

4) Pay is low. Since you are starting fresh each project (see 5), your working knowledge of the system is similar to someone new's. Promotions, salary increases, etc. don't make any financial sense (see 1) unless you are a rockstar. The new kids walking in usually burn out and quit because they don't understand the massive shit show the industry is. EA's managers just grind people until they can't walk. Disney is a sweatshop.

5) NOTHING is reused. After your second project, you quickly realize the AI you created for fish has nothing to do with your AI for a 3D shooter. The asset pipeline you created for a soccer game doesn't translate over to a racing game. Game companies are full of dead code repos. People try to create/use repeatable platforms, but then the game designer guy will walk by and say "Hey, is that the newest Unreal engine?". In games, anything reused is quickly spotted as reused. This is why games that have a good series going do really well financially. GTA is on what, like 15?

6) Success is low. After a few years into a project, someone will say: "But its not... fun". Welp, good luck fixing that. Or plan on having it rot in some terrible online store.

7) Rockstars. Executive: "OMG you wrote the AI for GTA2 in 1998??". Welp, this guy is now your boss. AND, because games are almost always a luck play - this "Rockstar" will teach you absolutely nothing.

My takeaway:

I have talked with guys in the game industry that have been in it 20+ years and asked WTF. Basically, lifers are like high school teachers. They are abused and underpaid: but they love what they do.

endymi0n 1 day ago 3 replies      
> Since my very beginnings at Ubisoft, I knew I wouldn't spend the rest of my days here. I already dreamt of starting my own indie company.

Well, then that was probably your dream job instead of Ubisoft.

chaostheory 1 day ago 8 replies      
> No matter what's your job, you don't have a significant contribution on the game. You're a drop in a glass of water, and as soon as you realize it, your ownership will evaporate in the sun. And without ownership, no motivation.

A good description of a lot of big corp projects. Do people working on large open source projects eventually feel the same way?

this_user 1 day ago 4 replies      
This phenomenon is not exclusive to game development. Lots of people want to work for large, prominent companies like Google or FB, dreaming of working on cool projects. But the reality usually turns out to be much less glamorous. Instead of being the guy who comes up with the next killer product or feature, you will likely end up as a small cog in a huge, well-oiled machine, optimising ads to increase some metric in the fifth decimal place.
mattmaroon 1 day ago 0 replies      
I hope this does not happen to him, but wait until he releases a game (that might even be a really good one) and gets nowhere because 1,000 other people released a game that week. It's a tough industry now! You probably appreciate the indie side of it when you're in AAA, but I can tell you from experience you appreciate the AAA side of it when you're in an indie!
lhnz 1 day ago 1 reply      
> "When your expertise is limited to, let's say, art, level design, performances or whatever, you'll eventually convince yourself that it's the most important thing in the game."

This is my experience, too. Without autonomy and ownership across a whole project it's very easy for people to get tunnel vision about what's valuable. This causes general harm to both the team and the outcome of its project.

I'm not sure how to lessen the effect other than perhaps by making projects small enough that they can be worked on by just a few people and using this phase to establish a kernel of good ideas and team cohesion.

Perhaps there might be another structure where the tools that are provided to the team are literally so good that the main project can be done by just a few people working on everything together. (Idealistic vision here.)

ckarmann 1 day ago 0 replies      
Congratulations for pursuing your dreams! I also work at a big game company, not even on a game but on an internal technology: no player will ever see directly the result of my contribution. I still feel great about my work because of reasons not important here, but I totally understand what the author says.

But the feeling of being a little cog in the machine aside, some of what is said here is about failures of management: communication problems, useless meetings, bogus decision processes, lack of visibility into who is impacted by a decision, etc. It's true that big projects are more difficult to manage than small ones, but in truth bad management or bad coworker dynamics can destroy motivation in big and small companies alike. I have worked in a few startups and two indie game companies, and all were plagued by mismanagement as much as, if not more than, my other experiences at a bank and at a big cell-phone company. I may have been unlucky, but it may be a simple truth about the programmer's job: working with other people is hard, and team dynamics are very important.

jnaour 1 day ago 2 replies      
Seems to be his first indie: http://openbargame.com/
gnulnx 1 day ago 0 replies      
> No matter what's your job, you don't have a significant contribution on the game. You're a drop in a glass of water, and as soon as you realize it, your ownership will evaporate in the sun. And without ownership, no motivation.

This is why I left my 'dream job' working on a AAA MMORPG. I came on board early on as the first member of a 'NetOps' team, a senior Linux systems administrator, which later split off and grew into a number of very large, very specialized teams. My loose definition of 'dream job' at that time was 'large scale' and 'video games'. Cool!

It took a few years for me to redefine what a 'dream job' really meant, and being a drop in a bucket was not it, so I left and moved on (slowly) to freelancing, and haven't looked back.

hacknat 13 hours ago 0 replies      
Late to the party and this doesn't address what the OP said directly, but the state of the video game industry actually makes me quite sad.

The last AAA game I played was Oblivion, which I couldn't finish. I haven't really played a AAA game since, and have only played two video games all the way through since (Braid, and Monument Valley).

When the OP talks about working on a project so big that no one person really "grocks" the whole thing I can relate, but I also want to say "it shows".

IMO, the current state of AAA games is shit. I think the reason they are this way has to do with what the OP is complaining about: the originating vision of the game comes from marketing, not an artist, and no one person has a vision for the game. Maybe video games just have too many resources at their disposal.

I think I read somewhere that either Ocarina of Time or Mario 64 had double or triple the playable content of the released game, and that Miyamoto had a perfectionist eye for the game and was merciless about what made the cut.

Resource constraints are a good thing, IMO, as it forces people to make a razor focused product that trims the fat mercilessly.

Having unlimited resources is the enemy of good decision making, and it shows in the current state of video games (and film too). Games and movies are just too long/full these days.

cpsempek 1 day ago 1 reply      
I do not get why people, it seems, often use the "...or, how I learned to stop worrying and..." in their blog post titles. Are they doing it as an homage to the film Dr. Strangelove (I'm not sure if that was the originator of this alternate/sub title phrase), or, are they doing it because it has become a meme among bloggers?

If the latter, fine; at worst they are unoriginal. If the former, then they haven't ever seen the movie, or don't understand the movie and the absurdity of the title character, let alone of "loving the bomb".

Or, this phrase is common and I erroneously associate its origin with the film.

In every case but the last, it irks me, but for no good reason ultimately.

martimoose 1 day ago 0 replies      
I don't work in the videogame industry, but I can totally relate. I work in a small website dev studio, and we interact with a lot of companies, both large (though not huge) and small.

As soon as you get people working on a project who are too specialized, no matter the size of the team, you inevitably get conflicting concerns. I think it's very important for managers to understand what those concerns are to be able to make the right decisions.

I also think that even specialized people should have some knowledge of other specializations (e.g. designers that understand programming, and vice versa). On very large projects, this is impossible as there are just too many fields, but still I value very much "general knowledge" for that reason.

Anyway, good luck Maxime in your endeavors.

shmerl 1 day ago 0 replies      
The problem of companies like Ubisoft is mass market approach. Big publishers prefer commercial mass market art to good art. In result, more interesting games come out from independent studios like inXile, Obsidian, CD Projekt Red and others. Not sure how it looks from insider's standpoint, but from gamer's standpoint, big publishers like Ubisoft and EA are plain boring and their games can be compared to pulp fiction and you don't expect to see masterpieces from them (coincidentally they are also most often plagued by DRM in contrast with games from independent studios).
ergothus 1 day ago 0 replies      
This quote stood out to me:

"On large scale projects, good communication is simply put just impossible. How do you get the right message to the right people? You can't communicate everything to everyone, there's just too much information. There are hundreds of decisions being taken every week. Inevitably, at some point, someone who should have been consulted before making a decision will be forgotten. This creates frustration over time."

This is an issue I've wrestled with over the years - too small a company and your resources are limited, too large and progress mires, and it mires because of communication.

the_common_man 1 day ago 1 reply      
Fantastic write up. I know this feeling all too well.

A bit related is when you work at big companies like Apple and Tesla. These guys have a "hero" at the top. There is nothing you can do but wait for the headline that describes a feature you made as Elon Musk's doing or Jobs' amazing leadership. I have nothing against these two, but it is very demotivating to work under.

dismal2 1 day ago 0 replies      
Fulfillment through structured 40hr+ a week labor is an illusion
davedx 1 day ago 0 replies      
I've worked for a couple of small games studios, and once for a big studio working on a AAA game. The headcount observations resonate.. I remember our teams growing, and growing, and growing, and each extra programmer detracted from the "community" feeling of being part of a studio, and added to the complexity of developing such a large code base with so many devs.

Compare that to small studios, where you can really feel like part of a family. It's very different, and all these kinds of feelings are more intense than other IT companies I've worked at. (Probably partly because of the extra time you tend to spend there when working in the games industry...)

Having said that -- some of my best friends were made when working at the big AAA studio! So it's not all bad.

emehrkay 1 day ago 0 replies      
This was a great read. I worked at a large web agency once and did some pretty decent work. It is definitely rewarding to see people use something that you worked hard on and to see it on tv and in magazines, etc. But that yearn to do your own thing, blaze your own path is a feeling that I'm certain most people who work in creative fields go though.

Sidenote: before he said that the small projects were cancelled, I assumed that they were Evolve (https://evolvegame.com/agegate/) (I don't follow games close enough to know which studio makes which game).

I'm curious as to how he was able to, I assume, bootstrap a game company for a year before releasing an iOS game.

jarjoura 1 day ago 1 reply      
So let me just throw this out there, we will always have to answer to someone. Whether it's our middle manager in a big organization, VCs telling us how fast we need to grow, or our demanding users because they are the only way to get revenue.

All software written at this stage is small cogs on a much bigger platform written by teams of brilliant people over the last 30-40 years.

I do think it's fair to say you want to work on actual interesting problems and being one of 20-40 people working on a game engine is probably very tedious. I imagine long code-review cycles since any tiny change could destabilize the entire system several layers up.

Some people need big organization structure to produce their best work while some people need the freedom to have infinite WFH days answering to users to produce their own best.

dexwiz 1 day ago 1 reply      
Good luck to him. Going Indie is a bold choice, especially after Steam Greenlight.
richerlariviere 1 day ago 0 replies      
> The team spirit was sooo good! Our motto was "on est crinqués!", which more or less translates to "we're so hyped!". During our play sessions, we were so excited we were screaming and shouting all over the place. I think it bothered colleagues working next to us, but hell, we had so much fun. I didn't feel too guilty.

Wow. IMO a dream job is a balance between having fun like you described and working on complex problems. I love how you have written this paragraph.

ninjakeyboard 1 day ago 1 reply      
I went to business school instead of tech because I too dream of being at the helm of the ship one day. Tech will always be an interest, but I thirst for freedom.
robertndc 12 hours ago 0 replies      
There are no dream jobs, just jobs or dreams:

. Build your own company and you will end up accepting profit as your flagship.

. Find a job where you lead the direction and internal politics will make you adapt in ways that run against your life goals.

. Make an open source project that no one will use.

azraomega 1 day ago 0 replies      
tl;dr - feeling no ownership doing somebody else's big projects, he quit.
Mendenhall 1 day ago 0 replies      
We are all just cogs. What I learned is no matter what sized cog I am compared to others, just make sure my interaction with the other cogs is as smooth as possible. I take pride in doing good work no matter how small or large.
MollyR 1 day ago 0 replies      
Wow, cool stuff. I wish them the best.

I often had dreams of doing the same thing, especially inspired by this guy http://www.konjak.org/ .

It seemed like overkill for me as I could never get a team together.

Though with the rise of VR, I've been looking into Unity3D. How cool would it be to build your own world, then jump in and visit it?

durpleDrank 1 day ago 0 replies      
I used to work beside Ubisoft here in Montreal. I'd hear them talk about video games during lunchtime and it was pitiful. It seemed like having colored hair and geek-chic was more important than actually knowing anything about video games.
chris_wot 1 day ago 0 replies      
I can well imagine this can occur in larger, non-games software development projects. I wonder if it is the same?

I sort of suspect not. I am currently refactoring an (albeit important) part of the LibreOffice codebase - the VCL font subsystem. Mostly it's reading the code (in fact, 90% is reading and understanding the code), but it's kind of satisfying looking at how changes to the code make things better and... more elegant.

Perhaps this is just an Open Source thing. Or maybe I'm unusual in that I like to focus on smaller modules and make them really good, then move on to the next thing.

listic 1 day ago 1 reply      
How large was the development team of Assassin's Creed Syndicate at its peak? Is the overall budget known as well?

I wonder how big "really big" is nowadays.

iolothebard 1 day ago 0 replies      
Put the guitar on ebay/reverb. Or just send it to me!

Best of luck :-)

workitout 1 day ago 0 replies      
For me, a dream job can be writing CRUD web software, as long as people need it and appreciate my work.
oDot 1 day ago 0 replies      
How does this compare to Valve? Maybe having no deadlines can ease the specialization issue.
bronz 1 day ago 0 replies      
Make sure to check out his upcoming game, Openbar. It looks really, really good.
Privilege and Inequality in Silicon Valley medium.com
604 points by dtran   ago   270 comments top 42
rdlecler1 1 day ago 3 replies      
I had a similar experience, starting life with opportunity debt. Single-parent family whose mother had no high school education. I have ADD and dyslexia, moved around a lot with a good part of my life in subsidized housing, and never graduated from high school. No one teaches you the basics, so that when you do start coming into your own and taking control of your own life, you are so incredibly behind your peers socially, politically, and intellectually. I eventually went to community college as a mature student, eventually made my way to university, did a master's, and then a PhD at Yale. Through it all I was always one or two steps behind, and so many opportunities were missed because I didn't have money. Similarly, now as an entrepreneur I find myself being a little more conservative, because I've been through a lot of bad times without a safety net.
yeureka 1 day ago 6 replies      
This sounds very familiar.

When I was in University I didn't understand why some people didn't care about grades and partied so much. When we left school and got into the real world I understood why: they had rich parents with contacts that could get them good jobs or seed capital for their own businesses.

I had lots of ideas and worked in a lot of startups for more than 10 years but now the following phrase from the article describes my situation very well:

"Most of the time, potential founders who share my background tend to work at lucrative jobs in finance or tech until they can take care of everyone in their families before they even dream about taking more risks, if they ever get there."

ginsurge 1 day ago 4 replies      
This really resonates with me. I was the first person in my family to go to university, and my grandparents had to work multiple jobs when they migrated from Europe in order to survive. My dad did slightly better, but both my parents had only high school educations and worked blue-collar jobs.

It does make it really hard to change your mindset when you come from this sort of background, when you've achieved more than anyone in your family and therefore can't really talk to them about your ambitions or career objectives.

It sounds awful, but sometimes I wish I had been born into a different family, with highly educated parents I could have amazing conversations with, who would encourage me to achieve and grow even more.

I find I constantly have a mindset of "I'm not good enough", and it's paralysing. I want to interview for the top tech jobs out there, like Google or Facebook, but my brain keeps telling me I'm not good enough. It's awful.

AlexB138 23 hours ago 3 replies      
This was a bit of a tough read for me. My reaction is sort of selfish, but it was very visceral. I read this and had to come back later to respond, though I imagine the conversation's largely over at this point.

My family basically fell apart when I was around 11. My parents divorced. I stayed with my father, siblings went with my mother. My father turned into a drunk. I spent good nights carrying him from the couch to bed, and bad nights carrying him from the lawn, sometimes without clothes. I learned to drive bringing drunks home when I was about 13.

I had no social skills. I struggled in school and failed a grade, though I eventually made it up and graduated high school on time. No one ever even mentioned college to me. I never thought about it until everyone I knew was talking about where they were going. Toward the end of high school my father's alcohol habit turned into a hard drug addiction. About a week after my 18th birthday, we were kicked out of our house because he hadn't paid rent in months. He went to go live with a fellow addict and I became homeless.

I lived on friends couches for a while. Around that time I realized that life could continue getting worse, or I could start fighting the tide. I got a job making pizzas, then doing construction work and then started a sub-contracting company doing construction when I was 19. When I was 22 I had 14 people working for me. I ended up shutting the business down, mostly due to mistakes I had made. After that, I got into tech.

I'm 30 now. I've got a family and don't have much of a relationship with my parents or siblings. I make a solid salary, and have done fairly well in my career, but I struggle with pretty severe imposter syndrome. I have trouble making lasting connections, and have failed entirely to find any mentorship. My wife hardly knows anything about my history, but she knows more than any of my friends.

All of this is a long winded setup to say, I didn't get that transformational experience that the writer here experienced at university. I didn't even know SAT classes existed until well after they would have helped, and had never heard of Stanford until I was into my tech career. I would have given quite a bit to trade my father for an immigrant who simply didn't work. I very much admire the writer's drive and results, and don't mean to detract from any of that, but I have a hard time fighting the urge to point out that he had more privileges than he probably realizes.

__jal 1 day ago 1 reply      
I found myself nodding along the whole article.

First, let me say that I am happy where I ended up. I'm successful, enjoy my work, and when I compare my personal income with our family income when I was growing up, it is an absurd multiple.

We were a very poor family in a poor part of the South. I went to a top-10 small private university on a full ride, felt completely alienated and never quite figured out how to function in that environment. I dropped out and moved to San Francisco at what turned out to be a very good time (early 90's), and once Netscape dropped, discovered nobody else knew what they were doing with this web thing either, and more or less faked it until I made it.

At the same time, I have had and do have ideas that others have executed on, that I know I could have made a go at, if only...

The "if only" list is long, and most of it comes back to self-imposed limitations that I can trace back to how I grew up. Frequently it relates to economic security, but there are other habits of thought that stop me before I even get to worrying about that.

One big one is that I never learned to think about entrepreneurship. A big lesson hammered into me growing up was the importance of "finding a good job", not figuring out how to make my own.

I did start a company in my mid-30s, and we did OK, until we didn't. And that failure (I think) had nothing to do with the habits of thought of a poor kid. But failing in a similar way in my 20s would have left me in a position to learn from that and try again, something I'm unlikely to make a go at 10 years later. I do little things for side income, but those are hobbies.

So it ends up being this thing that doesn't really bother me at this point, but does leave me to wonder what would have happened if I had picked parents from a very different walk of life.

And I am quietly amused when people tell me how they built everything themselves "after a seed from Dad", or "with a great connection I made through a family friend" or similar. Those are impossible blockers for a lot of people, even if they get over some of the habits of mind better than I did.

nickbauman 1 day ago 6 replies      
There's a strong thread of meritocracy in the tech community, but there is no such thing. When you choose the clearly better developer over the other, you're often choosing the one who had better resources growing up, not just natural ability. The poorer developer may have had a natural advantage over the other one, but didn't have the money to develop it as much. So you're really just selecting for wealth all over again.

This is what's behind the achievement gap anxiety: Wise rich people don't want to perpetuate a world where only money selects success. It's wasteful and ultimately unsustainable.

mchu4545 1 day ago 0 replies      
Ricky mentions the guilt at not cashing out on his Stanford degree immediately and providing for his family.

Day-to-day, how do founders in similar positions coming from cultures with tight-knit families address this? Especially as parents age?

dtran 1 day ago 4 replies      
Has anyone done a survey of the family socio-economic status of startup founders and early employees? I'd be curious to see how many founders/early employees are from low-income families, whether their parents graduated college, etc. If not, I'd love to create one.
dsfyu404ed 6 hours ago 0 replies      
He conspicuously missed the part where time spent working a job, studying and generally acting like a responsible adult is time not spent networking.

The "poor" kids also tend to find each other at college and over the first few semesters form separate networks from the rich kids. People tend to want to hang out with people who are similar to them. One group goes out partying together; the other sits in a dorm room listening to music and drinking a $15 handle. Their friend groups don't overlap much.

The poor kids tend to build networks where the personal skills and resources members bring to the table in the present are important (or that was my observation). I guess when you can't throw money at a problem, knowing who's the IT guy and who's the car guy becomes more important.

nish1500 1 day ago 1 reply      
I grew up on the other side of the world, amazed by what I heard in the news: that there existed a world beyond mine where smart people with smart ideas built great companies overnight. I am smart. I have merit. I dropped out at 19, taught myself how to code, and built a 6-figure business with my projects online. I want to learn more.

I got turned down by 15+ companies and startups in the past few weeks because they couldn't sponsor my work visa. This is Canada.

The USA? Being a dropout makes me ineligible for any US work visa.

So much for merit.

ianphughes 1 day ago 0 replies      
What was that axiom attributed to Red Auerbach? "You can't teach height." Ricky demonstrated he was hungry to learn and succeed at a very early age, a quality that will always bring some level of success through life: "I had to bring my dad to the office the next day and told him to pretend to say some words in Mandarin while I just demanded that I get put in an honors-level English class."

How do you identify those who are underprivileged, but carry that quality too? It can be very difficult to identify.

mempko 1 day ago 0 replies      
Excellent post. But I feel that we need to go beyond talking about what we can personally do to improve our situation. Either the vast majority of people are ill-adapted for success, or something else is going on. I think we should go beyond the classic argument "If we just all recycle, the world will..." Or "If we all buy electric cars, global warm....".

This post applied some of that individualistic attitude to a much broader and obviously systemic problem.

decisiveness 15 hours ago 0 replies      
There might be many more success stories if children growing up close to the poverty line were able to do so in more nourishing environments. However, discouragement, lack of confidence, and anxiety are not restricted to any racial or economic background. Not having a silver spoon is in many ways a better environment in which to be raised.

The OP does not say that his parents didn't show him any love, which is more important for the development of a person than any economic status. Many of the other struggles can be used as fuel for building positive character traits, if one lets them.

Having read through the post, it doesn't appear that he's actually arrived at a valid point; he's just trying to brand himself as underprivileged through the telling of his life story, which has turned out to be successful by most standards. He uses the argument that "mindset inequality" gave him a chip on his shoulder so he was able to succeed, and that therefore others fail because of it, which seems contradictory.

bluishgreen 8 hours ago 0 replies      
Paul Graham is one of my biggest programming heroes. He single-handedly changed the way I think about and do programming about a decade back, and I am eternally grateful for it. One of the biggest lessons I got from him is "succinctness is power". That essay was a game changer both in terms of the math work and the programming work I do.

Here is one instance where that powerful way of thinking runs head-on into a stone wall. He said "few successful founders grew up desperately poor" and moved on. Succinct, yes, but not powerful. This piece took a couple thousand words to say the same one succinct thing that PG said, and nails it in terms of the empathy it generates and the power with which it communicates, while PG's writing on this issue comes across as aspie. This is the lesson he needs to take from that latest article and the Internet's reaction, not that "Life is short" (which totally misses the point).

Narrativity and Authenticity and Poetry and Verbosity are power! (when dealing with humans).

tomcam 22 hours ago 0 replies      
Made me cry. Much of it rang true. Story has similarities except about 1/5th as traumatic and am a white dude who grew up here. Have done well financially but have a compromised home situation traceable to some of the same causes.
jmspring 21 hours ago 0 replies      
I was soft in my earlier comment, but I'm fine taking a rep hit.

This is not the only recent post where the topic is "oh golly gee, look at the hardship I went through to get through college and then found something."

It's a millennial post, and there have been many of them.

Going through college is a challenge... Having to work or be responsible during it sucks (I interned at Borland as well as worked for an astronomical research company).

Post college, more than a few have to deal with life obligations that come up.

Our profession certainly offers a bit of a cushion and flexibility, but we have to manage that and our obligations.

You don't see me here whining about having to support my parents due to the last downturn, or the many other personal decisions made.

The blog would have been better written as challenges met and overcome, leaving out the (for lack of tact) whiny bits...

Yes, coming from poverty has challenges, and friends in that situation stretched into their late 20s to complete a degree... but perspective and awareness of the wider world is needed, not another post about personal insecurities.

ryanwitt112 1 day ago 1 reply      
Interesting take. I'd like to hear what PG and others think. Coming from a middle-class background, I can relate a bit and see, observationally, the other components to what Ricky's calling "mindset inequality". It's almost like "new money" vs. folks that had bigger dollars to spend growing up. I know a lot of friends that have deeply entrenched psychological elements they need to overcome before reaching that "next level", ingrained because of their upbringing. And, to Ricky's post, it's sometimes more of a challenge than the monetary differentials.
bobby_9x 1 day ago 0 replies      
"but building and sustaining a company that is designed to grow fast is especially hard if you grew up desperately poor"

Most people don't have the money or resources to build a company like this, which is why we have VC. They know you are in a desperate situation and exchange the money that you need for a % of the company.

The better thing to do is choose a solid business idea that can be built slowly and at a certain point, put money you make from this venture into an idea that needs more capital to succeed.

miiiiiike 1 day ago 0 replies      
An old friend and I were talking a few weeks ago, and I smiled when he said "We were so poor growing up we didn't even realize we were poor." And we didn't; we were so poor we couldn't even pay attention. It was good, though. It's still easy for me to live in a tiny apartment and exist on a steady diet of eggs 'n' oatmeal, apples, and frozen chicken bought in bulk.
frodik 19 hours ago 0 replies      
It is a good thing that privilege is becoming a topic in these circles. It's fascinating to see how many people still try to present it as looking for excuses, perhaps because they just don't understand what it really is, or because they need to validate their success by convincing everyone that it is only their hard work that matters and nothing else. Also, beware of survivorship bias: we don't exactly get to hear the stories without a happy ending here.
jcoffland 22 hours ago 1 reply      
My favorite line:

> Compare that level of confidence to a kid with successful parents who'd say something along the lines of "If you can believe it, you can achieve it!" Now imagine walking into a VC office having to compete with that kid. He's so convinced that he's going to change the world, and that's going to show in his pitch.

I enjoyed this article a lot but clearly this guy also made some of his own hardships. Going on ski trips just to fit in and then running out of money is incompatible with the image of a frugal poor kid.

peter303 10 hours ago 1 reply      
I'm sorry, but if you graduate from Stanford you start near the top of the opportunity heap. Maybe people aren't satisfied with what they have and want more.
zanewill9 1 day ago 0 replies      
Excellent post. We like to think the underdog sometimes wins, but sadly, success is typically given to those that were born with it. The unfortunate part to me is the credit they are given, as if they were amazing rather than born lucky.
return0 1 day ago 0 replies      
> We think this is the reason why poor founders tend not to be successful.

The essay by PG actually meant that there are no poor founders at all. It would be interesting to have statistics on whether poor founders fail more, or don't even get a chance to try at all. I have reasons to believe that the rare poor person is more motivated and determined than the average groomed-to-be middle-class entrepreneur, and there are plenty of cases of dirt-poor people becoming millionaires.

Htsthbjig 1 day ago 2 replies      
I believe this man is confusing lots of things.

I have lived in China for more than 5 years, and in Boston, Japan, and Korea for more than 9 months each.

In my opinion, minimizing conflict has nothing to do with being poor, and a lot with being Chinese educated.

On the contrary, I volunteer helping poor kids like Spanish gypsies or Sub-Saharan Africans, and they (and their parents) are ultra confident and spontaneous. Being open is the default for them.

I managed Chinese people in China and there was a world of difference between natives and those Chinese educated overseas.

When living in the US, I was shocked to see parents cheering their kids for the most stupid things, when in Europe as a kid you are forced to put in 4x more effort without any rewards at all (like learning multiple languages). It is just what is expected from you.

In Asia, this pressure over kids is even higher than in Europe.

Family is very important for the Chinese, almost a religion. This has advantages and disadvantages. For innovation, it is a big disadvantage. Innovation means taking risks, and being close to your family means having to convince lots of people those risks are worth it. Most people won't understand you, and it is very hard.

In the US, everybody is on their own; basically, nobody gives a damn, which is great for changing the world.

zmitri 1 day ago 0 replies      
What an excellent post. Respect.
timewarrior 1 day ago 1 reply      
tl;dr: In spite of motivation, talent, and hard work, financial situation and immigration (in my case) play a big role in your entrepreneurship journey.

Excellent article by the writer. Apologies for the long post; however, I hope it is helpful for someone in a similar situation. I can relate to many things that he has faced, and feel incredibly lucky not to have faced some things that he had to.

I grew up in a small town in a poor family in India as the eldest of four siblings. Our monthly budget was 20 dollars and things were really tight. However, my dad worked really hard, 16 hours every day, and made sure that my studies did not get hindered. He told me every single day that with hard work I could achieve anything I could dream of.

I got into IIT Bombay (one of the most prestigious colleges in India). However, it was obvious to me that I needed to get a decent-paying job right after school to support my siblings and my dad, who couldn't do 16 hours any more.

It took me the next 8 years working for others to save enough to pay for my and my siblings' studies and marriages, and to help my dad retire.

During these 8 years, I built and ran the biggest social network to come out of India. Apart from this, I also built something which is now the Twilio of India. I was also part of the team which built the current mobile offering at LinkedIn.

If I had had financial stability, I would have started working on my own ventures 3 years into my career. But it took 5 more years. As soon as I had financial stability, I quit LinkedIn (with 2.5 years of stock unvested) to start a company.

I started a company where we had incredible opportunities. We built something like Slack for consumers around the same time as Slack. However, being on an H1 visa, I was a minority stakeholder in the company. And that is a bad situation to be in if your traction is not already proven. It made sense to exit the company, so we sold it to Dropbox in an acqui-hire.

Dropbox treated me really well. I met some of the smartest people I have ever met over there, and it can be a great place to work for many people. However, I soon realized that it wasn't a good fit for me. Such companies are very top-driven, there is little creative freedom, and most of the work is cleaning up the tangled code developed over 7-8 years. So I quit Dropbox after a year.

Now I am in a job that gives me more creative freedom, and I am pretty happy on that front. Meanwhile, I have been sole advisor to a few companies over the past 2-3 years, and they are all profitable and didn't need to raise any money. The entrepreneur in me keeps me raring to go and start another company. However, because I am on an H1 visa, I do not want to build another company with a minority stake at formation (USCIS rules). To fix this, I would need to get a Green Card. However, if you are from India, it will take you 8-10 years to get a Green Card in EB-2.

So the next steps are either move from US, or find a way to get a Green Card on EB-1. If anyone knows any good immigration lawyers, please help introduce.

However, related to the original post: in spite of motivation, talent, and hard work, financial situation and immigration (in my case) play a big role in your entrepreneurship journey.

codingsaints 21 hours ago 0 replies      
This is a great article. I'm more of a reader than a contributor on these articles, but I just had to comment on a great, positive post. It makes me want to provide more positive feedback to others, to hopefully keep them going.
forrestthewoods 1 day ago 0 replies      
As someone who grew up in the exceptionally poor, rural South I'm not sure what to take away. I don't know anyone who was able to go to Stanford despite bad grades in high school. That's an enviable luxury.
erichmond 1 day ago 0 replies      
Props for writing this. Oftentimes I want to tell my (different but similar) story, but I never do. I don't know why. It probably has to do with a number of the points you make in the article, so you are a couple of steps ahead of me.
jdenning 9 hours ago 0 replies      
Forgive me for being a bit sappy here, but this post, and the discussions that it inspired here are absolute gold!

It's certainly not the first time I've thought about this topic, but for whatever reason, the OP and much of the discussion is resonating very deeply for me (and apparently for a lot of folks). IMHO, this is some of the most productive discussion about privilege and opportunity that's ever appeared on the internet; for the most part, this discussion has avoided the sort of aggravated competition (i.e. pissing contests) and judgements that generally arise out of internet discussions of privilege. In place of those nastier (albeit very human) responses, this thread is full of empathy, support, and offers of help.

I'm very proud of our little community here today.

I'm planning on writing a more detailed post in a few days after collecting my thoughts a bit more, but I'd like to share some half-formed ideas which this post has inspired (comments and criticisms are very welcome!):

1) Part of what's awesome about this discussion is that it seems to have enabled a bit of ad-hoc group therapy. I think it's very helpful for folks who are facing these hurdles to realize they are not alone; while everyone's situation is unique, it's great that people have been acknowledging similarities in their stories, rather than arguing about the differences. We should try to do more of this (with other contentious topics as well)!

2) As several people have suggested, I believe that collecting these stories could potentially help a lot of people. I'm totally down to build and host a site towards that end - would anyone be interested in sharing their stories in that sort of venue?

3) While the specific issues that people have had to deal with are different, there seem to be some common 'flavors' that many have experienced:
a) Socio-economic disparity causing an aversion to risk later in life.
b) Lack of confidence in oneself which adds an additional handicap compared to more self-confident people, likely resulting in missed opportunities ("you can't win if you don't play" vs. "you can't lose if you don't play"); impostor syndrome.
c) Lack of connections, again likely resulting in missed opportunities and increased difficulty in building new things/finding a job/etc.
d) Disparity in access to knowledge that greatly improves chances of success (e.g. importance of SAT scores to college admissions; efficient resource management; interview skills).

Improving the situation in (a) seems to be what the world at large is most interested in. Unfortunately, it's a difficult, heavily politicized, and therefore divisive issue. By contrast (b), (c), and (d) seem like problems that we could really improve, at least within our own community.

For example, someone might have a harder time getting the type of (tech) job that they want due to a lack of personal connections (it can be really hard to get your foot in the door), however, it's likely that the personal connections they need are actually visiting this site every day. While we obviously can't just start providing references for total strangers, how much effort would it be to spend a few hours corresponding with someone and vetting their skills to see if you feel comfortable in recommending them? (I'll put my money where my mouth is on this one - if anyone feels like they'd be a good fit at Cloudera, let's talk! EDIT: just to be clear, I don't really have any hiring authority, but I'm happy to talk to anyone, and potentially help with a recommendation)

Likewise, it seems that (b) could be improved for a lot of people with simple communication - impostor syndrome is very common in tech, so I assume that a lot of people here have advice on the subject, or just an empathetic/sympathetic ear.

Regarding (d), this type of information is likely all available already on the internet, but perhaps it could be more usefully compiled for this particular case, minimizing the number of unknown unknowns. What about a thread (like "Who's Hiring") listing offers for mentorship ("Who Needs a Mentor?")?

I dunno, am I just being overly optimistic here? It seems to me there's a lot of low-hanging fruit here, if some of us are willing to dedicate a bit of time to it.

More ideas? Criticisms?

kkajanaku 7 hours ago 0 replies      
This article was very real, and I can't help but identify with Ricky and the other stories I've read on here. But it's not just in SV, it's entrepreneurship in general. I thought I'd share my story as well:

I was born in Albania, a small, poor European country with a GDP comparable to Zimbabwe, Namibia, or Sudan. That same year marked the fall of its isolated strain of communism, and Albania's borders were opened for the first time since WW2. In the late 90s, after the collapse of its economy and Ponzi schemes, social unrest reached its height following the violent murder of peaceful protesters by the government and police. This sparked an uprising and the government was toppled. The police and national guard deserted, leaving armories open, then looted by militia and criminal gangs, with factions fighting in the streets to take control. My parents moved our beds to the hallway of our small apartment, as there were no windows there, and my little sister and I had to stay quiet so no one would hear we were home. After a UN operation, the government was restored, and the situation was relatively calm. Sometime that following summer, my dad found out about a US green card lottery, filled out an application form, and because he was in a hurry, handed it to a random stranger waiting in line to submit it for him. He then forgot about it, until a year later, when we got a letter telling us that we had won. My parents weren't badly off in Albania: they were comfortable, their friends and families were there, they had great jobs, and the future looked promising. But having just gone through that rebellion, then the Yugoslav Wars to the north trickling across the border, and with the allure of the American Dream, they decided it would be best for my sister and me.

We moved to Philadelphia in 2000, to a working-class neighborhood, with a few suitcases and not one word of English. My parents took on multiple jobs; their Albanian communist-era degrees were obviously not recognized in the US, so my dad, once a doctor, is still working maintenance and shoveling snow on the East Coast as I write this. Like Ricky said, and like all immigrant kids, my family depended on me to learn English and deal with translation and everything in between. Five years later, when we became citizens and received our passports, my parents knew more about American history than was taught in my inner-city high school.

My parents are incredibly supportive, but they moved to the US in their 40s; they weren't familiar with the language, the culture, and even more importantly, capitalism. Apart from the classic model of education, they weren't familiar with the tools required to be successful in a strange place like this. But with their meager wages they were happy to support my hobbies, buy me lots of books, and a computer with internet access which taught me much more than my inner-city schools did.

Eventually I got a college degree, then went on to do a dual master's in design and engineering at the Royal College of Art and Imperial College in London. I even got to go to Tokyo and work for Sony while studying there. I graduated this past summer, and then launched my final group project as a startup in London with my friends: two English, Oxford-educated engineers, and a Spanish designer/engineer whose father is the president of one of the largest companies in the world.

Then reality sank in. I had to leave; I can't be an entrepreneur just yet. I moved to SV to find a high-paying job in tech for the next 5-10 years, so that I can:

a. afford to pay rent
b. pay off my educational loans
c. pay off my parents' home
d. help my sister pay for her education
e. send some money home, because my dad is getting too old to shovel snow

Karunamon 1 day ago 2 replies      
Great read.

I've become allergic to words like "privilege" as they usually are seen in the company of ill-thought-out and grandiose/insulting/wrong proclamations about How Things Should Be Done,

..but this is none of that - it's an honest look and deep analysis of someone's experience.

And knowing how important upbringing is, and the sheer (almost superhuman) tenacity it took the author to even partially overcome the (poisonous? non-optimum?) mindset that was completely a result of things out of their control...

what the heck is everyone else supposed to do? How does society do right by people like this? Overall, we're pretty horrible at dealing with things as subtle as mindset.

dba7dba 19 hours ago 0 replies      
This story really made me reflect on my own similar past. Growing up poor in the US as the son of an immigrant family and somehow getting into a nationally well-known college (public, though), I was shocked to see things that I had never known about.

The shock came from seeing how I lacked culture/experience/skills/confidence others had. And these others had grown up in more stable environments with either some or quite a bit of money.

I didn't know how to play any instrument. I wouldn't say everyone I knew in college played an instrument, since I wasn't at Stanford :) but still it was obvious to me I LACKED the soft skills my peers had.

I had not done many things as a teenager that are possible only when you grow up in a family with some means. And this weakened my already not-so-robust self-confidence, resulting in a mostly downward spiral.

You see, growing up with money buys you a lot of soft skills that help you later.

I'm not bitter though. It is what it is. I try to be thankful for what I've had so far.

lifeisstillgood 20 hours ago 0 replies      
One thing that struck me was how, as a child, the author had the "common" responsibility of dealing with landlords, bills, etc. for the family.

It may not be something a startup can solve, but "administrivia as a service" - some means of connecting families in need with someone able to actually advise them and not take advantage of them.

In the UK we have a volunteer service called the Citizens Advice Bureau - I am thinking something like this on tap might be beneficial in ways hard to quantify.


tn13 1 day ago 4 replies      
As an Indian immigrant, when I see people complaining about Privilege and Inequality in SV (and in America), I feel like laughing.

I lived in a society where everyone was almost the same, similar economic status, similar privilege etc. etc. Life sucked. I decided to move out to be among the top 10% instead of one of the 100%. I eventually ended up in SV.

This place is awesome, and the very reason I am here is because I can be in the top 10%. I don't want to be equal; I seek privilege, extraordinary wealth, and stuff that most others cannot afford. I think it is an amazing thing that places like SV exist. If you somehow take out that incentive, I think I will move somewhere else. Of course, I would be moving out of California sooner or later anyway, given the taxes.

namenotrequired 1 day ago 2 replies      
Finally, someone who talks about the consequences of economic inequality. PG seemed to think all that mattered was the causes.
ryandrake 1 day ago 3 replies      
"I'm a self made millionaire [0]"

[0] - Apart from the safe suburban upper class childhood, the prep school and Harvard education my parents paid for, the job at Goldman Sachs my uncle got me straight out of school, and the finance network from that experience that eventually helped me with my first funding rounds, but yea, besides all that I'm TOTALLY SELF MADE!

marincounty 22 hours ago 1 reply      
I looked through your past posts, and you are legit!

I liked that you went to a community college. I too screwed up in high school. I didn't even know why people were taking another test--the SAT. That said, I cleaned up my act in my senior year, but it was too late.

Everything, and a lot more, that I missed in high school, I made up for in two semesters at community college.

If anyone in high school is reading this and thinking, "I wish I could do it over" - you can! I had a great time at my community college. I saved a lot of money, and met some really wonderful people. The teachers really seemed to care. I didn't find that at the four-year school, or even my professional school.

Just make sure to transfer, and get that four-year degree. So many people don't transfer to a four-year university, or even get the associate degree. Yes, so much of college is absolute bullshit, but degrees are still valued in a lot of professions. It's changing though, and I couldn't be happier. British companies are taking the lead. I know that at Penguin Books, HR isn't even allowed to know if you went to college or not. You are hired on your experience, and maybe a test? The way it should be.

robgibbons 1 day ago 3 replies      
You obviously have no idea what it's like to grow up poor. The fear, the guilt, the frustration, and the exhaustion that you learn almost as if through sheer osmosis from your parents.

The author is not arguing that you literally cannot compete if you're poor. But it's the very mindset from growing up in poverty that, through almost every interaction you have in childhood, leads you to _believe_ that you cannot compete, which prevents you from even trying. And even if you overcome that feeling (through constant hard work and willpower, such as our author's), say you do try to compete with the rich kids, then your lack of inborn confidence is so obviously apparent that you come off as inexperienced, or insincere. This is perfectly accurate in my own experience.

Mindset inequality is actually an incredible way to describe it.

dang 22 hours ago 0 replies      
> People feeling sorry for themselves because they're not male and white from a place with food are annoying.

This breaks the HN guidelines: it is both a personal attack (since you're talking about the OP) and gratuitously negative. Please do not post comments like this here.


BurningFrog 1 day ago 3 replies      
> No one teaches you the basics

Room for a startup/free service that does that!

Server Retired After 18 Years Running on FreeBSD 2.2.1 and a Pentium 200MHZ theregister.co.uk
409 points by joshbaptiste   ago   157 comments top 25
krylon 1 day ago 2 replies      
I remember a discussion on a FreeBSD mailing list, around 2003-2004, where people bragged about the impressive (though in comparison to this headline, puny) uptimes of a few years.

One of the developers remarked that while he was proud the system he worked on could deliver such uptimes, having an uptime of, say, three years, on a server, also meant that a) its hardware was kind of dated and b) it had not received kernel updates (and probably no other updates, either) for as long. (Which might be okay, if your system is well tucked away behind a good firewall, but is kind of insane if it is directly reachable from the Internet.)

Still, that is really impressive.

ljosa 1 day ago 3 replies      
The authoritative DNS server for pvv.ntnu.no is still a MicroVAX II from the late 1980s. It runs an (up-to-date, I think) NetBSD. Logging in by SSH takes several minutes, even with SSH v1.
crishoj 1 day ago 1 reply      
In fairness, from the article it's not actually clear whether the server literally had an uptime (as reported by the OS) of 18 years, or whether it had simply been in constant service (modulo power cuts) for 18 years.
keithpeter 1 day ago 0 replies      
Having read and enjoyed this thread and the later follow up thread on The Register, I was struck by the number of commenters who could not clearly remember the dates/machine types or who posted anachronistic descriptions.

People here forging ahead with innovative hardware: why not just record brief details of dates and setups in the back of a diary or something? In 30 years' time, you'll be able to start threads like this!

geggam 1 day ago 1 reply      
I tossed out a similar system not too long ago

A Pentium Pro 180MHz running OpenBSD with 64MB RAM and a Perl BBS, averaging around 10k hits/day.

Wasn't worth the electricity to run that thing. It still worked when I put it out on the corner.

theandrewbailey 1 day ago 0 replies      
Ars Technica had an article a few years back about a machine that was up for 16 years. Had pics, too! http://arstechnica.com/information-technology/2013/03/epic-u...
SEJeff 1 day ago 1 reply      
This was an old RHEL4 external dns server I ran at $last_job:


I was sad that we had to shut it down, but we were migrating our primary colo to another city and going to retire all of the hardware. I'd been manually backporting BIND fixes, building my own version, and had to do some config tweaks when Dan Kaminsky released his DNS vulns to the world.

It is always a sad day to retire an old server like that, but 18 years... What a winner!


But 1158 days for an old Dell 1750 running RHEL4 isn't too bad, considering it serviced all kinds of external DNS requests for the firm. Its secondary didn't have the uptime, due to constant power issues in the backup datacenter and incompetent people managing the UPS.

rogeryu 1 day ago 2 replies      
Almost as impressive is the fact that in 18 years, the electricity had no downtime.
jedberg 1 day ago 1 reply      
I used to run the FreeBSD box for sendmail.org. When I left that job in 2001 it had already been running for 2+ years.

Considering that the datacenter it was in is now the Dropbox office, I'm guessing it had to be shut down and moved at some point, but 2+ years seemed like a really long time even then!

FreeBSD is just really good at lasting forever.

wazoox 1 day ago 0 replies      
I've always had many Unix machines with high uptimes around. My home PC (Linux) typically reboots 2 or 3 times a year. My office DNS server currently has 411 days of uptime and is the best of my bunch ATM.

In 2002 I had installed, on the machines under my guard, a program that reported uptime to some website. One of my machines, an SGI Indy workstation, had a high uptime of about 2 years. Then a new intern came, and we installed him next to the Indy. Unfortunately, his feet caught some cables under the desk, unplugged the Indy, and broke my hopes of a record :)

MikeNomad 1 day ago 1 reply      
Great run for all-original equipment. I worked at Shell's Westhollow Research Center in the mid-90s. We handled the nightmare of standardizing the desktop space (for the first time ever).

A lab was decommissioning an instrument controller that had been running non-stop since they had first spun it up, fresh out of the packing box, a decade previous.

And they had never backed up any of the data. Sure, the solution was the pretty straight forward use of a stack of floppies. It was still pretty nerve-wracking having a bunch of high-powered research scientists watching over my shoulder, "making sure" I got all their research data off the machine they were too smart to ever back up themselves. Good Times.

grabcocque 1 day ago 0 replies      
And now its watch is ended.
iolothebard 1 day ago 0 replies      
Anyone running old machinery that had DOS drivers would likely have older computers. I remember, working on base, seeing 386/486s in an aircraft hangar area that were so covered in grime I was astounded they were still used.
ommunist 1 day ago 1 reply      
Well, I have TI-92 graphing calculator still working OK, since 1995.
koolba 1 day ago 0 replies      
Is there a way to track uptime across kexec[1] restarts? That way you could differentiate between a hard reboot and "soft" one (ex: automated kernel upgrade). Having a system like that working for a 18 years would be insane!

[1]: https://en.wikipedia.org/wiki/Kexec
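One user-space approximation (a sketch, not an existing tool; the persistence step and its path are assumptions) is to save the accumulated total before each kexec and add it to the current `/proc/uptime` reading:

```rust
// Sketch of tracking cumulative uptime across kexec reboots: a shutdown
// hook would persist the running total (e.g. to /var/lib/uptime_total,
// an assumed path), and at boot you add the saved total to /proc/uptime.
fn parse_uptime_secs(proc_uptime: &str) -> f64 {
    // /proc/uptime looks like "12345.67 8910.11"; the first field is
    // seconds since the last (hard or soft) boot.
    proc_uptime
        .split_whitespace()
        .next()
        .and_then(|s| s.parse().ok())
        .unwrap_or(0.0)
}

fn cumulative_uptime(saved_total_secs: f64, proc_uptime: &str) -> f64 {
    saved_total_secs + parse_uptime_secs(proc_uptime)
}

fn main() {
    // Suppose 400 days accumulated before the last kexec, and the current
    // kernel has been up 3 days (259200 seconds).
    let total = cumulative_uptime(400.0 * 86_400.0, "259200.00 51840.00");
    assert_eq!(total, 403.0 * 86_400.0);
    println!("{} days", total / 86_400.0);
}
```

This only distinguishes soft from hard reboots if the hook runs on kexec but not on power loss, which is exactly the property you'd want.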

gtrubetskoy 1 day ago 0 replies      
Funny, in today's world, the uptime on my Linux (virtual) box is several times greater than that of the MacBook within which it's hosted.
emcrazyone 1 day ago 1 reply      
oh man, I have them so beat! I have a Slackware Linux box with similar specs: 200 MHz Pentium, 32MB of RAM, and I think an old 10GB Barracuda 80-pin SCSI drive connected to an Adaptec PCI SCSI card. Every so often the hard disk starts making a high-pitched noise, but it throws no errors and the noise goes away after a few minutes. It sits on a UPS and I probably have an uptime of a few years on it now. It has been running nearly 24/7 since 1996! Only powered off when I needed to move the box between a home office and a few rented offices over the years.

When it was in the basement of my home/office, I would sometimes hear its disks whine as I was working out (lifting weights and such). It was even in my basement through parties in my early bootstrap years.

I originally bought it to run WinNT 4.0 for a new company a friend of mine and I bootstrapped. I would guess a couple of years later is when I put Slackware on it. It's running a 2.0 Linux kernel. It's not exposed to the public Internet.

It used to be the local Samba, DHCP, and DNS server for the company. I eventually upgraded to new hardware and left this server around for redundant backups. I develop software, so copies of my git repositories find their way onto this box each night. It is in no way relied upon, other than out of convenience if another server is down or being upgraded, etc.

At one point the box was in the basement of my home when a small amount of water got to the basement floor, and because the box sat just high enough on rubber feet, there was no damage. Occasionally I go back there and pull the cobwebs off it.

There is no SSL on it. We still telnet into it or access the SMB shares for nostalgia. It's sort of a joke in the office these days to see how long it will last, or if it will simply outlast us.

webkike 1 day ago 0 replies      
It will be given the highest honor a sys admin can give a piece of hardware: casual reference to it as "what a box" in the future.
vondur 1 day ago 0 replies      
Reminds me of the old NetWare servers we used to have running file services and print queues for a few computer labs at a university I worked at. NetWare was really stable, and we only restarted them when some of the hard disks in the RAID array were dying.
deutronium 1 day ago 0 replies      
Impressive! I made a silly kernel module that 'fakes' your uptime by patching the kernel.


meesterdude 1 day ago 0 replies      
This is beautiful to me; its ROI is off the charts by any reasonable expectation. Keeping it cool certainly helped, and having it serve a role that could even exist for 18 years is another important factor.
NDizzle 1 day ago 0 replies      
18 years is a really good run. I had some white-box Cisco networking equipment that had 10 year uptime. I shut it down when we closed the office they were in.
bechampion 1 day ago 0 replies      
It would've been nice to see a photo of the uptime or something like that... I believe him, though.
Announcing Rust 1.6 rust-lang.org
440 points by steveklabnik   ago   215 comments top 20
haberman 2 days ago 3 replies      
Since this release is focused on stabilizing libcore for embedded use, it seems a good time to ask a question that's been on my mind.

C11 (and C++11) defined a memory model and atomic operations for shared-state lock-free concurrency. But that model and the atomic operations aren't being used by Linux, because they didn't match up with the semantics of the operations that Linux uses. (See https://lwn.net/Articles/586838/ and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p012...).

I'm curious what Rust says about this. Does Rust have a memory model like C11/C++11? I'm curious whether Rust (and C11/C++11 for that matter) will evolve to have primitives like what the Linux kernel currently defines and uses.
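For what it's worth, stable Rust exposes C11/C++11-style orderings through `std::sync::atomic::Ordering` (Relaxed, Acquire, Release, AcqRel, SeqCst). A minimal sketch of a release/acquire publication pattern using only the std API (this illustrates the orderings, not Rust's formal memory model, which is not yet specified):

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// Release/acquire pairing: the writer stores the data, then raises the
// flag with Release; the reader spins on the flag with Acquire. Once the
// reader observes the flag, the earlier write to `data` is guaranteed
// visible, so the Relaxed load below is safe.
fn publish_and_read() -> usize {
    let data = Arc::new(AtomicUsize::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    let (d, r) = (Arc::clone(&data), Arc::clone(&ready));
    let writer = thread::spawn(move || {
        d.store(42, Ordering::Relaxed);
        r.store(true, Ordering::Release);
    });

    while !ready.load(Ordering::Acquire) {}
    let seen = data.load(Ordering::Relaxed);
    writer.join().unwrap();
    seen
}

fn main() {
    assert_eq!(publish_and_read(), 42);
}
```

These map onto the C11 orderings, which is part of why the Linux-kernel mismatch discussed above is relevant to Rust too.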

allan_s 2 days ago 8 replies      
I was wondering:

As the HN crowd seems to have quite a lot of Rust supporters, would it be a good selling point in a job description?

i.e. if (it's currently just a personal hypothesis) a company were to consider (re)writing some part of its REST-ish microservices, Rust were the chosen language, and it was looking for people to help with that, would it make an `interesting++` in your mind? For real services used by real people at a not-so-startup company, in Europe.

edit: At my previous company I already deployed some microservices in Rust to production (with a very strictly limited scope, and with everyone's approval), and it was quite a success. So I'm more and more thinking that Rust is now developed enough to fit the market of languages for microservices, as the job is more or less "understand HTTP, read/write from redis/postgresql/mysql/memcache, do some transformation in between", and Rust now supports these operations quite well.

Cshelton 2 days ago 3 replies      
This is relevant and cool, UPenn now has a course for Rust - http://cis198-2016s.github.io/

You can follow along; all slides and HW are on GitHub.

valarauca1 2 days ago 3 replies      
Actually managed to sneak some Rust into production. Nobody cares what language you wrote the DLL in, just that it does what the docs say it should do.
VeilEm 2 days ago 2 replies      
Congratulations! I'm loving Rust, it's my go-to default language now. I'm going to start messing around with piston.rs to make some basic 2d games. I already wrote an irc bot with Rust.

For those wondering: "unstable" just meant that the APIs defined for interactions with some libraries were subject to change. It wasn't a problem to use those APIs; the developer just had to know that new releases might change how they worked, or whether they would even be available in the future.

djb_hackernews 2 days ago 2 replies      
I've poked at the language a bit over time and while I don't think I'll ever "get" rust, I can say the folks in #rust on irc.mozilla.org are friendly and helpful, which can't be said for all languages.
EugeneOZ 2 days ago 2 replies      
Keep going! Thanks to all contributors for their work on this awesome language and crates.

I use Rust on the web for a REST API and I'm absolutely in love with this language. Maybe the first steps were a little bit steep, but it's worth it :)

// cargo run!

zcdziura 2 days ago 1 reply      
What do you mean when stating that applications aren't supported for developing around libcore?
criddell 2 days ago 0 replies      
I have a question for Rust fans: how do you deal with user interaction? Do you have a favorite user interface library? Do you separate the UI from the program and communicate via IPC or HTTP+HTML? If you don't care about cross-platform capabilities, is there a great library on the platform you do care about?
nikolay 2 days ago 2 replies      
I think Rust badly needs an AWS SDK, as a lot of systems software is now being written in Go...
sdegutis 2 days ago 7 replies      
After trying to understand some Rust, it seems to me that it's just as complicated as C++, from the programmer's perspective. Was I mistaken in thinking that being simpler to program in than C++ was one of the goals?
dikaiosune 2 days ago 1 reply      
May the libc event never occur again (at least, I think it was libc with the wildcard dependencies). That's great to hear.
posborne 1 day ago 0 replies      
Is there a good way to look up no_std crates? For crates written with no_std, is there a keyword we should be tagging things with?

I have several crates providing access to GPIO/SPI/I2C under Linux, and would like to put together a roadmap for an interface to these types of devices that is portable across various platforms and devices (e.g. Linux as well as Zinc, ...).

wyldfire 2 days ago 1 reply      
W00t, I made the contributor list!
jedisct1 1 day ago 0 replies      
I'm still looking for an _easy_ way to write servers that can handle many TCP connections with Rust.

The native thread/connection model quickly shows its limits, and is very slow on virtualized environments.

mio is as low-level as libevent and just as painful to use.

mioco and coio are very nice but blow the stack after 50k coroutines.
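For context, the thread-per-connection model being criticized can be sketched in a few lines of std-only Rust. It works, but every connection pins an OS thread and its stack, which is exactly what stops it scaling to many thousands of connections (an illustrative sketch, not a production pattern):

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Thread-per-connection echo server: one OS thread per client. Simple and
// correct, but each thread costs a stack, so tens of thousands of
// connections exhaust memory and scheduler capacity.
fn serve_echo(listener: TcpListener, max_conns: usize) {
    for stream in listener.incoming().take(max_conns) {
        let mut stream = stream.expect("accept failed");
        thread::spawn(move || {
            let mut buf = [0u8; 512];
            // Echo until the client closes the connection.
            while let Ok(n) = stream.read(&mut buf) {
                if n == 0 {
                    break;
                }
                if stream.write_all(&buf[..n]).is_err() {
                    break;
                }
            }
        });
    }
}

// Helper for demonstration: run one server on an ephemeral loopback port
// and round-trip a message through it.
fn echo_roundtrip(msg: &[u8]) -> Vec<u8> {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    thread::spawn(move || serve_echo(listener, 1));

    let mut client = TcpStream::connect(addr).unwrap();
    client.write_all(msg).unwrap();
    let mut out = vec![0u8; msg.len()];
    client.read_exact(&mut out).unwrap();
    out
}

fn main() {
    assert_eq!(echo_roundtrip(b"hello"), b"hello".to_vec());
}
```

Event-driven designs like mio avoid the per-connection thread, at the cost of the lower-level programming model the parent finds painful.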

kevindeasis 1 day ago 1 reply      
So, I've recently tried to figure out why Rust has been getting lots of attention on HN.

So, it allows you to have safety and control. I thought that was very neat.

I've got three questions for experts. One, what type of applications is Rust intended for?

Two, I like JS because I can code the client and server in one language. Will there ever be a web server framework for Rust, and an API that allows me to modify the DOM?

Three, what are your predictions for the future of rust?

hokutosei 1 day ago 0 replies      
I'm not using Rust for heavy/intense DB/API stuff; currently we're on Go. But in the future, I'm looking at using it as the main language for our APIs. I like how safe it is, and its explicitness.
jinst8gmi 1 day ago 0 replies      
Just waiting for official JetBrains/IntelliJ support...
lasermike026 2 days ago 0 replies      
Very exciting!
imakesnowflakes 1 day ago 1 reply      
I have looked at both Rust and Go. What I have felt is that Rust is too restrictive. I mean, you have to fight the compiler a lot harder than you do in Go. Sometimes, for example if you are writing device drivers or real-time embedded programs, that is great.

But for web services? I think it is overkill. I think Go strikes a nice balance. I would love to be convinced otherwise though. So please tell me, what am I missing?

Show HN: Super Mail Forward, an HTML email that evolves as you forward it medium.com
450 points by hteumeuleu   ago   63 comments top 12
ldom22 2 days ago 2 replies      
This is great for pranks: you send a serious-looking email to someone, and then they forward it to someone else thinking they sent some chart or whatever, but the next recipient instead sees another picture of your choosing.
shimon 2 days ago 7 replies      
TL;DR: A series of markup and styling tricks that exploit HTML interpretation quirks of various web email services can be used to intentionally vary message appearance between services. Coupled with forwarding, which further transforms the email using service-specific quirks, you can make a game where different paths of forwarding across services trigger different appearances.

Fun hack! I feel like there should be some clever practical applications but I'm drawing a blank.

yoavm 2 days ago 1 reply      
More than anything, this thing shows the sad situation of CSS support in different email clients.
whafro 2 days ago 0 replies      
One challenge many services face when sending emails is that you often want to log a user into their account if they've clicked in from an email; after all, if they have access to the email account, they can usually reset the password.

But the rub is always the propensity for users to forward on those same emails. If they do, then the second recipient gets control of the first recipient's account, and that's rarely the intention of the first recipient/forwarder.

I haven't had a chance to dive in enough, but I wonder if a technique like this could effectively swap tokenized links with generic links (even if you're just swapping 'display' rules) when a message is forwarded. You might have to use different message style/markup output depending on which service you're sending the message to, but my read of this article is that it's not a ridiculous thought.

mschuster91 2 days ago 1 reply      
Ironically, Lotus Notes webmail is the only client I have seen so far that actually uses iframes to display HTML emails. If webmail clients could just embed the HTML content into an iframe with the proper sandbox attributes... (nods off and dreams)
rosalinekarr 2 days ago 0 replies      
This would be great for an email marketing campaign. You could probably get a lot of people to refer their friends just for the opportunity to see some cool animation or graphics.
SatoshiRoberts 2 days ago 1 reply      
HTML email made me think of IE6 as the iPhone 6. The rendering engines on most clients are horrible.
__jal 2 days ago 0 replies      
This is a really interesting hack.

I love things that exploit the oddities of the landscape to do artsy/funky things; far more interesting than finding yet another way to trick someone into installing a password stealer.

kecks 2 days ago 1 reply      
This can leak the user's client by changing links per client.

Make a link per identifiable client, show only the one for the current client, and give each link a POST/GET parameter identifying the client. Quite easy to do, but a lot of work to get broad client support.

Tada! I now know you read your email on your [obscure and bugged client], which is susceptible to [this and that exploit].

Pranz 2 days ago 0 replies      
Wow, this was a very creative use of forwarding.
rorykoehler 2 days ago 0 replies      
Nice work here and also nice post about the Brave thing and internet business model on your site btw. I agree 100%.
alch- 2 days ago 0 replies      
That, sir, is totally awesome. Well played.
Why do people keep coming to this couple's home looking for lost phones? fusion.net
355 points by cremno   ago   252 comments top 35
mapgrep 2 days ago 10 replies      
My money is on a wifi SSID that matches the one used by thieves or a heavily-trafficked location the victims all pass through.

My company moved ~5 blocks and it really screwed up the map on my phone (which I use to get around the city) for several months. My company had kept the same network SSID it used at the old location, so that no one had to re-configure their wifi. Even with GPS on, my phone was always convinced it was in the old location up the block, and this would persist even when I was out on the street, until I walked around a bit.

There are companies (presumably Skyhook is one of them) who drive around mapping SSIDs to physical locations. The problem is that SSIDs can move or be duplicated elsewhere.

The article says of one of the couple "at one point he reset their router, and changed the frequency at which it broadcasts; it didnt solve the problem." It does not say if he changed the SSID.

Theoretically, location is often determined using not just one but several nearby SSIDs, a sort of triangulation. Another possibility here is that there are multiple nearby SSIDs around this home that match the SSIDs surrounding some other area tied to the victims.
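As a rough illustration of why a moved SSID drags the estimate with it: Wi-Fi positioning services compute something like a signal-strength-weighted centroid of the access points' recorded locations, so one stale, strongly heard AP dominates the result. The coordinates and weighting below are illustrative assumptions, not any vendor's actual algorithm:

```rust
// Simplified Wi-Fi positioning: estimate position as a centroid of known
// access-point locations, weighted by received signal strength. Each scan
// tuple is (lat, lon, rssi_dbm) for one heard access point.
fn estimate_position(scans: &[(f64, f64, i32)]) -> (f64, f64) {
    let (mut lat, mut lon, mut total) = (0.0, 0.0, 0.0);
    for &(ap_lat, ap_lon, rssi) in scans {
        // Stronger signal (less negative dBm) gets a larger weight.
        let w = 10f64.powf(rssi as f64 / 20.0);
        lat += ap_lat * w;
        lon += ap_lon * w;
        total += w;
    }
    (lat / total, lon / total)
}

fn main() {
    // Two APs with made-up Atlanta-area coordinates; the phone hears the
    // first far more strongly, so the estimate lands almost on top of it.
    // If that AP's recorded position is stale (the SSID moved), the whole
    // estimate is dragged to the old spot.
    let scans = [(33.7490, -84.3880, -40), (33.7550, -84.3900, -80)];
    let (lat, lon) = estimate_position(&scans);
    assert!((lat - 33.7490).abs() < 0.001);
    assert!((lon + 84.3880).abs() < 0.001);
}
```

Real services combine many APs, filter outliers, and blend in GPS and cell data, but the failure mode is the same: a stale database entry outweighs reality.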

sideproject 2 days ago 3 replies      
I had a similar experience about a month ago, when I thought I lost my iPhone 5s.

Logged onto "Find my iPhone" app and it told me it was about 1km away from my house. I thought I must've dropped it somewhere nearby.

So I got the address from Apple Maps, drove there, and knocked on the door, to be greeted by a rather defensive (obviously) lady who, of course, denied ever picking up an iPhone that day.

I snooped around to see if there were any suspicious people around, maybe she has a wayward son who goes around and steals other people's phones.

I then went to the police office nearby and asked them what I could do. They told me they can't use the GPS tracking as evidence for a search warrant - doh!

It was frustrating because the app was telling me that my phone was right there! At the back of this lady's house!

At this moment, I was going through all sorts of thoughts, such as "should I break into her house at night?" and "should I go back, barge into her house, locate my phone, and shout 'AH HA! I KNEW IT! YOU THIEF!'?"

Feeling dejected, I came home, only to find my phone sitting on the top of my drawer.

2 seconds ago, I swear I thought she was the thief.

Apple - you disappointed me.

patrickmay 2 days ago 7 replies      
I found this particularly disturbing:

> In June, the police came looking for a teenage girl whose parents reported her missing. The police made Lee and Saba sit outside for more than an hour while the police decided whether they should get a warrant to search the house for the girl's phone, and presumably, the girl. When Saba asked if he could go back inside to use the bathroom, the police wouldn't let him. "Your house is a crime scene and you two are persons of interest," the officer said, according to Saba.

The police shouldn't be able to detain someone for over an hour without probable cause and without arresting them.

Splines 2 days ago 1 reply      
My guess is that some IP-to-Geolocation service says nothing more specific than "Atlanta" and this couple happens to live in the geocenter of Atlanta.
IvyMike 2 days ago 1 reply      
It would be fascinating for someone to log into their Google Location History and see where the phone was right before it was at this house. It might provide a clue.


kazinator 2 days ago 4 replies      
What's sad is how the local police seem to have some sort of learning disability or amnesia about this problem. They showed up there repeatedly and harassed the occupants.
JonnieCache 2 days ago 5 replies      
Now this is interesting. Presumably the coordinates of their house are significant in some way. The result of some kind of truncation perhaps? I can't see how a floating-point error could converge on a specific value like this, but I'm no expert in such things. If they could only post their address the answer would surely be found very quickly, but that would defeat the object somewhat.

I suppose it's fortuitous that nobody lives at 0,0...

trunnell 2 days ago 0 replies      
How about GPS spoofing?

There's a claim that it's been done before [1]. Maybe a criminal organization has the resources to build a GPS spoofing device that's used in their "holding facility" before they root the phone?

Can anyone with RF or GPS experience guess the difficulty?

[1] https://en.wikipedia.org/wiki/Iran%E2%80%93U.S._RQ-170_incid...

fibbery 2 days ago 3 replies      
Crazy idea: install a heavy, wood engraved sign in front of their house that says "No, your phone isn't here".
adevine 2 days ago 0 replies      
If I were them, I would try a non-technical (or at least mitigation) strategy: put a sign in their yard or on their front door that says "Sorry, we don't have your iPhone" and a description of the problem and screengrabs/URLs of articles like this.
xlayn 2 days ago 2 replies      
Other options:

- The wireless network's name could be in a database, with the first match being picked up; that may not be fixable by changing the router name or IP address, since it's already recorded somewhere.
- Same for other routers or IP addresses in the neighborhood.
- Maybe it's not their fault but someone else did it on purpose, e.g. took a phone and manipulated it inside a room with stolen/faked/forged data somewhere else, wrapped in a metallic sphere to block signals so only the forged router could be used?
- Even crazier: put a router really close by and route it through a VPN/proxy?

cubano 2 days ago 2 replies      
You know, it's not completely out of the realm of the possible that these people are lying and really know much more than they are telling police.

I am not implying anything about these people, but I am just saying it isn't impossible.

I lived in Las Vegas for a couple of years and was involved with some people who, from the outside, seemed like very normal folk...in fact, in many ways, I was someone like that, too, due to issues I was fighting at the time.

We all have a different set of experiences in our lives, and, unfortunately for me I suppose, my experiences make me think about this in a different way than many here might.

etingel 2 days ago 0 replies      
The problem is likely that the location where these phones are really at is near the WiFi router that used to be at this address. No amount of messing with their router will help with fixing this, since those phones aren't there to begin with. The couple might have better luck hitting up the previous tenants/owners.

Based on this line from the article, I'm almost certain that this is the real answer:

> It started the first month that Christina Lee and Michael Saba started living together.

dghughes 2 days ago 0 replies      
A teen in Canada tracked his lost cellphone to a home and was killed trying to get it back.


bloaf 2 days ago 2 replies      
I wonder if the couple could make a profit on this mishap by suing anyone who accuses them of stealing phones (i.e. on social media) after visiting their house.
qume 2 days ago 0 replies      
If this community can't pinpoint the problem together, then there is something up here. Clearly this warrants Google, Apple, the telcos, and someone from an electronic forensics team each putting a part-time expert into a team to figure this out for everyone's benefit - themselves and this couple.

Of course the reality is that key Google and Apple staff know exactly what has caused this and don't have a ready solution so are keeping quiet.

In any case, if there are Google or Apple employees reading, perhaps you can suggest this idea to someone internally on the chance there may be some progress before someone innocent gets killed for 'stealing' a phone.

DigitalSea 1 day ago 1 reply      
I know it is an inconvenience, but couldn't they just maybe get a new router and see if that fixes the problem? Surely a cheap router is better than getting your house ransacked by police?
TuringTest 1 day ago 0 replies      
There is an obvious and simple solution to this problem; although it's not up to this couple to implement it, but to all the developers building or using geo-location.

This is a problem caused by incorrect data representation. Everything the companies know about the location of the phones is an imprecise area, yet they are representing it with this: [1]

This absurdly precise representation doesn't convey the error margins of the information available, and it's what's convincing people that this couple's home is the point that they're looking for. The mapping companies are misleading their users by hiding the level of confidence about the information provided.

Please, all front-end developers: don't use a map pin to represent a place on the map if you don't know its coordinates or exact address. A circle with a radius proportional to the uncertainty area is the best representation in that case.

[1] https://www.google.com/search?q=icon+map+pin&biw=790&bih=750...
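To make the suggestion concrete, here is a rough sketch of that rule in application code. The `representFix` helper and the 25-meter cutoff are invented for illustration; the field names follow the W3C Geolocation API convention of an `accuracy` value in meters.

```javascript
// Hypothetical sketch: carry a location fix together with its uncertainty,
// and pick a map representation based on how precise the fix actually is.
function representFix({ latitude, longitude, accuracy }) {
  // A pin implies "this exact spot"; only use it when the error is small.
  if (accuracy <= 25) {
    return { type: 'pin', latitude, longitude };
  }
  // Otherwise show a circle whose radius conveys the error margin.
  return { type: 'circle', latitude, longitude, radius: accuracy };
}

// A city-level IP geolocation fix should never render as a pin...
const ipFix = representFix({ latitude: 33.75, longitude: -84.39, accuracy: 5000 });
// ...while a strong GPS fix can.
const gpsFix = representFix({ latitude: 33.75, longitude: -84.39, accuracy: 8 });
```

With something like this in the rendering layer, the "Find My Phone" dot over this couple's house would have been a five-kilometer circle instead.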

jbb555 1 day ago 0 replies      
I looked on https://maps.google.com/locationhistory at what google think my location is.

Usually it's pretty accurate. But several times a year there are major anomalies. For example, showing me traveling the 4 miles to work in London via a quick journey to Oslo.

It's pretty reliable but not 100%

Swannie 2 days ago 0 replies      
What's shocking is the lack of a clear explanation by the so-called experts contacted in this article.

My explanation as to what is likely happening: "Every WiFi router has a special unique number - think of it as a serial number - baked into the device by the manufacturer (known as a MAC address). Manufacturers request a range of these unique numbers from the IEEE, and are never meant to duplicate them. When you connect to a WiFi router, you connect to its friendly name (SSID), but your phone also receives a part of this special unique number (BSSID address) [1].

Companies like Apple, Google, and SkyHook, record the location of WiFi routers using this unique number. When a phone or other device has a strong GPS location and a strong WiFi signal, they can fairly reliably assume that this unique number is at this specific location.

However, not all manufacturers strictly follow the unique number allocation rule, as getting allocations can be a time consuming process. 999 times out of 1000, reuse of these numbers is not a big issue, and goes undetected. In this case, it is likely that the thieves are using, or are located near, a WiFi router with the same unique number as this couple. Changing this special unique number is sometimes possible on expensive enterprise grade WiFi routers by knowledgeable experts, but not possible or advisable on home routers. The couple should change their WiFi router."

Yes, I have conflated a number of terms there for simplicity. For technical accuracy: WiFi router -> access point.

Edit: added [1] https://arubanetworkskb.secure.force.com/pkb/articles/FAQ/Ho...

As the BSSID is not identical to the MAC address.
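For what it's worth, the failure mode described above fits in a few lines of code. The BSSIDs, coordinates, and helper names here are all made up for illustration:

```javascript
// A provider's BSSID-to-location database, keyed on a number that is
// assumed to be globally unique but occasionally isn't.
const bssidDb = new Map();

function recordSighting(bssid, lat, lng) {
  // Providers record a router's position when a device sees it alongside a
  // strong GPS fix. A later sighting simply overwrites the earlier one.
  bssidDb.set(bssid, { lat, lng });
}

function locate(bssid) {
  return bssidDb.get(bssid);
}

// The couple's router gets mapped when some phone sees it together with a
// good GPS fix:
recordSighting('aa:bb:cc:00:00:01', 33.75, -84.39);

// Elsewhere, a cheaply made router reuses the same "unique" number. A phone
// sitting next to that duplicate looks the number up and is told it is at
// the couple's house, wherever it actually is:
const fix = locate('aa:bb:cc:00:00:01');
```

Nothing the couple does to their own router changes what the database already believes about that number.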

incompatible 2 days ago 1 reply      
"Avasts software kept tracking the phones after the other programs were cleared with a factory reset"

Does anyone know how that works?

tarikjn 2 days ago 0 replies      
Changing their router or even turning it off won't necessarily help.

Here is a way to reproduce this issue and explain my point: if you are a thief, you could set up a GPS spoofer pointing to that house, or have had your router in that house in the past, so that some phones registered/verified its MAC address to be at the house location. Now assume the thieves live in a location where they took this router with them and where there is no GPS signal, other router, or cell signal - only the thieves' router turned on. Now as soon as the thieves connect the stolen phones to their router, the phones will report being at the house.

My bet is that this is likely an intentional attack by the thieves and that they are aware of what they are doing. There is a small chance they could have been people living in the house before or drove by to setup their spoof as it would have been much easier than getting their hands on a GPS spoofer.

wahsd 2 days ago 0 replies      
This is just a hunch, but does anyone know whether there is any connection / shared services between the phone finding system and the iMessage airline flight tracking system?

It may just all be coincidence, but that flight tracking feature is so wonky and jacked, giving false locations, legs, flights, and information on the regular. I am surprised it hasn't caused a massive outcry for just how horrible it is. It kind of makes me wonder whether there is some shared service or database or something because the flight lookup feature just smells of the same kind of failure.

I realize, most people don't know/recall that iMessage will auto-link flight numbers. Just message the full flight number.

akavel 2 days ago 0 replies      
Given that the people coming to them got their info from somewhere, there might be a chance they could succeed by asking those visitors where exactly they got the data from, and then trying to contact or file a complaint with that specific service/company/...
mangeletti 1 day ago 0 replies      
I didn't finish the entire article, but my immediate thought was that this has something to do with these new phone drop spots that I've seen at grocery stores.

You apparently can just put a phone in one of these ATM-like machines and get money out, which immediately struck me as a clever way to buy stolen phones on the cheap from criminals, with indemnity... which would definitely lead to situations like this when those stolen phones are resold to unsuspecting consumers.

mdip 2 days ago 1 reply      
Based on my experience with a few phones I own, there are a few things that could be happening here:

It was mentioned the "SSID"/MAC address problem. It's possible that they have a home router with its default SSID and are encountering a MAC address collision (assuming MAC address is always taken into account, which I'm not sure that it is). Their router is likely part of some database that the GPS uses when the phones enter an area with WiFi but no cellular service or line of sight to the satellites. I had a similar failure every time I went indoors to an archery facility I visited weekly for three months. Both my wife's and my phone would think we were a clear 30 miles away in another city the second we got far enough into the building to lose cellular service. I dug into it and discovered it was using WiFi APs to get location. I think the archery place has another location in that other spot, so it's possible they swapped WiFi gear at some point, but it's anyone's guess.

Another possibility, hinted at in the article, is that there's no other location data available to the stolen phone (no mapped WiFi, no cellular service) but it has an IP address, so the devices are falling back to Geo IP, which is extremely inaccurate (my IP address changed recently and I am now a Canadian according to location services on my PCs with no GPS capabilities -- 200 miles off). It could be a circumstance of "that IP isn't known, but that block is owned by X ISP and here's a general location of where that is" ... only the little dot happens to land on their house.

It would be really smart for apps that track location for theft purposes to keep a reasonable history. If it's a mobile phone, the last known high-accuracy reading from the GPS should be presented along with lower accuracy results to help in situations like this. I'd imagine it wouldn't be terribly difficult to correlate several readings over a period of time and discard ones that are clearly not sane (as would have been the case with my phone in the archery place). A bonus would be to perform other actions when the device is marked "stolen", like take photos at certain intervals and upload them to the cloud to make it easier to "prove" your phone is in the hands of someone it shouldn't be (one of the tools I had did something like this).
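A sanity filter like the one suggested above could be sketched along these lines. The 350 km/h cutoff and all helper names are arbitrary choices for illustration:

```javascript
// Walk a location history in time order and drop any reading that would
// require an impossible speed to reach from the last accepted reading.
// Distances use the haversine formula.
const EARTH_RADIUS_KM = 6371;

function distanceKm(a, b) {
  const rad = (d) => (d * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLng = rad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(h));
}

function dropInsaneReadings(readings, maxKmh = 350) {
  const accepted = [];
  for (const r of readings) {
    const last = accepted[accepted.length - 1];
    if (last) {
      const hours = (r.t - last.t) / 3600000; // t is in milliseconds
      if (hours > 0 && distanceKm(last, r) / hours > maxKmh) continue;
    }
    accepted.push(r);
  }
  return accepted;
}

// Three readings a minute apart; the middle one jumps ~30 miles away,
// archery-range style, and gets discarded.
const history = [
  { lat: 42.33, lng: -83.04, t: 0 },
  { lat: 42.60, lng: -83.50, t: 60000 },
  { lat: 42.33, lng: -83.05, t: 120000 },
];
const clean = dropInsaneReadings(history);
```

A real implementation would also want to weigh each reading's reported accuracy, but even this crude version would have caught the glitch described above.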

ck2 2 days ago 0 replies      
The only way they are going to get this fixed is to sue the company making the locating software.

Kicking in their door is the least of it - some cop is going to shoot them when they scream or react to the invasion.

plasticchris 2 days ago 0 replies      
I had something similar happen when I was in college visiting a friend: the police showed up asking who dialed 911, if she was alone, etc. This was before SSID-based geolocation became popular. I had to spend some time explaining how inaccurate cell tower positioning is; most people just assume that if the cops say it came from inside the house, it must have.
jrochkind1 2 days ago 1 reply      
> Lekei said by email that the couple's router could be causing the problem; if misconfigured, it could be broadcasting that it's at a different location than it actually is

Wait, what? Is my router broadcasting a location to someone? What technology is this, and how do I make sure my router isn't doing it?

ucho 1 day ago 0 replies      
Ignoring everything else - if it's the phone that went missing, shouldn't the location at least be accurate to the nearest base station? In the case of a missing child, the operator would provide triangulation results, right?
kelvin0 2 days ago 0 replies      
Once this is resolved, we might temporarily have a way to fool the 'illegal' surveillance gear used by law enforcement? Unless they use some completely different scheme to snoop on cell phone users?
tlrobinson 2 days ago 0 replies      
I wonder how easy it would be to spoof the location of a device with WiFi gear that can broadcast multiple arbitrary BSSIDs?
such_a_casual 2 days ago 0 replies      
$1 says it's some null value.

This is a good premise to an "off-by-one" parable where it turns out the neighbors are phone thieves.

strathmeyer 2 days ago 1 reply      
Why do they keep answering the door?
ChrisClark 2 days ago 1 reply      
Yes, let's bully someone because of their appearance.
ES6 Cheatsheet github.com
314 points by DrkSephy   ago   78 comments top 20
krisdol 1 day ago 3 replies      
Wow, var was so broken.

Anyway, we use as much ES6 as Node 4 allows at work. Transpiling on the server never made much sense to me. I also used to sprinkle the fat-arrow syntax everywhere just because it looked nicer than anonymous functions, until I realized it prevented V8 from doing optimization, so I went back to function until that's sorted out (I don't like writing code that refers to `this` and never require binding, so while the syntax of => is concise, it is rarely used as a Function.bind replacement). Pretty much went through the same experience with template strings. Generator functions are great.
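For anyone who hasn't internalized the => vs Function.bind point, a minimal invented sketch:

```javascript
// An arrow function closes over the enclosing `this`, so it can stand in
// for `function () {}.bind(this)` - but not for a plain anonymous function,
// whose `this` depends on how it is called.
const counter = {
  count: 0,
  incrementLater() {
    // Arrow: `this` is the enclosing counter object.
    const arrow = () => { this.count += 1; };
    // Plain function: `this` would be the global object (or undefined in
    // strict mode), so it needs an explicit bind to behave the same way.
    const bound = function () { this.count += 1; }.bind(this);
    arrow();
    bound();
  },
};

counter.incrementLater();
```

Which is why, as said above, => is really a bind replacement rather than a shorthand for every anonymous function.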

I'm not a fan of the class keyword either, but to each their own. I think it obscures understanding of modules and prototypes just so that ex-Class-based OOP programmers can feel comfortable in JS, and I fear the quagmire of excessive inheritance and class extension that will follow with their code.

pcwalton 1 day ago 1 reply      
> Unlike var, let and const statements are not hoisted to the top of their enclosing scope.

No, let is hoisted to the top of the enclosing scope [1] ("temporal dead zone" notwithstanding). let, however, is not hoisted to the top of the enclosing function.

[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
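A small demonstration of the distinction; the shadowing here is what shows the inner binding really is hoisted to the top of its block:

```javascript
// If `let` were simply "not hoisted", the read inside the block would see
// the outer x. Instead the inner binding already shadows it, and touching
// it before its declaration line throws a ReferenceError - the "temporal
// dead zone" mentioned above.
let outerVisible;
let x = 'outer';
{
  try {
    outerVisible = x;
  } catch (e) {
    outerVisible = e instanceof ReferenceError ? 'TDZ' : 'other';
  }
  let x = 'inner';
}
```

With `var` in place of the inner `let`, the read would have quietly produced `undefined` instead of throwing.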

Raphmedia 1 day ago 0 replies      
I would recommend taking a look at this page for a bigger "cheatsheet": https://github.com/lukehoban/es6features#readme
abustamam 21 hours ago 0 replies      
I love how concise this is and how it handles a lot of "gotchas" when working with ES6, but can we call a spade a spade and NOT call this a "cheatsheet"?

I always imagine cheatsheets to be just that; something I can render on one sheet of paper. Printing the entire raw README text would take 4 pages (2 sheets, front and back).

I think it would be better titled, "ES6 best practices" since I think that's a more accurate description of what it is.

jcoffland 1 day ago 1 reply      
Great reference and overview of ES6.

One minor quibble. I was bothered by the misuse of the words "lexical" and "interpolate". The lexical value of the keyword "this" is the string "this". And you might translate between two technologies such as CommonJS and ES6, but interpolating between them implies filling in missing data by averaging known values. Granted, this word is commonly abused. Sorry, this is a bit pedantic, but these corrections would improve the document, IMO.

deckar01 1 day ago 1 reply      
Is "WeakMap" really the suggested way to implement private class properties?

Using "this" as a key into a Map of private variables looks bizarre. I would rather keep my code concise than create a layer of obfuscation.

edem 13 hours ago 0 replies      
[This](https://ponyfoo.com/articles/es6) is also a very informative guide of ES6. I highly recommend perusing it.
banku_brougham 1 day ago 0 replies      
Much more than a cheat sheet, this is a revealing window into js development. Helpful!
TheAceOfHearts 1 day ago 0 replies      
This cheatsheet is wrong about ES2015 modules. They don't define how module loading works, that's still being worked on [0]. ES2015 just defined the syntax.

[0] https://github.com/whatwg/loader

shogun21 1 day ago 1 reply      
Two questions: what happens if you use ES6 standards in a browser that does not support it?

And would it be wise to hold off adopting until all browsers support it?

joshontheweb 1 day ago 1 reply      
Is there a resource that tells me which of these features are available in the latest stable Node version?
s84 12 hours ago 0 replies      
Didn't realize arrow functions preserve this! Now using arrow functions.
kclay 1 day ago 0 replies      
This will come in handy, thanks.
overcast 1 day ago 0 replies      
String interpolation, classes, promises, and parameter stuffs. A tear rolls down my cheek.
igravious 1 day ago 4 replies      
The only thing from this list of new ES6 idioms that doesn't sit comfortably with me is the short-hand for creating classes. I remember being kind of blown away way back in the day with the prototypical/functional nature of Javascript and how you could wrangle something into being that behaved in an object-oriented manner just like other languages that had explicit class declaration and object instantiation.

Part of me feels that obscuring Javascript's roots in this respect is very un-Javascript-y. What think ye?

Coming from Ruby, loving template literals, feel right at home with them, I wish even C could have them (if that makes any sense!).
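On the worry about obscuring JavaScript's roots: one way to see that `class` hasn't displaced the prototype machinery is that these two definitions produce essentially the same object graph (made-up example):

```javascript
// Old-style constructor function plus prototype method...
function ProtoPoint(x, y) {
  this.x = x;
  this.y = y;
}
ProtoPoint.prototype.norm = function () {
  return Math.hypot(this.x, this.y);
};

// ...and the ES6 class equivalent.
class ClassPoint {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
  norm() {
    return Math.hypot(this.x, this.y);
  }
}

// Both put `norm` on the prototype, not on instances:
const a = new ProtoPoint(3, 4);
const b = new ClassPoint(3, 4);
const sameShape =
  !Object.prototype.hasOwnProperty.call(b, 'norm') &&
  Object.getPrototypeOf(b) === ClassPoint.prototype;
```

So the class keyword is (mostly) sugar; whether hiding that is a feature or a bug is the debate above.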

lukasm 1 day ago 1 reply      
Is there a similar thing for coffescript?
jbeja 1 day ago 0 replies      
Who is in charge of ES6 design? Is awful.
z3t4 1 day ago 1 reply      
"Require" is the reason why we now have a module for just about anything in Node.JS. I even think Kevin Dangoor or whoever invented it should get the Nobel prize. But then the ES committee choose to use paradigms from year 1970. I cry every time someone use import instead of require in JS because they miss out why globals are bad, modularity is good, and the JS API philosophy (super simple objects with (prototype) methods).
mercer 1 day ago 9 replies      
I'm all in on ES6 when it's practical or allowed. Arrow functions are wonderful, I love destructuring assignment, const and let, and considering that some projects I work on involve a lot of async stuff, I'm close to just giving in and using ES7's async/await functionality.

But most of the time this is in the context of Node.js development, and in every case I use Babel.js to turn the end result into ES5 code.

I'm perfectly comfortable with using ES5, because as a freelancer/contractor I often have to do so. But I really miss the ES6 stuff and the more I use it, the more time it takes me to 'switch' to a mindset where I'm only allowed to use ES5 functionality.

Nonetheless, it strikes me as really odd to actively prefer ES5. Having worked with Ruby and Python (among others), ES5 feels limiting for no good reason. The only rationale I can think of for prefering ES5 is nostalgia.

Could you elaborate why you don't like the 'perl/python' style changes? Because I truly do not understand why one would choose to limit oneself to things like .bind(this) instead of the different forms of arrow functions that make functional-like programming so much easier. And I've found that the best part of JS is that it's decently functional.

Edit: I would agree when it comes to the new 'class' keyword though. I'm not a fan of that.
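Since destructuring came up, a quick sketch of the forms I reach for most (made-up data):

```javascript
// Object pattern with a default value and a nested pick:
const config = { host: 'localhost', port: 8080, opts: { tls: false } };
const { host, port = 80, opts: { tls } } = config;

// Array pattern with a rest element:
const [first, ...rest] = [1, 2, 3, 4];
```

Small things individually, but they add up to far less `var x = obj.x;` boilerplate than ES5 allows.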

sectofloater 1 day ago 0 replies      
This will likely get downvoted - but I have just realized how much I was underestimating the privilege of developing apps in Dart instead of JavaScript. Dart had none of the mentioned idiosyncrasies from day one, all the features, and has a lot of other stuff (like async/await, yield, mixins, etc) to offer. Its tooling is very simple and powerful, and the overall experience is really nice - when there is a problem, it's always in the logic of my code, and not things like some weird implicit conversions that are so common in JS land. I almost forgot how terrible JS is...
AWS Certificate Manager: Deploy SSL/TLS-Based Apps on AWS amazon.com
343 points by _alex_   ago   140 comments top 25
arcdigital 2 days ago 2 replies      
Just in case people haven't figured it out yet - ACM issues free wildcard certs :)!



ttcbj 2 days ago 5 replies      
Just a few days ago I spent $200 to purchase a multi-domain wildcard certificate so that I could host multiple secure domains, with multiple subdomains, on a single elastic beanstalk app. It was such a headache to figure out that I needed the multi-domain wildcard cert, then to find one to purchase for a reasonable price.

Now, 5 days later, AWS lets me create one for free in 3 minutes, with zero hassles. I cannot select it in beanstalk yet, but I am sure that will come. I am consistently amazed by how frequently AWS satisfies needs I barely knew (or didn't know) I had.

ubergeek42 2 days ago 6 replies      
Can anyone think of any advantages LetsEncrypt would provide over this offering from AWS? Or does this basically kill LetsEncrypt's usage on AWS?

The only thing I can think of is that AWS Certificate Manager only validates by email addresses which can be problematic if you don't have MX records or don't have control over it(Maybe a large organization where the people who do control those email addresses won't click simple verification links)

It seems a bit inconsistent as to when it will use the email on the whois record for the validation too. For some subdomains I try it will allow validation using the whois address; other times it's just the common aliases@sub.domain.com (which requires an MX record). So I guess if you're nesting deeper than one subdomain (e.g. abc.def.example.com) then maybe it'd be easier to get letsencrypt set up than to try to get MX records for abc.def.example.com.

Shameless Plug/Disclaimer: I had been working on a tool to make it dead simple to use Lets-Encrypt certificates for CloudFront/ELBs and handled autorenewal via Lambda. I'm not sure there is any use for this now that this exists though.


nik736 2 days ago 0 replies      
"Even better, you can do all of this at no extra cost. SSL/TLS certificates provisioned through AWS Certificate Manager are free!"


mbesto 2 days ago 4 replies      
ademarre 2 days ago 2 replies      
Are these certificates exclusively for ELB and CloudFront? Does anyone see a way to download a certificate for manual installation on a server (EC2 or otherwise)?


> Currently, ACM Certificates are associated with Elastic Load Balancing load balancers or Amazon CloudFront distributions. Although you install your website on an Amazon EC2 instance, you do not deploy an ACM Certificate there. Instead, deploy the ACM Certificate on your Elastic Load Balancing load balancer or on your CloudFront distribution.

Ideally ACM certificate issuance and deployment would be two separate things, and this would be a general-purpose CA, which just happens to have integrated deployment tools for ELB and CloudFront.

supersan 2 days ago 0 replies      
Wow that was super easy. I tried this on one of my sites and it really took me like 2 minutes total to add SSL to it.

The only confusing part was that port 443 was blocked in ELB by default (which made it look like it didn't work but got fixed easily as soon as I figured it out). I've never seen an easier way to do this till date.

iancarroll 2 days ago 2 replies      
It looks like this is free (the pricing page isn't up yet), as I was able to issue a certificate and not have any charges show immediately.
lukeadams 2 days ago 1 reply      
Looks like Certificate Manager is only available in the US-East region thus far.
michaelZejoop 2 days ago 1 reply      
Buying an SSL cert through Bluehost (where my domain is registered and blog hosted) and figuring out how to apply it to my web-app, zejoop.com, hosted on AWS was far and away the most annoying and difficult chore in my development/deployment process as a relatively junior SW developer. If I could solve it all in-house within AWS (at reasonable cost) I'd be very happy. My cert just renewed, so until I roll the change over to AWS my https:// is down. If the update is as difficult for me as the original install was, then I guess it will be about 18 hours of aggravation. So I'll look into this, if the OP title is a reality.
philip1209 2 days ago 0 replies      
This is great for microservices where managing lots of SSL certs can be a pain.
cagenut 2 days ago 2 replies      
It's not immediately obvious, clicking around: who's the CA?
arcdigital 2 days ago 3 replies      
If you're trying to enable ACM on beanstalk, I made a guide on how to do it: https://medium.com/@arcdigital/enabling-ssl-via-aws-certific...
nodesocket 2 days ago 1 reply      
This is great, but I'm willing to pay for SSL certificates that are managed inside of AWS, just like domains are purchased and managed in Route53.

As long as AWS provides an API to provision certificates, that would be awesome. I use Nginx, and need access to the private key and cert.

ajsharp 2 days ago 0 replies      
Sucks to be in the cert market today. This is great news for everyone else though!
dankohn1 2 days ago 1 reply      
This is fantastic news. Now, let's see Heroku use either this or Let's Encrypt and eliminate their onerous $20 per host SSL fee, which is making the Internet less secure.
kujjwal 2 days ago 0 replies      
Currently it's only supported in US East (N. Virginia). Is there any way I can use it for apps deployed in a different geographical location?
derFunk 1 day ago 1 reply      
How will Amazon's new root certificate be spread to all browsers and mobile devices, so that it's trusted on every possible endpoint? Is the root certificate cross-signed with another, already-trusted cert?
jsnk 2 days ago 6 replies      
Given that Let's Encrypt is free, is there any reason why someone would use paid service for SSL certificates?
fragsworth 2 days ago 1 reply      
It's about time.
base 2 days ago 2 replies      
Wrong title. This is a Certificate Manager.
jwaldrip 1 day ago 0 replies      
Implemented and deployed! Now just need Terraform support to bring it full circle.
ajeet_dhaliwal 2 days ago 0 replies      
Wow, this is awesome for my needs.
dang 2 days ago 0 replies      
Url changed from https://docs.aws.amazon.com/acm/latest/userguide/acm-overvie..., which seems to be the main announcement.
baby 2 days ago 2 replies      
You mean TLS certificates.
Advice for companies with less than 1 year of runway themacro.com
281 points by loyalelectron   ago   94 comments top 13
danieltillett 1 day ago 3 replies      
There is a third way, which is to put the company into hibernation. I was faced with this with my startup a bit more than 10 years ago now. I ran out of runway so I laid everyone off, paid the bills, and got a job. I then bought out everyone else, worked part time on the business, and built it back up over the next few years to the point where I could return full time. I could have started a new business, but I believed that there was a lot of value in the old business [1], which proved to be correct.

1. Some caveats here. Firstly, I did not have many people to buy out and they were willing to sell at a reasonable price. Secondly, my business is in biotech/bioinformatics and we had put a lot of resources into R&D. This R&D had real value that could be used to bring the business back to life.

Animats 1 day ago 6 replies      
The alternative is becoming profitable as soon as possible. Then you get run over by someone with better access to capital who can absorb losses. Example: Sidecar.[1]

[1] http://venturebeat.com/2016/01/20/sidecar-we-failed-because-...

nunobrito 1 day ago 7 replies      
Was laughing when reading "12 months of runway" as "low runway". My bootstrapped company has been "low runway" by default for the past two years. Every. Single. Month.

Funny how your mind gets busier to work and build revenue in Europe without a comfortable cushion like SF guys seem to have.

AndrewKemendo 1 day ago 2 replies      
I think this article needs to be paired with an article about "When to shut your company down." If I recall, that article exists and it basically says: when you lose hope.

Maybe I am not looking at this right but this part doesn't make sense to me:

In many cases, <2 months is the point of no return. If you are in this state it is immediately necessary to lay off your employees and give them severance, pay down your obligations, and use your remaining cash for shutdown costs.

So is that for companies that had a year+ of runway at some point and are now down to 2 months? What about companies that never had 1 year of runway? The differences between those are pretty big.

For example if you have a 4 person startup and 2 months of runway after being on the market for only 4-6 months, you are supposed to just shut it down?

No, you take consulting jobs and do side work till you can get higher revenue or some financing.

I think, like most startup articles, this applies to companies who have already gotten past seed stage, initial traction and thus is not applicable for 90% of us.

gjmulhol 1 day ago 3 replies      
My friends, we are finally hitting the new economy where even startups are being asked to make money - maybe not to the point of profitability, but even a little revenue can make a big difference in a lean organization.
josh_carterPDX 1 day ago 0 replies      
I've never understood the psychology of those that do not fundamentally get this. If you just finished raising a seed or angel round, chances are you had less than 12 months of runway to begin with. Perhaps your personal savings was drying up or you were running out of friends and family resources that could help you run this out further. The sense of urgency and anxiety you felt while raising your seed round doesn't go away simply because you were able to raise some money. If anything, it would increase. So the fact that someone had to specifically cover this in a blog post seems really counterintuitive to me.
rl3 1 day ago 1 reply      
>The Fatal Pinch does not apply to me

Except, sometimes it doesn't? If you look at the notes[0] at the bottom of The Fatal Pinch:

>There are a handful of companies that can't reasonably expect to make money for the first year or two, because what they're building takes so long. For these companies substitute "progress" for "revenue growth." You're not one of these companies unless your initial investors agreed in advance that you were. And frankly even these companies wish they weren't, because the illiquidity of "progress" puts them at the mercy of investors.

What do you do if you're one of those companies? There's plenty of business models that could be attractive acquisition targets (read: billions), but otherwise can't monetize to save their souls.

Two pieces of advice often encountered (paraphrasing):

"Treat each funding round as if it's your last."

"VC money is like rocket fuel. It's intended to be burned at a high rate."

I imagine reconciling both is difficult at best.

[0] http://paulgraham.com/pinch.html

JarvisSong 1 day ago 0 replies      
Great analysis. I would add another chart -- likelihood of more investment / buyout -- both decrease as you get closer to zero runway.
bshimmin 1 day ago 3 replies      
I don't wish to be overly mean or uncharitable, but I don't really think anyone who is unable to figure out the advice offered in the section titled "Some tips on reducing burn" all by themselves is ever going to be able to run a successful business.
larakerns 1 day ago 2 replies      
It's discouraging when new employees expect a certain lifestyle on joining your startup but your runway is less than a year. Startups have been portrayed as having so many perks that there's an impossibly high standard to strive toward.
Kiro 19 hours ago 0 replies      
> In especially messy scenarios you can end up with personal liability.

When can this happen?

aagha 1 day ago 1 reply      
I love how investors are always willing and wanting to show examples of how they have the upper hand and the entrepreneur has the lower one (chart in the article).
lotso 1 day ago 2 replies      
Could someone give a brief bio of Dalton Caldwell? I know he created Svbtle, but don't know much else about him.
Unikernels are unfit for production joyent.com
353 points by anujbahuguna   ago   291 comments top 42
nkurz 1 day ago 7 replies      
Bryan may certainly be right (I neither know him nor much about unikernels), but some parts of his argument seem incredibly weak.

 The primary reason to implement functionality in the operating system kernel is for performance...
OK, this seems like a promising start. Proponents say that unikernels offer better performance, and presumably he's going to demonstrate that in practice they have not yet managed to do so, and offer evidence that indicates they never will.

 But its not worth dwelling on performance too much; lets just say that the performance arguments to be made in favor of unikernels have some well-grounded counter-arguments and move on.
"Let's just say"? You start by saying the that the "primary reason" for unikernels is performance, and finish the same paragraph with "its not worth dwelling on performance"? And this is because there are "well-grounded counter-arguments" that they cannot perform well?

No, either they are faster, or they are not. If someone has benchmarks showing they are faster, then I don't care about your counter-argument, because it must be wrong. If you believe there are no benchmarks showing unikernels to be faster, then make a falsifiable claim rather than claiming we should "move on".

Are they faster? I don't know, but there are papers out there with titles like "A Performance Evaluation of Unikernels" with conclusions like "OSv significantly exceeded the performance of Linux in every category" and "[Mirage OS's] DNS server was significantly higher than both Linux and OSv". http://media.taricorp.net/performance-evaluation-unikernels....

I would find the argument against unikernels to be more convincing if it addressed the benchmarks that do exist (even if they are flawed) rather than claiming that there is no need for benchmarks because theory precludes positive results.

Edit: I don't mean to be too harsh here. I'm bothered by the style of argument, but the article can still be valuable even if just as expert opinion. Writing is hard, finding flaws is easy, and having an article to focus the discussion is better than not having an article at all.

vezzy-fnord 1 day ago 3 replies      
Bryan Cantrill seems to have some personal interest in denigrating OS research (defined as virtually everything post-Unix) as all being part of a misguided "anti-Unix Dark Ages of Operating Systems". He has expressed this sentiment multiple times before, and places a great deal of faith in Unix being a timeless edifice which needs only renovation. Naturally, he regards DTrace and MDB as the pinnacles of OS design of the past 20 years and never stops yapping on about them, this article being no exception. It's his thought-terminating cliche.

He voiced all this here [1], and so I countered by listing stuck paradigms in traditional monolithic Unixes, as well as reopening my inquiry on Sun's Spring research system, which he seems to scoff at, but over which I am impressed by the academic research it yielded. He has yet to respond to my challenge.

[1] https://news.ycombinator.com/item?id=10324211

chubot 1 day ago 4 replies      
Big upvotes for this article. I'm glad it was written, because I've seen nothing but hype for Unikernels on Hacker News (and in ACM, etc.) for the last 2 years. It's great to see the other side of the story.

The biggest problem with Unikernels like Mirage is the single language constraint (mentioned in the article). I actually love OCaml, but it's only suitable for very specific things... e.g. I need to run linear algebra in production. I'm not going to rewrite everything in OCaml. That's a nonstarter.

And I entirely agree with the point that unikernel simplicity is mostly a result of their immaturity. A kernel like seL4 is also simple, because like unikernels, it doesn't have that many features.

If you want secure foundations, something like seL4 might be better to start from than Unikernels. We should be looking at the fundamental architectural characteristics, which I think this post does a great job on.

It seems to me that unikernels are fundamentally MORE complex than containers with the Linux kernel. Because you can't run Xen by itself -- you run Xen along with Linux for its drivers.

The only thing I disagree with in the article is debugging vs. restarting. In the old model, where you have a sys admin per box, yes you might want to log in and manually tweak things. In big distributed systems, code should be designed to be restarted (i.e. prefer statelessness). That is your first line of defense, and a very effective one.

Hoff 1 day ago 1 reply      
Interesting article. Rather than arguing what can or cannot be done or what might or might not work, here's some code, and some history.

Here's a fully mixed-language-programmable, locally- and fully-remote-debuggable, mixed-user- and inner-mode processing unikernel, with various other features...

This from 1986...


FWIW, here's a unikernel thin client EWS application that can be downloaded into what was then an older system, to make it more useful for then-current X11 applications...

From 1992...


Anybody that wants to play and still has a compatible VAX or that wants to try the VCB01/QVSS graphics support in some versions of the (free) SIMH VAX emulator, the VAX EWS code is now available here:


To get an OpenVMS system going to host all this, HPE has free OpenVMS hobbyist licenses and download images (VAX, Alpha, Itanium) available via registration at:


Yes, this stuff was used in production, too.

ChuckMcM 1 day ago 4 replies      
Well that is pretty provocative :-) Bryan might be surprised to learn that for the first 15 years of their existence, NetApp filers were unikernels in production. And they outperformed NFS servers hosted on OSes quite handily throughout that entire time :-).

The trick though is they did only one thing (network attached storage) and they did it very well. That same technique works well for a variety of network protocols (DNS, SMTP, etc.). But you can do that badly too. We had an orientation session at NetApp for new employees which helped them understand the difference between a computer and an appliance: the latter had a computer inside of it but wasn't programmable.

derefr 1 day ago 5 replies      
> Unikernels are entirely undebuggable

I'm pretty sure you debug an Erlang-on-Xen node in the same way you debug a regular Erlang node. You use the (excellent) Erlang tooling to connect to it, and interrogate it/trace it/profile it/observe it/etc. The Erlang runtime is an OS, in every sense of the word; running Erlang on Linux is truly just redundant, since you've already got all the OS you need. That's what justifies making an Erlang app a unikernel.

But that's an argument coming from the perspective of someone tasked with maintaining persistent long-running instances. When you're in that sort of situation, you need the sort of things an OS provides. And that's actually rather rare.

The true "good fit" use-case of Unikernels is in immutable infrastructure. You don't debug a unikernel, mostly; you just kill and replace it (you "let it crash", in Erlang terms.) Unikernels are a formalization of the (already prevalent) use-case where you launch some ephemeral VMs or containers as a static, mostly-internally-stateless "release slug" of your application tier, and then roll out an upgrade by starting up new "slugs" and terminating old ones. You can't really "debug" those (except via instrumentation compiled into your app, ala NewRelic.) They're black boxes. A unikernel just statically links the whole black box together.

Keep in mind, "debugging" is two things: development-time debugging and production-time debugging. It's only the latter that unikernels are fundamentally bad at. For dev-time debugging, both MirageOS and Erlang-on-Xen come with ways to compile your app as an OS process rather than as a VM image. When you are trying to integration-test your app, you integration-test the process version of it. When you're trying to smoke-test your app, you can still use the process version, or you can launch (an instrumented copy of) the VM image. Either way, it's no harder than dev-time debugging of a regular non-unikernel app.

geofft 1 day ago 3 replies      
It may well be the case that unikernels as currently envisioned by unikernel proponents are impossible to make fit for production; it may also well be the case that there exists a product that is closer to a unikernel than current kernels, that is quite production-suitable, and unikernels are fruitful research to that point.

For instance, you could imagine a unikernel that did support fork() and preemptive multitasking, but took advantage of the fact that every process trusts every other one (no privilege boundaries) to avoid the overhead of a context switch. Scheduling one process over another would be no more expensive than jumping from one green (userspace) thread to another on regular OSes, which would be a huge change compared to current OSes, but isn't quite a unikernel, at least under the provided definition.
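The "scheduling is just a function call" idea can be illustrated with a toy cooperative scheduler (a sketch in Python, purely for illustration; the names `scheduler` and `worker` are invented here). Resuming a generator is ordinary function-call-level work, which is the cheapness a no-privilege-boundary unikernel could get for all of its "processes":

```python
# Toy cooperative (green-thread-style) scheduler: a "context switch"
# is just resuming a Python generator -- no kernel transition involved.
from collections import deque

def scheduler(tasks):
    ready = deque(tasks)
    log = []
    while ready:
        task = ready.popleft()
        try:
            log.append(next(task))   # run until the task yields (cooperates)
            ready.append(task)       # re-queue it; no privilege boundary crossed
        except StopIteration:
            pass                     # task finished; drop it
    return log

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"         # cooperative yield point

print(scheduler([worker("a", 2), worker("b", 3)]))
# -> ['a:0', 'b:0', 'a:1', 'b:1', 'b:2']
```

The trade-off, of course, is the same one every cooperative system has: a task that never yields starves everyone else, which is exactly why it only works when all code trusts all other code.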

Along similar lines, I could imagine a lightweight strace that has basically the overhead of something like LD_PRELOAD (i.e., much lower overhead than traditional strace, which has to stop the process, schedule the tracer, and copy memory from the tracee to the tracer, all of which is slow if you care about process isolation). And as soon as you add lightweight processes, you get tcpdump and netstat and all that other fun stuff.

On another note, I'm curious if hypervisors are inherently easier to secure (not currently more secure in practice) than kernels. It certainly seems like your empirical intuition of the kernel's attack surface is going to be different if you spend your time worrying about deploying Linux (like most people in this discussion) vs. deploying Solaris (like the author).

bcg1 1 day ago 2 replies      
This article is mostly FUD I think.

It comes off as a slew of strawmen arguments ... for example the idea that unikernels are defined as applications that run in "ring 0" of the microprocessor... and that the primary reason is for performance...

All of the unikernel implementations he mentioned (mirageos, osv, rumpkernels) all run on top of some other hardware abstraction (xen, posix, etc) with perhaps the exception of a "bmk" rumpkernel.

We currently have a situation in "the cloud" where we have applications running on top of a hardware abstraction layer (a monolithic kernel) running on top of another hardware abstraction layer (a hypervisor). Unikernels provide a (currently niche) solution for eliminating some of the 1e6+ lines of monolithic kernel code that individual applications don't need and that introduce performance and security problems. To dismiss this as "unfit for production" is somewhat specious.

I wonder if Joyent might have a vested interest in spreading FUD around unikernels and their usefulness.

_wmd 1 day ago 1 reply      
I think the problems with this article are well covered already. Just a suggestion for Joyent: articles like this are damaging to your excellent reputation, would suggest a thin layer of review before hitting the post button!

Some additional meat:

- The complaint about Mirage being written in OCaml is nonsense, it's trivial to create bindings to other languages, and in 40 years this never stopped us interfacing our e.g. Python with C.

- A highly expressive type/memory safe language is not "security through obscurity", an SSL stack written in such a language is infinitely less likely to suffer from some of the worst kinds of bugs in recent memory (Heartbleed comes to mind)

- Removing layers of junk is already a great idea, whether or not MirageOS or Rump represent good attempts at that. It's worth remembering that SMM, EFI and microcode still exist on every motherboard, using some battle-tested middleware like Linux doesn't get you away from this.

- Can't comment on the vague performance counterarguments in general, but reducing accept() from a microseconds affair to a function call is a difficult benefit to refute in modern networking software.

uxcn 1 day ago 1 reply      
I think Bryan Cantrill and Joyent are doing a number of interesting things, but this reads more like an ad than a genuine critique of Unikernels.

 The primary reason to implement functionality in the operating system kernel is for performance: by avoiding a context switch across the user-kernel boundary, operations that rely upon transit across that boundary can be made faster.
I haven't heard this argument made once. There are performance benefits (smaller footprint, compiler optimization across system call boundaries, etc...). However, the primary benefit is not performance from eliminating the user/kernel boundary.

 Should you have apps that can be unikernel-borne, you arrive at the most profound reason that unikernels are unfit for production and the reason that (to me, anyway) strikes unikernels through the heart when it comes to deploying anything real in production: Unikernels are entirely undebuggable.
If this were true, and an issue, FPGAs would also be completely unusable in production.

pyritschard 1 day ago 1 reply      
The essential point the lengthy article makes revolves around debugging facilities for unikernels. While mostly true for MirageOS and the rest of the unikernel world today, OSv showed that it is quite possible to provide good instrumentation tooling for unikernels.

The smaller point about porting applications (whether targeting unikernels that are specific to a language runtime or more generic ones like OSv and rump kernels) is the most salient; it will probably restrict unikernel adoption.

For Docker, if only to provide a good substrate for dev environments for people running Windows or Mac computers, it is very promising.

ewindisch 1 day ago 0 replies      
I'm happy for this article because it does hit some points on the head. Other points are deeply entrenched in Bryan's biases, but I can't really fault him for that.

In particular, I am suspicious of the idea that unikernels are more secure. Linux containers make the application secure in several ways that neither unikernels nor hypervisors can really protect from. Point being a unikernel (as defined) can do anything it wishes to on the hardware. There is no principle of least-privilege. There are no unprivileged users unless you write them into the code. It's the same reason why containers are more secure than VMs.

Users are only now, and slowly, starting to understand the idea that containers can be more secure than a VM. False perspectives and promises of unikernel security only conflate this issue.

That said, I do think the problems with unikernels might eventually go away as they evolve. Libraries such as Capsicum could help, for instance. Language-specific or unikernel-as-a-vm might help. Frameworks to build secure unikernels will help. Whatever the case, the problems we have today are not solved, and unikernels are not ready for production -- yet.

This blog post was clearly spurred by the acquisition made by Docker (of which I am alumnus). I think it's a good move for them to be ahead of the technology, despite the immediate limitations of the approach.

seliopou 1 day ago 1 reply      
First, let's put aside the start of the blog post, which consists entirely of empirical questions. Each potential adopter of unikernels will have to figure out for themselves whether their specific use-case justifies the cost and benefit of this particular technology, just like all others.

Putting that aside, debuggability is an obvious and pressing issue to production use-cases. Any proponent of unikernels that denies that should be defenestrated. I haven't come across any that do.

How to go about debugging unikernels is unclear because it certainly is still early days. However, I don't think the lack of a command line in principle precludes debuggability, nor does it to my mind even preclude using some of the traditional tools that people use today. For example, I could imagine a unikernel library that you could link against that would allow for remote dtrace sessions. Once you have that, you can start rebuilding your toolchain.

P.S. Bryan, where's my t-shirt?

zobzu 1 day ago 1 reply      
From TFA: "At best, unikernels amount to security theater, and at worst, a security nightmare."

As a security engineer, that's a good one sentence summary from my point of view of unikernels, since, forever.

I think the reason why unikernels are being developed is due mostly to ignorance, and if any of them is successful, it will morph into an OS that is closer to Mesos, Singularity, or even Plan9. That's faster, safer, more logical, etc.

ori_b 1 day ago 3 replies      
The key thing to realize, I think, is that if you're using virtualization, a unikernel is nothing more than a process that uses a very strange system call API.
pcwalton 1 day ago 1 reply      
It's not by any means the main point of the article, but: I'm not sure citing the Rust mailing list post on M:N scheduling is proof that it's a dead idea. The popularity of Go is a huge counterexample.
toast0 1 day ago 0 replies      
I'm not likely to run a unikernel anytime soon, but I wanted to respond to this:

> And as shaky as they may be, these arguments are further undermined by the fact that unikernels very much rely on hardware virtualization to achieve any multi-tenancy whatsoever.

Multi-tenancy is needed in some cases, but I don't need it, we use the whole machine, and other than the one process that does all the work, we only have some related processes for async gethost, monitoring/system stats processes, ntpd, sshd, getty.

zmanian 1 day ago 3 replies      
Op seems to misunderstand the following:

1. Your hypervisor is the security boundary. 2. Unikernel design lets you maximize the security benefits of AppSec and LangSec by removing the large OS surface area.

readams 1 day ago 0 replies      
One of the things that seems to really fall flat is the claim that the security is bad for unikernels. The comparison point though is not a traditional OS running in a hypervisor but a container running on the host OS. In that comparison I think unikernels are emphatically more secure than what you get on Linux, and have essentially all of the same advantages of containers (plus a few extra ones).

For Joyent of course they have a book to talk up and they want to sell you their own solution which looks more like containers than a hypervisor. The Joyent solution is I think undoubtedly very interesting and well-considered but I have a suspicion that they've hitched their wagon to the wrong horse and Linux will keep winning.

PaulHoule 1 day ago 0 replies      
I dunno.

For a long time the dominant programming environment for IBM mainframes has been VM/CMS, where VM is something like VirtualBox and CMS is something a lot like the old MS-DOS, i.e. a single process operating system. Say what you like but it was a better environment than anything based on micros until you started seeing the more advanced IDEs on DOS circa 1987 or so.

Now the 360 was a machine designed to do everything, but it's clear the virtual memory in most machines is an issue in terms of die size, cost, power consumption and performance and I wonder if some different configuration in that department together with a new approach to the OS could make a difference.

Animats 1 day ago 1 reply      
Can you run the unikernel under a debugger when testing? Can you get crash dumps? Stack backtraces?

Unless you're running your unikernel on bare metal, it's still running under an OS. It's just that the OS is called a hypervisor and is less bloated than most OSs.

erichocean 11 hours ago 0 replies      
Simple explanation for the article: Bryan is "talking his book".[0]

Joyent doesn't sell unikernel services, hence unikernels are bad. Color me shocked. Is it me, or has Joyent become less than upfront about their motives over the last few years? I don't require everyone to embrace "don't be evil" or whatever, but I always get a "righteous" vibe from Joyent employees that seems at odds with their actual behavior. Maybe they feel under siege or whatever, and are reacting to that? The whole thing is vaguely off somehow.

[0] http://www.investorwords.com/8436/talking_my_book.html

hughw 1 day ago 0 replies      
Isn't it a feature of (some) unikernels, that you can fire one up to respond to some request, and tear it down, in milliseconds? If so, running an AWS Lambda-like service with all the isolation you get in a HVM seems desirable for some situations. The isolation provided by a Docker container might not be good enough. It's a feature whose benefits, for some applications, might balance the debugging costs the article outlines.
patrickaljord 1 day ago 0 replies      
Nothing like a 15 paragraphs corporate blog to explain why a technology is bad and unfit for production to promote said technology. Microsoft used to do the same with linux and now we have this http://blogs.technet.com/b/windowsserver/archive/2015/05/06/...
kriro 1 day ago 0 replies      
I think he brushes by the security argument too quickly. Unikernels are (typically) smaller with less attack surface and more importantly it's easier to reason about them. I'd argue that this ability to keep more of the entire OS in your head at any given time improves security on a high level of abstraction.
jorge-fundido 1 day ago 1 reply      
"Unikernels are entirely undebuggable."

I'm confident this will be addressed eventually. Anyone have a sense of what that will look like? Something like JMX? Something like dumping core, restarting, and analyzing later?

ccostes 1 day ago 1 reply      
Reading through the article I feel like the author and I are describing different things when we use the term unikernel, which is surprising because we both have experience with the same unikernel: QNX. I'm not very familiar with the other examples, but my QNX application definitely does have processes that I can see using top, htop, etc., and interfaces with system hardware using the QNX system calls; all things the article describes as not being features of unikernels.

Either the article is written in the context of writing kernel software, which wouldn't have much of an impact on my decision to run my application on a unikernel OS or not, or QNX is a far outlier from other unikernel OS's and that's why I'm so confused.

Philipp__ 1 day ago 1 reply      
Wow. This thread went just as I thought it will... The HN way.

I haven't had any experience with unikernels (still a student), but there are a few concerning things about them. And the main thing is that those concerns sit at their very core.

I have only respect and admiration for Mr. Cantrill, but this post felt kinda strange. After reading the last paragraph it sounded like an ad. Maybe they got scared of Docker possibly expanding and taking part of their cookie. I don't know, but these discussions were interesting to read at least...

kev009 1 day ago 0 replies      
I tweeted to him to research IBM's zTPF before writing this, I guess it conflicts with the narrative he's telling though. In general, I agree with his sentiments, but there are no absolutes, only trade offs here. You can, for instance, hook a debugger into the kernel or through the hypervisor. And debugging hardware looks a lot like debugging a unikernel in that sense.
jupp0r 1 day ago 0 replies      
The main use case for unikernel apps (the way I see it) is running language specific VMs like Beam, MRI or the JVM almost directly on bare metal and getting rid of all the complexity of OSes. The idea is to make it easier to debug, optimize and tune applications by removing traditional OSes complex kernels from the equation. The real argument for security (that the author omits) is derived from that: 20 million less lines of code in the stack that you deploy.
woah 1 day ago 5 replies      
So can someone intelligently contrast a hypervisor with an OS? Both allow multiple applications to run on some hardware, what are the major differences?
Mojah 1 day ago 0 replies      
In case anyone is still struggling with the concept of a 'unikernel', I found this article to help in clearing it all out: https://ma.ttias.be/what-is-a-unikernel/
dicroce 1 day ago 1 reply      
If the definition of unikernels doesn't include hypervisors, then arguments against unikernels that specifically attack hypervisors are only viable against unikernels with hypervisors.
dustingetz 1 day ago 1 reply      
So you need good enough logs that you can debug production without ssh to production (since there is no ssh, bash, ps etc)? Don't we already have this?
jksmith 1 day ago 1 reply      
>"There are no processes, so of course there is no ps, no htop, no strace but there is also no netstat, no tcpdump, no ping! And these are just the crude, decades-old tools."

So does this mean something like a Symbolics machine or an Oberon machine can't be debugged, or does this mean that the unikernel has to be debugged at a higher level by the application(s) it's dedicated to?

chris_wot 1 day ago 0 replies      
I think Cantrill is doing a massive favour for those who are pro-Unikernels - he's essentially trolling them and will force them to come up with responses to some of the issues he's making.

Given how invested Joyent is in their current positions, I can see why Unikernels may seem a threat, but none of the things Cantrill has raised as concerns seem insurmountable.

nevir 1 day ago 2 replies      
TL;DR for those reacting to the title, but not reading the entire article:

Unikernels are young, and lack tooling/robustness that we have in more traditional approaches. They are not production ready yet, but will likely become a prominent way of building and deploying applications in the future.

cbd1984 1 day ago 0 replies      
How are unikernels unfit for production? CMS has been in production longer than most here have been alive.
0xdeadbeefbabe 1 day ago 1 reply      
> it is all OS, if a crude and anemic one.

Crude or anemic? The program does what you want or it doesn't. Quit trying to make it a human.

Edit: If the author can believe programs are crude or anemic, he clearly likes to look at them from a high level, but you need a low-level view to get excited about unikernels.

Edit: What?

dang 1 day ago 0 replies      
Personal attacks are not allowed on Hacker News.
MCRed 1 day ago 1 reply      
This article seems to miss the point: unikernels are useful for running in VMs, not on bare metal. Thus you get the isolation of a true VM with container-like performance & resource usage.
Launch, Land, Repeat blueorigin.com
271 points by Aaronn   ago   158 comments top 19
molyss 22 hours ago 9 replies      
Am I the only one finding the last 2 BO videos highly disingenuous?

As mentioned numerous times, there's the whole "reaching space" vs "going into orbit" before landing.

More important to me is the fact that SpaceX is streaming its different tries in _live_, taking the risk of crashing the rocket out in the open. How many vehicles did BO lose before achieving a vertical landing ?

Oh, and what about the fact that they have total control over the location and time of the launch? Meaning they basically control weather to an accuracy no one launching anything useful into space has. For example, the last failed SpaceX landing was officially linked to fog icing the leg locks. That's not going to happen if you launch on a clear day from the desert.

These are more comparable to the Grasshopper tries than to anything SpaceX has done recently: no horizontal speed, full weather control, no reporting on failed attempts, very limited weight. Even the last Grasshopper video seemed to have more side winds that had to be countered than this 100k-altitude video.

Even the format of the video itself screams "vaporware" to me. It looks like a trailer for a bad action movie, where some spacey something goes to space, separates, and lands back in 15 seconds. Where the Grasshopper videos left me in awe, looping over them 5 times in a row, the BO ones just make me feel like they sh/could end with some sexual innuendo over their big rocket.

yborg 1 day ago 6 replies      
Honestly, I just roll my eyes now at these pissing-contest blog posts from Bezos. He does his team a disservice by suggesting that what they are achieving is actually more advanced than what SpaceX has done - it all looks like the approach the Soviets took in trumpeting various "firsts" in space in the '60s as the US methodically built capability far beyond what the Russians could sustain.

I am impressed by both companies' ambition, and SpaceX clearly has both the time and money advantage over Blue Origin. Let your accomplishments speak for themselves.

gus_massa 1 day ago 1 reply      
Impressive, but one important difference with the SpaceX rockets is that this rocket only goes up to space but it doesn't put satellites in orbit.

Form: https://what-if.xkcd.com/58/

> The reason it's hard to get to orbit isn't that space is high up. It's hard to get to orbit because you have to go so fast.
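The quoted point can be made concrete with a back-of-the-envelope calculation (a sketch under idealized assumptions: no drag, constant surface gravity, textbook constants; variable names are mine):

```python
import math

MU_EARTH = 3.986e14   # Earth's standard gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m
ALTITUDE = 100e3      # the Karman line New Shepard crosses, m
G = 9.81              # surface gravity, m/s^2 (treated as constant)

# Circular orbital speed at 100 km altitude: v = sqrt(mu / r)
v_orbit = math.sqrt(MU_EARTH / (R_EARTH + ALTITUDE))

# Vertical speed needed to merely coast up to 100 km: v = sqrt(2 g h)
v_hop = math.sqrt(2 * G * ALTITUDE)

# Kinetic energy scales with v^2, so compare the squares
energy_ratio = (v_orbit / v_hop) ** 2

print(f"orbit at 100 km:  {v_orbit / 1000:.1f} km/s")
print(f"straight-up hop:  {v_hop / 1000:.1f} km/s")
print(f"kinetic energy ratio: ~{energy_ratio:.0f}x")
```

Roughly 7.8 km/s versus 1.4 km/s: an orbital vehicle needs on the order of 30 times the kinetic energy of a vertical hop to the same altitude, before even counting drag and gravity losses.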

vonklaus 19 hours ago 1 reply      
This is a pretty amazing feat that we are taking for granted. Space is super, super tough; the complex coordination of manufacturing something like this is being totally written off by many, but I assure you it is non-trivial.

A popular sentiment in that industry is that rocketry is like writing software composed of many modules, testing each module separately on a Mac, then deploying the entire build on Linux. If it doesn't work, you don't just back out the conversion error or stray quotes you left in; your rocket explodes.

The engineering spend alone is massive, as is the damage to the company when a failure is syndicated across youtube. Taking big risks is something we should be promoting.

We are in a technological renaissance and it starts with lowering launch costs to achieve realtime LEO satellite blanketing and distributed communication channels to connect to the other fucking 3 billion people without internet. Bezos is accomplishing something great, and we don't need to qualify that statement.

He and Musk are definitively the Jobs and Gates of the 21st century if you want to use the obvious cliche.

What Gates did. What Jobs accomplished. It was pretty fucking powerful. Musk and Bezos are sort of doing that, except both are working in at least 3 industries at that same scale.

I wish Blue Origin, Sierra Nevada, Firefly and all the other people in new space well. Nano-sats will provide realtime insight to the earth, people will be able to own a satellite in ~5-10 years because of these advancements.

this is good for all of us, and the only negative thing to say about it is that for god sakes Jeff, that rocket does look a bit like a stubby penis.

hayksaakian 1 day ago 1 reply      
I like it -- we're starting a 21st century space race between corporations rather than nations.
sandworm101 22 hours ago 1 reply      
SpaceX = Space launch.

Blue Origin = Rollercoaster.

I really don't see why these companies are competing. They are in totally different markets. Sure, there is some technological crossover in that they both use rockets, but this is like comparing a Prius to a locomotive.

xgbi 16 hours ago 1 reply      
"Our vision: millions of people working and living in space"

When you have only a few seconds above the "official" space altitude on a parabolic trajectory, I wouldn't say "working and living in space", and especially not "millions" at the same time.

Is it me, or is this primarily a pitch video?

raldi 1 day ago 2 replies      
It sounds like Blue Origin rockets are only capable of sending payloads to space for just an instant, before gravity pulls them back down to earth. They're nowhere near capable of putting anything into orbit.

Is that correct? If so, what are they good for?

ChuckMcM 23 hours ago 0 replies      
I think it's great that New Shepard is coming along, but I don't get how Bezos feels he is helping his cause when he says "people living and working in space" when he doesn't come close to reaching orbit. That's the difference between an orbital mission and a sounding rocket.

Now, is he getting closer than Virgin Galactic to having tourist flights outside the atmosphere? That is pretty cool and a fair comparison. Being able to out-execute Burt Rutan? That counts for a lot, but don't try to compare yourself to SpaceX until you're putting things into LEO and getting the hardware back to use again.

andrewtbham 23 hours ago 0 replies      
Here is an animated video that shows what space tourism will be like. You will be in space for a few minutes. The view of the world from space will be amazing, plus you will be weightless. Not sure how long you will be up there or the cost, but it looks awesome.


sailfast 23 hours ago 0 replies      
This looks like a very complicated engineering achievement that undoubtedly will lead to advances in space travel and tourism.

That said, I didn't get to watch the launch and join in its success or failure, so I'm finding it difficult to actually care as much as other launches.

iamcreasy 21 hours ago 0 replies      
What I am really looking forward to is the bigger rocket from Blue Origin that will be able to achieve orbital velocity.
dba7dba 19 hours ago 0 replies      
I wonder if SpaceX opening an office in Seattle was just to make it easier to hire away engineers from Bezos?
anjc 17 hours ago 0 replies      
Article title sounds like somebody's attempt at a new MVP/PMF paradigm
forrestthewoods 1 day ago 3 replies      
Is Blue Origin landing a booster that SpaceX is letting crash? Would their forces combined result in total re-use? I'm not sure what all the parts and roles are. Sorry for the dumb question.
igravious 22 hours ago 1 reply      
Is there a significance to ~100km? Is this, roughly speaking, space -- where the atmosphere is so thin as to be almost negligible? Clearly the atmosphere thins gradually, so how do we define where space starts? Is the significance of ~100km something to do with the effects of gravity at that altitude above the Earth's surface? Does ~100km give you weightlessness? Or is Blue Origin going up to ~100km because it's a nice round number that is roughly (whatever that means) in space? But aren't kilometres completely arbitrary?

Also, can people please stop knocking Blue Origin. We get it at this point, okay? I'm a huge fan of SpaceX and Elon Musk, but does Blue Origin have to lose for SpaceX to win? No. There's nothing in this post from Bezos bashing SpaceX as far as I can see. They're simply saying: look, we did it again with the same refurbished rocket. Good on them. May they do it again and again. And so may SpaceX. The next space race is on, happy days!
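On the gravity question above: at ~100 km gravity is barely weaker than at the surface, so the weightlessness of spaceflight comes from free fall, not from altitude (which is why the quote at the top of this thread stresses speed over height). A quick stand-alone check with the inverse-square law, using rounded public values for Earth's radius and surface gravity:

```python
# Inverse-square estimate of gravitational acceleration at altitude.
# Constants are rounded reference values, not figures from the thread.
R_EARTH_KM = 6371.0  # mean Earth radius, km
G_SURFACE = 9.81     # surface gravity, m/s^2

def gravity_at_altitude(h_km):
    """Gravitational acceleration (m/s^2) at h_km above Earth's surface."""
    return G_SURFACE * (R_EARTH_KM / (R_EARTH_KM + h_km)) ** 2

print(round(gravity_at_altitude(100.0), 2))  # 9.51 -- about 97% of surface gravity
```

So at the Karman line you still weigh ~97% of what you do on the ground; orbit is about sideways speed, not height.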

sjg007 23 hours ago 0 replies      
These guys played way too much lunar lander :)
mchahn 1 day ago 3 replies      
One would think that using fuel to touch-down slowly is wasting fuel since they could use some kind of capture scheme with a parachute instead. I've read many times that the weight of the fuel is a big problem in spacecraft.
jcoffland 21 hours ago 2 replies      
This is awesome. A great complement to the work SpaceX is doing. To put it in perspective, this rocket went about 10x as high as an average international airline flight but would still need to go about 4000x as far to reach the moon. Not sure about the 3 mile per hour impact with the ground on my way home from work. I suppose with a nice soft seat it would be fine, but by the time United Airlines gets done with it you'll be packed in like an NYC cross-town bus with a seat just as hard.

edit: got my facts straight
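Those ratios are easy to sanity-check. A quick calculation (cruise altitude and lunar distance below are rounded reference values, not figures from the comment):

```python
# Sanity check of the altitude comparisons above; all figures are rounded
# reference values (typical long-haul cruise ~11 km, mean lunar distance).
APOGEE_KM = 100.0    # New Shepard's approximate apogee (the Karman line)
CRUISE_KM = 11.0     # typical airliner cruise altitude
MOON_KM = 384_400.0  # mean Earth-Moon distance

print(round(APOGEE_KM / CRUISE_KM, 1))  # 9.1  -> roughly 10x an airliner
print(round(MOON_KM / APOGEE_KM))       # 3844 -> "about 4000x" holds up
```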

Docker Acquires Unikernel Systems as It Looks Beyond Containers techcrunch.com
275 points by amirmc   ago   101 comments top 22
nickpsecurity 2 days ago 2 replies      
I was thinking: Docker. Hmm. Containers. Hmmmm. Xen developers. Hmmmmm. Seemed really boring until I saw " Anil Madhavapeddy, the CTO of Unikernel Systems." Oh... I know that name: it's on quite a few IT/INFOSEC papers I stashed and shared over the years. A smart researcher with a practical focus. Didn't know he was CTOing at a startup.

Yeah, Docker is about to get some enhancements for sure. Maybe some real security improvements, too. You can count on it.

hacknat 2 days ago 5 replies      
I used to be in the unikernel camp of "this is the next step in virtualization tech", but having played around with both containers and unikernels, and now developing with containers, I think unikernels are going to occupy only a very niche space.

There are two touted benefits of unikernels, performance and security. Performance turns out to be a red herring, as the overhead of an OS vs a Hypervisor turns out to be roughly equivalent (with the OS actually winning in some use cases).

Security is definitely an issue, but it's so abstract. My company is a compliance (a very specific industry's compliance) cloud provider and we have gone with Docker as we get to use the OS as our Hypervisor, which means it is much more extensible and, in our use case, secure as we are able to auto-encrypt all network traffic coming out of the hosts with a tap/tun virtual device.

Two things need to happen to make unikernels attractive. A new Hypervisor needs to get made, one that is just as extensible as an OS around the isolated primitives. It should also have something extra too (like the ability to fine tune resource management better than an OS can). Secondly a user friendly mechanism like Docker needs to happen.

btilly 2 days ago 2 replies      
Hopefully Docker's paternalistic attitude doesn't infect Unikernel systems.

I'm not very hopeful given that their CTO is quite open about wanting to embrace-extend-extinguish competing technologies. This move embraces unikernels, and now they are perfectly positioned to go the rest of the way.

The discussion at https://news.ycombinator.com/item?id=10904452 may shed light about my complaint.

pjmlp 2 days ago 1 reply      
OCaml and MirageOS FTW!

I hope this brings Unikernels adoption forward, and boosts a little more OCaml's adoption as well.

div0 2 days ago 5 replies      
OCaml is a fine language that most people don't use. If I want a unikernel in my own language, do I need to build one myself? I wonder if someone is building a unikernel that has external language bindings, which would allow one to create "high-level" unikernels. This would open up the possibility of completely bypassing the installation of a language runtime. For example, I could just type some Python code into a browser editor, and the backend could take the source code and fork a Python unikernel to run the code. Docker can currently do this, but one still has to rely on an underlying OS to manage all the packages, etc. Wouldn't it be nice if you could simply write "import xyz", and the unikernel took care of fetching them automatically?
chatmasta 2 days ago 2 replies      
Docker has some really smart people leading it to success. But make no mistake, the economic thesis of Docker depends on a massive landgrab of vendor lock-in. This acquisition is a hedge against any Unikernel company looking to make the same landgrab.

There's a reason Docker is so heavily funded by the biggest cloud companies. They're the ones who stand to benefit from specialized Docker containers optimized for their own platforms. It's a great way to package open source services and leverage the effort of the developer community into centralized profit.

It seems blatantly obvious that Docker is looking to build the app store of devops. I wish them the best of luck, but they are going to face some heavy resistance from open source initiatives. There is nothing about Docker that makes it fundamentally superior to the systems it's based on, specifically the LXC project. When developers finally wake up to the fact that they are sleepwalking into a massive walled garden, Docker will lose some of its clout.

amirmc 2 days ago 0 replies      
We also put a note on the company website [1]. I'm looking forward to working with the Docker folks on this. :)

[1] http://unikernel.com/#notice not to be confused with the community website at http://unikernel.org :)

bcg1 2 days ago 2 replies      
So far Docker seems to be a good citizen when it comes to FOSS, hopefully that will continue.

I've been following the Mirage and rumpkernel lists for a while and it's nice to see these hackers getting traction (and money!) for their efforts.

Not too long ago unikernel.org was started, which IIRC was billed as a community-driven "one stop shop" for information on the subject, which I assume is independent of the company "Unikernel Systems". Hopefully Docker won't go rogue and start attacking others that use the term "unikernel" by claiming that it's trademarked or something like that.

Congratulations Amir et al!

seliopou 2 days ago 0 replies      
Congratulations to all the OCaml homies up in Cambridge! Well-deserved.
crudbug 2 days ago 3 replies      
I have been following unikernel development for some time. The work done by Antti Kantee and others on Rumpkernels [1] is the most promising and has the right abstractions (POSIX userspace using the NetBSD stack). Also, in the demo video, the unikernel folks should acknowledge the rumpkernel work, as they are using it :)

[1] http://rumpkernel.org

EvanPlaice 2 days ago 0 replies      
Holee shit. I totally called this. Even got made fun of on twitter (via @ShitHNSays) for mentioning it.

I didn't expect unikernels to gain mainstream notice for at least 6 months to a year.

tyingq 2 days ago 2 replies      
I didn't quite get what a unikernel was. Reading up on approaches that were a bit different than MirageOS was helpful:



erichocean 2 days ago 0 replies      
Our latest distributed database uses a mono kernel too. We use Pure64[0] to boot the system and then the "kernel" is derived from QK[1], but it's also just our database software.

Other than reducing complexity, our distributed database uses the virtual memory hardware in a unique way, so a mono kernel was essential.

Having said that, the easiest way to develop such a system is not on the bare metal; it's by running Linux in such a way that it only uses the first 1 or 2 cores, and then running your "custom kernel" on any other cores in the system. Then you can use a normal debugger and utilities during development. It's only when you actually want to put it into production that you can consider not using Linux at all.

[0] https://github.com/ReturnInfinity/Pure64

[1] http://www.state-machine.com/qpcpp/struct.html#comp_qk

hueving 2 days ago 1 reply      
Considering the way docker tends to feature-creep, they will eventually just be re-implementing a full kernel. :
bch 2 days ago 1 reply      
Why don't I see Antti Kantee's[0] name all over this?

[0] https://archive.fosdem.org/2013/interviews/2013-antii-kantee...

takeda 2 days ago 2 replies      
I'm a bit surprised. Isn't that a different approach than containerization?

I hope this wasn't an acquisition to simply kill the unikernel approach.

dukedougal 2 days ago 1 reply      
They seem to have sold very early in the lifecycle of the Unikernel technology. Possible they've left billions on the table?

Were the terms of the deal disclosed?

Perhaps Unikernel Systems ran out of money and it was an "acqui-hire"?

AkihiroSuda 2 days ago 0 replies      
Interesting demo materials (`docker-unikernel`): https://github.com/Unikernel-Systems/DockerConEU2015-demo
mwcampbell 2 days ago 0 replies      
I'm curious about how support for building unikernels will be integrated into Docker. The current Dockerfile-based build process doesn't support separate build-time and run-time environments, but when building a unikernel, the build-time environment is completely different from the run-time artifact. Support for separate build-time and run-time environments is also useful when building container images, so the image doesn't include things that are only necessary at build time. So I hope that problem is solved first; I think the addition of unikernel support will be more natural that way.
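The separation described above is exactly what Docker's multi-stage builds later provided: one Dockerfile describes both environments, and only the final stage ships in the image. A sketch (the base image, paths, and app name here are illustrative, not from the comment):

```dockerfile
# Multi-stage Dockerfile sketch: the build stage carries the full toolchain,
# the runtime stage ships only the compiled artifact. Names are illustrative.

# --- build-time environment (never shipped) ---
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/app ./cmd/app

# --- run-time image: only the artifact is copied in ---
FROM scratch
COPY --from=builder /out/app /app
ENTRYPOINT ["/app"]
```

The same `COPY --from=` mechanism would let a unikernel toolchain live in a builder stage while the final stage holds nothing but the unikernel image itself.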
e_d_g_a_r 2 days ago 0 replies      
So Docker will now have some OCaml openings? :)
dap 2 days ago 2 replies      
This feels specious:

> The result of this is a very small and fast machine that has fewer security issues than traditional operating systems (because you strip out so much from the operating system, the attack surface becomes very small, too).

Obviously traditional operating systems provide a lot of interfaces that represent attack surface, but they're generally able to be secured. On the other hand, much of the operating system actually _implements_ security, so if you throw it out, you're losing that.

e_d_g_a_r 2 days ago 0 replies      
Yes! More OCaml adoption.
Show HN: I made a site to catalogue 10,000 CC0-licensed stock photos finda.photo
297 points by davidbarker   ago   63 comments top 24
liamca 1 day ago 1 reply      
[Full disclosure, I work on a service called Azure Search]

Very nice site! Since your site is so much based around search, I thought I would pass on a few suggestions based on what I saw. If you happen to be using a search engine for your content such as ElasticSearch, SOLR, or maybe Azure Search :-), there are a few simple things you could add to make the experience a little smoother. Suggestions in the search box are nice, allowing people to quickly see results as they type. You could even add thumbnails of the images in the typeahead, such as you see with the Twitter Typeahead library (http://twitter.github.io/typeahead.js/). I also noticed that your search does not handle spelling mistakes or phonetic search (matching words that sound similar). Finally, through the use of stemming, search engines can often help you find additional relevant content. For example, if a person is looking for mice, but your content has the word mouse in it, this will bring back a match. Since you don't have a lot of content, this can really help people find relevant results.

Hope that helps.
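As a toy illustration of the spelling-tolerance point, here is a stdlib-only sketch; the tag list is invented for the example, and a production site would lean on the search engine's built-in fuzzy matching and stemming rather than this:

```python
# Close-match lookup over a small tag list using difflib's similarity ratio.
# The TAGS list is made up for illustration.
import difflib

TAGS = ["wolf", "fox", "lion", "monkey", "mouse", "ferret", "australia"]

def suggest(query, tags=TAGS, cutoff=0.6):
    """Return up to 3 tags whose spelling is close to the query."""
    return difflib.get_close_matches(query.lower(), tags, n=3, cutoff=cutoff)

print(suggest("wolff"))       # ['wolf']
print(suggest("australium"))  # ['australia']
```

This catches simple misspellings; phonetic matching ("fone" -> "phone") needs a different technique, such as a Soundex or Metaphone encoding.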

oliwarner 1 day ago 3 replies      
I like to think I am very conscious of copyright. I might not always adhere to it in my personal life (who can claim they do these days?!) but professionally, everything is done strictly legitimately. With that in mind... am I the only person who is slightly uncomfortable with the phrasing around PD and CC0? With other copyright licenses there is somebody saying they own something.

I'm particularly uncomfortable with Flickr's "no known copyright restrictions". What if people infer PD from that and upload it somewhere else under CC0? Then it gets sucked into this finda.photo? Yuck.

As for finda.photo, why are you truncating the source down to just a domain name?! Many of the sources include proper uploader details so why aren't you copying those over and displaying them?

I know you're not required to, but attribution isn't a bad thing if you can give it. I for one would be much happier using a photo if I knew exactly where it came from.

BrunoJo 1 day ago 2 replies      
I always use https://pexels.com. They also have only CC0 images.
brandonheato 1 day ago 1 reply      
Why not just use flickr? A search for images with "No known copyright restrictions" returned 663,502 results: https://www.flickr.com/search/?license=7%2C9%2C10&text=&adva...
m-i-l 1 day ago 1 reply      
Looks good. Feedback from a designer I showed this to: it would be useful to search based on aspect ratio (landscape vs portrait at minimum).
vortico 1 day ago 0 replies      
I like the domain, the design is usable, and the database is great. This has it all.
lucaspiller 1 day ago 1 reply      
Very nice! What are you using to search the photos by colour and feature?
petecooper 1 day ago 1 reply      
Adding Alana to the list of CC0-only stock photos.

[1] http://alana.io/

[2] http://alana.io/about-us/

Flimm 1 day ago 0 replies      
One of the about pages says that the photos are in a GitHub repo, which sounds really cool, until you follow the link and find the repo hasn't been shared yet. Hopefully it's just a matter of time before it is shared.


j_lev 1 day ago 1 reply      
Hi - for some reason the search bar keeps changing my search terms eg Australia --> Australium
frantzmiccoli 1 day ago 0 replies      
It seems that you do have a valid SSL certificate but https://finda.photo/search?q=test is not working properly.
trtmrt 1 day ago 1 reply      
Firstly, it is slow... Secondly, I typed "wolf" and I got: 3 foxes, 1 lion, 2 monkeys, 1 snow house, and 2 wolves that do not look like wolves!?
franciscop 1 day ago 0 replies      
Check also http://pixabay.com/ for Public Domain pictures, I've found many awesome gems there
chrxr 1 day ago 1 reply      
http://finda.photo/image/14847 - Tags are weird. This is not a dog, mouse, canine or feline. It's not sitting. It has 'eyes' but I think that might be irrelevant. Although I would agree that ferrets (not an included tag) are cute, I'm not sure I'd describe them as domestic. Otherwise, great!
hantusk 1 day ago 0 replies      
An idea: you could use this pretrained machine learning library to classify your images and improve search even more: https://www.reddit.com/r/MachineLearning/comments/3yt4o5/dee...
andreash 1 day ago 2 replies      
What is the diff between this and Pixabay or pexels.com? Wish there was one meta-search engine to cover them all :)
uvesten 1 day ago 1 reply      
I really like both the selection and the color chooser! Did you do any manual selection of the photos?
quaffapint 1 day ago 0 replies      
Might be a good place for an infinite scroll when going through pages of image results - one less click they have to do.
_spoonman 1 day ago 0 replies      
Just a really great job on this. Love it.
edpichler 1 day ago 0 replies      
Just curiosity: where did the owner possibly find all these photos to fill up the database?
fareesh 1 day ago 0 replies      
I ran into a Laravel error on the homepage due to the server running out of memory.
jlis 1 day ago 0 replies      
nice one!
juiced 1 day ago 0 replies      
My first search on "new york" returned nothing...
thecodemonkey 1 day ago 0 replies      
If you're still a little concerned with licensing and copyrights, I would recommend taking a look at www.graphicstock.com - you just pay a flat monthly or yearly fee and you can download as much as you want.

Disclaimer: I work for the company behind GraphicStock. Oh, and we're hiring!

CA assembly member introduces encryption ban disguised as human trafficking bill asmdc.org
270 points by asimpletune   ago   91 comments top 22
AJ007 2 days ago 1 reply      
Here is the bottom line -- if smartphones can not be securely encrypted there are a lot of things we can't use them for:

- Phones aren't going to replace credit cards
- You will need to type in all your passwords each time you use them
- Two-factor authentication will need to be done with a different device
- HealthKit and other medical records will need to be moved elsewhere
- Any profession where there are very serious consequences for leaked communication will no longer be able to do it through their smartphone (lawyers, doctors, executives)

Basically losing or having your mobile phone stolen will be equal to a burglar pulling up to your house or office and driving away with every sensitive document and record in the back of a van.

No tech company wants to see the end of the mobile revolution. Forget the national interest side to this, anyone supporting broken encryption basically looks like a total moron.

Steko 2 days ago 1 reply      
Assemblyman Jim Cooper represents Elk Grove, a city of ~160K just south of Sacramento. Apple is the second largest employer in Elk Grove [1] and is currently expanding its footprint there by several thousand jobs [2].

Hopefully there's a primary challenger or soon will be, I'll donate.

[1] https://en.wikipedia.org/wiki/Elk_Grove,_California#Top_empl...

[2] http://www.bizjournals.com/sacramento/news/2015/12/07/someth...

{note: [2] gives a significantly larger current headcount than [1]}

joshka 2 days ago 2 replies      
Same problem, pretty much, with the NY bill. Buy a phone that is unlocked / decrypted at the time of sale. The next step is for the user to log in and encrypt. I don't see how this bill actually fixes that. I guess this hinges on the definition of authorized when it comes to encrypting something I own. I hope I don't require authorization to do this.

A few questions I posed to the NY senator earlier this week:

1. Would you use such a phone knowing that the government / Apple / seller of the phone could easily get into it?

2. Would it be legal for someone in the legal profession to use such a phone without being disbarred for negligence of the right to private communication?

3. If sold unlocked, and then later locked (i.e. every phone right now), where's the change?

4. Where does the 4th amendment fit in with this?

5. What should we do with old phones that don't support this? Dump them in the bay, I guess?

6. Where are the technical experts that are telling you that this is actually feasible to do securely and safely? I'm looking hard, but only seeing negative responses from those that know what they're talking about.

7. Who's responsible for fixing the broken device once the master key gets leaked? The manufacturer? The state of {CA/NY}?

8. The list goes on.

fiatmoney 2 days ago 7 replies      
For a long time gun owners have had the singular pleasure of having massively intrusive, incoherent regulations written by people with no technical understanding of the subject matter. It's nice to finally have some company.
Dowwie 2 days ago 3 replies      
"Full-disk encrypted operating systems provide criminals an invaluable tool to prey on women, children, and threaten our freedoms while making the legal process of judicial court orders useless."
jonathaneunice 2 days ago 0 replies      
This is the game. There will never be a "Prohibiting Encryption and Preventing Privacy Act." It will always be an ostensible act of patriotism and protection: combatting terrorists, child molesters, sex traffickers, drug cartels, money launderers, and other easy-to-demonize scary folk.
cmurf 2 days ago 1 reply      
Cryptography yields two components: encryption/decryption, and authentication. Break one of those, and they're both broken. And that's what really bothers me about all of these politicians who only fixate on the encryption part. They're oblivious to extreme risk introduced by breaking authentication.
trhway 2 days ago 1 reply      
>Full-disk encrypted operating systems provide criminals an invaluable tool to prey on women, children, and threaten our freedoms

"If You're Typing the Letters A-E-S Into Your Code You're Doing It Wrong"

mdip 2 days ago 0 replies      
It's a political tactic that's been used forever.

When the legislature wants to do something unpopular (or even stupid which is what this is), associate it with the "Evil Of The Era" and propose the bad legislation as the solution to said evil. These days, popular "Evils" are Human Trafficking, Child Porn, and "Terrorism". The first two evoke extreme emotion of crimes committed against the most innocent of victims, so they're the best choice in this scenario. In the 80s-90s it was anything to reduce "Crack Babies" or win "The War on Drugs".

It's an old trick -- when people talk about logical limits placed on the first amendment, you'll hear the phrase "Shouting Fire in a Crowded Theater". Most of those who utter it don't realize that this phrase originated as part of a ruling that had nothing to do with "fire" or a "crowded theater" but was made to curtail the dangerous speech of opposing the draft during World War I[1].

[1] https://en.wikipedia.org/wiki/Shouting_fire_in_a_crowded_the...

kabdib 2 days ago 1 reply      
The bill says "...shall be capable of being decrypted and unlocked by its manufacturer or its operating system provider."

It doesn't say how, and it doesn't give a time frame.

So: provide an API to accept a key. Allow two key attempts per second. Start with key 0x0000..000; next try 0x000..0001. This is guaranteed to complete; you just have to be prepared to wait a while.

(Yes, I know that courts are unhappy with this kind of thing. But the bill is a crappy bill, in many regards).
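To put that "wait a while" in perspective, here is a quick calculation of the worst-case search time under the two-guesses-per-second rate proposed above, assuming a modern 128-bit key (the key size is my assumption, not from the comment):

```python
# Worst-case exhaustive key search time at a fixed guess rate.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def worst_case_years(key_bits, attempts_per_second=2):
    """Years needed to try every key of the given size at the given rate."""
    return 2 ** key_bits / attempts_per_second / SECONDS_PER_YEAR

print(f"{worst_case_years(128):.2e}")  # on the order of 10^30 years
```

For comparison, the universe is roughly 1.4e10 years old, so "guaranteed to complete" is technically true and practically useless, which is presumably the point.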

mmanfrin 2 days ago 0 replies      
What a severe irony for this idiot to be on the Privacy and Consumer Protection Committee.
pdkl95 2 days ago 0 replies      
So the 2nd crypto war has moved beyond mere fighting words. The long-term battlefield is usually the court of public opinion, so I hope Silicon Valley recognizes this challenge to its power. Tech firms should have been attacking this rhetoric hard when it started, but accusing politicians of not understanding math/crypto has been a common response.

Do you want crypto to work? Or do you want to be forced to replace crypto with security theater? Is your business actually willing to actively protect a free internet? Or is it easier to assume this is "someone else's problem"?

I guess we will see which companies defend themselves, and which companies think being a collaborator is more profitable?

trhway 2 days ago 1 reply      
So basically there should be 2 components sold separately: a "dumb GSM connectivity module" and a "smart OS module" (an iPod, basically). The latter, not having cell phone connectivity, wouldn't be subject to that law and thus can have FDE/whatever. The GSM module can just attach to the back of the "iPod" like an external battery.
ianamartin 2 days ago 1 reply      
The "shall" wording is going to keep this in courts for years, even if it does pass.

Shall is the source of more litigation than any other single word in the English language. It can always be debated because no one knows if it reliably means "can", "must", "may", "might", "will", "should", "ought to", or "is allowed to".

All the above uses can be supported with evidence. Because language evolves.

It's a killer word for any law or contract and guaranteed to be disputed.

I am not a lawyer, btw.

But if this somehow passes, it will get tossed because of the wording.

ams6110 2 days ago 1 reply      
Would Apple have the balls to stop selling phones in California?
passwordreset 1 day ago 0 replies      
Where the hell is Anonymous in all this? Shouldn't they be out there doxxing and haxxoring and whatever it is that they do to these kinds of people? I'd figure if someone stands up and says "Encryption should be illegal", they probably don't encrypt jack shit themselves, and they're probably easy targets. They might even take the hint and say "shit, I should have encrypted my internetz" and change their stance. Eh, doubtful.
peteretep 1 day ago 1 reply      
Someone needs to make a big deal about how this is bad for business because it allows the Chinese/Russians/French/Welsh/whoever to steal American Innovations(TM) and then write to whomever this person will be challenged by in upcoming elections with "x is anti American Business" talking points. Both sides can play "Won't someone think of the children?"
tdkl 1 day ago 0 replies      
I'm waiting on the day something like this gets proposed in all of the EU states, for the same BS reasons.

As a matter of fact, I'm certain that the current leaders of the EU countries who publicly invited immigrants to their states (we all know the most prominent one) were considering this as an easy way to change the privacy laws, and to be applauded for it.

LinuxBender 2 days ago 1 reply      
What impact might this have on tax revenue from said controlled devices no longer being purchased in California?

Do we start referring to encrypted devices without back doors as contraband?

jegutman 2 days ago 0 replies      
Might as well try to ban speaking pig latin in public.
rdudek 2 days ago 0 replies      
Fair question: if I wanted to buy a phone now, which manufacturer/OS comes with the ability to do FDE where said party does not have a copy of the key?
Silicon Valley and Black Coders: Howard University fights to join the tech boom bloomberg.com
254 points by ml_hpc   ago   457 comments top 50
alistproducer2 2 days ago 3 replies      
I'm a black guy working in IT at a company in the top 5 of the Fortune 500. This is a complicated issue. There have been experiments that show having a "black" name lowers your job prospects [1]. Other studies show:

"...race actually turned out to be more significant than a criminal background. Notice that employers were more likely to call Whites with a criminal record (17% were offered an interview) than Blacks without a criminal record (14%). [2]"

So all the people acting as though our society is some meritocratic utopia can keep that bullsh*t to themselves.

On the other hand, there is no doubt that blacks under-perform relative to whites when it comes to academics. There are obvious reasons for this, but those reasons don't change the truth. Companies that are heavy on engineering are going to use academic markers to try to select the best of the best. There aren't enough black candidates performing at or above the level of their white peers in the top percentiles of CS to give us proportional representation.

[1] http://www.nber.org/digest/sep03/w9873.html

[2] http://thesocietypages.org/socimages/2015/04/03/race-crimina...

tom_b 2 days ago 0 replies      
I am a non-minority, CS graduate of a HBCU. There were amazing hackers in my program. I believe there were around 200-300 declared CS majors across all levels when I graduated. Numbers are down from those peaks now.

My institution was heavily recruited by big corps, government labs, and east coast companies.

The "best" students, by GPA, were in high-demand for all of the above. Many were heavily recruited into management tracks for non-IT companies. A large number of government institutions and defense contractors were also eager to land new grads from our school. The "best" students, by hacking skills were (maybe stereo-typically for hackers) less interested in classes that didn't involve slinging code, but also all landed programming gigs. Less committed students, from either metric, seemed to still be getting jobs but I can't generalize as to the job type.

I think it is fair to say that my undergrad course-work was not as demanding as (guessing a bit here) Stanford, MIT, or CMU. But my GPA and GRE scores landed me multiple job and graduate school offers.

One aspect of hiring from (or at least my) HBCUs is that there is very strong network effect - alumni come back to the school and recruit interns and fulltime positions for their companies, help prep students for the process, and students looked to those alumni as trusted sources.

If Silicon Valley really wants to hire from HBCUs, that is the path I would recommend. Hire a few alums from the HBCUs and make recruiting and grooming candidates a priority for those alums.

pingou 2 days ago 14 replies      
"as the only African American on her team, she didn't feel she had much in common with her colleagues. 'When I went out to lunch or something with my team, it was sort of like, Soooo, what are you guys talking about?' she says"

I find this sentence really shocking, perhaps because I'm French and in France we try to assimilate people more (I don't really know). But I would definitely think that, as a white software engineer, I have a lot more in common with the black software engineer working with me than with some random white dude.

dengnan 2 days ago 5 replies      
> One senior, Sarah Jones, ... "There are not a lot of people of color in the Valley, and that, by itself, makes it kind of unwelcoming."

This statement may be true if "people of color" means African American. Otherwise, it is just not the case. Through my personal experience, I do think the Valley is probably the most diverse place I have been. I've seen people from all over the world here: Asian, Latino, European, etc.

igonvalue 2 days ago 6 replies      
The clickbait article headline is "Why Doesn't Silicon Valley Hire Black Coders?"

The answer is buried three quarters of the way into the article:

> When they started interviewing seniors, companies found, as Pratt did at Howard, that many were underprepared. They hadn't been exposed to programming before college and had gaps in their college classes.

So why isn't the article titled, "Why Aren't Enough Black Coders Prepared for Silicon Valley"?

mberning 2 days ago 2 replies      
Is it that they don't hire black coders? Or is it that there are very few black coders to begin with? African Americans make up 13% of the US population, and they graduate college at a lower rate than other ethnic groups.

It would also be interesting to look at selected majors across ethnic groups. I suspect that blacks go into CS at a lower rate than other ethnic groups.

carbocation 2 days ago 1 reply      
Looking at the commentary at Hacker News, I would say that the author has done a disservice to the topic by ignoring the minority status of East Asians, South Asians, and Latinos.

With very minor changes, they could have used the correct words to cover the topic they really wanted to cover: that there are fewer black programmers in SV than is desired/expected/needed. And that is a topic that deserves discussion. But because the author minimized the experiences of a huge number of other minority groups rather than focusing on the concerns at hand, we are now squabbling about essentially irrelevant material.

95% of the article could remain intact. By cutting the 5% which is both fluff and offensive to other groups, the rest of the article would be much stronger.

zanek 2 days ago 2 replies      
As a human that people in America would call black/African-American, I really dislike articles like this. I understand that some black people feel like they can't relate to others, but I find the entire premise of such arguments about homogeneous workplaces and cultures completely ridiculous. The culture of 'black' people in Alabama will be very different from Howard or Washington DC; does that mean Alabama is unwelcoming?

Secondly, who cares which schools top-tier companies are targeting? If Howard is churning out software engineers so good they can't be ignored, then a) they won't need Google, et al. to hire them, and b) their skills will speak for themselves when they apply for a job.

It seems like so many people (black, white, Asian, etc.) actually buy into this socially constructed division by culture or skin color, which is completely insane to me. To me it's like dividing people into groups by eye or hair color and saying you feel unwelcomed by the blue-eyed people.

Articles like this seem to reinforce the notion that there is this 'otherness' of culture and skin color. If Google, Facebook, etc. are ignoring top-notch software engineers from Howard and other historically black colleges, that would be a problem, but I doubt that is the case. Most companies want people who can get the job done well and know their stuff, in my experience (I've worked in Silicon Valley and at Fortune 50 companies).

The article seems to repeatedly make the point that the black people at tech companies were feeling out of place while working at Google, etc., as if any Indian, Asian, or white person does not experience the same thing (someone from India will have to learn the culture of SV just like someone from Howard Univ. or some white person from Alaska). Who cares if you don't watch the same TV shows or read the same books? If anything, I think that's a good thing, as it's a starting point to learn more about something you haven't experienced. I think the most important thing is mindset and attitude going into situations like this. Curiosity and open-mindedness would do wonders for the people in the article who feel like 'others' in SV.

I don't feel like the culture of SV is as homogeneous as they are trying to project, but this 'otherness' is the real projection.

I've never been openly discriminated against, or felt like the color of my skin had anything to do with my success in Silicon Valley or on the East Coast while working at tech companies. I've found almost all people of all 'races' to only care about competence and efficiency (other than the occasional jerk or misanthrope).

travjones 2 days ago 2 replies      
I've only been to SF twice, but I have to agree that it is pretty white. Not as white as Colorado, but still. I also toured Medium and was blown away by the lack of black folks working there.

It's tough to describe the feeling, but when you're the only black person in the room, you do feel different--a little uncomfortable. However, I don't think this reflects a conscious effort to not hire blacks; rather, there are institutional and socioeconomic barriers that leave us underrepresented in tech and many other fields.

pmorici 2 days ago 0 replies      
The real story here seems less about race and more about how the CS program at Howard was mediocre (or at least didn't produce students who met Google's expectations), and how one of the professors, with a background of working for Google and attending an elite CS program, realized its deficiencies, improved it, and was able to get a bunch of students hired by Google by filling in their knowledge and experience gaps.

The school happens to be historically black but I'd be surprised if you found hiring statistics from a majority white school with a similarly ranked CS program to be substantially different.

gtk40 2 days ago 8 replies      
It's interesting that it describes Silicon Valley as being too white, when it seems like there are quite a few Asians. Even working elsewhere, a high percentage of our programmers are Asian, higher than the metro's demographics would suggest.
colmvp 2 days ago 3 replies      
There are not a lot of people of color in the Valleyand that, by itself, makes it kind of unwelcoming

Oh, I forgot, us Asians don't count when it comes to people of color or diversity.

jgalt212 2 days ago 1 reply      
How about this?

Black engineers would prefer to work in IT at a big bank with high steady pay than opt for the highly variable risk/return profile of being an engineer in SV.

And why do they do this? If you look at poverty being an overriding theme for blacks in America, even if they themselves are not poor, then one would clearly prefer a lower-risk, medium/high-reward job over a high-risk, low-or-super-high-reward job.

Now, the above only explains why black Americans underparticipate in startup culture. It says nothing about why they are underrepresented at high-paying, low-risk shops like FB, Google, YHOO, Salesforce, ORCL, etc. Unless, of course, you need to have first slugged it out at a few startups before getting a job at a bigger shop. I'd say that's maybe only true for lateral hires and not kids right out of university.

bdavisx 2 days ago 7 replies      
I would say some of the students in the article probably aren't SV material - "They'd begun studying computer science in college"? So you've been involved with computers for 4 whole years and you think you're qualified for a top-tier job? I'm sure not all of the students only started in college - I'm also not naive enough to think there isn't prejudice in SV, but didn't start until college...
donretag 2 days ago 0 replies      
I have a few minor issues with this article/approach at Howard. First of all, it makes it appear that companies like Google and Facebook are the end-all-be-all. There are more companies than the top tier. In fact, there are also companies outside Silicon Valley. Not only are they setting students up for potential failure, but they are painting a distorted view of what the industry looks like and where it is located. They are even discounting NYC, which is just a train ride away.

Also, the problem with Howard not being a top tier school applies to every school that is not in the top tier. Many do not even get the same access to recruiters that Howard does.

I also believe schools should be teaching fundamentals and theory and not be used as job training.

That said, the assimilation problem, "cultural fit", is real but often neglected. Many programs trying to fix the minority imbalance simply focus on outreach, the recruiting pipeline.

mynameishere 2 days ago 0 replies      
The slow progress reflects the knottiness of one of Silicon Valley's most persistent problems: It's too white.

It's actually too Asian. And too Jewish. That is, if you're using, you know, math, and a simpleton's understanding of demographics. If you're using contemporary ethnic racketeering, then yeah, it's too white. Even the NFL is too white.

I actually think it would be funny to see Bloomberg come out with an article demanding that fewer Asians and Jews be hired wherever they excel.

rcavezza 2 days ago 2 replies      
What surprised me from the article is that only 8 out of 10 students at Howard are black. Howard is a historically black university and I assumed the percentage would be 90%+. I did some research and found out that the latest numbers I found were 91% "Black or African American" students at Howard. http://www.collegefactual.com/colleges/howard-university/stu...

Offtopic - but Howard has an amazing marching band. They played Rutgers when I was in school (football), and my favorite part of that game was the Howard band at halftime.

numbsafari 2 days ago 0 replies      
The biggest issue I have with the article is that it presents SV as the only place you can go to be successful with a CS degree.

It very well could be that the article misrepresents the efforts of Prof. Burge and the Howard staff since the article is focused on SV.

But if SV is turning away energetic, engaged, intelligent and capable new recruits, then please send them to NYC, Seattle, Chicago, Philadelphia, Triangle Park, LA, or anywhere else where companies are looking to hire.

It might not give you a "direct impact" on SV itself, but it does get your people into good paying jobs where they can further develop their skills and experience (especially for those without a long childhood of working with computers). It's a small industry. Soon enough these graduates will be attending conferences and making an impact on this culture.

More importantly, they'll also be representatives in their local communities, helping to inspire the next generation of students who don't see themselves or their experience reflected in this industry. And perhaps that next generation will be more likely to pick up programming in middle school.

wobbleblob 2 days ago 0 replies      
This is an interesting question, and probably related to "why doesn't Silicon Valley hire female coders?" and "Why doesn't Silicon Valley hire coders over 30?"
maker1138 2 days ago 0 replies      
I have a dream where people are judged by the content of their character instead of the color of their skin.

Seriously. I'm tired of these people focusing only on skin color. Why don't we let it go, forget about skin color, and live together.

crapolasplatter 2 days ago 0 replies      
WTF. Being in the industry myself, I see a lot of non-white people in the industry.

This could be used to reinforce the thought that the people who care about and judge based on color are mostly African Americans.

"More than 20 percent of all black computer science graduates attended an historically black school, according to federal statistics - yet the Valley wasn't looking for candidates at these institutions."

Ah, news for you: they are also not looking at candidates from my community schools.

Perhaps the article should be why you shouldn't attend a non racially diverse university or a university that doesn't attract employers in your field?

rm_-rf_slash 2 days ago 1 reply      
This is a very simple problem and racism has almost nothing to do with it:

White founder has a business idea and they bring along their friends - most likely white. Those friends bring in their friends and colleagues - also most likely white - to become the executive team. The executives hire tomorrow's managers. By that time the vast majority of employees are white, and even if they work very very hard to hire black people, it will take a very very long time until there is proportional representation all the way up to the executive level. Some execs work well into their 80s, meaning that it could take more than a century until there is population-proportional diversity at any predominantly white-founded institution.

The longer a lack of diversity persists in a company's trajectory the harder it becomes to fix it. The only solution I can think of is for black people to start more companies themselves.

leaveyou 2 days ago 0 replies      
My guess would be: for the same reason it does not hire white coders.. incompetence ?
saturdaysaint 2 days ago 0 replies      
A good article with a horrible, clickbaity headline. Howard's CS department head and even the students seem fully aware that "the problem" doesn't lie in evil Silicon Valley HR departments, but in the challenges of preparing kids who haven't coded until college.
donatj 2 days ago 0 replies      
A surprisingly high number of the speakers I went to at AWS Re:Invent were Indian, close to half, and they all worked for Amazon. I found it very interesting.
adetdot 1 day ago 0 replies      
I am a Howard University alum of the Computer Science Department. My time at Howard was eye-opening. I went to Howard because I got a track scholarship and I wanted to get the "Different World" TV show experience. As a first-generation Nigerian American, there was a lot of diversity in the sense that I got to meet black people from all over the world. I even got the chance to learn about my history. Also, when I graduated, a lot of my classmates went on to work at Microsoft, Goldman Sachs, or other Fortune 500 companies. Google had IPO'ed a year earlier and wasn't really on campus. Google and Facebook would get students a couple of years after I graduated. I know a couple of those students who are doing well because they got in early at Facebook. There is a decent amount of Howard alumni at some of the tech companies. Anyways, what Dr. Burge is doing is great. His focus is to get more students working on projects and more tech companies on campus. More people in DC and the US are helping as well.
sakopov 2 days ago 0 replies      
I'm not American, but I've lived here for quite some time, and this is my observation purely from an outsider's perspective. Ask yourself: how many times do you see African Americans in a group of Asian, Latino, or white people? I think the answer is very simple - black people segregate themselves not just from white people but from people of ANY other race. The majority stay in their cliques and never try to get out, and the general consensus is "Why even bother?" I mean, the young lady in the article says she doesn't fit in, and all that goes through my mind is: how is this anyone's fault?
alvern 2 days ago 0 replies      
> Pratt also noticed that many advanced classes at Howard and other black colleges weren't as rigorous or up-to-date as they were at Carnegie Mellon or Stanford. By senior year, students risked falling behind their peers from other institutions. "I'd ask faculty members, 'Why are you teaching this course that way?'" he recalls. "And they'd say, 'Well, I've been teaching the course for 25 years.'"

That's the core issue right there. The school hasn't adapted to the technology and practices. What use would you be on day one if your coding knowledge was stuck in 1991?

systems 2 days ago 2 replies      
There are not a lot of people of color in the Valley, and that, by itself, makes it kind of unwelcoming

I don't like this attitude; I'm not sure what it should be called, but you should feel relaxed in your own skin, accept diversity, and not mind it when most of the people around you are not the same color.

Why is it unwelcoming? Maybe not what you hoped for, but why describe it so negatively?

ctstover 2 days ago 0 replies      
Free Advice:

Look, the US is filled with businesses in big cities that, while not "software companies", very much need to write software to conduct operations. Take Houston, Texas, for instance. It doesn't matter where you came from, what you look like, or who your daddy is. The game is supply and demand. If you can supply, you are in demand. If you are a native English speaker, then you are already ahead. In this country, if you are willing to move, work your ass off, and actually like programming - eventually you will reach gainful employment. Especially if you can pass a drug test. The first year? Hell no. Look for the hardest shit you can find that people with no patience think they are "too good for", and you will be filling in your experience in no time. Life is not easy or fair. If you are smart enough to do even some half-ass programming, you have been given a gift.

ajeet_dhaliwal 2 days ago 0 replies      
The article makes some interesting points about the monoculture at a lot of software companies, and especially games companies. In my experience outside Silicon Valley, it's a programming thing in general wherever you are. After many years I'm quite tired of the endless Star Wars talk, references, and T-shirts that I have to endure from colleagues. I even like the Star Wars movies, but there comes a point where I think we could surely shut up about it for one day. However, I have to endure it; I've worked at numerous companies and it's the same thing all over the place, the same blah monoculture. I'd love to work with some of the people in this article.
davidf18 2 days ago 0 replies      
I started programming at a top computing university where my father was on the faculty while still in high school. I was working with other high school students and college students who were very passionate about programming and computer hardware. I remember working 80 hour work weeks during summers and breaks learning an enormous amount from fellow students as well as faculty and researchers.

Generally, if one wants to get into computing in a highly competitive environment, they should attend a top computing university. Fortunately there are top schools that are public as well as private.

I rarely see blacks (or Hispanics) at computer Meetups in NYC. For that matter, at many computer Meetups, there aren't so many women either.

matt_morgan 2 days ago 0 replies      
One company's "culture" is another company's Old Boy Network.
thegayngler 2 days ago 0 replies      
I'm a gay black guy who went to SMU on a scholarship and majored in finance. Coding is my hobby. The New York Times hired me to write code for them; I worked there 2.5 years. We are in tech already. Silicon Valley needs to pick up those of us who are already here. Some of us are more than willing and able to work with Silicon Valley.
Kinnard 2 days ago 1 reply      
Wonder why this isn't on the front page . . . a simple points/time since posted doesn't add up to me . . . IMHO HN is too opaque.
nickthemagicman 2 days ago 0 replies      
I think it's similar to the issue of women in comp sci. It's a recruiting and culture problem in the whole industry. African Americans have more barriers than women, imo, because statistically they have issues of poverty and less early-age tech exposure on TOP of the culture mismatch.
blisterpeanuts 2 days ago 7 replies      
What a big steaming pile of political correctness. The author interviews a tiny handful of mediocre college students who blame their "color" for not getting hired right out of school by big, glamorous tech companies. Are we supposed to be sympathetic?

I remember my own struggles to break into the tech business, many years ago now. Although white and "privileged", i.e. no cultural barriers to entry, I found it very tough and had to jump through hoops, work my way up from semi-tech to actual development positions. I took night school courses on a credit card and got into debt. I bought whatever gear I could afford and stayed up until 3am writing code, then got up and went to my menial job.

The opportunities didn't just fall in my lap; I had to earn them. No glamorous technology titans came knocking on my door, begging me to come interview. I had to work for everything I got, and God, it was hard. It still is.

This same work ethic applied to everyone; I was on the chatboards in the late 80s, all through the 90s, and the 2000s, and the story is always the same. You have to have the right stuff if you want to build a career in technology -- be smart, creative, have some initiative, humility, humor, etc.

So maybe Black Americans don't get that in their upbringing. Maybe they're not taught to be smart, competitive, hard charging over achievers. Maybe they're not encouraged to be creative, to think outside the box, etc. I don't know. What I do know is, you can't compensate for that by handing people undeserved opportunities.

Affirmative action is a failure; it's nothing but a form of welfare. If Google reaches out and hires under qualified people from Howard or wherever, just to say it's trying to overcome "barriers" and achieve "diversity", that's all doublespeak that in the end means "We will hire a few token blacks because we have extra money. It will make us feel good, and it will fool them into thinking they made it. Whatever. We have to do it."

solaris_2 2 days ago 0 replies      
Silicon Valley is not interested in hiring black programmers because they simply do not want to.

The easiest way to prove this is by looking beyond the software engineering field. Why do big SV companies like Facebook, Google or LinkedIn not have black non-engineering staff?

Are capable black accountants, project managers, lawyers, support staff non-existent too?

rezistik 2 days ago 2 replies      
The issue I take with this is that American assimilation is a two-way street. Every culture puts in and takes out. Tacos, sausages, pizza, sushi: these are American foods as much as they are Mexican, German, Japanese, or Italian, some of them more American than their sources in ways. It's not about forcing White Anglo-Saxon culture. It's about forging American culture and identity.
tosseraccount 2 days ago 1 reply      
Silicon Valley would rather hire foreigners than hire black Americans.

http://www.nytimes.com/2015/09/04/technology/silicon-valley-... ...

"Google revealed that its tech work force was 1 percent black, compared with 60 percent white. Yahoo disclosed in July that African-Americans made up 1 percent of its tech workers while Hispanics were 3 percent."

Affirmative Action has worked in other industries.

Why does it fail in Silicon Valley?

umanwizard 2 days ago 1 reply      
Everyone is the direct descendant of Africans. That's not what "Black" means in the sense it's being used. The people you are talking about are culturally Dravidian Indian, not culturally African-American.
dang 2 days ago 2 replies      
> Did you read the article?

Please don't do this here. The HN guidelines specifically ask you not to: https://news.ycombinator.com/newsguidelines.html.

We detached this subthread from https://news.ycombinator.com/item?id=10945107 and marked it off-topic.

dang 2 days ago 1 reply      
We've banned this account for repeatedly breaking the HN guidelines.
dang 2 days ago 1 reply      
We've banned this account. When a new account shows up and breaks the HN guidelines this badly, we ban it.
devalier 2 days ago 2 replies      
Computer programming is one of the most cognitively demanding professions in existence. Ability to program correlates pretty highly with cognitive test scores, such as the SAT Math. If you look at the top scorers on such tests in America, only about 1% are black. The ratio of black engineers in Silicon Valley matches what you would expect based on the test scores.

To see it visually, this is the bell curve based on millions of test results: http://i.imgur.com/zB1oENS.png?1 There are very simply very few black people in the far-right portion. This is not even a disputed fact (the dispute is mainly over why the curve is skewed and if it can be fixed; the existence of the skew is incontrovertible).

This was at least partly because of the way companies recruited: From 2001 to 2009, more than 20 percent of all black computer science graduates attended an historically black school, according to federal statistics, yet the Valley wasn't looking for candidates at these institutions.

The average SAT scores at Howard are thoroughly mediocre, on par with second-tier state colleges, and you would not expect an elite company to concentrate on Howard any more than you would expect it to concentrate on Southern Illinois University or the like. The only reason an elite company would recruit at Howard is for diversity reasons.

Foreman is strong-willed, which sometimes gets him in trouble. "I just chalked it up to soft skills, I guess," he says, explaining that he and his interviewer had clashed. Pratt says he'd been furious to learn that Foreman had been passed over. Other companies said no, too.

So was he a good programmer or not? How do we, the readers, know that he could do the job and was passed up unfairly?

The phenomenon, stereotype threat, is getting more attention in the Valley, and companies have begun training employees to be aware of it.

The idea of stereotype threat is extremely dubious - http://isteve.blogspot.com/2012/10/john-list-on-virtual-none...

She doesn't fit the profile of what people think of when they think of engineers. Even though people think of Silicon Valley as a big meritocracy, I don't think that's how it works.

There are now a number of companies that do automated programming interviews -- Starfighter, Hacker Rank, etc. Do these manage to overcome stereotype threat? Do these blind interviews allow through more African-Americans? Before throwing around slanderous accusations, one should actually show that Silicon Valley is treating people with the same programming ability differently.

The sad thing is that these tech companies cannot just admit, "We don't recruit at Howard because the SAT scores are not there." Rather these companies have to pretend that anyone can be a great programmer if they just put in the work, and a lot of people end up with false hopes that only get crushed.

StripeNoGood 2 days ago 0 replies      
And my question is - why are there so few white players in NBA? This is racist! Protest that!
isnullorempty 2 days ago 1 reply      
The same reason as there were no Black people nominated for Oscars.
isnullorempty 2 days ago 1 reply      
Look at every notable tech company ever, do you see many black people involved in the founding? Why is that?
auggierose 2 days ago 4 replies      
Let's assume there is this black guy / girl sitting in the interview room with you. You (white) are asking interview questions and you soon realise that the person you are interviewing is excellent. But something is off, you can't really put your finger on it. What do you do?
Why didn't France adopt the longbow like England did? peterleeson.com
269 points by blacksqr   ago   174 comments top 36
vkazanov 1 day ago 2 replies      
Interesting. Although this point of view might be a bit simplified, there's another example of a weapon that was both relatively cheap to make and really hard to use unless the whole society was built around the skills required for it.

Mongols! Light cavalry using composite bows was both unbelievably effective and hard to copy for everybody but steppe nomads. All Mongols were hunters, they practically lived with their bows on their horses. So the whole population could do warfare.

Meanwhile, back in those days, in both Eastern and Western Europe, warfare revolved around heavy cavalry, and one can only field so many knights. Even if somebody managed to gather an army more or less comparable to the Mongol hordes, heavier cavalry would just be meat for lighter riders making circles around them.

Besides, feudal lands never managed to be centralized enough to counter the Mongols. In medieval Rus', the need to centralize led to the rise of Moscow - and it took quite a while anyway.

zhte415 1 day ago 4 replies      

The abstract does a wonderful job. It is well worth reading.

The longbow was cheap and technically superior, but required training. The crossbow was more expensive but required less training. Rulers of England were less worried about rebellion, so it was OK to invest in training. Rulers of France/Scotland were not so happy, fearing that they would hand the people the means to overthrow them (Scotland is not in the title, but is in the article along with France).

Perhaps an analogy could be painted with companies today. Those that churn, and those that nurture skills.

junto 1 day ago 3 replies      
Kevin Hicks is a great resource on the longbow: https://www.youtube.com/watch?v=EvKJcxa8x_g

He is also very knowledgeable about all things from this period of history: https://www.youtube.com/watch?v=hUYd6pNy6QU

There is also some more detail regarding the make up of the "English" army bowmen here: http://www.bowyers.com/bowyery_longbowOrigins.php


- Bowyer: Makes the bow

- Fletcher: Makes the arrow

- Stringfellow: Makes the string

- Arrowsmith: Makes the arrowhead

wtbob 1 day ago 1 reply      
This seems a good place to post about Mad Jack Churchill, who brought his longbow to France in 1940 and shot a German with it: https://en.wikipedia.org/wiki/Jack_Churchill
gobbo 1 day ago 3 replies      
I see a possible problem with this theory: in few words, according to the authors, the French and Scots did not adopt the longbow for fear of rebellion. Still, the nobles participated in the very battles that saw them defeated by the English and their longbow. If the longbow was cheap and relatively easy to adopt, a noble aspiring to the crown would have been able to develop the technology independently of the (unstable) central government and have an even easier road to glory against his own technologically inferior king.

I admittedly did not read carefully the whole paper, but this possibility does not seem to be addressed.

mhd 1 day ago 2 replies      
The intro still assumes that the battles won by the English in the Hundred Years' War were due to the longbow (alone), which AFAIK is quite debatable (at least outside of England, where Agincourt is a bit of a national myth, even more than e.g. the Black Legend).

And never mind instability: the French also weren't as geographically isolated as the English, so it was easier for them to hire mercenaries, Genoese crossbowmen being a particular example.

The penetrative ability of the longbow is also greatly exaggerated, citing a book that did some pretty shoddy testing (flat sheets of poor quality metal used as targets, but hardened bodkins as penetrators, 10m distance, no padding).

lambda 1 day ago 4 replies      
I think the way this paper discusses the longbow as the superior weapon may obscure a key fact. Man for man, a crossbow is a superior weapon: it requires less skill to operate, has longer range, is much easier to aim, and has better penetrating power. The main advantage of a longbow is how simple and cheap it is.

Speed of reloading is another advantage the longbow has, but I think this article overstates it. While some crossbows do require using a stirrup or crank to load them, there are others that you can reload against your hips, and shoot from there, to increase your speed considerably, at some cost to accuracy. I know people who have managed to get 6 bullseyes at 20 yards on a crossbow in 30 seconds. Meanwhile, archers would not be firing at the maximum possible rate in battle; ammunition is a limited resource, and with the draw weights of warbows fatigue would set in quickly. Overall, with the archers they had and bows they had at the time, it is likely that the longbows were able to be a little faster than the crossbows, but it's not a night and day thing; and the range, accuracy, and penetrating power on the crossbows were better.

The simplicity became an advantage in a few battles, which came after substantial rainstorms that caused problems with crossbows' more complicated mechanisms. But the main advantage was how cheap and fast to produce they were; you could easily arm a large populace quite quickly. In order to take advantage of the longbow, you had to do exactly that: you needed a very large number of archers to use longbows effectively, while you needed fewer archers to be effective with crossbows. But because the longbow was cheap and simple, arming that many men was feasible.

I think the cost and simplicity of the longbows were their biggest advantages; speed was perhaps a secondary factor, but the sheer numbers were likely more important.

There is, of course, an interesting parallel here with some trends in modern military spending. The Joint Strike Fighter is a technological marvel; one of the most advanced pieces of military equipment ever. However, they are staggeringly expensive, and not actually the best dogfighters in the sky. You wonder how much more effective spending that money on more and simpler weaponry might have been.

caoilte 1 day ago 0 replies      
I still remember being blown away when I realised how institutionally organised longbow practice was, as I read Rosemary Sutcliff as a child.


4bpp 1 day ago 0 replies      
I wonder if this practice of arming citizens with easily procured long-distance weaponry, which in a less stable country would be feared as useful in a rebellion, could also be considered one of the memetic ancestors of modern American firearm culture.
camperman 1 day ago 0 replies      
Great paper. Nice to see the authors include Edward III's compulsory two-hour longbow practice after church on Sundays.
dozzie 1 day ago 4 replies      
I don't know, maybe because the bow is a peasants' weapon, and France had knights?

Western Europe had its share of stupidity, like kings leading charges instead of commanding battles on a tactical level.

brudgers 1 day ago 1 reply      
kiwi93 1 day ago 1 reply      
Surprised there's no discussion of the importance of chivalric values in the French military. French knights were so married to the idea of valor that employing yeoman infantry was seen as dishonorable. Obviously this is directly related to the political context of state security, but it's worth considering the cultural factor as well.
david-given 1 day ago 0 replies      
Asimov wrote an essay on this back in 1980, called _The Unsecret Weapon_, in the collection _The Sun Shines Bright_. Same conclusion, IIRC.

However, I've never found a copy online, which is a shame because I remember it as being one of his best.

alricb 1 day ago 0 replies      
Ah, Agincourt-fetish, one of the few fetishes you can display in public without looking too ridiculous (in the English-speaking world).

Longbows are great in open battle, yes. But the Hundred Years War was a war of sieges and raids (by the English and the great companies), and for those the stonemason is infinitely superior to the longbowman.

For the French, the winning strategy was always to avoid pitched battles and fortify river crossing points until English armies had run out of supplies, then patiently retake lost fortified places through siege.

trhway 1 day ago 1 reply      
>Yet the Hundred Years War (1337-1453) lasted longer than a hundred years, plenty of time for England's enemies to learn that their defeats were heavily influenced, if not caused, by the longbow.

Didn't England lose the Hundred Years War? At least looking at the map before and after: it lost everything on the continent, including the last remnants of the Angevin lands and Normandy, lost to France, with France emerging significantly bigger and stronger as a result of the war.

While the longbow is a nice nostalgic weapon, the crossbow is technologically more advanced, and in our civilization technology wins:

"Plate armor that could be penetrated by large crossbows, but was impenetrable by longbows, was uncommon in Europe until about 1380"

(funny that while a child I was initially making bows, yet soon switched to making crossbows - and they were interesting until I made my first single-shot handgun at the end of the 1st grade :)

alricb 1 day ago 0 replies      
A paper about the Hundred Years War that doesn't cite a single French-language source except Froissart? Not very serious.
trengrj 1 day ago 1 reply      
Makes me want to learn how to use the longbow.

It seems a shame that this skill has been lost, and it's interesting how long it took to develop.

ggillas 1 day ago 0 replies      
Explains the large number of hackathons we have these days: "...a ruler who wanted to adopt the longbow had to create and enforce a culture of archery through tournaments, financial incentives, and laws supporting longbow use to ensure sufficient numbers of archers."
ajuc 1 day ago 2 replies      
It's funny how the article quietly assumes "Scotland+France+England" = "Europe".
mcv 1 day ago 0 replies      
A historian friend claims this is bollocks. England was not significantly more stable, and the longbow was not a superweapon on its own. It was part of a system of combat, of combined arms, involving knights fighting on foot and choosing the right terrain.

And when they didn't have the right terrain, those English longbowmen also lost plenty of battles. They had some spectacular victories at Crecy and Agincourt, but they also had their fair share of losses.

Excellent weapon, but no silver bullet.

chernevik 1 day ago 0 replies      
It's an interesting paper but "politically stable" isn't the right term.

A population able to defeat the infantry technology of the time requires a different social and legal position than one that doesn't have that ability. That is, the government needs more cooperation and consent of the governed. That government is stronger than other governments, because it can kill their armies. But it is more dependent on that population and so cannot abuse it in the same manner as those other, "weaker" governments.

widdershins 1 day ago 2 replies      
I've only read the abstract, so I'm not sure if this is covered, but the longbow requires huge amounts of practice from a young age to be effective. You had to be incredibly strong just to draw the string. A crossbow, by contrast, was easier to draw, and men could be trained to use it in a far shorter time. English Kings had constant problems with procuring enough men capable of using a longbow, passing all sorts of laws banning all sports except archery etc. Perhaps the French simply couldn't find enough trained men?
rogeryu 1 day ago 1 reply      
Last summer I did a workshop with the longbow. It's real fun, and has many links to meditation, finding your center etc. I had seen it before, never liked it, but it was great.

In the end I shot at a 1.5-meter target about 100 meters away. You could barely see it, as it was lying flat on a hill. At first I could not believe I had the power to shoot that far, but it worked out. I missed it by about 12 meters, which was not bad compared with the competition that day.

DougN7 1 day ago 0 replies      
Interesting discussion, including the comments below regarding how barons didn't necessarily want an armed populace because it made it harder for them to stay in power. Does any of this sound familiar or applicable today (gun control debate)?
jdlyga 1 day ago 0 replies      
Because France was playing Orc and researched lighter throwing axes instead.
thesz 1 day ago 0 replies      
"We determined the true solution of the medieval war puzzle. The medievals had to reason just like we do. We cite no such reasoning in medieval literary sources; we just use indirect proofs that we are right."

A good reason to laugh, I guess.

msh 1 day ago 2 replies      
I fail to see the references to support the claim that France or Scotland was more politically unstable.

Also, what about the rest of the European powers? I am not certain there is support for England being the only politically stable entity in Europe during the Middle Ages.

hyperion2010 1 day ago 1 reply      
The Brits have had the advantage of a technologically superior political system for a very long time. I usually make an argument that is quite similar to this to explain their rapid and unprecedented rise to imperial splendor.
Justsignedup 1 day ago 0 replies      
tl;dr -- The longbow is worthless in most applications except the military one. The monarchy made training in bow use compulsory, so anyone recruited for the militia was already able, in some shape or form, to use the bow. They forced bow imports and kept prices very low throughout England.

The rest of the world couldn't enact such rules and thus could not make the longbow a successful military weapon. It required years of training, not something you can instill in a soldier who just got conscripted.

JoeAltmaier 1 day ago 1 reply      
Anybody going to draw a parallel between this, and modern gun-control efforts? Or is that too contentious to be helpful in illuminating this centuries-old issue.
wuschel 1 day ago 1 reply      
The introduction is an interesting read - and makes very good points. Perhaps yew was also too hard to come by / too expensive for the Scottish?

Also, it makes sense that training long-term military personnel was reserved to the ruling feudal class. Still, producing a longbow-compatible population (of strong, loyal men) might have had other costs than political ones. Precision-wise, a longbow is not a real tournament weapon - and you needed tall, strong men to wield it.

Agathos 1 day ago 2 replies      
It's nice to see the conventional wisdom confirmed for once, but... isn't this the conventional wisdom?
encoderer 1 day ago 0 replies      
If anybody wants to go more in depth on this I suggest A Distant Mirror. Brilliant work.
GarvielLoken 1 day ago 0 replies      
I thought this was going to be about the AH-64D Apache Longbow.
SFjulie1 1 day ago 0 replies      
Plain BS.

I learnt "canne d'arme" and "baton d'arme", the fencing of the i-gnobles.

From feudalism to absolute monarchy, the rise of the monarchy was made at the cost of "Jacqueries": peasant revolts of the non-nobles, the "ignobles" in Latin-derived French.

The central control brought by the Carolingians and then the Bourbons resulted in strong traditions: knights and the nobility were also a force to squash revolts.

This, and the dissolution of Lances into "regular armies" after the Azincourt defeat (longbows involved), was used to cut the fraternity at arms between members of a feudal domain. (Lances were like organic units of versatile men at arms doing their best to bring everyone home alive, the local lord included.)

The strength of the knight was enforced, as in feudal Japan, by preventing the crowd from gaining power.

For this, metal was considered the weapon of knights only.

Which means that when invoking the old Frankish laws over something as rude as sullying a woman in a church outside the accepted "traditions", the divine judgement that could be called for was ... a duel.

Needless to say, peasants were not authorized to have metal ... officially.

So with all the Jacqueries going on, you don't really want the peasants getting weird ideas about efficient wooden weapons.

And still, monarchy was a vast joke in that era: cousins of the royal families were lending each other money and were often tied by blood.

England had no interest in destroying French society.

French kings had no real interest in defeating England. They were mainly aiming to weaken the local suzerains: the feudal lords.

Of course it backfired. Louis XIV almost got killed during the "Fronde".

The New York Times Introduces a Web Site (1996) nytimes.com
214 points by danso   ago   93 comments top 15
danso 1 day ago 5 replies      
The oldest snapshot on Archive.org is from November 1996: http://web.archive.org/web/19961112181513/http://www.nytimes...

Back then, they even had a low-bandwidth version of the site: http://web.archive.org/web/19990117023050/http://www.nytimes...

The website included various tutorials on how to use it, including a guide that covers the different browsers. None of the browsers listed are actively developed today: http://web.archive.org/web/19961112182937/http://www.nytimes...

edit: A couple of other observations:

- How many other content websites have published for nearly this long and yet have their oldest articles remain on their original URLs? Most news sites can't even do a redesign without breaking all of their old article URLs.

- I like this Spiderbites feature -- a sitemap of a year's worth of articles (likely for webcrawlers at the time): http://spiderbites.nytimes.com/free_1996/

morgante 1 day ago 5 replies      
It's pretty impressive how good a job the NYT has done of maintaining historical articles.

You can even find quite old articles from key historical times and they're presented just like articles today. For example, the famous Crittenden Compromise is at http://www.nytimes.com/1861/02/06/news/the-crittenden-compro...

esaym 1 day ago 0 replies      
And thanks to them, we have the world's greatest Perl debugger/profiler: https://metacpan.org/pod/Devel::NYTProf
aaronbrethorst 1 day ago 2 replies      

 The electronic newspaper (address: http:/www.nytimes.com)
hilarious, but then again, the colon-double slash still isn't clear to most people.

jedberg 1 day ago 1 reply      
It's funny because this is the digital version of an article that was printed in a newspaper about that newspaper going digital.

If only they had any idea of the pain they were about to cause themselves. :)

ChrisArchitect 1 day ago 0 replies      
briantmaurer 1 day ago 1 reply      
I would love to see the complete evolution of the home page.
Haul4ss 1 day ago 0 replies      
> "The market is booming for newspapers on the World Wide Web," Mr. Kelsey said.

Not anymore. :)

jcoffland 22 hours ago 0 replies      
I find it interesting that they explicitly excluded reporting that appears in the newspaper. They were on the web early, but with caution.

> The New York Times on the Web, as the electronic publication is known, contains ..., reporting that does not appear in the newspaper, ..."

mc32 1 day ago 3 replies      
My impression is the Mercury News from San Jose had an earlier, if paid, presence.

Funny how the NYT wanted to charge nearly two dollars to let you print older articles. Asking people to pay for its own digitization.

spacefight 1 day ago 0 replies      
Who had their first website online back then in 1996 as well?

raises hand

Good memories... Claris Home Page!

hackuser 1 day ago 5 replies      
After 20 years, they still are adapting to the 'new' platform. Look through their site with fresh eyes: If you were designing a news website (rather than moving a newspaper to this new platform), how many design, UI and functionality choices would you make differently?

A quick start:

* The separation of different forms of content: They don't really mix text with video, images and graphics, even though most web-native bloggers will do it. They seem to lack fluency with mixing media; it's a project for them. They'll staple on a video and decorate text with images and graphics, but they don't really communicate with it; they don't say, 'here's how Clinton responded to Sanders:' <video>, or, 'here was the scene when the earthquake struck' <video>, or even, in a movie review, here's what the scene looks like: <video> or <image>. Instead, they try to describe the visual with text. Even explanatory graphics are a separate, special production, on a separate page.

* The font in their title: Back when printing fancy fonts was a technological feat, this font communicated that they were serious and sophisticated. Now, if you step back and ignore the history, it looks like a kid playing with fonts. (Look at it this way: would you ever use that font on a website you were designing?). It says, insists even: We're anchored to the paper age and will never let go. We're the old, dying generation. If you want something new, go elsewhere.

* The discoverability of content: Obviously mimicking a newspaper, but a bad choice for the web. How many links are on that home page (scroll down)? And even more content doesn't even appear there. All that hard work and content, unlikely ever to be found, buried and lost forever. It's tragic. But that's what they did in the hard copy newspaper so I guess it's ok.

* Also, where are stories updated since I visited a couple hours ago? Oh look, if I look at every link a red 'updated' indicator is next to some links (just like the web 20 years ago!), which I see if I examine every one of them (and how do I identify brand new links in this massive page of links?) - but where in this multi-page story are the new parts? I guess I'll just re-read the whole thing.

I say this all out of love. They are a very important institution. The news business is hard enough; stop handicapping yourselves! From the outside they look like they still, in 2015, haven't fully embraced the new technology. What would you say about another business' web team (that was not adapting a newspaper to the web) that produced a site that looked like this? Egads. [1]

EDIT: Some minor edits and additions

[1] I'm not blaming the web developers; I assume they are working within the general constraint of: Make it look like the newspaper.

briandear 1 day ago 0 replies      
"..the Web as being similar to our traditional print role -- to act as a thoughtful, unbiased filter and to provide our customers with information they need and can trust."

Unbiased? Some quality reporting to be sure, especially when politics aren't involved, but they jumped the bias shark a long time ago.

plg 1 day ago 1 reply      
come on NYTimes, update the iOS app for iPad Pro --- it currently is gigantic (I presume because the screen design is just a scaled-up version of the iPad Air)
VeilEm 1 day ago 4 replies      
I like to call the incognito window my nytimes reader. I paid for the nytimes for a bit but it costs more per week than a monthly netflix subscription and it feels stupid to pay for not knowing how to use the incognito window. It's like a "I don't know how to use software" tax.
NSA Chief Stakes Out Pro-Encryption Position, in Contrast to FBI theintercept.com
226 points by sinak   ago   78 comments top 16
exelius 1 day ago 8 replies      
The NSA chief should be pro-encryption: the presence of backdoors in encryption (as demanded by some US law enforcement officials) creates a national security threat. Period.

If law enforcement needs access to encrypted data, they already have a few different ways. They can subpoena the data and throw the person who controls the key in jail until they release it, or they can just brute-force the encryption in cases of extreme national interest (it's too expensive to do for run-of-the-mill crime, but they have the capability if they really need it).

IMO the entire goal of encryption tech should be to make the government incur significant costs for every invasion of privacy they feel they have to perform. That way, they have the power to invade our privacy (and I don't think we as a populace can really stop them from having that power) but it's so expensive / cumbersome to use they really only use it in extreme cases. I'm fine with privacy being broken by the government on a case-by-case basis; the danger is when the government does a dragnet on everyone.

micah94 1 day ago 2 replies      
I would hope so. It's kind of their job. They've also published guides for strong encryption and best practices for operating systems for years. You dismiss their wealth of knowledge at your peril.

The FBI just wants to throw you in jail. What do they publish? Lists of people they want to throw in jail. Anything that stands in their way of throwing you in jail is bad, including your encrypted phone.

nickpsecurity 1 day ago 0 replies      
NSA chief can easily be pro encryption in public while breaking, subverting, and bypassing it in private as they always have. So, it's a smart position from a political standpoint. Action movie equivalent of being perceived as James Bond while pulling off super-villainy at the same time. Win win!
laotzu 1 day ago 0 replies      
If the number one security vulnerability is the human element, then shifting a system's security from autonomous unbiased code to the human element is necessarily decreasing security.

Advocates for such are either deliberately malicious or grossly negligent.

jostmey 1 day ago 2 replies      
A play for political power. The NSA probably doesn't care if data is encrypted - they most likely already have a back door to take data before it's encrypted or after it's decrypted. So if encryption is used, the FBI will be dependent on the NSA.
barkingcat 1 day ago 1 reply      
They are pro-encryption because they can already crack the algorithms (or already have funding to brute-force them). Of course they'd be pro-encryption. It's their job. They are also a state actor.
o0o0_ooo 1 day ago 0 replies      
Wasn't this the whole situation with Dual_EC_DRBG? As far as I understand (which may not be that far when it comes to cryptography, admittedly), the NSA has already been caught intentionally weakening cryptographic standards via its influence over the NIST and by paying RSA.


RSA makes Dual_EC_DRBG the default CSPRNG in BSAFE. In 2013, Reuters reports this is a result of a secret $10 million deal with NSA.

According to the New York Times story, the NSA spends $250 million per year to insert backdoors in software and hardware as part of the Bullrun program.

JustSomeNobody 1 day ago 0 replies      
I am glad to hear someone making sense. All this handwavy let's just outlaw encryption babble is getting old. There are serious consequences to outlawing encryption.

Oh, and if terrorists are hell bent on attacking, they will do so with or without encryption. And no amount of data collection is going to stop them if they plan well enough.

mariodiana 1 day ago 0 replies      
There are two main points in this debate. Number one, we cannot abide having encryption weakened with back doors. Modern society relies on strong encryption. Number two, no amount of "magical technology" is ever going to replace human intelligence. The front lines in the war on terror is made up of human infiltrators and turncoats, not ones and zeroes.
nickik 1 day ago 0 replies      
He is most definitely not pro encryption. He is just against legal access by other agencies. He wants the NSA to have a backdoor into every possible crypto system and make them the organization every else has to come to for their data.
Golddisk 1 day ago 0 replies      
Not particularly surprising, the government has ways to get around the encryption anyways, it just takes longer than if they had the backdoors to go through.
multinglets 1 day ago 0 replies      
wait so which department is the FBI under and which department is the NSA under again?

... ah nevermind, I'm sure that doesn't mean anything.

kriro 1 day ago 1 reply      
Guess we know which agency has the working quantum computers ;P
njharman 1 day ago 0 replies      
Probably because the NSA can break most encryption.
jlarocco 1 day ago 0 replies      
I'm not sure this means much.

Several popular encryption schemes have been developed by or heavily influenced by the NSA (including algorithms mandated by FIPS and other government organizations), and there has been a lot of speculation that they added backdoors to AES and other algorithms.

So in reality they've had the ability to add backdoors all along, and it's in their best interest to keep it a secret whether they've added one, so it makes complete sense that their chief would say this.

macawfish 1 day ago 1 reply      
maybe that's cause the NSA has secret algorithms to factor large numbers with quantum computers 0_0


90% of the billion dollar unicorn startups are in trouble businessinsider.com
228 points by elfalfa   ago   230 comments top 33
itssometoneelse 2 days ago 6 replies      
"This CEO said the Valley used to be a place of "quirky people" but was now filled with "arrogant" people"

Valuations and market corrections aside, this comment is the thing that resonates with me the most. I got into tech and the Internet as a kid in the early 90s because of all the cool, intelligent weirdos thinking about and building the future. I've been traveling out to SF from NYC for about a decade now to do work, and it's been sad to watch that city go from the one place besides NYC I thought I could live to a city I try to avoid. It feels like it's getting harder and harder to find those awesome weirdo hackers. The homogeneity is brutal in SF.

The one upside is that it's still the Internet and I don't have to be there physically to enjoy the parts of it I like.

kolbe 2 days ago 1 reply      
I'm in Chicago, far away from the action in SV. Lately, my friends and I have been getting unsolicited calls to participate in the next rounds of funding for various "unicorns." We are mostly just laughing, because it's fairly obvious that we are only getting these calls now, after all these years of unbelievable[1] tech returns, because SV thinks that backwards and unprogressive midwesterners like us are going to be the goats buying the top of their grand pump-n-dump scheme.

[1] as in, not believable

snowwolf 2 days ago 2 replies      
I still don't understand why tech media (and the rest of the industry) are still not calling out these silly valuations for what they are - marketing spin.

With term sheets the way they are, with liquidation preferences, if I as a VC invest $100 million in a startup for a 10% stake at a 1x liquidation preference, the valuation I'm placing on that startup is $100 million, and that is therefore what its reported valuation should be.

No need for any repricing of these startups - just report their true valuation as the largest amount invested with liquidation preferences.

Edit: Just an additional point to add - if I really valued it at $1 billion I wouldn't need the liquidation preference.
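
The payoff mechanics behind this point can be sketched quickly (a minimal toy model, assuming a single non-participating preferred investor with a 1x preference; the function name and figures are illustrative, not from the article):

```python
def investor_payout(exit_value, invested, ownership, pref_multiple=1.0):
    """Payout to a non-participating preferred investor at exit.

    The investor takes the greater of:
      - the liquidation preference (pref_multiple * invested),
        capped at the total exit value, or
      - converting to common stock and taking their ownership share.
    """
    preference = min(pref_multiple * invested, exit_value)
    as_common = ownership * exit_value
    return max(preference, as_common)
```

With $100 million in for a 10% stake at a 1x preference, the investor is made whole at any exit above $100 million, and only does better than that once the exit clears $1 billion - which is why the preference, not the headline valuation, reflects what the VC actually paid for.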

code4tee 2 days ago 9 replies      
The worst kept secret in Silicon Valley is that 90+% of the so called 'unicorns' are just donkeys with a plastic cone on their head.

It long since stopped being about innovation and disrupting markets and became mostly about shuffling piles of imaginary units around so a select few could get rich. Sadly, the pawns in this whole game will be all the employees holding options that are soon to be worthless... and who were convinced to take those options instead of a proper cash salary.

The only good thing this time around is that (unlike the last big Silicon Valley implosion) most of these companies are still private. So it will be really messy for some private investors and the greater San Francisco area is going to have a mess on its hands, but the broader US economy isn't going to get impacted as much. The stock market of 2015 also isn't propped up by bloated tech stocks in the way it was in 1999-2000.

VeilEm 2 days ago 3 replies      
This is a popular idea on HN and just gets upvoted because people want to agree with this and it makes them happy for some reason, but this article is not a good article. It's basically just a quote. I think any kind of the-sky-is-falling article has the chance to get good traction on HN lately without offering anything meaningful. The discussions in this thread are pretty poor and the article is poor.
jknightco 2 days ago 0 replies      
This is one of the most blatant cases of spin and justification I've ever read. VCs put "ratchets and downturn protections" in order to satiate founder greed? Ha! More like VCs waved huge valuations in front of founders and said "don't worry about the terms, this is good for everyone."

If nothing else this article does a good job of demonstrating why it's important to always check your sources and their biases.

robbyking 2 days ago 0 replies      
I was a Web Developer during the first dot-com boom, and now that I work as an iOS engineer during this boom, I'm shocked at how many of the same mistakes are being repeated.

On almost a daily basis I get recruiters contacting me with interview offers from companies who have no chance of surviving past the end of the year. Their messages are often accompanied with bravado about their company's VC backers' other, more successful projects, which only makes me more skeptical.

colindean 2 days ago 0 replies      
I've always envisioned most tech businesses in general as an elephant standing on a board with a bunch of mice underneath it. The mice keep the elephant moving, but if the mice aren't continuously fed, they slowly die off. When enough mice die, the rest can't support the elephant and it thus squishes them. The elephant is shot shortly thereafter because it stopped moving. The mice that left sometimes come back to feed on the elephant carcass.
chiph 2 days ago 6 replies      
If there is a tech bubble that bursts, I don't see how much of middle America would be affected. "So, the people in SV can't get 1-hour grocery delivery any more. How does that affect me again?"
gphil 2 days ago 2 replies      
> There are about 144 unicorns right now. If only 10% break out, that's only 14 companies that will really make it.

Doesn't this ratio seem about right for any basket of unprofitable (or even zero-revenue) high-growth companies regardless of valuation? If those 14 winner companies average greater than a 10x return then everything pans out as expected--lots of risky investments together produce a reliable if more modest return on investment.

It seems like the only abnormal aspect is the size of the valuations, but that might be just what happens in a low interest rate environment--too much money chasing too few deals. Whether this affects this success rate of these investments remains to be seen I guess.
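
The back-of-the-envelope math in this reasoning can be sketched as a toy model (assuming equal-sized bets and total loss on the non-winners; the 144 / 10% / 10x figures come from the thread):

```python
def portfolio_multiple(n_companies, win_rate, winner_multiple, loser_multiple=0.0):
    """Blended return multiple on a basket of equal-sized bets.

    A win_rate fraction of the companies returns winner_multiple
    times the money in; the rest return loser_multiple
    (default: total loss).
    """
    winners = n_companies * win_rate
    losers = n_companies - winners
    return (winners * winner_multiple + losers * loser_multiple) / n_companies
```

With 144 companies, a 10% hit rate, and 10x winners, the blended multiple is 1.0x: the portfolio merely returns its capital, which is why the winners have to average well above 10x for the math to pan out.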

riggins 2 days ago 1 reply      
You have to keep in mind the source. Let me put it this way. A VC's job is to buy X (where X is equity in a startup). Of course every VC would love for X to go on 50% sale ... or even better 75% sale.

You saw this with hedge fund managers and the stock market as well. Lots of hedge fund managers went on and on about how irresponsible Bernanke was because he kept interest rates low which raised asset prices.

To be clear, I don't think this is a nefarious or even conscious process. However, I think if someone really wants a particular scenario it tends to color their thinking.

Also the actual claim made isn't as sensational as the headline. Just says that 90% might take a lower valuation. All that requires is a general market decline.

Anyway, take a look at the list of unicorns.


area51org 2 days ago 1 reply      
What's missing from this guy's evaluation: whether or not any of these unicorns are close to or at profitability, and what their actual market potential is.

If the businesses are based on bullshit, then he's probably right. If the valuations truly are wildly out of control (and it does seem like it), then sure, they're due for a correction.

It's also worth keeping mind that if there are 144 of these startups, 90% of them is 129. That doesn't add up to that much money in SV terms. This seems like a tempest in a teapot. People love to make headlines, it seems, with "OMG bubble OMG!"

criddell 2 days ago 0 replies      
> 90% of the startups will be repriced or die and 10% will make it

Wasn't that inevitable though?

There's a lot of money at stake, but the number of affected people is relatively low, isn't it? I understand why HN readers are interested in this, but is it a big story outside of tech circles?

paulpauper 2 days ago 1 reply      
He says there is "blood in the water," and we are entering a 90-10 situation for the unicorn class of startups with billion-dollar valuations in which 90% of the startups will be repriced or die and 10% will make it.

Well, that's kinda how it's supposed to be. That's why expected value is more important. A few $200+ billion Facebooks and Googles can compensate for a lot of smaller $1 billion failures.

ndirish1842 2 days ago 1 reply      
I really wish business insider would post an article that refrains from wild speculation and provides actual evidence to support their outlandish claims.
subrat_rout 2 days ago 2 replies      
The article mentions about 144 unicorns (billion-dollar startups). Can anybody point me to a source listing these companies? Curious to see who these unicorns are.
tempodox 2 days ago 2 replies      
The unicorn is not a new buzzword, but the frequency with which it has popped up over the last several days leads me to assume a unicorn inflation is underway. Put on your muck repellent and get ready for bursting bubbles full of unicorn blood.
brudgers 2 days ago 0 replies      
Being about startups, "Only" might be appropriate.
AndrewKemendo 2 days ago 1 reply      
90% of the startups will be repriced or die and 10% will make it

I thought this was always the assumption.

Is it because they are already valued at $1B+ that this thesis should change? I don't see why that should be the case...

DubiousPusher 2 days ago 1 reply      
Does this belong on HN? It seems like clickbait speculation without any real substance. Isn't that the opposite of what this place is about?
ruddct 2 days ago 2 replies      
Serious question (though, honestly, one I likely won't act on): How might someone short the unicorns?
paulpauper 2 days ago 0 replies      
There's a bubble in bubble predictions. Everyone wants their 'I told you so' fame, for some reason. I guess it's human nature to want to be right or to see the overly successful stumble back to earth.
wrong_variable 2 days ago 3 replies      
This is a classic example of Simpson's paradox.

Markets across the board are doing pretty badly this month. It would be interesting to see how badly tech is doing relative to oil futures and other commodities.

fourpac 2 days ago 0 replies      
I wonder how much the HBO show Silicon Valley has affected the way people are looking at startup funding. It's finally been exposed to a mass audience.
anonbanker 2 days ago 0 replies      
We're not allowed to call it a bubble yet, right?
tylerpachal 2 days ago 3 replies      
> There are about 144 unicorns right now.

I didn't realize that there are this many unicorns. Or is this a typo?

nickthemagicman 2 days ago 0 replies      
Blood in the water. I'm proud of your melodrama.
frik 2 days ago 0 replies      
Are Unicorns right after their IPO in trouble too?
unknownzero 2 days ago 0 replies      
This link 404s for me, at least on mobile, huh.
desireco42 2 days ago 0 replies      
Is this because Unicorns aren't real :)?
acd 2 days ago 0 replies      
Dotcom bubble 2.0, The unicorn bubble.
isnullorempty 2 days ago 0 replies      
Repeat of the DotCom bubble: a few startups make it big, then everyone and their aunt tries to emulate it with slight variations or solutions to problems that don't exist. This app bubble is very pregnant and about to deliver something awful.
_asdf_asdf 2 days ago 0 replies      
Okay, caps lock. You are cool
Number of legal Go positions computed tromp.github.io
205 points by tromp   ago   67 comments top 19
richard_todd 1 day ago 2 replies      
Does anyone try to count equivalence classes (after rotation or reflection) instead of raw board positions? To my mind, that would also be of interest if you want to know how many actually distinct game situations there are. I guess as a rough under-estimate you'd just divide the count by (4 rotations * 2 reflections)?
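For tiny boards both counts can be brute-forced directly. A rough sketch in Python (an illustration, not Tromp's actual method: it treats a position as legal iff every same-colored group has at least one liberty, ignores move history, and counts equivalence classes by canonicalizing under the 8 board symmetries):

```python
from itertools import product

def neighbors(i, j, n):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < n and 0 <= j + dj < n:
            yield i + di, j + dj

def is_legal(board, n):
    # legal iff every maximal same-colored group touches an empty point
    seen = set()
    for i in range(n):
        for j in range(n):
            if board[i][j] == 0 or (i, j) in seen:
                continue
            color, stack, group, liberty = board[i][j], [(i, j)], set(), False
            while stack:
                p = stack.pop()
                if p in group:
                    continue
                group.add(p)
                for q in neighbors(p[0], p[1], n):
                    v = board[q[0]][q[1]]
                    if v == 0:
                        liberty = True
                    elif v == color:
                        stack.append(q)
            if not liberty:
                return False
            seen |= group
    return True

def transforms(board):
    # the 8 symmetries of the square: 4 rotations, each plus a reflection
    b = board
    for _ in range(4):
        b = tuple(zip(*b[::-1]))  # rotate 90 degrees clockwise
        yield b
        yield b[::-1]             # reflected copy

def count(n):
    legal, classes = 0, set()
    for cells in product((0, 1, 2), repeat=n * n):
        board = tuple(tuple(cells[r * n:(r + 1) * n]) for r in range(n))
        if is_legal(board, n):
            legal += 1
            classes.add(min(transforms(board)))  # canonical representative
    return legal, len(classes)
```

For 1x1, 2x2 and 3x3 this reproduces the published counts of 1, 57 and 12,675 legal positions, and dividing by 8 is indeed only a lower bound on the class count, since symmetric positions are counted fewer than 8 times.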
Fargren 1 day ago 1 reply      
I played very little Go a very long time ago, so I don't know this: could there be any legal positions that aren't reachable by any legal moves?
daveloyall 1 day ago 0 replies      
A note for the author and new readers:

L19 means "the number of legal positions for a 19x19 board".

cf. L18 which means "the number of legal positions for an 18x18 board".

L19 does NOT mean position L:19 on the Go board. :)

Alex3917 1 day ago 7 replies      
The fact that the number of valid positions, written in base 3, fits exactly on a 19 x 19 board is wild. You'd have to be almost dan-level to immediately recognize that the pic above isn't actually a real game.
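Any number below 3^361 can be drawn this way: read its base-3 digits onto the 361 points of the board. A small sketch of that encoding (the digit-to-stone mapping 0 = empty, 1 = black, 2 = white is an assumption about how the linked page's picture is constructed, not taken from it):

```python
def to_board(number, n, symbols=".XO"):
    """Render a non-negative integer as an n x n Go diagram by reading its
    base-3 digits (0 = empty, 1 = black, 2 = white), padded with leading
    empty points to fill the board."""
    digits = []
    for _ in range(n * n):
        number, d = divmod(number, 3)
        digits.append(d)
    if number:
        raise ValueError("number has more than n*n base-3 digits")
    digits.reverse()  # most significant digit in the top-left corner
    return "\n".join(
        "".join(symbols[d] for d in digits[r * n:(r + 1) * n])
        for r in range(n)
    )

def from_board(diagram, symbols=".XO"):
    """Inverse of to_board: read a diagram back into the integer it encodes."""
    value = 0
    for ch in diagram:
        if ch in symbols:
            value = value * 3 + symbols.index(ch)
    return value
```

Since almost no integer's digit pattern obeys Go's capture rules, the resulting diagram is overwhelmingly likely to contain dead stones, which is why the picture doesn't look like a real game.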
tel 1 day ago 1 reply      
As I remarked elsewhere, what was more interesting to me was that the 2x2 board has hundreds of billions of games (assuming a superko, I suppose).

It's easy to recognize that there must be a lot of them, but hundreds of billions reflects absurdly fast growth. As another data point, the 2x1 board has 8 games.

sago 1 day ago 3 replies      
There are billions of valid games on a 2x2 board? Can anyone explain or link to something that explains how this is possible?
tromp 1 day ago 1 reply      
ultramancool 1 day ago 2 replies      
To make sense of big numbers like this where any state is valid, I find it good to compare with cryptographic key sizes, so in case anyone else is wondering:

log_2(L19) = 565 bits

tel 7 hours ago 0 replies      
Maybe it's a legal position if the numbers spiral out from the center.

Or, more precisely, I'd be okay with any sort of "nice" arrangement of digits which made it work.

hendekagon 1 day ago 0 replies      
Tromp's website is a treasure-trove of really cool things
33a 1 day ago 1 reply      
Does this take into account the super ko rule? If so, seems a bit small.

EDIT: Never mind, no it doesn't.

schoen 1 day ago 1 reply      
Neat, is this sequence in OEIS?
joeyh 1 day ago 2 replies      
I wonder what's the smallest Go board such that the number of legal positions is a legal position?

(Not expecting an answer anytime soon.)

pavel_lishin 1 day ago 1 reply      
Can someone explain why factoring L19 is relevant?
tetraodonpuffer 1 day ago 1 reply      
As a benchmark, it seems it would be interesting to port this to Java/Scala and run it on a Spark cluster. Since the post describes it as map-reduce (I didn't look at the code), it should be possible, I would think.
horsecaptin 1 day ago 4 replies      
Does this mean that a computer will soon be beating people at Go?
cinquemb 1 day ago 0 replies      
Although the determinant of that matrix = 0, if its conjugate transpose's determinant is non-zero then I wonder if all valid possible configurations on this board can be represented by a complex Lie group?

Edit: did the work, it does, but too lazy to describe the group https://gist.github.com/cinquemb/18e494348045725e2b60

fiatjaf 1 day ago 1 reply      
Why would anyone waste time calculating this? You may be curious, but you're not living to satisfy useless curiosity.
Trello clone with Phoenix and React 5-part tutorial diacode.com
235 points by tortilla   ago   49 comments top 9
javiercr 1 day ago 2 replies      
Diacode team here. Thank you for sharing the post. This article is part of an ongoing series of blog posts that covers the whole Trello clone / tribute that our colleague @bigardone did.

We didn't submit it to HN before because we were waiting to complete the whole thing and create a proper index for all the articles. Part 6 will be published tomorrow, and you can expect a few more articles in the next weeks.

I'd like to clarify that this is not a product, it doesn't cover all the awesome features that Trello has. It's just a learning experiment that we're sharing with the rest of the world.

Finally, to give you some background, we're a small Rails dev shop (5 guys) who work remotely. We're now playing with Elixir and Phoenix and having a lot of fun with it. I'd totally recommend any dev play with Elixir, especially if you come from a Rails background.

Kudos to our colleague @bigardone for putting all of this together.

hakanderyal 19 hours ago 1 reply      
That's a really informative tutorial series.

A tip to everyone checking the codebase (JS parts) to learn about building a Trello-like application: there aren't any optimistic UI updates in this tutorial app (e.g. after dragging a card to another list, displaying the card at the dragged position before receiving confirmation from the server).

When you add optimistic updates to the mix with real time updates, things get much more complicated. Tracking pending updates, rolling back when something goes wrong, ordering of updates, reconciliation with server when the client is missing some updates etc.

I'm building something similar, and these have been the most time-consuming parts to build in a reliable way.
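The basic bookkeeping behind optimistic updates can be sketched roughly like this (a Python sketch, not code from the tutorial; the class and method names are made up for illustration): each optimistic update goes into a pending queue, the UI renders confirmed state plus pending updates, and a server rejection simply drops the pending entry so the view falls back to the last acknowledged state.

```python
import itertools

class OptimisticStore:
    """Minimal sketch of optimistic UI state. State is a flat dict here;
    an update is (key, new_value)."""
    def __init__(self, confirmed):
        self.confirmed = dict(confirmed)   # last state the server acknowledged
        self.pending = {}                  # update_id -> (key, new_value)
        self._ids = itertools.count()

    def view(self):
        # what the UI renders: confirmed state plus pending updates, in order
        state = dict(self.confirmed)
        for key, value in self.pending.values():
            state[key] = value
        return state

    def apply(self, key, value):
        uid = next(self._ids)
        self.pending[uid] = (key, value)   # optimistic: show it immediately
        return uid                          # caller sends uid to the server

    def ack(self, uid):
        key, value = self.pending.pop(uid)  # server accepted: promote it
        self.confirmed[key] = value

    def reject(self, uid):
        self.pending.pop(uid)               # server refused: drop it; view()
                                            # falls back to confirmed state
```

Real implementations also have to handle ordering across keys and reconciliation when the client missed server pushes, which is where, as the comment says, most of the effort goes.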

dwarner 1 day ago 3 replies      
I'm trying to work out how apps like this and Trello calculate and store card position with as little overhead as possible.

It seems obvious that every time you move a card to a column you could recalculate the order of every card; however, that has overhead, because you have to send the new order of every card to the server.

Is there a more efficient way to solve this problem? It seems to me it would be more efficient to assign a position value for only the card you are moving relative to those before and after it.

I'm sure Trello has solved this.
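One common answer (and reportedly roughly what Trello does) is exactly the commenter's second idea: give each card a fractional sort key and assign the moved card the midpoint of its new neighbors' keys, so only one record changes per move. A hedged sketch, with made-up constants:

```python
def position_between(prev, nxt):
    """Assign a sort key to a card dropped between two neighbors, touching
    only the moved card. prev/nxt are the neighbors' positions, or None at
    either end of the list."""
    if prev is None and nxt is None:
        return 65536.0                # first card in an empty list
    if prev is None:
        return nxt / 2.0              # dropped at the top
    if nxt is None:
        return prev + 65536.0         # dropped at the bottom
    return (prev + nxt) / 2.0         # midpoint between the neighbors

def needs_rebalance(prev, nxt, epsilon=1e-6):
    """Repeated midpoint insertion halves the gap each time, so eventually
    neighboring keys collide in float precision and the whole list must be
    renumbered -- the rare O(n) case."""
    return prev is not None and nxt is not None and (nxt - prev) < epsilon
```

The trade-off is that a full renumbering is still needed occasionally, but it's amortized over many cheap single-row updates.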

edwinnathaniel 20 hours ago 0 replies      
Looks really cool, except Part 2, where the ceremonious setup for the front-end part really points out the issue with JS right now. Other than that, great work!
d1ffuz0r 1 day ago 5 replies      
Why do you need Elixir for building this kind of project? Any real benefits?
Omnipresent 1 day ago 1 reply      
I'm more interested in the React part of this tutorial. If the backend was swapped with another framework, can the tutorial still be followed to learn React?
jbhatab 12 hours ago 0 replies      
Thank you so much. Been doing lots of phoenix+react and blogs like this are a tremendous help to the community.
wildmXranat 12 hours ago 0 replies      
Really nice. I wondered how an Elixir app can be designed, and this is a cool resource to use.
Y-Cloninator: GitHub Projects Trending on HN Without Distractions ycloninator.herokuapp.com
197 points by muricula   ago   39 comments top 23
toxicFork 2 days ago 1 reply      
Nice project, I'd love to be able to sort these based on things like:

 * date first mentioned on HN
 * date last mentioned on HN
 * number of links pointing to it
 * number of stars on GitHub
 * language

jedberg 2 days ago 1 reply      
This is great!

One quick piece of feedback: you should probably normalize case in searches. I searched for "python" and got nothing; I had to change it to "Python" to get it to work.
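The fix being suggested is just to normalize both sides of the comparison; a minimal sketch (the project data shape here is assumed, not taken from the site's code):

```python
def search(projects, query):
    """Case-insensitive substring match on a project's language field, so
    'python', 'Python', and 'PYTHON' all behave the same. casefold() is a
    slightly more aggressive lower() that also handles non-ASCII cases."""
    q = query.casefold()
    return [p for p in projects if q in p.get("language", "").casefold()]
```

This would also fix the "php" vs "PHP" and "c#" vs "C#" cases mentioned below.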

lobster_johnson 2 days ago 2 replies      
The newest link is this (https://news.ycombinator.com/item?id=8818244), which is 386 days old. Looks like the author hasn't been updating it for more than a year.
dpritchett 2 days ago 0 replies      
Looks cool! I just noticed the search is case sensitive - "php" returns no results while PHP returns quite a few. Same with c#/C#.


swegg 2 days ago 0 replies      
Nice! Nitpick: it would be cool to be able to click on a language to automatically search for it.
chao- 2 days ago 0 replies      
Already used this to find some fun tools, one of which I'll start using later today. Thanks!

As others have said, it would be useful to have some "max age" requirement.

brudgers 2 days ago 0 replies      
The links to HN suggest that the data set is about a year old.
sudhirj 2 days ago 1 reply      
Feed, please? Would love to get this on my daily reader.
kseistrup 2 days ago 0 replies      
It looks good!

I'd like to be able to click on a language in the language column, rather than having to type it into the search box.

Also, it would be nice if the "Read on HN" link had the posting date, preferably in ISO 8601 notation, as a tooltip.

sjs382 2 days ago 1 reply      
What does "trending" mean, in this context?

The top item I see (node.php) has 7 points, no comments, and was posted 386 days ago.

BinaryIdiot 2 days ago 1 reply      
Hmm, this is pretty cool, but I'm a little sad my project, msngr.js, isn't listed (it was on HN 344 days ago and briefly hit the front page). I'd love to see it stated what criteria are used to select projects, without me having to go through the source.
elcapitan 2 days ago 1 reply      
Is this manually curated? (the titles seem to be different than the original HN titles)
saidajigumi 2 days ago 0 replies      
Another suggestion: In addition to normalization that others have mentioned, support for searching on the most common synonyms and abbreviations would be great. E.g. "js" for "Javascript", etc.
joeax 2 days ago 0 replies      
This is pretty neat and useful. A bit disappointed though that I didn't see my GitHub project listed. I would think you could build this list with a simple Google query "site:news.ycombinator.com link:github.com"
skrowl 2 days ago 0 replies      
This is a pretty handy idea, especially for the "X version 1.0.0 released!" posts with absolutely no description of what X is. Unfortunately, it doesn't seem up to date / accurate.
lobster_johnson 2 days ago 0 replies      
An Atom feed, so I can plug this into my reader, would also be great.
MCRed 2 days ago 1 reply      
Love the idea! Alas it appears that there have been no go, elixir or "golang" projects posted, or in your index. (or maybe search isn't working.)
wiwillia 2 days ago 0 replies      
Really interesting, thank you for sharing. Discovered resdet which is something I've been searching for a long time!
JoshTriplett 2 days ago 1 reply      
Interesting idea! Would you consider extending this to cover projects not hosted on GitHub, including gitlab, bitbucket, and similar?
n00b101 2 days ago 0 replies      
This is awesome! It would also be really useful to have a column indicating which open source license the project uses.
elviejo 2 days ago 0 replies      

I would like the [Search] form to be implemented using GET, so that I can send links with search results.

vinceguidry 2 days ago 0 replies      
Can't hit the back button to go back to the project list. Makes for bad UX.

Cool concept.

mickael-kerjean 1 day ago 0 replies      
Amazing thank you!
Free the Law: all U.S. case law online harvard.edu
144 points by fitzwatermellow   ago   43 comments top 8
Animats 1 day ago 4 replies      
No, not "freely accessible online", not until the 8-year exclusivity agreement with Ravel expires. It's a pay service with a free tier.[1]

"Under the Harvard-Ravel agreement, Ravel is paying all of the costs of digitizing case law. HLS owns the resulting data, and Ravel has an obligation to offer free public access to all of the digitized case law on its site and to provide non-profit developers with free ongoing API access (Ravel may charge for-profit developers). Ravel will have a temporary exclusive commercial license for a maximum of eight years."

"For the duration of that commercial license, there will be a restriction on bulk download of the case law, with some notable exceptions. Harvard may provide bulk access to members of the Harvard community and to outside research scholars (so long as they accept contractual prohibitions on redistribution)."[2]

[1] https://www.ravellaw.com/plans

[2] http://lj.libraryjournal.com/2015/12/oa/harvard-launches-fre...

DannyBee 1 day ago 2 replies      
They have to digitize, in part, because all of the states have exclusive publication/etc agreements with westlaw or lexis or ....

So you can't get a feed of cases from pretty much anywhere, and often, you aren't allowed to bulk download, etc.

Plenty of folks have digitized all the data Harvard is talking about here. They are not first. Carl Malamud, for example, has scanned all the federal reporters and tons and tons of other cases: http://radar.oreilly.com/2007/08/carl-malamud-takes-on-westl...



(My experience here is from back in the early 2000's working on getting pacer/states/etc to open up all of this data, so we could get it into google scholar and elsewhere. Often, they were willing to sell it to us, but they would not let us pay them pretty much any amount of money to make it just open and freely available, which is what we really wanted. Things have not gotten better, sadly, and in fact, have gotten worse)

tzs 1 day ago 4 replies      
They are projecting to have Federal and CA, NY, MA, IL, TX done in 2016, and the rest of the states in 2017. I'm curious why those particular states are being done first.

In particular, I'd have expected Delaware to be in the first group, because so many public companies are incorporated there, and so the decisions of its courts on corporate and stockholder issues have major national importance.

Offhand, I can't think of why MA or TX would be worked on ahead of DE. Of course it is possible that the volume of material from each state is a factor...it could be that DE is being done in the first group but has a lot of material, so it won't finish in 2016. I've never taken a look at the volume of each state's output and so have no idea which state courts handle the most cases.

achow 1 day ago 0 replies      
Ravellaw (https://www.ravellaw.com/) has built an interesting knowledge-graph visualization tool for court cases.


So one truly fascinating aspect of legal practice is that we tend to operate in the gray areas. However, the traditional way of researching case law (reviewing a list of cases returned based on your query) does little to help you sort through the mess.

With data visualization, you not only see the cases, but you see the relationship between cases, and how the cases work together. Among the most significant benefits, the data visualization elements of Ravel Law will help you narrow your research to the most relevant cases more quickly, while also helping you find those cases and arguments that, for whatever reason, didn't rank in the top of your search.


The value in this appears to relate concepts from one case to others through the visuals on the graph. The larger the circle, the more important the case will be. Lines connect one circle to another circle and its very easy to see which major cases are connected to other major cases. This is like a citator on steroids in my opinion as one can get to this point with a simple search. That means multiple steps in developing the analysis that finds the value and use of related cases. The snippets help immensely in determining which related cases are of value.


kevin_thibedeau 1 day ago 3 replies      
I'm curious how LexisNexis is going to attack this breach of their monopoly. Do they have patents on case law search?
thinkcomp 1 day ago 2 replies      
No response.


To: Erik Eckholm <eckholm@nytimes.com>

From: Aaron Greenspan

Date: October 30, 2015 at 1:31 PM

Subject: Concerns over Ravel/HLS Deal

Mr. Eckholm,

We just briefly spoke on the phone about your article (http://www.nytimes.com/2015/10/29/us/harvard-law-library-sac...). I am a Harvard College '04-'05 alum, one of Professor Zittrain's former students (I actually had to fight the administration to be permitted entry into his Law School course in 2001), and one of the first people Ravel tried to hire, because I am a programmer and I run a legal database called PlainSite (http://www.plainsite.org), which competes with them and receives about 16,000 unique hits daily worldwide. I was also a CodeX Fellow at Stanford Law School in 2012-2013, which is a program at Stanford that Daniel Lewis and Nik Reed are now also affiliated with. I tell you all of this only to point out that I am generally quite familiar with the principles, technologies and individuals involved here.

I've now corresponded with Jonathan Zittrain and Adam Ziegler at HLS, the latter by phone earlier today. I have brought to their attention a number of concerns, none of which have been resolved in my mind. They are as follows:

1. Harvard University is a Massachusetts not-for-profit organization. Its investment in Ravel, a for-profit corporation, via its XFund venture capital arm, and its subsequent contract with Ravel to earn "proceeds" (HLS's term) from that relationship, involves profit. The University could in theory lose its tax-exempt status over this deal. This is not the same as the Harvard Management Corporation investing in for-profit corporations to further the University's mission by earning capital gains and/or dividends--this is an exchange of cash for assets that Harvard claims it owns (even though case materials are public domain) and a contractual promise to monetize those assets through a for-profit company on an ongoing basis.

2. Worse yet, the deal involves profit from the withholding of public access to legal data, which is the precise ill that this relationship is nominally supposed to and claims to cure. In reality, it only exacerbates it by legitimizing, with all of Harvard's imprimatur, the monopolistic legal information model that has dominated the nation's judiciary for the past century and a half.

3. Professor Zittrain wrote an entire book on the dangers of internet lock-in and monopolies, yet his actions here are helping to create exactly the kind of monopoly he has become well known for warning about. According to Adam Ziegler's recent post on the HLS Library blog (http://etseq.law.harvard.edu), there are to be "bulk access limitations" and "contractual prohibitions on redistribution." This is inconsistent with precedent concerning openness to court records and First Amendment law. That aside, what will these restrictions look like exactly? We don't know, because

4. ...Adam Ziegler told me that the contract with Ravel is not available for public examination and he did not know when it would be (if ever). He did read me a portion of the contract over the phone, which cited "non-commercial developers," and challenged me to come up with better wording. That's easy. I don't know what a "non-commercial developer" is, but I do know what a "non-profit organization" is. As an individual, I am a software developer who is the CEO of a for-profit corporation in a joint venture with a 501(c)(3) non-profit organization which together maintain PlainSite. Does that make me a "non-commercial developer?" Although Mr. Ziegler insisted that the contract was not subject to interpretation because it is simply clear enough already, I strongly disagree, as I expect any lawyer would. All contracts are subject to interpretation. The contract needs to be posted.

5. One of Ravels investors is Cooley LLP, a law firm in the Bay Area. Based on what Daniel and Nik have told me in the past, Cooley has early access to Ravels software. Essentially this means that Harvard Law School is giving one particular law firm an advantage, which I imagine must violate a number of its own policies, and seems wrong on the surface.

6. Professor Zittrain claims it would have taken 8 years to raise the money that Ravel is providing for this effort. This is extremely difficult to believe. Although Mr. Ziegler refused to disclose how much money is actually involved, we can safely assume it is in the $5 million range given that Ravel has only raised just under $10 million and has had employees to pay for several years. Recently, a single donor gave Harvard University's engineering school $400 million, as your own newspaper reported (http://www.nytimes.com/2015/06/04/education/john-paulson-giv...). Harvard is also in the middle of a $6 billion-and-counting capital campaign, as reported by The Crimson (http://www.thecrimson.com/article/2015/9/18/capital-campaign...). Are we really to believe that the number one law school in the country (by some measures, anyway) could not scrape together the cash to buy its own scanners, or that it does not have scanners already? Are high speed scanners even that expensive? Here's one on eBay for $1,450:


7. Mr. Ziegler could not answer my question as to why a consortium of non-profits was not consulted ahead of time. I know many that would have been eager to assist, likely including the Internet Archive in San Francisco, which already has several scanners.

8. Though I do not speak for them, I did notice that Harvard and Ravel seem to have nearly appropriated the name "Free Law Project," which is actually a project and non-profit organization at Berkeley that took over from work at Princeton. See http://www.freelawproject.org and http://www.courtlistener.com.

9. The Harvard Gazette has falsely reported, "The 'Free the Law' initiative will provide open, wide-ranging access to American case law for the first time in U.S. history." (See http://news.harvard.edu/gazette/story/2015/10/free-the-law-w...) I have been in regular contact with Jonathan Zittrain, Harry Lewis (an XFund Advisor who was Dean during my freshman year) and others at HLS about PlainSite since I brought the idea to them in 2011--almost immediately, as soon as I started working on it. Additionally, CourtListener (from the group at Berkeley) has also been in operation for years, offering open, wide-ranging access to American case law. There's also Google Scholar, which is free and certainly more wide-ranging than Ravel.

10. Ravel is, to the best of my knowledge, unprofitable. It remains unclear why Harvard would place its bets on an unprofitable startup, rather than solicit donations for a project--as it is so adept at doing--in order to ensure maximum sustainability.

Mr. Ziegler attempted to dismiss the above concerns on the grounds that we still both agree in the greater goal of open access to law. I certainly have done all that I can to promote open access to legal information, including developing prototypes for digital legal data standards and suing the courts themselves (http://www.plainsite.org/dockets/29himg3wm/california-northe...). But if we both agree on this greater goal, then why has HLS been almost completely unresponsive to requests for cooperative assistance for the past four years, while this deal was being negotiated in secret?

To be clear, Harvard is not the only institution that has made highly questionable and insincere claims about its legal transparency efforts. Stanford CodeX claims to support open access to the law, yet it is now directly sponsored by Thomson Reuters, the parent company of West Publishing, and its "innovation contests" involve pledges not to redistribute case materials. But I would expect the Times to be able to distinguish between academic puffery and genuine efforts to improve the state of our incredibly broken legal system.


PlainSite | http://www.plainsite.org

late2part 1 day ago 0 replies      
What would Aaron say?
techflare 1 day ago 1 reply      
Didn't a Stanford student already do this, posting XML of all federal/state judge opinions on his blog?

https://news.ycombinator.com/item?id=7026960 (discussion)

https://law.resource.org/pub/us/case/ (free mirror, looks like)

It's a great concept, and more/newer is better, but it seems odd for Harvard to act like they're the first to pull it off.

Why We Use Om, and Why Were Excited for Om Next circleci.com
211 points by nwjsmith   ago   62 comments top 23
dwwoelfel 2 days ago 1 reply      
I was the main author of Circle's frontend (the code is open-source: https://github.com/circleci/frontend).

I went on to build Precursor (https://precursorapp.com), which uses Datascript (https://github.com/tonsky/datascript) to accomplish a lot of the things that Om Next promises. If you haven't tried Datascript, you should really take a look! It does require a bit of work to make datascript transactions trigger re-renders, but the benefits are huge. It's like the difference between using a database and manually managing files on disk for backend code.

My understanding is that Om Next will integrate nicely with Datascript, so you can keep using it once you upgrade.

If you're interested in learning more about building UIs with Datascript, I'm giving a talk on Monday at the datomic/datascript meetup: http://www.meetup.com/sf-datomic-datascript/. I'll be going over Dato (https://github.com/datodev/dato), a framework for building apps like Circle and Precursor.

jlongster 2 days ago 0 replies      
I've spent the last few weeks building a side project in Om Next, and this article is spot on. Really excited to see CircleCI's plans to migrate, as it'll be fun to read their code and learn how they use it.

Relay and Falcor are great, but when I look at their docs it's unclear how to integrate with whatever backend I want (especially Relay). Looking at Om Next, it was totally clear how to write my own backend. The tradeoff is that everything is a little more manual, but that control gives you a ton of flexibility.

In a small amount of code, I have a client that can query financial data in a bunch of different ways, and if the data isn't available it sends the query to the backend, which executes it against a SQLite database and returns it to the client. The components are all unaware of this: they are just running queries against data and everything just works.

Combine this is with first-class REPL and hot reloading support via Figwheel (both frontend and backend) and I'm blown away at how fast I'm going to develop this app.

mej10 2 days ago 0 replies      
It seems like the setup I use is pretty similar to Om Next, but via independent libraries. I am looking forward to seeing the finished Om Next to compare.

ClojureScript is shaping up to be a fantastic way to program browser-based applications. This is what I use:

* Reagent -- another ClojureScript React wrapper

* Datascript -- An in-memory database with datalog query lang, this is used as the central store for all application data.

* Posh -- Datascript transaction watcher that updates Reagent components when their queries' return data changes

* core.async -- used for handling any kind of event dispatch and subscription. I do a unidirectional data flow type thing and it only took like 15 lines of ClojureScript.

This is one of the nicest front end development experiences I've had. Just the composition of these four libraries gives you a ton of flexibility and a good way to structure your application. You can use this setup to write a real-time syncing/fetching system with a backend database pretty easily.
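The core.async part of that setup is essentially a central dispatch loop with subscriptions. A loose Python analogue of the pattern (not the actual ClojureScript; the `Dispatcher` name and shapes are illustrative): components publish events to one place, subscribed handlers update the app state, and views re-render from that single state.

```python
from collections import defaultdict

class Dispatcher:
    """Tiny unidirectional-data-flow sketch: every state change goes through
    dispatch(), so there is one place to log, replay, or debug events."""
    def __init__(self, state):
        self.state = state                  # the single app-state container
        self.handlers = defaultdict(list)   # event type -> handler functions

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def dispatch(self, event_type, payload):
        # run every handler registered for this event type, in order
        for handler in self.handlers[event_type]:
            handler(self.state, payload)
```

In the ClojureScript version the queueing and back-pressure come for free from core.async channels; this sketch only shows the dispatch/subscribe shape.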

pkcsecurity 2 days ago 0 replies      
Our team used Om for our app (balboa.io) for the first 3 months of development. We switched to Reagent and have been using that for the last 8 months.

We ran into the same problems with Om as the CircleCI guys, specifically:

1) Our front-end data model wasn't complex enough to merit a heavy-weight data access system that required a huge amount of extra digging to get right. We spent far too much time arguing about how to structure app-data, and it only got worse as the app got more complex. The cursor system in its first iteration was just too cumbersome (for exactly the reasons this author states). We kept trying to restructure the data model in order to get it to do what we needed. To be fair though, this is well known, and David Nolen has done a lot to alleviate this in recent releases (ironically by making it more Reagent-like).

2) Our app is end-to-end encrypted and requires pulling down potentially hundreds of blobs, decrypting them, and inserting them into the DOM. Under these conditions, Om would kick it and the UI would grind to a halt.

We switched to Reagent, and found that it was far faster and "got out of the way" of development. Add-watch is amazing too. Our app is quite large (front end SLOC is around ~50k lines), and Reagent has scaled beautifully and is a beast at large-scale insertions (on the order of 1000).

Om has some delightful features (undo ability is very powerful, routes coupled with Secretary is also great for Om), and David Nolen is a genius, but I think even the author has to acknowledge that the app-data/cursor construct is more of a pain than it's worth...

saosebastiao 2 days ago 2 replies      
I really like David Nolen as a conceptual visionary. His work with Om and core.logic is great and has inspired a lot of derivative work. But I would never rely on his libraries in production. It seems like he always gets to 90% before moving on to the next new thing. 90% documentation, 90% cljs->js coverage, 90% tested, 90% issues addressed. I wouldn't touch Om unless I was willing to employ at least one person to work on Om full time.
moron4hire 2 days ago 0 replies      
> Most data is not a tree

This is huge. I think it might even be the single largest problem to most projects' progress. I've seen a lot of projects that have tried to force non-tree data into tree-structures, and it never works out well. Projects grind to a halt after 6 months to a year because nobody can keep track of the dance steps they have to do with the tree-oriented code to manage their graph-oriented data.

Real, actual tree structures are just incredibly rare. Even some things that "obviously" seem like they should be modeled as a tree are far better off as a directed graph. Like databases of family trees - it's possible someone is literally married to their sister! Less cringe-worthy examples involve large families living near other large families, with generational overlaps causing the children of one group to marry the grand-children of the other, and vice versa.

You don't really need React. If you can do the ostensibly hard work of figuring out the DOM edits yourself, your app will actually be faster than if you're using React, i.e. React has its own overhead. As long as the data relationship was right, I've never found it difficult to manage state thereafter. It's when the shoe doesn't fit that things become a problem.

The problem is, we have a systemic problem of treating front-end devs as not "real" developers, not capable of forging their own paths. It's not just from the outside-in, I see a lot of front-end devs lacking a lot of confidence in their own skills. As a culture, we yell at any JavaScript programmer going his or her own way, building their own thing. "Don't reinvent the wheel!" they are told. Screw that. I can think of at least 3 times off the top of my head that the wheel itself was significantly and usefully re-invented in the 20th century alone. The problem is not "reinventing wheels". The problem is this institutional fear of making ones own decisions, leading people to think they need to learn everything.

dustingetz 2 days ago 1 reply      
> Application state as a single, immutable data structure.

react-cursor gives this pattern in javascript, immutability and all, but with regular old javascript objects. It also comes with all the same caveats as in this article. (I don't speak for the creator of Om, I speak for myself as the author of this library which was inspired by Om and Clojure)


The beauty of the state-at-root pattern with cursors, is that each little component, each subtree of the view, can stand alone as its own little stateful app, and they naturally nest/compose recursively into larger apps. This fiddle is meant to demonstrate this: https://jsfiddle.net/dustingetz/n9kfc17x/
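For readers who don't know the pattern, here is a minimal sketch of a state-at-root cursor in plain JavaScript. This is illustrative only; it is not react-cursor's actual API, and the names are invented:

```javascript
// Minimal sketch of the state-at-root cursor pattern (invented names;
// not react-cursor's real API). A cursor pairs a path into the root
// state with an immutable update, so each subtree of the view can act
// as its own little stateful app while sharing one root.
function setIn(obj, path, value) {
  if (path.length === 0) return value;
  const [head, ...rest] = path;
  // Rebuild only the ancestors along `path`; siblings keep their identity.
  return Object.assign({}, obj, { [head]: setIn(obj[head] || {}, rest, value) });
}

function makeRoot(initial) {
  let state = initial;
  return {
    get: () => state,
    cursor: (path) => ({
      value: () => path.reduce((s, k) => s[k], state),
      set: (v) => { state = setIn(state, path, v); }
    })
  };
}

const app = makeRoot({ user: { name: 'Ada' }, todos: { count: 0 } });
app.cursor(['user', 'name']).set('Grace');
console.log(app.get().user.name); // 'Grace'
```

Because untouched branches keep their object identity, a component holding the `todos` subtree can skip re-rendering with a cheap `===` check, which is the same reference-equality trick Om leans on with immutable data.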

> The tree is really a graph.

Solving this impedance mismatch is the main UI research problem of 2015/2016. Om Next, GraphQL, Falcor etc. It's still a research problem, IMO. The solution will also solve the object/relational impedance mismatch, i think, which is a highly related problem, maybe the same problem.

th0ma5 2 days ago 1 reply      
This is a more coherent summary of Om than exists elsewhere, including the official Om site.
Cshelton 2 days ago 3 replies      
So I've been looking at Elm recently, what would the advantages/disadvantages for something like Elm over Om/Om Next?
cheez 2 days ago 0 replies      
Clojure and ClojureScript for president in 2016! No, but seriously, one of the most promising pair of practically useful languages in existence.
CuriousSkeptic 2 days ago 1 reply      
I would love to see some code examples of this part "If we try to show a component that needs to know the current user's initiated builds, that triggers an API call that asks the server for the data. If we stop using that component, we stop making that request. Automatically."

I'm currently knee deep in a react/redux implementation, which I guess is quite similar.

misiti3780 2 days ago 4 replies      
I am surprised their backend is written in Clojure. I would think it makes hiring developers much harder (a smaller group of people know it) and training people a lot harder. You can jump on to a project and learn enough Go to fix bugs in a day or so (less than a week for sure). I am not sure the same could be said about Clojure.
pandeiro 2 days ago 0 replies      
What's not stated in the article, re: "Why we use Om", is that much of Om's adoption was because of its high-profile creator and all the status/momentum that brings along.

But I think it's approaching a consensus already within the CLJS community that, on API alone, reagent is the React interface you want.

It's extremely elegant and performant; probably the best frontend library I've ever used in close to a decade of web development.

tim333 1 day ago 1 reply      
> We could throw out the entire list and rebuild it in one go, but re-rendering large amounts of DOM is also slow.

Out of curiosity I tried swapping "<ul><li>Artichokes</li><li>Broccoli</li><li>Cabbage</li><li>Dill</li><li>Eggplant</li></ul>"

and the same without the broccoli and dill, back and forth a few thousand times using jquery.

The average time per change was 28 microseconds, or about 35000 changes per second (Chrome, MacBook Air). Swapping a list of 300 fruits for a list of 500 fruits took 1.4 milliseconds per change.
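For anyone wanting to repeat the experiment, a rough version of the timing harness. The DOM swap is stubbed out as a callback here, since absolute numbers vary wildly by browser and machine, and this sketch itself has no DOM:

```javascript
// Rough micro-benchmark harness for the experiment above. The operation
// to time is passed in as a callback; in a browser you would pass the
// real jQuery .html() swap instead of the string stand-in below.
function msPerChange(op, iterations = 10000) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) op(i);
  return (Date.now() - start) / iterations; // average ms per operation
}

// Cheap stand-in for the list swap:
let markup = '';
const avg = msPerChange(i => {
  markup = i % 2
    ? '<ul><li>Artichokes</li><li>Cabbage</li><li>Eggplant</li></ul>'
    : '<ul><li>Artichokes</li><li>Broccoli</li><li>Cabbage</li><li>Dill</li><li>Eggplant</li></ul>';
});
// In a browser console, something like:
//   msPerChange(i => $('#list').html(i % 2 ? shortList : longList), 5000);
```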

I wonder if using some convoluted framework to "solve or at least mitigate" this might be premature optimisation? (As well as actually slower.)

zubairq 1 day ago 0 replies      
Nice article! I'm the author of https://github.com/zubairq/AppShare which uses Om, and can say that Om Next is definitely the future of ClojureScript apps
amelius 2 days ago 1 reply      
Can anybody comment on how security is handled in Om? How do you ensure that certain parts of the database (which may depend on complicated rules) are not inadvertently exposed to the client?
jdudek 2 days ago 0 replies      
> We could insert new list items into the existing DOM, but finding the right place to insert them is error-prone, and each insert will cause the browser to repaint the page, which is slow.

I don't think the last part is true. Browsers don't repaint (nor do they reflow) the page until it's really needed. So if you have a loop that modifies the DOM multiple times, but does not read from the DOM, the performance hit described by the author should not occur.
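The read/write distinction can be made concrete with a toy model. This stub is an invented simplification, not how any real engine works, but it shows why a read between writes forces a synchronous layout flush:

```javascript
// Toy model of browser layout batching (an invented simplification; real
// engines are far more subtle). Writes only mark layout dirty; a read,
// like offsetHeight, forces a synchronous flush ("reflow").
function makeStubElement() {
  let dirty = false;
  let reflows = 0;
  return {
    write() { dirty = true; },        // e.g. appendChild, innerHTML =
    read() {                          // e.g. offsetHeight, getComputedStyle
      if (dirty) { reflows++; dirty = false; }
      return 0;
    },
    reflowCount: () => reflows
  };
}

const el = makeStubElement();
// Write-only loop: the flush can be deferred and batched.
for (let i = 0; i < 100; i++) el.write();
// Interleaving a read after each write forces a reflow every iteration.
for (let i = 0; i < 100; i++) { el.write(); el.read(); }
console.log(el.reflowCount()); // 100
```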

tonyhb 2 days ago 0 replies      
Essentially this is exactly what we have with Redux and pure JS. Gaearon has led the way here:

- Redux as your single state tree/graph

- Normalizr to normalize nested data into a flat graph, pulling nested resources out as id-keyed records

- Reselect to run memoized queries over your normalized data

And the best thing is this is production ready, in JS, today.
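A hand-rolled sketch of the normalized-state plus memoized-selector idea (illustrative only; the real Normalizr and Reselect APIs are richer than these invented helpers):

```javascript
// Hand-rolled sketch of normalized state plus a memoized selector
// (invented helpers; Normalizr and Reselect have richer real APIs).

// A nested API response flattened into id-keyed tables:
const state = {
  posts: { 1: { id: 1, author: 10, title: 'Om Next' } },
  users: { 10: { id: 10, name: 'ricardo' } }
};

// A Reselect-style selector: recomputes only when its inputs change.
function createSelector(inputFns, compute) {
  let lastArgs = null;
  let lastResult;
  return (s) => {
    const args = inputFns.map(f => f(s));
    if (!lastArgs || args.some((a, i) => a !== lastArgs[i])) {
      lastArgs = args;
      lastResult = compute(...args);
    }
    return lastResult;
  };
}

const selectPostWithAuthor = createSelector(
  [s => s.posts[1], s => s.users],
  (post, users) => ({ ...post, author: users[post.author] })
);

const a = selectPostWithAuthor(state);
const b = selectPostWithAuthor(state); // cached: same object, no recompute
console.log(a === b); // true
```

The memoization is what makes querying denormalized views out of normalized tables cheap enough to do on every render.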

amelius 2 days ago 1 reply      
> Om's creator, David Nolen, likes to show off how easy this makes it to implement undo: just remember a list of old states, and you can reset! the state atom to any of them at any time.

How does that work if multiple users are collaborating on the same state simultaneously?

mwilliamson 2 days ago 1 reply      
The post mentions borrowing ideas from GraphQL and Falcor: if I used GraphQL or Falcor, is there some pain I'd hit that Om Next would avoid?
ricardobeat 2 days ago 2 replies      
The new plan sounds a lot like Redux.
faceyspacey 2 days ago 1 reply      
does anyone know how challenging it is to add Datomic subscriptions to Om Next?
reitanqild 1 day ago 0 replies      
For a lot of companies however a good first step would be to ensure that the pages work w/o Javascript.

When a basic CRUD website tells me my perfectly fine browser isn't supported I say: FAIL.

Microsoft silently adds Amazon root certificates to its CTL hexatomium.github.io
166 points by svenfaw   ago   64 comments top 10
FiloSottile 1 day ago 2 replies      
All Amazon roots are cross-signed by other trusted roots, so they were already trusted by all systems, including Microsoft: https://www.amazontrust.com/repository/

They are also on their way to be added natively to the Firefox root store: https://bugzilla.mozilla.org/show_bug.cgi?id=1172401

rusanu 1 day ago 1 reply      
AWS just announced Amazon Certificate Manager service, free SSL/TLS certs for assets hosted on AWS[1]. It makes sense to ask trust roots to add Amazon own certs.

[1] https://aws.amazon.com/blogs/aws/new-aws-certificate-manager...

xupybd 1 day ago 2 replies      
"Amazon is reported to have some very close ties to spy agencies."Why would we trust that any of the other providers would not co-operate with the CIA? Wouldn't they have to under the law?
z3t4 1 day ago 6 replies      
Aren't SSL/TLS certificates broken when all you have to do is add one root certificate to MITM everyone else's SSL/TLS?
moviuro 1 day ago 0 replies      
See the info repo provided by amazon: https://www.amazontrust.com/repository/
vpcguy 1 day ago 0 replies      
Microsoft published their updated list of CAs on their website today, and Amazon is there. http://social.technet.microsoft.com/wiki/contents/articles/3...
nailer 1 day ago 1 reply      
If there's an issue here, it's not that the root stores adding Amazon's root certs are doing anything nefarious: it's simply that Microsoft should improve their communication.
xutopia 1 day ago 1 reply      
Can someone explain this like I'm 5? I'm not sure what this means and why it matters.
rockdoe 1 day ago 1 reply      
IIRC Chrome uses or at least used to use the Windows certificate store. So will it trust these automatically?
dredmorbius 1 day ago 1 reply      
Internet of Things security is so bad, there's a search engine for sleeping kids arstechnica.com
172 points by nikbackm   ago   102 comments top 11
50CNT 9 hours ago 5 replies      
I still don't get the Internet of Things.

The mental calculation just doesn't work out for most things. My personal rule of thumb is:

 benefit_of_me_accessing_X_remotely(X) - cost_of_other_people_accessing_X_remotely(X) * risk_of_that_happening
Benefits being low for most things, costs high, and risks...uhmm...nah. Only exception I can think of is very limited amounts of sensors (eg. is X on?).

What's the benefit of me turning on a gas stove remotely? Almost none. What's the cost of someone else turning on my gas stove? Really high. How much is the risk? Way too high.
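The rule of thumb above is just an expected-value check. As a sketch (all numbers invented for illustration):

```javascript
// Expected-value sketch of the rule of thumb above (numbers invented).
const netValue = ({ benefit, cost, risk }) => benefit - cost * risk;

// Remotely controllable gas stove: tiny benefit, huge cost, real risk.
console.log(netValue({ benefit: 1, cost: 1000, risk: 0.25 })); // -249
// Read-only "is X on?" sensor: modest benefit, small cost.
console.log(netValue({ benefit: 5, cost: 10, risk: 0.25 }));   // 2.5
```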

Then there's smart devices, another component of IoT. But how much smarts do we actually want? Screens are nice. Making my shower multi touch isn't (capacitive touch + water = no bueno. Imagine water from hell scenario and no way of turning it off with your wet hands). Fridge compiling shopping lists automatically? Neat. Cheap android tablet that comes with a fridge glued to it? Nah.

The only utility I see is locally connected devices. Using your phone as a remote. That seems handy. To a certain degree, we have that. Extra points if I don't need to download an app for everything, because don't you dare tell me that your blue-tooth on/off switch needs a 15mb .apk. If I gave one about the 14.9mb of branding you're including, I'd download your press kit.

There's some utility in home IoT wudget-thingimabobs, but I'm almost certain we'll mess it up to no end in our excitement. There'll be some legitimately useful products coming from it, but most of it will be utterly cringe worthy in retrospect.


saboot 10 hours ago 3 replies      
It's much worse than just passive webcams. Some devices which were never meant to be connected to the internet are out there.

Stoplights? HVAC systems? Carwashes? Ice rinks? POWER PLANTS?



EDIT: I looked at his more recent talk from last November ... the situation has not improved

"115 batshit stupid things you can put on the internet in as fast as I can go by Dan Tentler"


Featuring Spanish Chicken Controls

miander 12 hours ago 10 replies      
I'm not sure how many others share my view but I think that regulation is worth the benefit to security. I have always been very skeptical of the "but it'll hurt innovation" claim. Won't it promote innovation in new approaches for securing low-cost devices? It sure seems nebulous to me, but I am willing to be convinced otherwise.
DyslexicAtheist 5 hours ago 0 replies      
During a review of W3C WoT & ETSI M2M standards I noticed that security is totally ignored in these tech-standardization bodies. The standards leave security as an exercise to the industry and the maker communities (who are not spending money on security until they have a problem). That said, it's also not trivial to implement something that at first sight seems straight-forward, like 802.15.4 Security[0][1], without a deep understanding of the security architecture supported by the underlying platform:

[0] http://www.jwcn.eurasipjournals.com/content/pdf/1687-1499-20...[1] https://www.cs.berkeley.edu/~daw/papers/15.4-wise04.pdf

Since the web is now getting "engaged" to the devices with CoAP and other protocols I wanted to create awareness of how bugs can spill over into the real world and do real damage there. If hacked insulin pumps or baby monitors don't scare you enough how about hacking a train? https://media.ccc.de/v/32c3-7490-the_great_train_cyber_robbe... ?? (everyone should probably watch this simply because SCADA strangelove guys are crazy and awesome)

Anyway to counteract the usually very "marketing intensive" tone of IoT groups on LinkedIn I decided to start this IoT Security group: https://www.linkedin.com/groups/4807429 it would be great to see people from all camps (IoT is a combination of 3 silos: 1) embedded, 2) web 3) infosec) actively contributing with technical topics in this group. I will keep it open to posts from marketeers but am heavily policing it for blogspam and remove any posts that are not security related).

Also I have some ideas about hackerspaces (http://hackerspaces.org/) which IMO every city should have and support. They're needed to propagate knowledge between these individual camps properly. (my contact details are in my profile in case you are interested to discuss more offline).

arthur_pryor 6 hours ago 0 replies      
well, that was mostly depressing, but i found the part about mudge and the UL-like initiative encouraging:

"Peiter Mudge Zatko is a member of the high-profile L0pht hacker group who testified before Congress in 1998, and since then he's gone on to head cybersecurity research at the Defense Advanced Research Projects Agency (DARPA) before joining Google in 2013. In June, Zatko announced he was leaving the search giant to form a cybersecurity NGO modelled on Underwriters Laboratories."

and above that, a section about a similar "consumer reports" style rating organization. that was also the first time i'd heard of the group i am the cavalry, which seems like a cool idea (in principle, at least, without really knowing much about the actual group).

and i understand this objection to that sort of approach:

"It's not the same quality problem... UL is about accidental failures in electronics. CyberUL would be about intentional attacks against software. These are unrelated issues. Stopping accidental failures is a solved problem in many fields. Stopping attacks is something nobody has solved in any field. In other words, the UL model of accidents is totally unrelated to the cyber problem of attacks."

it is a very different problem in a lot of ways, but that doesn't mean that an approach similar in spirit or presentation is doomed to failure. and i think it does fit into the broad category of messy consumer information problems that are hard to solve with specific detailed regulation.

abrkn 9 hours ago 0 replies      
Brings to mind the Twitter account @internetofshit


purpled_haze 11 hours ago 4 replies      
While I agree this is bad, I think it's a misuse of the term IoT. Webcams have been around since the 90s.
gengkev 11 hours ago 1 reply      
Isn't it common sense to at least set a randomly generated password as the default one? Especially for something as sensitive as an Internet webcam.
exogen 8 hours ago 0 replies      
I have a Denon receiver with a web interface, which can control everything over HTTP (volume, source selection, firmware, etc). Of course there's no CSRF protection, so anyone could control my receiver just by getting me to visit a page that POSTs to 192.168.0.XXX; it would be trivial.
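One common mitigation is to check the Origin header on state-changing requests. A sketch (the address and handler shape are invented; a real fix would have to land in the receiver's firmware):

```javascript
// Sketch of an Origin check for state-changing requests (invented shape;
// a real fix would also want a CSRF token tied to the session).
const DEVICE_ORIGIN = 'http://192.168.0.10'; // the device's own UI origin

function isAllowed(req) {
  const origin = (req.headers && req.headers.origin) || '';
  return origin === DEVICE_ORIGIN;
}

console.log(isAllowed({ headers: { origin: DEVICE_ORIGIN } }));        // true
console.log(isAllowed({ headers: { origin: 'http://evil.example' } })); // false
```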
davidgerard 8 hours ago 0 replies      
1995: Every object in your home has a clock & it is blinking 12:00

2025: Every object in your home has an IP address & the password is Admin


api 9 hours ago 2 replies      
Most embedded engineer types know nothing about and never think about security. One example I saw once was an FTP server where the auth commands worked but were irrelevant. All commands always worked. It passed the unit tests therefore it was good.
Google Paid Apple $1B to Keep Search Bar on iPhone bloomberg.com
155 points by aaronkrolik   ago   54 comments top 9
anfroid555 1 day ago 2 replies      
Oracle is really letting out Google secrets. First Android earnings, now this. Both said not to be public knowledge. Is Google going to go after Oracle now?
Animats 1 day ago 6 replies      
Google also used to pay Mozilla to be the search engine there, until Yahoo outbid them.

It's amazing that Google search, which is quite useful, has negative market value as content. In the cable TV world, there are channels cable systems pay to carry, such as ESPN, and channels that pay to be carried, such as the Jewelry Channel. How did Google end up in the latter category?

such_a_casual 1 day ago 1 reply      
Title is misleading. Google didn't pay up front to keep the search bar, but rather agreed to share a percentage of the revenue generated, which ended up being $1B.
danielhughes 1 day ago 1 reply      
Therefore the iPhone share of total search ad spend is about $3B. This link puts the overall market (inclusive of Android) at roughly $9B in 2014. That makes sense if you assume iOS to represent roughly 1/3 of devices.


partiallypro 1 day ago 3 replies      
Bing powers Siri, and I assume now powers all of the internet search functions (unsure about Safari.) So I wonder if Microsoft paid this amount, or if Apple decided that less money made more sense so they could harm their competitor?
tianlins 1 day ago 0 replies      
In a world so abundant with information, I think user attention is indeed a significant resource every major player should fight for. I guess the distribution of attention follows a power law, in that a few entrances take up most of the mobile use cases. For example, I use Uber, WeChat, GMaps much more often than the other apps. The search bar definitely is one of the most critical entrances.
SatoshiRoberts 1 day ago 0 replies      
I wonder how much Google got charged for the opportunity in 2015
SatoshiRoberts 1 day ago 0 replies      
That's about 5% of what Google has made in profit from Android since ~2008.
SatoshiRoberts 1 day ago 0 replies      
I wonder how much Apple would pay for https://duckduckgo.com/ just to screw over Google.
Is Self Hosted Blogging Dead? robertnealan.com
188 points by robertnealan   ago   192 comments top 44
verusfossa 2 days ago 6 replies      
The intimidating thing for me was always how heavy blogging software was. Never really liked the idea of centralized hosting, but hosting some huge PHP blob with a database never felt like it was worth it. I'm hosting my own site now running Hugo and I love it. I agree that most people have moved to centralized hosting, but I'm seeing a resurgence of self-hosting with static site generators like jekyll, middleman or hugo. Things like static search[1] and static comments[2] are possible with some thought. Really neat and lightweight, and with gitolite I can keep my git repo containing the blog code on the server too, set up a commit hook to rebuild the site, and I'm maintenance-free. I have some npm postcss scripts that build my scss, autoprefix it, etc and dump it into the assets for hugo to build from all in one go.

A lot of this is unnecessary; I could just be using CSS. I like that there's not all this asset-flow magic built out, just simple npm with a bash CLI. Unix philosophy and very little heavy lifting. I think there's still hope.

Now if we can just teach casual users git...


burningion 2 days ago 8 replies      
It's amazing that the web, which was really originally built primarily as a distributed publishing platform, has gotten so damn complicated to publish to.

Right now I've got my own self hosted platform, running Wordpress on a Digital Ocean droplet. The constant security updates for Wordpress are a nightmare, and it seems I have to hack both my theme and my post code every time I want to make a slightly interactive post. Never mind that there doesn't seem to be a decent way to preview posts on mobile.

As others have mentioned, it seems the best way to get more people in control of their own platforms would be with easier static tools.

On that note, I've been really impressed with org-mode and pandoc. I've been writing and generating code within a text based environment lately, but it still feels as though the process hasn't really budged or improved much at all in the past 15 years. With org-mode and pandoc, along with babel, I can write and test code, embed images, and generate decent html/pdf all in one go.

But for the casual user, I think it's become more difficult to self publish over the years, not less. The tools we've built have gotten pretty embarrassing if our goal is to get as diverse of a population as possible speaking and sharing their ideas openly on the web.

Cheers to everyone still working on tools like org-mode, pandoc, and latex. It's still relevant, and it still does a great job. If you haven't checked them out, take a look. I was certainly surprised by how far these projects have been taken.

scandox 2 days ago 9 replies      
My problem with Medium is that it lends this amazing aura of credibility to everything that is published on it. I think they've hit on the design equivalent of the brown note (of South Park fame) which makes readers mentally incontinent vis-a-vis the credibility of the source of the actual text...

Or maybe it's just me?

lkrubner 2 days ago 1 reply      
I am curious when this short-lived moment was supposed to be?

From the article:

"There was a promising short lived moment where smaller, topic-oriented blog networks like Svbtle (amongst others) started appearing, but even those seem to have gone by the wayside and are increasingly being replaced by Medium."

Back in 2002 I co-founded a blogging company. At that time we were competing with the likes of Blogger.com and Typepad.com. There were many other companies, at that time, which I've since forgotten. At one point, around 2003 or 2004, we created a list of all our competitors, and there were at least 100 names on the list.

My point is, the vast bulk of all blogging has always been on 3rd party hosted blogging sites. Self-hosted blogging has always been rare. I self-host my blog, smashcompany.com, on a server at Rackspace, but this has always been a rare option.

All the same, I am intrigued by the question. If anyone has historical data on this, it would be fascinating to know when self-hosted blogs hit their peak. If Technorati.com has survived in its original form, then it would be in possession of this historical data, but sadly, the original Technorati.com is dead.

squeakynick 1 day ago 5 replies      
I'm embarrassed to say that I still hand write my blog directly in HTML using Notepad++ and manually FTP changes to the hosting company. Most of my blog is static HTML, with a smidgen of script for analytics or occasional interaction. Every now and then I'll use some light PHP (typically when I need interaction with a database on the server).
evancordell 2 days ago 2 replies      
I've recently settled on a mostly-free self-hosting platform: Jekyll + Github Pages + Google Domains + Kloudsec.

Jekyll and Github Pages keeps the deployment simple and Google Domains has proven to be simple, cheap, and reliable. I tried Kloudsec out last week on a whim after seeing it on HN, and so far it's great - simple, free SSL with let's encrypt.

https://evancordell.com if interested. It needs a little more love before I'd really say I'm pleased with it, but I'm very happy with how cheap and easy it was to set up a personal blog with SSL.

blakesterz 2 days ago 0 replies      
I've been running a small hosting company since 2002. Started hosting just my blog, then friends, then their friends and so on. I used to have about 300 blogs total, now I'm down to about 250, and it's slowly dropping every month. A few of those people moved to other hosting, and kept the blogs, but really most of them just said "I'm giving up, no time to blog when I'm busy on Twitter and Facebook"

I think many people feel like they get out what they needed to get out on Twitter/Facebook. They used to write on their own blogs to get things out, now it's elsewhere.

27182818284 2 days ago 1 reply      
I self-host using a static site generator. I've found it to be very nice in the sense that it just takes pennies for the site to run.

I think a lot of folks are still running WordPress of some kind on their own Dreamhost, etc, accounts which feels like self-hosting to me.

davnicwil 2 days ago 1 reply      
I own a self-hosted blog and am actually in the process of deciding whether to transfer over to Medium. To be honest I'm pretty much decided that I will, because it's just easier, not to mention I can save myself some hosting fees.

The key questions of the debate on the cons side of switching, assuming you're blogging for fun and not thinking particularly about advertising or massively customised SEO strategies, seem to be:

1. do I own my content
2. will my content be accessible forever

As this post highlights, the answer to (1) on Medium is YES. So, no problems.

The answer to (2) is also, for all practical purposes, YES, but you shouldn't depend on it.

But is this really such an issue anyway? I certainly assume that the vast majority back up their photographs, just by nature, and how difficult is it to back up the plaintext of your blog pieces too? If you have backups, and the answer to (1) is yes, then really, it starts to look like an easy decision.

forrestthewoods 2 days ago 0 replies      
I used Ghost for over a year. I was relatively happy with it.

I recently switched to Medium and couldn't be happier. With Ghost I was spending more time tweaking and maintaining purchased themes than I spent writing.

It's really really really fucking hard to run a blog that works well on desktop/tablet/phone and doesn't crash if you get a traffic spike. How many self hosted blogs can handle 500,000 hits in less than a day? Not many.

Medium will probably die someday. That's fine. I own my content and my content URLs. I'll simply port it to a new platform. It wouldn't be the first time.


K0nserv 1 day ago 1 reply      
I don't think that it's dead and self publishing I'd argue is easier and cheaper than ever. My blog has all the power of s3 for scaling with free SSL from Cloudflare and I pay peanuts for it. Current bill is $0.03 some months it gets closer to $1.

I've written about it here https://hugotunius.se/aws/cloudflare/web/2016/01/10/the-one-...

ThomPete 2 days ago 0 replies      
No of course it's not dead. Just like the desktop isn't dead just because mobile is exploding.
lips 1 day ago 2 replies      
As a blog consumer, rather than producer, I also have reservations about Medium-esque sites, but from the opposite perspective.

There's already an infinite quantity of interesting content to read, and it seems reasonable to expect rising quantities of worthwhile material, as I keep finding writing and creations that I was unaware of when they were being made. With all this stuff, I want to be able to control where and when I read, and how I filter, manage, follow, and store it all. At some point, platform operations reflect a business plan, and that plan may or may not allow for one or more of my preferences, for reasons of $. I guess I just prefer a relationship where a standard or pseudo-standard allows the user control, to select differing vendor options at the very least.

Then again, as I'm barely capable of managing a basic server install, I'm fully aware of why people throw in with hosted systems. I'm hoping for great things from stuff like Sandstorm.

yumaikas 1 day ago 0 replies      
I host my own blog (http://junglecoder.com) on a VPS at the moment. But I went rather overboard with it, as I built my own CMS-lite in go. I was in college, wanted to learn how the web worked at a decently low level (lower than wordpress or rails).

What I've discovered is that having a VPS opens up a world of opportunities for network related things. I've used that site to host Ludum Dare entries, ClickOnce .NET apps, and a Wiki Profile image that I used to see if anyone was looking at my page on a company wiki. An SSH tunnel has allowed me to bypass some firewalls that block the majority of ports.... I've learned a lot on that server. Some of the best $50/year that I spend in terms of hosting stuff.

cookiecaper 2 days ago 0 replies      
I think the age of everyone having their own domain running their own code is starting to expire, just like the age of running your own email server. There's just too much spam and bad actors out there, you have to prove your site innocent to the big indexes before you can get anywhere. If you publish on Medium, Facebook, Blogspot, or another platform that has "rep", people assume the spam is filtered out by the platform, and they treat your content less skeptically.

Turns out AOL had the right idea the whole time -- people want platform-specific keywords and they want to trust the platform's caretakers to decide what's OK for them to see.

joelgrus 2 days ago 2 replies      
I "self-host" using Pelican + S3. It's super cheap (< $1/month) and pretty easy, the only real downside is that all of the Pelican themes are really ugly, and I'm not good enough at design to make a better one.
dberg 2 days ago 4 replies      
Honest question: how did Medium become so dominant for all blog posting so quickly? The design is no doubt beautiful but there is nothing special. Curious why so many bloggers magically decided to publish there all of a sudden. I see tons of tech posts there now.
skybrian 2 days ago 1 reply      
The hard part is not getting a website to serve an HTML page. It's that modern UI standards for HTML publishing are pretty high. Finding a theme that you like and most other people will like (let alone writing one yourself) is a hassle for most people who aren't front-end developers.

You can point to lots of web sites that are hard to read, but that just proves the point that people are rather finicky about it these days.

peterwwillis 2 days ago 1 reply      
Getting hosted by someone else is incredibly convenient. They take care of all the work of maintenance, security, reliability, and even give you tools to increase your visibility on the web and design your blog. IMHO only hobbyists or people with a very good reason should self-host. If you don't like one company's terms, look into the many other blog providers out there.
ne01 1 day ago 0 replies      
At sunsed.com we are 100% dedicated to creating the best blogging platform (and, in the near future, a full CMS). We hope to re-energize the world of self-publishing with a managed solution that lets you import from/export to any other CMS/blogging platform!

Right now we are working on an IDE inside SunSed so anyone can create their own template with HTML++ (our own templating/programming language).

Here is a screenshot of our IDE (I'm working on it right now):


We are going to announce HTML++ and SunSed 2.0 on HN in the next few months.

Happy hacking & blogging!

onion2k 2 days ago 3 replies      
I use and highly recommend hexo.io with its S3 deployment plugin. It's as good as Jekyll but easier to modify and theme if you come from a web dev background, as it's written in NodeJS rather than Ruby.
mark_l_watson 1 day ago 0 replies      
I went from years of using blogger.com to trying Wordpress for a few months. Then I switched to Jekyll and statically generated blog articles. In the end I went back to blogger.com, because I figured that if I needed a third party like Disqus for comments, I might as well use blogger.com.
tuananh 2 days ago 0 replies      
I still self-host my site (Jekyll) on a tiny instance (128MB RAM) from RamNode, with free SSL from CloudFlare.

It costs a bit more than $1 a month to run.

elcct 2 days ago 0 replies      
I am working on a project that combines DNS, WWW, and WebDAV servers to simplify blog self-hosting: your blog sits on a mapped network drive, and to add a new website you just create a directory named after the new domain. https://github.com/parkomat/parkomat
zwischenzug 1 day ago 1 reply      
I used Docker recently to host my own blog, so that I could put ads on it and add whatever Wordpress extensions I wanted. Despite getting a healthy number of views (nothing extraordinary), the income was basically zero. It just wasn't worth the candle.
xylon 1 day ago 0 replies      
I just write my blog direct in HTML and host it on a Raspberry pi running FreeBSD and darkhttpd: http://www.naughtycomputer.uk/
KhalilK 1 day ago 0 replies      
I spent a couple of weeks last summer developing a blogging system in Django. Doing so allows for great customization and fits my needs exactly.

It was more fun than writing actual blog content, though.

dredmorbius 2 days ago 1 reply      
I've been looking for intelligent conversation online for over 25 years. For a time it was Usenet. I mostly missed the Well, though I caught mailing lists, Slashdot, and for a brief moment, G+ (it's still there, and I've cultivated a useful community, though the reach is small).

I've done some exploration of just where intelligent conversation online lies, and frankly was surprised at the results: https://www.reddit.com/r/dredmorbius/comments/3hp41w/trackin...

The methodology uses the Foreign Policy Top 100 Global Thinkers list as a proxy for "intelligent discussion", the string "this" to detect English-language content generally, and the arbitrarily selected string "Kim Kardashian" as a stand-in for anti-intellectual content. Google search result counts on site-restricted queries are used to return the amount of matching content per site, with some bash and awk glue to string it all together and parse results.
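For illustration, the ratio-computing half of that glue could look roughly like this. The site names and counts below are invented; in practice each column would come from Google result counts on site-restricted queries:

```shell
# Illustrative sketch only (made-up numbers).
# Columns: site, total pages ("this" hits), FP Thinker hits, Kim Kardashian hits.
cat > counts.tsv <<'EOF'
metafilter.com 120000 900 40
reddit.com 5000000 12000 90000
EOF

# Per-1000-pages rates and the FP:KK quality ratio.
awk '{
  printf "%s FP/1000=%.2f KK/1000=%.2f FP:KK=%.2f\n",
         $1, 1000*$3/$2, 1000*$4/$2, $3/$4
}' counts.tsv
# metafilter.com FP/1000=7.50 KK/1000=0.33 FP:KK=22.50
# reddit.com FP/1000=2.40 KK/1000=18.00 FP:KK=0.13
```

The interesting signal is the FP:KK column: a site can be tiny in absolute terms and still score well on it, which is the pattern described for Metafilter above.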

As expected, Facebook is huge, as is Twitter. When looking at the FP/1000 ratio (hits per 1,000 pages), the KK/1000 ratio, and the FP:KK ratio, more interesting patterns emerge.

Facebook beats G+, largely.

Reddit makes up in quality what it lacks in size, but Metafilter blows it out of the water. Perhaps a sensible user filter helps a lot.

The real shocker though was how much content was on blogging engines, even with a very partial search -- mostly Wordpress and a few other major blogging engine sites. Quite simply, blogs favour long-form content, some of it exceptionally good.

But blogs suck for exposure and engagement.

This screams "Opportunity!!" to me. I've approached several players (G+/Google, Ello) with suggestions they look into this. Ello's @budnitz seems to be thinking along these lines (I'm a fan of what Ello's doing, but its size is minuscule, and mobile platform usability is abysmal.)

One of the most crucial success elements for G+ is the default "subscribe to all subsequent activity on this post" aspect. Well, that and the ability to block fuckwits (though quite honestly ignore would be more than sufficient). There's a hell of a lot else to dislike, but those two elements are crucial to engagement.

As for blogging, I'm a fan of a minimal design (http://codepen.io/dredmorbius/pen/KpMqqB) and static site generators.

cyphar 2 days ago 1 reply      
Nope. I run my own blog, which I implemented myself (using some nice Flask markdown thing as a base). https://www.cyphar.com/blog
erikb 2 days ago 0 replies      
It's certainly not dead, but the relationship has maybe changed a little. Blogging started off as a public diary and a place for discussion, and now it's more of a traffic driver for marketing purposes.
j45 2 days ago 0 replies      
Before looking at self-hosted blogs, we'd have to look at how many new beginners are taking up self-hosting at all. Before we can self-host blogs, we have to be able to self-host anything.
manuw 1 day ago 0 replies      
Medium & Co. are good for writing stuff down in a simple way. I prefer self-hosting with Jekyll.
praveenster 2 days ago 1 reply      
I am surprised nobody seems to mention indiewebcamp.com and withknown.com as alternatives to Wordpress or Ghost.
z3t4 1 day ago 0 replies      
I guess there are as many static site generators as there are blogging developers :P
torbit 1 day ago 0 replies      
nah. people (content marketers) post on medium in the hope that readers click the personal link in the author bio, which leads to their blog and content they wouldn't put on medium.
stolk 2 days ago 0 replies      
What's wrong with blogger.com ?
dba7dba 1 day ago 2 replies      
My observation is that about 90% of the Wordpress sites out there don't need Wordpress. All they really need is html/css.
pcurve 2 days ago 0 replies      
Remember Movable Type?
facepalm 1 day ago 0 replies      
Self hosted blogs miss the social graph, which drives traffic. That's why Tumblr took off, and I suspect it is also what is driving Medium.
minimaxir 2 days ago 1 reply      
This article is correctly using a rhetorical question to spark discussion. (The answer to the headline is "maybe".)
misiti3780 2 days ago 2 replies      
yes - why self-host when I can host my blog at GitHub using Jekyll in < 10 minutes?
fenomas 2 days ago 1 reply      
I don't follow the author's complaints about terms and conditions. I suppose language like "we can change these terms any time and your use of the site constitutes acceptance" sounds ominous at a naive level, but what's the alternative? It would amount to some form of preventing users from using the site until they click "agree", and then doing that again every time the T&Cs change, right?
Zyst 1 day ago 0 replies      
In this particular ecosystem (HN)? Probably not.

Half the time I see Medium posts; the other half I'll see something hosted with Jekyll + GitHub Pages, which technically isn't self-hosted but is still quite different from just writing on Medium or something of the sort.

However, I suspect Hacker News readers are not average, and I do think there's a downtrend in self-hosted blogs versus using Medium/Wordpress/Tumblr or even Blogspot.

       cached 24 January 2016 05:11:03 GMT