Hacker News with inline top comments (Best, 1 Jul 2016)
The Daredevil Camera ribbonfarm.com
1208 points by zdw  1 day ago   150 comments top 37
Animats 21 hours ago 3 replies      
Beam-forming microphones are well known. Here's a fancy ceiling-mounted one for video conferencing.[1] Here's the theory in Matlab.[2] And, most impressive is the surveillance version, the Squarehead Audioscope Acoustic Camera.[3] This, a 1m disk with 384 microphones, can pick out a single conversation in a noisy arena.

There's a version which records all the audio channels in real time, using a big disk array. This allows the phased-array processing to be done during playback, so the "aiming" can take place later. The company now has its catalog of surveillance products password-protected, but in the past, there's been more info about that application. The DEA has purchased two.

There are also "acoustic cameras" for locating noise sources.[4]

All this is a spinoff of submarine passive sonar technology from decades ago.

[1] https://www.youtube.com/watch?v=Bepqav87D40
[2] http://www.mathworks.com/help/phased/examples/acoustic-beamf...
[3] https://www.youtube.com/watch?v=bgz7Cx-qSFw
[4] http://www.acoustic-camera.com/
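For anyone curious how the phased-array "aiming" works, the core of delay-and-sum beamforming fits in a few lines. This is an illustrative sketch only; the array geometry, tone frequency, and sample rate are made up, not taken from any of the products above:

```python
import numpy as np

C = 343.0        # speed of sound in air, m/s
FS = 48_000      # sample rate, Hz
F = 2_000        # test tone, Hz
N_MICS = 8
SPACING = 0.05   # mic spacing, m (under half a wavelength at 2 kHz)

def mic_delays(angle_rad):
    """Per-mic arrival delays (s) of a plane wave from a given bearing."""
    return np.arange(N_MICS) * SPACING * np.sin(angle_rad) / C

def simulate(angle_rad, n=4096):
    """What each mic of a linear array records for a distant tone."""
    t = np.arange(n) / FS
    return np.stack([np.sin(2 * np.pi * F * (t - d)) for d in mic_delays(angle_rad)])

def steered_power(channels, steer_rad):
    """Advance each channel by its steering delay, sum, and measure power."""
    n = channels.shape[1]
    freqs = np.fft.rfftfreq(n, 1 / FS)
    total = np.zeros(n)
    for ch, d in zip(channels, mic_delays(steer_rad)):
        # Multiplying by a linear phase applies a fractional time advance.
        total += np.fft.irfft(np.fft.rfft(ch) * np.exp(2j * np.pi * freqs * d), n)
    return float(np.mean(total ** 2))

channels = simulate(np.deg2rad(30))             # source 30 degrees off axis
angles = np.linspace(-90, 90, 181)
powers = [steered_power(channels, np.deg2rad(a)) for a in angles]
best = float(angles[int(np.argmax(powers))])
print(best)  # power peaks at or very near the true 30-degree bearing
```

Sweeping the steering angle and recording the summed power at each one is exactly what produces one "pixel" per direction in an acoustic camera.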

fudged71 23 hours ago 1 reply      
Artlav, this is an incredible project!

I have an idea for you. You mention that you want to use a webcam so that you can "overlay" this information on top of an image.

What if you could send it directly to your brain?

There is a technology called Brainport[0], which is effectively a 16x16 grid of electrodes small enough to place on your tongue. By feeding electrical impulses through these electrodes, you stimulate the nerves on your tongue. After two minutes of a "champagne bubbling" sensation, your brain rewires the senses from your tongue to your visual cortex and you can "sense" any input.

This technology has been used to give sight to the blind, balance to those with inner-ear problems, and sonar to Navy SEALs.

It would be amazing if you were the first person in the world to truly "see" sound.

[0] https://en.wikipedia.org/wiki/Brainport

wbeaty 23 hours ago 2 replies      
Very very cool!

And it's about time someone did it! Yours is the first hobbyist project I've encountered since my own extremely crude ten-dollar version in 1981. No square array or processing; mine was a mechanical scanner-disk: eight LEDs glued in a spiral to an old vinyl record album, each with an electret microphone and an op amp. Spin the disk with a motor at about 6 Hz. Wind noise, so add foam to the mics. It worked: a mechanical acoustic-phosphor raster sensor and display screen, where sound patterns caused light patterns. The best part was to hold some headphones near it while playing a 10 kHz sine wave. This produced interference stripes on the scanning disk! I hoped to build a huge version someday, but somebody beat me to it: Tom Zimmerman, an MIT student who patented the dataglove, then used the money to work at the Exploratorium museum for a year (as Artist in Residence), building a scanning-microphone LED disk about 6 ft wide.

Since then we have companies selling DIDSON full-blown video for underwater (with sonic illumination), and also a European company with a 3D phased-array microphone imager mounted on a ring or a wire sphere. See:


Heh, the divers appear to be skeletons wearing scuba tanks.


ProAm 1 day ago 3 replies      
This is the definition of what a 'hacker' is. Great read.
Bromskloss 1 day ago 9 replies      
Yay, a sound-field camera! I've been wondering if such things are made. Should I take the existence of this project as an indication that they aren't?

The use case I imagine is finding what part of a machine is emitting a noise, much like how you would use a thermal camera for finding what part of it is running hot.

xerophyte12932 19 hours ago 0 replies      
While the actual curiosity-following adventure is amazing, I think one other aspect of all this that deserves a lot of praise is his writing. He managed to take us on his adventure along with him. Not every blogger pulls that off successfully.

Kudos to that!

anotheryou 1 day ago 0 replies      
I made a smaller and smoother version of the final gif: http://malea.lacerta.uberspace.de/up/128.gif

(downsampled back to 16x16, then upsampled with Photoshop's more sophisticated resize, and doubled the speed for more fluent movement)

visakanv 1 day ago 3 replies      
This is insanely cool. I've often vaguely fantasized about stuff like this but this guy takes it all the way. He deserves a huge budget. Can we fund this somehow?
skrebbel 20 hours ago 3 replies      
A Dutch startup called Sorama has been selling this commercially for a while. It's very cool actually, they sell mostly to companies who want to make their hardware less noisy.

It turns out that noisy machines are often mostly noisy because a relatively small number of parts are too flexible, not screwed on tightly enough, and so on. Track that down and you know where your low-hanging fruit is. A daredevil cam is exactly what you need for tracking that down.

Their site is a tad unimpressive sadly, but their tech is very cool: http://www.sorama.eu

jd20 1 day ago 0 replies      
This is pretty cool, and reminded me of the far-field microphone array in an Amazon Echo (which uses 7 spaced-out microphones). Which also makes me wonder: taking something like this (which has way more than just 7 mics) and applying it to the problem of far-field speech recognition, are there large gains to be had? Just thinking of what new applications become possible when you can print hundreds or thousands of miniature mics on a single board that will fit in a small-ish device.
anotheryou 1 day ago 0 replies      
Why is the resolution so tightly tied to the number of mics?

Shouldn't 3 mics be somewhat sufficient for intermediate values as well? More mics => more redundancy => clearer image of the waves, but why do the pixels/blobs "snap" to a microphone?

edit: ah, he does not use timing (at 10fps obviously not) but needs continuous sound to see the wave pattern across the array...

edit 2: so do these gunfire locators only time stamp impulses? Or do they do continuous analysis? With natural, irregular sound, it should be possible to match and measure the delay between mics, no?

edit 3: the gunfire locators would need to analyze for gunfire characteristics anyways, to weed out false positives, but that could be separated from the locating.

edit x: oh cool, if you zoom out of the final picture so it's smaller on screen, you can even see some parallax effect happening: http://206hwf3fj4w52u3br03fi242.wpengine.netdna-cdn.com/wp-c...
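On edit 2: yes, with broadband, irregular sound you can recover the inter-mic delay by cross-correlation rather than by time-stamping impulses. A toy sketch (synthetic noise and a made-up delay, just to show the mechanism):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48_000
true_delay = 37  # samples; ~0.77 ms at 48 kHz, i.e. ~26 cm of extra path

# One second of broadband "natural" sound; mic B hears the same signal
# 37 samples later than mic A.
src = rng.standard_normal(fs)
mic_a = src[true_delay:]
mic_b = src[: fs - true_delay]

# Cross-correlate and pick the lag with the strongest match.
corr = np.correlate(mic_b, mic_a, mode="full")
lag = int(np.argmax(corr)) - (len(mic_a) - 1)
print(lag)  # → 37
```

With a pair of mics this gives one time difference, hence one cone of possible directions; more mic pairs narrow it down to a bearing, which is essentially how the gunfire locators and TDOA systems work on continuous sound.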


rev_null 1 day ago 4 replies      
The timing issue between the microphones doesn't surprise me. Multi-channel soundcards need a single clock driving all of the channels so the ADCs sample in unison (same for the DAC).

This is also why you can't just plug four stereo cards into a PC and expect them to make an 8-channel system.
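A quick back-of-the-envelope on why: even a small mismatch between two cards' sample clocks accumulates fast (the 50 ppm below is an assumed, fairly optimistic figure for consumer crystals):

```python
# Two ADCs nominally at 48 kHz whose crystals differ by 50 ppm
# drift apart continuously.
fs = 48_000
ppm = 50
drift_per_second = fs * ppm / 1e6  # samples of misalignment per second
print(drift_per_second)            # → 2.4
```

After a minute that is roughly 144 samples (about 3 ms) of skew, which is fatal for phase-based imaging; hence all channels must share one sample clock.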

bcherny 1 day ago 4 replies      
This would be pretty powerful if you combine it with speech recognition. If you put this array in the middle of a crowded room, can you better tease apart different audio sources?
keville 1 day ago 0 replies      
Excellent narrative on top of an incredible project!
jparishy 1 day ago 0 replies      
Almost, kinda related is this Invisibilia episode, which is pretty interesting, about a blind man who uses directed clicking in his mouth as a way of "seeing via echolocation". They talk about how it activates parts of the brain similar to those vision does, which I think is a cool parallel to what's been created here.


chris_va 21 hours ago 0 replies      
So impressive

If you get an x86 hooked up, doing some software cleanup might make the video look much better. Some ideas that you'll probably come up with on your own anyway:

- Assign colors based on frequency clustering, rather than a heatmap (so, red=first freq, green=second freq, and so on).

- Persist colors if the frequency in a region shifts slowly enough (so every source stays mostly the same color).

- Remove echoes by finding correlated pixels.

numinos1 1 day ago 0 replies      
This reminds me of the "acoustic vector sensor" developed by a Dutch acoustics firm that can pinpoint any conversation in a crowd.


jvdb 1 day ago 1 reply      
Awesome project!

Nitpick: Duga-3 was never built. The one near Chernobyl is Duga-1.

acjohnson55 22 hours ago 1 reply      
Brilliant project! I had a similar idea several years ago, when I was in a master's program in music technology at NYU. I had classes on acoustics and concert recording that really blew my mind as I started to understand how sound behaves within spaces. The idea of a soundfield camera occurred to me, and it seemed like it could potentially demystify a lot of phenomena sound workers have to deal with largely by intuition. Things like reverberation, acoustic modes, propagation patterns, wave effects, etc. I hope this project continues to mature!
humbertomn 22 hours ago 0 replies      
Great post, Artlav! Great to hear from you after Exosphere in Chile last year. Eager for the next part of the project and hopefully you get some funds for your always interesting ideas.
sinaa 14 hours ago 1 reply      
Extremely cool!

Now is the reverse (vision->sound) also possible? Perhaps using an array of laser sensors, so that we can reproduce the sounds in the environment.

Also, is it possible to turn the images/video back into sound? Could be a fun experiment!

wesleye 13 hours ago 0 replies      
This is exactly what http://www.sorama.eu/ does. They have created a large array of cheap microphones to replace an otherwise expensive setup. They work, for example, with Cooler Master to make their cases quieter: https://www.youtube.com/watch?v=JLx9DsNcM6o
sbierwagen 1 day ago 1 reply      
Hey Artlav, when did you start posting on ribbonfarm? I used to subscribe to it, but unsubscribed a couple years back after Rao wrote a dumb post on information security.
tlarkworthy 1 day ago 0 replies      
The math is the same as in underwater imaging sonars. But there the sonar emits the sound too.


dubmax123 11 hours ago 0 replies      
This is awesome! I'm sensing some Machine Learning applications here.
SilasX 19 hours ago 0 replies      
I love it! Would love to see it extended to cover a distributed network of emitter/receivers to image an area like in The Dark Knight.
abcampbell 19 hours ago 0 replies      
Someone please give this guy $10m of funding to invent more cool stuff.

Regardless of commercial application, this guy can do #hardtech

rhema 1 day ago 1 reply      
"Daredevil" as in vision through sound. Very neat project! I wonder how much people can see in the dark through listening to sound sources by moving their heads for motion parallax.
nitwit005 1 day ago 0 replies      
Excellent project. I appreciate listing the insights along the way.
rasz_pl 1 day ago 1 reply      
>Unfortunately, we are still talking about a total of 64 Mbps of data. That needs a USB 2.0 sampling board that would pull the data out of the cells over the bulk interface

you mean a $10 EZ-USB FX2LP dongle?

You might be interested in this https://sourceforge.net/p/manyears/wiki/Main_Page/

I'm sure you know about the SeeSV-S205; it's basically the same thing, but polished and packaged neatly for FAT commercial customers ($30K).

posted on slashdot in 2013 about the same thing https://tech.slashdot.org/comments.pl?sid=3742015&cid=437167...

wyager 1 day ago 0 replies      
One of the coolest and most expository posts I have seen in recent memory. Well done.
boxcardavin 1 day ago 0 replies      
This speaks to my heart.
zappo2938 1 day ago 0 replies      
We can tell the location of a sound by the timing of the wave hitting one ear and then the other: sound localization.[0]

[0] https://en.wikipedia.org/wiki/Sound_localization
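For a sense of scale, the interaural delay the brain works with is tiny; a quick sketch with an assumed 21 cm ear spacing:

```python
import math

C = 343.0            # speed of sound, m/s
EAR_SPACING = 0.21   # distance between ears, m (assumed typical value)

def itd_us(angle_deg):
    """Interaural time difference, in microseconds, for a source at a
    given azimuth (0 = straight ahead, 90 = directly to one side)."""
    return EAR_SPACING * math.sin(math.radians(angle_deg)) / C * 1e6

print(round(itd_us(90)))  # → 612
```

So even the worst-case cue is around 0.6 ms, and the brain resolves far smaller differences than that; a microphone array exploits exactly the same geometry, just with more "ears".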

soheil 1 day ago 1 reply      
I think the whole thing could be done pretty much all in software. Set up a bunch of phone numbers in Twilio, strategically place a bunch of your friends' iPhones, call the numbers, and process the data using the Twilio API. WhatsApp or other real-time voice communication apps with an API might work too.
Investing Returns on the S&P500 github.com
587 points by minimaxir  1 day ago   335 comments top 42
jakozaur 1 day ago 9 replies      
I don't think it is fair to say that the next 100 years will be the same as the last 100 years:

1. GDP growth is not as high as it used to be anywhere in the developed world: http://www.oecd.org/std/productivity-stats/oecd-compendium-o...

2. The USA is a superpower at its peak. Plenty of other stock-market economies haven't been so successful. E.g. Argentina used to be one of the richest countries in the world. Investing in hindsight is easy; based on past data you could likewise say that Apple stock will always recover, but plenty of other companies haven't.

3. For most of those 100 years we had inflation: http://inflationdata.com/Inflation/Inflation_Rate/Long_Term_...

Right now interest rates are close to zero, which is rare. Long-term trends may change if we continue to operate in that climate.

Edit: Inflation was factored into the graphs. Sorry, I was wrong about that.

chollida1 1 day ago 1 reply      
Cool, this is very similar to what I had people do at one point as part of the interview process.

Give them a bunch of historical data

- find me the longest period that we would be flat/negative

- find the best time to invest

- find the optimal portfolio of stocks to hold over a given period.

I think I've said this before but I see too many people who think that they need to have a huge public repository of code to show off in an interview.

If you want to be a programmer at a hedge fund just having some very shallow analysis like the above would get a programmer to the top of the resume pile.

Now if you want to be a quant the bar is obviously much higher :)

dap 1 day ago 7 replies      
The NYTimes has a great visualization of S&P 500 returns for money invested any year between 1920 and 2009 and withdrawn between 1921 and 2010:


maxxxxx 1 day ago 4 replies      
Looking at the discussions here I find it interesting that even in something as number driven as the stock market everybody argues about the meaning and the validity of the numbers. There really is no clear picture.

But somehow the regular guy is supposed to navigate his way through this jungle of conflicting, confusing or meaningless numbers. And considering the long time frames, most people don't have much opportunity for learning from mistakes and doing better. Once you realize that your strategy doesn't work you have already wasted many, many years and lots of money.

ozten 1 day ago 3 replies      
Does the stock data used suffer from survivor bias?

That is, a free download of historical data tends to lack the failing, delisted companies of the past.

tbrooks 1 day ago 3 replies      
Warren Buffett bet $1mm that the S&P 500 would outperform a hedge fund over a 10-year period.[1]

That's good enough for me, I'll follow the oracle.

[1] http://longbets.org/362/

rando18423 1 day ago 1 reply      
Lots of talk in here concerning superpowers falling, and not much about how it's way more favorable for companies to use debt financing in a low-interest-rate environment... Lol, developers should stick to developing.
vanderfluge 1 day ago 2 replies      
I'd be curious to see results for other countries, the graph of "Chance of Losing in the Stock Market" in particular. In the case of Japan, it appears that you'd still have significant losses if you had invested 25-30 years ago: https://finance.yahoo.com/echarts?s=%5En225+Interactive#{"ra...
Jabbles 1 day ago 0 replies      
"Buy and Hold" applied to the S&P500 actually means "sell when an individual company loses enough market cap, buy one that has gained enough".
tunesmith 1 day ago 2 replies      
The other reason none of these returns are realistic for an average person:

1) People don't get a lump sum at the beginning of their investment history

2) Ah, but you say, dollar-cost-averaging. The problem there is that people get more money to invest when times are good, and less when times are bad.

3) As a result, even when buying in responsibly, people are buying more when the market is high, and less when the market is low.

It distorts performance. Plus, betting on having average-or-better performance is a risky bet. Don't count on more than 2.5% / year lifetime.

chrismealy 1 day ago 2 replies      
The stock market isn't some immutable thing. It's the product of economic and social forces, the share of income given to capital and labor, international politics, etc. Basically stocks are a convenience for rentiers that the little people are allowed to access. There's no reason to expect the social forces that underlie market returns will deliver the same returns in the future.
yread 1 day ago 3 replies      
Would be more interesting to compare it against a realistic return from a savings account instead of saying "if the index is worth the same after 20 years then you haven't lost anything". You would have almost 25% more even in a 1.1% savings account.
minimaxir 1 day ago 0 replies      
via this /r/dataisbeautiful post, with added commentary from the author: https://www.reddit.com/r/dataisbeautiful/comments/4q9iwa/40_...
ryandrake 1 day ago 2 replies      
Funny how his time horizon stretches out to 150 years, when the vast majority of people don't live past 100 and have, what, maybe 25-35 years of investing time in their lives? The insanely long term is a simplified look at the stock market as some money-multiplication machine, but I don't think that's really what it is to most people.

Given a normal person's time horizon, the difference between "did I start investing in 2007 or in 2009" is significant.

enoch_r 1 day ago 0 replies      
By defining the probability distribution as the historical distribution of one-year returns, you're severely warping the possibility space. Imagine two experiments:

Experiment A: you flip a coin 1000 times. You estimate the probability that the coin flips heads, and use this to estimate the odds of flipping 10 heads in a row.

Experiment B: you flip a coin 10 times. You repeat that experiment 100 times. You use the results of that experiment to estimate the odds of flipping 100 heads in a row (by looking at how many times it occurred, and dividing by 100).

Assuming reasonable results and a fair coin, you'll conclude from experiment A that the probability of a heads flip is around 50%, and that the odds of getting 10 heads in a row are (1/2)^10.

In experiment B, ~90% of the time you'll conclude that it's impossible to flip 10 heads in a row, and ~10% of the time (in the cases where you did flip 10 heads in a row at least once) you'll overestimate the likelihood of doing it again by a factor of 10 (at 1%, instead of the true value, <0.1%).

So when you only look at annual returns, you're effectively looking at an aggregate of 365 daily returns (or 365*6.5 hourly returns).
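The two experiments are easy to simulate; this sketch (my own, not from the thread's repo) shows that Experiment B usually concludes a 10-heads run is impossible:

```python
import random

random.seed(1)

def experiment_b(trials=100, flips=10, reps=10_000):
    """Fraction of Experiment B runs that never see `flips` heads in a
    row and would therefore estimate the probability as zero."""
    zeros = 0
    for _ in range(reps):
        hits = sum(
            all(random.random() < 0.5 for _ in range(flips))
            for _ in range(trials)
        )
        if hits == 0:
            zeros += 1
    return zeros / reps

# True chance a single 100-trial experiment sees no 10-heads run:
# (1 - 2**-10) ** 100 ≈ 0.907
print(experiment_b())  # simulation lands close to 0.907
```

So roughly nine times out of ten, the Experiment B observer has no data at all about the rare event, which is the warped-possibility-space point made above.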

HorizonXP 1 day ago 6 replies      
It's funny that this is so non-intuitive. My wife continues to try to "time the market" despite me telling her that it's pointless over such a long time horizon. Maybe this will help convince her.
pbreit 1 day ago 5 replies      
I am not a financial advisor and this is not financial advice!

Robinhood seems like an OK way to keep a free portfolio of ETFs approximating a Vanguard all-in-one fund.

My super-unscientific portfolio is loosely based on Vanguard's LifeStrategy Growth & Moderate Growth funds with a sliver of MGK that seemed to both boost returns and moderate declines.

A $5k portfolio would be 6 MGK (10%), 19 VTI (40%), 23 VXUS (20%), 12 BND (20%), 9 BNDX (10%).

From what I can tell, you can hold this portfolio in Robinhood for free. Shouldn't need to re-balance more than once per year or so.



Pay off your credit cards and max out your 401k/IRA first.

the_watcher 1 day ago 0 replies      
I wouldn't say this is surprising. If you can extend your horizon, you can weather downturns that reduce your asset value, buy while demand is low, and simply wait for the cycle to reverse. Real estate shows the same pattern. The problem is that not everyone has the means to simply extend their horizon.
edpichler 1 day ago 0 replies      
I am a holder and, as a small stock-market investor, this study is gold to me. As a holder, the worst years are the first decade; after that, the returns become explosive.

The difficulty of being a holder is spending less in the present to get a return in the future. Human beings are definitely not good at this.

pfarnsworth 1 day ago 1 reply      
Try this on stock markets like the Nikkei, and I think you'll find completely different results.
rrecuero 1 day ago 0 replies      
I really like the write-up & the simplicity of the explanations. Obviously, choosing Vanguard index funds and avoiding timing the market is great financial advice. One piece I always find missing is that, in my opinion, money is not worth the same as you age.

From my point of view, money in your 20s and 30s has 3x or 4x more value than when you are 50. You can go skydiving, scuba diving... When you are 60 your options for enjoying that money are much more limited. I think that should be factored into financial decisions, but it is often overlooked.

ianai 1 day ago 0 replies      
The take-away I got from this is: unless you have 30 years, forget the S&P 500.
thro1237 1 day ago 2 replies      
Is it possible to do this for dollar cost averaging? That is, what happens if one didn't invest a lump sum but instead invested $1000 every month? How do the returns look for different time periods?
tempestn 1 day ago 0 replies      
One important thing to note is that although the far right end appears to narrow, that is not because returns become more certain over very long time ranges, but rather simply due to lack of samples over those very long periods.

What's more indicative of reality is the relatively constant vertical distribution, in a log scale, across any time range up to about 90 years, when the lack of data starts to play a significant role. So, in relative terms, uncertainty remains fairly constant over time. In absolute terms it actually increases. In a sense risk still does decrease over long time horizons, as the chance of earning over any given amount does increase with time; contrary to common belief though, the distribution of possible returns (another possible definition of risk) actually increases with time - or remains roughly constant on a log scale, as mentioned.

(You can find a great many opinions surrounding this by googling "time diversification".)

FuNe 14 hours ago 0 replies      
<fun-at-parties-mode: on>Ehm, it's a bit of a long shot to assume the next 100 years will be more or less business as usual. Having said that, after a quick look around at the climate frontier, the economy, and international affairs, I'd probably buy defence-industry stock (if I had money to burn).<fun-at-parties-mode: off>
sp527 1 day ago 0 replies      
This is not very interesting. Anyone who has attempted even a cursory inspection of market returns understands this rationale.

The more interesting problem is to attempt to train an investment curve based on some normalized metric for market value, which would be superior to both the lump-sum and DCA strats. What he's not showing you is that gyrations in the market have a tremendous impact on the long-run return outlook and this is extremely detrimental to the lump-sum strategy in particular. DCA is slightly better but still not perfect. You therefore want to pool cash over certain periods of time and invest more, relatively speaking, during periods when the market is trending lower. This is a tough model to construct because there are certain subjective features (how to normalize market value properly) and more complicated variables you have to account for like a negative ROI on holding cash.

All that said, I actually built a model fitting this description a while back and it works swimmingly.

kmm 1 day ago 0 replies      
I like the animation. You can see crises percolate backwards in time.
chae 22 hours ago 0 replies      
As somewhat alluded to in the article, some of the graphs here extrapolate historical data and assume it to be a good predictor of the future. Without an understanding about the deep nuances of economics - for which someone else can supply good or bad reasons why this or that might happen - there is no reason why the future should necessarily rely on anecdotal evidence.
nullc 1 day ago 0 replies      
These graphs with inflation-adjusted returns need a cash line, to show that in inflation-adjusted terms cash is not some magical safe option that doesn't go under 1.0.
meric 23 hours ago 0 replies      
This is great analysis. Keeping in mind there's more motivation to do this kind of analysis during the euphoria phase of the stock market than during a capitulation phase like 2009 or 2010, I think this shows investing in the long term will cancel out losses in the short term.
bllguo 1 day ago 0 replies      
Very cool project, but - and I mean no offense by this - these views are taught in basic finance! There are similar graphs in standard finance texts. So it surprises me that so many people here seem to be caught off guard.

I look forward to seeing where the project will be heading next. In fact it is inspiring me to do similar analyses.

plg 1 day ago 1 reply      
an oft-neglected nugget of info that has great bearing on a buy-and-hold approach, esp with mutual funds: FEES

Fees can kill your returns

There are many low-fee or no-fee options

tim333 1 day ago 0 replies      
If you want to know the returns for the next 7 years, GMO has quite a good track record, based on ratios being mean reverting http://www.gurufocus.com/news/349958/how-to-play-gmos-latest...
Halaoo 17 hours ago 0 replies      
Now if only the next 100 years will be the same as the last 100 years! Even though it's a totally different world.
pilom 1 day ago 1 reply      
If you want to run your own analysis about what retirement might look like for you, I recommend the extremely powerful http://cfiresim.com/
eevilspock 1 day ago 0 replies      
What of opportunity cost? That should be factored in just as inflation is.

> After 20 years, you're almost guaranteed to sell high.

Not if you factor in opportunity cost. If I invest $100K and after 20 years sell at $110K, which works out to about 0.5% a year, I wouldn't consider that selling high.
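For reference, the arithmetic in that example, computed as a compound annual growth rate (what the comment loosely calls APR):

```python
# Compound annual growth rate for the $100K -> $110K over 20 years example.
start, end, years = 100_000, 110_000, 20
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.3%}")  # → 0.478%
```

So the "0.5%" figure checks out, and it is well below even a modest savings rate, which is the opportunity-cost point.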

nradov 1 day ago 1 reply      
This is a silly analysis in that it ignores investment costs. You can't invest in the S&P 500. You have to either buy an index mutual fund, or buy the individual component stocks. Either way the investment costs will significantly cut into long term returns, even with a fund like VFINX.
javiramos 1 day ago 0 replies      
I have followed http://www.crossingwallstreet.com/ for years. I have traded the portfolio he lists and I've done pretty well. His focus is in the long-term...
styli 1 day ago 0 replies      
Any chance you can show us some of the source code? Curious to know how to build these types of graphs.
known 1 day ago 1 reply      
Just find out how to do insider trading without getting caught.

This is even better :)

"Give me control of a nation's money supply, and I care not who makes its laws." --Rothschild, 1744

thomaslieven 1 day ago 0 replies      
I've made similar visualizations in a backtest app which shows returns on cash invested > https://itunes.apple.com/cz/app/backtest-stock-asset-portfol...
socrates1998 1 day ago 2 replies      
There are a lot of issues with this data.

1) The last 100 years have ZERO chance of being the same as the next 100 years.

2) The S&P 500 adds and subtracts companies as they grow, fold and go bankrupt. So the only way this would maybe work is if you invested in the ETF.

3) Macro-economic events are notoriously hard to predict and anticipate, and your returns are very much tied to how you time your entry point. If you invested in the beginning of 2008, you would have a 33% return, same for a 2001 entry. In the beginning of 2009, around 100%.

Or a mid 2013 entry would get you a 17% return for three years, around 5% a year.

And if you invested in mid 2014, you haven't seen ANY return, 0%.

The point is that entry points are insanely important. Getting 33% return for 15 years is horrible, especially given the opportunity costs.

This is the problem you get when Warren Buffett fools you into "buy and hold". Yes, it has worked for him, but he is an insanely great investor. You can't replicate his returns. You can't even sniff his returns.

So what's the answer? I don't know, do what you want with your money, just don't listen to anyone who thinks they know the answer.

How to Compromise the Enterprise Endpoint googleprojectzero.blogspot.com
537 points by nnx  2 days ago   177 comments top 21
cypherpunks01 2 days ago 3 replies      
"Because Symantec uses a filter driver to intercept all system I/O, just emailing a file to a victim or sending them a link to an exploit is enough to trigger it - the victim does not need to open the file or interact with it in any way."

That seems big. Is there any precedent for AV software vulnerabilities of this scope?

verelo 1 day ago 14 replies      
So, what do people run on their servers / macbooks for AV? Anything?

I was in a meeting just last week with our new "head of Security", who exclaimed when I stated that neither our MacBooks nor our Ubuntu servers run any AV software (we run firewalls and things like fail2ban, but no traditional AV).

I know I'm going to get into a debate with them over this, so what would be a good 'win-win' position for me to fall back on to satisfy this point without cluttering my machines up with junk, if there is such a thing?

walrus01 1 day ago 3 replies      
From the perspective of a person who thankfully no longer has to support any Windows based platforms:

"Symantec considered harmful"

full stop.

Let's not forget this: http://arstechnica.com/security/2015/10/still-fuming-over-ht...

Symantec should have suffered the CA "death penalty" and had its trust removed from the browsers that hold most of the global market share.

tdullien 1 day ago 1 reply      
A note for everybody asking "why on earth does anybody run this software": When my company had to get corporate liability insurance in 2007/2008, the actual insurance contract stipulated "having AV installed on all machines". We did solve it by having an unused folder with ClamAV on every box, but I was impressed by the fact that AV is pretty much legally mandated for enterprises.
e40 2 days ago 6 replies      
It's hard for me to believe that anyone uses this crap software. A few years ago I spent hours uninstalling it for a friend. It had slowed his laptop to a crawl and he was about to buy a new one. After the uninstall, it was snappy enough to use for a few more years. Really, that software is some of the worst I've ever witnessed, and I've seen some shit.
paradite 1 day ago 3 replies      
I always wonder why, despite all these flaws and vulnerabilities, big enterprises still use them.

Is there some kind of "compliance" or "regulation" that requires companies to install them on every workstation?

tmandry 1 day ago 0 replies      
A bug in their software would be forgivable. This article pointed out both an extremely poor design decision (lots of unnecessary code in the kernel) and a serious organizational problem (not doing vulnerability management). These are especially bad considering that they're supposed to be a security company.

In both cases, one bad example means it's likely there are many more still undiscovered.

jacquesm 1 day ago 0 replies      
Anti virus is like a compromised immune system: it joins the other side and will help to kill the host in short order. It's a miracle these companies are still in business and it is very sad to see Peter Norton's name dragged through the mud like this over and over again.
yuhong 2 days ago 1 reply      
Win32k before Win10 used to do TrueType/Type 1 parsing in the kernel, with an entire bytecode virtual machine!
Kenji 2 days ago 0 replies      
I am not surprised in the least. Norton Antivirus is one of the worst of its kind. I've used it for many years. Every single virus/trojan/adware infection I got went straight through Norton Antivirus without it doing anything. Back as a kid I opened a lot of downloaded executables, like games, and some of them were infected. Later, I got more cautious with executables but got rid of all antivirus software - best software decision ever. My computers have never been faster.
wallflower 1 day ago 3 replies      
Many years ago, installing Malwarebytes Anti-Malware dramatically reduced the amount of on-site technical support calls for my well-meaning but too trusting ("I just clicked on it") parents. This was before I was able, with the help of my brother-in-law, to convert them to Apple/Mac.

Is Malwarebytes Anti-Malware still the gold standard for Windows Malware protection? What is the gold standard for Windows virus protection now?

ngneer 1 day ago 2 replies      
IMHO, the security industry has been guilty of adding complexity to existing systems rather than doing its duty of stripping it away.
electic 2 days ago 2 replies      
The software you buy to keep you safe actually exposes you to more risk than if you didn't buy it. How ironic.
sverige 1 day ago 3 replies      
Isn't Norton antivirus itself malware? And McAfee too, for that matter? I finally convinced my mom and my wife to stop downloading it every time they update Adobe Flash. (Yes, they still do that. On Windows of course. Sigh. One thing at a time.)
shortstuffsushi 1 day ago 3 replies      
Here's a question I have every time I see "RCE" type issues, and I'm completely serious when I ask: what is the use case for allowing remote execution in your software? Why would you want to allow arbitrary code to be executed? Or am I perhaps misunderstanding this, is it some sort of break out of the program bounds which allows execution?
a_c 1 day ago 2 replies      
What are the reasons one would want to use an antivirus? Can someone share some insight on how antivirus software actually works?
NetTechM 1 day ago 0 replies      
Quite a few major enterprises use SEP/SEPM in combination with other IPS/IDS. Time to make sure everything is updated I suppose. Good work project zero.
Jedd 1 day ago 3 replies      
In the fast-moving world of IT security, it's refreshing to see that Symantec's web site makes no mention of these profoundly important vulnerabilities on its landing page.

They don't seem to have any Status / Current Alerts style pages -- but on their somewhat hard to find blog page we find the most recent update from the guys is from two days ago:

"Malicious app found on Google Play, steals Viber photos and videos"


EDIT: Oh, they have a Vulnerabilities page - https://www.symantec.com/security_response/landing/vulnerabi... - with the most recent entries listed as 13 days ago (blimey that US mm/dd/yyyy date format is uncomfortable).

beedogs 1 day ago 0 replies      
I've been saying this for years, but when are people going to realize that running Norton on your PC is actually worse than not running AV software at all?
Figs 2 days ago 3 replies      
> googleprojectzero.blogspot.my

Why is this linked to on a .my domain? Is this an official mirror, or is there something sketchy going on here?

FuturePromise 1 day ago 1 reply      
Windows 10 has a built-in antivirus that's very effective, safe, and doesn't impact system usability. There's little reason for anyone on Windows 10 to run Symantec/Norton.
Exercise Releases Brain-Healthy Protein nih.gov
465 points by brahmwg  2 days ago   176 comments top 17
jey 2 days ago 15 replies      
The connection between exercise and brain health has now been super conclusively and astonishingly shown, but what are some possible evolutionary reasons for this connection?

Maybe the extra energetic investment in brain health is only justified when there is an energetic surplus, and doing exercise correlates with having extra energy available? (As opposed to starving and thus needing to conserve energy.) But that doesn't actually make sense, since the extra metabolic expenditure of these processes can't possibly be all that high, and doesn't the brain use almost the same (huge) amount of resources whether it's at rest or very cognitively active?

schappim 2 days ago 1 reply      
For all the entrepreneurs wanting to bottle Cathepsin B (the "Brain-Healthy Protein"), unfortunately it has also been linked with tumour invasion and metastasis[1]. Further study is needed to know if the increase in Cathepsin B is a factor or simply correlation.

[1] http://www.ncbi.nlm.nih.gov/pubmed/14587299

mathattack 2 days ago 3 replies      
There is a strange dichotomy here... I do believe that exercise helps thinking, but when I was growing up, very few of the smarter people were involved in sports, either formally or informally. Perhaps 1 in 20 was on a team, and few were that active otherwise. The link to drama and music was much bigger.
tmaly 2 days ago 2 replies      
I would be more interested in a discussion of the benefits of the different types of exercise. I get the sentiment about seeing the same kind of "exercise is good for you" article every week.

Which types help lower cholesterol?

Which types are best for the heart?

bronz 2 days ago 4 replies      
Well, I'm surprised to be the first person in this thread to ask this question (if I am not mistaken), but what about supplementing the protein instead of working out?
igorgue 2 days ago 5 replies      
Why do we have an article saying basically the same thing every single week here?

I get it, and I think everyone here has too: exercise is good for you, in many ways; don't sit on your ass all day.

I guess it's sad to see this becoming the "Good Morning America" of tech (not that we didn't have those already) or the ultra-localized we-only-care-about-the-bay-area medium.

googletron 2 days ago 2 replies      
This is exactly the message we are trying to get across at Gyroscope.


As you can see, that's me there playing sports that evening; it's super exciting and really keeps me focused. Granted, it's hard to take me seriously, but we have some users like Tatiana who have done some amazing stuff.


th0ma5 2 days ago 2 replies      
I had heard that even a brief bit of HIIT has all of the benefits of longer exercise times... Does this refute that or give evidence in support of that?
samuell 1 day ago 0 replies      
Related study from 2014, with more details on consequences, less on biological detail:

"Regular exercise changes the brain to improve memory, thinking skills"


davnn 2 days ago 2 replies      
Are there any studies on what kind of exercise, or what kind of intensity is most beneficial to overall or mental health?

Edit: Got back to my laptop and did some research myself.

Aerobic Exercise Training Increases Brain Volume in Aging Humans (and nonaerobic doesn't) - http://biomedgerontology.oxfordjournals.org/content/61/11/11...

Findings show that although the more intense, motorized running exercise induced a rapid increase in BDNF, the elevation was more short-lived than with voluntary running. Suggests that longer, easier exercises might be more beneficial. -

Exercise effects on executive function are not dose-responsive, meaning that better fitness does not necessarily lead to larger cognitive gains... physical activity levels that benefit cognition may not necessarily be as intense as those levels required to increase cardiovascular fitness. - https://www.researchgate.net/profile/Michelle_Ploughman/publ...

neither duration (20 vs. 40 min) nor intensity (60 vs. 80% HR reserve) significantly affects the benefits of exercise if only the sBDNF increase at a single post-exercise time point is considered... - http://www.jssm.org/research.php?id=jssm-12-502.xml

High intensity interval training evokes (slightly) larger serum BDNF levels compared to intense continuous exercise. - https://www.researchgate.net/profile/Nicole_Wenderoth/public...

This meta-analysis provides reliable evidence that both acute and regular exercise have a significant impact on BDNF levels. (but animal models show that these can be gone soon after you stop training) - http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4314337/

The effects of acute exercise on cognitive performance:A meta-analysis - https://www.researchgate.net/profile/Yu_Kai_Chang/publicatio...

I can recommend the meta-analysis (last link); the data is unfortunately somewhat inconclusive. There are basically two theories: the inverted-U theory, stating that moderate exercise is most beneficial, and the drive theory, stating that the largest effects are achieved at the highest intensity. Chang et al., however, conclude: "The size of the benefit is dependent upon a number of factors, but results indicate that benefits are larger for more fit individuals who perform the physical activity for 20 min or longer. The appropriate intensity depends upon the time of measurement: any intensity benefits cognitive performance during exercise, but lower intensities provide more benefit when the tests are performed immediately after exercise and higher intensities have more durable effects that can be observed even following a delay."

yarou 2 days ago 0 replies      
Exercise also increases BDNF gene expression, deficits of which are implicated in depression and addiction.
cpdean 2 days ago 0 replies      
pssh NICE TRY SCIENCE. You'll have to try a lot harder to trick me into exercising!
thehashrocket 2 days ago 1 reply      
Does it have to be a treadmill? I mostly do the elliptical (the ski thing, whatever it's called) since I'm not much of a runner. Just curious if I need to take it up a notch or not.
simonebrunozzi 2 days ago 0 replies      
Latin: "Mens sana in corpore sano" (healthy mind in a healthy body).
espeed 1 day ago 0 replies      
John Ratey (http://en.wikipedia.org/wiki/John_Ratey), the professor of psychiatry at Harvard Medical School who wrote Driven to Distraction, recently published a book called "Spark: The Revolutionary New Science of Exercise and the Brain" (http://www.amazon.com/Spark-Revolutionary-Science-Exercise-B...).

Spark details how high-intensity cardio (like sprints or interval training) puts your brain chemicals in balance, in part by generating BDNF (http://en.wikipedia.org/wiki/Brain-derived_neurotrophic_fact...), which, as Ratey describes, is like "Miracle-gro" for the brain.

Last year my stress levels were getting out of control from working too much. At the time I was running at least two miles every day so it's not like I wasn't exercising. But then one day I changed from running a couple miles to running 50-yard sprints, as fast and as hard as I could push myself. The first day I only ran four sprints, but I felt euphoric the rest of the day -- the best I had felt in years. So I tried it again a couple days later, and sure enough it worked again -- I felt amazing.

So then I had to find out why this worked -- why a few sprints were so much more effective than running several miles. I started Googling and eventually found Ratey's book -- it explains the entire biochemical process of what's going on and why sprinting works.

It's an eye-opening read. Each chapter covers how high-intensity cardio affects things like stress, anxiety, depression, and ADHD. I have ADHD but haven't taken anything for it in years (since I was in college), and I can attest that sprints not only fixed my stress levels but made my ADHD symptoms almost nonexistent.

Here's a key point that Ratey makes throughout the book that completely changed my perspective on things -- he says that instead of thinking of exercise as something you should do to look good and build a healthy body, you should instead think of exercise as the key to building a healthy brain:

"We all know that exercise makes us feel better, but most of us have no idea why. We assume it's because we're burning off stress or reducing muscle tension or boosting endorphins, and we leave it at that. But the real reason we feel so good when we get our blood pumping is that it makes the brain function at its best" (http://www.sparkinglife.org).

In the book's introduction he goes on to say, "Building muscles and conditioning the heart and lungs are essentially side effects. I often tell my patients that the point of exercise is to build and condition the brain."

In fact the brain exercise routine he recommends is similar to a weight workout routine, in that you have to push yourself hard one day, and then take a day off to let your brain recover, just like in weight training. Another key is when you sprint, always put everything you have into it. Run as fast and as hard as you can so you are constantly pushing your body and your brain past their limitations -- this is the key to growth.

Reposted from: http://news.ycombinator.com/item?id=5323019 (2013)

Also see: "How exercise boosts brain health" (http://www.kurzweilai.net/how-exercise-boosts-brain-health)

uptownfunk 2 days ago 6 replies      
If this is true, why are jocks commonly viewed as unintelligent?
nsxwolf 2 days ago 0 replies      
"Brain-healthy"... that's the BS language you hear in commercials for vitamins and supplements and breakfast cereals... "Heart-healthy whole grains", "Supports breast health", etc.
Languages Which Almost Became CSS eager.io
579 points by zackbloom  2 days ago   130 comments top 20
Animats 2 days ago 10 replies      
Layout should have been constraint-oriented, not procedural. You should be able to express "Bottom of this box is adjacent to top of that box", and such. All those constraints go into a constraint engine, and a layout is generated. This is something a WYSIWYG editor can generate.

To get a sense of how this could work, try sketch mode in Autodesk Inventor (there's a free 30-day demo). You can specify that a point must be coincident to an edge, that an edge must be coincident to an edge, that something must have specified dimensions, that some dimension must have a numerical relationship with another dimension, etc. Inventor goes further, supporting constraints on diagonal lines, circles, arcs, ellipses, etc. Whether layout should support curves is an interesting question, but the technology exists to make that work.

The people who designed HTML5 and CSS thought procedurally, not geometrically. It shows. Designers think geometrically, and a geometric constraint system would make sense to designers.
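A rough sketch of the difference (all names hypothetical; a real constraint engine such as Cassowary, the solver behind Apple's Auto Layout, handles arbitrary linear constraints, inequalities, and priorities rather than this naive propagation):

```python
class Box:
    def __init__(self, name, height):
        self.name, self.height, self.top = name, height, None

def below(a, b):
    """Constraint: top of `b` coincides with bottom of `a`."""
    def apply():
        if a.top is not None and b.top is None:
            b.top = a.top + a.height
    return apply

def solve(constraints):
    # Naive fixed-point propagation: keep applying constraints until
    # every derivable position has been filled in.
    for _ in range(len(constraints) + 1):
        for c in constraints:
            c()

header = Box("header", 60)
content = Box("content", 400)
footer = Box("footer", 40)

header.top = 0  # anchor one box; everything else follows from constraints
solve([below(content, footer), below(header, content)])  # order doesn't matter
print(footer.top)  # 460
```

The designer states relationships ("content sits below header"), and the positions fall out of the solver; no procedural placement code ever runs.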

frik 2 days ago 1 reply      
> Contrary to popular perception, Mosaic was not the first graphical browser. It was predated by ViolaWWW

Neither was ViolaWWW the first graphical browser.

In fact the very first browser by Sir Tim Berners-Lee was already a graphical browser (even with a WYSIWYG edit mode, later known from FrontPage/Dreamweaver) - made possible by the advanced NeXTSTEP operating system and its window-builder IDE (nowadays known as OS X/macOS and Xcode respectively): https://en.wikipedia.org/wiki/WorldWideWeb and https://en.wikipedia.org/wiki/NeXTSTEP

Grue3 2 days ago 7 replies      
DSSSL looks amazing. Truly a shame it didn't catch on. Maybe we'd have a client-side Lisp instead of Javascript too.
Semiapies 2 days ago 2 replies      
There was also JavaScript Style Sheets: https://en.m.wikipedia.org/wiki/JavaScript_Style_Sheets
schmudde 2 days ago 4 replies      
"HTML is the kind of thing that can only be loved by a computer scientist. Yes, it expresses the underlying structure of a document, but documents are more than just structured text databases; they have visual impact. HTML totally eliminates any visual creativity that a document's designer might have." - Roy Smith, 1993

Seems like an odd request in 1993. Sure, Prodigy had visual impact, but it was pretty hard to read. HTML's starkness seems in part due to the fact that styling options were limited by the technology and the ones that existed were easily abused.

Writing long sentences as headers (<h1>) or all-caps (caps-lock) were styling options at the time... and often misunderstood and abused.


ergothus 2 days ago 1 reply      
As a web developer, I see two main issues with web styling:

First, the web was built around sharing technical papers. That means HTML structure focuses on those elements that are relevant to papers (outline layout via H* tags, tables of data, not much else), and not the sort of things that marketing and sales want to push (ads, rails/gutters, etc). Those of us who suffered through the early "slice-and-dice" method of making web pages are painfully aware of that. I'm a big fan of technical papers, and my expectations of flash and glitz are minimal (I, for example, hate the modern trend of not using the full width of my window). Despite this, I feel we keep trying to stay true to the origins of the Web rather than allowing for the actual USE of the web.

Second, in an effort to keep the content machine-parseable as well as allow for agents of different devices, CSS is applied separately from content/structure (theoretically). Specifically, the concepts used DO NOT MATCH the concepts used in developing desktop applications. Even Flexbox, the most recent attempt to fix this, only loosely relates to the way desktop applications would lay out the content.

I'm a huge fan of the GOALS involved in HTML/CSS, but after working on web stuff for over 20 years (not using CSS quite that long), I feel I can say it's been a failure. We've spent all or most of that time with painful workarounds for basic tasks like: centering content (particularly vertically!), adding a left rail/right rail, filling the height of a container, matching the height of the visible window, making sure layered content is z-indexed properly - and those are just the ones off the top of my head. We've invented and reinvented ways to do things like drop-down menus, toggleable buttons, modal windows. Heck, from the very start people implemented their own authentication windows because the appearance and capabilities of the browser-based solutions didn't match the demands.

After 20 years, and with the benefit of all the experience of desktop development to add in, I feel like we shouldn't be fighting to manage such basic requests, that we shouldn't be reimplementing field validation and error messages YET AGAIN because even the latest advanced offerings just don't cut it.

We should be able to have:

* "flexible" content (appearances adjusts to visible space)

* machine parseable content

* attractive UI

...without it requiring the dramatic hoop-jumping we have today.

Touche 2 days ago 0 replies      
This is a fantastic lesson in history. I have nothing more to say than this is what I come to HN for.
paragraft 2 days ago 1 reply      
What a great post. I really appreciate these longer digs into the past that go behind the 'what' to explain the how & why of where we got where we are; and just as much the futures that could have been and why they didn't happen. I keep thinking there's room for a decent series discussing the evolution of Rust, since that design process was such a public thing.
freyfogle 2 days ago 1 reply      
I've never understood why you need CSS, I just do it all in VRML
dincer 2 days ago 0 replies      
@fat's talk about this [42:07]: https://www.youtube.com/watch?v=iniwPUEbPUM
koolba 2 days ago 0 replies      
> It is pretty clear how this proposal was made in the era of document-based HTML pages, as there is no way compromise-based design would work in our app-oriented world. Nevertheless, it did include the fundamental idea that stylesheets should cascade. In other words, it should be possible for multiple stylesheets to be applied to the same page.

> In its original formulation, this idea was generally considered important because it gave the end user control over what they saw.

Content (i.e. ad) blockers are a logical extension of this.

martyalain 2 days ago 1 reply      
Writing pages in a web site is a mess, polluted by different syntaxes: HTML, CSS, JavaScript, jQuery things, Markdown... It's a miracle that Wikipedia exists! Several attempts have been made to bring some unity, for instance Skribe, Scribble, LAML, SXML, but they generally lead to complex systems devoted to coders, forgetting web designers and, of course, beginners.

The {lambda way} project is built as a thin overlay on top of any modern web browser, and devoted to writing, composing and coding on the web, where the markup, styling and scripting are unified in a single language, {lambda talk}.

Commenting on this work, somebody wrote: Reminds me of John McCarthy's lament at the W3C's choice of SGML as the basis for HTML: "An environment where the markup, styling and scripting is all s-expression based would be nice."

The project can be seen here: http://epsilonwiki.free.fr/lambdaway/ or in https://github.com/amarty66.

Do you think that {lambda way} is on the good way?

petetnt 1 day ago 0 replies      
I love how Bert Bos' homepage pretty much uses CSS to the fullest extent possible (as it should) https://www.w3.org/People/Bos/
pablovidal85 2 days ago 2 replies      
I'd like to see CSS-shaped HTML like this:

  doctype html
  html {
    head {
      title Hello there
      script type='text/javascript' src='main.js'
    }
    body {
      p class='one two' {
        span Sample text
      }
    }
  }
Is there an HTML preprocessor using a similar language?

kowdermeister 2 days ago 0 replies      
To me, PSL looks the most promising; at least the conditionals would have come in handy. CSS only 'recently' got features like the calc() function, which is a blessing.

However, I have to agree with the decision to put it aside, because remember how implementing something seemingly simple like CSS went in the days of IE 5 and 6. It was a disaster, and something more complex like PSL would have been even worse.

throwanem 2 days ago 1 reply      

> When HTML was announced by Tim Burners-Lee


zackbloom 2 days ago 0 replies      
If anyone missed it, there's a discussion with Robert Raisch, the developer who made the first stylesheet proposal, in the article's comments.
intrasight 2 days ago 0 replies      
And several years before that, there was Motif Toolkit.
inopinatus 2 days ago 0 replies      
Love the RELEVANCE selector in CHSS. Could use that today.
FuturePromise 1 day ago 2 replies      
I spent a long time learning XSL/XSLT. The theory was we'd represent the data on a web page with XML, and then determine how it's to be displayed with XSL/XSLT. I think browsers still support it, but it never caught on.
NSA tracking cellphone locations worldwide, Snowden documents show (2013) washingtonpost.com
385 points by randomname2  2 days ago   172 comments top 18
datamoshr 2 days ago 17 replies      
The thing that bothers me most about this as a European is: I have zero say in this. In the US you can strike out against surveillance; you can write to senators, protest against terrible legislation - actually have a voice, however faint it is. Whereas I don't get a say, but get the exact same treatment from your country. The Five Eyes have made me paranoid, and the only escape seems to be downgrading your phone to a brick and carrying it in a Faraday cage. We may as well just go back to plain old telephones.
cdevs 2 days ago 2 replies      
Man I will forever be grateful for the eye opening insights Snowden has provided to us. I now check for https and use tor and always block cookies. How is there not a monument in every city dedicated to this hero?
schoen 2 days ago 0 replies      
Apart from the political conversation I've always tried to encourage a technical conversation about how our mobile phone infrastructure is really terrible for privacy on many levels.

CCC events have had many presentations about this in the last few years, about IMSI catchers and mobile crypto attacks and abusing roaming mechanisms and databases. And it seems there's more where that came from; the system is wide open in many respects to exploitation by a sophisticated attacker, governmental or not. (I read somewhere that people in China are buying and deploying IMSI catchers in order to send SMS spam to passersby.)

Some of the privacy problems are a result of economic factors including backwards compatibility and international compatibility goals. Some of the bad decisions for privacy were made by or at the behest of intelligence agencies, and some of those decisions are continuing to be made in standards bodies that deal with mobile communications security. Ross Anderson described some spy agency influence in early GSM crypto conversations (which is one reason A5/1 is so weak), and it's still happening at ETSI now.

I support political criticism of surveillance activities, but at moments when people feel overwhelmed and powerless, there is another front, which is trying to clean up the security posture of mobile communications infrastructure, or provide better alternatives to it.

We can find lots of reasons why this is hard ("Bellhead" communities are much less ideologically committed to privacy and opposed to surveillance; communications infrastructure is highly regulated in many places, and it's hard to get access to radiofrequency spectrum; people want worldwide compatibility; there's a huge installed base on both the client and server sides; many of the infrastructure providers around the world are directly beneficially owned by governments; spy agencies do actively try to influence standards-setting in this area, plus sabotaging implementations and stealing private key material) and it's probably going to stay hard. But maybe some of the people reading this are going to some day be tech billionaires or working in or running companies that have significant influence in the telecommunications space, and be in a position to personally make future generations of communication technology take privacy and security seriously.

jacquesm 2 days ago 8 replies      
How would America respond if it found out that say the UK is tracking cellphones worldwide, except for British subjects of course, but including all Americans on American soil?
bnastic 2 days ago 1 reply      
> The NSA cannot know in advance which tiny fraction of 1 percent of the records it may need, so it collects and keeps as many as it can: 27 terabytes [...] The location programs have brought in such volumes of information, according to a May 2012 internal NSA briefing, that they are "outpacing our ability to ingest, process and store" data

27TB doesn't sound like much, even by 2012 standards. The article doesn't specify whether this is the total size, just the delta over some period of time, or something entirely different. Certainly not something the NSA would "struggle to ingest"?

exabrial 2 days ago 3 replies      
Okay I hate to be the one to break the news to everybody here, but if you have a GSM phone this is quite trivial to do.

The NSA hasn't done anything groundbreaking here, except maybe a Google search.

ffggvv 2 days ago 2 replies      
This shows clearly that Putin is a dictator, that China is communist and that the USA spys the entire world to protect its citizens freedom. /s
bicubic 2 days ago 1 reply      
I'm on mobile and don't have any links handy, but it's fairly well known that you can ostensibly track every. single. handset. in the world if you can gain access to any one carrier's infrastructure. You can bet every spy agency from every country is doing this.
furyg3 2 days ago 1 reply      
If Americans (of which I am one) and the US government believe that it is self-evident that all men are created equal, then surely they should apply the principles that they have enshrined in their constitution to all equals when dealing with them, regardless of whether or not they are a US Citizen or where on earth they are.
jefe_ 2 days ago 2 replies      
Not long ago you could buy SIM cards from kioskos, corner stores, etc. This is becoming increasingly rare, even in places with otherwise poor infrastructure. The shift was rapid but noticeable.
progx 2 days ago 2 replies      
And why do we still have terrorists?
cryoshon 2 days ago 2 replies      
they are tracking our every single movement, and aspire to track our every single thought

we have a duty to resist this totalitarianism by any means possible or necessary; fascism is here, and free men can't delude themselves with hoping for gradual change to the contrary any further.

tmaly 2 days ago 0 replies      
I am really amazed that more people are not outraged at this. When things like the Pentagon Papers came out or when Watergate hit the news, people reacted and change happened.

Today everything gets buried in a sea of noise and entertainment. Long term, it cannot be good for the general health and welfare of society not to ponder and discuss these things.

mouzogu 2 days ago 2 replies      
So now they can detect the location of the "target" and send a drone to kill them from the convenience of their office, before going on lunch break.

This really sickens me. They are inferring so much, and as a result many innocent people suffer and will continue to suffer.

To me this system is pure evil, however much the NSA tries to sugar-coat or spin it. Who gave them the right to do this, to track people around the world and in many cases perform extrajudicial assassinations?

mSparks 2 days ago 1 reply      
I wonder if they manage to track my cell phone more accurately than my cell phone manages to track itself.

My phone rarely seems to be sure what country it is in, let alone which town.

Simply lost count of the number of times I've been like, "yeah, I'm sure the weather is lovely where I was a week ago, but I'm more interested in where I am now".

Must be quite depressing for the NSA analysts stuck in their cubicles watching people run around the world having fun while they stuff another donut down their fat American faces.

anonbanker 1 day ago 0 replies      
..And people ask me why I don't have a cellphone in 2016.
kseistrup 2 days ago 0 replies      
Please edit the title to reflect the fact that the article is dated December 2013.
duncan_bayne 2 days ago 1 reply      
Well, in fairness, that's more aligned with their actual mission, not to mention legal under American law.

This isn't great news as a non-American, but from their perspective, surely this is just the NSA doing its job?

Should you encrypt or compress first? appcanary.com
460 points by phillmv  2 days ago   234 comments top 33
nightcracker 2 days ago 6 replies      
There's no compress or encrypt _first_.

It's just compress or not, before encrypting. If security is important, the answer to that is no, unless you're an expert and familiar with CRIME and related attacks.

Compression after encryption is useless, as there should be NO recognizable patterns to exploit after the encryption.
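A quick way to see this for yourself, using os.urandom as a stand-in for the output of a sound cipher (good ciphertext should be indistinguishable from random bytes):

```python
import os
import zlib

# 100 KB of high-entropy bytes stands in for ciphertext.
ciphertext_like = os.urandom(100_000)
# A similar amount of repetitive plaintext, for comparison.
plaintext_like = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 2000

random_out = zlib.compress(ciphertext_like, 9)
text_out = zlib.compress(plaintext_like, 9)

print(len(random_out) >= len(ciphertext_like))   # True: no patterns left to exploit
print(len(text_out) < len(plaintext_like) // 10)  # True: redundancy compresses away
```

Patternless data can't be shrunk, so running a compressor over ciphertext only adds overhead; any redundancy has to be squeezed out before encryption, if at all.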

vog 2 days ago 2 replies      
A more interesting question is whether to compress or sign first.

There's an interesting article on that topic by Ted Unangst:

"preauthenticated decryption considered harmful"


EDIT: Although the article talks about encrypt+sign versus sign+encrypt, the same argument goes for compress+sign versus sign+compress. You shouldn't do anything with untrusted data before having checked the signature - neither uncompress nor decrypt nor anything else.

mjevans 2 days ago 0 replies      
Where everyone seems to be getting confused is handling a live flow versus handling a finalized flow (a file).

* Always pad to combat plaintext attacks; padding in theory shouldn't compress well, so there's no point making the compression less effective by processing it.

* Always compress a 'file' first to reduce entropy.

* Always pad-up a live stream, maybe this data is useful in some other way, but you want interactive messages to be of similar size.

* At some place in the above also include a recipient identifier; this should be counted as part of the overhead not part of the padding.

* The signature should be on everything above here (recipients, pad, compressed message, extra pad).

* It might be useful to include the recipients in the un-encrypted portion of the message, but there are also contexts where someone might choose otherwise; an interactive flow would assume both parties knew a key to communicate with each other on and is one such case.

* The pad, message, extra-pad, and signature /must/ be encrypted. The recipients /may/ be encrypted.

I did have to look up the sign / encrypt first question as I didn't have reason to think about it before. In general I've looked to experts in this field for existing solutions, such as OpenPGP (GnuPG being the main implementation). Getting this stuff right is DIFFICULT.
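A minimal sketch of the compress -> pad -> sign ordering described above. The bucket size and keys are invented, and a real system would encrypt the payload plus tag afterwards using a vetted library rather than rolling its own:

```python
import hashlib
import hmac
import os
import struct
import zlib

def pad_to_bucket(data: bytes, bucket: int = 256) -> bytes:
    # Length-prefix the data, then pad with random bytes up to a multiple
    # of `bucket`, so observers learn only a size bucket, not exact length.
    framed = struct.pack(">I", len(data)) + data
    return framed + os.urandom((-len(framed)) % bucket)

def unpad(blob: bytes) -> bytes:
    (n,) = struct.unpack(">I", blob[:4])
    return blob[4:4 + n]

mac_key = os.urandom(32)  # hypothetical key; a real system derives keys with a KDF

message = b"attack at dawn " * 10
payload = pad_to_bucket(zlib.compress(message))            # compress, then pad
tag = hmac.new(mac_key, payload, hashlib.sha256).digest()  # sign pad + message
# ...payload and tag would then be encrypted together before sending...

assert len(payload) % 256 == 0
assert zlib.decompress(unpad(payload)) == message
```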

Animats 2 days ago 0 replies      
This is why military voice encryption sends at a constant bitrate even when you're not talking. For serious security applications where fixed links are used, data is transmitted at a constant rate 24/7, even if the link is mostly idle.
dietrichepp 1 day ago 0 replies      
Wow, what a trainwreck. So many comments in here talking about whether it would be possible to compress data which looks like uniformly random data, for all the tests you would throw at it. Spoiler alert, you can't compress encrypted data. This isn't a question of whether we know it's possible, rather, it's a fact that we know it's impossible.

In fact, if you successfully compress data after encryption, then the only logical conclusion is that you've found a flaw in the encryption algorithm.
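This is easy to verify empirically; a quick sketch, using random bytes as a stand-in for well-encrypted output (good ciphertext is indistinguishable from random):

```python
import os
import zlib

plaintext = b"hello, this line repeats a lot. " * 300
random_like = os.urandom(len(plaintext))  # stand-in for ciphertext

# Redundant plaintext compresses dramatically...
assert len(zlib.compress(plaintext)) < len(plaintext) // 10
# ...while random/"encrypted" data does not shrink at all (it even grows
# by a few bytes of container overhead).
assert len(zlib.compress(random_like)) >= len(random_like)
```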

kinofcain 2 days ago 0 replies      
Also interesting is which compression algorithm you're using. HPACK Header compression in HTTP 2.0 is an attempt to mitigate this problem:


tomp 2 days ago 6 replies      
I don't understand... Why couldn't you do CRIME with no compression as well? Assuming you can control (parts of) the plaintext, surely plaintext+encrypt gives you more information than plaintext+compress+encrypt?
arknave 2 days ago 1 reply      
I picked up on the reference to Stockfighter, but does anyone know if the walking machine learning game mentioned at the end of the article exists? Sounds like a fun game.
jakozaur 2 days ago 2 replies      
Would adding some tiny random size help? Based on my poor understanding, if after compressing, but before encrypting, we add a random 0 to 16 bytes or 1% of the size, that could defeat quite a lot of attacks (like CRIME).
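A sketch of what that jitter could look like (the framing format is invented). One caveat: because the padding is random, an attacker who can trigger many requests can average it out, so this raises the cost of CRIME-style attacks rather than eliminating them:

```python
import os
import zlib

def compress_with_jitter(data: bytes, max_pad: int = 16) -> bytes:
    # Compress, then append a random tail of 0..max_pad bytes plus a
    # one-byte pad-length marker, blurring the exact compressed length
    # before the whole blob is encrypted.
    pad = os.urandom(os.urandom(1)[0] % (max_pad + 1))
    return zlib.compress(data) + pad + bytes([len(pad)])

blob = compress_with_jitter(b"some secret message")
pad_len = blob[-1]
assert 0 <= pad_len <= 16
assert zlib.decompress(blob[:len(blob) - 1 - pad_len]) == b"some secret message"
```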
IncRnd 2 days ago 0 replies      
The question is flawed. The correct answer is a series of questions: Who is the attacker? What are you guarding? What assumptions are there about the operating environment? What invariants (regulations, compliance, etc.) exist?

There may be compensating controls that invalidate the perceived need for encryption or compression, for example. In other words, don't design in the dark.

Of course, the interviewer may just want a canned scripted answer - but the interview is your chance to shine, showing how you can discuss all the angles.

spatulon 2 days ago 0 replies      
That was a fun read. Do I detect a nod to tptacek's "If You're Typing the Letters A-E-S Into Your Code You're Doing It Wrong"?


biokoda 2 days ago 1 reply      
If you're compressing audio, the simple solution is to compress using constant bitrate.
js2 2 days ago 0 replies      
The paper cited in this article (Phonotactic Reconstruction of Encrypted VoIP Conversations) really deserves to be highlighted, so I submitted it separately:



jayd16 2 days ago 0 replies      
Would be great if Apple understood this and compressed IPA contents before encrypting.

Instead, when you submit something to the AppStore, you end up with a much bigger app than the one you uploaded.

To add insult to injury, if you ask Apple about this fuck up you get an esoteric support email about removing "contiguous zeros." As in, "make your app less compressible so it won't be obvious we're doing this wrong."

poelzi 2 days ago 0 replies      
if your compression can compress your encrypted data, you should change your encryption mechanism to something that actually works...
em3rgent0rdr 2 days ago 0 replies      
What if you compress and then only send data at regular periods and regular packet sizes? That way no information can be gleaned. E.g. after compressing you pad the data if it is unusually short, or you include other compressed data too, or you only use constant bit-rate compression algorithm.
Qantourisc 2 days ago 1 reply      
Maybe we need encryption that also plays with the length of the message, or randomly pads our data before encryption? I am however no expert, so I have no clue how feasible, or full of holes, this method would be.
hueving 2 days ago 2 replies      
That quoted voip paper isn't actually as damaging as it sounds. IIRC that 0.6 rating was for less than half of the words so if you're trying to listen to a conversation to get something meaningful, it's probably not going to happen.
itsnotvalid 1 day ago 0 replies      
I am always thinking that if the compression scheme is known, you would need a good nonce to avoid known plaintext (for example, the compression format's header is always the same), and also to defend against CRIME, which works by recovering the compression dictionary.

I think it is best to use the encryption program's built-in compression scheme and let it compress first, as those often take these issues into account (and the header is not leaked, since only the content is encrypted).

panic 2 days ago 1 reply      
Has there been any research into compression that's generally safe to use before encryption? E.g., matching only common substrings longer than the key length would (I think?) defeat CRIME at the cost of compression ratio.
cm2187 2 days ago 2 replies      
Can't you just add some random length data at the end. You are defeating compression a little bit, but are also making the length non deterministic. I thought pgp did that.
arielweisberg 2 days ago 0 replies      
So what does this mean if I am using an encrypted SSL connection that is correctly configured?

Is this kind of problem not already dealt with for me by the secure transport layer? It would be a shame if the abstraction were leaky. My understanding of the contract is that whatever bits I supply will be securely transported within the limits of the configuration I have selected.

If I pick a bad configuration then yes shame on me, but a good configuration won't care if I compress right?

gravypod 2 days ago 4 replies      
Logically speaking, an encrypted file should have a high entropy set of bits within it. Compressing it would be low return, but higher security since the input file contained more "random" bits.

Compressing the source material will yield smaller results but will be more predictable as the file will always contain ZIP headers and other metadata that would possibly make decryption of your file much easier.

jtolmar 2 days ago 0 replies      
If I compress each component (ie: attacker-influenced vs secret) separately, concatenate the results (with message lengths of course), then encrypt the whole message, is that secure?

It seems like it should be, but I'm not an encryption expert. The compression should be pretty good, though.

khc 2 days ago 0 replies      
> The paper Phonotactic Reconstruction of Encrypted VoIP Conversations gives a technique for reconstructing speech from an encrypted VoIP call.

The technique for reconstructing speech clearly had its limitations.

draugadrotten 2 days ago 1 reply      
This blog is an interesting way to advertise to their target market: us.
gameofdrones 2 days ago 1 reply      
kstenerud 2 days ago 1 reply      
So if the length of the resulting message is leaking information, salt it by adding some extra random bits to the end to increase the length by a random amount.
arjie 2 days ago 1 reply      
None of this seems to apply to documents you generate to supply to someone else you trust. Compress and encrypt seems perfectly fine.
FuturePromise 2 days ago 1 reply      
Given the real risk of CRIME attacks, are there "compression aware" encryption algorithms?
justinzollars 2 days ago 0 replies      
vox_mollis 2 days ago 1 reply      
A lot of comments here suggesting that encryption increases entropy. While true, it only adds the key's entropy to the plaintext's entropy. In most real-world cases, len(m) >> len(k), so this is usually an insignificant increase of entropy. Compression also adds a trivial amount of entropy (specifically, the information encoding the algorithm used to compress, even if that information is out of band).
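The entropy point can be eyeballed with a quick empirical sketch (uniformly random bytes stand in for ciphertext; the test corpus is made up):

```python
import math
import os
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Empirical Shannon entropy in bits per byte (maximum 8.0)."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

text = ",".join(str(i) for i in range(5000)).encode()  # digits and commas only

assert byte_entropy(text) < 4.0                 # few symbols: low entropy per byte
assert byte_entropy(zlib.compress(text)) > 7.0  # compression packs bits densely
assert byte_entropy(os.urandom(1 << 16)) > 7.9  # ciphertext-like: near-maximal
```

Note this measures entropy density (bits per byte), not the total information content the parent comment is talking about; compression raises the former without adding the latter.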
usloth_wandows 2 days ago 4 replies      
I thought this was common sense. Compress then encrypt. Encryption leads to higher entropy, therefore less effective compression.
Ebay posts every character a user types into the password box slashcrypto.org
489 points by slashcrypto  1 day ago   198 comments top 33
0942v8653 1 day ago 3 replies      
There is also the possibility of timing attacks on either type of request. By the length you can tell when the HTTPS request is most likely POST /PWDStrength, and from the times that the request is initiated, you can guess at some characteristics of the password (maybe they stopped typing for a second to verify requirements after typing 7 characters; maybe they stopped after 8 because they have to move to the numpad on their keyboard).

edit: the best solution for this is probably to wait a specified amount between requests, rather than doing it with each character.
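That mitigation can be sketched as a fixed-interval sender. This is a toy model (the interval and names are invented, not anything eBay actually does), with an injectable clock so the behaviour is testable:

```python
import time

class FixedIntervalSender:
    """Buffer keystrokes and flush at a fixed interval, so request timing
    reveals nothing about typing rhythm. A real client would also pad each
    request to a constant size."""

    def __init__(self, send, interval=1.0, clock=time.monotonic):
        self.send = send
        self.interval = interval
        self.clock = clock
        self.buffer = ""  # full prefix typed so far, resent on each flush
        self.last_flush = clock()

    def on_key(self, ch: str):
        self.buffer += ch

    def tick(self):
        # Called periodically; flushes only when the interval has elapsed.
        if self.clock() - self.last_flush >= self.interval:
            self.send(self.buffer)
            self.last_flush = self.clock()

# Simulate four keystrokes within 1.5 seconds using a fake clock.
sent = []
fake_time = [0.0]
s = FixedIntervalSender(sent.append, interval=1.0, clock=lambda: fake_time[0])
for t, ch in [(0.1, "h"), (0.2, "u"), (0.9, "n"), (1.5, "t")]:
    fake_time[0] = t
    s.on_key(ch)
    s.tick()
assert sent == ["hunt"]  # one request at the interval, not four per-key requests
```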

edibleEnergy 1 day ago 9 replies      
I reproduced it for fun with BugReplay, the site I've been working on for the past year: https://app.bugreplay.com/shared/report/3efa632d-5b51-45f1-a... Checks out, password is in the GET param.
supernintendo 1 day ago 1 reply      
Dear eBay,

Sending a request on each keyboard event to determine password strength is not only a security vulnerability, it's also poor design. APIs should primarily be used to consume external resources, not stand in for client side functionality.

If providing an API for password strength is important (i.e. you want to guarantee the same behavior across clients), think of your business logic as a resource and not a service. Rather than force the API to figure it out, have the API deliver the criteria for this behavior (regex strings, bounds of password length, etc.) and let your clients figure it out. This addresses the security concern, decouples your client side and server side logic and improves performance across the board by reducing network requests and absolving the server of this responsibility.

If you must go with this design, at least move from a `GET` to `POST` like others are suggesting.

Just my opinion,
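A sketch of the "deliver the criteria as data" idea; the policy fields and values here are invented, not eBay's actual API:

```python
import json
import re

# Hypothetical policy document the server delivers once, instead of
# receiving every keystroke.
POLICY = json.loads("""{
  "min_length": 8,
  "max_length": 64,
  "required_patterns": ["[a-z]", "[A-Z]", "[0-9]"]
}""")

def password_ok(pw: str, policy: dict) -> bool:
    # Client-side evaluation: the secret never leaves the machine.
    if not policy["min_length"] <= len(pw) <= policy["max_length"]:
        return False
    return all(re.search(p, pw) for p in policy["required_patterns"])

assert password_ok("Tr0ub4dor&3", POLICY)
assert not password_ok("short", POLICY)
```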


snorremd 1 day ago 1 reply      
Sending your password as you type it as a GET request query parameter seems awfully hazardous. As you point out the password will appear in all manner of places, such as HTTP server logs. As the username/email is not included an ops person might not directly know from the GET request alone what user the password belongs to. It is not difficult to imagine however that they have enough info to correlate the IP address of the password strength request with a user.
wyldfire 1 day ago 1 reply      
> This is not a security vulnerability itself because I think they have implemented this for some reason

IMO just because the behavior is by design doesn't mean it's not a vulnerability. That said, this one seems like a grey area. I'd be worried about password information leaking by making TLS attacks easier in this mode.

esnard 1 day ago 2 replies      
Twitter also sends the password + email + name on each keypress once the user has entered at least 6 characters on its signup page. [0]

 [0]: https://twitter.com/signup

wimagguc 1 day ago 7 replies      
> there are some reasons behind our current solutions but I wouldn't be able to give you more details on it.

I'd be curious to know if anyone here can come up with a good enough reason for sending out the user's email & their password(-prefix) at every keystroke?

jfahrenkrug 1 day ago 0 replies      
"Let us help you make your password more secure by sending it over the wire a gazillion times."
chrisxcross 1 day ago 1 reply      
Google does the same. They regularly send your password to their server to rate it. A curl example is provided below.

I think I've noticed some websites using Google's API to rate passwords on their own sites, but I can't recall where I saw it.

curl 'https://accounts.google.com/RatePassword' -H 'Content-Type: application/x-www-form-urlencoded' --data 'Passwd=jbcfaihrwefgbGWETZHGAESjbnajfcw24704%$&%!vf&Email=notme@useless.domain&FirstName=Hacker&LastName=News'

or another endpoint:

curl 'https://accounts.google.com/InputValidator?resource=SignUp' -H 'Content-Type: application/json' -d '{"input01":{"Input":"Passwd","Passwd":"GoogleBatteryHorseStaple","PasswdAgain":"GoogleBatteryHorseStaple","FirstName":"Hacker","LastName":"News","GmailAddress":"i-have@none.yet"},"Locale":"en"}'

cm3 1 day ago 1 reply      
Why is it that we didn't improve HTTP Digest Auth, but instead let everyone implement their own mechanism, where the number of sites using a challenge-response protocol is not worth a mention? Do we have to wait until 2018 before https://tools.ietf.org/id/draft-yusef-httpauth-srp-scheme-00... can be a thing? Not saying SRP is the best option, but compared to what's implemented on websites right now, it is much better.

EDIT: I probably am missing details, but surely some secure challenge response protocol must be available for broad implementation in browsers without concern for patents, right?

Pxtl 1 day ago 2 replies      
For those who didn't read TFA - it does this for the password strength checker when creating a new password, not when logging in.

Honestly, I can see the challenge here. A truly robust password strength checker would use dictionaries, making it too heavy to run on the client, and for usability reasons you'd want it to check on keypress.

But it would be nice at the very least if they'd send it as POSTs in the body, not GET parameters.
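The GET-versus-POST distinction is easy to make concrete: an access log records the request line, so a secret in the query string gets logged while a secret in the body does not. A sketch (the path and parameter names are invented):

```python
from urllib.parse import urlencode

secret = "hunter2"  # hypothetical password fragment

# GET: the secret becomes part of the request line, which is exactly
# what server and proxy access logs record.
get_line = "GET /PWDStrength?" + urlencode({"pwd": secret}) + " HTTP/1.1"

# POST: only the path reaches the log; the secret rides in the body.
post_line = "POST /PWDStrength HTTP/1.1"
post_body = urlencode({"pwd": secret})

assert secret in get_line       # leaks into logs
assert secret not in post_line  # the body is not part of the logged line
```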

brown9-2 1 day ago 3 replies      
Parameters sent via GET can get cached by proxies and they appear in log-files.

Not to argue in favor of sending sensitive data via GET, but I think it is worth pointing out that third-party proxies cannot see the URL or other parts of the HTTP headers or body when the connection is using HTTPS.

splatcollision 1 day ago 1 reply      
I knew there was a reason I always prefer POSTing data as opposed to GET query params.

It still gives attackers the knowledge that if they can get access to the logfiles, they can see passwords. Then the problem becomes getting access to the logfiles!

Any leak of relevant information about security is of potential value.

tempVariable 1 day ago 0 replies      
I did some penetration testing on the Snapchat application last year. It was also very chatty on every key-press on the create/login screens.
athenot 1 day ago 0 replies      
I hope they pad the requests with some random data, otherwise they are sending encrypted requests with very little entropy.
marme 1 day ago 0 replies      
They almost certainly do this to detect bots trying to change passwords. If a bot tries to change passwords for hundreds of accounts at once, it will end up sending thousands of requests to the password checker and be IP banned, and eBay can silently reject every password it tries to submit, so as not to tip off the attacker that they have been detected.

It is a terrible way to implement bot detection, but with eBay owning PayPal they are on the hook for lost revenue, so bot detection probably takes higher priority than other security concerns, given how bad the economic impact of bots stealing hundreds or thousands of accounts at a time is for them.

mdpm 1 day ago 0 replies      
How has no-one here made the observation that the reason for this is true password-strength checking, which uses existing password-distribution data that is prohibitively large to send to the browser?

They're not doing the wrong thing, and the risk of side-channel attacks on this infrequent behaviour (i.e., not authentication) are trivial compared to the risks of high entropy passwords that are also highly reused, and are thus vulnerable to trivial brute force attempts.

hyperion2010 1 day ago 1 reply      
One that I use regularly that seems to be missing is'set <N> <hour/minute> timer'. Also and amusing request from my father (who uses voice commands far more than I do): 'is there some way to print all of these out?'
iask 1 day ago 0 replies      
After reading the public post and these comments, do you think they (eBay) will give a better explanation...or better, an explanation...as to why they do this? Passwords are becoming difficult to maintain, even with a password manager. They should've, at least, obfuscated it in some way.
simbalion 1 day ago 0 replies      
I wonder if there is any example of a large corporation taking action after a flaw is submitted via an online email form? I think those forms are sent to people whose job it is to disregard their content as much as possible.
hartator 1 day ago 7 replies      
> Checking the password completely on the server is OK

I don't even agree with that; I think the best practice should be to hash it on the client side before sending it to a server.

ivanhoe 1 day ago 3 replies      
How do you know that they didn't disable logs for this URL?
paulddraper 1 day ago 0 replies      
The HN title seems odd. Most websites send all the characters (unless I suppose backspace is used).
chflags 1 day ago 1 reply      
So it's not possible to log on without enabling Javascript?

I guess that's one way to coerce the user into enabling Javascript, at least temporarily.

foobar20202 1 day ago 0 replies      
As an attacker this gives me information about how many characters are in the password, which can be quite useful information.
JamesUtah07 1 day ago 0 replies      
It'd be nice if they additionally implemented https everywhere while they're at it
fh973 1 day ago 2 replies      
Bot detection?
cocotino 1 day ago 3 replies      
Bad title: it works only while you have the password field focused.

Bad content: when you log in or register you send your password to the servers anyway. It's irrelevant, since all connections (as shown in your post) are made with https.

One could argue "they are seeing what you write even if you haven't sent it yet", but meh, it's just a damn password field, not a chat field.

So bad, bad, bad.

olantonan 1 day ago 2 replies      
> The main point I think is, that GET Requests are logged in log-files which are usually accessible by more people than the main database.

This is an outright assumption, and it's a bad one.

This is a non-issue, because they do NOT log these requests, and it's https.

So move on, this is just noise.

chris_wot 1 day ago 0 replies      
All those people who think that Amazon don't want anyone to work out their password complexity algorithm... You just generate a script that works out the minimum number of characters and then submit a password list to the strength service. Then you'll know all the strongest passwords according to Amazon, and from here you can hopefully find patterns to construct rules around running dictionary cracks.
venomsnake 1 day ago 2 replies      
If someone has broken ebay https they will surely be able to catch the whole password at the end.
emeraldd 1 day ago 1 reply      
Today's xkcd is surprisingly relevant:


bru 1 day ago 0 replies      
> why the website is sending docents of requests to Ebays servers.

"dozens" maybe?

D3 v4.0.0 released github.com
431 points by aw3c2  2 days ago   94 comments top 18
yoavm 2 days ago 3 replies      
D3 has the reputation of being super-complicated because of all the libraries that are based on it, "simplifying" it so that everyone can use it. In the past year I wanted to create a pretty unique type of data visualisation, so I dived into D3 and discovered it makes a lot more sense than I thought. Of course, if you only want a regular bar chart, you'll do better with things like C3, NVD3, etc. But if you want anything a bit special, D3 itself is very powerful and the documentation is pretty good - there's no reason to avoid using it directly.

Definitely looking forward to trying the new release.

monfera 2 days ago 2 replies      
Fantastic work! D3 has a large API surface area, due to the functionality it covers. So even a relatively conservative major upgrade is a TON of work, and some of it might not have been pure fun.

1) What's your opinion on the apparent fact that the bulk of changes come from you, given how much society promotes teamwork? Is it a case of 'small team efficiency'? (Btw. I know of others' contributions too, and the reliance on Rollup, ColorBrewer, matplotlib/Viridis etc.)

2) What kept you going? It's probably at least a person-year of work just on your side that's not paid for by companies, not to mention some kind of startup bet. Have you alternatively toyed with the idea of starting a value-added layer, since so many for-profit organizations benefit from your work?

3) While it changes the API, and most everything is rewritten, it's IMO a conservative upgrade in that it doesn't steer away from familiar concepts and structure. It still does DOM binding, transitions, layout etc. fairly similarly. Have you considered much more disruptive departures? As an example, React's DOM diffing is more radically different from D3 selections than the change from D3 3.* to 4.* or the Grammar of Graphics as an API concept is more radically different from D3 than the 3.* -> 4.0 API changes. Or something like moving away from the 'functional objects' concept in favor of curried, fixed-argument pure functions, perhaps prefaced with the familiar chaining API would have been a large change (these are mostly examples rather than wishlist). What were YOUR ideas, had you wanted more disruptive changes, or 'D4'?

a_humean 2 days ago 1 reply      
Changelog (weekend reading): https://github.com/d3/d3/blob/master/CHANGES.md

Really excited about this release. It seems like nothing has been left untouched with major changes to everything from selections to layout generations and the modularisation of the entire lib (think lodash).

There are major namespace changes with basically everything being flattened (d3.geom.voronoi -> d3.voronoi) and many of the typical patterns will have to be relearned (major changes to selections), so I don't see myself upgrading any existing projects, and I may have to hold off on using 4.0 in new projects until I actually understand what has happened.

Favourite thing so far has to be the changes to selections, which I think will make things a lot clearer to newer users: https://github.com/d3/d3/blob/master/CHANGES.md#selections-d...

jonahx 2 days ago 1 reply      
Just want to say how much I appreciate the refactoring into smaller libraries. d3 has enough traction that it's not something they had to do, but it's the right thing to do, and probably took a good deal of effort and thinking.
iyn 2 days ago 13 replies      
What's the best way to integrate D3 and React? Or is there a better approach to charts/visualization in React than using D3?
blaze33 2 days ago 5 replies      
Does anyone know if there is a good way to do server-side rendering (svg files) of D3 charts while being able to reuse the same code in a browser?

We do a lot of charts at work to visualize electricity consumption and are currently thinking about how to rewrite the js we've accumulated over the years. I've found some attempts at doing server-side rendering but nothing really compelling for now...

D3 is a well thought out library to do interactive visualizations in the browser and having an easy way to do svg exports (for things like automated emails) would be a big plus. Doing screenshots through selenium ain't always so pleasant ;)

con_ssc 1 day ago 0 replies      
1. Are there any plans to have a TypeScript version of the lib?

2. Would it be possible to add jsdoc comments for autocompletion?

3. Are the polyfills for Map and Set still necessary now that we have that in es2015?

4. Why is the lib so heavy on argument overloads?

5. Are there any code style restrictions? I often saw loops and conditions without curly brackets and alike.

6. Are there any fields you'd need support in for maintaining or reworking the library?

I really like D3.js although I haven't found an application for me yet. I find it hard to get/update the data from a non-standard REST endpoint and display it, especially if it is kind of multichanneled, like the unordered working times of multiple users.

betageek 2 days ago 1 reply      
Looks like quite a lot to take in, where should we start? - are there updated tutorials, upgrade guide etc?
rusosnith 2 days ago 2 replies      
Is it possible to have V3 and V4 coexisting in a project? Like, adding some of the awesome v4 new things to an already working v3 project without having to rework all the v3 code to v4 compliance?

I've tried doing this with v4 beta, with strange behaviours :)

dnprock 2 days ago 0 replies      
For those interested in making d3.js reusable for data analysis, check out https://vida.io. We create templates out of d3.js visualizations with dynamic properties. You can create dashboards from custom d3.js components. For more examples, see https://vida.io/explore.
gkst 2 days ago 0 replies      
Exciting release! I played with the force layout on canvas to graph reddit conversations http://ramiro.org/tool/graphit/

It's harder to get interaction done with canvas compared to SVG, but for large graphs the performance gains are huge. Curious to dive into the other new features.

Thanks for this incredible piece of software mbostock!

dswalter 2 days ago 1 reply      
Modularization aside (and it's terrific to have in the library), what additional functionality are you most excited about with this release?
sntran 1 day ago 0 replies      
For simpler needs such as rendering an SVG map from TopoJSON or GeoJSON data, is there an alternative to D3 that can utilize React.js' rendering of SVG? With no animation, no charts, just a map with zoom in and out and drag around, is there a benefit to using D3?
th0ma5 2 days ago 0 replies      
Did they remove the emphasis on the .enter() pattern yet? This is the weirdest thing compared to how just about every other library works, and it can totally be avoided by just using the data() stuff directly.
dmix 2 days ago 1 reply      
Any 4.0 tutorials available yet to dive in and try it out for a first-timer?
toothrot 2 days ago 1 reply      
Why didn't they name it D4?
dasdas1111 1 day ago 0 replies      
nthitz 2 days ago 1 reply      
More comments here https://news.ycombinator.com/item?id=11994410 including from @mbostock. Not sure why it didn't stay on FP.
Spanish authorities raid Google offices over tax reuters.com
314 points by cocotino  13 hours ago   322 comments top 23
melenaboija 10 hours ago 7 replies      
I don't think there is a justification for big corporations to do what they do, and they have to be pursued. I've been working for a bank in a European tax haven, and I've seen that the financial engineering powerful people use has been a known and accepted tool for governments for a while; now that Europe is living through a financial crisis, governments are starting to ask questions.

These companies play in an absolutely different league from employees, the self-employed and small companies in Europe, who are made to pay strictly all the taxes that maintain our social structure. The corporations have big teams of lawyers, financial experts and advisors specialized in tax havens, just to grow their revenues by billions while avoiding taxes in the countries where they have a presence.

Totally unfair for regular people like me.


lazyant 11 hours ago 0 replies      
Google translate of http://www.elmundo.es/economia/2016/06/30/5774ead6ca4741db16...

"The background to all these investigations is the lack of tax harmonization in the EU and the tax strategy of the company, like others such as Apple or Twitter. The sales these firms declare in each country are unrelated to their actual billing. Subsidiaries in Spain and other European countries act as agents of a parent company based in countries with lower tax burdens, such as Holland or Ireland. The national subsidiaries are taxed only on the minimal commissions they receive from their Dutch or Irish parent, while most of the recorded turnover is taxed at minimal rates."

josephg 12 hours ago 5 replies      
I would expect google to keep all financial documents inside Google Drive. In doing so, I wonder what is left in the office for the Spanish authorities to find in a raid? (Google is ruthlessly paperless internally.)
conradfr 13 hours ago 4 replies      
I heard someone related to the Paris office raid comment when that happened. She was saying that multinational corporations always say that their organization and tax schemes are legal, but that in reality it's all so complex and spread across multiple countries that nobody really knows for sure, and so it's only legal by default.
sheraz 13 hours ago 1 reply      
Remember that Google was also raided in Paris almost exactly one month ago [1].

[1] - https://www.theguardian.com/technology/2016/may/24/google-of...

george_ciobanu 10 hours ago 1 reply      
This is ridiculous and a form of bullying. If you want multinationals to pay tax, fix the laws. Were they really expecting Google to fake anything? No, they were looking to cause bad press. Form international alliances and create a proper incentive structure if you want to collect.
pfortuny 11 hours ago 2 replies      
Another PR stunt of the minister, Montoro. No more than that. He also makes public a list of debtors to the IRS (which should be illegal, but he gets away with anything).

Edit: Spaniard here.

jordanb 12 hours ago 2 replies      
Why can't Spanish (and French) authorities simply subpoena any info they need from Google? Is Google able to shield relevant data by keeping it in Ireland or the US?
kafkaesq 9 hours ago 0 replies      
From El País, a little over an hour ago:

In the case of the Spanish subsidiary, the Tax Office is analyzing why Google pays so little tax in Spain despite its multimillion sales. With this action, the Agency's technical investigators seek to determine whether the international structure of the world's leading Internet search engine is legitimate. To that end they will try to prove that the functions the group attributes to its Spanish subsidiary are fewer than those it actually performs.

The record is linked to an investigation into possible tax evasion. The company, known for its Internet search engine, has spent years in the tax authority's crosshairs for its tax arrangements, as part of its income is managed through Ireland. Thus, it could be artificially reducing its reported business in Spain and lowering its tax bill.

Tax Office searches Google Spain and Google Ireland for evidence of tax fraud


peter303 9 hours ago 0 replies      
The Panama Papers showed a significant fraction of EU officials stash money in low-tax havens. Iceland's PM resigned as a result. The Panama Papers were a leak of terabytes of documents from a legal office that sets up these corporations and accounts.

Ironically the USA is a top tax haven due to anonymous shell corporations you can create in several states. Lots is being stashed here due to anonymity and economic safety.

noahmbarr 11 hours ago 0 replies      
This reminds me of Zoolander smashing the iMac on the ground trying to find the files/folders.
ojosilva 7 hours ago 1 reply      
Actually, I find the bouncing around of fees and payments behind these internet companies very complex, and maybe even justified to some extent. The value-add that generates the taxable income is almost entirely elsewhere.

Let's say Google Spain got 100M in revenue and had 20M local expenses (mostly sales, local support and localization). Now, say they owe 30M to US HQ for the service, servers, R&D, IP, support, etc... among other expenses. (For simplicity, let's suppose everything is hosted and originates in California.)

That leaves Spain with 50M in profit, ready to be taxed by the Spanish tax authorities. This sounds quite out of proportion: the great value-add is not in Spain, it's in the US.

Google HQ could easily establish Google Spain owes 80M in service costs, thus local profit is nil. To me that's fair: corporate has the right to charge whatever they see fit for their services and collect profits accordingly.

The way I see it, paying fees to Bermuda is a US tax evasion problem, not a Spanish one.

nickbauman 10 hours ago 0 replies      
This all points back to the Multilateral Agreement on Investment, which reveals the intention of transnational corporations to try to limit sovereignty. It didn't pass, but it's a terrific object lesson (especially in the aftermath of Brexit).

The irony is that the economic incompatibility thesis claims liberalization of capital tends to undermine free trade, because capital flows make markets very volatile, ultimately making it much harder for trade to take place. So the reaction to freeing up capital is increased protectionism, which is what has generally happened since the '70s.


em3rgent0rdr 11 hours ago 1 reply      
Could this be why Google calendar is down?

Is Google going Galt? :)

SixSigma 13 hours ago 4 replies      
Google uses Bermuda (a British overseas territory) to avoid tax liability in the EU

Google has paid 8.8 billion in fees to Bermuda, S&D group MEP Peter Simon complained. [1]

The irony being that the current president of the European Commission, Jean-Claude Juncker, was Prime Minister of Luxembourg when his administration was organising the low tax environment. [2] [3]

And now the EU claims that "the Anti Tax Avoidance Package is part of the Commission's ambitious agenda for fairer, simpler and more effective corporate taxation in the EU." [4]

[1] http://www.euractiv.com/section/public-affairs/news/google-f...

[2] https://en.wikipedia.org/wiki/Jean-Claude_Juncker

[3] https://en.wikipedia.org/wiki/Luxembourg_Leaks

[4] http://ec.europa.eu/taxation_customs/taxation/company_tax/an...

chvid 13 hours ago 2 replies      
I wonder what they expect to find? Or whether it is just a display of force.
ComodoHacker 11 hours ago 2 replies      
IMO Google is so rich and so profitable that it could afford to pay all taxes due in all countries fairly, without any "optimization", while staying profitable and not undermining any R&D program or long-term project.
xchip 8 hours ago 0 replies      
Pay, Google, pay! Like everyone else!!
danielonco 10 hours ago 0 replies      
Don't be evil
KKKKkkkk1 11 hours ago 4 replies      
Spain shaking down Google, while the US shakes down VW. A trade war by any other name...
iagooar 9 hours ago 0 replies      
Ironic, how a corrupt government tries to pursue corruption.
dagi3d 12 hours ago 0 replies      
Even Google Campus :/
Mieaou 12 hours ago 0 replies      
Good! Let all those Brexit refugee companies move their HQs to the EU.
Why Google Stores Billions of Lines of Code in a Single Repository acm.org
383 points by signa11  2 days ago   218 comments top 38
hpaavola 2 days ago 3 replies      
With my current client we decided to go with multiple repositories and came to regret that decision.

The product family we are developing contains a website, two mobile apps (iOS, Android), three PC applications (OS X, Windows, and one legacy application), and software for embedded devices.

Each product lives in its own repository, most repositories use one shared component as a submodule, and many products share a common platform, which is used as a submodule that products build on top of. The test automation core sits in its own repo. I built and maintain that test automation core, and it's a pain.

Each product repository has its own functional tests that use the test automation core to make the tests actually do something. So whenever I make changes to the test automation core, I need to branch each product and use the feature branch of my core in it. Then I run all the tests in each repo and check that I did not break backwards compatibility. If I do break it, then I need to fix it through pull requests to possibly multiple teams.

I'm not the greatest git wizard in the world, so maybe someone else could easily maintain good image of the whole mess in their head, but for me this is a pain. And everyone else who maintains a shared component shares my pain.

A monolithic repo would not magically make all the pain in the world disappear, but it would be so much easier to have just one repo. That way I would only need to branch and PR once.
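The chore described above can be sketched as a script. This is not the commenter's actual setup; the repo names, branch name, and "tests" are stand-ins created on the fly for the demo:

```shell
#!/bin/sh
# Sketch of the cross-repo chore: bump a shared "core" submodule to a
# feature branch inside one product repo, then run that product's tests.
# All repo names and contents are hypothetical stand-ins.
set -eu
work=$(mktemp -d); cd "$work"
C="git -c user.email=dev@example.com -c user.name=dev"

# Stand-in test-automation core with a feature branch
git init -q core
( cd core
  echo v1 > core.txt; git add core.txt; $C commit -qm v1
  git checkout -qb my-feature
  echo v2 > core.txt; $C commit -qam v2
  git checkout -q - )                  # leave HEAD on the default branch

# A stand-in product repo that embeds the core as a submodule (at v1)
git init -q product
( cd product
  git -c protocol.file.allow=always submodule add -q "$work/core" core
  $C commit -qm 'add core submodule' )

# The per-product step: branch, point the submodule at the feature
# branch, commit the bump, then run the product's "tests" (a grep here)
( cd product
  git checkout -qb bump-core
  ( cd core && git fetch -q origin my-feature && git checkout -q FETCH_HEAD )
  git add core && $C commit -qm 'bump core to my-feature'
  grep -q v2 core/core.txt && echo "product OK against new core" )
```

Multiply that last block by every product repo, plus a pull request per team, and the pain described above becomes clear; in a monorepo it collapses to one branch and one review.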

rzimmerman 2 days ago 5 replies      
I've worked a bit with "the big monorepo" (though nothing like Google scale) and my impression is that a lot of the benefits fall apart if you don't have people working to maintain the build environment, automated tests, developer environments. When it takes hours or more to run tests and produce a build it can really slow the team down. The ability to force updates on dependencies in a single big commit can be really worthwhile as long as you're willing to spend the time to build the tools and do the maintenance.
scaleout1 2 days ago 8 replies      
Please don't do it unless you are Google and have built Google-scale tooling (gforce/Bigtable for the code cache)

benefits of monorepo

* change code once, and everyone gets it at once

* all third-party dependencies get upgraded at once for the whole company

cons (if you are not google)

* git checkout takes several minutes X number of devs in the company

* git pull takes several minutes X number of dev

* people running git fetch as a cron job, screwing up, and getting into a weird state

* even after stringent code reviews, bad code gets checked in and breaks multiple projects, not just your team's project.

* your IDE (IntelliJ in my case) stops working because your project has millions of files. It requires creative tweaks to only include the modules you are working on

* GUI-based git tools like Tower/Git Source don't work, as they can't handle such a large git repo

Google has solved all the issues I mentioned above, so they are clearly an exception; the rest of the companies that like to ape Google should stay away from monorepos.

OneMoreGoogler 2 days ago 5 replies      
During my 16 month tenure in Google, I worked on:

1. Android, using shell and Make

2. ChromeOS, using Portage

3. Chrome browser, using Ninja

4. google3 (aka "the monorepo") officially using blaze (but often there were nested build systems - I remember one that used blaze to drive scons to build a makefile...)

The diversity of the build systems significantly steepened the learning curve when switching projects. During orientation, they told me "All code lives in the monorepo, and every engineer has access to all code", but this turned out to be not true at all. If anything it was the opposite: more build system diversity at Google than at other places I worked.

stdbrouw 2 days ago 5 replies      
> Since all code is versioned in the same repository, there is only ever one version of the truth, and no concern about independent versioning of dependencies.

This sounds like horror to me: it's essentially a forced update to the latest version of all in-house dependencies.

Interesting article though. It feels like there's a broader lesson here about not getting obsessed with what for some reason have become best practices, and really taking the time to think independently about the pros and cons of various approaches.

wtbob 2 days ago 5 replies      
Anyone have a cache?

Without reading it: I've used monorepos and multiple repos, and far prefer the former. What people don't get is that any system consisting of software in multiple repos is really a monorepo with a really poor unit & integration test story. The overhead of managing changes to inter-repo dependencies within the same system is utterly insane; I'd say it's somewhere between 2^n and n^2, where n is the number of repositories (the exact figure depends on the actual number of dependency relationships between repos).

In fact, after these several years, I'm beginning to think that 'prefer' is not the word: monorepos appear, more and more, to be Correct.

DanielBMarkham 2 days ago 1 reply      
Couple of notes:

- This is a technique, and it's a toolset, but most importantly it's a commitment. Google could have split this up many times. In fact, this would have been the "easy" thing to do. It did not. That's because this strategic investment, as long as you keep working it, keeps becoming more and more valuable the more you use it. Taking the easy way out helps today -- kills you in the long run.

- This type of work isn't just as important as regular development, it's more important than regular development, because it's the work that holds everything else together.

- In order for tests to run in any kind of reasonable amount of time, there has to be an architecture. Your pipeline is the backbone you develop so that everybody else can work.

- You can't buy this in a box. Whatever you set up is a reflection of how you evolve your thinking about how the work gets delivered. That's not a static target, and agreement and understanding is far more important than implementation. I'm not saying don't use tools, but don't do the stupid thing where you pay a lot for tools and consultants and get exactly Jack Squat. It doesn't work like that.

deepsun 2 days ago 1 reply      
It works well when most of your code is written in-house. If you have a lot of external dependencies -- not so good.

The problem is which version of a third-party dependency the various projects in the BIG repo should depend on.

Article mentions that: "To prevent dependency conflicts, as outlined earlier, it is important that only one version of an open source project be available at any given time. Teams that use open source software are expected to occasionally spend time upgrading their codebase to work with newer versions of open source libraries when library upgrades are performed."

So if you have a lot of external dependencies -- you need a dedicated team to synchronize them with all your internal projects.

oblio 2 days ago 5 replies      
I have a question for Googlers: I keep hearing about refactoring all across the repo. How does that work with dynamic call sites (reflection, REST), etc.?

I mean, there's no way to prove something is actually used in those cases except for actually running the thing.

Do you just rely (and hope) on tests across all projects?

ACow_Adonis 2 days ago 1 reply      
As someone outside of Google, I'm having a hard time seeing how this would actually work. Not as in "it can't be done", but: is there actually empirical evidence that the supposed benefits are happening as claimed because of the monolithic code (how do you know silos are lower than otherwise)? Do silos really come down, do big changes to underlying dependencies really get rolled out, or do people hunker down into their own projects and try to cut out as many dependencies as possible?

Perhaps the extra tools, automation, testing, etc helps to a large extent, I can see that being reasonable, but I don't see how they solve all the problems I have in mind.

Perhaps more so: if you've invested in all these automated tools, I am, perhaps (certainly?) ignorantly, not entirely certain what those tools inherently have to do with the choice of a monolithic code base. Couldn't many of them work on a distributed code base if they're automated? I mean, we're talking about "distributed" in the sense that it's all still in the one org here... I realise that in practice this distinction between monolithic and distributed is possibly getting a bit academic...

acqq 2 days ago 3 replies      
I've found an interesting detail, does anybody know more about this?

"The team is also pursuing an experimental effort with Mercurial, an open source DVCS similar to Git. The goal is to add scalability features to the Mercurial client so it can efficiently support a codebase the size of Google's. This would provide Google's developers with an alternative of using popular DVCS-style workflows in conjunction with the central repository. This effort is in collaboration with the open source Mercurial community, including contributors from other companies that value the monolithic source model."

valarauca1 2 days ago 1 reply      
Google gave a talk at the @scale conference in 2015 about this very topic. You can watch it here (30 minutes): https://www.youtube.com/watch?v=W71BTkUbdqE
ComodoHacker 2 days ago 5 replies      

 The Google code-browsing tool CodeSearch supports simple edits using CitC workspaces. While browsing the repository, developers can click on a button to enter edit mode and make a simple change (such as fixing a typo or improving a comment). Then, without leaving the code browser, they can send their changes out to the appropriate reviewers with auto-commit enabled.
Do they still maintain CodeSearch for themselves? Was it so much of a burden to maintain a reduced version of it for the public?

mac01021 2 days ago 0 replies      
arviewer 2 days ago 4 replies      
Does this mean that one Googler could checkout the complete system, and sell it or put it online? How many people have access to the complete repository? How big is one checkout?
makecheck 2 days ago 2 replies      
There are advantages and disadvantages to consider. Clearly monolithic repositories allow you to leverage commonality, but it's not free.

For example, single-repository environments may require you to check out everything in order to do anything. And frankly, you shouldn't have to copy The World most of the time. Disk space may be cheap but it is never unlimited. It's a weird feeling to run out of disk space and have to start deleting enough of your own files to make room for stuff that you know you don't care about but need anyway. You are also wasting time: Subversion, for instance, could be really slow updating massive trees like this.

There is also a tendency to become too comfortable with commonality, and even over-engineer to the point where nothing can really be used unless it looks like everything else. This may cause Not-Invented-Here, when it feels like almost as much work to integrate an external library as it would be to hack what you need into existing code.

Ultimately, what matters most is that you have some way to keep track of which versions of X, Y and Z were used to build, and any stable method for doing that is fine (e.g. a read-only shared directory structure broken down by version that captures platform and compiler variations).

kev009 2 days ago 1 reply      
History as a guide, we will look back on this for the tire fire it is. The monorepo and projects like Chrome and Android look to me like a company that is trying its best to hold itself together but is bursting at the seams, the same way Microsoft did in the '90s with Windows NT and other projects. Googlers frequently use appeals to authority to paper over the fact that they are basically the new Microsoft.
zellyn 2 days ago 1 reply      
One thing to consider is that monorepo tooling is (outside of Google) still pretty immature.

At Square, we have one main Java "monorepo", one Go "monorepo", and a bunch of Ruby repos. The Java repo is the largest, by a huge factor (10x the Go repo, for example).

The Java repo is already noticeably slow when doing normal git operations. And I'm told we've seen nothing of the pain the folks at Twitter have had with git: their monorepo is huge.

We periodically check to see how the state of Mercurial and Git is progressing for monorepo-style development. Seems like Facebook has been doing interesting stuff with Mercurial.

But I still miss the fantastic tooling Google has internally. It really is so much better than anything else I've seen outside.

kngspook 2 days ago 1 reply      
Seems to be down... (I can't imagine the ACM is _that_ easy to take down..?)
qznc 2 days ago 0 replies      
How does Google deal with external dependencies? E.g. the Linux kernel. Do they have a copy of Linux in the monorepo for custom patches? Is there a submodule-like reference to another repository? Is there a tarball in the monorepo, a set of patches, and a script to generate the Custom-Google-Linux? What happens when a new (external) kernel version is integrated?
hbsnmyj 1 day ago 0 replies      
For me, it seems that the power of this model is not that "we have a single head", but rather that "we can enforce that everyone uses the newest library version" (except for sometimes creating whole new interfaces).

Let's suppose we have a tool that can

* automatically check out the HEAD of each dependent repo

* run complete integration tests across all the repos before any push/check-in.

This will work fine even with a multi-repo model, won't it?
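A toy version of that hypothetical tool, with two throwaway repos standing in for the real dependents (the names and the run_tests.sh convention are made up for the sketch):

```shell
#!/bin/sh
# Toy "multi-repo CI": check out HEAD of every dependent repo and run
# its test suite, failing fast. Repo names are hypothetical stand-ins
# created on the fly so the sketch is runnable.
set -eu
work=$(mktemp -d); cd "$work"

# Create two stand-in dependent repos, each with its own test script
for name in product-a product-b; do
  git init -q "$name"
  ( cd "$name"
    printf '#!/bin/sh\necho "%s tests pass"\n' "$name" > run_tests.sh
    chmod +x run_tests.sh
    git add run_tests.sh
    git -c user.email=ci@example.com -c user.name=ci commit -qm init )
done

# The tool itself: visit each repo, sync to HEAD, run its tests
for repo in product-a product-b; do
  ( cd "$repo"
    git checkout -qf HEAD   # a real tool would: git fetch && git checkout origin/HEAD
    ./run_tests.sh ) || { echo "FAILED in $repo"; exit 1; }
done
echo "all dependent repos green"
```

This does work, which is the point being made; the catch is wall-clock time and flakiness once "two repos" becomes two hundred.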

Also, as mentioned earlier by others, the reason Google can do it is because Google can:

* maintain a powerful cloud-based FUSE file system to support the fast checkout

* run automated build tests before any submits to ensure the correctness of any build

So they don't need to maintain multiple dependency versions (most of the time).

mk89 2 days ago 0 replies      
It's amazing to read these kinds of articles, because they are the best argument against all those very opinionated (or simply arrogant) people who claim that "one repository is s*" or similar, such as "this technology is better than that one" or "unit tests yes/no"; the list of religious wars could go on forever...

Thanks a lot for sharing!

andrewguy9 2 days ago 3 replies      
This works because they have teams dedicated to reproducing the work everyone else gets from their ecosystem's community.

For us non-Googlers: would you trade away the benefits of pip, gem, and npm for a single source of truth?

jjnoakes 2 days ago 1 reply      
The monorepo solves source-to-source compatibility issues, but it doesn't solve the source-to-binary compatibility issues. For that you need a solid ABI, possibly versioned, unless every code checkin redeploys every running process everywhere.

Say version N of the code is compiled and running all over the place, and you make a change to create N+1. Well, if you don't understand the ABI implications of building and running your N+1 client against version N servers (or any other combination of programs, libraries, clients, and servers), then you'll be in a mess.

And if you do understand those ABI boundaries well and can version across them, I'm not sure you need a monorepo much at all.

MichaelBurge 2 days ago 0 replies      
I find it really hard to find anything when it's split into a bunch of smaller repositories. If you're going to do that, you should at least have one master repository that adds the tiny ones as git submodules.
sytse 2 days ago 0 replies      
This is a very interesting article. I believe there is value in using libraries, but there is something to be said for monorepos. Interesting that Google and Facebook are working together to extend Mercurial for large repos.

The article gave me the following ideas to extend GitLab: a copy-on-write FUSE filesystem https://gitlab.com/gitlab-org/gitlab-ce/issues/19292 , autocommit using auto-assigned approvers https://gitlab.com/gitlab-org/gitlab-ce/issues/19293 , CI suggesting edits https://gitlab.com/gitlab-org/gitlab-ce/issues/19294 , and coordinated atomic merges across multiple projects https://gitlab.com/gitlab-org/gitlab-ce/issues/19266

carrja99 2 days ago 1 reply      
At a previous job, I had Google to thank for the team's god-awful decision to choose Perforce over git, thanks to some silly whitepaper or article. They acted like git was some fringe version control system that no one would use professionally, just for fun toy projects.
dabn 2 days ago 0 replies      
My experience working in large corporations and smaller companies with both approaches tends to make me lean towards the multi repo approach.

Some details:

* Amadeus: huge corporation that provides IT services for the airline industry, handling the reservation process and the distribution of bookings. Their software is probably among the biggest C++ code bases out there. We were around 3000 developers, divided into divisions and groups. Historically they were running on mainframes, and they were forced to have everything under the same "repository". With the migration to Linux they realized that that approach was not working anymore at the scale of the company, and every team/product now has its own repository.

All libraries are versioned according to the common MAJOR.RELEASE.PATCH naming, and upgrades of patch-level software are done transparently. However, major or release upgrades have to be specifically targeted. What is more important for them is how software communicates, which is through versioned message APIs. There is also a team that handles all the library compatibility and packages them into a common "middleware pack". When I left around 2012 we had at least 100 common libraries, all versioned and ready to use.


* Financial software used in the front/back office of banks. We had one huge Perforce repo; I can't even begin to tell you what a pain it was. You could work for a day on a project and then have to wait weeks for a slot to merge it into master. Once you had a slot to merge your fix into master, chances were the code had changed somewhere else in the meantime and your fix couldn't be merged anymore. That led to a lot of fixes done on a pre-merge branch, manually in the Perforce diff tool.

Also given the number of developers and the size of the repository, there was always someone merging, so you had to request your slot far in advance.Maybe the problem was that the software itself was not modular at all, but this tends to be the case when you don't force separation of modules, and the easiest way is to have separate repositories.

* Small proprietary trading company: we didn't have a huge code base, but there were some legacy parts that we didn't touch often. We separated everything into different repos and packaged all our libraries in separate RPMs. It worked very well and eased the rebuild of higher-level projects: where releasing a project used to take ~1h, with the libraries separated it took only 5 minutes. It worked well because we didn't often change the base libraries that everyone depended on.

plandis 2 days ago 0 replies      
I'd be curious to see how this works with third-party libraries, etc...

For instance did everyone at Google migrate to Java 8 at the same time? That seems like a huge amount of work in a mono repo.

oxplot 1 day ago 0 replies      
gravypod 2 days ago 0 replies      
A lot of the benefit that comes from this code storage method doesn't really seem like the best solution.

The presenter in the video linked in this thread says that this is very advantageous due to cross dependencies. I don't think that this is the correct way to handle a cross dependency.

I'd much rather handle it by abstracting your subset of the problem into another repository. Have some features that two applications need to share? That means you're creating a library in my mind. This is much better suited for something like git as you can very simply use sub-modules to your advantage.

Hell, you can even model this monolithic system within that abstracted system. Create one giant repository called "libraries" or "modules" that just includes all of the other sub-modules you need to refer to. You now have access to absolutely everything you need within the google internal code base. You can now also take advantage of having a master test system to put overarching quality control on everything.

This can be done automatically: pull the git repo, update all the submodules, run your test platform.

I'd say that's a better way to handle it. Creating simple front end APIs for all of the functionality you need to be shared.
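The "one giant repository that just includes all of the other sub-modules" idea above can be sketched like this; `libshared` is a hypothetical component created on the fly so the demo runs end to end:

```shell
#!/bin/sh
# Sketch of an "umbrella" repo that pins component repos as submodules,
# so one recursive clone yields the whole (pinned) code base.
# Repo names are hypothetical stand-ins.
set -eu
work=$(mktemp -d); cd "$work"
C="git -c user.email=dev@example.com -c user.name=dev"

# A stand-in component repo
git init -q libshared
( cd libshared
  echo 'int shared(void);' > shared.h
  git add shared.h; $C commit -qm 'initial' )

# The umbrella repo: every component is a submodule pinned to a commit
git init -q modules
( cd modules
  git -c protocol.file.allow=always submodule add -q "$work/libshared" libshared
  $C commit -qm 'pin libshared' )

# A fresh recursive clone materializes the whole pinned tree at once
git -c protocol.file.allow=always clone -q --recursive "$work/modules" checkout
test -f checkout/libshared/shared.h && echo "umbrella checkout OK"
```

The pinning is the trade-off: the umbrella only moves forward when someone commits a submodule bump, which approximates the "one version of the truth" property the article attributes to the monorepo, minus Google's tooling.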

justinlardinois 2 days ago 1 reply      
> The Google codebase includes approximately one billion files

> approximately two billion lines of code in nine million unique source files

So what are the other ~991 million files? I don't doubt that there's a lot of binary files, but what else? Also what does "unique" source files mean?

fizixer 2 days ago 3 replies      
Why does google have billions of lines of code?

This is more of a rhetorical question. As a tech minimalist, the preferable answer by a long shot would be that, internally, Google is keenly aware of the severe bloat and technical debt of their codebase and has clear plans going forward to drastically reduce its scale, by more than 1000x at least, without sacrificing any of the features/bug-fixes/performance of the code.

NicoJuicy 2 days ago 2 replies      
I suppose they don't do git pull :p. What's their source control management system?
amelius 2 days ago 2 replies      
I wonder if they also distribute binaries internally. Otherwise, setting up a developer machine could take really long. Like installing Gentoo Linux :)
twoy 2 days ago 0 replies      
The idea of a working directory in the cloud is dangerous, because it reveals that I haven't started anything yet, haha.
Bluetooth 5 will quadruple the range, double the speed engadget.com
297 points by bokenator  2 days ago   244 comments top 35
creativeembassy 2 days ago 18 replies      
I could not care less.

- 1.0 to 1.2: Hard to pair, and very unreliable. First major version, I'm sure they'll fix it...

- 2.0: More bandwidth. Still hard to pair, still unreliable.

- 2.1: Adds "Simple Secure Pairing". Still hard to actually pair. Still unreliable.

- 3.0: More bandwidth. More features. Still hard to pair, still unreliable.

- 4.0: Bluetooth Low Energy released. Still hard to pair, still unreliable.

- 4.1: More features. Still hard to pair, still unreliable.

- 4.2: More features. Still hard to pair, still unreliable.

- 5.0: More range, more bandwidth. Still hard to pair, still unreliable.

I have the latest Apple "Magic" Trackpad, hooked up to a Mac Pro. At least once a day, latency will take a dive, or it will completely disconnect. I have to turn the trackpad on and off repeatedly, to see if it will finally re-pair by itself. No other recourse, you can't easily access bluetooth settings on a mac with only a keyboard. (Which I made sure to get with a USB plug, since my last bluetooth keyboard had the same issues.)

I also have a Samsung Level BT headset, paired with a Samsung Note 5 phone. I can listen to Google Music or Player.fm for between 1 and 10 minutes until the headset suddenly blasts noise at full volume and becomes unresponsive until I turn it off and back on again. It has left my ears ringing on more than one occasion.

I've replaced my "magic" keyboard with a USB one, and recently hooked the trackpad back up via USB. I've stopped using the Samsung Level and went back to an old pair of headphones with a 3.5mm plug (that Apple is now trying to get rid of.)

I had a past without wires. I am moving to a future with them.

micheljansen 2 days ago 3 replies      
That's nice, but the biggest UX problem with Bluetooth is still the pairing misery. It's 2016, and it's still nearly impossible to use any one Bluetooth device with multiple other devices. Try switching a Bluetooth headset from an iPhone to a Mac, or convincing your car to switch from one phone to another.

It's a huge mess, and it's not just a matter of the technology not working as designed. These are fundamental problems that nobody seems to worry too much about, apart from a small number of vendors (Apple did a decent job solving the pairing problem with the Apple TV: http://9to5mac.com/2013/07/29/new-apple-tv-os-offers-nfc-lik...).

I realise it's pretty hard to beat the intuitive action of plugging physical cables into devices to connect two things, but if we really want to get rid of cables, things need to be easier.

modeless 2 days ago 10 replies      
I don't need more range or more speed. I need it to reliably connect and stay connected while devices are well within range of each other, and stop breaking every time I upgrade anything. Unfortunately that would require making it less complex, which is about as likely as a broken egg spontaneously reassembling.
Unklejoe 2 days ago 2 replies      
You know there's a problem when a majority of the comments (on a website filled with software engineers and other technologically inclined people) are all claiming that Bluetooth simply sucks in terms of usability. Imagine how hard it would be for someone like my parents to debug a Bluetooth pairing issue.

I think they need to focus 100% of their efforts on addressing some of these issues which seem to have been a problem since the beginning.

Bluetooth is almost there in my opinion. It's incredibly convenient (when it works), and I can envision how great it will be once they work all the kinks out. It has been getting better and I'm confident it will keep improving.

Let me just add another data point:

I have a 2013 Android phone and a 2011 car. Luckily, the car supports playing audio through Bluetooth, which is really cool when it works. However, every time I get into the car, there's a 50/50 chance that BT audio will actually work. The phone always pairs with the car, but it seems like it doesn't reliably negotiate the audio capability. Sometimes I can make a phone call, which seems to reset the system and can cause it to start working, but other times I have to actually power cycle the phone.

The other issue is that every once in a while, there will be a spontaneous audible crackle. After the initial crackle occurs, there will be periodic crackles about once every 10 seconds from there on out. The only way to get it out of this state is to make a call or restart the phone. It seems almost like there's some kind of memory leak in a buffer or something, which causes it to eventually run dry and bounce off of being empty.

These issues seem more software related and probably have nothing to do with the Bluetooth standard itself, but I won't let that stop me from ranting.

Niksko 2 days ago 2 replies      
Working with Bluetooth on a project last year was a gigantic pain in the ass.

Linux support was reasonably good, though with bizarre quirks and changes of tooling between libbluetooth versions. OSX was a total nightmare.

The project is currently stalled because three days before I had to head off (I was doing all of the programming and troubleshooting on the software side) my collaborators decided to inform me that they would be using a different laptop to what they'd been using for the rest of the project, and when we tried our software with that version of OSX and hardware, it refused to work nicely. We eventually came up with a bizarre pairing ritual that involved removing devices, then quickly adding them, and in a specific order, and then that mysteriously stopped working and now I don't have access to hardware to fix it.

Knowing what I now know, if I'd had my time again I would have recommended ESP8266 based boards instead of the LightBlue Beans we were using. Even though one of our requirements was low power usage (which we certainly got through Bluetooth 4), it probably would have been less hassle to just make the WiFi modem sleep for a period and then transmit in bursts.

voltagex_ 2 days ago 2 replies      
I think a lot of the comments here could be attributed to the terrible Bluetooth software stacks that are around (Car head-decks, Android (all versions), Windows 8-10 default stacks).

I've got a brand new Plugable BT4 dongle that barely works in Win 10 because Broadcom haven't updated their suite, so it relies on the default Windows 10 drivers - you can't have an HFP and A2DP service running at the same time, so a headset with speakers and a mic won't work.

jamesrom 2 days ago 2 replies      
Bluetooth is very easy to hate. It never ever just works: the pairing rituals, the flakiness. It's annoying.

But in recent years it has become more and more invisible. You've probably used Bluetooth in the past 12 months without noticing. Invisibility is something that the Bluetooth SIG should strive for.

smegel 2 days ago 0 replies      
But will it constantly fail to connect to devices it has been paired with a million times before?

Don't know how I could live without that "feature".

zmmmmm 2 days ago 2 replies      
So much negativity about bluetooth in these comments ... and yet I can happily say that bluetooth has really changed my life. Bluetooth headphones allow me to walk around and exercise without an annoying cord trailing the length of my body. And I can get to the office and sit down with my mouse and keyboard and just start typing without plugging anything at all in. While it certainly had early problems, I'm super happy with it these days and especially the increase in bandwidth will be very welcome.
danjayh 2 days ago 1 reply      
For those who have given up on Engadget (I can't be the only one):


lewisl9029 2 days ago 0 replies      
What I want from my wireless devices is not more range or speed, but total freedom from wires, especially for charging.

I remember an Intel demo from a while ago where they showcased a number of peripherals using their inductive charging tech, where you could just dump them onto a large inductive charging pad along with your phone and tablet without having to fumble with plugging wires into each one. That's the killer feature for a wireless device, in my opinion.

DevikaG 13 hours ago 0 replies      
It's really exciting to see the kind of capabilities and potential that Bluetooth 5 brings to the table for IoT. However, what excites me even more are the capabilities put forward by Bluetooth 5 to boost beacon adoption and location-based services. Given how Google's recent updates such as Google Nearby and Android Instant Apps also have location-based services at their core, Bluetooth 5, once it's launched, will definitely boost beacon adoption to a significant extent.
sly010 2 days ago 0 replies      
Here is a device I would pay for:

A USB dongle that somehow pairs to my Apple Keyboard and Touchpad and presents itself as a standard USB keyboard and mouse to the OS.

I could plug said device into my cinema display's USB hub. This way both me and my wife could use the same workstation by simply plugging in the computer.

tdkl 2 days ago 1 reply      
Can't wait for nearby ads[1] to hit me from a far larger distance in the future.

[1] https://developers.google.com/nearby/

tranv94 2 days ago 3 replies      
Maybe I'm living in the past and haven't been informed, but is Bluetooth still insecure?
x5n1 2 days ago 3 replies      
Why does Bluetooth not require support for multiple simultaneous devices? I have more than one device that won't let you connect more than one at a time.
_RPM 2 days ago 0 replies      
Try going from having Bluetooth off on your phone to Bluetooth being on in a car that has a Bluetooth-capable system. It sucks. Every car I've been in has trouble pairing if Bluetooth wasn't on prior to me entering the car. For one car, I had to turn the engine off for it to pair. WTF?
pinaceae 2 days ago 0 replies      
Oh anecdotes.

Use Bluetooth home speakers, headphones and car radio all the time with my iPhone 6S - works really well.

What I do though is manually connect when I want this particular device to be connected (say my headphones in the gym), and then disconnect after use.

Might be the magic trick most are missing.

0898 2 days ago 2 replies      
Forgive my ignorance, but how come the range is affected by the protocol? Wouldn't it be the aerial?
Someone 2 days ago 0 replies      
I guess that is with the same power usage, as it would be disingenuous if that were different, but it would be nice to have that confirmed.

Also, I guess that, for many IoT devices, keeping range and speed the same while decreasing power usage significantly (although, as a third guess, I expect 'double the speed' means that devices can go to low power mode quicker, potentially halving power usage of the entire device) might be more useful.

wjd2030 2 days ago 0 replies      
And then bluetooth became wifi.
tracker1 2 days ago 0 replies      
My hope for the future of phone/car interfaces is that once you've paired a phone, the touch screen basically becomes a display for the phone... I have a brand new (less than a week old) car, and the UI feels sluggish, and looks very dated at this point. I'd rather my N6P managed the whole thing. Hopefully BT5 can allow that to happen.
BuckRogers 2 days ago 1 reply      
The only reason I like Bluetooth at all is because the alternative is a bunch of USB receivers plugged into my NUC.

But I have to admit that it seems to work pretty well on my iPhone while years ago on other phones I had a lot of disconnects. I have one of the LG around the neck headsets and it's actually really good at this point.

I'd like to see more BT headsets for PC hit the market. The only ones I could find were from Turtle Beach.

pknerd 2 days ago 0 replies      
Wonder why it was not thought of earlier? After BT we saw development in WiFi, GPS, RFID etc. Nobody thought that BT could help indoors for the shopping mall use case presented in the article.

Now they are planning for late 2016, which means it will only be available in new phones from 2017.

kin 2 days ago 0 replies      
Nearly every comment in this thread is about pairing issues. I can definitely agree with most that I don't care about range and speed as much as I do the usability of pairing with multiple devices.
mschuster91 2 days ago 0 replies      
Ah great, a new BT version once again, when even the LAST standard isn't properly supported (and especially documented!!!) in BlueZ. Not to mention Windows (which usually comes with a next-to-useless stack, and every other stack costs $$$) or OS X...
mtgx 2 days ago 0 replies      
With all the Bluetooth car hacking going on and with the emergence of "connected cars" and self-driving cars, you'd think they would've introduced some stronger security features for Bluetooth 5.0 as well.
rsync 2 days ago 0 replies      
Do I want longer range from bluetooth devices ?

I sort of thought the short range was, kind of, a feature ...

How far do you really want your mouse trails and your keystrokes to fly out into the ether?

aleksei 2 days ago 0 replies      
While I'm not a fan of Bluetooth for data transfer (pairing pains), this could be great for cheap indoor location services, eg. navigating inside a building with your phone.
gambiting 2 days ago 0 replies      
And hopefully we will get bluetooth audio that doesn't suck?
tmaly 2 days ago 0 replies      
Am I the only one, or does putting more radiation out into the environment pose substantial health risks?

All of this excess radiation is bound to cause some potential mutations in DNA. Adding even more is only going to increase the probabilities.

jtchang 2 days ago 0 replies      
I want better range, more bandwidth, less power consumption, and smaller footprint. Oh and make it super reliable.

One can dream...

x0ner 2 days ago 0 replies      
Curious if these upgrades include anything addressing security.
coroutines 2 days ago 1 reply      
I kind of want an 802.11z that does wifi over Bluetooth for sub-802.11a conditions. Am I weird?
williadc 2 days ago 3 replies      
John Gruber from Daring Fireball has already written his review:

> "Next year it will work great" should be Bluetooth's slogan.


Volkswagen's U.S. diesel emissions settlement to cost $15B reuters.com
292 points by ilyaeck  3 days ago   339 comments top 34
beefman 3 days ago 1 reply      
In the United States, the VW defeat mechanism abates 275,000 tonne/yr CO2 at the cost of 8,200 tonne/yr NOx.[1][2][3][4][5][6]

[1] NY Times: How Volkswagen Got Away With Diesel Deception http://nyti.ms/1ZjAV1w

[2] Vox: VW's appalling clean diesel scandal, explained http://bit.ly/1MhJVuA

[3] FHWA: Annual Vehicle Distance Traveled and Related Data http://www.fhwa.dot.gov/policyinformation/statistics/2013/vm...

[4] EIA: How much CO2 is produced by burning gasoline? http://www.eia.gov/tools/faqs/faq.cfm?id=307&t=11

[5] EPA: Does the fuel used in fuel economy testing contain ethanol? http://www3.epa.gov/otaq/about/faq.htm#ethanol

[6] DOE: Fuel Economy of 2015 Volkswagen Jetta http://www.fueleconomy.gov/feg/bymodel/2015_Volkswagen_Jetta...
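For scale, the figures in the comment above imply roughly this trade-off. The tonnages are the commenter's estimates, so this is just arithmetic on those numbers, not an official accounting:

```python
# Ratio implied by the figures cited above: tonnes of CO2 avoided
# per tonne of extra NOx emitted by the defeat device. Both inputs
# are the commenter's estimates, not official figures.
co2_abated_tonnes_per_yr = 275_000
nox_emitted_tonnes_per_yr = 8_200

ratio = co2_abated_tonnes_per_yr / nox_emitted_tonnes_per_yr
print(f"~{ratio:.1f} t CO2 abated per t NOx emitted")  # ~33.5
```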

socalnate1 3 days ago 6 replies      
It seems odd to me that most of the financial windfall here goes to Volkswagen owners. (2/3 anyway). Wasn't the "damage" done almost exclusively environmental, which affects everyone?

Wouldn't it make more sense to use these large amounts of money to combat the actual damage that was done? (e.g. environmental cleanup initiatives?)

bluedevil2k 3 days ago 3 replies      
I'm a "proud" owner of a 2014 Passat TDI (purchased May 2014). Here's the math breakdown for me personally:

- $26,000 for the car, before TTL
- First year, the car will depreciate 20%, so its September 2015 value would be $20,800.
- VW will give me $5,100 (maybe more) plus that Sept 2015 value = $25,900
- I don't have to make a decision until December 2018

Result: I get a car for 4.5 years for which I've paid $100

* Simplified math, doesn't factor time-value or the 0.9% interest rate.
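The arithmetic above checks out; a quick sketch, where every input is the commenter's own assumption (purchase price, 20% first-year depreciation, the $5,100 payment), not an official buyback formula:

```python
# Reproduces the buyback math from the comment above. All inputs
# are the commenter's assumptions, not official VW buyback terms.
purchase_price = 26_000                  # May 2014, before TTL
sept_2015_value = purchase_price * 0.80  # assumed 20% first-year depreciation
vw_payment = 5_100                       # "maybe more"

buyback_total = sept_2015_value + vw_payment
net_cost = purchase_price - buyback_total

print(f"Sept 2015 value: ${sept_2015_value:,.0f}")  # $20,800
print(f"Buyback total:   ${buyback_total:,.0f}")    # $25,900
print(f"Net cost of car: ${net_cost:,.0f}")         # $100
```

As the commenter notes, this ignores the time value of money and the 0.9% financing.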

gnoway 3 days ago 4 replies      
"Owners will have two years to decide whether to sell back vehicles..."

So the rumor is I can drive my affected Golf TDI for ~2 more years, and then get compensated @9/2015 value plus up to $5k on top of that? Honestly, that's a terrible deal for VW.

Edit: thinking more, maybe this is better for VW than an alternative where they have to make good right away. They don't have to scramble to get 500k cars repaired or off the road as quickly, and they can spread whatever makes up the rest of their hit over a longer period as well.

To me, this does signal that we're a lot more upset about the dishonesty than we are about the emissions themselves.

1024core 3 days ago 11 replies      
Where's the jailtime?

As an individual, you make one wrong statement to the FBI, and you are hauled off to jail. The company lied over 500,000 times to the government, and gets off by paying a paltry fine. No wonder companies continue to do this: there is no consequence to the employees who do this shit.

btilly 3 days ago 2 replies      
Compare with http://www.cnbc.com/2015/10/29/vw-excess-emissions-linked-to....

VW's actions left around 60 estimated dead. Around 120,000 days of people not being able to function normally. At a direct cost to the economy of around $450 million. And a cost to our quality of life that most would consider even higher.

All of which is bad. But no matter how you do the math, most of this fine is punitive dissuasion for others who might be tempted to also cheat.

(Note that most of the fine moves money from the left hand to the right without destroying it, so we are theoretically collectively left better off as a result. Plus we prevent about as much more damage as we had already.)

louprado 3 days ago 1 reply      
On September 22, 2015, VW stock had its highest trading volume / price decline. The stock closed at $106 per share that day.

There is a trading strategy that assumes markets overestimate the liability of lawsuits. While I am probably cherry-picking data, it is notable that the stock closed at exactly $106 per share today. So much for that trading strategy in this case. The market pretty much nailed it.

chappi42 2 days ago 1 reply      
Economic war: US vs. Germany. Nothing else.

Nature would benefit if gasoline prices were $10 US per liter and people stopped driving around senselessly. (I know the geographic circumstances make this impossible, but nevertheless...)

astraelraen 3 days ago 1 reply      
If you discount all the hypothetical environmental costs (which are, at best guess, just that: guesses) and consider this penalty in a somewhat satirical manner, it would have been cheaper for VW if they had been killing the consumers of their cars.

I understand the punitive nature of the penalty and its amount, but the amount seems somewhat egregious given the fact that all VW did was violate regulations of a government body. Uber is constantly praised for skirting or directly violating state or local government laws and regulations, yet is seen in an overall positive light for the supposed benefit they are providing society. This is the government's heavy hand making an example of VW and its violation of regulation.

wallace_f 3 days ago 2 replies      
I'm not following this closely, but I understand at least 4 other auto manufacturers were doing essentially the same thing: https://www.theguardian.com/environment/2015/oct/09/mercedes...

Why the focus on VW, then, rather than on the auto industry?

SwellJoe 3 days ago 25 replies      
I'm of the opinion that there should be a corporate death penalty (though I oppose the death penalty in criminal cases). There are some situations that are so ethically indefensible, wherein a corporation can cause incredible harm over a long period of time, that the only just outcome is for the company to be destroyed. The people responsible are shielded from any real consequence (how many people at VW knew about this, and how many are going to jail over it?). This is not a minor squabble over regulations; this is a conscious decision, involving many executives within the company, to willfully cause environmental destruction that directly costs human lives and health.

This is one of those cases where the corporate death penalty is the only just outcome I can think of.

$15 billion looks like a large sum; and maybe it's even enough to deter car companies from doing something similar in the future. Maybe. VW is worth $73 billion, and they generated a lot of money on the strength of their diesel campaigns. Amortized over the many years that they were shipping out these cars, it begins to look like a cost of doing business, rather than a massively punitive expense.

dmichulke 2 days ago 1 reply      
For context:

GM to Pay Record $35 Million Fine Over Ignition Switch Recalls (May 2014)


"The federal government struck a $35 million settlement with General Motors after the company failed to act for 10 years on an ignition switch defect that led to the death of at least 13 people and recall of approximately 2.6 million vehicles"

spriggan3 3 days ago 1 reply      
Does anybody remember when Volkswagen blamed the scandal on "the engineer culture" or something like that? Like those engineers at Volkswagen called the shots and not the management... ridiculous, like these engineers weren't asked to cheat... I can't wait for the criminal investigation.
alkonaut 2 days ago 0 replies      
So because they marketed it as "Clean diesel" in the US, their 2009 2.0 TDI owners get $5-10k plus an estimated value of the car as of 2015. Meanwhile in Europe we get a "fix" that reduces engine power and/or increases fuel consumption, and while the car has depreciated in value we get no other compensation, and seem to have no way of winning a class action against VW (of course, if we did, we could kill VW instantly, which I assume is not what any of the EU member countries want). I'd be perfectly happy with a much smaller sum or even a rebate on a new car. But no. Nothing.

The low emission 2.0 TDI models (such as Audi's TDIe) were very much marketed as eco-friendly, and it meant a lot of corporate buyers with CO2 caps bought them in stiff competition with other cars. If the engines had met the NOx emissions limits, the consumption (and thus CO2) would have been higher, and the cars couldn't have been bought, or wouldn't have been bought because of being less attractive with lower power or higher consumption. If this isn't marketing it as "clean diesel" I don't know what is.

mayneack 3 days ago 0 replies      
Does anyone know what the total paid by BP was for Deepwater Horizon? I can't seem to find a total anywhere here:


patrickg_zill 2 days ago 0 replies      
I am surprised that no one mentioned the other fraudulent actor in this whole mess: the CARB, California's Air Resources Board.

They are the ones who lied about their diesel engine standards for air quality. Under the guise of making tough standards they basically outlawed diesels in California.

Since CA is such a big car market this prevented most manufacturers from selling any diesel cars anywhere in the USA.

reality_czech 2 days ago 0 replies      
VW deserves this. Hopefully this penalty will discourage other automakers from cheating in this way. It would also be nice if Mr. Winterkorn and the executives who were complicit in this were prosecuted. I wonder how well they have covered their tracks.

I am glad to see there will be some money going to the owners of the affected diesels. A lot of these people bought the cars because they were marketed as more environmentally friendly. Then, they were left with an embarrassing car with near-zero resale value once the revelations came out.

I feel like the US has a bizarrely inconsistent attitude towards air pollution. The standards for NOx emissions for diesel construction equipment and ships are almost non-existent. If VW's actions are reprehensible, surely the lawmakers who created such a bizarrely fragmented regulatory regime are also to blame. http://www.greenhoustontx.gov/reports/closingdieseldivide.pd...

covi 3 days ago 1 reply      
Now's better than ever to buy a new Volkswagen. Because of the scandal, I've found crazy deals (~30-40% off MSRP) for new VWs.
ProfChronos 2 days ago 2 replies      
I am really amazed by the comments I read so I am voluntarily going to "defend" VW while I would not in other circumstances. To make sure this is understood: yes VW is guilty, it has violated customers' trust, they are a shame for the entire car industry and should receive strong punishment for that.

Yet, how can everybody forget about all other car manufacturers - especially US? They pretty much all lie about their gas emissions as proven by different experts and agencies [1] [2] [3].

Why? Because we have improved our gas emission limits to a level that most car manufacturers couldn't reach over the short run. Take the example of European car makers. The European Commission started to really regulate car emissions in 2010. At that time, car makers were faced with dropping sales (double crisis, 2008 and 2010), stable/slightly increasing costs (wage inflation and poor labour market flexibility - Germany is an exception in Europe) and stable/slightly decreasing prices (due to competition and few new vehicles). In this context, how can you expect car makers to invest both in "traditional cars" to reduce gas emissions and in electric and autonomous vehicles to fight against the competition of tech car companies like Tesla or Google? That is just not possible.

So who is responsible? Of course VW and other car makers are all responsible for that mess and the disastrous consequences on the environment. But WE are also responsible: we want safer, cleaner, cheaper and stronger vehicles from traditional car makers, but you can't have everything all at once. Tesla can do it because they start from a "blank page", with no turnaround costs. For VW, GM, Toyota, that's another story. Maybe we should just keep that in mind before charging them with a "corporate death penalty"...


ourmandave 2 days ago 0 replies      
The report this morning said the government gets $5b for environmental clean up and customers get $10b in fixes or buy backs.

We'll see.

There was a GM settlement like this (the pickup gas tank problem) and customers with existing pickups got a coupon good for $1000 off the price of a new pickup. So GM gets to sell you a new pickup and probably keeps any rebates or low financing.

Also the recent Ticket Master settlement for $420M. There's a daily allotment of coupons worth $5 off the ticket price on select concerts (read: not the popular ones) that are buried in a series of hard to follow links. They still have the same service charges like before, they're just more transparent about it.

tn13 3 days ago 0 replies      
Not sure why the government attorneys should get paid anything here. The government was so incompetent here that they failed to catch VW well in time; besides arresting a few people at VW, I think some people from the government side need to be fired too.

The only worry here is that attorneys general might end up with more resources to harass small people now.

togasystems 3 days ago 0 replies      
Could someone purchase one of these vehicles today and still be considered for the buyback program?
eddd 2 days ago 0 replies      
Anyone know if this $15B is treated as a "cost" of doing business? In that way the effective income tax would be lower for VW.
london888 2 days ago 0 replies      
What's sad is how the word 'emission' is always used instead of the real word: pollution.
xshareit 2 days ago 0 replies      
We should have a discussion about how we can prevent these kinds of cases from happening again.
known 2 days ago 0 replies      
FB purchased Whatsapp for $15 billion :)
kejaed 3 days ago 0 replies      
Das Payout
blobbers 3 days ago 1 reply      
What about my lungs?
rando18423 3 days ago 1 reply      
Only $15B?
roel_v 2 days ago 6 replies      
I lament the days when HN was a place for intelligent discussion. Today the top post on submissions like this is an idiotic populist gut feeling, complete with dubious metaphors, calls for violence, a complete lack of any understanding of why things are the way they are and how we got there, as well as a blissful ignorance of proposed and/or tried solutions to age-old problems.

There must be something in the water these days that spurs these insults on sane discourse, I have a vague feeling of recognition when going over current events in politics across the world the last few weeks...

ModernMech 3 days ago 3 replies      
You certainly won't be fined for an excess of modesty...
Aelinsaar 3 days ago 2 replies      
We detached this subthread from https://news.ycombinator.com/item?id=11990553 and marked it off-topic.
awt 2 days ago 1 reply      
amaks 2 days ago 0 replies      
Thank you, Volkswagen!
A Tragic Loss teslamotors.com
360 points by runesoerensen  5 hours ago   367 comments top 57
BinaryIdiot 5 hours ago 14 replies      
I mean, I understand Tesla has to make a statement here, and I understand they want to assure everyone that it's not really their fault, but to title a post "A Tragic Loss" and then spend the majority of the post discussing all of your car's safety features and how it wasn't your fault just seems tone deaf and distasteful to me.

Maybe they had to do it for legal reasons, I don't know (I'm certainly not a lawyer), and I'd love to own a Tesla, but couldn't they have worded this a little more sympathetically and a little less like lawyers?

daveguy 4 hours ago 5 replies      
A direct reply from Elon Musk on twitter about why radar did not recognize the white side of a trailer across the road when the camera missed it:

"Radar tunes out what looks like an overhead road sign to avoid false braking events"



When the "overhead sign" comes down below the overhead clearance of the vehicle, the signal should not be masked. There should have been some braking action in this case. If there was not, then the Tesla autopilot is unsafe. This is the same blind spot discussed a few months ago that caused a Tesla to run into a parked trailer using Summon mode:


This seems like a serious flaw in autopilot functionality. Trailers are not that rare.

I would be interested if "autobrake/autofollow" functions of other car companies have similar problems.

jacquesm 5 hours ago 5 replies      
This is why driving AI is 'all or nothing' for me.

Assisted systems will lead to drivers paying less attention as the systems get better.

The figures quoted by Tesla seem impressive, but you have to assume the majority of the drivers are still paying attention all the time. As auto-pilots get better you'll see them paying attention less, and then the accident rate will go up, not down, for a while at least until the bugs are ironed out.

Note that this could have happened to a non-electric car just as easily, it's a human-computer hybrid issue related to having to pay attention to some instrument for a long time without anything interesting happening. The longer the interval that you don't need to act the bigger the chance that when you do need to act you will not be in time.

spenvo 3 hours ago 3 replies      
The top comment in another comment thread, which has been "duped"[0], pointed out how marking a feature in cars as "beta" is irresponsible.

What's beyond the pale IMO is that when auto-pilot was first demonstrated (at the unveil event), "hands on the wheel" was not part of the story. Journalists and (what appeared to be) Tesla employees were using the feature without hands on the wheel. It looked like Tesla cashing in on the positive PR without correctly framing the limitations of the tech.

Furthermore, Tesla includes sensors to map the entire surroundings of their cars, but why can't they include sensors to ensure customers have hands on the wheel? (update: comment says they do, but the check frequency is low. why can't it be high?!) It's not just the driver's life at stake, it's everyone else on the road--Tesla should disable this feature on cars [unless it ensures] drivers' hands are on the wheel. Engineers/execs at other companies taking a more responsible approach must be furious at the recklessness on display. One death is too many.

Tesla Auto-pilot fail videos: https://www.youtube.com/results?search_query=tesla+autopilot...

It's incredibly unfair to other drivers on the road to let someone else use beta software that could cause a head-on-collision.

[0] - https://news.ycombinator.com/item?id=12011635

archagon 4 hours ago 7 replies      
This is on the road to being off topic, but still relevant given some of the commentary in this thread:

It makes me a bit sad that the political zeitgeist in the tech community is leaning towards "acceptable losses" when it comes to accidents in automated cars, to the point of pre-emptively expressing disdain at ordinary people reacting negatively to such news. I sense it's going to become harder and harder for us to talk about our worries and skepticism regarding automated driving, since the louder voices claim it will all be worth it in the end. Surely surely you're on the side of less death? But personally, I find the utilitarian perspective distasteful. We're perfectly happy to let technology (literally) throw anonymous individuals under the bus as long as less people die overall, but what if it's you that gets hit by an auto? What if it's someone you care about, not Anonymous Driver On TV? The point is that humanity is not a herd to be taken as a whole; every life has rights, including the right not to be trampled by algorithmic decisions or software bugs for the betterment of all. (Sure, you could argue just as well that we have the right not to be run over by drunk and otherwise negligent drivers, but at least this kind of death is not methodical and has some legal recourse.) I feel this perspective needs a strong voice in the tech community too, to counter the blind push forward at the expense of human lives.

Now, this isn't necessarily what happened in this case, but I find Tesla's behavior in these kinds of situations to be creepy and self-serving, at best. Is every death going to come with a blog post describing how much safer automated features are compared to human drivers? Every auto-related casualty is, and should be, a massive event, not a minus-one-point on some ledger in Elon Musk's office.

petercooper 5 hours ago 4 replies      
> Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky,

I'm intrigued that the color is relevant in the car's case - wouldn't it be using some sort of radar to detect and map objects rather than vision? I appreciate I am probably missing something.

Animats 4 hours ago 1 reply      
"Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied."

Now that's blaming the user for a design flaw. There is no excuse for failing to detect the side of a semitrailer. Tesla has a radar. Did it fail, was its data misprocessed, or is the field of view badly chosen? The single radar sensor is mounted low on the front of the vehicle, and if it lacks sufficient vertical range, might be looking under a semitrailer. That's no excuse; semitrailers are not exactly an uncommon sight on roads.

I used an Eaton VORAD radar in the 2005 Grand Challenge, and it would have seen this. That's a radar from the 1990s.

I want to see the NTSB report. The NTSB hasn't posted anything yet.

simonsarris 5 hours ago 8 replies      
> What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.

Tragic no doubt, but I'm relieved that this was not a "Autopilot did something very very wrong" story.

Autopilot has the potential to save a large number of lives (I'm sure Tesla execs are thinking about touting "estimated lives saved by autopilot" if the numbers work out, after a few billion miles), so I hope incidents like this don't hamper public perception and therefore research.

kafkaesq 4 hours ago 3 replies      
This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles.

Given that Autopilot is generally activated under safer, more routine driving scenarios -- decent weather, regular suburban traffic and so on; for which we can naturally expect significantly lower fatality rates -- it doesn't sound like it's off to a particularly good "batting average" so far. Especially since we've been promised until now that self-driving cars will ultimately be not just incrementally safer, but categorically safer than human-piloted vehicles.
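Normalizing the two figures quoted above to the same denominator makes the comparison concrete (one fatality per 130 million Autopilot miles vs. one per 94 million miles overall, both taken from the Tesla statement cited in the comment):

```python
# Converts the two quoted rates to fatalities per 100M miles.
# With only one Autopilot fatality observed, the uncertainty on
# the first number is enormous, so this is illustrative only.
autopilot_miles_per_fatality = 130e6
overall_miles_per_fatality = 94e6

def per_100m_miles(miles_per_fatality):
    return 100e6 / miles_per_fatality

autopilot_rate = per_100m_miles(autopilot_miles_per_fatality)  # ~0.77
overall_rate = per_100m_miles(overall_miles_per_fatality)      # ~1.06

print(f"Autopilot: {autopilot_rate:.2f} fatalities per 100M miles")
print(f"Overall:   {overall_rate:.2f} fatalities per 100M miles")
```

The raw numbers favor Autopilot, but as the comment argues, Autopilot miles skew toward easy driving conditions, so the baselines aren't directly comparable.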

sverige 5 hours ago 0 replies      
Bottom line: Lots of work to do if the car can't see a semi-trailer in front of it. I would be willing to say that it's much more likely that the driver would have noticed in time if Autopilot wasn't on. It's likely that most average drivers will think so too, which pushes the acceptance of this technology further into the future.

And Tesla's statement does nothing to alleviate these reasonable doubts about putting your life and the lives of your family and friends in the hands of automotive software engineers.

mercurialshark 2 hours ago 0 replies      
The loss is of Joshua D. Brown, 40, of Canton, Ohio. A former Navy SEAL and technologist. Josh was a member of the Naval Special Warfare Development Group (SEAL Team Six) prior to founding Nexu Innovations.


johngalt 5 hours ago 0 replies      
The thing about self driving cars is that every accident will have a wealth of information in regards to how it occurred and the decision making that went into it. Once we understand and correct the problem, every subsequent car will be safer. This is not the case with humans, who regularly fail to learn from consequences to other humans.

Imagine if the first time someone fell asleep at the wheel and crashed, you could just tell everyone "hey don't fall asleep at the wheel". And it just never happened again.

mathattack 5 hours ago 3 replies      
Tragic. Good that they own it, though I'm not thrilled with this:

It is important to note that Tesla disables Autopilot by default and requires explicit acknowledgement that the system is new technology and still in a public beta phase before it can be enabled. When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot "is an assist feature that requires you to keep your hands on the steering wheel at all times," and that "you need to maintain control and responsibility for your vehicle while using it." Additionally, every time that Autopilot is engaged, the car reminds the driver to "Always keep your hands on the wheel. Be prepared to take over at any time." The system also makes frequent checks to ensure that the driver's hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.

greybox 5 hours ago 2 replies      
Entitled "A tragic loss" but reads like a massive disclaimer...
ChicagoBoy11 57 minutes ago 0 replies      
I wrote this on a comment thread 260 days ago when the autopilot features were introduced.

>A common phrase in aircraft cockpits nowadays is "What the heck is it doing now?" as pilots have migrated from actually flying the plane to simply being glorified systems managers. While planes have become so, so, so much safer because of all this automation, pilots' uncertainty regarding autopilot functioning is a major concern nowadays, and the reason for several accidents. There are very interesting HCI challenges around properly communicating to the pilot/driver "what the heck it is doing" and clearly communicating just how much control the human has or doesn't have at any given point. This "announcement" certainly doesn't inspire any confidence that they have really thought this through deeply enough (I think they probably have, but it should be communicated like it). As a huge Tesla fan, I can't help but feel like I need to hold my breath now and make sure something terrible doesn't happen because of this, and it ends up leading to even more regs setting us back on the path to full car automation.

I really hate having been right. But from the ensuing discussion here about the rightness or wrongness about labeling features "beta" in the car, to the disconnect of how the technology was "meant" to be used versus how it was portrayed by the company itself, it is pretty clear they really fucked up.

The huge discrepancy between the cautionary tone of this press release and everything that came before it is a great reminder that while some of us may be really worried about preventing AI from having unintended consequences in our society, there are at the same time many more pressing issues that we must address that are very much caused by very human motivations and actions.

TaylorGood 5 hours ago 3 replies      
As most of you have already seen, Autopilot has detected similar situations: https://www.youtube.com/watch?v=9X-5fKzmy38
davidiach 4 hours ago 0 replies      
It seems the driver was Joshua Brown, the guy who posted the viral video of how his Tesla avoided a crash on the highway some months ago. Really sad. http://www.theverge.com/2016/6/30/12072634/tesla-autopilot-c...
CobrastanJorji 5 hours ago 1 reply      
> It is important to note that Autopilot is...still in a public beta phase...

No, it's not important to note that. You should not be able to hide behind the word "Beta" for systems that could kill people. Either you're willing to let people risk their lives on your software or you're not, and you were.

tachim 4 hours ago 0 replies      
"What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied."

That's really disingenuous. There's no way to know for certain what the driver did or didn't see, but there are a lot more salient features for them to go by other than the white trailer, like the fact that the cab had driven across the road or that the wheels of the trailer were square in the driver's lane. A much more likely explanation is that the autopilot set expectations too high, the driver was not paying attention (like in all the youtube videos of the system in action), and the autopilot failed and crashed into the truck.

danso 58 minutes ago 0 replies      
Worth pointing out that this crash happened on May 7:


It's a little bit odd that the press release's sole specific reference to time -- "We learned yesterday" -- is in its 3rd word... which made me think the accident happened yesterday, or shortly before. Not two months before.

camkego 5 hours ago 0 replies      
The main issue is that once you take away 90% of the requirement for a human to focus on the task at hand (driving), they are going to ignore the remaining 10% where an exception occurs.
brianstorms 3 hours ago 0 replies      
As a pre-autopilot S owner, I've been skeptical of autopilot since the day it was announced. I think folks put too much trust in it, and edge cases, as anyone who's developed software/hardware knows, are the gremlins that are almost impossible to completely account for. Edge cases are the things that have kept me from being excited about autopilot. Even 99.99999% reliability means eventually you run into an edge case. Simple math.
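The "simple math" behind that claim can be sketched as follows. The per-mile failure probability here is a made-up illustrative number, not a measured Autopilot rate: treat each mile as an independent trial with a tiny chance of hitting an unhandled edge case, and fleet-scale mileage makes failures near-certain in aggregate.

```python
# Even extremely high per-mile reliability produces failures at
# fleet scale. 1e-7 per mile (i.e. 99.99999% reliable) is an
# assumed number for illustration only.

p_failure_per_mile = 1e-7
fleet_miles = 1e9               # a billion fleet miles

expected_failures = p_failure_per_mile * fleet_miles
print(f"Expected edge-case encounters: {expected_failures:.0f}")
```

At a billion fleet miles, a 1-in-10-million per-mile failure rate still implies on the order of a hundred edge-case encounters, which is the commenter's point.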

I also think that Tesla does not do anywhere near enough to educate brand-new owners, especially the non-techie later-adopter owners that are starting to buy Teslas in droves. They're turned loose with these cars and there are enough weirdnesses with the usability and user experience of driving and operating the car that it catches up on people. Add to that the cognitive distractions of 21st century life (esp mobile devices) and you have the makings of new kinds of "normal accidents."

I suspect that some time soon it will be a requirement that either DMVs issue new rules for having a license to operate a vehicle with autonomous capability, or, they'll require the manufacturer to certify that the buyer has gone through a certified course of instruction and training offered by the manufacturer or its designate.

It may well be that in this case, the victim, who was clearly a Tesla enthusiast and knew a lot about autopilot, may have passed any such testing by DMV or manufacturer with flying colors. Not the point. In general, as Tesla and other manufacturers make autonomy more mainstream, we're going to see these edge case situations more frequently.

Me, I view autopilot as a dangerous feature that I would never, ever trust. As a set of incredible SENSORS that help me with second-by-second situational awareness while driving, fine. As a replacement for my driving, hell no.

vkou 5 hours ago 1 reply      
Is this the new motto for autonomous vehicles? Move fast and break bones?

With these minor bugs, Tesla seems to be doing a solid job of poisoning the well for self-driving cars. I'd like to see them explain how their competitors should not be tarred with the same brush, once the political backlash hits.

It doesn't matter how many disclaimers you give before you turn on autopilot - a driver who was focused on driving the car (as opposed to letting autopilot cruise) would have probably noticed a tractor driving across the road on a bright, sunny day.

It's a dangerous system. Instead of arguing about the trolley problem, I'd first like to see a car be good at making decisions that save its passengers.

blinkingled 5 hours ago 0 replies      
I don't feel like Autopilot will ever be able to take over 100% of the time - there are so many corner cases, so many unexpected/illegal maneuvers, etc., to deal with that at least in the initial stages it would be wise to accept those limitations and not market it as Autopilot - instead make it DriverAssist or something.

It should really only take over in situations the driver failed to handle - asleep at the wheel causing the car to shift lanes when it is not safe: loud buzzer and lane correction to stay in the lane; not seeing an object in front: auto brake with buzz, etc. Then it would be a whole different story if it failed - the mistake was the driver's to begin with even if driver assist could not correct it. And it will still save a lot of lives, no doubt.

The whole idea that complex technology like this can guarantee a 100% safe and reliable outcome 100% of the time is, I think, one hell of an overpromise that will continue to cause people to become even more unaware and distracted than your usual car driver already is. And as this shows, it will not always end well.

Edit: A Volvo engineer had a similar opinion. From The Verge: "Some autonomous driving experts have criticized Tesla for introducing the Autopilot feature, with a Volvo engineer saying the system 'gives you the impression that it's doing more than it is.'" That's exactly right.

suprgeek 2 hours ago 0 replies      
Here is the meat of the Post:

"Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S"

1) The sensors did not "see" the trailer due to bright sunlight & <XYZ>
2) The collision was at the windshield level and above, but not to the front

Neither of these sound like "Extremely Rare" occurrences. That part is just pure PR Spin.

Request for TESLA - You are doing great things to move the whole industry forward. Do not F* it up by being too aggressive here.

1) Do not rely on purely visual cues; use (multiple) radars/LiDARs/something that works irrespective of visible light conditions - so an additional radar unit on top that is checking for obstacles at the height of the car would make sense.

2) Improved Crash sensors

Till then, DISABLE the system except in very limited circumstances even if the owner tries to activate it. This is NOT a "Move Fast and Break Things" situation. Some poor soul lost his life because Tesla's inability to foresee this situation lulled him into a false sense of complacency (and partly because of his own inattention).

TheMagicHorsey 3 hours ago 0 replies      
I think it's a bit ridiculous that the Tesla feature is called an autopilot when it only covers a subset of circumstances a human could handle. I don't see how a human being would not apply brakes when they see a truck perpendicular across a highway ... it can't be missed. The guy was not paying attention simply because the Tesla smoothly handles 95% of circumstances ... something in the other 5% occurred and he relied on the machine because of the smooth previous experience ... and died.

The robot led him to believe it was more competent than it really was.

ikeboy 1 hour ago 0 replies      

Interesting, says the Model S was rated the safest of any car tested.

rsp1984 5 hours ago 3 replies      
> Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.

It appears to me like this is a case that shows the limitations of stereo vision as the main sensor system guiding the Tesla autopilot. I would even venture to say that it could have possibly been avoided using LiDAR technology. According to various interviews, however, Elon Musk is not a big fan of LiDAR [1]:

"I don't think you need LIDAR. I think you can do this all with passive optical and then with maybe one forward RADAR," Musk said at a press conference in October. "I think that completely solves it without the use of LIDAR. I'm not a big fan of LIDAR, I don't think it makes sense in this context."

[1] http://www.techinsider.io/difference-between-google-and-tesl...

YeGoblynQueenne 3 hours ago 0 replies      
So, reading this made me finally realise what I don't like about autonomous cars.

tl;dr: stupid bugs can now kill you dead.

You know how AI systems, when they fail, they really, really fail? Like, when you ask Google Image Search for "unprofessional hair" and it returns images of black women's hair because it doesn't have a clue? Or when you ask Google Translate to translate "swallows" to Greek and it comes back with the Greek word for "swallowing" (true story) because it has no idea what either word means, just that they're somehow similar, ish?

That sort of thing. When AI fails, it fails in ways that nothing with a brain will ever deign to fail. People don't crash into the sides of barns because they mistook them for the open sky. People don't run over old ladies because they thought they were flowerbeds.

Human drivers fail in situations where they don't have time to react- but even given all the time in the world there are situations where a program will simply derp, and derp and derp again until the end of time. And if this happens in the middle of a life or death situation, people die. The same happens with autopilots on planes.

So this is what, well, bugs me. I don't guess I get to choose how I die (unless I do it myself) but I'm pretty sure I don't want to die because of a dumb bug in a stupid machine. Therefore, I don't want no cars being "smart" around me. I don't want to be the one driving them and I don't want others driving them around me.

The current tech simply can't avoid completely brain-dead mistakes and no road vehicle should be given autonomy until that changes.

fdsaaf 3 hours ago 0 replies      
I'm really not looking forward to the moralistic haranguing that's going to happen over the next few days. One commentator complained that the tech community is adopting an "acceptable losses" posture. I don't see a problem with this attitude --- anything else is unrealistic and is an argument against technological progress.

If our ancestors had adopted the attitude that we could do nothing not perfectly safe, we'd have never left the damn savanna. We'd have certainly never allowed aircraft to fly.

funkysquid 4 hours ago 0 replies      
I don't think they should have said anything at all in this case, beyond their condolences.

1. Autopilot didn't cause the accident, but they're sort of making it sound like it did by bringing it up

2. If people are arguing that autopilot contributed by making the driver feel like they didn't need to pay as much attention, the only way to defend against that is to blame the driver - why would you do that?

3. It's sort of invading the privacy of someone who recently died by detailing the crash, making it about Tesla. I don't believe other car companies would issue a PR release if one of their cars got in a crash, even with accident avoidance features on?

It somehow manages to feel self serving and make Tesla look bad at the same time.

gordon_freeman 4 hours ago 3 replies      
Nobody is talking about whether it is safe to put beta software of such an advanced autopilot system in the real world? I mean, they are giving people the choice to turn on this Autopilot functionality as if it were just another driver-assistance safety feature like 'Honda Sensing,' but in reality it is a much more advanced system, and it seems Tesla is doing this to use the data to feed its upcoming autonomous technology. But at what cost!

UPDATE: Just read this news on NYTimes:

"The traffic safety agency is nearing the release of a new set of guidelines and regulations regarding the testing of self-driving vehicles on public roads. They were expected to be released in July."

Good and much needed step from NHTSA here.

dikaiosune 5 hours ago 0 replies      
I think it would have read a little better if the last paragraph was earlier. It ends up sounding tawdry after the massive disclaimer.
mxfh 3 hours ago 0 replies      
It's a blind spot. Dozens of cyclists get mowed down by right-turning trucks every year in Germany alone, and nobody cares about implementing a mandatory technological solution (beyond upgrading mirrors and optional camera systems) for that well-defined space and situation at relatively slow speeds, because of the costs to the logistics companies.

At least with autonomous cars, everybody still seems to care about eliminating more or less rare errors altogether, because we seem less forgiving of technological error than of human error. And adding fitting sensors and detection algorithms to eliminate that blind spot is relatively easy and cheap if the car already has a sensor network infrastructure.

I doubt that similar accidents with human drivers would even get their own statistical category.

_vk_ 3 hours ago 0 replies      
This sucks, but it seems likely that we've had the first victim of autopilot-induced inattentiveness.

From the deceased's viral video (https://www.youtube.com/watch?v=9I5rraWJq6E):

>I actually wasn't watching that direction and Tessy (the name of my car) was on duty with autopilot engaged. I became aware of the danger when Tessy alerted me with the "immediately take over" warning chime and the car swerving to the right to avoid the side collision.

Combine this with the strange claim that Tesla put in their press release:

>Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.

It's hard to see how one can miss a tractor trailer "against a brightly lit sky" when one is paying attention. The human visual system isn't that bad. A likelier explanation is that he just wasn't looking. :(

Splines 5 hours ago 0 replies      
All I can say is that I would not want to be a Tesla engineer on the Autopilot feature.

And I know what you're thinking - it's not because I think some poor engineer is going to get the axe (ok, maybe that's not what you're thinking). It's because I would not want to be responsible for the code that needs to deliver millions of people billions of miles in their car. It's a daunting responsibility.

davidw 4 hours ago 0 replies      
I'm not particularly a Tesla fan and have no skin in that game, but I think people can both be defensive of the technology and genuinely, humanly sad about the loss of life. I don't see this as being cynical.
niftich 5 hours ago 1 reply      
Other premium automakers have automatic braking and pre-collision systems that take effect regardless of whether the (enhanced) cruise control is engaged.

We have seen from other incidents that Tesla's autobraking will not engage when autopilot is disabled (potentially by tapping the brakes), which is contrary to widespread user expectation -- disclaimers notwithstanding.

However, in this particular case, it was the obstruction detection that failed. Based on their blog post disclaimer, it appears to be a visual sensing system; an eye-height radar-based system like those used by most other manufacturers would have correctly detected the obstruction.

Is there an industry standard (yet) for the efficacy and behavior that these systems must meet?

ChuckMcM 5 hours ago 0 replies      
Very sad. I once designed a robot that used infra-red collision detection to avoid obstacles; my home-built stereo cabinet, painted flat black, registered as open space and the robot drove right into it at full speed. It was a quick lesson on sensor fallibility for me.

I've also observed human drivers on busy roads nearly colliding with obstacles when driving toward the setting sun, the visor pulled down and still trying to shade their eyes.

But robots aren't humans, and they don't have to rely only on vision, there are so many ways to "look" it seems like we should have several different ways of identifying obstacles. Different spectra at least.

itg 3 hours ago 1 reply      
The fact that Tesla will blame their users for anything that goes wrong makes me avoid considering one. Along with all the hype they try to sell.

But what disappoints me the most is how the "tech" crowd will defend Tesla every time. Someone lost their life because Tesla can't put out a decent semi-autonomous driving system and we are supposed to brush it aside?

United857 5 hours ago 0 replies      
I couldn't find any other details about the accident, but it seems from the description like it was a side collision at an intersection.

In order for that to happen, it's likely that one of the vehicles had to violate a stop sign or red light.

Note that the Autopilot isn't aware of these things (it won't stop unless a car in front stops). If the Tesla was the vehicle at fault, then most likely it was unawareness of this limitation on the part of the driver (or complacency thinking there wouldn't be any traffic).

java-man 2 hours ago 0 replies      
I could never understand why Tesla needed the auto-pilot feature. They have too much on their plate already. What possible benefit could it bring to justify adding such a high risk decision?
agumonkey 5 hours ago 0 replies      
The situation is a bit confusing. Did the tractor cross in a hurry just before impact? Was the victim really blinded by the colors? Maybe it's just vocabulary; a tractor trailer seems huge and hard to ignore.

Even though I'm eager to see SDV benefits, Autopilot seems like way too much hassle for such possible consequences. An overengineered auto cruise.

vinceguidry 2 hours ago 0 replies      
If Autopilot had been on the truck, the accident would have never happened.
alkonaut 4 hours ago 0 replies      
After reading a lot of the comments here it's obvious that the best thing Tesla could do for the safety of this feature is to simply rename it "Magic Cruise Control" or "CoPilot" or something to that effect.
hristov 3 hours ago 1 reply      
I am not sure what happened in this situation, but I have to warn any HNers that have teslas or might get behind the wheel of one -- do not rely on the autopilot!!! If you are using the autopilot you must be in a heightened sense of alert, not lowered. The autopilot can get you into a dicey situation, and then you have to take over control and react very fast.
yardie 5 hours ago 1 reply      
Wow. I'm speechless.

I do hope they are able to learn more from this. This is just a bad coincidence of environment meeting chance, and someone died because of it.

I do think computer vision still has a bit further to go. A white trailer and a white sky shouldn't be a problem. But every day I'm impressed with how human sight can find and lock onto the most obscure details. Like, I've been sun-blinded before and yet I knew something was in front of me because I noticed a grey shadow on grey asphalt and immediately hit the brakes.

We're almost there but not quite.

BoarAndBuck 5 hours ago 0 replies      
Condolences to the family. Since he was "a friend to Tesla" there might not be a lawsuit. If every Tesla owner is considered a friend, there might never be a lawsuit. These things are bound to occur. Remember Morrison and "Keep your eyes on the road...and your hands upon the wheel!"
aab0 5 hours ago 0 replies      
This sounds exactly like the previous Autopilot failure! A high-up truck where the Tesla slides under and crashes.
Artlav 5 hours ago 1 reply      
Somehow this reminds me of Dr. Daystrom defending the M-5 computer, back in the original series.
robbiet480 5 hours ago 1 reply      
It's all over CNBC right now, Tesla is down 2.58% in after hours after closing up 0.99%
ebbv 5 hours ago 0 replies      
I count myself as a big Tesla fan but this announcement is too heavy on the disclaimers and ass covering and too little on empathy and sympathy.
gohrt 5 hours ago 1 reply      
> The high ride height of the trailer

Is non-standard ride-height a safety failure? Should trailers have crash panels to avoid drive-unders?

scythe 5 hours ago 1 reply      
> Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky,

So this got me wondering: are there areas where retinae still do better than cameras at certain kinds of discrimination problems? Nature still has human technology beat on hardware in a few categories (particularly joints), but I didn't expect eyes to be one of them. When I look at or near a bright light, I can see objects close to the light much better than I can in a photograph, but I always assumed this was only because I had a cheap camera.

The corresponding question is: how expensive is the camera that prevents this accident?

return0 5 hours ago 2 replies      
Tesla should not allow autopilots. They have nothing to gain from the negative publicity they will get from cases like these. Regardless of how much bad luck the driver had, Tesla can always be blamed for relaxing the driver's reflexes by giving a false sense of safety. It doesn't matter if it was turned off by default or hidden in menus.

What other companies use this kind of driving-assist system? The ones I know of only hint to the driver, and disengage if the driver does not respond.

Got to be careful about these things. You don't put beta stuff in cars. It's like your doctor running drug trials on you. They should figure out a way that Autopilot engages the driver and keeps his eyes on the road.

zallarak 5 hours ago 1 reply      
No other auto maker would do this. Please correct me if I'm wrong, but this makes me respect Tesla even more.

It's also a reminder to engineers to have an extremely high bar of excellence.

EDIT: thanks for telling me I'm wrong. This was a self-serving blog post on their part for damage control.

To Compete Better, States Are Trying to Curb Noncompete Pacts nytimes.com
292 points by dnetesn  2 days ago   202 comments top 25
rdl 2 days ago 5 replies      
I've been thinking of a mutual defense society for startups in Washington who in good faith hire someone and the employee is then threatened by Amazon or Microsoft, as a former employer, on non compete grounds.

I suspect if you were willing to completely go to war over a hire, you could ultimately get noncompetes invalidated. The problem is the chilling effect on marginal hires -- it usually isn't worth going all the way on hiring someone when you have other options, and even when it is, it makes sense to settle and move on (for both sides).

(From what I've heard, Amazon is particularly egregious in enforcing noncompetes. I believe they are an unconscionable and unconstitutional restraint on individual liberty, and believe the federal courts would agree, as well as the court of public opinion.)

jt2190 2 days ago 0 replies      

*** MASSACHUSETTS TECHIES: THIS IS ABOUT YOU ***

* Read the article
* Contact your State Representative [1] and urge them to support the bill [2]

[1] https://malegislature.gov/People/Search
[2] I think this is the bill text: https://malegislature.gov/Bills/189/House/H4434

tdicola 2 days ago 3 replies      
I've been fortunate enough to have an employer with no non-compete, but in the past with my first job out of college I just signed everything and was under one for a year after leaving. My thought process now is that a non-compete is just another thing to negotiate during the hiring process. I'm happy to sign one but I will absolutely require that I'm compensated at least for a year of salary, benefits, etc. while under the non-compete. It's only fair that if you're asking me not to work for a year that I can actually sustain myself and my family. Employers are just coasting off the fact that people will sign these things and ask for nothing in return--once people realize it's negotiable (even if you just plain walk from that job opportunity to something else) employers will have a much tougher time with them IMHO.
morgante 2 days ago 1 reply      
It seems particularly egregious that non-competes apply even in the case of layoffs or firing.

It's one thing to keep employees from running off to a competitor for a slightly higher salary.

It's unconscionable to fire someone and then prevent them from finding a new job in their field. That's egregious.

chollida1 2 days ago 2 replies      
It's nice for some anonymous internet person to say "Non compete clause? Just don't sign them," but if you are ever lucky enough to get high up in a company this often just isn't an option.

So if you do have to sign one, then the below is the advice I've been given by the employment lawyers I've spoken to over the years.

1) Make sure it says you are compensated for the time you can't work, i.e. if it says you can't work for a competitor for a year then you should be getting a salary for the same duration. Some companies will try to give you a signing bonus and include language that the signing bonus is consideration for you waiving this compensation. I mean, it's nice to get $50,000 in stock vested over 3 years just for signing, but it won't feel that way if your former employer also views this as compensation for you not working for a year.

I've had 3 or 4 employment lawyers go over this with me, both as an employee and as an employer, and they've all repeated this: if the company doesn't pay your salary during the non-compete period then you just can't sign it. Now, they also stressed that this means the non-compete is probably not enforceable, but that won't be much consolation if they drag you to court.

2) Be very clear as to what "salary" means. So if you are a Google engineer and you have a salary of say $125,000 and then a bonus of say $100,000 worth of restricted shares vesting over 3 years and maybe a signing bonus of $50,000 worth of shares vesting over 3 years, you might brag that you just got paid $275,000, but the company will probably argue that they only need to pay you the pro rated amount of your salary over the waiting period.

Finance especially gets burned by this, as small salary and huge bonuses are how many hedge funds compensate their key employees, also known as the "you eat what you kill" compensation package.

3) Be very wary of telling your former employer of where you are going to work. There just is no real upside to it.

I've also been told not to hire former coworkers in the first year after you leave for a competitor. It's one thing for a company to lose you, but if you leave and take an entire team with you then, even though you've probably done nothing wrong, it won't be very comforting when you are out of pocket tens of thousands of dollars in lawyer fees and stress.

As always, IANAL; I just happen to work in the most incestuous industry around, finance, and I've seen and heard too many horror stories of people leaving for another firm and bringing their team with them and then being in court for years.
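The gap described in point 2 above can be sketched numerically. All figures are the commenter's hypothetical Google example, not real compensation data, and the annualized-vesting treatment of the grants is an assumption for illustration:

```python
# Hypothetical package from the comment above: $125k base salary,
# a $100k RSU grant and a $50k signing grant, each vesting over
# 3 years. A non-compete clause that pays only "salary" during a
# one-year garden leave covers far less than total annual comp.

base_salary = 125_000
rsu_per_year = 100_000 / 3       # annual slice of the 3-year RSU grant
signing_per_year = 50_000 / 3    # annual slice of the signing grant

total_annual_comp = base_salary + rsu_per_year + signing_per_year
covered = base_salary            # what a salary-only clause pays

print(f"Total annual comp: ${total_annual_comp:,.0f}")
print(f"Covered by 'salary' clause: ${covered:,.0f} "
      f"({covered / total_annual_comp:.0%})")
```

On these assumed numbers the "salary" clause covers only about 71% of annualized compensation, which is why the comment stresses pinning down what "salary" means before signing.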

Animats 1 day ago 1 reply      
None of those states will catch up with California. The other states are proposing weak limits on noncompetes - limited by industry, limited to "unreasonable" terms, etc. California also doesn't let employers claim rights in work you do on your own time. Both of those rights fuel California's startup culture.
harry8 2 days ago 0 replies      
It's really simple. If you want to enforce a non-compete you have to pay* the employee gardening leave for its duration.

Enforcing a non-compete when you've laid someone off is utterly indefensible, and anyone who does this deserves to go bankrupt fast. Usually when forced to pay, employers discover ruining an employee's career isn't actually worth anything much to them and don't enforce. But if it's free, it's a great lesson to current employees: we'll go after you if you want to leave, so no, you're not getting a raise.

*Pay the maximum of the employee's salary (including any bonuses) vs. market rate, where the market rate calculation has to be defended by the employer in court and can be retrospectively re-applied, with the usual obvious basis of "the employee had offers for $x" etc.

etjossem 2 days ago 4 replies      
When an employer sends you their boilerplate agreement and you see "Employee will not work for a competing business", you're not looking at the only employment agreement they've ever signed. You're looking at the version most favorable to them. Strike out the "for 2 years following employment" and replace it with something reasonable like "while employed by the Company."

It's customary to ask for a ridiculous clause like this up front, because no candidate will walk away over the mere sight of it. They get one free chance to screw you over, just like when you were talking about salary and they asked what you made at your last job. But you have to do your part; politely bounce it back.

You are a professional and this is a negotiation. No one will begrudge you putting a reasonable counteroffer on the table.

angersock 2 days ago 2 replies      
I'm curious if there has ever been a case where an employee going into competition with their former employer has been a net negative for society at large.

Most of the tech we enjoy today probably would've been prematurely killed if the creators were under modern non-compete and NDA paperwork.

mancerayder 2 days ago 1 reply      
In New York State, the company has to demonstrate damage to itself in order to win a lawsuit, and non-competes are usually tied to some sort of previous grant of something, like stock options, which can be lost. In practice, judges rarely enforce them for more than a year, although the contracts stipulate longer periods.

Now, the above is just my understanding; I'm not a lawyer, and I'm new to this awful world of non-competes.

Just the threat of a lawsuit can steer behavior, since defending against one can be costly.

shermozle 1 day ago 3 replies      
In Australia it's quite simple. If you want an enforceable non-compete, you have to pay the person their salary for the duration. I'm personally quite happy to sit on the beach on full pay for a year or two if you guys think it's important!
anonymousDan 2 days ago 3 replies      
Wait, so in some states you can be prevented from working in your field, and the company doesn't even have to compensate you? That's insane! Why would you agree to that?
DannyBee 2 days ago 3 replies      
The old boogeyman that used to get trotted out by companies was the "they'll steal trade secrets and run away to other companies".

With the federal defense of trade secrets act now a thing, can someone tell me exactly what purpose at all non-competes still have that makes any sense?

(If it's "we spend time and money training people or signing bonuses or whatever", great, make them pay back the money if they leave too quickly)

ProAm 2 days ago 5 replies      
I thought it was illegal to prevent someone from earning a living?
OliverJones 2 days ago 0 replies      
If you live in MA, please write to your state legislator and ask her or him to move on this. We need noncompete reform here. This bill isn't perfect. But, as a child I know might say, "EMC isn't the king of me."



JustSomeNobody 1 day ago 0 replies      
My favorite is when the company you work for is purchased and the new owners come in and lay people off, then those who are left get 3 days to sign a non-compete. I begrudgingly signed because I had nothing else lined up at the time.
nfriedly 1 day ago 0 replies      
I have a friend who was "reminded" by his first employer that he had signed a non-compete after he changed jobs to a second (competing) employer. He ended up finding a third job at a company in a somewhat different field.

This was in Ohio, where non-competes are enforceable, and the courts likely would have sided with the first employer if it had gotten to that point.

I don't think he ever realized what he had signed until he was "reminded" of it.

gwbas1c 1 day ago 0 replies      
I think requiring a high portion of a former salary during a non-compete is fine. Otherwise, it's just not fair to enforce.
roflchoppa 2 days ago 1 reply      
Yeah, when I was working retail, there was a cat I met who needed some help (he was known by all the managers). I offered to help him on his project, which my managers found out about, and they were talking about firing me over it. I had no idea that it would violate the non-compete clause I signed when hired, given that it was not related to retail AT all and there were no transferable skills I had learned. Luckily my lead saved my ass and told the managers, "There's no way he knew that."
sitkack 1 day ago 0 replies      
I don't understand the companies that are going under, lay off employees, and then still expect the non-compete to be binding. I would at least expect that where the company initiated the departure, the non-compete is no longer binding. That, and that I'd get a small stipend for the duration of the non-compete.
pc86 1 day ago 0 replies      
By "states" they mean Massachusetts. In many states they are all but entirely unenforceable (CA) or only enforceable under very specific circumstances (PA).
ckdarby 1 day ago 0 replies      
Paywall. Why Does HN allow this stuff to be submitted?
CyberDildonics 1 day ago 1 reply      
I had a non compete clause crossed out of a contract once. I was told they didn't usually change contracts, but they did.

I was also told 'it isn't usually enforced, there are plenty of people that have taken jobs at <only competition in the city>'. I said 'then it shouldn't be in there'.

Everything worked out for me but people need to remember a few things:

1. A contract is a negotiation/conversation/two-way agreement. It is never a take-it-or-leave-it offer, in my experience.

2. Get the contract EARLY. Companies try to build momentum and act like a contract is a formality. Then they hand you something that takes away as much power from you as possible. They have lawyers, bargaining power, choice, time, experience, and a global view of salary that you might not. There is a huge imbalance of information.

3. If it is worth signing, it is probably worth having a lawyer take a look at it. A lawyer might bill at $250/hr. It might take 30-45 minutes to look through a contract and that can be a very worthwhile investment.

4. And of course, take the contract seriously. Don't sign something thinking the scenarios where it comes into play are too rare for you to care.

pg_is_a_butt 2 days ago 1 reply      
uh... isn't the whole point of "States" because a larger controlling body is a bad thing, and many groups want to make deals in their own ways? so then a Company WANTS to make deals their own way "Noncompete Pact" deals, and now you're telling them they can't do that... and you think by driving away these companies by not letting them do what they want, that you will somehow compete better? for what? the 50th spot on the economic rankings list?


zekevermillion 2 days ago 1 reply      
The article is seriously misinformed. It seems to imply that noncompetes can restrict employees from working in their field. That is not the case in New York. Post-employment restrictions are only enforceable to the extent they protect a "legitimate business interest" of the former employer. That interest does not extend to preventing former employees from practicing their trade. Employers use noncompetes abusively all the time. The problem is not the state of noncompete law. The problem is the nature of the legal system.
Extracting Qualcomm's KeyMaster Keys Breaking Android Full Disk Encryption bits-please.blogspot.com
296 points by laginimaineb  13 hours ago   87 comments top 13
koolba 9 hours ago 7 replies      
Full disk encryption (FDE) is a UX issue, not a technical one. You don't need a secure cryptographic processor, but the UX sucks without one.

A simple, working, FDE setup would be something like LUKS running at boot:

 1. Turn on phone
 2. Phone loads up initial bootstrap OS
 3. Phone prompts user for master key
 4. Master key is used to unlock volume
 5. Regular OS boot continues
If the master key has enough entropy, brute forcing it becomes impossible. The phone won't "disable" as there's no self-destructing component (i.e. "secure crypto chip") but that doesn't mean it can be cracked. Boil as many oceans as you'd like, you're not going to brute force 256 bits of entropy.

The UX problem is that the master key is a PITA to enter if it's long enough to be cryptographically secure. That's what a crypto chip is supposed to solve. A limited number of attempts with a shorter passphrase.
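A rough back-of-the-envelope sketch of that "boil the oceans" claim (the guess rate below is a made-up, generous figure for an offline attacker, purely illustrative):

```python
# Compare worst-case brute-force time for a 4-digit PIN vs. a 256-bit key.
# GUESSES_PER_SECOND is an assumed figure for a well-funded offline attacker.
GUESSES_PER_SECOND = 10**12

def seconds_to_exhaust(keyspace: int) -> float:
    """Worst-case time to try every candidate key."""
    return keyspace / GUESSES_PER_SECOND

pin_space = 10**4    # 4-digit numeric PIN
key_space = 2**256   # 256 bits of entropy

print(seconds_to_exhaust(pin_space))   # well under a second
print(seconds_to_exhaust(key_space))   # ~1.2e65 seconds, far beyond the age of the universe
```

Even with wildly optimistic hardware assumptions, the 256-bit keyspace stays out of reach; the whole game is whether the UX lets users actually carry such a key.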

sdl 10 hours ago 0 replies      
So if I understood correctly, there are 5 requirements for such a system to be secure:

 1: secure/unmodifiable cryptographic processor
 2: with unremovable rate limiting
 3: and exclusive access to a hardware key
 4: cryptographic processor has the only function of encrypting user data based on
 5: hardware key and a user supplied pin/key
Errors done by Qualcomm:

 Violated 3: Hardware key not exclusively readable by cryptographic processor
 Violated 5: Encryption based on derived key
Anything I overlooked?

(edited: formatting)
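A sketch of the difference between satisfying and violating points 3/5 above (the KDF and names here are illustrative stand-ins, not Qualcomm's actual KeyMaster scheme):

```python
import hashlib
import hmac

# Illustrative device-unique secret; point 3 requires that ONLY the secure
# cryptographic processor can ever read this value.
hw_key = b"\x00" * 32

def disk_key_secure(user_pin: bytes) -> bytes:
    # Done entirely inside the cryptographic processor: the hardware key
    # never crosses the hardware boundary, so off-device brute force is out.
    return hmac.new(hw_key, user_pin, hashlib.sha256).digest()

def disk_key_broken(user_pin: bytes) -> bytes:
    # The flaw in points 3/5: an intermediate *derived* key is computed and
    # handed to software. Anything software can read, an attacker with a
    # TrustZone exploit can exfiltrate, then brute-force the PIN off-device.
    derived = hashlib.sha256(hw_key).digest()  # leaves the hardware boundary
    return hmac.new(derived, user_pin, hashlib.sha256).digest()
```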

coldcode 13 hours ago 5 replies      
Since the security of Android depends on hardware and OEM software not under Google's control, depending on FDE is apparently pointless. I guarantee Google really wants to build its own branded phones with its own secure Android version and gain Apple's advantage in building secure systems, because then it would own everything.
616c 8 hours ago 1 reply      
So out of curiosity, as someone who has a nontrivial passcode on his Android device that people constantly mock: how many characters are we talking to be safe?

I have known for a long time that 4-digit numeric PINs are stupid. Sadly, the San Bernardino case, for all the wrong reasons, taught me all the alternative auth methods are just as risky.

Should I be worried? I don't know. But as a long-time Android enthusiast and power user who does not use Google Play on his phone and uses restrictive permission customization like XPrivacy, I am about to just give up and have a newish iPhone for secure stuff and a knock-around Android for the cool open source dev I aspire to with F-Droid.

devit 11 hours ago 2 replies      
It doesn't really break the encryption, as long as the password is strong enough to prevent brute forcing.

Relying on a weak password and a "trusted computing" mechanism like this one from Qualcomm to prevent an attacker with physical access from brute forcing it is not really advisable.

Using such a mechanism at all has downsides since it means that you lose the data if the mechanism or the entire device stops working, while otherwise you could simply move the SSD/flash/storage unit to a new identical device and have it just work as long as you type the correct password.

nickik 7 hours ago 1 reply      
I would use a longer password. But currently the unlock and the encryption password are always the same. That's one of the issues I would like to see changed.

I would also like more fine-grained rules on when I can unlock with a fingerprint, when with a PIN, and when I should be forced to put in the encryption password.

Additionally, I would like to use a U2F NFC token on my keychain as a second factor for unlock (if I have not touched the phone for X amount of time).

bognition 12 hours ago 3 replies      
Fascinating article, the more I learn about crypto and security the more obvious it becomes how hard this stuff is to get right.
pooze 1 hour ago 0 replies      
Could someone explain how and why it uses "0x1337" in order to validate the key?

Is that magic?
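Not magic, just a sentinel. A generic illustration of the pattern (this is not the actual KeyMaster code; the blob layout is invented): after decrypting or parsing a blob, checking for a known constant tells you it decoded correctly, since wrong key material yields effectively random bytes that almost never match.

```python
import struct

MAGIC = 0x1337  # arbitrary but recognizable constant

def make_blob(payload: bytes) -> bytes:
    # Prepend the magic value as a little-endian 32-bit integer.
    return struct.pack("<I", MAGIC) + payload

def validate_blob(blob: bytes) -> bool:
    # Corrupt data / a wrong key decodes to random bytes, which match
    # MAGIC with probability only ~2^-32.
    (magic,) = struct.unpack_from("<I", blob)
    return magic == MAGIC
```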


AdmiralAsshat 12 hours ago 2 replies      
Unfortunately, it seems as though fixing the issue is not simple, and might require hardware changes.

So any Android phone currently on the market is basically unfixable?

arrty88 12 hours ago 1 reply      
Would love to see the same analysis on Samsung hardware
ysleepy 4 hours ago 0 replies      
If your device is rooted, you can use a long password for encryption and a pattern or even no lock for the lockscreen.

I have set it up this way; it needed some sqlite hackery. XDA-dev has the appropriate info.

woumn 9 hours ago 0 replies      
If anyone is interested: where the author gives an overview of the iOS security measures, I created a more in-depth review that is an easy read and gives you a good overview of the security architecture used. I'm happy to hear feedback if you enjoy it. The blog can be found at: https://woumn.wordpress.com/2016/05/02/security-principles-i...
dcdevito 11 hours ago 0 replies      

the only way to fly

Huge helium discovery 'a life-saving find' ox.ac.uk
289 points by emilong  2 days ago   185 comments top 16
devishard 2 days ago 9 replies      
On helium rarity: helium can easily be produced by nuclear reaction. That's expensive, of course. But if we raise the price of helium to prevent depletion of resources, helium will be expensive too.

Given we have a way to make it, we aren't going to run out of the stuff, so I just don't think this problem is that dire.

awjr 2 days ago 3 replies      
Given the scarcity of this stuff, we really need to be looking at this in terms of centuries and maybe tight controls are still a good idea.
tomp 2 days ago 0 replies      
> Now, a research group [...] has developed a brand new exploration approach. The first use of this method has resulted in the discovery of a world-class helium gas field in Tanzania.

Hm... One try, one hit. Maybe it's not that rare after all?

aexaey 2 days ago 5 replies      

 Helium [...] is critical to [...] MRI scanner
Is that still the case, now that high-temperature superconductors are available, which work just fine while being cooled with much cheaper liquid nitrogen instead of requiring liquid helium?

JakeWesorick 2 days ago 6 replies      
If helium is so important and rare why do we put it in balloons?
perilunar 2 days ago 2 replies      
What is wrong with people? Why would you ever use BCf instead of kg? Same with barrels (oil) and kWh per day (power? energy? I get confused with all the different time units in there).

Use SI units FFS.

executesorder66 2 days ago 0 replies      

I found those interesting reads as well. I did not know the US had the largest supply of Helium.

nradov 1 day ago 0 replies      
I guess we'll be able to continue using helium mixes for open circuit scuba diving for a few more years. Hopefully that will allow sufficient time for rebreathers to become cheaper, simpler, and more reliable. I do feel bad about wasting a limited resource every time I go diving.
Animats 2 days ago 0 replies      
The "huge helium discovery" needs to get more real first. The startup Helium One has not yet drilled a well.[1] They're prospectors only at this point. You can read similar gold mining prospector reports if you're into that sort of thing.

[1] http://www.helium-one.com/projects/

mark-r 2 days ago 0 replies      
I wonder how accurate the estimate is for the size of the find? They say it was "independently verified", but if they were paying the bill for that verification they might have simply gotten what they wanted to hear.
desireco42 2 days ago 0 replies      
It just should be more expensive, that is all. That is at least my understanding of the problem.
dlio 2 days ago 0 replies      
What is it about the MRI/superconducting applications that precludes recapture?
randyrand 2 days ago 0 replies      
What percentage of helium goes into the residential balloon market?

Just curious.

AKifer 2 days ago 0 replies      
start a race and the next big thing will be a Helium bubble
mmaunder 2 days ago 0 replies      
The rift valley where this was found cuts straight through the Serengeti, one of the largest and most awesome national parks in the world. I hope this doesn't tear that area up.
rnnr 2 days ago 0 replies      
We run out of things we could possibly run out of!!!
What happens when you try to publish a failure to replicate in 2015/2016 xeniaschmalz.blogspot.com
327 points by brianchu  1 day ago   135 comments top 15
tehwalrus 1 day ago 3 replies      
The paper was a meta-analysis of nine unpublished studies. While the issue of non-publication of negative results is extremely important, this should not be the test case for it.

I agree with other comments that the lack of null results in journals is a massive showstopper for epistemology and confidence in scientific work. In my experience during my PhD, academics were unwilling to submit null results; they wanted to find the actual answer and publish that instead - which leads to delays of years, leaves well-known and obvious errors in existing papers unchallenged in public, and potentially means that scientist never even submits the null result.

faux_intellect 1 day ago 2 replies      
At this point, I think Google Scholar should step in and just put a replications section beside every scientific publication. People should be able to quickly and easily know how many times replication of a study has been attempted and, of those attempts, how many actually succeeded.

It's unfortunate that replications aren't taken more seriously these days, but it also doesn't help that, when there are actual replications, you have to scour the internet for them rather than having them readily available to you.

jknoepfler 1 day ago 0 replies      
The narrative would be more persuasive if it incorporated a story of how the paper evolved meaningfully in response to peer criticism. The question lingering in my mind after reading this is whether and how the paper was substantially revised (in light of reviewer feedback) between rejections. I'm sure it was (it has to have been, right?), but we don't get that feeling from the blog post. The author(s?) should have received a large amount of very good feedback between rejections from well-meaning peers in their scientific community. I don't recall reading about incorporating any of that feedback into subsequent revisions of the paper. The term "meta-analysis" probably should have been dropped after the first (pointed) rejection, for example, and the paper should probably have been broken down into two or three smaller papers rather than submitted as a 'meta-analysis' of unpublished work.

This is not to say that peer feedback wasn't taken seriously. I don't know that at all. But if the goal is to persuade a skeptical audience that academic publishing is broken, the author should articulate how they followed best practices in response to rejection letters from peer-reviewed journals. The alternative is to sound arrogant and self-defeating, which I'm sure was not the intent!

schlipity 1 day ago 6 replies      
Forgive me for not being an academic, so maybe this question is moot.

Why isn't there a place that links to a given paper so that discussion about the paper can be centralized? It could also contain links to papers that link to that paper, among them would/could be the failure to replicate information, adding to the discussion. And I don't really mean a topical "this is what's new" site, I mean a historical "This is the paper, and this is what people have said about it." sort of site.

This seems like a fairly elementary idea. The only seeming difficult bits I see are:

a) Getting (legal?) access to these papers.

b) Dealing with a large number of papers (millions?).

c) Authenticating users to keep the discussion level high.

d) Moderating the discussion in a way that doesn't piss off academia (impossible?).

e) Keeping the number of these sites (competition, if you will) low so that the discussion is not fractured between them.

It would seem like one of the "Information wants to be free" sites that host the papers that everyone shares with each other would be a great place to start something like this.

bandrami 1 day ago 2 replies      
There's a psychology journal[1] dedicated to only publishing null-hypothesis results.

[1] http://www.jasnh.com/about.html

arcticfox 1 day ago 2 replies      
So broken. I'm not involved in academia, so the most I can contribute are upvotes here and there, and giving respect to those who push against the current.
skosuri 1 day ago 4 replies      
1. There are many places this could have been published without an importance review, eg PLoS ONE.

2. I think anyone interested in the replication problem needs to read this piece [1] by Peter Walter. As he put it: "It is much harder to replicate than to declare failure."

[1] http://www.ascb.org/on-reproducibility-and-clocks/

mcguire 1 day ago 0 replies      
This seems to be the original work in question: https://www.researchgate.net/publication/8098564_Reading_Acq...
danbmil99 1 day ago 7 replies      
Seems to me this issue is getting to the point where it could become an existential threat to the credibility of science in general. Note how climate-change deniers have recently used these sorts of arguments to challenge the consensus - is it really so far-fetched to argue that perhaps climate scientists are as biased as researchers in areas such as medicine and linguistics?

The paywalled, blind peer-review process seems broken beyond repair. There needs to be a better, more robust method to publish every relevant study that is not utter crankery, and to get some sort of crowd-sourced consensus from researchers with credible reputations.

LanceH 1 day ago 0 replies      
There should be a failure-to-replicate journal. The standards committee should be all about rigor, so that just getting published there would be a demonstration of technique and ability, if not headlines.
harry8 1 day ago 2 replies      
Any journal that refuses, without proper reasoning, to publish a failure to replicate research they originally published should be closed down. That journal should have such a reputational black mark next to it that nobody would want to publish there, and anyone who already had should be at the door with pitchforks and torches for tarnishing the researchers' reputations.

If it was important enough to publish research saying "here's something", then it's important enough to publish properly done research showing "actually, probably it's nothing." By definition. Or it's not science, it's fking marketing, and the journal should be treated with the same scientific reverence we reserve for pepsi-cola advertisements from the 1990s.

cpncrunch 1 day ago 2 replies      
PLoS ONE specifically says they will publish "Studies reporting negative results".
apathy 1 day ago 4 replies      
Put it on arXiv or f1000, for fuck's sake. Who actually believes psychology papers anyway? The vast majority are fishing expeditions, as best I can tell.

When the field starts enforcing minimal standards (as expected for, say, clinical trials, or even genetics studies nowadays) maybe someone will give a shit. Until then people like this guy who actually seek the truth will be ostracized.

arviewer 1 day ago 1 reply      
There should be a Nulled Science Magazine!
guard-of-terra 1 day ago 0 replies      
Some of those reviews are good materials for http://shitmyreviewerssay.tumblr.com/
My condolences, youre now the maintainer of a popular open source project runcommand.io
299 points by donnemartin  1 day ago   127 comments top 20
impostervt 1 day ago 6 replies      
I'm the creator of a just-mildly popular OS project (~700 stars on GH) and I had to give up being the main maintainer. Too many "issues" were just people who couldn't debug their own code, or "issues" where the person would describe some weird edge case but be unable to provide any example code. After enough of those, and getting busy with other things, I just couldn't keep up with it.

Luckily, one of the contributors stepped up and is now the main maintainer. I still read through the issues that come in and he does a great job of responding and keeping cool. I don't know how he does it.

jordigh 1 day ago 0 replies      
The most important thing about releasing free software (open source) is that you don't owe anything to anyone for doing so.

Virtually all free licenses actually codify this. If you read the SHOUTY CAPS part, it says exactly that: no responsibility from the author.

People can come and ask for you to fix something or support them or accept their patches. That's fine, and if you want to engage them, you can. But there is no obligation to do so.

Once I accepted this, the growing pile of bugs/issues/support requests on the Octave and Mercurial bug trackers became a lot less anxiety-inducing. I'll try to help, but if I can't... sorry, it's not my responsibility!

kmfrk 1 day ago 1 reply      
Is there a good reason GitHub doesn't support (issue) "moderator" roles instead of "contributors", which also gives write access? So much could be solved with better moderation tools.
smegel 1 day ago 7 replies      
> Open source produces quantitatively better software

As someone who mainly uses open source software for most of my work, I'm not so sure about this.

fagnerbrack 1 day ago 1 reply      
Yeah, just how I feel; see http://github.com/impress/impress.js and https://github.com/impress/impress.js/issues/435. Luckily the activity doesn't match the number of stars, although it is still in the top 30 on Github.
andy_ppp 1 day ago 2 replies      
There needs to be a filtering of issues down to maintainers.

Github should think about how to build this IMO, it would make things much better if there were levels of involvement:

-1) Asked for help before and NOT valid

-1) Pull request not valid

-1) Noisy user (might have positive karma but often creates irrelevant issues)

0) Not asked before

1) Asked before and valid

2) Asked before and committed fix

3) Pull request valid

4) Pull request merge

5) Complex feature merged

6) Contributor

7) Maintainer

Now, how to reach each of these levels can be discussed, but if you could by default only see one level above and 3 levels below, that would help a lot.

The default count of issues and view of maintainers/contributors on github should not include anything below level 1.

sotojuan 1 day ago 3 replies      
Many have started adding collaborator status to any user that makes a non-trivial commit to both show appreciation and lighten the load.
franciscop 1 day ago 0 replies      
I discovered another way of getting contributors with Umbrella JS [1] (500+ stars), using up-for-grabs [2][3].

In total I'd say it's similar or even a bit more effort per issue; however, many people contribute and sometimes they step up and continue with more things.

Not sure how useful it is for projects that are more monolithic, though; I was lucky with this one, as Umbrella JS has many small modular parts.

[1] http://umbrellajs.com/

[2] http://up-for-grabs.net/

[3] https://github.com/umbrellajs/umbrella/issues?q=is%3Aissue+l...

jordigh 1 day ago 1 reply      
A lot of those 80% of projects on Github without licenses are simply not meant for public consumption. Think things like bash or emacs dotfiles, class notes scribbled into a txt file, or a very personal project that has no hope of working anywhere but on the author's machine. I think people overstate how "post-licensing" people view open source (née free software). For everything important, people still mostly slap free licenses on their projects.
ManlyBread 1 day ago 5 replies      
Question: why do people seem so affected by unpleasant contributors? I mean, no one forces these people to use an open source project, and no one has an obligation to support it at all, yet I see articles like this popping up pretty often. What's the harm in ignoring the unpleasant contributors and focusing on the ones that provide actual value to the project?
Keyframe 1 day ago 0 replies      
I salute you, open source maintainers. You are the heroes we don't deserve. I'm pretty sure I would become a murderer if I had to deal with what you deal with on a daily basis.
makecheck 1 day ago 0 replies      
This could also be viewed as a challenge to develop better tools.

For instance, maybe there is data mining to be done on issues to auto-aggregate them so that you can still make some sense of the list without manual tagging and without relying on each submitter to choose sensible categories.

Maybe there need to be open-source libraries aimed at unclogging issue systems, such as a system for in-app bug reporting.

Maybe issue-tracking systems should auto-upgrade priorities on stale issues, or auto-close them based on lack of activity from the submitter.

cyphar 1 day ago 0 replies      
Can we please stop using the term "open source" when referring to free software? Especially in the discussion of benefits of free software over proprietary software. What you're contrasting here is the bazaar model over the cathedral model (though the book itself is horrible).
kzisme 1 day ago 0 replies      
Another presentation that is in line with the thoughts of this discussion: https://www.youtube.com/watch?v=UIDb6VBO9os
nicolasMLV 1 day ago 1 reply      
Let's hope Github will find a way for users to be able to 'tip' maintainers. I concede it is very hard to find the solution (Who? When? How much?)
tropo 1 day ago 0 replies      
Dealing with users is nothing compared to dealing with Linux distributions.

The distribution will patch your code. You are powerless to stop this. They will add bugs. They will add ill-conceived and/or incompatible features that the users will come to rely on. Fedora adds a -Z option, and Debian adds a -Z option, and SuSE does too, and all of them are in conflict with each other and with the purpose for which you had reserved the -Z option.

Of course, the users will blame you.

orionblastar 1 day ago 0 replies      
There are a lot of beginners who try to learn programming by using GH. They might not know how to make an example, or how to describe the steps to reproduce a bug.

When I worked as a programmer, I had coworkers and other employees I worked with who couldn't tell me how to reproduce a bug or give example code, and I had to train some other programmers in the language to help them learn enough to work with a team. You can't just WONTFIX an issue, and yes, it is harder to work that way.

You cannot block or ban people who are annoying or ignorant in a corporate environment. You have to find a way to help them out by educating them, or just try to fix the bug as best you can even if you can't reproduce it.

I made a debug version of the program that trapped errors, wrote them to a log file, and displayed a custom error message so I could tell what part of the program the error was in. You have to learn how to innovate and work with annoying and difficult people who don't even know they are annoying or difficult. Yeah, it is stressful, but that is the difference between an open source project on Github and working for an employer as an employee or contractor.

lyra1337 1 day ago 0 replies      
I can count 16 wp-cli
Jedd 1 day ago 1 reply      
This is one of those articles that reminds you why we should prefer the term 'free software'.

 > First, a conclusion from the 2015 Future of Open Source survey: > Seventy-eight percent of respondents said their companies run > part or all of its operations on OSS and 66 percent said their > company creates software for customers built on open source. This > statistic has nearly doubled since 2010.
The 2015 survey on this website was responded to by 1500 people (about 0.00002% of the world's population).

They don't provide numbers for the 2010 response set.

 > Second, Nadia Eghbal, who is doing really great research into > the economics of open source, calculated that open source was > worth at least $143M of Instagram's $1B acquisition.
Likely wrong, but my feeling is most HN readers aren't thinking Instagram valuation is the best way to judge the value of free software.

 > I think there are a few reasons for this Cambrian explosion of > open source usage: > > 1. Open source is free to use, which means a company can spend money > on people (aka innovation) instead of software licenses.
Everything I've read in the past 15 years suggests that free software's value proposition isn't founded on capex (in fact, it's probably misleading to look at these numbers).

 > 2. There are now a critical mass of reliable open source > components, which accelerates your product's time to market.
An assertion without evidence. Plus, a lot of us probably thought a critical mass was achieved a long time ago.

 > 3. Open source produces quantitatively better software.
An assertion without evidence, despite the popularity of the opinion.

 > 4. Near and dear to me personally, open source permits > companies to collaborate on common problems without > complicated business agreements.
In my experience some of the most exhaustingly complicated licence discussions have been around the various free (or similar) licences on various lumps of code. At least with non-free licenced code you know nearly precisely where you stand.

 > "Open source" now means two things. > > Clearly, there's the official definition, a permissive > license which grants certain freedoms to the end user.
Clearly? No citation provided. Worse yet, 'open source' was clearly a rebellion by esr against the idea of emphasising the freedoms to the end user.

Eric's related rant at http://www.catb.org/esr/open-source.html is often provided ... but read it critically and you realise there's very little of substance there, other than negating the extant 'free software' position and an appeal to authority fallacy.

 > But when people use "open source" today, they're probably > referring to building and collaborating in public. In fact, > they may not care about the license at all: over 80% of > projects on Github don't have an explicit license.
80% of github projects don't include a licence ... qed ... 'open source' equates to building and collaborating in public?

 > Why are so many people involved in open source? Well, for > all of the business reasons covered before. I also think it's > joyful to get to work with people of a variety of cultures > and backgrounds. Additionally, open source has given me a > sense of permanence to my career, where the job I've taken > from year to year has not.
I'm empathetic, but this sounds like a plea for help.

More importantly it precludes the more common, and nuanced, reasons that people generally provide for why they spend so much time and energy contributing to the free software movement.

The rest of the post feels a bit facebooky, with lots of 3-8 word pithy aphorisms encoded in a 1024x576 pixel image.

ashitlerferad 1 day ago 0 replies      
Hmm, neither of those definitions of "open source" appear to be correct. Great post otherwise.
Rails 5.0: Action Cable, API mode, and more rubyonrails.org
350 points by tenderlove  4 hours ago   115 comments top 20
mhartl 3 hours ago 2 replies      
Looks like now's a good time to mention that the Ruby on Rails Tutorial book has already been updated for Rails 5:


Sales actually just launched on Tuesday (announcement here: https://news.learnenough.com/rails-5-edition-of-rails-tutori...), and you can pick up your copy of the new 4th edition here:


That link includes a 20% launch discount, which expires tonight at midnight PDT.

As with previous versions, the new edition focuses on the core principles of web development, so there isn't much Rails 5-specific material in the update, but I am planning standalone Learn Enough tutorials on things like Action Cable and Rails API (http://learnenough.com/).

jbackus 3 hours ago 3 replies      
I feel bad for Sean Griffin. He spent over a year overhauling the internals of ActiveRecord to add this attributes API. His work dramatically improves coercion and type enforcement for ActiveRecord users. Seems weird for this to only get a non-descriptive bullet point in "other highlights."

Here are the docs if anyone is interested: http://edgeapi.rubyonrails.org/classes/ActiveRecord/Attribut...
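For anyone who hasn't followed that work: the core idea of the attributes API is that a model declares the type an attribute should be cast to, and assignments are then coerced through that type. Here's a framework-free Ruby toy of that idea (the class, the `attribute` helper, and the cast rules here are invented for illustration; this is not the real ActiveRecord implementation, which lives behind the docs linked above):

```ruby
# Toy illustration (not the real ActiveRecord code) of what the attributes
# API provides: declaring an attribute's type means assigned values are
# coerced through a cast step before being stored.
module ToyAttributes
  CASTS = { integer: ->(v) { Integer(v, exception: false) || 0 },
            string:  ->(v) { v.to_s } }

  def attribute(name, type)
    # Define a writer that casts the incoming value, and a plain reader.
    define_method("#{name}=") do |value|
      instance_variable_set("@#{name}", CASTS.fetch(type).call(value))
    end
    define_method(name) { instance_variable_get("@#{name}") }
  end
end

class StoreListing
  extend ToyAttributes
  attribute :price_in_cents, :integer
end

listing = StoreListing.new
listing.price_in_cents = "1000"  # string input is cast to an integer
```

The real API does far more (defaults, custom types, query-value casting), but this is the coercion-and-type-enforcement behavior the parent comment is praising.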

katabasis 6 minutes ago 0 replies      
I know that lots of other languages / frameworks compete these days for the title of "most-cutting-edge", but I love working with Rails. There's a lot to be said for the "stability without stagnation" approach. I come from a design background, did not study computer science, and am usually working as a team of one. Rails lets me leverage very finite amounts of time and theoretical knowledge into working software that is elegant, testable, and comprehensible. It is an amazing piece of technology, and I'm happy to see it's still going strong!
connorshea 3 hours ago 0 replies      
I've been following along with Rails 5 for many months now and I've been tracking progress on updating GitLab in our issue tracker[1].

Feel free to look at the relevant merge requests and use them as guides to upgrade your own apps :) We're still on 4.2.6 since we have a few gems we're waiting on, but I hope to change that in a month or two!

Favorite features are probably the various performance improvements and underlying improvements to the framework, as well as quiet_assets being moved into sprockets-rails.

I also wanted to give a shoutout to BigBinary's Rails 5 series, since it's been great for finding out about new features[2].

[1]: https://gitlab.com/gitlab-org/gitlab-ce/issues/14286#note_42...[2]: http://blog.bigbinary.com/categories/Rails-5/

paws 3 hours ago 0 replies      
Congrats to the contributors and thanks for the hard work pushing this out!

Looking forward to Heroku working out of the box[1] and quiet assets moved to core[2]

[1] https://blog.heroku.com/container_ready_rails_5

[2] https://github.com/rails/sprockets-rails/pull/355

bcardarella 3 hours ago 4 replies      
We've moved all of our backend offerings from Rails to Elixir/Phoenix. Despite some questioning the value of anything below 100ms response times there is a lot of data backing up the idea that Elixir/Phoenix can lead to a more maintainable and more economical solution. I spoke about this recently at RailsConf: https://www.youtube.com/watch?v=OxhTQdcieQE

Don't get me wrong, I think Rails is an amazing technology but it doesn't fit the use cases and demands our clients have today.

themgt 3 hours ago 5 replies      
Looks like a solid and relatively straightforward upgrade from Rails 4.2. It's hard not to feel Rails has become a bit of a slow-moving behemoth, though, with this release three years after 4.0. I've still got a couple of clients using 3.2 from 2012, and things aren't that different.

Smart money at this point seems like a significant portion of the Rails community could begin moving to Elixir/Phoenix over the coming years. The advantages from a language/VM level just look impossible for Ruby to overcome, along with a blub effect kicking in.

jesalg 3 hours ago 3 replies      
This has been long awaited:

 Post.where('id = 1').or(Post.where('id = 2'))

vassy 2 hours ago 2 replies      
By looking at the comments you'd assume a new version of Phoenix got released.
drchiu 1 hour ago 1 reply      
I personally really love using Rails. It's been very productive for me, and over the past several years I've been able to make a living off of it.

I see a lot of comments here about Elixir/Phoenix. Is the performance gain really that big? I currently serve 2-3 million requests per day on Rails on around $200 worth of servers, with at least one database call per request. In defense of Rails, there are so many libraries already built that I can get an app up and running fairly quickly. I really think it's a matter of whether the tool fits the bill.

cutler 1 hour ago 8 replies      
Concurrency aside, why is Elixir preferred over Ruby when it doesn't even have a native array implementation? No, lists and tuples are no substitute, nor are maps with numeric keys as Jose has suggested. If you want an array in Elixir, your only option is Erlang's implementation, which ain't pretty - http://erlang.org/doc/man/array.html. When I raised this issue on the mailing list and on IRC, the response was invariably a defensive "I've never needed arrays", "We use tuples for most things in Elixir" or "Array performance characteristics are difficult to optimise in a functional language such as Elixir". I just find this disappointing.
bratsche 3 hours ago 1 reply      
This is pretty cool. I've kind of moved on to Elixir and Phoenix for my web stuff these days, but it's still nice to see Rails going strong.
albasha 1 hour ago 3 replies      
I am very jealous as a Django developer. They decided to keep Channels as a third-party package after all, maintained by a single guy :(
Fuffidish 3 hours ago 0 replies      
Nice! The Turbolinks 5 presentation was really impressive
hartator 3 hours ago 0 replies      
The big thing seems to be WebSockets via Action Cable. I've just skimmed through the documentation, but it seems solid. Does anyone have a strong opinion on this?
fantasticsid 1 hour ago 0 replies      
I actually don't like Rails' convention-over-configuration school of thought. It makes everything implicit. For any large Rails app, it's difficult to reason about how things work unless you learn all the conventions by heart (and, by the way, these conventions don't seem to be well documented).
fn 3 hours ago 0 replies      
Finally! Congrats to all the contributors!
justinzollars 3 hours ago 1 reply      
I wish I had API mode 4 years ago
desireco42 3 hours ago 0 replies      
A long awaited release is finally here, Yoohoo!

Now, let the job of upgrading begin.

Also, I love how everyone found a feature they've been waiting for and is celebrating it. Rails always brings joy (the pain comes in a few years ;) )

iamleppert 2 hours ago 0 replies      
All my new development these days centers around node + browserify/webpack with a react frontend, or just plain javascript for small projects.

WebSockets would have been a cool addition four years ago. There is little compelling case for new development. Sorry to be so harsh.

Employee 1: Yahoo themacro.com
270 points by dwaxe  10 hours ago   92 comments top 13
hitekker 8 hours ago 4 replies      
Excellent article; I found the section copy-pasted below to be a rather non-trivial, unintuitive, and meaningful lesson:

> Craig : How was it to ride that wave, especially when the bottom fell out in 2000?

> Tim : When things are going well and you're in a growth industry, you don't have to deal with many difficult issues. It's the old cliche: winning solves everything.

> Craig : For sure.

> Tim : It's really true. It solves everything, or, maybe better said, it masks all your mistakes. A lot of the mistakes you make get masked because you receive almost no negative feedback.

> But then the bottom fell out and the board let Tim Koogle go. The upper ranks of management emptied out pretty quick, except for me and the CTO who stuck around. We got a new CEO and set of peers in upper management. Let me just say, I learned a whole lot more about business on the way down than I did on the way up.

overcast 8 hours ago 3 replies      
"I went there with a positive attitude. The work was interesting. It was rewarding and intellectually challenging. But after a few months, I remember looking at my boss, and my boss's boss, and my boss's boss's boss, and saying to myself, 'You know what? I don't want any of their jobs.' I saw how they spent their time and didn't find it interesting or very rewarding. They spent their time managing meetings and office politics."

Exactly my sentiment.

taurath 5 hours ago 0 replies      
I do always love the underdog stories, but then you get to "I had to explain to my parents, who were paying my way through [Harvard] Business School". I realize a lot of people here had their educations paid for or at least got some hereditary assistance. Not sour grapes here - I'm happy with where I am from what I've had. I do hope people see how many more options become open to people though with parental support.
mathattack 7 hours ago 0 replies      
I like how he closes. That there are important roles for non-founders who jump in after seeing that an idea is a good thing.

On one hand, I guess it could be seen as a lack of confidence to not do your own thing. But on the other hand, it could be seen as not letting your ego get in the way of recognizing a good idea. I can see both sides and honestly, I don't know where the truth lies.

For me, my sweet spot is when I can say, "That's a great idea. It's just getting started. Count me in."

bluedino 8 hours ago 4 replies      
>> Come out to Silicon Valley and get a regular 9 to 5 at a place like at SGI [Silicon Graphics], and then you can moonlight with us. And well see where it goes.

That sounds so absurd. Today that would be like, "Hey, come out here and get a job at Instagram, and then after-hours we'll work on our own thing."

JoachimSchipper 9 hours ago 1 reply      
This is cool, but beware of enormous success becoming too salient - most employee #1's will have a less successful ride...
hbhakhra 8 hours ago 0 replies      
"On one hand, I guess it could be seen as a lack of confidence to not do your own thing. But on the other hand, it could be seen as not letting your ego get in the way of recognizing a good idea. I can see both sides and honestly, I don't know where the truth lies."

That's an interesting distinction he makes between the type of person it takes to be an early employee vs. a founder. Both take on huge risks, but as described here it's a very different profile.

AdamN 6 hours ago 1 reply      
Step 1: Go to a good school where you cultivate great connections.
cryptozeus 1 hour ago 0 replies      
Please do audio/podcast of this if possible
cdnsteve 5 hours ago 0 replies      
I miss the old yahoo in its heyday. I remember hearing their radio commercials .. yahoooo! Sounds like a great ride.
daveloyall 6 hours ago 0 replies      

 > [...] the internet was used exclusively for the > non-commercial sharing of information at the time. The idea > of commercializing the internet wasnt accepted by the very > people using the internet. Of course, the number of people > and the demographics of those people were rapidly changing. --Tim Brady, about the creation of the first f*^#&*! banner ad

nodesocket 6 hours ago 0 replies      
Would be interesting to hear how much equity he got as employee #1? What was the vesting schedule and terms?
a_small_island 9 hours ago 3 replies      
>"Craig : Hahaha."

What is the point of including this?

Ways to maximize your cognitive potential scientificamerican.com
276 points by brahmwg  13 hours ago   141 comments top 21
jasonellis 11 hours ago 5 replies      
Here is my Cliff's Notes version of the article's list:

5 Ways to Increase Your Cognitive Potential:

1) Seek Novelty. Openness to new activities correlates with IQ, because those individuals are constantly seeking new information, new activities to engage in, new things to learn, and new experiences.

2) Challenge Yourself. Brain games like Sudoku don't work to increase cognitive potential if you keep playing them. You play them, learn how the game works, then move onto a new challenge.

3) Think Creatively. This doesn't mean "thinking with the right side of your brain." It means using both halves of your brain to make remote associations between ideas and switching back and forth between conventional and unconventional thinking (cognitive flexibility) to generate original ideas appropriate to the activity you are doing. Like thinking both inside and outside the box when trying to solve a problem.

4) Do Things the Hard Way. GPS as an example. You may use GPS because you have a poor sense of direction. Using GPS will make it worse because you aren't giving your brain a chance to learn and build its ability to navigate. Same thing with auto-correct/spell check. You can't spell anymore because you rely on software to fix your mistakes.

5) Network. Whether on social media or in person, this gives you exposure to different ideas and environments that you otherwise wouldn't be exposed to. It allows you opportunities to practice the previous 4 objectives. Knowing more people gives you the chance to tap into more collective knowledge and experience.

FuNe 12 hours ago 3 replies      
"Efficiency is not your friend when it comes to cognitive growth. In order to keep your brain making new connections and keeping them active, you need to keep moving on to another challenging activity as soon as you reach the point of mastery in the one you are engaging in. You want to be in a constant state of slight discomfort, struggling to barely achieve whatever it is you are trying to do."

Then working in IT (aka being in a constant state of noob-ness) is making you ever smarter.
CuriouslyC 12 hours ago 6 replies      
One of the most consistent ways research has demonstrated to increase connectivity in the brain is to learn new physical skills. Controlling the body in space appears to be particularly good at stimulating the growth of new synapses (and possibly new neurons as well, though the research is not conclusive here outside the hippocampus). Yoga is a good start. Modern dance, breakdancing, capoeira and gymnastics are all excellent if you're slightly more athletic.

Beyond that, practicing thinking in different ways really helps your brain develop. One thing that most people neglect is geometric/mechanical intelligence. Get some 3D puzzles, and once you get really good at them, start building simple machines. If you never got good at math, trying to pick up some advanced mathematics can be a good exercise as well.

hyperpallium 6 hours ago 0 replies      
> once the "training" stopped, they went right back to their previously low cognitive levels... not to create a lasting change.

This article criticizes previous methods for lacking enduring effect, but does not claim enduring improvement for any of the promoted methods (including for the boy with PDD-NOS, and dual n-back) nor revisit the issue. On the contrary, it later claims that on-going training is required. This is not "lasting change".

This article is written enthusiastically rather than scientifically. It reminds me of "In Search of Excellence", that had sensible, intuitively appealing advice, but whose supporting data turned out to be fabricated.

Also, quoting Einstein is a red flag. He wasn't a polymath (unless you count several areas of theoretical physics as wide learning).

Still, it's interesting, and what more can you expect from popsci Scientific American?

agarden 10 hours ago 0 replies      
First of all, let me explain what I mean when I say the word "intelligence". ...I'm talking about increasing your fluid intelligence, or your capacity to learn new information, retain it, then use that new knowledge as a foundation to solve the next problem, or learn the next new skill, and so on.

And when you define intelligence this way, it turns out that the best way to increase your intelligence is to practice learning new things.

But what if you defined intelligence as depth of insight instead? It would seem that were one to define it that way, dropping new skills as soon as the novelty wears off would be counterproductive.

robbiep 12 hours ago 2 replies      
I thought dual n-back had been discredited as a method of increasing cognitive performance. I know there must be some experts on here - would you care to comment?
atemerev 12 hours ago 5 replies      
I can relate to that. I am not super-intelligent, I have ADD, and generally consider myself to be lazy and distraction-prone.

However, for some unknown reason, I have excellent working memory, which allows me to perform feats. Multiple-choice exams? I can prepare for anything in a few hours. Learn Scala (and another 5-6 programming languages) in a few months? Easy! I don't use password managers, as I remember all my long passwords. And credit card numbers. And phone numbers. This multiplies my intelligence quite significantly.

If only I could be consistently productive...

jobvandervoort 13 hours ago 3 replies      
The 5 ways listed are:

1. Seek Novelty

2. Challenge Yourself

3. Think Creatively

4. Do Things The Hard Way

5. Network

dom2 13 hours ago 0 replies      
> In order to keep your brain making new connections and keeping them active, you need to keep moving on to another challenging activity as soon as you reach the point of mastery in the one you are engaging in.

Mastery may be too strong of a word here but the intention is definitely clear. It seems to me that certain activities are more suited to naturally force novelty on someone. Musicianship comes to mind, as when one finishes learning a piece, they can advance to a more challenging one, which would be considered 'novel'.

taurath 4 hours ago 0 replies      
As someone who previously fit the diagnostic criteria for PDD-NOS as a child and no longer does - I wonder how much of the described effect comes from a potential, sparsely documented trait of HF autism spectrum disorders that starts a person at a lower cognitive level but has more potential than others to overcome such a handicap?

I'd love to see more study in this - at age 15 (whilst heavily medicated) I had the social skills of a 9 year old, generally because of the lag of having to find processes that worked for me where neurotypical kids had a naturally good environment to learn these things. Since striking out and finding my own path I've grown leaps and bounds and many people I respect as having good social skills and emotional intelligence call me charismatic.

My hypothesis is that autistic kids /require/ a rational framework with which to work in dynamic situations, but while young do not have enough well-developed/healthy cognitive maps or experience to achieve a workable one until later. Couple that with low expectations and special treatment - necessary to stave off active pain but eventually turning into a crutch - had I stayed on medication/done what the doctors/teachers/parent said I have no doubt I would probably be on some sort of disability or at the very least not have the skillsets that have given me success today.

All that said, I'm certainly a firm believer of being able to grow cognitively at any age. There's just a hell of a lot of inertia that is very easy to get into - habits die very very hard and require a lot of effort to overcome. When you don't have the ability to do a hard reset and move away, get out of the space you're in it gets harder.

tatool 12 hours ago 0 replies      
As the article is pretty old, I suggest looking at a more recent discussion of the topic: http://www.nature.com/nature/journal/v531/n7592_supp/full/53...
SNvD7vEJ 11 hours ago 2 replies      
So plowing through many different games on e.g. Steam, and just playing each game for a short period (not trying to excel or improve your scores) before moving on to the next game, could somewhat satisfy #1 and #2 (novelty + challenge)?
neovive 9 hours ago 0 replies      
This fits well for anyone working in web development. The quote: "You want to be in a constant state of slight discomfort, struggling to barely achieve whatever it is you are trying to do" explains how I feel whenever I'm starting out with new frameworks, languages and tools. The webdev world is in a constant state of flux. This year is a great example for me, as I'm in the process of learning: ES6, Typescript, Angular2, RxJS and Webpack--I'm always in a state of "slight discomfort".

I just have to avoid using "Google" and "StackOverflow" to parallel the author's experience of travelling in Boston without GPS, but I don't think I'm ready for that yet.

mcguire 11 hours ago 0 replies      
Hypotheses concerning intelligence from this article:

1. It is correlated with short term memory.

2. It is anti-correlated with experience. "Efficiency is not your friend."

3. Technology affecting cognition (think of a map or a calculator) acts as a crutch to reduce the required intelligence for an activity. Just like physical technology.

4. Intelligence is correlated with social skills.

basseq 8 hours ago 0 replies      
By the way, this is why Lumosity is generally regarded as "crap" by the scientific community (and why they were fined $2M for deceptive advertising[1]). Any "improvement" you see in playing Lumosity's games isn't improvement in mental acuity, but efficiencies in repetition (#2).


koolba 12 hours ago 3 replies      
Since when has Scientific American resorted to listicles?
1024core 11 hours ago 2 replies      
My problem is that whenever I encounter something that requires serious brain power, I start feeling sleepy. Anyone else feel this way?
panglott 12 hours ago 1 reply      
It is one thing to improve the IQ of rapidly-growing children who have developmental disorders, and quite another to improve the fluid intelligence of adults.

Brain training games don't boost IQ http://www.vox.com/2016/6/22/11993078/brain-training-games-d...

dc2 9 hours ago 0 replies      
> While Einstein was not a neuroscientist, he sure knew what he was talking about in regards to the human capacity to achieve.

This line gets to me because it paints a neuroscientist in an unqualified light. This kind of implicit trust breeds pseudoscience through inflated egos.

trentmb 10 hours ago 3 replies      
> Novel Activity > triggers dopamine > creates a higher motivational state > which fuels engagement

Is there anyone else that doesn't get this reaction?

I usually just feel tired and then ennui sets in.

ome6a 11 hours ago 2 replies      
Well, I can tell you this... I took tests a long time ago, and since then I've had many contacts, because my IQ is extremely high. I don't socialize much; there are only a few people with whom I have spent longer stretches of time, and I have noticed that, from the beginning of our friendship until now, their way of thinking has changed dramatically. Sometimes I don't even like this... I feel like a battery charging others for nothing.
Dunning-Kruger and other memes (2015) danluu.com
241 points by ikeboy  3 days ago   143 comments top 22
jpatokal 3 days ago 4 replies      
"The less someone knows about a subject, the more they think they know." I think that's an exaggeration and understood by most people to be one.

What is, however, absolutely clear from the original study is that incompetent people have a highly inflated assessment of their own abilities: in all four experiments (!), people whose actual performance was around the 10th percentile rated themselves at the 50th-70th percentile. This is perfectly in line with e.g. Urban Dictionary's definition[1]:

"A phenomenon where people with little knowledge or skill think they know more or have more skill than they do."


[1] Chosen intentionally because this is a "popular" source, not an academic one.

mevile 3 days ago 4 replies      
> The pop-sci version of Dunning-Kruger is that, the less someone knows about a subject, the more they think they know.

Author's take on Dunning-Kruger is a strawman. I haven't seen that version be a "meme". I also dislike this single word rejection, the calling it a "meme", of how people talk about DK, it's like name calling or something. Unwarranted and arrogant dismissal. It's one thing to be wrong, it's another to be wrong and then also haughty about it. I feel like most people that I've seen bring DK up understand what the implications of it are. My favorite thing I've read about it is that the less you are competent in something the less you are able to gauge competence in that something.

> In two of the four cases, theres an obvious positive correlation between perceived skill and actual skill, which is the opposite of the pop-sci conception of Dunning-Kruger. A plausible explanation of why perceived skill is compressed, especially at the low end, is that few people want to rate themselves as below average or as the absolute best.

In one sentence he's dismissing someone else's take on it as pop-sci, then offering his own similarly silly take. Oh wait, he showed some charts. I did like seeing that people with more skill saw themselves as better than people with less skill, but conjecture on what people want to think of themselves? That's pop-sci.

ternaryoperator 2 days ago 1 reply      
This article touches on one of the highest-payback practices I've developed over the last few years: going to the original sources. I am constantly rewarded by this; typically I find that the downstream analysis misunderstood some aspect, latched on to only a fraction of the whole story, or willfully misrepresented it by speculating on absent data or by inserting a plausible narrative for items that fit a private agenda.
danbruc 3 days ago 4 replies      
The static typing example seems weird to me. I did not read the entire linked summary but only the first five and last three papers discussed there and they mostly hint at at least some positive effect for static typing but the author of the summary essentially just dismisses the results for various reasons. I am not saying that all the judgments in the summary are necessarily wrong but overall that summary seems a pretty strange basis for saying that static typing is worth nothing. And the author of the submission is also the author of the summary.
tikhonj 3 days ago 0 replies      
The type system question is different from the psychological examples: the problem is not people misinterpreting evidence, but that reliable empirical evidence simply does not exist. Papers on the matter are sparse, completely uneven and full of methodological issues.

Personally, I'd argue that "statically typed" vs "dynamically typed" does not even make sense as a single question. There's more difference between Haskell and Java than between Java and Python, and an experiment comparing two identical languages with and without static typing won't tell us much beyond those two languages. (I recall seeing at least one paper that did this; it's probably worth reading, but not for making broader conclusions.)

Moreover, there simply isn't a compelling way to measure most of the things that programmers actually care about like expressiveness, productivity or safety. Existing measures (like counting bugs, lines of code over time, experiments on small tasks often performed by students... etc) are limited, full of confounding variables and quite indirect in measuring what people actually care about. I've looked through various studies and experiments in software engineering and while some are compelling, many are little more than dressed-up anecdotes or "experience reports".

It's especially hard to study these things in the contexts that matter. What we care about is experienced programmers who've used specific technologies for years applying them in teams, at scale. What's easy to experiment on is people who've just learned something using it on tiny tasks. Observational studies that aim at the broader industry context are interesting but hard to generalize because of confounding variables and difficulty of measurement.

In the absence of this sort of evidence, people have to make decisions somehow, and it's not surprising that they overstate their confidence. We see this in pretty much everything else that doesn't have a strong empirical basis like questions around organizing workplaces, teams and processes. Just look at opinions people have about open offices, specific agile processes or interview procedures!

Another side to the question is that languages inevitably have a strong aesthetic component, and talking about aesthetics is difficult. But you're certainly not going to convince anyone on aesthetic matters with an experiment or observational study, any more than you can expect to accomplish anything like that in the art world!

dahart 2 days ago 0 replies      
Something I didn't realize before is that, meme or not, Dunning-Kruger tested perception vs. skill on basic tasks: things where, if someone asked me, I might easily misjudge my own ability, since they're things I'd feel I should know how to do.

Ability to recognize humor isn't what I'd even call a skilled subject matter, and it's not something we learn in school or normally get exposed to graded metrics or comparisons against other people.

These aren't highly skilled subjects like Geophysics or Law or Electrical Engineering or Art History. I'd be willing to bet it's a lot easier to both self-identify lack of ability and admit lack of ability in a subject the more skilled it is.

stepvhen 3 days ago 1 reply      
I like to think SMBC[1] presents a more accurate graph of confidence vs knowledge, but I don't know enough to really speak about it.

[1]: http://www.smbc-comics.com/?id=2475

dahart 3 days ago 0 replies      
John Oliver did an awesome bit recently on scientific studies and how popular conceptions of them, especially media portrayals, completely distort the results.


ywecur 3 days ago 0 replies      
I'm curious as to why nobody here has commented yet on the OP's claims about "Hedonic Adaptation". I've been told by various sources that this is the way the brain works, even in my recent biology class, where the teacher would say that "dopamine sensitivity" was to blame.

It seems like a really big deal to me if he's right, and could really change your outlook on life.

musesum 2 days ago 0 replies      
D-K is one of my favorite patterns. This is the first time I've seen these charts. Some questions about methodology:

The x-axis shows quartile, not score results. If the range was between 80 and 90%, then all participants were accurate in assessing their ability as "above average". [EDIT] I doubt that's the case, but would rather see scores.

How was the self declared expertise in "humor" judged? That seems pretty subjective. Maybe the subject is hilarious to his or her friends.

Did the subject know what the examiner's definition of "logical reasoning" is? Was that street logic or discrete structures? What if the subject was able to glance at the test questions, and only then answer the question as it pertains to the test? How would the results change?

Grammar is idiomatic. In some places "over yonder" is contextually concise. Other grammatical forms may never occur. How is self-assessment over tacit expertise judged? Maybe another glance at the test?

Maybe Dunning-Kruger shows that there is a disconnect in how examiner and subject interpret a question? Maybe it is a matter of saving face in saying that you're above average? Maybe, because the subjects are college students, they actually are above average? Or maybe these are above-average participants who aren't quite sure of the question, so they say that they're above average?

mwfunk 3 days ago 0 replies      
The idea that there's an inverse relationship between how much someone thinks they know about a subject and how much they actually know is pretty timeless. When people refer to Dunning-Kruger I take it as shorthand for that phenomenon rather than a reference to results from a specific study done in 1999.

I may be misremembering, but when I first saw references to it on Slashdot, etc., it was from people reacting in amusement that someone was able to quantify and measure what seemed like such a commonly experienced aspect of human behavior. If someone had done an academic study on the increased likelihood of friends having scheduling and availability issues around weekends in which one friend was moving to a new house but was too cheap to get movers despite having plenty of money to do so, it would've gotten a similar response. :)

Since then, it's just been convenient having a name ("Dunning-Kruger", that is) for a concept that was widely understood but didn't have shorthand for referring to it. I'm not surprised that the study itself wasn't definitive and airtight.

irrational 2 days ago 1 reply      
One thing I never see in the income/happiness studies is - Is this just for a single person, or is it for a family? And if for a family, then what size is that family? I can see being happy earning 75k/year and being single, but not so much if I have eight other family members to support with that same salary. Is there some sort of "number of people being supported on this income" adjustment to the income/happiness studies?
MPSimmons 2 days ago 0 replies      
This article had more assumptions in it than examples of assumptions it was complaining about.
cowpig 3 days ago 1 reply      
> Apparently, there's a dollar value which not only makes you happy, it makes you as happy as it is possible for humans to be.

> If people rebound from both bad events and good, how is it that making more money causes people to be happier?

I saw graphs that proved happiness causes money. What did you see?

disclaimer: I am trying to be snide on the internet. What I mean to say is that I was confused by the use of the word "cause".

mwexler 2 days ago 0 replies      
tommynicholas 3 days ago 0 replies      
Dan Ariely and his team have done some great work on the "happiness" meme, and they generally support the popular notion that there are massively diminishing returns to accruing wealth. Yes (as this post shows), happiness does continue to increase as you accrue wealth, but there are other things that you can do - including giving money AWAY - whose returns on happiness and satisfaction do not diminish. The point is, if you take a long view on life and what to focus on, getting to a certain level of financial stability should take a high priority, but becoming incredibly wealthy should not.
coverband 3 days ago 0 replies      
Is this a clever ruse to test whether we'll read the cited sources? ;^)
thomasahle 3 days ago 1 reply      
All the income happiness data seems to stop shortly after 64k. Hardly evidence that there is no plateau.
sklogic 2 days ago 1 reply      
Was ok up until type systems. Please stop citing this pathetic "empirical study" already, it's totally unscientific.
59nadir 2 days ago 2 replies      
I love that the people who come into the comments to argue about the benefits of static typing seem to have totally missed the point that the post argues that you need evidence, not just beliefs.
Camillo 2 days ago 1 reply      
This comment breaks the HN guidelines. Please post civilly and substantively, or not at all.

Note how much better it would be with just the first paragraph.

We detached this comment from https://news.ycombinator.com/item?id=11994296 and marked it off-topic.

dkarapetyan 2 days ago 0 replies      
But type systems do help. You don't have to go far to notice the shortcomings of any large enough project written in python, ruby, javascript, etc., whereas a project of equivalent scale written in c#, typescript, java, dart, etc. is much easier to maintain and debug. So given enough discipline and enough good programmers, I agree that there isn't much difference, but in practice this is not the case, and having the compiler double-check your work helps a lot.
Simple Ways of Reducing the Cognitive Load in Code chrismm.com
280 points by christianmm  2 days ago   195 comments top 39
iamleppert 2 days ago 5 replies      
"Use names to convey purpose. Don't take advantage of language features to look cool."

I can't say enough about this. Please write code that is easy to read and understand: not the most compact code, not the most "decorated" or "pretty" code, and not code that's neat because it uses that giant list expression or ridiculous map statement that's an entire paragraph long.

Similarly what bugs me is when I receive a pull request where someone has rewritten a bunch of code to take advantage of new language features just for the hell of it and that did not lead to an increase in clarity.

I guess it's in vogue now to add a lot of bloat and complexity and tooling to our code. "Use the simplest possible thing that works." Tell that to the Babel authors with their 40k files...

jorgeleo 2 days ago 4 replies      
"How can a new developer just memorize all that stuff? Code Complete, the greatest exponent in this matter, is 960 pages long!"

First... do not memorize, but internalize: understand why they work, and when to apply which one. Use them to solve the problem of your code being read in 6 months by a serial killer who knows your address.

Second... 960 pages. If you really want to advance the craft, if you really want to become a better developer, then you don't measure by the number of pages (<sarcasm>what a sacrifice, I have to read</sarcasm>), you measure by the amount of gold advice in the book. 960 pages is a lot of gold.

Third... if you read the whole blog and understood the value in following the Cliff notes to the Cliff notes that this post is, then you should be looking forward to reading the 960 pages.

jonhohle 2 days ago 6 replies      
His second example to "modularize" a branch condition is not functionally equivalent in _most_ in-use programming languages:

  valid_user = loggedIn() && hasRole(ROLE_ADMIN)
  valid_data = data != null && validate(data)

  if (valid_user && valid_data)
Is not equivalent to:

 if (loggedIn() && hasRole(ROLE_ADMIN) && data != null && validate(data)) 
His version will always execute `validate()` if `data` is not null, regardless of whether the user is logged in or has the appropriate role. Since we don't know the cost of `validate()`, it could be an expensive operation that short-circuiting would have avoided. It also seems somewhat silly (and I know it's just a contrived example) that a validation function would not also perform the `null` check itself instead of leaving that up to the caller.
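The difference is easy to demonstrate with a small sketch (Python here for neutrality; `logged_in` is hard-coded to return False to stand in for a user who fails the first check, and the `hasRole` check is omitted for brevity):

```python
calls = []  # record which checks actually run

def logged_in():
    calls.append("logged_in")
    return False  # this user is not logged in

def validate(data):
    calls.append("validate")
    return True

data = {"x": 1}

# "Modularized" version: both right-hand sides are evaluated eagerly,
# so validate() runs even though the user check already failed.
valid_user = logged_in()
valid_data = data is not None and validate(data)
if valid_user and valid_data:
    pass
eager_calls = list(calls)

# Inlined version: `and` short-circuits, so validate() never runs.
calls.clear()
if logged_in() and data is not None and validate(data):
    pass
lazy_calls = list(calls)
```

With an expensive `validate()`, the inlined form does strictly less work whenever an earlier condition fails.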

majewsky 2 days ago 6 replies      
I really like the advice from "Perl Best Practices" to code in paragraphs. Sometimes, a large function cannot be broken up usefully, because a lot of state needs to be shared between the different parts, or because the parts don't have a meaning outside of the very specific algorithm.

In that case, code in paragraphs: Split the function body into multiple steps, put a blank line between these and, most importantly, add a comment at the start of the paragraph that summarizes its purpose.

Now when someone else finds your function, they can just gloss over the paragraph headings to get an idea of the function's overall structure, then drill down into the parts that are relevant for them.
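As an illustration only (the report function and its steps are invented), a paragraph-styled function might look like:

```python
def monthly_revenue(orders):
    # Keep only completed orders; the later paragraphs assume
    # cancelled orders have been filtered out.
    completed = [o for o in orders if o["status"] == "completed"]

    # Group each order's total under its month.
    totals_by_month = {}
    for order in completed:
        totals_by_month.setdefault(order["month"], []).append(order["total"])

    # Collapse each month's totals into a single revenue figure.
    return {month: sum(totals) for month, totals in totals_by_month.items()}
```

A reader can skim the three comment headings to get the overall structure before drilling into any one paragraph.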

collyw 2 days ago 3 replies      
Get a decent high level architecture, good, consistent database design and you don't write anywhere near as much application code. Start hacking about using one field for two purposes or having "special cases" and everything starts to get messy. These special one off cases will involve adding in more code at the application level increasing overall complexity. Repeat enough times and you will code a big ball of mud.

Instead people argue about number of characters per line or prefixing variable names with something and other such trivialities. (These things do help readability, but overall I think they are quite minor in comparison to the database design / overall architecture - assuming you are writing a database backed application).

ViktorasM 2 days ago 5 replies      
Stopped reading at "Place models, views and controllers in their own folders". There's no worse way to organize your code than to classify it by behavior type: "here are all the daos", "here is all the business logic", "here are all the controllers". You add a feature as small as a resource CRUD and scatter its pieces across the whole code base.


reikonomusha 2 days ago 2 replies      
I've noticed recently that especially in online discussions, the term "cognitive load" is used as a catch-all excuse to rag on code that someone doesn't like. It appears to be a thought-terminating cliché.

There's definitely room to talk about objective metrics for code simplicity, which are ultimately what many of these "cognitive load" arguments are about. But cognitive load seems to misrepresent the problem; I think it's hard to prove/justify/qualify without some scientific evidence over a large population sample.

With that said, the article presented fine tips, but they seem to be stock software engineering tips for readable code.

taspeotis 2 days ago 1 reply      

 How to reduce the cognitive load of your code (chrismm.com) 304 points by ingve 90 days ago | 232 comments

escherize 2 days ago 3 replies      
I'd like to add one: let your tools do the work for you. It may seem like a pain to learn the tooling behind what you do, but once you internalize it, it becomes a superpower.

An example is that I use Clojure Refactor Mode (with CIDER) for emacs. A trick (and treat) that a lot of Clojure code uses is the arrow macros: -> and ->>. Clojure Refactor Mode has thread-first, thread-first-all, thread-last, thread-last-all and unwind. Since I've committed those to my long term memory, I can just call thread-last-all on something like:

 (reduce * (repeat 4 (count (str (* 100 2)))))
and get:

 (->> 2 (* 100) str count (repeat 4) (reduce *))
This is so huge, because many times changing the levels of threading makes reasoning about the code so much easier.

valine 2 days ago 2 replies      
As a junior dev I can confirm the advice about junior devs is very accurate. An anecdote: I recently started working with a team on their half completed web app. They had so many dependencies, and tools for managing dependencies, it took me far longer than it should have to become productive. It's obviously not my place to question which technologies they use, but it can be frustrating.
dreamsofdragons 2 days ago 0 replies      
This article is a mix of good advice, terrible advice, and conflicting advice. Statements like "Avoid using language extensions and libraries that do not play well with your IDE." are foolish. Pick your language primarily on the best fit of the language for the problem space, second, pick a language that you're comfortable with and knowledgeable about. Picking the wrong tool simply because it works well with another tool, is horrible advice.
robert_tweed 2 days ago 1 reply      
This is a nice post on the subject of readability, though I mostly like that the title does not use the often misused word "readability" at all. I now prefer to talk about understandability instead, which usually boils down to cognitive load.

This is one of the things that Go has got very right in its design, though it is often badly misunderstood. Advocates of languages like Ruby often refer to the "beauty" of the code while ignoring the fact that many of the techniques employed to achieve that obscure the meaning of the code.

The main problem I have with the term "readability" is that it encourages writing of code that reads like English, even if it obscures the details of what the code does. In the worst cases, the same set of statements can do different things in different contexts but that context may not be at all obvious to the reader.

One of the first books I read when I was learning C years ago talked about avoiding "cutesy code". That was particularly in reference to macro abuse, but it's always stuck with me as a good general principle. It applies equally to excessive overloading via inheritance and many other things that make it hard to tell what a given statement actually does, without digging around in sources outside of the fragment of code you are reading.

In many ways the art of good programming is, aside from choosing good names for things, maintaining the proper balance between KISS and DRY.

userbinator 2 days ago 0 replies      
On the other hand, maybe increasing the cognitive load is beneficial to everyone in the long term: http://www.linusakesson.net/programming/kernighans-lever/
randomacct44 2 days ago 1 reply      
My current pet-peeve:

- If your code deals with values where the units of measure are especially important and where they may change for the same type of value in different contexts, PUT THE UNITS USED IN THE VARIABLE NAME!

I work primarily with systems that talk money values to other systems, some of which need values in decimal dollars (10.00 is $10.00) and some that need values in integer cents (1000 is $10.00).

Throughout our codebase this is often referred to helpfully as 'Amount', unfortunately :( So much easier when you can just look at the variable.... 'AmountCents' -- this naming convention alone would prevent some bugs I've had to fix.
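One way to back the naming convention up is to make conversions explicit at the system boundary; a hypothetical sketch (the helper names are invented):

```python
def dollars_to_cents(amount_dollars):
    # round() guards against float artifacts: 0.29 * 100 is 28.999...,
    # and int() alone would truncate it to 28.
    return int(round(amount_dollars * 100))

def cents_to_dollars(amount_cents):
    return amount_cents / 100

# The unit lives in the variable name, so a mismatch is
# visible right at the call site.
amount_cents = dollars_to_cents(10.00)  # 1000, i.e. $10.00
```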

Which points to something deeper that I've come to realize. Your code speaks to you, in the sense that when you come back to your own code 6 months later, there's a certain amount of "I don't know what this is doing" that you can chalk up to just not having looked at it for 6 months, but there is also an amount where you have to say "no, actually I didn't write this code clearly at the time". When evaluating my own progress that's a big metric I use - on average, how am I understanding my own code later?

What I try and watch out for in myself is when I find myself not making something explicit in the code because of domain knowledge that I have. The 'Amount' example is a good one of this. The domain knowledge is that I know this particular system wants values in decimal dollars -- I mean it's totally OBVIOUS isn't it? Why would I bother writing 'Cents' at the end for something so obvious?

Yet, even referencing domain knowledge is a higher cognitive load than just reading 'Cents' in the variable name. Not to mention the next engineer that comes along -- it's likely they won't have that bit of 'obvious' domain knowledge.

I would vote both 'Code Complete' and 'Clean Code' as two must-read books for any programmer.

TickleSteve 2 days ago 2 replies      
...and another:

Use of whitespace (vertical and horizontal) to group and associate code with related parts.

Its a trick borrowed from graphic design, but negative-space works really nicely.

jboy 2 days ago 1 reply      
This article is a good start, but I found it much too light on detail. Each section ended just when I was ready for it to dive into details! For example, in the final section "Make it easy to digest":

> Using prefixes in names is a great way to add meaning to them. It's a practice that used to be popular, and I think misuse is the reason it hasn't kept up. Prefix systems like hungarian notation were initially meant to add meaning, but with time they ended up being used in less contextual ways, such as just to add type information.

OK, great, I agree -- but what are some suggestions/examples of good prefixes? What are some examples of bad prefixes that we should avoid?

To illustrate the sort of detail I'd like to read, here is an example of my own of good/bad method names that would be greatly improved by judicious use of prefixes.

My standard go-to example for ambiguous naming is std::vector in the C++ STL. There is a member function `vec.empty()`: does this function empty the vector [Y/N]? Answer: no, it doesn't. To do that, you instead use the member function `vec.clear()`. There is no a priori way to know the difference between `empty` & `clear`, nor what operation either performs if you see it in isolation. You must simply memorize the meanings, or consult the docs every time.

In the C++ style guides I've written, I've always encouraged the prefixing of member function names with a verb. Boolean accessors should be prefixed with `is-`. The only exception should be non-boolean accessors such as `size` (which has its own problems as a name). Forcing non-boolean accessors to be preceded by a verb invariably results in names like `getSize()`, where `get-` adds no useful information, clashes with the standard C++ naming style for accessors, and really just clutters the code with visual noise.

Using these prefixes: (depending upon your project's preference for underscores or CamelCase)

  .empty -> .isEmpty() or .is_empty()
  .clear -> .makeEmpty() or .make_empty()
As an additional benefit, the use of disambiguating prefixes also enables the interface designer to standardize upon a single term "empty" to describe the state of containing no elements in the vector, rather than playing the synonym game ("empty", "clear", etc.). The programmer should not need to wonder whether "clear" empties a vector in a different way.
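For what it's worth, the same verb-prefix convention sketched as a toy Python class (hypothetical, using the names proposed above):

```python
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def is_empty(self):
        # `is-` prefix: unambiguously a query, never a mutation.
        return not self._items

    def make_empty(self):
        # Verb prefix: unambiguously a mutation, and it reuses the
        # single term "empty" instead of a synonym like "clear".
        self._items.clear()
```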

tmaly 2 days ago 1 reply      
I own Code Complete, but I felt I got better value out of the Clean Code book combined with the Pragmatic Programmer.

I did find some value in Code Complete, but it is a little too long for my tastes. The naming and abstract data structure sections were probably my favorite parts of that book.

cauterized 2 days ago 0 replies      
I happen to find the principle of single level of abstraction does more to reduce cognitive load than all these tips put together.
ninjakeyboard 2 days ago 0 replies      
My biggest pet peeve is when people use pattern names in class names. You don't need to call things strategies if you're composing in behavior. Just call it the behavior.

  var weapon = Sword()
  weapon.attack(up)
  weapon = Bow()
  weapon.attack(left)

Often the pattern's implementation drifts a bit from the by-the-book implementation and it ends up being something ALMOST like the pattern but it's not quite anymore. Or it's more. Then the pattern name is still stuck there and it causes more confusion than it helps to clarify.

batguano 2 days ago 0 replies      
I'm surprised no one has cited this:


drtz 2 days ago 0 replies      
All code does not need to be easily understandable by a novice developer. Minimizing cognitive load is certainly a good thing, but using overly simple grammar for a complex task leads to unneeded verbosity.

When writing software, as with any form of writing, you should keep your audience in mind as you write.

donatj 2 days ago 0 replies      
Fluent interfaces make code a joy to write and a huge burden to later review and reason about, particularly when the object you are interacting with changes mid way through on certain calls. They are something I loved when I was younger, but now doing code reviews they are the bane of my existence.
Roboprog 2 days ago 0 replies      
This article does a good job of encapsulating the prevailing Java "ignorance is strength" (worse/longer is better; abstraction is bad) paradigm.

When are the right-tailers (in the bell curve) ever going to let you use new features to make your code shorter? Why learn complex concepts like "multiplication", when "tally marks" will do?

I'm afraid I'm with Steve Yegge on this one, in regards to dislike of the "tools to move mountains of dirt" aspect.


hoorayimhelping 2 days ago 0 replies      
>Junior devs can't handle overuse of new tech.

heh, I'm a senior dev and I have trouble with the overuse of new tech. It's hard for me to learn when there are too many variables in play; early on, it's hard to know which bit is doing what.

unabst 2 days ago 0 replies      
Maximize order. Order is the lubricant for information. And if you back your reasons with guiding principles (aka philosophy) the specifics will remain obvious as well as sort themselves.

These are the only ways to reduce cognitive load and they apply to any situation where one needs to understand something. After all, code is about understanding.

Anecdotally, the specific methods mentioned in the article that seem most valid stem from the guiding principle of maximizing order, which drastically reduces the cognitive load of the contents of the article.

markbnj 2 days ago 2 replies      
Good advice and worth reading especially for younger devs. With respect to...

>> Prefix systems like hungarian notation were initially meant to add meaning, but with time they ended up being used in less contextual ways, such as just to add type information.

Hungarian notation was pretty cumbersome to read, actually, and I think the main reason it fell out of use is that editors and IDEs began to make type and declaration information available for symbols in a consistent way, so it was no longer much of an advantage (and perhaps a disadvantage) to use a manual convention that was usually applied inconsistently.

vinceguidry 2 days ago 0 replies      
I started doing this a year ago and it really helped me to maintain code. My new goal is to be able to read others' code, make it more readable, and fix the problem just as fast as it would have been without slight, constant refactoring.

I want to run a team so I can teach the whole team to work this way. Then I'll handle all the complex refactorings, which I really enjoy doing, while they greenfield new features. If they can write code this way, then I'll be able to refactor it without having to study it to figure out what it's doing.

fchopin 2 days ago 0 replies      
For the most part, I agree with this. The biggest problems I've had at work have been due to constructs that were a neat idea but just add to the complexity of figuring out the application. Throw in a bunch of business-specific engineering terminology that is not defined anywhere for the development team, and it becomes a wicked PITA to learn.

However, the one-liner example and the chained English-sounding methods, I think might be taken the wrong way. Both can be done well.

athenot 2 days ago 0 replies      
Along the same lines, use the right language for the abstraction you are dealing with. In a server environment, I prefer to have modular services linked together with some message queue.

In web projects, this is what has kept me coming back to CoffeeScript: we found it less distracting visually, given the kind of code we were writing (heavy call-back oriented, lots of chained methods).

lenzai 2 days ago 0 replies      
""" Storing the result of long conditionals into a variable or two is a great way to modularize without the overhead of a function call."""

Ridiculous !Not only caring about overhead is misleading, but introducing local variables is against refactoring principles.

andy_ppp 2 days ago 0 replies      
Or rather more simply - do code reviews and decide on which of these things you want to include and teach everyone about:

a) the agreed way

b) other code they haven't worked on

in the process. Finally if you know someone else will be reviewing your code you'll produce better code in the first place.

kuharich 2 days ago 0 replies      
qaq 2 days ago 0 replies      
"Using MVC? Place models, views and controllers in their own folders" This works on smaller projects on large projects it's often easier to group things by component
Waterluvian 2 days ago 0 replies      
"clever code isn't." Is what I try to teach all who will listen.

Good code should never be illegible to newbies. And if they can read your code, they can learn way faster.

dingleberry 2 days ago 0 replies      
If you don't have to name things, you have zero chance of getting a bug caused by naming things.

That's why I love anonymous functions: they free me from name overload.
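A toy illustration of the point (Python; the misnamed helper is invented): a name can drift out of sync with behavior, while an anonymous function has no name to lie with:

```python
# Named helper: the name promises one thing, the body does another.
def increment(x):
    return x + 2  # bug hiding behind a stale name

# Anonymous: the behavior is all there is to read.
result = list(map(lambda x: x + 2, [1, 2, 3]))
```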

stuaxo 2 days ago 0 replies      
Have to disagree on not placing null first in comparisons; it's a good way to avoid bugs.
0xdeadbeefbabe 2 days ago 0 replies      
I believe maintainable style is less important than knowing how the thing behaves.
thaw13579 2 days ago 1 reply      
These seem like good rules to follow, but there's nothing to suggest that they reduce cognitive load. To make that claim, you need experiments testing brain function or at least people's behavior...
indubitably 2 days ago 1 reply      

 Keep your personal quirks out of it. Don't personalize your work in ways that would require explanations.

I like taking advantage of variables to compartmentalize logic.

We built voice modulation to mask gender in technical interviews interviewing.io
303 points by HaseebQ  1 day ago   334 comments top 49
benkuhn 1 day ago 13 replies      
I would really have liked to see them check whether the gender-blinding actually worked. Even if someone's voice is modulated, they may have gendered behavior patterns that could influence someone's performance ratings. I wouldn't be surprised if I could guess the gender of a voice-modulated person pretty accurately from other cues.

To combat this, they could have asked the interviewer during the performance assessment whether they thought the applicant was definitely male/probably male/unsure/probably female/definitely female. Then you could use voice modulation as an instrument for perceived gender and get a better estimate of the true effect of perceived gender when controlling for actual gender.

Hope they do something like that for the next round of experiments!

xienze 1 day ago 13 replies      
I honestly feel like this sort of thing is a waste of time.

Let's pretend someone invents the perfect "gender bias-free" system: no names, no faces, voices are flawlessly transcribed on-the-fly so you can't pick up on speech patterns. What if it works perfectly and you still end up hiring more men than women? I think exercises like this may end up giving people answers they don't want to hear.

So, stop wasting time with stuff like this. If you're bothered by the fact that there's more men in your company than women just hire more women until you get your desired ratio. You've already made your minds up ("there's too many men here, clearly we're biased against women during the hiring process") and I doubt anything will make you think differently, so just go ahead and fix the imbalance.

Jemaclus 1 day ago 1 reply      
Hi Aline! Great read, as always.

I had a couple of thoughts while reading:

1) One of the things you mentioned was that men whose voices were modulated to sound like women tended to score better than women whose voices were modulated to sound like men. Do you think that's because the men actually performed better than women, or do you think it's possible that interviewers have a lower set of expectations for female interviewees? Or some other reason?

My hypothesis at this point is that, all other things being equal, it seems like if an interviewer expects you to do poorly because of an unconscious bias against women, and you actually do perfectly average, the interviewer might rate you higher simply because you surprised him. Does that make sense?

2) How did your partners (Mattermark, Yelp, etc) react to your findings? Will this change how they hire in the future or how they use interviewing.io?

3) Given enough time and resources, do you think it's really possible to eliminate gender bias in interviews (through techniques like this, or otherwise), or do you think the best we can do is minimize it?

4) Piggybacking off #3, do you do follow-ups with the companies after you've placed people? I'd be interested to see whether women who did well in the interview actually do well at the companies long-term -- perhaps those biases extend beyond the interviewing process, as well!

Keep up the good work! :)

smokeyj 1 day ago 4 replies      
I'd be more interested in quantifying productivity and seeing if there's a systematic discrepancy between productivity and gender. Either the market is accurately pricing talent or there's a delta to be exploited.

I think if a paper was released tomorrow that said "female programmers 30% undervalued as observed by double blind coding challenge" head hunters would fix that problem overnight.

ranko 1 day ago 1 reply      
Gender blinding in orchestral auditions (which usually involves screening the performers from sight and thus is pretty much foolproof) has been shown to improve the percentage of women who are hired.

See, for example, 'Orchestrating Impartiality: The Impact of "Blind" Auditions on Female Musicians', Claudia Goldin, Cecilia Rouse: http://www.nber.org/papers/w5903

gravypod 1 day ago 3 replies      
Why not just use text? There are many speech characteristics of gender that will never be found and masked by modulation.

Simple placements of words, parts of speech referring to the self, and other things are all much better clues to gender then voice frequency and pitch. For example, when speaking to some people who I know who are either naturally very high or low pitch, or are trans-gendered, there is a lot more information that can be contextually extracted from the content of their speech.

Really, just do text-based interviews. It will allow people to revise what they are thinking and really allow them to mask their gender.

randyrand 1 day ago 1 reply      
Somewhat off-topic, but: laws forbidding sexual and racial discrimination in hiring are a waste of time for everyone involved.

Do I really want to work alongside someone who "isn't racist" or sexist just because the law forbids it? No, of course not. I'd rather work next to people who aren't racist or sexist regardless of whether the law forbids it. I want to work at companies where people actually want me. How difficult is that to understand? =(

Most people want less racism/sexism. But this is not the way to do it. I'm in favor of removing this law and letting discriminators be upfront about it. So that I can knowingly avoid them without wasting hours and potentially years of my life.

mmastrac 1 day ago 1 reply      
I had a demo from Aline on this last week after seeing it on Twitter and reaching out. It was a really interesting experience - it masks the voice pitch but leaves the person's characteristic way of speaking (is there a word for that?) intact.

You can also access the recordings after-the-fact to hear how you did. I think this is valuable for candidates who want to improve their interviewing skills.

FWIW, I heard it both ways - my voice as female and her voice as male. The pitch shifting was very convincing both ways.

Jemmeh 1 day ago 3 replies      
>"Contrary to what we expected (and probably contrary to what you expected as well!), masking gender had no effect on interview performance."

I've seen quite a few studies pointing out this "confidence gap" now. They have shown women to be less confident, and it shows in their actions and speaking patterns. I still hear these speaking patterns even in the voice-modulated versions -- statements sounding like questions with the upward inflection at the end. Vocal fry. Lots of "uhm, well, erm" flustered speech. So it doesn't surprise me that even with a modulated voice these women still sound unsure of themselves and thus they still didn't interview as well.

As women there are some actions we can take. It's good to be aware of these speaking patterns and try to break them--but ultimately there are so many of these patterns that show that the only real solution is to try to internalize confidence. It's easier said than done. And we can't ignore the fact that there are reasons outside of ourselves for why women are not as confident as men. It's not just an internal issue. While we can take some actions for ourselves, we should still be looking at the external reasons that caused the issue in the first place. Why are large amounts of women less confident compared to men? Let's tackle those issues, too.

When women try to be confident, it can get frowned upon. A man being assertive is a leader; a woman is bossy. Women are quicker to hit walls for just how confident they're allowed to be before they get the "bossy/bitchy" label thrown at them. The article below makes good counterpoints to the confidence gap and provides sources for some studies regarding women and confidence. And I've seen women call each other bitchy/bossy, too. This is something that society as a whole needs to work on.


balls187 1 day ago 2 replies      
> it's not about systemic bias against women or women being bad at computers or whatever. Rather, it's about women being bad at dusting themselves off after failing, which, despite everything, is probably a lot easier to fix.

This conclusion (imo) should have been bolded. I disagree that getting women to "dust themselves off" is an easier fix than if there were inherent bias against women in interviewing.

My theory is that this is tied to (In the US) men being more tolerant of taking risks, which leads to being more tolerant of failure, which leads to being able to more easily "dust oneself off."

I'd imagine if you looked at women in this study who played competitive sports, they would be less likely to leave the platform.

Risk-aversion might also contribute to women being less likely to ask for a promotion, raise, or negotiate salary.

I disagree that "dusting yourself off" is an easy fix, for the same reason I think "pull yourself up by your bootstraps" is not an effective way to elevate people out of poverty. There are more cultural biases to overcome than a simple behavior change.

elcapitan 1 day ago 2 replies      
I want applicant-side voice modulation to mask my insecurity and non-native language skills.
minimaxir 1 day ago 1 reply      
> Contrary to what we expected (and probably contrary to what you expected as well!), masking gender had no effect on interview performance

The main issue with gender and hiring is unconscious bias on the interviewer's end, so voice modulation makes sense as a tool to avoid that bias. Per the grading criteria, candidates are judged on skill, problem solving, and communication. I'm not sure how voice modulation would affect the interviewee on those three criteria, which makes the results of the experiment not surprising.

gmarx 1 day ago 0 replies      
The result does not surprise me at all. I realize this was not the kind of study that would pass rigorous review. I am surprised the author thinks this is a surprising result. The kind of people who do technical interviews have been taught since childhood that the system is biased against women and that they need to look out for this bias. If anything, I would think the occasional super tech woman would be enough of a surprise that the average interviewer would have an unconscious bias in favor of her.
hasenj 1 day ago 0 replies      
Honestly, the modulated voice sounds like a gay man, a transgender person, or someone who is physically male but trying hard to speak like a female. It's probably not the pitch itself but the manner of speaking.
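hasenj's intuition (that it's the manner of speaking, not the pitch, that leaks through) can be made concrete with a toy numpy sketch. This is my own illustration, not anything from interviewing.io's pipeline: a naive pitch shift moves the fundamental frequency, but pauses, fillers, and sentence-final rises live in the timing and contour of speech, which pitch shifting alone does not touch.

```python
import numpy as np

SR = 16000  # sample rate, Hz

def dominant_freq(signal, sr=SR):
    """Frequency (Hz) of the strongest FFT bin -- a crude pitch estimate."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return freqs[np.argmax(spectrum)]

def naive_pitch_shift(signal, semitones):
    """Shift pitch by simple resampling. This also changes duration; a real
    modulator uses a phase vocoder so that timing (and with it pauses,
    fillers, and sentence-final rises) is preserved -- which is exactly the
    prosody that still leaks through."""
    ratio = 2.0 ** (semitones / 12.0)
    new_len = int(len(signal) / ratio)
    new_idx = np.linspace(0, len(signal) - 1, new_len)
    return np.interp(new_idx, np.arange(len(signal)), signal)

t = np.arange(SR) / SR                  # one second of synthetic "voice"
voice = np.sin(2 * np.pi * 220 * t)     # 220 Hz fundamental, a typical female F0
lowered = naive_pitch_shift(voice, -5)  # five semitones down, toward a male range

print(dominant_freq(voice), dominant_freq(lowered))  # roughly 220 Hz vs 165 Hz
```

The fundamental moves, but everything layered on top of it (pacing, hesitation, intonation shape) is untouched, which is consistent with listeners still hearing gendered speech patterns in the modulated recordings.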
onetwotree 1 day ago 0 replies      
So I don't think that hiding factors that (we think might) influence biased people is the right way to address bias.

Let's say a woman uses this voice modulator in an interview and gets a job that she wouldn't have otherwise. Now she's going to work in the same shitty, sexist environment that would have denied her the job had they known she was a woman. What problem have we actually solved here?

I think that the problem isn't so much that women can't get jobs in tech as that they don't want to work in an industry that is (correctly or not) perceived as being a sexist sausagefest.

We have to change attitudes and culture to address bias, not simply put on blindfolds.

mxfh 1 day ago 1 reply      
Can you pick your own filter?

Would go for Laurie Anderson's Voice of Authority anytime. https://www.youtube.com/watch?v=YajQNIAY78k&feature=youtu.be...

Used to great extent here: The Cultural Ambassador - Live https://open.spotify.com/track/3uM0L8r6lJGaU54v9RjwJK

yummyfajitas 1 day ago 0 replies      
It would be great if the author posted the raw numbers, so we could check the statistical analysis for ourselves. I'm concerned by the low number of women in the sample - was it high enough for the study to have sufficient statistical power?
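For anyone who wants to eyeball the power question, here is a back-of-the-envelope sketch using the standard closed-form sample-size approximation for comparing two proportions. The effect sizes are hypothetical, loosely modeled on the 1.4x advancement gap quoted in the article's introduction; none of these numbers come from the study itself.

```python
from math import sqrt, ceil

# Standard normal quantiles for the usual defaults
Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_BETA = 0.8416  # power = 0.80

def n_per_group(p1, p2, z_a=Z_ALPHA, z_b=Z_BETA):
    """Per-group sample size for a two-proportion z-test
    (classic normal-approximation formula)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical rates: men advance 40% of the time, women 28.6% (a 1.4x ratio)
print(n_per_group(0.40, 0.286))
```

A few hundred interviews per group is the rough bar for reliably detecting a gap of that size, so a sample with only a few dozen women could easily miss a real effect.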
humbleMouse 1 day ago 0 replies      
This is a nice idea but seems like political correctness taken to the extreme.
teamhappy 1 day ago 0 replies      
Note to self: Write screenplay about Equilibrium-esque future where everybody wears burkas with built-in voice decoders.
breitling 1 day ago 0 replies      
Planet Money did a great podcast on this recently [1]. It's pretty cool how it works.

[1] http://www.npr.org/sections/money/2016/04/22/475339930/episo...

hharnisch 1 day ago 1 reply      
This is interesting because it starts to peel another layer of the problem. Yes, there are still cues that can give away gender, but this seems another step closer to making the interview process a science. The next logical question from here is "why are women more likely to quit after 1-2 bad interviews?"
fractalsea 1 day ago 0 replies      
In the intro you say that data from >1000 interviews shows:

> men were getting advanced to the next round 1.4 times more than women

> men [...] had an average technical score of 3 out of 4, as compared to a 2.5 out of 4 for women

The conclusion of your experiment was

> masking gender had no effect on interview performance

So the question is: why do women perform worse (by the above metrics) even if the interviewer does not know they are a woman?

Unfortunately you spend the remainder of the article showing that women are more likely to quit after bad interviews, and hypothesise that this is due to lower self confidence. This is interesting and all, but it does nothing towards explaining the discrepancy you described in the introduction!

rottyguy 1 day ago 1 reply      
I always felt they should do something like this for politicians during a race. No names, age, gender, political leanings, etc. Just interviews and debate.
alarge 1 day ago 0 replies      
I guess the results here would surprise you if you assumed that hiring bias was primarily the result of an explicit bias against {gender, race, etc.}. While there is certainly this class of bias, I personally haven't found as much of it in the technology field as there is purported to be in general society.

What I have found, instead, is a more subtle bias that goes something like this: "I view myself (or that guy over there) as the model employee. I'm looking for someone who thinks and acts like we do -- not just on our level, but views problems the same way, is likely to arrive at the same solutions, has the same energy level, etc.". I believe this attitude forms the fundamental underpinning of both company culture and hiring bias. I've seen it work successfully over the short term (particularly in startups where you're trying to build a small team of really smart people that can build something quickly), but I think it is counter-productive over the longer term.

When I'm hiring, I simultaneously try to answer three questions: (1) Is this person capable of doing the job I'm hiring them for? (or grow into being capable), (2) Is this someone I could work with and who could work with my team?, and (3) What unique life experiences or perspectives does this person have that they could bring to the team and improve our culture or decision making?

What I've found is that some folks over-focus on (1) and sometimes (2) without adequately recognizing the value of (3). I personally believe that you don't remove bias by trying to legislate against it - you remove it by getting people to desire the alternative - diversity.

NTDF9 6 hours ago 0 replies      
Here's an alternate hypothesis. Maybe, just maybe, men and women are different? Maybe men and women have different strengths and aptitudes?
j0e1 1 day ago 1 reply      
> Maybe tying coding to sex is a bit tenuous, but, as they say, programming is like sex: one mistake and you have to support it for the rest of your life.

Hilarious! But I wonder if it is true.

ajmurmann 1 day ago 0 replies      
Are there actually studies that show that the interview process rather than the pipeline is the issue? At my previous job we ended up making an offer to every female candidate who made it to the phone screen. This wasn't because we preferred them because of their gender but because they were really good. Many men never even made it past the phone screen. We still ended up with a majority of men on the team.
kafkaesq 1 day ago 0 replies      
Which (by design) would unfortunately mask character, warmth, and authenticity, also.
zw123456 1 day ago 0 replies      
I think this is a great effort and I applaud their efforts. There are also a lot of very good points made in all the comments below, summary:

1. Simply changing the frequency of the voice does not necessarily mask gender; other intonations are still present and may make the person sound gay or otherwise still feminine.

2. There are other socialization issues as well as biological/chemical that make men more aggressive than women.

One of the key things I have learned over the years is an appreciation for the different way that women approach problem solving than men, both technical problem solving as well as organizational. Both have merits, but a blend can be very powerful; a diverse work group has intangible value.

I think rather than attempting to create a male avatar for women this lesson needs to be embraced more widely.

alansmitheebk 1 day ago 5 replies      
This is ridiculous. At some point we need to face up to the reality that most women in the US are just not interested in working in technology. There is no evil conspiracy to keep women out of tech. Maybe high school guidance counselors and parents don't encourage women to go into tech. Maybe there is someone to blame, but it's not the tech industry's fault.
dghughes 1 day ago 0 replies      
There's more to a person than the tone of their voice; men and women speak differently, using different phrases, words, and cadence.
bitL 1 day ago 0 replies      
So, is it like interviewing vocodered Smurfs?
NetTechM 1 day ago 0 replies      
I liked this article; well written and thought provoking. I am curious whether this attrition effect plays a large part in graduation rates from STEM programs.
nxzero 1 day ago 0 replies      
If gender bias is the issue, focus on bias detection and mitigation; not masking gender, which to me sends the wrong message.
vonklaus 1 day ago 0 replies      
Hiring is a complex process that is extremely subjective across role, organization, and sub-sector of the tech industry. To the extent this result is correct within the population of interviewing.io, it is in fact quite positive on many levels, if unfortunate on others. Specifically, again accepting the methodology used here, interviewers are unbiased in their reviewing.

If we consider simply this subset however, I suspect there are several factors which contributed to this result:

* As indicated by others, it is unclear whether this actually modulated the voice successfully AND whether other factors of influence leaked gender to the interviewer.

* interviewing.io may simply attract men who are better than women.

* The current climate provides a lot of resources and support for women entering tech. Many organizations have made a large push to hire women, and thus the top- and mid-tier women (who are significantly less numerous than their male counterparts) are already hired into organizations and not applying here.

* places where candidates learn about interviewing.io could differ based on gender. A contrived example being males learning about it on HN, while female counterparts learning about it through a short part-time coding bootcamp.

* there was an experience gap or significant skill gap between genders. This was alluded to above.

* women on interviewing.io are generally worse programmers or perform worse in technical interviews.

* not enough data for statistical significance.

This is a pretty interesting result and could actually be a positive thing for interviewing.io. It is possible they are objectively evaluating candidates and it is simply a marketing problem, which they can adjust for.

There are also some non-trivial differences between men and women which likely matter even in this context. Amy Cuddy does an amazing TED talk (and a longer one as well) about poses and cues and their effects on perception.

If the idea of this investigation was to make a larger observation about the industry, it would be interesting if they could correct for experience and skill level, possibly by something completely objective like HackerRank, for example. If separated into 3 skill bands, it would be interesting to compare the actual interview results across similarly skilled populations. To correct for bias, it may be useful to tell interviewers the candidates will be anonymized, modulating both voices to try to make both genders sound alike as a single neutral voice, ideally while still allowing for pitch and intonation.

Be interested to see a follow-up as the site receives more candidates and exposure and grows its organization, etc.

blisterpeanuts 1 day ago 0 replies      
"...it appeared that men who were modulated to sound like women did a bit better than unmodulated men and that women who were modulated to sound like men did a bit worse than unmodulated women..."

The author admits it's not statistically significant, but nonetheless it somewhat makes sense, if you take human nature into account.

If there's any bias in the technology field, it's probably that males are assumed "smarter", but women are still liked and desired.

So, if a female is not answering questions as sharply, but is masked as a male, the bias is to deduct extra points. If a male is answering Q's while masked as a female, the interviewer is pleased and awards extra points.

Just my theory, but it seems to fit. Again, though... a very small sampling.

andrewclunn 1 day ago 0 replies      
I wonder if the voice modulation had any discernible impact on communication ability across the board.
vegabook 1 day ago 0 replies      
So the point here, hopefully, is to allow truly talented female coders to overcome the bias against women generally, which exists because of the fallacy of averages.

Hopefully, the idea is not to push the erroneous view that women are on average equally good at coding as men. That is objectively untrue.

I can, with enthusiasm, support a technology which helps the (smaller number of) women who are genuinely good coders to shine through the bias. I cannot accept any technology which seeks to mask the reality that in this particular domain of modern life (coding), men are usually better performers than women. Because that would be regressive.

thaumasiotes 1 day ago 0 replies      
How are they getting attrition numbers? I signed up with interviewing.io many months ago, and never heard from them again. Sure enough, their website still advertises "join the waiting list" rather than "join". Any significant attrition should leave them completely devoid of users... right?
yanilkr 1 day ago 1 reply      
Sweet that some one is trying new things with ideas.

But Oh boy, Life does not revolve around tech interviews. Make peace with it. An old school psychologist would observe us and say that the people obsessed with tech interviews faced an emotional toll of rejection during early career and are trying to get over it.

Google, Facebook and other companies are like parents of the tech kids and the kids are seeking some kind of approval or validation from them via passing their technical interviews. You are not cool enough if you don't crack our interview.

It's possible we are doing something very wrong here.

koolba 1 day ago 1 reply      
Interviewer: "Hi, my name is Roger. I'll be interviewing you today."

Interviewee (modulated voice): "Hi Roger. Nice to meet you. My name is Alice."

Hmmm ... Speech hints aside, this may not work 100%.

virgil_disgr4ce 1 day ago 0 replies      
I was excited about this until I realized the sample size is incredibly small and there are few controls :/
sevensor 1 day ago 0 replies      
As we keep learning to our dismay, you can't solve social problems with technology. I've got to give them credit for trying, and for reporting the result, but I'm not at all surprised.
grb423 1 day ago 9 replies      
Is anybody studying the reasons behind the huge gender disparity in roofing, welding or kindergarten teaching? I think there is a tremendous gender bias in those and other fields that is going unstudied, because nobody cares or because those fields aren't as cool or important.
anon2016 1 day ago 3 replies      
So it masks gender by making everyone sound "like a dude" according to the videos. That's not masking gender. Make everyone sound like a robot and the blog post title will be more accurate.
awesomepantsm 1 day ago 0 replies      
So their conclusion is that the people who do worse use the product they happen to sell less?

Sounds like bullshit to me.

jarmitage 1 day ago 1 reply      
Amazing how the author believes they have proved that there is 'no systematic bias' with This One Simple Trick!

Edit: sorry for the tone, but I just find this whole article bait. The headline is bait. The writer acknowledges the limitations of their study (basically that it doesn't prove much) but then makes a whole bunch of extrapolations anyway, and makes those guesses the meat of the article rather than investigating why their methods didn't work or how they could be improved or what would be required to produce conclusive evidence.

myohan 1 day ago 1 reply      
Cool, but it's sad to know that this is required to maintain gender equality.
AndreyErmakov 1 day ago 2 replies      
So we all realize technical interviews serve no practical purpose and should be quietly abandoned, but instead of looking into the future and discovering better ways of identifying talent, people keep inventing stuff that makes the torturing mechanism even more sophisticated so that the suffering can be prolonged.

I'd like to see the people behind this project apply their technical skills to something more useful to the industry and the society in general.

>> to get to pipeline parity, we actually have to increase the number of women studying computer science by an entire order of magnitude

Any woman who's gone through that process will likely want to get another career, one where people are treated with more respect. And I suppose many men are having the exact same thoughts.

The more you mock your talent pool, the more actively that talent is running in the opposite direction, just to get out of this mess.

Startup incorporation checklist github.com
265 points by hberg  2 days ago   110 comments top 30
swampthing 2 days ago 3 replies      
It's worth mentioning that incorporation by itself doesn't get you very far in terms of protecting against personal liability. It's really just the first step in company formation. You'll want to appoint directors and officers as well. And to protect against departures, IP issues, etc., you'll also want to issue stock with vesting to founders, and have everyone enter into IP agreements.

We've automated all of this at Clerky - you can do everything completely online using our software. We do a ton of company formations. If anyone has any questions on the topic, feel free to ask!

mcorrand 2 days ago 1 reply      
Some other resources that have served me well - not for incorporation per se, but for the next few steps in setting up a healthy corporation:

- Docracy.com has some good templates (contributed by some incubator I can't recall) for bylaws, ip assignation, founder terms, terms of service and privacy policy and customer contracts, etc.

- Listing a phone number with one of the large online directories helps with various verifications (including EV SSL if you need it and facebook page)

- insureon.com to shop for insurance.

- Get bookkeeping software right away, and keep it up to date.

Edit: the docracy docs are by Techstars: https://www.docracy.com/userprofile/show?userId=30 and Orrick, a law firm: https://www.docracy.com/p/10881/orrick

gjkood 2 days ago 2 replies      
Setting up a corporate entity (any kind) in California gives you a mandatory minimum $800/yr bill from the CA Franchise Tax Board.

So if you are planning to incorporate in CA, be sure you are serious and don't mind an automatic $800 on the expense side without anything on the revenue side.

Please correct me if I am wrong on this.

Also, does anyone know why this is the case? I haven't seen this in a few other states where I have resided. What does this buy my company in terms of benefits? Was there some ballot proposition that got passed to levy/enforce this fee?

ttmarek 2 days ago 5 replies      
Has anyone here tried Atlas (https://stripe.com/atlas)? I'd love to hear your thoughts on it.
nodesocket 2 days ago 1 reply      
All this legal tape and bureaucracy is really an impediment to a young startup, especially if you're trying to bootstrap. I was personally hit with the California LLC fee of $800 a year (I was three years behind), and one day I logged into my Silicon Valley Bank account and saw $3,300 withdrawn by the CA Franchise Tax Board. Not the happiest day of startup life.
jahewson 2 days ago 0 replies      
Warning: there's a really important point missing from this list - pay taxes in California. There's a minimum "Franchise Tax" of $800 plus a percentage of California-derived revenue. All entities doing business in California (be that selling to customers based there or having employees there) must register with the state and pay this. The first year is free though - the minimum is waived.
a3camero 2 days ago 1 reply      
HN has a global audience. Might be worth noting that this advice is aimed at Americans. This could be very bad advice for non-Americans.

I'm a lawyer who deals with startups in Toronto. Quite a few people ask about how to set up a Delaware corporation because they've seen online that that's how you create a startup.

emilyfm 2 days ago 0 replies      
Although I appreciate this list is aimed at a narrow case (incorporation in Delaware with operations in California), I'd still pay a little more attention to the "choose a name for your business" stage to avoid surprises.

At the very least, check the name isn't a registered trademark in a similar business (at uspto.gov), and that there isn't another significant company with the same name globally (opencorporates.com is a good starting point, as well as Google).

You don't want to limit international expansion later by picking a name that is a well-known company already. For example, when Burger King expanded to Australia in the 1970s the name Burger King was already trademarked locally, so they had to rebrand as Hungry Jack's.

Of course also do a Google search to see if words in the name are widely used, perhaps meaning something unsuitable in another language.

Finally, although maybe becoming less important, it's useful if the .com domain is available (either unregistered, or at a reasonable price).

Better to do all this before spending the money on incorporating with a name you might need to change later.

archagon 2 days ago 3 replies      
I've heard that for small businesses in California, incorporating in Delaware for tax reasons doesn't legally make a whole lot of sense because California will want to tax you anyway. Is that the case? If so, why are so many people still set on incorporating in Delaware?
spriggan3 2 days ago 1 reply      
Nice, thanks. I'd like to see such lists or guides for other countries, such as Canada or European ones, if that makes sense. Of course it doesn't replace a good lawyer, but it would at least give founders an idea of how hard it is.
usrbintaco 2 days ago 0 replies      
You can save on the incorporation services by downloading forms directly from a given state's Secretary of State site. Most states have clear instructions on how to fill everything out and where to submit the paperwork and payment. Incorporation service companies are great, but if you're trying to bootstrap a startup every dollar counts.

Additionally, if there will be multiple owners/members of the new entity I strongly suggest buy/sell agreements (basically a contract stating who can sell their interest and when and to whom - saves a lot of headache later to figure these things out up front).

tacos 2 days ago 1 reply      
I see no mention of sales and use taxes. Welcome to Hell.


joshuaheard 1 day ago 0 replies      
This checklist is a fine first step in completing the external formalities of corporate formation. However, it is only the first step. There is a whole second step of issuing stock, appointing directors, holding a first meeting, electing officers, adopting bylaws, and a whole slew of internal items that should be on the checklist as well. Most of these are forms that could be handled by a layperson, but you won't get any corporate liability protection without them. There are also annual meetings and such that are required to maintain the corporation. If you are serious about DIY incorporation, do some research; you can probably find good manuals out there with forms and everything.
guiseppecalzone 2 days ago 1 reply      
As a quick plug, you can sign your incorporation docs with HelloSign (http://www.hellosign.com) and fax Delaware with HelloFax (http://www.hellofax.com).

(Co-founder of HelloSign and HelloFax)

harrisreynolds 2 days ago 1 reply      
Cool checklist. Would you mind if we input this checklist into our tool? - https://www.processd.com/

We have a way to publish checklists like this so users can actually check things off.

See our Show HN post for more details! :-)

eonw 1 day ago 0 replies      
I always incorporate in my home state when starting a new venture; it's much easier to manage.

All assets can later be sold to another LLC or corp as the need arises, but doing all these steps in advance can just cause undue headache and complexity.

Also, California is about the least small-business-friendly state in the union. And AFAIK, having a Delaware corp doesn't save you all that much in taxes anyway. So glad I no longer have to deal with that state.

Complex corp setups also confuse some investors, certainly if you are raising capital from those not savvy in the plethora of ways corps are set up to circumvent taxation and game the domicile game. They basically think you are a scammer (been there).

Just my $0.02.

homero 2 days ago 0 replies      
Can someone please tell me what to check under management for a ca llc?


I've been told I'm a member. I'm also the only person. Should I be the manager? Is "all members" plural, meaning I shouldn't use it? Thanks.

_RPM 2 days ago 3 replies      
If my company won't be making money or revenue for quite some time, what does that mean for me in terms of the IRS? Will we get penalized for NOT having revenue?
donatj 2 days ago 0 replies      
Where is CAAS "Company as a Service" as a business? Particularly for single person companies, I could see this being a very useful service.
louprado 2 days ago 1 reply      
InCorp is referenced in the non-free tools section but not in the recurring fees section. Is this an oversight?

Isn't there a DE requirement that you have an agent(?) operating within the state if your HQ is in CA? InCorp currently charges me $100+ a year to do this.

merrywhether 2 days ago 3 replies      
How relevant is this to a single person wanting a corporate front to make small amounts of side money from a website or app? Is an in-state LLC more preferable in that situation? Is any type of incorporation even necessary?
ErikAugust 2 days ago 1 reply      
How about shutting down a company? Anyone have any tips on that?
ex3ndr 2 days ago 2 replies      
How can we open a checking account at SVB? They refused to open an account for us, saying that we are not a tech VC-backed startup, but we are! Or are there any other possibilities?
smnscu 2 days ago 0 replies      
[Sorry for off-topic] Could someone point me to a similar resource for starting a startup in Germany, preferably in English?
pbreit 2 days ago 0 replies      
Someone should do Stripe Atlas for US startups.
Scarbutt 2 days ago 1 reply      
As a foreigner (non-US), why create the company in the U.S. instead of somewhere like Ireland, which has lower taxes?
arturventura 2 days ago 1 reply      
I would love it if anyone extended this information for foreigners (in my case, EU) wanting to open a US-based company.
MarlonPro 2 days ago 0 replies      
Any additional requirements if one of two partners lives outside the US?
_RPM 2 days ago 1 reply      
This is great. Just what I need. My non-technical co-founder and I are starting a company, and I want to make it a real company before we start meeting 3 days a week.
pstrazzulla 2 days ago 0 replies      
What about trademarks?
WireGuard: next generation in-kernel modern VPN wireguard.io
362 points by Aissen  2 days ago   151 comments top 24
zx2c4 2 days ago 10 replies      
Wow, I launched this 10 minutes ago and somebody already put it on Hacker News. Spectacular!

I'm the author of this and would be happy to answer any questions you have.

hueving 2 days ago 4 replies      
Why is this in the kernel? It seems to me like a failed separation of concerns compared to running this in userspace. There shouldn't be anything magical requiring this level of coupling.
teddyh 2 days ago 1 reply      
Note: This project, despite having the text of the GPL 2 in a file named COPYING, is not actually licenced under GPL 2. The actual copyright statement found in source files is "Copyright 2015-2016 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved." It does not reference the GPL. This means, legally, that nobody can do anything with it.

If the author actually meant to licence it under the GPL 2, he should read and follow the instructions in the GPL itself, more specifically the section at the end titled "How to Apply These Terms to Your New Programs".

asymmetric 2 days ago 1 reply      
Just wanted to point out that the author is also behind the excellent password manager pass: https://www.passwordstore.org/.


ksec 2 days ago 2 replies      
I know asking for MIT/BSD or Apache 2 may be a bit of a stretch.

But any chance of LGPL? Or would we need a complete reimplementation for BSD?

amluto 2 days ago 1 reply      
A couple questions:

How do you deal with MTU? OpenVPN's handling is particularly bad [1].

Is the whole protocol in-kernel or just the data plane?

For admins who want to provision a large number of clients, do you ever plan to implement some kind of certificate hierarchy?

[1] https://community.openvpn.net/openvpn/ticket/375
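For context on why tunnel MTU matters: every encapsulated packet pays for the outer IP and UDP headers plus the tunnel's own framing, so the inner MTU must shrink accordingly. A rough sketch of the arithmetic (the 32-byte tunnel overhead is an illustrative assumption, not a figure taken from the WireGuard spec):

```python
# Rough tunnel-MTU arithmetic. Header sizes are the standard IPv6/UDP
# ones; the per-packet tunnel overhead is an assumption for illustration.
LINK_MTU = 1500          # typical Ethernet MTU
OUTER_IPV6 = 40          # outer IPv6 header (worst case vs. IPv4's 20 bytes)
OUTER_UDP = 8            # outer UDP header
TUNNEL_OVERHEAD = 32     # e.g. type + receiver index + counter + 16-byte AEAD tag

inner_mtu = LINK_MTU - OUTER_IPV6 - OUTER_UDP - TUNNEL_OVERHEAD
print(inner_mtu)
```

If the tunnel interface's MTU is set any higher than this, inner packets get fragmented (or dropped, if DF is set), which is exactly the class of problem the OpenVPN ticket above describes.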

vonnyfly 1 day ago 1 reply      
Have you tried this on the Chinese internet? As we know, China has a more complicated network, and UDP packets suffer heavy loss.

If it works better than ocserv, this VPN will be a milestone.

nikolay 2 days ago 1 reply      
Is there a way to connect from macOS to the VPN?
Retr0spectrum 2 days ago 3 replies      
Can someone explain why being in-kernel is considered a feature?
throwanem 2 days ago 1 reply      
Looks awesome! I can't wait to play with it when it hits 1.0.

In advance of that, I'm curious: What's the tl;dr: on how this compares to Tinc? In particular, I'm wondering what WireGuard's mobile story looks like, especially in comparison with Tinc's (which is pretty rudimentary as far as I can tell), and about the extent of effort that's likely to be involved in ongoing configuration management.

felixding 15 hours ago 0 replies      
Does it work against Deep Packet Inspection? I live in China, most popular VPN protocols can't pass through the great firewall.

Sorry for my English.

bulatb 2 days ago 1 reply      
What happens if a client gets assigned a new IP that isn't in the server's whitelist? How do they connect?
dmitrygr 2 days ago 1 reply      
"Encrypt entire IP packet using peer ABCDEFGH's public key."


1. RSA & ECC are both too slow for this to be performant

2. Padding???

Certainly you meant "encrypt using a negotiated symmetric key". And earlier you meant to say "negotiate symmetric key with peer"
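What the parent is describing is standard hybrid encryption: do the slow public-key math once to agree on a symmetric key, then encrypt each packet with a fast symmetric cipher. A toy, stdlib-only sketch of the idea (tiny finite-field Diffie-Hellman parameters and a hash-based keystream stand in for real primitives; nothing here is secure, and it is not WireGuard's actual protocol, which is based on the Noise framework):

```python
import hashlib
import secrets

# Toy finite-field Diffie-Hellman parameters. This 64-bit prime is far
# too small for real use -- it only demonstrates the shape of the exchange.
P = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, prime
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Each peer sends only its public value over the wire.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# Both sides derive the same shared secret, then hash it into a symmetric key.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b
key = hashlib.sha256(shared_a.to_bytes(8, "big")).digest()

def xor_stream(key, counter, data):
    # Hash-based keystream standing in for a real AEAD cipher such as
    # ChaCha20-Poly1305 (and note: no authentication tag here).
    stream = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
    return bytes(d ^ s for d, s in zip(data, stream))

packet = b"entire IP packet"
ct = xor_stream(key, 0, packet)
assert xor_stream(key, 0, ct) == packet  # symmetric: the same op decrypts
```

The point of the structure is the performance one the parent raises: the expensive `pow()` happens once per peer, while the per-packet work is a cheap symmetric operation keyed by a counter.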

Anonionman 1 day ago 0 replies      
It would be useful to have the option to connect to the VPN server through a SOCKS/HTTP proxy like you can with OpenVPN; it's useful for traffic obfuscation and for combining a VPN with Tor/I2P. Another thing that I like is the XOR patch for OpenVPN (https://github.com/clayface/openvpn_xorpatch), which can be very useful in hiding the fact that you are connecting to a VPN.
maxhou 2 days ago 1 reply      
Does the protocol implement any kind of negotiation (ciphers, ...)? If not, how would you handle future types of attacks against the then-hardwired constructions?

I fully agree that being in-kernel is the right choice for performance, but the chosen constructs exclude the possibility of using any type of existing crypto hardware accelerator that shines in the IPsec use case (cache-cold data == no cache flush overhead, fully async processing of packets with DMA chaining). Time to start lobbying SoC vendors :)

amq 2 days ago 1 reply      
A performance comparison would be welcome, especially on low-end hardware. For example, I'm getting 12 Mbps with OpenVPN BF-CBC on an AR9341; I wonder what WireGuard could achieve.
wmf 1 day ago 0 replies      
I'm surprised to see that ChaCha-Poly is faster than hardware-accelerated AES-GCM. Any ideas why?
wargame 2 days ago 1 reply      
Is it possible to use TCP-only?
naasking 2 days ago 2 replies      
Seems like an interesting project. How does it compare to zerotier?


eximius 2 days ago 1 reply      
Please let me know if this has already been posted.

Is there any worry that someone might perform a DoS attack against a client by replaying a valid packet from the target from different hosts, so that the other servers cannot correctly route to the real IP of the target? This is based solely on the description on the homepage, and it might not be possible due to implementation details I'm unaware of.

tw04 2 days ago 0 replies      
Why would I want this over softether?
maffblaster 1 day ago 0 replies      
Nice work! Exciting!!
lawnchair_larry 1 day ago 0 replies      
You don't want this in the kernel.
Burnout and Mental Health stephaniehurlburt.com
328 points by ingve  3 days ago   111 comments top 25
eknkc 3 days ago 2 replies      
Oh this is too real.

I spent 3 years of my life working ~100 hours a week on the same project. It was a startup (a local one) and I can't remember the details of that time. I went on 2 vacations, only to spend my time working on my laptop, just by a pool rather than at the office. I carried my laptop everywhere. Kept my phone accessible 24/7. Even got stressed on the ~1 hour flights I had to take, because I'd be offline. I planned to quit but never acted on it. That mental state somehow locks you in.

As far as I can remember, my only comfort was my cat. He'd sleep on my lap while I was working at home. Like a stress reliever.

Then he died one day without a warning. I just went ahead and quit my job the next day.

Best decision of my life. I had some stock in the company and sold it during a later investment round to our older VC. Spent 6 months doing absolutely nothing. Then started a new company, started working healthy hours. Started cycling, eating healthy. Lost a ton of weight. Taking care of yourself makes everything work much better in the long term. Later, I even found a cat with a broken rib on the street, barely breathing. He's now sleeping next to me.

Please, please, if you are in a similar situation, just stop. It isn't worth it.

jondubois 3 days ago 7 replies      
This sounds like the past 10 years of my life. I can't even enjoy normal life activities anymore... And I can't afford to take a long enough break to recover (if that's still possible?).

I don't even care about memories anymore - I'm in survival mode; I'm just a bullet moving towards its target.

It's like I've been running through a tunnel as fast possible for a long time and I can finally just see the light at the end of it, but I know there's a train coming in my direction and if I don't make it out of the tunnel in time it's going to get really dark and ugly.

ryandrake 3 days ago 2 replies      
> I was used to being scared of being fired-- not that long ago, I'd been scraping by, had a hard time finding work. I didn't fully realize that I had more power now as a programmer. I didn't think about power struggles-- how other people did go home, but because they took the risk of standing up for themselves.

This is a real part of burnout--the realization that as employee #3422 you don't have much bargaining power. But I came to the opposite conclusion to the author's.

You either work to burnout levels or they'll just replace you with someone who will. I used to be a really cocky 20-something programmer, not afraid of getting fired or laid off--until it happened! Ending up about a month from insolvency gives you a realistic perspective on the power imbalance between you and your employer. Combine that with having a family to support, and you become much more willing to go into the "burnout zone" in order to keep the bills paid.

tequila_shot 3 days ago 5 replies      
This is exactly what I am going through right now. It's been 5 months and I don't have any recollection of my first month.

I'm in the middle of a high profile project which goes to UAT in two weeks, and I know it is slated to fail. There's just too many issues.

I have been working 85+ hours every week constantly for the past ~5 months. Being an immigrant (H1B) makes it more difficult. All I do these days is code for 15+ hours every day, with the impending weight hanging over my head constantly that I'll be made a scapegoat if the project fails.

I've started to look out for opportunities, and since it's September, the number of companies offering jobs in my domain is _very_ small.

What did you do when things like these happened?

m4x 3 days ago 0 replies      
It's really important to pay attention to symptoms of burnout and deal with them as a possibly life-changing injury. Never ignore it. Never think "I'll deal with it after this project".

I didn't realise this when I was younger and suffered three burnouts. The first two sucked but were reasonably short term problems. The third destroyed my ability to think clearly and I'm only now starting to regain the ability to think as I used to - after a ten year break in a completely different career.

Don't ignore burnout!

nul_byte 3 days ago 2 replies      
I hope I don't sound conceited here, but this was me and I resolved the issue.

Meditation. 15 minutes a day is all it takes, and I get myself to that place of not having a care in the world and feeling all the stress just drop away. The only way I can describe it is like when you go on a beach holiday and you're lying in the sun, not sure what day it is (Saturday? Wednesday?) and feeling content.

It's now become vital and essential for me to meditate; it feels as important as sleeping and eating. I don't have any aspirations to become enlightened, I just wanted to stop waking up fearful in the morning and to feel content during the day and night.

JDiculous 2 days ago 0 replies      
I completely relate to this.

I realized things needed to seriously change when I couldn't recall how long I'd been living in NYC or my apartment (4 and 3 years), and when I did the math I couldn't believe how much longer it was than I thought. The last year or so has basically been a blur aside from a few interesting memories (eg. vacations). It's not even like I've been working really hard, I've just been living a dull life on autopilot doing the same thing over and over again.

I've noticed that time passes slower when you're doing more interesting things. When I'm on vacation, the days feel much longer and fuller and I remember them in vivid detail. On the other hand, a month at the office is often just one continuous blur (I wrote about this here: http://jdiculous.blogspot.com/2016/04/slow-down-time-routine...). Also, I'd been struggling to wake up before 10am and generally feeling sleep-deprived during the week, but on vacation I naturally wake up bright and early at 6:30-7:30am, fully awake.

I just quit my job and flew to Tokyo just to experience something different. My first day I woke up at 7am and went to bed at 7am. That single day was more memorable than the weeks if not months before I quit my job.

I've just managed to land a remote job. My plan is to travel around, and I'm hoping that the freshness of constantly being exposed to new environments and experiences outside my comfort zone will keep me sharp and more engaged with life rather than another forgettable year in an air-conditioned open office at the same desk around the same people.

devy 3 days ago 0 replies      
In my own opinion, job responsibility and stress level are proportional.

A year ago, I was burned out from a VP of Engineering job I had held for two years. After moving down to an Engineering Lead (developer) job, my responsibility was greatly reduced, and so was the stress level. I am able to take better care of myself and my family. So despite the pay cut and title downgrade, it's a good tradeoff so far.

Having said that, by no means is responsibility the only factor; I also feel that work culture is geographical. For instance, from my anecdotal experience, the same type of company in the U.S. or Japan works longer hours than ones in Canada or France. A startup job in S.F. may be a bit more stressful than a similar job in the Midwest/South. Your mileage may vary.

PS: 8 hours of sleep is a great way to help me wean off the heavy coffee-drinking lifestyle. I remember reading an article recommended by my Jawbone UP wristband app about the quality of sleep - REM sleep is key for the brain to consolidate memories. [1][2] As an aside, I love Jawbone's health awareness content. Too bad they are going under :(

[1] http://www.huffingtonpost.com/dr-michael-j-breus/whats-in-a-...

[2] http://www.ncbi.nlm.nih.gov/pubmed/15560767

another_account 2 days ago 0 replies      
For anyone currently trying to self-medicate with illegal drugs: please, please get help. There is no shame. I'm a recovered heroin addict. I lost everything and am currently rebuilding.

I started on the opiates and benzos to self-medicate. I had all the symptoms: panic attacks, getting really ill whenever I took a break, complete lack of perspective. Just stress from all sides - work, relationships, bills. Insane working hours for years. So, of course, as my tolerance rose, so did the dosage and strength.

CBT, therapy, meditation, exercise and a stable routine are what I have utilised to get myself out of what I would call, without any sense of over-dramatising, hell.

It is not worth it. Please, please get help. There is no shame.

simonswords82 3 days ago 2 replies      
This article strikes home with me...I've not told this story publicly before:

I've burnt out twice, it ain't pretty and once you've experienced burnout it opens doors in your brain that are difficult and time consuming to close again. So you're best off making sure those doors remain closed by taking care of yourself :)

For the first five or so years of my software company I worked like a dog on meth. It wasn't smart working, it was sheer brute force and ignorance that got the business off the ground. Come Christmas every year I would be a shell of myself. Christmas was the only time I really stepped back from work, and I inevitably got ill most years as my body recovered from the sustained beating I'd given it that year.

My first burn out was 2011. The recession had kicked in, clients were going away or shrinking their spend with us in droves. To compound matters my personal finances were intricately linked to the businesses success or failure due to the personal guarantees I'd signed on loans and the office space. I started to suffer panic attacks and general anxiety. I tried to self medicate both positively (exercise) and negatively (drugs), but ultimately hit the deck hard.

The noise in my brain caused by the anxiety and stress had my brain at 100% CPU all the time, leaving nothing for work or, well, anything. It took about three months before I could go back to work in any normal sense of the word.

Fast forward to last year (2015) around Christmas time. Another very busy year but this time business is brisk and so I was rolling around in our moderate successes. Again, December came, and I hit the deck. This time I had a wicked health scare to boot. It took about four to five months, so slightly longer, to recover from this one.

Fool me once and all that. I've now restructured the business so that it's almost impossible for me to work myself to death. I appointed a managing director to take care of the day to day stuff that was burning me out, leaving me to focus on strategy and leadership - the things I'm good at.

It's ironic that I called my business Atlas - Clever Software, because he's the Titan condemned to hold up the sky for eternity. I condemned myself to holding the heavy weight of a very complex and difficult-to-scale business. I'm very lucky not to have long-term physical or mental issues as a result of the stress I needlessly endured. I know other founders who are not so lucky.

It's due to this backstory of mine that I have a deep dislike of the 'hustle' culture. It's alright to bust your backside to get a business off the ground, but at all times you owe it to yourself and those who love you to put your health and wellbeing first. You've got one body, but there are millions of business/work opportunities out there that you don't have to kill yourself to make a success of.

inestyne 3 days ago 0 replies      
I think this has more to do with how we operate at high speed. I delegate my memory to list apps and simple text notes. Since I didn't have to store the information for any length of time in my head, it's hard to recall that information later. Same breakdown as note-taking in high-level college courses: if you never read the notes, it's like you never heard the lecture.
markpapadakis 2 days ago 0 replies      
I suppose we all have stories to tell and advice/tips to share; I too have gone through intense burnout, and initially it was just too hard to deal with.

Eventually, it turned out what really worked for me (it also worked with some friends who tried this) is to just do something else for a change.

Work on small, contained projects - maybe in unrelated problem domains - and consider changing your habits and doing more things for fun; but whatever it is you do, don't try to 'force' yourself to do it. Working out at the gym works wonders. Getting a good night's sleep also helps.

I would snap out of it after a few days. This never failed me, but I realize that it may not work for everyone.

Disruptive_Dave 2 days ago 0 replies      
Here's another sucky thing that can happen - you unconsciously adjust and adapt to the conditions that cause burnout and end up accepting them as "the way it is / life." It happens slowly and without notice. There is no "rock bottom" so you're never forced to confront the situation head on. It's like death by a thousand paper cuts.
cableshaft 2 days ago 0 replies      
Holy crap.

This article made me realize that I hardly remember anything of the time I spent working at one of my startup jobs. Bits and pieces of specific events stand out, but not a lot. The last six months I was working there, I worked at least 60 hours a week, and it crept up to 80 hours a few times. The hours let up after I was hospitalized for a day because of all the stress. I didn't do much else besides sleep and go to work. Maybe that's why it all just blended together.

And I have a crappy memory in general, so I don't know how much of it is because of that. Anything more than a few years ago is kind of hazy.

But still, it's almost like that year and some change might as well have not existed, other than to put a few lines on my resume. But there are much easier ways to do that.

20andup 2 days ago 1 reply      
I just came back from a 5-day holiday. I was working so much on my own project that I burnt out. It took me a few days to realize what was going on, since my project was getting easier and yet I felt a loss of motivation.

So I locked my computer up, took a road trip for 3 days and did literally nothing for another 2 days. I started working again yesterday afternoon and was quite surprised how much more agile my mind has become.

It's hard to let go and relax sometimes when there is always so much to be done. I think we need to realize everyone is human, and every few months we need a break whether we feel like we do or not.

After working for 10 years now, I realize that nothing is really ever that urgent. No matter what anyone tells you. It's your perception (or your boss's) that makes it urgent, but objectively, it probably isn't.

The world won't stop spinning just because you took one more day to finish something. Burn yourself out, and it may even take more time.

fredleblanc 3 days ago 1 reply      
I gave a talk on burnout and "digital overwhelm" in 2015. It's a bit of my story and some tips on coping under the weight of the world. If you're interested: https://vimeo.com/147213533 Slides are linked in the comments.
_yosefk 3 days ago 5 replies      
For someone with no responsibilities except work, 70-hour weeks are sustainable (I did it for 3 years; not to be recommended, but no burnout). People report way larger numbers of hours, though, and probably have to handle more than just their work; I don't know where things start to break down, but they ought to somewhere.
cm3 3 days ago 0 replies      
Burnout comes in various forms and intensities, but what's most important is that at least the first one will not be recognized as such. You have the symptoms but it's not like the flu or a headache, so you have no idea that you're burning out. If you're a developer and started coding as a hobby, try to recognize it early and find a better job or fix something else that will resolve the root causes, but try not to let it ruin the love of coding if you can. This sounds easier than it is, I'm well aware of that.
njloof 3 days ago 0 replies      
I'm really surprised to read the article she linked... Labor laws in Quebec do not exempt game employees; they should be getting overtime any time they're over 50 hours a week...
sp527 2 days ago 2 replies      
One of the best things I've ever done for myself: quit my job.

Stating this not to brag but to make my point: 8 months ago I was clearing north of $140k plus options at a top tech co (plus all the usual perks/full benefits) at age 23. The path was probably open for a management position and easily over 200K per year by age 30.

I hated it. The work was agonizingly mundane. The environment was intellectually lethargic (a tremendous irony given the pedigree of my coworkers). I was pulling long hours to write code that I knew had little real value. And everywhere I looked I saw constraints and barriers to doing something meaningful. Eventually it got to the point where I started having a lot of anxiety, and even moodiness and feelings of hopelessness.

I felt like what I was forcing myself to become was suffocating the person I wanted to be, deep down. So I asked myself, honestly, if the money was worth trading off so much of my life. My answer was 'no'. However, I think I might be an outlier in that regard. A lot of other people place a high value on experiences, socializing, buying nice things, etc. There are only three things I can ever remember giving me genuine contentment: building complex things, learning (reading, programming, lectures), and hanging with a handful of very good friends. Turns out working a job was diametrically opposed to my main priorities in life. And the weird thing is I'd known that for a very long time before having a decisive 'epiphany' about it. I'd been running from that truth because it was so contradictory, relative to what society tells us we should do and value. This line of reasoning was essential in clearing the final mental hurdle between myself and the decision to pursue entrepreneurship. I had to reconcile myself with having less in the immediate term and the likelihood of long-run financial consequences in taking a break from working in the industry. I acknowledge that this isn't possible for everyone. Some people have dependents, mortgages, and other obligations. But if you can somehow make the numbers work and don't find the luxuries of a six figure salary as compelling as the opportunities you trade away, then you owe it to yourself to stop and think about it.

As for what happens on the other side: freedom is a beautiful thing. I'm now convinced there's 'one easy trick' to becoming a 10x engineer and that's quitting your job. You learn out of necessity when you have few resources and no fallbacks. It also seems obvious now in hindsight that the surest way to realize your full potential is to work on your own terms building something you care about. I get to see friends more often than I have in years and I'm working with two of my best friends on a startup. I've also been exercising consistently and eating better. But by far the most energizing change in my lifestyle has been my improved sleeping habits. Imagine a world in which you don't have to set an alarm clock and always get enough sleep - that's what you get to do when you work on your own terms. I can't be sure about this but I feel like I might actually be improving in mental acuity as well, which I would attribute to getting more sleep and the compounding effects of expanding my skillset (full stack engineering, PM, marketing). I think the phenomenon of interdisciplinary study leading to improved cognition is fairly well-studied and I'm now realizing that entrepreneurship is at least a good approximation of that. I furthermore spend several hours per day in flow because there are no meetings, emails or other interruptions.

All this adds up to a lifestyle that's so dramatically superior to where I was before that I have absolutely no intention of going back, if I can avoid it. Sorry for the poorly-constructed stream of consciousness. I intend to write a blog post on the subject at some point, but wanted to get this down here in case it helps anyone trying to make a decision. If you're going through burnout and feel like your personal narrative resembles mine, I hope you'll take some time to consider quitting to work on something that gives you purpose and that might have value to others. Regardless, I'll end with the handful of things I needed to keep hearing back when I was burning out: there IS an other side and you WILL get there. It's okay if you have to quit because the safety net for software engineers is incredible and your health is comparatively fragile. By far the most important thing for you to do is let the good people in your life be there to help you get through this.

nonofficial10 2 days ago 0 replies      
>I realize I don't need to have caffeine to work.

Impressive. Once I told my coworker that I wasn't drinking coffee because I couldn't sleep at night. She replied that she drinks coffee because she can't stay awake in the daytime without it.

brendonjohn 2 days ago 0 replies      
The time it takes to go into burnout is supposedly the time it takes to restore yourself.
mk-61 3 days ago 0 replies      
Still recovering. Slowly. To those, who seeking for help, I can recommend a very good read: "The Power of Now", by Eckhart Tolle.

I can recommend it even if you have never experienced burnout or panic attacks.

stephengillie 3 days ago 0 replies      
Being able to stop working when you reach burnout (and not starve or go homeless) is a nice luxury to have. Things can always be worse.
AI fighter pilot wins in combat simulation bbc.co.uk
202 points by tpatke  2 days ago   177 comments top 31
linschn 2 days ago 9 replies      
I read the paper, and read up about the techniques used to do that (because the paper is very light on details). I came back completely underwhelmed.

This makes (clever) use of hundreds, if not thousands, of man-hours of painstakingly entered expert rules of the form IF <some input value is above or below some threshold> THEN <put some output value in the so-and-so range>.

The mathematical model of Fuzzy Trees is nice, but this is completely ad hoc to the specific modeling of the problem, and will fail to generalize to any other problem space.

This kind of technique has some nice properties (its "reasonings" are understandable and thus kind of debuggable and kind of provable; it smooths some logic rules that would otherwise naively lead to non-smooth control; etc.), but despite the advances presented here that seem to make the computation of the model tractable, I don't see how they could make the actual definition of the model anywhere near tractable.
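To make the smoothing property concrete: a crisp threshold rule flips its output discontinuously at the boundary, while a fuzzy membership function grades the transition, which is what yields smooth control outputs. A minimal sketch (the "close" predicate, the ramp shape and the thresholds are all invented for illustration):

```python
def crisp_close(distance_m):
    # Crisp rule: IF distance below threshold THEN fully "close" -- the
    # output snaps from 1.0 to 0.0 at exactly 1000 m.
    return 1.0 if distance_m < 1000 else 0.0

def fuzzy_close(distance_m):
    # Fuzzy membership: the degree of "close" ramps linearly from 1.0
    # down to 0.0 between 500 m and 1500 m, so any control output based
    # on it changes smoothly instead of jumping at a single threshold.
    if distance_m <= 500:
        return 1.0
    if distance_m >= 1500:
        return 0.0
    return (1500 - distance_m) / 1000

for d in (400, 999, 1001, 1600):
    print(d, crisp_close(d), round(fuzzy_close(d), 2))
```

With these numbers, the crisp rule flips between 999 m and 1001 m while the fuzzy degree stays near 0.5 on both sides of the boundary.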

Also, I dislike having to wade through multiple pages of advertising before I can find the (very light) scientific content.

--Edit: I realize I am very negative here. I do not mean to disparage the work done by the authors. It's just that the way it is presented makes it sound way more impressive than it is. It's still interesting and innovative work.

YeGoblynQueenne 2 days ago 2 replies      
For those who read this piece of news and don't understand why there is no mention of machine learning, neural networks or deep learning: that's because the system described is a typical fuzzy-logic expert system, a mainstay of Good Old-Fashioned AI.

In short, it's a hand-crafted database of rules in a format similar to "IF Condition THEN Action" coupled to an inference procedure (or a few different ones).

That sort of thing is called an "expert system" because it's meant to encode the knowledge of experts. Some machine learning algorithms, particularly Decision Tree learners, were proposed as a way to automate this process of elicitation of expert knowledge and the construction of rules from it.

As to the "fuzzy logic" bit, that's a kind of logic where a fact is true or false by degrees. When a threshold is crossed, a fact becomes true (or false) or a rule "fires" and the system changes state, ish.

It all may sound a bit hairy but it's actually a pretty natural way of constructing knowledge-based systems that must implement complex rules. In fact, any programmer who has ever had to code complex business logic into a program has created a de facto expert system, even if they didn't call it that.
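A de facto fuzzy expert system of this shape can be sketched in a few lines. The rules and numbers below are invented for illustration (this is not the paper's rule base): facts hold degrees of truth in [0, 1], a rule fires with the strength of its weakest condition (fuzzy AND), and competing actions keep their strongest support (fuzzy OR):

```python
# Minimal fuzzy rule base: facts are degrees of truth in [0, 1].
# A rule fires with min() over its conditions (fuzzy AND); when several
# rules support the same action, the strongest firing wins (fuzzy OR).
def infer(facts, rules):
    support = {}
    for conditions, action in rules:
        strength = min(facts[c] for c in conditions)
        support[action] = max(support.get(action, 0.0), strength)
    best = max(support, key=support.get)
    return best, support

# Invented "IF conditions THEN action" rules, loosely in the spirit of
# the air-combat domain (not the actual rule base of the system).
rules = [
    (["target_in_range", "target_locked"], "fire"),
    (["under_attack"], "evade"),
]
facts = {"target_in_range": 0.8, "target_locked": 0.6, "under_attack": 0.3}

action, support = infer(facts, rules)
print(action, support)
```

A crisp expert system is the special case where every degree is exactly 0.0 or 1.0; the fuzzy version just lets rules fire partially instead of all-or-nothing.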

For those with a bit of time on their hands, this is a nice intro:


Negative1 2 days ago 4 replies      
AI fighter pilots have been killing me in flight simulations for at least 30 years now using similar systems. From the paper, they basically use an expert system built on something they call a Genetic Fuzzy Tree (GFT), which seems suspiciously like a Behavior Tree whose nodes are trained. They trained the GFT, then had it go up against itself, where the Red team was the 'enhanced' AI and Blue was supposed to be the human (this part was odd to me).

After they completed the training, they put it up against real veteran pilots, and the AI basically did a few things. It would take evasive maneuvers when fired upon and fire when in optimal range. That's pretty much it. And you know what? That's really all modern pilots need to do. It's amazing what they did with Top Gun, making this stuff not look boring. At the end of the day, it's just waiting for some computer to tell you that you have target lock and pressing a button. If attacked, take evasive maneuvers and pray. Takeoff and landing on a carrier is the scariest part.

I'm quite curious how this system would perform in WWII era dogfights where you had to worry about the stress on your plane, had to deal with engines that failed and stalled all the time and maneuvers that were much slower and closer to the enemy (plus no missiles).

Even so, I enjoyed reading the paper (not the article), so I would recommend it if you're into game AI at all.

vbo 2 days ago 14 replies      
If we assume the wars of the future to be fought by AI-driven warmachines, can we abstract the matter further and have virtual wars? Our AI versus your AI fighting on computational resources provided by, erm, Switzerland. Nobody gets hurt and no money is spent building and destroying warplanes. Everybody wins. And have a prize pot, so actual invasion of territory is not necessary. Bulletproof solution, may I say. What do you mean it won't work?
Aardwolf 2 days ago 0 replies      
They did only one simulation? It's strange to report the details of a single simulation when more would make sense.

Why not do hundreds of simulations, with different numbers of attacking and defending jets? Sounds like fun; it shouldn't be a problem to find pilots who want to do this simulation - it's merely hundreds of hours of gameplay :).

Or was it like, they did hundreds, but this is the only one where the AI won, and it had 4 planes while the humans had only 2?

hackuser 2 days ago 0 replies      
The Pentagon is betting on human-AI teaming, called 'Centaurs'. The foundational story is this:

Back in the late 1990s, Deep Blue beat the best human chess player, a demonstration of the power of AI.

Around ten years later, a tournament of individual grandmasters and individual AIs was won by ... some amateur chess players teamed with AIs.

AIs aren't good at dealing with novel situations; humans are. They complement each other. (I'll add: unlike most other endeavors, in war the environment, i.e. the enemy, is desperately striving to confuse you and do the unexpected. Your self-parking car would have more trouble if someone was trying everything they could think of to stop it, as if their survival were at stake.) Also, we strongly prefer that humans make life-and-death decisions; hopefully that turns out to be realistic.

prodmerc 2 days ago 2 replies      
Huh, couple that with an aircraft not bound by human limits (no life support, much faster maneuvering with no loss in decision making) and it should be awesome. And terrifying.
tdy721 2 days ago 1 reply      
Was this Raspberry Pi powered? This story makes that claim: http://www.newsweek.com/artificial-intelligence-raspberry-pi...

If that is true, it puts this achievement in a totally different class.

sleepybrett 2 days ago 1 reply      
I imagine an AI pilot always has a path to victory since they aren't subject to red-out/blackout and can thus pull crazier maneuvers than their human counterparts.
willangley 2 days ago 0 replies      
The Alpha paper, "Genetic Fuzzy based Artificial Intelligence for Unmanned Combat Aerial Vehicle Control in Simulated Air Combat Missions", is open access and available online:


AKifer 2 days ago 3 replies      
By every consideration, an AI pilot has all the advantages in physical combat: no G-force limit, precise maneuvers, instant reactions, full-time awareness. The only question is: will the rules of war allow an AI to kill a human? Or how can a human decision be inserted into the loop?
cygnus_a 2 days ago 1 reply      
MAD is the future. And righteousness is the enemy. Don't mess with us. Don't mess with them.

Also, do the world a favor and don't innovate new weapons. They leave an indelible effect on the collective mind.

matt_wulfeck 2 days ago 0 replies      
> Because a simulated fighter jet produces so much data for interpretation, it is not always obvious which manoeuvre is most advantageous or, indeed, at what point a weapon should be fired.

This is changing very rapidly with hardware-accelerated RNN chips being researched by Google and facebook.

I wonder about communication though. All the enemy fighter needs to do is jam any signals used by the jets to communicate. I wonder if they could rely on laser/line-of-sight communication instead of RF frequencies.

matheweis 2 days ago 2 replies      
They made a movie about this in 2005 (Stealth); looks like it's only taken 10 years for the first half of the plot to unfold.

Now we just need the AI to go rogue and target its master ;)

juandazapata 2 days ago 0 replies      
ONE simulation? This is hardly news. It'd be more interesting if they did hundreds or thousands of simulations. One data point means nothing statistically.
jjwiseman 2 days ago 0 replies      
I'd like to know how this system compares to TacAir-Soar: http://ai.eecs.umich.edu/people/laird/papers/AIMag99.html
infinotize 2 days ago 0 replies      
I've been losing to the AI fighter pilots in DCS:World[0] for years.

[0]: https://en.wikipedia.org/wiki/Digital_Combat_Simulator

pc2g4d 2 days ago 2 replies      
Fighter jets feel like something that could be effectively tackled using genetic algorithms. Algorithms that get shot down are weeded out; algorithms that shoot down enemies are promoted. Yeah?
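As a toy illustration of that selection loop (everything here is invented; a real setup would score each genome by running it through combat simulations, not a placeholder function):

```python
import random

random.seed(0)  # reproducible toy run

def fitness(genome):
    # Placeholder stand-in for "kills minus losses" from a combat sim:
    # the optimum is a genome of all 0.5s.
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve(pop_size=20, genome_len=4, generations=50):
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # best policies first
        survivors = pop[: pop_size // 2]      # "shot down" half is weeded out
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(genome_len)          # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(genome_len)            # point mutation
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Since the top half survives unchanged each generation, the best genome never regresses; the interesting engineering is all in making `fitness` reflect actual engagements.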
Aelinsaar 2 days ago 1 reply      
That's interesting, but it had a 2:1 numerical advantage too, which does matter.
SocksCanClose 2 days ago 0 replies      
For many years John Boyd and the "Fighter Mafia" helped to plan, build, test, and then manufacture fighters that had optimal "performance envelopes" that enabled them to maintain dominance in the sky. Perhaps this concept means that the new "performance envelope" is going to be one of software. This argument is fleshed out here: http://warontherocks.com/2016/02/imagine-the-starling-peak-f...
gldalmaso 2 days ago 1 reply      
I imagine in real life conditions adversaries would focus on sensory attack types then?

Are there sensors that are immune to scrambling and bad data?

ourmandave 2 days ago 0 replies      
My first thought was of that little bastard UFO in Asteroids. Its pew-pew gun would never miss me.
dmvaldman 2 days ago 0 replies      
I find myself imagining a world where the weapons trade is replaced with bootlegged AI software trade.
ADanFromCanada 2 days ago 0 replies      
News at 11. One robot pilot beats another robot pilot.

"The AI, known as Alpha, used four virtual jets to successfully defend a coastline against two attacking aircraft - and did not suffer any losses."

"Alpha, which was developed by a US team, also triumphed in simulation against a retired human fighter pilot."

Key words here are "also" and "simulation" and "retired".

Click bait much?

kingmanaz 2 days ago 0 replies      
In the clip below, one of mankind's last manned aircraft pilots--flying his fighter with a mind interface--attempts to destroy his AI-controlled fighter replacement:


Perhaps honor can't be programmed.

fedxc 2 days ago 0 replies      
Why is this news? I lose to AI games all the time...
0xdeadbeefbabe 2 days ago 0 replies      
Does it go without saying that actually running a simulation is super easy? At times I feel locked in by my operating system, so I wonder how these guys did it.
partycoder 2 days ago 0 replies      
It can be deadly, but if it's predictable it can be controlled. For example, a gator: a gator is deadly, but can be manipulated because of its predictability.
ratsimihah 2 days ago 0 replies      
Ender's Game!
sandworm101 1 day ago 0 replies      
What form of combat was this? It sounds as if they were dogfighting, something that is more myth than reality these days. Fighters fight, but they don't engage on equal terms in the duel we see in films. What were the BVR conditions? Was this a missile fight or with cannons?

The concept of two flights approaching each other, seeing each other, and not engaging until they are in dogfighting range is silly. To get two modern fighters close enough for a proper turning fight at least one side will have to be taken by surprise. Otherwise, the long-range missile fight will either decide the matter or place one side in such a poor position that they will withdraw. (Either they are down or will have so reduced their energy that a turning fight isn't an option.)

jameshart 2 days ago 0 replies      
AIs beat humans in simulated combat continually. It's called 'losing a life in a video game'.
10-Year Exercise Periods Make Sense quora.com
318 points by sama  3 days ago   143 comments top 27
sama 3 days ago 4 replies      
I agree with Adam's post and intensely disagree with A16Z's post on this topic.

I don't think companies should take back stock compensation on a technicality. It'd be silly to even discuss taking back cash compensation when someone leaves a company!

I appreciate Adam starting this trend years ago.

jeffdavis 3 days ago 5 replies      
Founders are committed and in for the long haul, and either make a lot of money or none.

Startup employees make less money on a nice exit, but aren't as committed and can work for a few companies (maybe 2 years each) to improve their odds.

So having 10 years to exercise makes a lot of sense for the second group.

Forcing the employees to stay until liquidation makes zero sense for the second group. So you need to give the people that do commit at that level a package that more closely resembles a founder.

Otherwise it just distorts the market in all kinds of ways. Nobody would want to work for you until it looks like liquidation is around the corner, which means startups would constantly need to be positioning themselves on the auction block rather than focusing on lasting growth.

In addition, it creates the normal kinds of distortions associated with illiquid assets and immobile people.

SeoxyS 3 days ago 0 replies      
For young startups, I always recommend allowing Early Exercise. Put simply, it's the right of employees to exercise their options before they vest. The company retains a right to repurchase the resulting shares should the employee depart before they vest.

This enables them to exercise them as soon as they're granted, which greatly reduces the tax burden in two ways:

- First, the strike price and the value of the option are the same when they're granted, which means that the spread (i.e. the difference between the exercise price and the value of the options exercised, which the IRS considers profit for AMT) is zero. Therefore, no taxes need to be paid. I've been stung by a 5-figure AMT tax bill on exercised options that were completely illiquid, all of which would've been avoided had I exercised early.

- It starts the clock for long-term capital gains. You need to hold the actual stock for over 1 year to be taxed at capital gains rates instead of income tax rates. Federally, this can lower your tax rate from up to ~40% to ~20% (would've been 15% pre-Obama!). In CA there is no such distinction for state taxes, so you'd still be paying income tax rates of ~10-13%.

Keep in mind, if you early exercise, that you must file an 83b election with the IRS within 30 days, or the tax consequences can be severe. (If you don't, you'd be taxed on the spread at the current option value every time some of your options vest.)

Now, I think extended exercise windows are great too, and ideally option agreements would have both. I think generally, early exercise makes more sense for employees who join pre-Series-B, while extended exercise windows make more sense for later stage employees.
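To make the spread math above concrete, here is a back-of-the-envelope sketch (the share count and prices are invented for illustration):

```python
# The "spread" the IRS counts as AMT income on an ISO exercise is
# (fair market value - strike price) * shares exercised.
# Hypothetical grant: 10,000 options at a $0.10 strike.

shares = 10_000
strike = 0.10

def amt_spread(fmv_per_share):
    """Paper profit the AMT calculation sees at exercise time."""
    return (fmv_per_share - strike) * shares

# Early exercise at grant: FMV equals the strike, so the spread is zero.
print(amt_spread(0.10))  # 0.0 -- no AMT income

# Waiting until the valuation has risen to $5/share: roughly a $49K
# paper "profit" is taxable for AMT even if the stock is totally illiquid.
print(amt_spread(5.00))
```

This is why the comment above pairs early exercise with the 83(b) election: the election locks in that zero-spread moment as the taxable event.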

andreasklinger 3 days ago 0 replies      
Wow never disagreed with a16z content so far.

> at the same time disadvantaging employees who remain loyal to their employers just kicks the can down the road

The underlying assumption is that people only leave companies because they are not "loyal".

People get fired, people get mobbed out of teams, company cultures change, companies fail at management, employees' lives change, people need to move to other countries.

The whole notion about "loyalty" almost appears action-movie-like. "ARE YOU WITH ME? HELL YEA!"

It's already hard enough to convince highly skilled people to join companies vs founding their own. No need to further decrease the upside compared to being a founder.

tlrobinson 3 days ago 0 replies      
From the A16Z post:

> This solves all of the issues: cash rich vs. poor; competitive offers; and the bad incentive problem (e.g., encouraging employees to quit to build their own diversified stock portfolios).

Says the VC whose business depends on a diversified stock portfolio.

A couple paragraphs above he admits that "median time-to-IPO for venture-backed companies is closer to 10 years". That's not a reasonable amount of time to expect employees to stay at a job, and seems like a recipe for burnout and/or "rest and vest".

ska 3 days ago 1 reply      
I think there is a fundamental difference of opinion here, exposed by Adam's and A16Z's posts.

A fairly typical early stage employee will forgo hundreds of thousands of dollars in salary over a vesting period, in exchange for options.

The philosophical difference is here: at the end of that period, do you think of the shares as the employee's, earned in exchange for both the work done in those years and the hundreds of thousands the company saved on salary? Or do you think of the options as an ongoing incentive to keep the employee with you (perhaps still below market rates), in exchange for the chance of a big payout later?

Technical employees often feel the former, and will point to the fact that they've "given" the company much more in salary reduction than many early round investors paid per share they own outright. Corporations often state the latter point, or some variation, particularly pointing out that later employees don't have the same leverage on the option pool. Option agreements often encode the latter.

devit 3 days ago 7 replies      
Isn't any vesting for non-founding employees completely broken?

If the employee loses the stock when he's fired early, then the company has a huge incentive to fire him a day before he vests, and thus he should regard the vesting compensation as nonexistent.

If the employee retains the stock when he's fired early, then he can just get himself fired to ignore the vesting period, making the vesting pointless.

It seems that vesting can only work if the employee is so essential that the company would never fire him because the company would then be highly likely to fail, which should only apply for founders in a functional company.

ak2196 2 days ago 0 replies      
Quora was definitely not the first startup to do a 10 year exercise period, and not by a small margin. My Lime Wire stock options from 2000 had a 10 year exercise window with a 6 year vesting schedule, no cliff, and vesting every 3 months. Here's the proof: http://imgur.com/6eTUyui

Adam's on the right track though. I just had to write a 6 figure check today to exercise my vested options at my current employer because of the 90 day clause. It makes me angry because the company's official stance is that the board wants to use stock options as an employee retention tool. I was fortunate enough to have had the cash but a lot of other people are not and there is no secondary market. So if you get fired or have to quit during a bad market you are basically screwed.

abalone 3 days ago 4 replies      
I agree with this, but what do you guys think about minimum service periods? Like requiring 2 or 3 years? Companies like Pinterest and Coinbase have added that condition.[1]

Greater portability could in theory lead to higher turnover even among happy employees. They might go on to found their own company sooner. They might see good financial sense in diversifying their options portfolio. Yet young companies need the team to stick together for a certain time. Especially very small startups at the YC stage -- turnover is very harmful.

Note: In Adam's example, nobody leaves the pre-IPO company in under 4 years of service.[2]

[1] https://github.com/holman/extended-exercise-windows

[2] "imagine a company takes 10 years to IPO. Employee A works at the company from years 0 to 4. Employee B works there from years 4 to 8. Employee C works there from years 8 to 10."

cloudjacker 2 days ago 0 replies      
Man Silicon Valley companies are living in a parallel dimension!

They collectively think they have the LUXURY to hire employees that are in love with their random idea

And they collectively think that the employees have the LUXURY to play russian roulette with the compensation terms

Let's address that, because these factors are completely disjoint from the success of the company and the employees' INTEGRITY (instead of "aligned incentive") to deliver amazing products and code

mesozoic 3 days ago 2 replies      
Wow I guess since employees should already value most stock options at near zero it's hard to value them any less.
smsm42 3 days ago 0 replies      
Longer exercise window would be very valuable, especially for employees not having big cash pile laying around somewhere. That would raise the value of options a lot. The other suggestion though - longer vesting period - would have the reverse effect.

4-year vesting options in a startup are "an extremely risky investment that with much luck and hard work may pay off". 8-year vesting options in a startup mean "I guess Las Vegas gambling is too boring and way too little risk for you? How would you like to gamble with 10 years of your life?"

4-year vesting options in an established company are "we'll pay you if you agree to suffer us and drag yourself to work long after it stopped being fun for you". 8-year vesting options in an established company are "for how much would you agree to sell us your immortal soul?"

In short, long vesting periods for options may make total sense for the company issuing them. They would have very low value for the employee, and even a long exercise period would not compensate for that.

laurencerowe 2 days ago 0 replies      
You can't pay rent or save for a mortgage downpayment with illiquid stock options.

It takes something like $200-400K to get on the housing ladder in the Bay Area so the idea of putting it off for 8-10 years with no guarantee of success is already unattractive.

To shackle yourself to a single company for the duration? Nuts. When did you last work anywhere for 8 years?

hkmurakami 3 days ago 0 replies      
I've written several times in the past that only companies with substantial negotiating leverage against the gatekeepers of capital can afford to buck what is considered standard.

Hence we've only seen the hottest companies achieve 7-10 year exercise terms. https://github.com/holman/extended-exercise-windows

I've argued that as a cohort, YC is the best candidate to make a large push against VCs and make 7-10 years vesting terms an industry standard. Learning that this is now the case is incredibly exciting. https://news.ycombinator.com/item?id=11198991

harj 119 days ago | parent | on: Fixing the Inequity of Startup Equity

We're excited to make 10 years the new standard option exercise window for startup employees. Each of us has personally experienced someone close to us dealing with the stress of trying to exercise their options within 90 days, and it sucks. We'd like to see more companies making this change; we'll be keeping the public list of YC companies who have either implemented or pledged to implement an extended window updated here: https://triplebyte.com/ycombinator-startups/extended-options

jdoliner 3 days ago 1 reply      
> There is no concern for how many shares we granted in the past to other employees or whether or not they are still holding them; the only concern is the current market... it would be irrational not to increase the option pool if that's what was needed to be able to hire someone.

This is the part of this post that I can't believe is true. At the end of the day a company only has so much equity. How can the amount you've given out be of no concern in issuing stock to new employees? Isn't that tantamount to saying equity isn't scarce? When does the amount you've given out become of concern and in what context? If the answer really is that the amount of equity you've already given out never becomes a concern to any aspect of your company then why would you ever limit the amount of equity you give to employees?

lifeisstillgood 3 days ago 0 replies      
Everyone has a burn out point, a point when the company and the culture and just your life stages mean you want to move on. Being handcuffed to one is bad for everyone.

I would worry about the value of employees who basically wanted to leave the company four years ago but are only hanging in because their shares are worth a million. Surely it would be better to get those people paid and then out the door rather than keeping your senior influential ranks full of people who stopped caring years ago.

Surely there is a better way?

Stock options are some kind of payment, so why not treat them as a pro rata accrual? You are the first hire: you get 2% of the company if you stay ten years. Leaving after five to get married and move countries? Fine, here is 1%, just sign here, and we are all happy.

No matter how nice your arresting officer is, everyone resents handcuffs.

dasil003 3 days ago 3 replies      
Thank you for this Adam; as an early-stage startup guy who still hasn't made his FU money, this really nails all the salient points for me. Scott Kupor tries to decorate his article with references to employees' interests and considerations, but it's clear the guy has spent his career on the management/finance side, where he doesn't really understand what it means to be a ground-level early-stage contributor to a young startup. Consider Kupor's "solution":

> But, a way to truly compete for the very best and long-term oriented employees would be to offer even greater amounts of employee options grants. For example, why not offer stock option grants that are 50% more than the nearest competitors but with the provision that a departing employee cannot exercise his or her stock options unless there has been a liquidity event? If you stay, you're a serious owner, but if you don't want to be part of the company for any reason you won't be an owner. This solves all of the issues: cash rich vs. poor; competitive offers; and the bad incentive problem (e.g., encouraging employees to quit to build their own diversified stock portfolios).

I don't even know where to begin with this. First of all, unless you are a VC, you don't have visibility into the market for options. Even if you did, startups are not commodities; you can't compare shares of early stage companies directly to each other. Particularly when you are a single-digit employee, you are going to be shaping the actual future of the company. Not only should the offer you receive reflect the value that your particular skills and expertise will bring the company, but you also have to gauge the potential of the company itself. 1% of a $1B company is worth a lot more than 2% of a $100M company, and of course how much funding will you need to get there?

Obviously these things aren't predictable, but as a prospective employee you have to try. After all, unlike an investor, you only have one working lifetime to spend as an employee. That puts a different perspective on these things than for the VC, who really is building a portfolio and playing the odds. Since the VC is not directly pulling the levers, startups are effectively fungible to them.

But the part that really burns me up about his "solution" and its purported comprehensiveness is the idea that early stage employees who leave before a liquidity event don't deserve any equity at all. I'm sorry Scott, but that is absolute horse shit, and frankly it really will make me think twice about taking any investment from A16Z in the future. The early stage employees who take a huge pay cut in order to build something from scratch, which will most likely fail completely, are making a huge investment in the company. They will literally pave the way for all the later employees to even have a company to work for.

Can you imagine if VCs made the analogous argument that angel investors should not be entitled to their returns unless they matched the later VC investments? "That would be preposterous! Obviously those angels took a big financial risk and deserve their returns!" Financiers would never be this short-sighted, but somehow Scott thinks that someone putting their blood, sweat and tears into a startup for a below-market salary is only as valuable as their latest month of work. I respect the role of capital in startup creation, I really respect it because I don't have it, but even so, money is nothing without execution, and A16Z would be nothing without talented founders and employees who are willing to sacrifice a lot more than them to bring a successful company into this world.

Even if you are a complete sociopath who is interested solely in the short-term benefits to the company, you still wouldn't want to take this tack because (as Adam very aptly pointed out) then you end up with a lot of dead-weight in the company that's just hanging around to cash in their options.

Startups are not fungible; employees are not fungible. Treating employees like humans is not only the right thing to do, it's how you cultivate a reputation with "cash-poor" top performers. The danger for VCs like Scott Kupor is that there will always be an army of sycophants and yes-men ready to consecrate his every word just to get a piece of that juicy VC fund, but they are in real danger of having their lunch eaten by the expanding reach of angels who actually worked their way up out of the trenches themselves and understand the tech employee mindset.

pfarnsworth 3 days ago 0 replies      
This is a well thought out answer, and frankly embarrasses the response from the VC. Of course the VC wants to protect his own interests, he's just obfuscating it by pretending he's talking about "wealth transfer" and "fairness". What a bunch of BS, and I'll never work for a company that he is "advising". Who knows what sort of dirty tricks he'll play against the employees.
a_small_island 3 days ago 1 reply      
>"He suggests paying 50% above market in stock, but including a clause that all employees must remain at the company until a liquidity event or else they cannot keep any stock at all (even if they could come up with the exercise cost)."

Curious to what feelings this invokes for startup employees (nonfounders, investors) on HN.

snowwolf 2 days ago 1 reply      
I understand where the 10 year exercise period came from and I can understand the arguments against it.

A solution I haven't seen put forward is a compromise between the common 90 day window and the 10 year window, which is to have an exercise period equal to the amount of time you were an employee. This discourages people from bouncing around jobs collecting equity but gives a reasonable timeframe in which to exercise if you do want to leave after putting 5 years into growing the company.
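A minimal sketch of that compromise (the 90-day floor and the 10-year cap are my assumptions; the comment only specifies window equal to tenure):

```python
# Hypothetical policy: post-departure exercise window equals tenure,
# with an assumed 90-day floor and an assumed 10-year cap.

def exercise_window_days(tenure_days, floor_days=90, cap_days=3650):
    """Days a departing employee has to exercise vested options."""
    return max(floor_days, min(tenure_days, cap_days))

print(exercise_window_days(5 * 365))  # 1825 -- five years in, five years to exercise
print(exercise_window_days(30))       # 90 -- short stints keep the standard window
```

The floor keeps short-tenure employees no worse off than today's 90-day norm, while long-tenured employees earn proportionally more breathing room.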

genericpseudo 2 days ago 0 replies      
The answer is clear (not easy, but clear): refuse to work with people who act in ways you find unethical.

If Scott Kupor's position is a company's position, and the total package value (including salary, benefits, etc.) isn't acceptable to you when valuing the options at zero (and given that you don't control the company and they can fire you at any time, you have to), then they're on the list. Refuse to work with them and tell your friends.

If you disapprove of his or A16Z's attitude, just don't accept investment from him. Let the market tell them they're wrong.

anf 3 days ago 1 reply      
Why doesn't everyone just exercise as soon as they join a company? At least at earlier stage startups, it seems that the salary offered as compensation is at least an order of magnitude more than the cost of the options. Given this ratio, it seems like most employees should have enough in liquid savings after even a few years to avoid taxes on the appreciation between stock grant and vest times.

Of course, there's the possibility that a startup will tank, but even in that case, the loss from having bought the stock seems much smaller than the opportunity cost of not having worked at a sure-bet tech giant instead.

cpks 2 days ago 0 replies      
I felt dirty reading the a16z post. Really dirty. They tried to phrase screwing over the employees for the investors as somehow employee-friendly.
bigbossman 3 days ago 1 reply      
Have any companies implemented a sliding scale for the duration of the exercise period? 10 years makes sense for a super early stage startup, and 90 days is reasonable for public companies. I would think that some shorter windows can be implemented for companies at different growth stages -- perhaps by financing schedule, revenue size, expected time until exit, etc.
zekevermillion 3 days ago 0 replies      
Scott basically argues that there should be a 10-year cliff on vesting. That's what it means to price employees out of their equity comp if terminated early.
morgante 3 days ago 6 replies      
Scott's post genuinely makes me angry. It uses subtle language to imply that employees are inferior individuals who are lucky that the owners of capital deign to share anything with them.

In Scott's worldview, choosing to leave a company before it has exited is inherently disloyal. Even if they're paying you under market. Even if you could contribute more value elsewhere.

I wonder if he would accept similar terms:

1. Reduce his salary at a16z to something minimal. (<$100k)

2. He only gets his carry in a company if he invests in every subsequent round. If they ever decline to follow-on, it's clearly a sign of "disloyalty" and they should forfeit all equity.

I agree with Adam that it's at least nice to see the owners of capital so nakedly betraying their worldview (diversification is all well and good for them, but employees owe infinite loyalty).

I will think long and hard before ever working for a company where Scott is on the board.

This part is particularly troubling:

> One existing solution to the dead equity problem has been and still can be to make exceptions where appropriate for certain exiting employees.

It's essentially an argument for cronyism. The people who most need equity extensions are those least likely to have the connections and political savvy to get them. I strongly suspect such systems would work to further disadvantage already-disadvantaged groups.

ergothus 3 days ago 4 replies      
I followed this link expecting to see a comment about some sort of "encoding" of the human body relating to long-but-not-indefinite period of physical exercise. Instead it's about stock options.

As the article offered no background, I'm lost as to what is being discussed. In the last 20 years I've never had the same employer for 10 years, so can someone ELI5 what is being discussed? Thanks in advance!
