Hacker News with inline top comments - 21 Jan 2017 (Best)
What I Wish I'd Known About Equity Before Joining a Unicorn github.com
1366 points by yossorion  2 days ago   551 comments top 78
superqd 2 days ago 13 replies      
This has caused me some level of sadness in the past. I worked for a startup (started 6 months after founding, when it had only 20 people, and stayed for 8 years to 200+ people and $50 million in revenue). During a number of phases, I worked for months at a time giving up weekends, late nights, holidays and even vacation time to get product out the door and beat the competition. I racked up 50k options, mostly all at less than a dollar a share strike price. What was painful to me was that when I left after 8 years (I grew weary of it all, especially management) I had 90 days to pay $34k to keep my options or lose them. That was painful because I didn't have an extra $34k I could just throw away (I had no idea when they'd sell), but I hated the fact that none of the thousands of extra hours I worked (I kept track) counted for anything. One could argue I was paid a decent salary. Only on paper though, since my effective hourly rate was 3/4 of what I'd have made at a non-startup working a regular 40-50 hour work week (i.e., I had to work more hours to make the same pay I could have gotten for fewer hours at a non-startup).

After I left, 18 months later the company sold, and my options would have been worth 10x the $34k strike cost; i.e., I would have made about $300k if I could have seen the future. I just find it painful that at that moment, all your time is effectively worthless; only the $34k would have counted for anything, even though I gave far more than that in extra hours.

Needless to say, I am somewhat hesitant to put in too many extra hours anymore and almost never work weekends or holidays.
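
The back-of-envelope math in this comment, as a quick sketch (all figures come from the comment above; the per-share strike is inferred from the $34k total, not stated):

```python
# Rough sketch of the scenario above. Numbers are from the comment;
# the per-share strike is inferred, not stated.
shares = 50_000
exercise_cost = 34_000                     # the 90-day bill to keep the options
strike = exercise_cost / shares            # implies roughly a $0.68/share strike
sale_multiple = 10                         # "10x the $34k of the strike price"
proceeds = exercise_cost * sale_multiple   # 340,000 at the sale
pretax_gain = proceeds - exercise_cost     # 306,000, i.e. roughly the "$300k"
print(pretax_gain)
```

Which is the cruel part: the whole payoff hinged on having $34k of spare cash inside a 90-day window, not on any of the extra hours worked.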

jaymzcampbell 2 days ago 16 replies      
As always, the main rule you need to live by is: value the equity at zero, and you'll (maybe) be happy. Short of being a founder (and thus not really being offered equity), I have never treated these things as anything beyond a minor on-paper "bonus". Given that you'd be lucky to get anything more than 1% even as a first employee, I find them next to worthless as early-stage motivators. Which is how everyone seems to play it - "we're all in this together" - mmm. As long as the salary is market rate, I ignore equity altogether.
adhambadr 2 days ago 8 replies      
It has always baffled me how asymmetrically founders treat employees and investors. I've been involved in rounds closely enough to see how just the "hint" of a potential investment means all the numbers, financials, and cap tables get sent in one big email to their analyst, while some early employees (who arguably have worked just as hard as the founders) have no clue who owns what or what's going on. I get it, without money we can't build anything, but without good employees everything else is multiplied by 0. The math in the article is unique to the U.S., but I think the "essence" behind it is quite universal.

What stops founders from offering a company-wide "vested shares vs. cash" option with an equal cap for everyone on each new round? E.g., if founders plan to sell 10% of their own shares while raising a round (the so-called "taking money off the table"), all employees get the right to exercise the same option; instead of dilution at the new value, it's straight selling of the value they created. What are the arguments against this? For the investors it's the same, and if the cap on how much of your vested percentage you get to sell is kept realistically low, it should not risk decreasing the value of the private stock.

While I agree with jaymzcampbell that as an employee you're better off dropping the "hope" that the paper value of what you own means anything, that contradicts the fact that equity is a popular employee-incentive tool, quite essential in acquiring and keeping good talent.

rrhyne 2 days ago 7 replies      
Really surprised how few people know about this legislation to fix the tax laws that cause one of the biggest issues with options.


It made it through the House and was approved by the Senate Finance Committee, but is now stuck in a bill about retirement savings legislation.

Even finding information about the bill on the web or Twitter is incredibly difficult. Please tweet, blog, etc., and call your senators to support it!

johngalt 2 days ago 2 replies      
The equity payday funnel looks bleak.

1. Will this company succeed?

2. Once it succeeds will my equity be valuable?

3. If my equity could be valuable, will it be diluted before I can get paid?

4. If not diluted will it ever be liquid?

5. If there is liquidity will I be able to participate? Or only founders/investors.

6. If employees are able to extract real dollars, will I be forced out, laid off, or constructively dismissed in advance to reduce what I could take home?

All I see is a succession of methods to keep me on a treadmill chasing a carrot. Until the startup is large enough to take away the carrot.

This perception is hurting startups as a whole, because you will not be able to convince early-stage talent to work for equity. It is not enough to tell engineers "well, you should learn more about equity so you don't get ripped off so easily."

taternuts 2 days ago 2 replies      
> The working conditions at Silicon Valley companies are often the best in the world

I'd take regular, sane hours and the ability to have a life over worthless perks like ping pong/foosball tables and customized snacks. I can bring my own snacks and buy my own lunches as long as I have a decent salary; that really doesn't bother me. The only real perks at a startup are more control over what you are building as a team, the challenges you get (or have, depending on your outlook) to face, and the ownership you feel in the immediate product and its future development. You sacrifice everything else for that.

luckystartup 2 days ago 6 replies      
This is all true. I moved to San Francisco to join a startup as an early employee. The biggest surprise was when I had to empty my savings (and borrow a lot of money) to exercise my stock options. I filed an 83(b) election so that I didn't have to pay any taxes immediately, but $20,000 was (and still is) a huge amount of money.

I had no idea it was so expensive to join a startup. At least, if you want to avoid golden handcuffs for the next 10 years. I'm extremely glad that I made the decision to exercise my options. I left after 2 years because I couldn't stand working there anymore, and I had vested most of the shares that I had exercised.

If golden handcuffs had forced me to stay, I think I might have had a mental breakdown, and I don't think my marriage would have survived.

My former startup is now a very successful unicorn, and I'm starting to hear talk of an IPO in the next few years. I think my shares could be worth somewhere around $5 - $10 million. This is absolutely life-changing money for me, seeing as I could happily retire with $500k.

Sometimes I can make it a whole day without thinking about it, but it feels like I'm just burning time until I can finally cash in these shares and never worry about money again.

Can anyone relate to this?

fermigier 2 days ago 6 replies      
Should have been titled "... in the USA" as tax rules are very different in other countries.

For instance, in France, you only owe money to the tax authorities when you sell your shares for a profit. If you sell at a loss, it is tax-deductible.

tibbon 2 days ago 2 replies      
I've worked at several small startups, in the range of seed to C rounds. Except for the one where I was a co-founder, I never knew when or how to ask about or negotiate options. It always felt like something that was supposed to happen at 'other companies' and not the one I was applying/negotiating to work at.

I know I should in theory ask to see the cap table, but it seems awkward, and if shown it right then, I'm not sure I'd entirely know how to read it properly (along with the terms), or how to immediately negotiate from it.

Stock options have been frequently presented to me as just a standard piece of paper offered, and not a thing for negotiation. It feels easier and less scary to haggle on salary (which I do quite well at generally).

Is it even reasonable to ask for twice as many options when I'm negotiating? Or is that like asking for double the salary and not reasonable?

Is it reasonable to ask for a bonus (at an A-round startup) in terms of options after being there for a year?

hedora 2 days ago 2 replies      
I'd add a few things:

- You probably won't have a 10 year horizon if you are joining a company that is now a unicorn. You likely will if you found a company that later becomes one.

- Sarbanes-Oxley is a big villain here. It pushes the cost of legal compliance through the roof for public companies, forcing companies to delay IPOs until revenue is higher. In addition to delaying liquidity events, it prevents small traders from owning stock in companies that are ramping from ~$100M valuation to ~$1B, which is why there are so many unicorns now. This hurts normal investors big time, and helps people with access to private markets. (source: friendly neighborhood VC)

- The article doesn't go into AMT, where the IRS forces you to knowingly overpay tax when you exercise ISOs, then (slowly) pays it back over the years.

calcsam 2 days ago 1 reply      
The most poignant line is near the end: "It's really tough to ask these [questions] without sounding obsessed with money, which feels unseemly, but you have to do it anyway."

Basic due diligence on a startup offer is asking for the number of shares outstanding, the last company valuation, and the strike price. Advanced due diligence is talking about things like extended exercise windows, secondary sales, and liquidation preferences.

Unfortunately, basic due diligence is rare enough that if you do ask a potential employer the latter kind of question, there is a risk of coming off as overly mercenary.

The way of talking to potential employers that I've seen work is to ask questions in increasing complexity, sharing your conclusions along the way, and signalling why you're asking these questions.

After you ask the basic due diligence questions, you can share the math you're doing on stock value under various exit scenarios (a good base assumption is an exit at the current valuation).

That typically lays good groundwork for having "advanced" due diligence conversations about an extended exercise window and shows you're serious. In contrast, if the company isn't willing to share valuation or total share numbers, this is a huge red flag as it prevents you from doing the basic math.

This is a tool I built giving engineers the questions they need to ask, in order to do that basic math on what their stock is worth: http://www.optionvalue.io/
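
The "basic math" described above can be sketched like this (every number below is hypothetical; a real estimate also has to account for dilution, liquidation preferences, and taxes):

```python
def option_value_estimate(num_options, shares_outstanding, exit_valuation, strike):
    """Naive pre-tax estimate: your ownership fraction of the exit value,
    minus what it costs to exercise. Ignores dilution, preferences, taxes."""
    ownership = num_options / shares_outstanding
    gross = ownership * exit_valuation
    exercise_cost = num_options * strike
    return gross - exercise_cost

# Hypothetical offer: 10k options, 10M shares outstanding,
# exit at the current $100M valuation, $1.00 strike.
print(option_value_estimate(10_000, 10_000_000, 100_000_000, 1.00))
```

If the company won't share the valuation or total share count, the inputs to this function are unknowable, which is exactly the red flag mentioned above.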

mavelikara 2 days ago 5 replies      
> The correct amount to value your options at is $0.

Agreed, but ...

Try to negotiate a deal such that the employer gives you a one-time sign-on bonus which, after taxes, will pay for the early exercise of the offered equity, and get the employer to give you the paperwork for filing an 83(b) election.

This values the equity at $0, but prevents drastic financial implications (at least for the initial grant) should it actually become worth any real money.

What does HN think about such a scheme?
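
A sketch of the gross-up this scheme requires (the tax rate below is a made-up placeholder; real marginal rates depend entirely on your situation):

```python
def signon_bonus_needed(exercise_cost, marginal_tax_rate):
    """The bonus B such that B * (1 - t) is left over, after taxes,
    to cover early-exercising the full grant."""
    return exercise_cost / (1.0 - marginal_tax_rate)

# Hypothetical: $5,000 early-exercise cost, 40% combined marginal rate.
print(round(signon_bonus_needed(5_000, 0.40), 2))  # 8333.33
```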

AnonMcThrowAway 2 days ago 1 reply      
Early employees usually get hosed, and I wouldn't sign up again unless comp was at least equivalent. The best large companies are much better run than they were 20 years ago, generally pay much better, offer a fair chance of stock appreciation (and it's liquid!) while offering more opportunities for professional growth. I speak from a lot of experience.

True story: I was the first employee at a startup that raised >$15M from top investors and sold to a big SV company for several multiples of the total investment. I left before the company sold, but had low-single-digit ownership. Terms of the sale: investors were made whole, founders made 'house-changing' money (low millions each) + really nice salaries. Common stock was zeroed.

Granted, the founders probably had to work hard to sell the company, but as an early employee, I took quite a few risks as well and the reward was definitely asymmetric.
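
A minimal sketch of why common stock can end up with little or nothing, assuming a simple 1x non-participating preference (the deal described above likely had extra terms, e.g. carve-outs or seniority stacks, that this does not model):

```python
def common_payout(sale_price, preference, my_common_pct):
    """Investors with a 1x non-participating preference are repaid first;
    common splits only what's left. Participation, seniority stacks, and
    founder carve-outs (not modeled here) can squeeze common further."""
    remaining = max(0.0, sale_price - preference)
    return remaining * my_common_pct

# Hypothetical: $15M of preference, a 2% common stake.
for sale in (10e6, 15e6, 45e6):
    print(int(sale), common_payout(sale, 15e6, 0.02))
```

At or below the preference, common gets zero regardless of how much of the company you "own" on paper.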

TheLarch 2 days ago 2 replies      
I was so naive when I joined my first startup. When we were purchased, it came to light that the main guy never got around to signing my stock option agreement. He is a fucking mensch and signed it after the fact.

Character buys a unique, abiding respect.

equalarrow 2 days ago 6 replies      
Great post and I totally agree. I recently talked to my financial advisor about my current company and we went through all the numbers for various pricing scenarios (of a public offering) over the next 4-6 years, at various valuations. From his point of view, he encourages me to stay the course - quite the opposite of most of the tech friends I know (most usually don't stick around after a few years).

On a side note, I haven't used it in a long time, but why all the hate on Jira? I mean, I remember it does everything including making my breakfast for me, but is it really that bad? I don't remember it being that bad, but maybe others would like to chime in on why they like/dislike it?

Osiris 2 days ago 0 replies      
I left a startup after 2 years. I was offered a contract job that had a gross salary that was twice my startup salary. Even after taking into account the cost of benefits (health, vacation, 401k, etc), I would still make about 60% more net.

I did some math, and even if the company sold for a decent amount in the future, my shares wouldn't have been worth the amount of pay I would have lost between then and the liquidity event. So, I started up my own LLC and started doing consulting/contracting, and I've never felt happier.

I feel like I'm in control now and I don't have to beg someone for a raise or worry about why I'm getting screwed in the next round of funding.

k2xl 2 days ago 2 replies      
It's really unfortunate that most startups appear to be set up with ISO shares. The company I am at now is an LLC and distributes RSUs, which meant when I joined I was able to file an 83(b) election, which minimizes my tax impact.

At my last company, I exercised options. I owe the IRS tens of thousands of dollars due to AMT this year (not that it was unexpected, as I did heavy research beforehand).

Can anyone shed light on why companies aren't set up to distribute RSUs (and allow employees to file an 83(b) election within 30 days of the grant)? Is it not preferable to investors for some reason?

The worst part of the AMT and exercising ISO shares at a startup is that it is nearly impossible to make an informed decision on whether or not to exercise (and how many shares to exercise). You can't possibly know your tax liability until next tax season when all your tax forms come in.

Last year, I called maybe 5 different tax accountants for advice on how to estimate what my tax impact would be for exercising shares and got 5 different answers. This stuff is COMPLICATED.

Finally just got TurboTax and plugged in some guesses of my deductions, etc and got some type of estimate. Filed some 1040ES's last year to minimize the penalty and hopefully will get close.

Another sad fact is how few people at startups are even educated on the subject. While one can argue it is up to each employee to do their own research, I think it is in startups' ethical interest to have their CFO team give an overview of the stock plan and what kinds of things employees may want to ask their accountants about.
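
The AMT exposure described in this comment comes from the "bargain element" at exercise. A sketch, not tax advice (the AMT actually owed depends on exemptions, your other income, and that year's rules, which is why five accountants gave five answers):

```python
def amt_bargain_element(shares_exercised, strike, fmv_at_exercise):
    """The spread added to AMT income when you exercise ISOs and hold:
    (409A fair market value at exercise - strike) x shares. You can owe
    tax on this even though you received no cash."""
    return shares_exercised * (fmv_at_exercise - strike)

# Hypothetical: 20,000 ISOs at a $0.50 strike, current 409A FMV of $4.00.
print(amt_bargain_element(20_000, 0.50, 4.00))  # 70000.0 of phantom income
```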

jarjoura 2 days ago 0 replies      
I see the current cold climate as the direct result of companies now taking a long time to sell or go public. The frameworks set up for early employees to make good on their loyalty and hard work were designed during a time when companies would only stay in startup mode for 4 to 5 years max.

Now that companies are planning on 10+ years to IPO or sellout, they aren't changing their employee incentives to match expectations. No one wants to work for free, or cheap for an entire decade at one company.

I actually applaud Snap Inc. for pushing ahead with an IPO early in its lifetime. It's going to make millionaires out of all its early employees and start a second wave of startups in LA that San Francisco hasn't had since Facebook and Twitter.

People in SF are still waiting on Airbnb, Uber, Lyft, Stripe, GitHub, etc.; these companies are already turning over employees who should have been minted as local millionaires ready to start the next wave.

rosser 2 days ago 1 reply      
Apparently, "forward exercise" (early exercising paired with an 83(b) election) isn't something The Fine Article's author has ever heard of. Nearly every one of the tax consequences this article bemoans could have been avoided with just that one move.

Yes, it means you need to have the cash on hand, but honestly, taking out a bank loan to forward exercise your option grant would probably be orders of magnitude cheaper than wrestling with the 409A valuation or AMT or any of that garbage, especially for very-early-stage employees.

And it starts the long term capital gains counter earlier, too.

chris_7 2 days ago 5 replies      
> fixed PTO

Why on earth is this a downside? "Unlimited vacation" is a scam.

mikeflynn 2 days ago 0 replies      
"The correct amount to value your options at is $0. Think of them more as a lottery ticket. If they pay off, great, but your employment deal should be good enough that you'd still join even if they weren't in your contract."

Totally agree with this and I think it's the core of the whole piece. I tell everyone who asks about options the same thing before getting in to the details.

temp246810 2 days ago 1 reply      
When I joined Zenefits they offered me 500 shares before raising the $500MM round.

Then after the $500MM round they became 5000 shares.

When I was negotiating my offer, I didn't budge on getting a market rate salary and in the end, I got it.

Why? Because they did some hand wavy arithmetic and told me my 5000 shares would be worth 500K at some point. Yeah no thanks. As employee #500, I knew better.

What happened? I hated it there. Left after a year. Company lost half of its valuation.

Moral of the story: don't compromise your salary for equity, even for a unicorn. The only exceptions are if you are truly an early employee. If you're not sure whether or not that's you, then it's not you.

Even then you aren't safe. I saw them fire engineers for no reason other than that they had too much equity.

Careers are messy. Even the people who WERE one of the early employees and got a shitload of equity eventually got their salaries adjusted.

Don't compromise on your salary.

pascalxus 22 hours ago 0 replies      
There's a simple solution to this: just don't take any salary below the highest market rate you can get. The second you start taking equity as compensation, you're putting yourself at extreme risk, needlessly. If you take a $10K salary cut for options, it's the equivalent of taking the higher salary and investing $10K in that single company. No self-respecting financial advisor would ever tell you to take $10K or more per year and invest it in one single company, especially if it's the very same company you're actually working for. That's just compounding high risk on top of already-high risk.

Even if you wanted to do this, you don't need to be an employee of the company to do it. Just join some venture seed fund where you can invest your $10-30K per year. Your chance of success is approximately the same, but at least you won't lose your job when it doesn't work out.

And if you're just out of school or have no other options, then by all means go work for the startup. In that situation you're not giving up a higher salary to do so.

And always make sure you have work/life balance - the company won't do that for you, you have to make it happen.

xutopia 2 days ago 1 reply      
I've been sucked into paying money to exercise my stock options after I left a company. On paper I could pay off my house today... except I'll most likely never see that money.

Nowadays I ignore equity, look at the bona fide package they offer and how enticing their challenge is, and decide based on that. Equity promises just don't weigh in the balance anymore.

beat 2 days ago 0 replies      
Startups are a great way to get rich - if you're a founder.
tensafefrogs 2 days ago 0 replies      
"If the company sells for a more modest $250M, between taxes and the dilution that inevitably will have occurred, your 1% won't net you as much as you'd intuitively think. It will probably be on the same order as what you might have made from RSUs at a large public company, but with far far more risk involved. Don't take my word for it though; it's pretty simple math to run the numbers for a spread of sale prices and dilution factors for yourself before joining, so do so."

This is key when you are thinking about joining a startup. If you can land a job at $BigTechCompany that pays a bonus and RSUs that refresh every year, it's likely a much safer bet and will have much lower risk.
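
The quoted advice ("run the numbers for a spread of sale prices and dilution factors for yourself") looks roughly like this as a sketch (every number here is hypothetical):

```python
# Pre-tax proceeds for an initial 1% stake across a spread of sale
# prices and dilution outcomes. Taxes and preferences would cut further.
initial_pct = 0.01
for sale_price in (100e6, 250e6, 500e6, 1e9):
    for retained in (0.5, 0.65, 0.8):  # fraction of your stake surviving dilution
        net = initial_pct * retained * sale_price
        print(f"${sale_price/1e6:,.0f}M sale, {retained:.0%} retained -> ${net:,.0f}")
```

Running this makes the point concrete: a $250M sale with heavy dilution lands in the same ballpark as a few years of big-company RSU refreshes, with far more variance.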

brilliantcode 2 days ago 0 replies      
So what happened in the last 10 years? Low-interest-rate venture capital (basically rich peeps trying to get richer using even richer peeps' money) has caused a market discrepancy which is now showing signs of major correction.

Now the VC-funded folks who were told they could be the next Zuckerberg have finally figured it out: your life is being commoditized into hedged call options for the rich, with a high probability of recouping their speculative bets. Your time is always going to be cheaper than a VC's, and they've figured out a way to make it even cheaper at your expense and for their own gain.

People are figuring out their stock options aren't actually worth much and that they've just spent a huge chunk of their life helping the rich get richer, with the elusive dream of becoming the next Larry Page or Zuckerberg.

It's almost identical to the ebb and flow of workers in startups. Following this logic, we can clearly see these zombie unicorns are not going to be able to monetize like they've been able to raise money. We will see the rise of bootstrapped companies fighting each other to gobble up the vacant market share.

The worst that can come out of this is loss of economic productivity (VC investments have yielded economic returns for the 0.1% at the expense of the rest).

And of course lot of Venture Capital partners finding out their portfolio of 10 variants of Uber or Tinder is going to need more money to keep their share valuation high as capital is drying up due to global uncertainty.

They might not be able to buy a Lamborghini SV Roadster and a penthouse in downtown Vancouver. The world's smallest violin for the rich is always expensive and comes at great cost to society. Kevin O'Leary worshippers call wining and dining "free market forces" - a free market that serves less than 1% of the population, with none of the benefits trickling down to the rest.

I had a blast not reading the article.

abalone 1 day ago 0 replies      
> The correct amount to value your options at is $0. Think of them more as a lottery ticket.

This is trivially false. Lottery tickets are not worth $0. Take that into account when evaluating this commentary.

Basically, 90-day exercise windows are evil and the source of great pain. However, my view is that the author is folding in a bunch of anxiety around the general risk of startups. They seem to imply that a primary reason for delaying an IPO is employee retention. This is a poorly supported theory. Golden handcuffs are not a great retention tool, as a poorly motivated employee is minimally productive. Rather, the primary reasons to stay private are the higher valuations afforded by the private market and less regulatory oversight.

Look, if you can get an awesome RSU offer from a super solid public company then go for it. But don't buy into the implication here that startups barely offer better deals. Do look for an extended exercise window since companies are staying private longer these days. But generally speaking you're going to get a significantly bigger chunk of options in a startup than a mature company, with commensurate risk. Do it if you believe in the startup and handle the uncertainty. It is not the same as a lottery ticket; you can improve the odds with your own labor.

dandare 2 days ago 2 replies      
> How many outstanding shares are there? (This will allow you to calculate your ownership in the company.)

Is it not true that a company can (and usually will) issue new shares and dilute your stake at every investment round? (I am just a layman like you.)
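
Yes, and the mechanics are simple: your share count stays fixed while the denominator grows. A sketch with hypothetical numbers:

```python
def ownership_after_round(my_shares, outstanding_before, new_shares_issued):
    """Dilution: you keep the same number of shares, but each round's
    newly issued shares grow the total outstanding."""
    return my_shares / (outstanding_before + new_shares_issued)

# Hypothetical: 1% of 10M shares; the next round issues 2.5M new shares.
print(ownership_after_round(100_000, 10_000_000, 2_500_000))  # 0.008, i.e. 0.8%
```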

andreasklinger 2 days ago 0 replies      
Imo it's the unspoken truth in our industry that nowadays the real opportunity costs aren't borne by the founders or the investors but by (non-junior/experienced) employees.

It comes down to a very strong but important question: why should anyone work for your company?

shams93 2 days ago 0 replies      
Yeah, I had 1,000 shares of Yahoo in '99. Then came the IPOs, but the price crashed so rapidly that I lost my shirt before I was able to exercise, and the shares never recovered. It took me 8 years of hard work to pay off the huge tax bill.
deedubaya 2 days ago 0 replies      
I've recently left a company and am within my 90 day window to exercise my options.

It feels much more like playing roulette than making an investment. I have no idea if there will be a liquidation event at all, nor do I know how much that'll end up being if I can hold on for that long. Oh, and I'll be paying taxes on those shares all along the way (assuming the value goes up, which is another uncertainty).

The odds of coming out on top are not in my favor -- and I've chosen not to exercise my options. I came to this decision based on:

a) plainly looking at the odds -- the company isn't going to be a unicorn no matter what bullshit the founder and investors are spouting

b) given a non-unicorn style exit, the cash these stocks would earn me probably wouldn't be significant anyway.

We live in an age where not only are investors letting themselves be taken for a ride, but the employees are as well. I'm now concerned with salary exclusively in my negotiations -- I can take that extra $XXk per year and put it into the stock market with more reliable results.

SCAQTony 2 days ago 0 replies      
I had 26,000 shares in a company, TouchCommerce, that got bought out by Nuance for about $250 million. I netted about $9,000.

What did I learn? Starting your own company is best. Second best is to license a trademark or a patent to the startup to ensure that you are square in the center of the equity pie. YMMV

kuchi 2 days ago 0 replies      
About trying to find an answer to "Does the company's leadership want it to be sold or go public? And [...] time horizon": I think it's impossible to find the right answer. I worked at a startup where every year the founders would tell us that next year we'd go to IPO. Then that year would come, and we wouldn't: one year it was another company's big IPO that had sucked the market dry, one year it was that valuations were low, one year it was that the cost of an IPO was high, and on and on. I don't think the founders were intentionally lying, but this question, asked in advance, was not the right question, because no one knew the answer. It basically only begs for an answer to please the audience. So instead I think you have to look at the market and the stats of that year as a whole to guesstimate where the company may be headed.
smrtinsert 2 days ago 0 replies      
Let me save you some reading: bargain down the options and go for the increased salary instead. I suppose that's not true if you're planning on staying long enough to exercise them, but I've been very happy doing that at the last few places I've worked.
tobltobs 2 days ago 1 reply      
Imho, without asking a lawyer who is an expert in this field, you will never be able to understand the value of the offered deal.
lukejduncan 2 days ago 2 replies      
There's a lot of advice to value your options at $0. I'm curious how people do the math when considering moving from a big company with RSUs that are liquid at vest to a startup (doesn't have to be a unicorn). Big-company RSUs can be a big part of your annual total compensation. When thinking about a "fair market salary", do folks consider that to be their base + risk-adjusted RSUs? Seems like the best advice I've seen here that might be applicable would be to negotiate down equity in favor of base comp, especially if you consider that equity will likely be granted as bonuses during your tenure.
tokentoken 2 days ago 0 replies      
Interestingly I've had the opposite experience. I work in the crypto space. My monthly comp is a combination of bitcoin, and some units of the crypto token that we invented that will power the app we are developing. When I first signed on, the token was not yet released, but the plan was for it to be minted and released on crypto exchanges way before our app is actually complete. This allows people to speculate on the future success of our app. Once the coin is out there, we have no control of it, it becomes an independent asset that anyone can trade without our approval or knowledge. This makes the coin perfectly liquid with an actual value.

Since the token wasn't out yet when I first signed on, we had to negotiate a value for it. The value we agreed on ended up being much lower than the actual value when it was finally released. This created a strange situation. My monthly salary, which was at one point a combination of money (bitcoin) and some pie-in-the-sky uncertain token, simply became money + money since it was all liquid. I was therefore getting paid much more than expected, and more than another engineer of similar skill would require. This creates pressure on the founders to consider letting me go - even though I was a critical component of getting it to where it was. The psychology when the equity is not liquid seems very different. Even if a company's valuation starts to become much higher than expected, the fact that there still has to be an unlikely, far-in-the-future liquidity event for any of it to be worth anything significantly changes the dynamics. But when your engineer is simply getting paid quadruple market value in real, liquid money, thoughts start to materialize that they can simply exchange me for 4 other engineers.

altern8tif 2 days ago 3 replies      
What's the better option to motivate employees then? Profit-sharing rather than equity?
JumpCrisscross 2 days ago 0 replies      
> depending on the company you join, they may have restricted your ability to trade private shares without special approval from the board

If your shares have this restriction they are practically worthless. Also be wary of sneaky language on page 40 of a random document signing away your ability to sell -- I've seen this from otherwise-reputable Silicon Valley names.

pbkhrv 2 days ago 0 replies      
If you are thinking about starting a company AND doing right by your employees, consider using alternative ownership structures like ESOP (https://www.nceo.org/articles/esop-employee-stock-ownership-...).
corford 2 days ago 0 replies      
I'm not familiar with the US tax system, but does the concept of "growth shares" exist over there? They're a fairly standard thing in the UK and negate most of the income tax and social security issues mentioned in the post. You're just left with CGT to pay on an eventual sale (and there are ways to reduce even that). Also, because they're shares there's no 90-day problem on leaving; you're awarded them on a simple vesting schedule and that's that.

Bonus: the IRS recognises them so American employees of UK companies can enjoy the tax benefits too.

https://www.twobirds.com/~/media/pdfs/expertise/employment/e... is a nice primer if anyone is interested.

bogomipz 2 days ago 0 replies      
>"Private markets do exist that trade private stock and even help with the associated tax liabilities. However, it's important to consider that this sort of assistance will come at a very high cost, and you'll almost certainly lose a big chunk of your upside. Also, depending on the company you join, they may have restricted your ability to trade private shares without special approval from the board."

It is true that you might lose a big chunk of your upside by going to a secondary market but if the alternative is to leave the ISOs on the table it might not matter.

Also firms that provide a secondary market will give you a loan to purchase those shares if they line up a buyer for you so you don't have to come up with this money on the spot yourself. SharesPost does this.

milfandcookies 2 days ago 2 replies      
"Your options have a strike price and private companies generally have a 409A valuation to determine their fair market value. You owe tax on the difference between those two numbers multiplied by the number of options exercised, even if the illiquidity of the shares means that you never made a cent, and have no conceivable way of doing so for the foreseeable future."

This is either incorrect or I'm misunderstanding it. The purpose of the 409a valuation is to set the strike price of the options. The strike price is the fair market value of the common stock.

Also, to parrot everything everyone else is saying, equity should be valued at zero. Out of the 200 or so 409a valuations I've performed, there might be 10 companies where I would consider the equity to be valuable in the long term.
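The 409A-vs-strike mechanics being debated in this subthread reduce to simple arithmetic. A minimal sketch, with every number invented for illustration (not tax advice):

```python
# Hypothetical grant: 50,000 options at a $0.68 strike, with a
# current 409A fair market value of $2.50 per share.
options = 50_000
strike = 0.68          # price paid per share to exercise
fmv_409a = 2.50        # fair market value per the latest 409A valuation

exercise_cost = options * strike          # cash out of pocket
spread = options * (fmv_409a - strike)    # "paper gain" at exercise

# For NSOs the spread is taxed as ordinary income at exercise;
# for ISOs it instead counts as income for AMT purposes.
assumed_tax_rate = 0.35                   # illustrative marginal rate
tax_due_at_exercise = spread * assumed_tax_rate

print(f"cost to exercise: ${exercise_cost:,.0f}")   # $34,000
print(f"taxable spread:   ${spread:,.0f}")          # $91,000
print(f"tax due now:      ${tax_due_at_exercise:,.0f}")
```

Note that both the exercise cost and the tax are due in cash even though the shares may be unsellable, which is the whole complaint in the top comment.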

rcheu 2 days ago 0 replies      
I've really appreciated how Quora handled stock options, especially in contrast to all these horror stories. Quora uses 10-year exercise periods[1], and provided me with a spreadsheet regarding what the outcome for me would be given some valuation and dilution (with some example outcomes from other companies of similar size). The last round of funding allowed employees to liquidate some of their options/stock as well.

[1] https://dangelo.quora.com/10-Year-Exercise-Periods-Make-Sens...

katzgrau 1 day ago 0 replies      
"The 'you' of today needs to protect the 'you' of tomorrow."

Amen. Don't ever be afraid of looking out for your own financial interests. If you aren't, you are at a disadvantage. Anyone who makes you feel "unseemly" for minding your own benefit (which tends to be a lot of people) is either naive, a stooge, or someone else looking out for their own best interests.

annetee 1 day ago 2 replies      
I have a question: I know for sure that my company is going to IPO this year and I plan to stick around to see it happen. By the time it happens I will have about 30% of my options vested which I could choose to exercise.

Is there any reason why it might be advantageous to exercise early? My current plan is just to see how the IPO goes and then consider exercising at that point - it means a lower risk for me because I'll know exactly what they're worth and if they're even worth buying.
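One way to see the trade-off behind this question is to compare the taxable spread at each point in time. All prices below are invented, and real outcomes depend on ISO/NSO status, AMT, and holding periods (not tax advice):

```python
# Exercising early taxes a small spread now; exercising after the
# IPO taxes a much larger spread. Hypothetical numbers throughout.
shares = 10_000
strike = 1.00

fmv_now = 2.00          # assumed current 409A value per share
ipo_price = 20.00       # assumed post-IPO market price

spread_if_exercised_now = shares * (fmv_now - strike)      # $10,000
spread_if_exercised_later = shares * (ipo_price - strike)  # $190,000

# The flip side: exercising now puts real cash at risk
# if the IPO never happens or the stock tanks.
cash_at_risk_now = shares * strike                         # $10,000

print(spread_if_exercised_now, spread_if_exercised_later, cash_at_risk_now)
```

So early exercise can shrink the taxable spread (and start the capital-gains clock sooner), at the cost of risking the exercise cash; waiting, as the commenter plans, trades a larger eventual tax bill for certainty.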

perneto 2 days ago 0 replies      
On the same topic, https://www.scribd.com/doc/55945011/An-Introduction-to-Stock... has detailed advice on what to do as a startup employee in the section about options. (tl;dr: early exercise, 83(b) election, ask for nonstatutory stock options instead of incentive stock options).
KirinDave 1 day ago 0 replies      
"A modest $250M".

Uhh, "modest" is surely $20-50M. $250M is quite a bit even for a fast-growing company.

serge2k 2 days ago 0 replies      
> Worse yet, by exercising options you owe tax immediately on money that you never made. Your options have a strike price and private companies generally have a 409A valuation to determine their fair market value. You owe tax on the difference between those two numbers multiplied by the number of options exercised, even if the illiquidity of the shares means that you never made a cent, and have no conceivable way of doing so for the foreseeable future.

Is this a rule that should be changed? Why can't these just be capital gains taxes owed when you sell?

rcurry 2 days ago 3 replies      
"Worse yet, by exercising options you owe tax immediately on money that you never made."

For NQSOs this is true; for ISOs it is false. The exercise of an ISO grant is not treated as ordinary income (though the spread does count as income for AMT purposes).

moflome 1 day ago 0 replies      
Confirmation, I think, of many of these observations from the VC/HR perspective: https://medium.com/positiveslope/dont-get-trampled-the-puzzl...
FruityCode 15 hours ago 0 replies      
Wow, I've started working in a startup and this is really helpful to understand some processes and to start speaking the same language they speak. Thank you!
jMyles 2 days ago 2 replies      
> Your options have a strike price and private companies generally have a 409A valuation to determine their fair market value.

Is this exactly accurate? My understanding was that you owe gains tax on the difference between the 409A and (strike price + wages traded for options).

In other words, if you take a $1000 / month cut for one year in exchange for options, you get to add $12,000 to your cost basis for the purpose of calculating gains taxes.

Is this incorrect?

ezconnect 2 days ago 0 replies      
Acronyms should be defined on first use. It's hard to read write-ups full of acronyms that are never defined.
bcherny 2 days ago 1 reply      
> Founders (and favored lieutenants) can arrange to take money off the table while raising rounds and thus become independently wealthy

How does this work?

akras14 2 days ago 0 replies      
I remember interviewing at one of the biggest unicorns, and a recruiter told me, "we pay lower salaries, but we make up for that with generous stock options." To me it sounded like, "we don't want to pay you much, so here is some inflated Monopoly money to keep your dumb ass happy."
vadym909 1 day ago 0 replies      
Why don't companies fight the IRS to allow the tax from exercising stock options of non-public companies to be deferred until the shares are traded? After all, if there is no real gain, why be taxed on a theoretical gain?

Time for a Bay Area tea party?

ellisv 2 days ago 1 reply      
I'm a bit surprised cashless exercising wasn't mentioned (although perhaps the author isn't aware of the option).

Here is a link to a HN discussion on equity compensation from about a year ago: https://news.ycombinator.com/item?id=10880726

bitwize 2 days ago 1 reply      
Accepting equity instead of cash is like asking for your paycheck to be denominated in Bison Dollars. If they want to add some options on top of my salary for the full amount I'm worth each year, that's one thing. But options in lieu of part or all of one's salary is tantamount to a cut in pay.
patmcguire 2 days ago 0 replies      
This is the natural consequence of founders keeping board control.

Remember all those evil VCs who ousted founders, meddled in companies, and endlessly pushed for bigger and bigger risks in the hope of a massive one in a million IPO?

Turns out some of that was good for employees. No one who makes decisions needs liquidity events like they used to.

Bahamut 2 days ago 1 reply      
I found it funny he slagged on using JIRA, as oftentimes I found startups using Rally, which is a lot more painful :( .
dmode 2 days ago 0 replies      
This is the reason I have declined couple of unicorn offers. I treat equity portion as 0. Especially, options. I will rather work for public companies, who can actually compensate using liquid RSUs that let me buy nice things.
jlj 2 days ago 2 replies      
When pre-IPO options are granted, is the company required to disclose the number of fully diluted shares outstanding? Without the denominator it is impossible to estimate the value. If they refuse to tell you or are secretive, does the employee have any recourse?
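The denominator point is easy to make concrete. A sketch with made-up numbers:

```python
# Ownership is meaningless without the denominator: the fully
# diluted share count (all shares, options, warrants, and the
# unallocated option pool). Numbers are invented for illustration.
your_options = 50_000
fully_diluted_shares = 25_000_000

ownership = your_options / fully_diluted_shares
print(f"{ownership:.4%}")                    # 0.2000%

# The same grant against a larger denominator is worth half as much:
print(f"{your_options / 50_000_000:.4%}")    # 0.1000%
```

This is why a raw share count in an offer letter tells you nothing by itself.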
tehabe 2 days ago 0 replies      
Maybe I'm naive, but when someone is paying me with equity that I can't really sell to anyone but him, I want him to buy it back for the money I would have gotten if it were actual pay.

Everything else is just a pay cut.

alexee 2 days ago 0 replies      
Is there anyone here who sold pre-IPO equity using companies like https://equityzen.com? Can you describe your experience?
conjectures 1 day ago 0 replies      
Could someone point me in the direction of a good basic introduction to the mechanics of options, rounds, dilution etc?
rampage101 2 days ago 0 replies      
Are there any stats about the average return on the options handed out? It seems most people have a few fail stories, or an unusual success story... not much of a sample size to go on.
andy_ppp 2 days ago 0 replies      
I really do not see the point of joining a startup; you can earn more money contracting and it's guaranteed money, not some imaginary future payoff.
Kiro 2 days ago 2 replies      
I thought I could exercise my options 5 years after sign date, regardless of liquidation. Have I misunderstood or do I have some special deal?
sybhn 2 days ago 0 replies      
"In the worst cases, you might even have to use JIRA."

that's funny... but seriously, what's the big deal about Jira?

princetontiger 1 day ago 0 replies      
Never work at a startup. A book could be written about the pervasive amount of bullshit and lying that exists in startups. Many of the people enfolded in these vehicles are no longer passionate about tech, but rather passionate about money.
kapauldo 2 days ago 0 replies      
Excellent writeup.
Ashish_J_S 1 day ago 0 replies      
Thank you!!
foo101 1 day ago 0 replies      
I am just an engineer. I don't understand a lot of the terms and concepts necessary to understand the linked article. I tried going through the Wikipedia articles for the terms I was interested in but I don't think I can make sense of it all without a kind teacher to help me out. So here I am turning to you, HN, to be my teacher. Here are the questions I have. If one of you could answer just one question from this list, it would help me a lot. I am sure it would help other people like me.

While answering, please quote my entire question with the Q<number> so that people don't have to scroll up and down to correlate the answers with the question.

Q1. Quote from article: "Your options have a strike price and private companies generally have a 409A valuation to determine their fair market value. You owe tax on the difference between those two numbers multiplied by the number of options exercised." My question: What is strike price? If I have accumulated say $30K worth of options, but I can afford only $10K, can I buy only $10K worth of options while leaving the startup?

Q2. Quote from article: "Due to tax law, there is a ten year limit on the exercise term of ISO options from the day they're granted. Even if the shares aren't liquid by then, you either lose them or exercise them, with exercising them coming with all the caveats around cost and taxation listed above." My question: Say I buy 30,000 ISO options for $30K from a startup when I leave it in 2017. Say that startup still remains private in 2027. What are my options? Am I looking at a total loss of $30K? If the startup hasn't gone IPO, how can I possibly exercise my 30,000 options in 2027? What does the article mean by "exercise them" in this case? Does "exercise" mean buying the 30,000 options for $30K, or does it mean selling the options for a possibly larger price after the startup goes IPO?

Q3. Quote from article: "Some companies now offer 10-year exercise window (after you quit) whereby your ISOs are automatically converted to NSOs after 90 days." My question: How is NSO different from ISO? When the article mentions that NSOs are "strictly better" does it mean that I don't have to pay a penny to buy the NSOs but they remain in my account for free?

Q4. Quote from article: "Employees want some kind of liquidation event so that they can extract some of the value they helped create" My question: What are the events that count as liquidation events?

Q5. Quote from article: "Even if you came into a company with good understanding of its cap table" My question: What is the cap table? Why do I need to know this number? Can you explain this with some examples?

Q6. Quote from article: "New shares can be issued at any time to dilute your position. In fact, it's common for dilution to occur during any round of fundraising." My question: How does additional funding dilute my position? If I bought 30000 ISO options at say $1 per option, and I can sell it one day for say $2 per option, I am still making money. Why does it matter if additional funding occurred between buying and selling?

Q7. Quote from article: "If the company sells for a more modest $250M, between taxes and the dilution that inevitably will have occurred, your 1% won't net you as much as you'd intuitively think. It will probably be on the same order as what you might have made from RSUs at a large public company, but with far far more risk involved." Can someone show some approximate calculation for this? This is what I see: 1% of $250M is $2.5M. Say I lose 30% in tax I am still left with 0.70 * $2.5M = $1.75M. Can one really earn $1.75M from RSUs? The RSUs I have got at large public companies are of the order of $10K to $50K only.

Q8: Quote from the article: "Tender Offers". Can someone elaborate this? Can a startup force me to return my options in exchange for tender offers? Or is it a choice I have to make, i.e. to keep the options or go with the tender offer?

Q9: Quote from the article: "How many outstanding shares are there? (This will allow you to calculate your ownership in the company.)" How? Can you provide an example to calculate my ownership? Can you also provide an example of what that ownership means for me, if the company is sold for say $200M? Can you also provide another example of what that ownership means for me, if the company goes public and the price of each stock option is $10 after it goes public?

Q10: Quote from article: "Have there been any secondary sales for shares by employees or founders? (Try to root out whether founders are taking money off the table when they raise money, and whether there has been a tender offer for employees.)" What does this mean? How does it affect me?
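Several of these questions (Q6, Q7, and Q9 in particular) come down to the same arithmetic. A rough sketch with invented numbers, ignoring liquidation preferences and other real-world complications:

```python
# Hypothetical grant and cap table, for illustration only.
shares = 30_000
strike = 1.00
fully_diluted = 10_000_000           # share count before a new round

# Q6/Q9: a new round issues 2,500,000 new shares. Your share count
# is unchanged, but your percentage of the company shrinks:
before = shares / fully_diluted                   # 0.30%
after = shares / (fully_diluted + 2_500_000)      # 0.24%

# If the price per share still rises, you profit per share; what
# dilution changes is what a fixed company-level outcome (like a
# $250M sale) translates to for you personally.

# Q7: company sells for $250M. Your gross is your post-dilution
# fraction of the sale price, minus exercise cost, minus tax.
sale_price = 250_000_000
gross = after * sale_price - shares * strike      # $570,000
assumed_tax_rate = 0.30                           # illustrative rate
net = gross * (1 - assumed_tax_rate)              # ~$399,000

print(f"{before:.2%} -> {after:.2%}, net ~${net:,.0f}")
```

Note how far $399K is from the naive "0.3% of $250M = $750K" figure after one round of dilution and taxes, which is the article's point; with several rounds, preferences, and a bigger pool, the gap widens further.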

intrasight 2 days ago 1 reply      
If you believe in the company, be like David Choe (painted Facebook murals) and take compensation in stock.
Google Has Started Penalizing Mobile Websites with Intrusive Pop-Up Ads scribblrs.com
828 points by sply  1 day ago   371 comments top 51
Smirnoff 18 hours ago 6 replies      
I really would like to see Google penalize websites that force you to log in after Google showed those websites in the results.

1. Take LinkedIn for example: you search for a person on Google; Google shows a LinkedIn result; you go to LinkedIn but you are greeted with a giant popup asking you to log in to view the info. Ridiculous.

2. Same with Quora: they come in results with basic info, but when you go to their page, they forward you to registration/login page.

These practices are not ok in my book. Surely, they can do whatever they want on their websites but if Google indexes you and shows some info in search results, then you better show that info on your page without forcing me to register.

PS: To be clear -- this behavior happens on mobile version of their websites. Not sure how it plays out on desktop.

Animats 1 day ago 16 replies      
What's really stupid are sites from which you can buy things, but then pop up an ad for something else. Fandango, which sells movie tickets, does this. As you're trying to get to the "buy ticket" page, they shove movie trailers for other movies in your face.

I mentioned a site earlier today which sold plumbing supplies.[1] They pop up a "gimme your email" box which 1) cannot be dismissed, and 2) isn't even theirs, it's from "justuno.com", a spamming service.

These outfits have lost sight of what their web site is for. They're putting obstacles in front of a customer who's about to give them money. This is usually considered a big mistake in retail.

[1] https://www.tushy.me/

ageitgey 1 day ago 12 replies      
Most users hate these pop-ups and cheer this move from Google. But let me add a little context to why these ads are so prevalent and why some companies view this move as Google abusing their power.

If you visit any "guide" website like TripAdvisor or Yelp these days on a mobile browser, you'll notice that the sites often barely let you do anything without downloading the native app. They all but refuse to let you see content and throw up "Download our app!" pop-ups everywhere.

By traditional logic, that seems insane. Why are they putting so many roadblocks between the user and the content? Surely that must be driving away users, right?

The reason for this behavior is that Google is systematically destroying the SEO traffic of these sites by adding their own competitive features to search result pages that appear above organic results.

If you search for a restaurant / hotel / flight on your phone, Google will often show its own custom widgets above the organic search results. It's not unusual that zero organic search results are visible "above the fold". The more Google does this, the more the share of clicks goes to them instead of to organic search results in these types of searches.

That means that even if these guide companies have #1 search rankings for every possible search term, they are seeing their SEO traffic plummet every month because they can't compete with Google's "above #1 result" placement. So as a defensive move, some companies are basically giving up on SEO traffic in the long term and trying to forcefully convert as many visitors as possible into users who visit directly via a native app (and thus bypass Google). They know that every web user who doesn't download the native app is less and less likely to ever find them again via a search result page.

So to these companies, they see this change from Google as another anti-competitive move because Google is taking away one of their last remaining lifelines for user acquisition.

Personally, I find those full-page ads super annoying and hate them too and think they should go away. But like anything complicated, this isn't a simple black and white move to benefit users. It's also a strategic move that helps Google and hurts some competitors.

netinstructions 1 day ago 2 replies      
Funny, because Google Adsense offers "Page-level vignette ads" which are full page interstitial ads shown for mobile devices.

The penalty must not apply because:

> They're displayed when the user leaves a page, rather than when they arrive on one, so the user doesn't have to wait for them to load


FreakyT 1 day ago 0 replies      
Good. Those have been becoming increasingly prevalent to the mobile web's detriment.

I don't mind a few ads, but many of these interstitials are downright maliciously designed, making use of the entire page contingent on hitting a tiny "x" target, presumably with the intention of facilitating accidental clicks on the ad.

aresant 1 day ago 4 replies      
You mean intrusive like the AMP header on every !@$! mobile page now that's not only annoying but breaks the standard UX?
alphonsegaston 1 day ago 4 replies      
I'd really like to see them work on improving relevancy instead of swinging their corporate weight around at whatever "benevolent" end they decide is important this week. Considering how much time I have to spend nowadays tweaking queries and futzing with the search tool options to get relevant results, I'm starting to look at all of these moves much more cynically. Taking on anti-patterns is great, but not when your search experience is rapidly becoming one.
quadrangle 1 day ago 0 replies      
"Google Has Started Penalizing Mobile Websites with Intrusive Pop-Up Ads"

I totally read this as "Google Has Started Penalizing Mobile Websites [by using the penalty of imposed] Intrusive Pop-Up Ads" instead of "penalizing those websites that use Intrusive Pop-Up Ads"

nhumrich 12 hours ago 2 replies      
While I applaud Google for doing this, it's also very scary that Google has so much power that they can basically make anyone on the web do anything by threatening their ranking. Google is starting to use more grey-area tactics to control things (such as disabling the accounts of people who resold a Pixel). It makes me start to actually worry about the power Google has.
jrochkind1 1 day ago 3 replies      
Why only "mobile websites"? I hate em just as much when I'm viewing on the desktop.
chinhodado 1 day ago 3 replies      
For me, the most annoying thing while browsing on mobile is the vibration ads/fake alarms. It's horrible. I haven't even seen a single good use of this vibration API, as it is only ever used for things like "Your phone haz virus click here now".

Why there isn't an option to disable it in Chrome is beyond me.

quadrangle 1 day ago 3 replies      
Use uBlock Origin, people! In Firefox on Android I don't see any pop-up ads. I can't believe how much pain people subject themselves to needlessly!
makecheck 1 day ago 0 replies      
The only thing a web site should need to measure is how long its visitors stay. I know that the instant I see any pop-up garbage, I immediately leave: I don't care what the pop-up might say, I don't care where they might've placed a little "X" to dismiss their message, I simply go BACK and I DON'T return.

Do not allow yourself to be bullied. Yes, services have some nonzero value but your time also has tremendous value and you should not undersell it by putting up with stupid crap. Any site that shoves things in your face is being disrespectful, it is wasting your time, and it is costing you, which is not OK. Let those sites die out.

iwlbebnd 1 day ago 1 reply      
I've largely stopped using Chrome on mobile because of the lack of uBlock. When I do use Chrome it's with JavaScript disabled.

Despite a few UI differences switching to Firefox on mobile with ublock has been excellent.

StuieK 1 day ago 0 replies      
If users hate these, shouldn't google's ability to rank the best pages already take care of this problem without special casing it?
webartisan 8 hours ago 0 replies      
The problem doesn't seem to be related only to web. These dark interstitial patterns are present in every platform.

I recently used Ola's (Uber's main competitor in India) native app on Android, and right when you're about to book a pool ride, at times, they'll show you a full-page interstitial advertising pool rides. And if that wasn't ridiculous already, they provide no way to cancel the popup. The only way to proceed is by clicking "Try share".

And when you do so, it throws a generic "Uh Oh, Something went wrong error", and you're basically stuck without a ride.

evolve2k 1 day ago 6 replies      
A client has just asked me to add a "watch this vid, join our newsletter" pop-up when the user scrolls about halfway down the homepage of their SaaS startup. Further, the pop-up is not to reappear for 90 days.

They got the approach from attending an online marketing workshop that suggested this increases their list.

Felt like a bit of an anti-pattern to me.

Anyone have advice as to if this is effective or if it will be affected by today's announcement?

chmars 1 day ago 2 replies      
What about intrusive cookie warnings?

(They are apparently mandatory in the European Union and Google made them part of the Adwords rules some time ago.)

natch 22 hours ago 0 replies      
This is great news... I'm not always a fan of Google's every move, but when they use their position to encourage a better web, their power is awesome and we can look forward to the effects.

The headline was confusing. Google has started using intrusive pop-up ads to penalize mobile sites? This writing has clarity issues.

>Web pages need ads to operate, but...

Plenty of web pages (websites?) are operated without ads, out of love for a topic, desire to build a name, or other reasons. Web pages don't NEED ads. Well some have been built to rely on them, and can only survive with ads, but certainly not all web pages.

Aside from that and the headline, great level of detail in this article about the exceptions, the general sizes, and the rollout.

therealmarv 1 day ago 0 replies      
So can we get rid of forbes.com quotes finally?
thebspatrol 1 day ago 0 replies      
Kind of scary that Google is so much of a gatekeeper to the entire internet that they can essentially decide which websites get visited and coerce them into submission.
bradlys 1 day ago 0 replies      
This title confused me. I thought Google was penalizing mobile websites by injecting intrusive pop-up ads.
valine 1 day ago 0 replies      
I wonder if Forbes will have to remove that annoying "Quote of the day" page, or if this only applies to JavaScript popups.

I also wonder how Google distinguishes between things like floating nav bars and elements that obscure content.

beefsack 1 day ago 1 reply      
I wonder how much of the internet userbase, like myself, just close a site the moment a pop-up appears which takes me away from what I want to be doing.

Has anyone here done any AB testing on it and have some numbers?

em3rgent0rdr 1 day ago 2 replies      
How about Google let me install extensions on Chrome in Android?
antihero 14 hours ago 0 replies      
I think the most annoying trend is those redirect-to-store type ads. They're probably seen as malicious, but more needs to be done to prevent scummy advertisers from doing this. You get to a point where you literally cannot use some sites, because a second after the page loads you're redirected through a whole chain of dodgy-looking servers and eventually to the store to get some garbage app.
shadowSeeker 19 hours ago 0 replies      
Many of these pop-ups strategically block navigation, driving unnecessary hits to some secondary linked page (via intrusive ads) or using sticky elements to keep us away from the material we originally came for, which makes us dependent on ad blockers. The sites then detect the blockers and cut off access to their content (their concern may be genuine, but the process is ugly) until we turn them off, taking away our freedom altogether. I have stopped visiting most news websites because of this.
amelius 1 day ago 0 replies      
I want sites penalized when they take part in user-tracking.
bborud 11 hours ago 0 replies      
Good. Fuck'em. This is the most eloquent response these sites deserve. They know this is annoying so there is no excuse for doing it while claiming it adds value or somesuch bullshit.
digitalmaster 1 day ago 0 replies      
This is actually pretty impressive for a ad company to take steps that are overwhelmingly better from a UX perspective but also directly target online ad revenue models. #bold #impressive #hardProblems #thumbsUP
chimpscanfly 1 day ago 0 replies      
Look, popups are great for marketing, but this penalization isn't bad from a marketing and UX standpoint.

Popups, while effective, are being overused, which means they will become less and less effective.

On top of that, too many are poorly created and don't work on mobile, making them difficult or impossible to close. This is unfortunate.

I've long held that we need a less intrusive "popup" that nudges instead of disrupting users. Basically a Hello Bar style element that catches my attention but is easy to ignore.

JumpCrisscross 1 day ago 0 replies      
Looking at you, Forbes...
aphextron 14 hours ago 0 replies      
I've never understood this pattern anyways. I swipe right and look for another website the moment I see a full screen popup on mobile. I can't imagine they are effective.
notatoad 1 day ago 0 replies      
What about the websites that implement AdSense's "please fill out this survey to view the content" ad? Do they get penalized?
herbst 13 hours ago 0 replies      
I hope this also penalizes Facebook. They use half of the screen to push me toward an account. I know it is supposed to be less, but try it at 1600x900.
eumenides1 1 day ago 3 replies      
I wish Google would penalize websites with pay walls
webartisan 1 day ago 0 replies      
What's disheartening is that many websites have started finding workarounds to show these interstitials after a delay or in between screen transitions.

The inherent problem is that app experiences somehow lead to better conversions, and no company would want to lose revenue due to dropped install numbers.

TwoBit 9 hours ago 0 replies      
Can we penalize web sites that beg people to use their app instead?
apercu 1 day ago 0 replies      
This thread is hilarious. I wish I had time to upvote each comment.
t_fatus 17 hours ago 1 reply      
If they could stop showing results in Google Now which, once clicked, immediately redirect me to some kind of creepy, scary, vibrating, ringing page telling me I have a virus on my 'NEXUS 5X - Orange SAS', that would be even better.
bogomipz 1 day ago 0 replies      
The article states:

"In short, if a web page purposely hides content behind an ad or forces interaction with an ad"

Does this mean that it doesn't include those nauseating "follow us/sign up for our newsletter" light boxes that plague the web now?

Also why would this only be for mobile? Are they any less of a scourge for desktops?

amirmansour 1 day ago 1 reply      
How will this affect UI patterns like Modals?
jbicha 22 hours ago 0 replies      
But will they penalize mobile apps with intrusive pop-up ads?
dhp1161 13 hours ago 0 replies      
they need to penalize imgur for doing this
khana 1 day ago 0 replies      
Welcome news.
bluetwo 22 hours ago 0 replies      
So... all mobile sites?
agumonkey 1 day ago 0 replies      
Additional ideas for Google:

- a penalty for sites with no no-JS fallback

st3v3r 1 day ago 2 replies      
serge2k 1 day ago 4 replies      
Oh good, Google abusing their power again.

I guess as long as it's for "good" reasons.

edit: would any downvoters care to explain how google being able to arbitrarily dictate web content is a power they should have?

NHTSAs full investigation into Teslas Autopilot shows 40% crash rate reduction techcrunch.com
783 points by fmihaila  1 day ago   312 comments top 25
Animats 1 day ago 4 replies      
It's interesting how vague this is. There's an NTSB investigation still pending into a specific Tesla crash.[1] The goals are different. NHTSA asks "do we need to do a recall?" NTSB asks "exactly what, in detail, happened here?" NTSB mostly does air crashes, but occasionally they do an auto crash with unusual properties. Here's the NTSB report for the Midland, TX crash between a train and a parade float.[2] That has detailed measurements of everything. They even brought in a train and a truck to reconstruct the accident positions.

It took a combination of problems to cause that crash. The police lieutenant who had informed the railroad of the parade in previous years had retired, and his replacement didn't do it. The police marshalling the parade let it go through red lights. They were unaware that the traffic light near the railroad crossing was tied in to the crossing gates and signals. That's done to clear traffic from the tracks when a train is approaching before the gates go down. So ignoring the traffic signal took away 10 seconds of warning time. The driver thought the police had taken care of safety issues and was looking backwards at the trailer he was pulling, not sideways along the track. People at the parade were using air horns which sounded like a train horn, so the driver didn't notice the real train horn. That's what an NTSB investigation digs up. Those are worth reading to see how to analyze a failure.

[1] https://www.ntsb.gov/investigations/AccidentReports/Pages/HW...

[2] https://www.ntsb.gov/investigations/AccidentReports/Pages/HA...

uncoder0 1 day ago 4 replies      
After looking at the report it looks like Tesla ran into the same issue we did in the 2007 DARPA Urban Challenge. The trailer was higher than the front facing sensors. We and most other teams had all assumed 'Ground Based Obstacles' meant that any obstacles on the test track would make contact with the ground in the lane of travel. DARPA decided to put a railroad bar across the street and expected cars to back up and do a U-Turn when they encountered it. The bar was too high off the ground for our forward LIDAR to see it so we collided with the bar at nearly full speed.[1] The sad part about this is that when we were drinking after dropping out of the challenge our team leader said something along the lines of 'At least we know no one will ever die now from the mistake we just made.'

[1] https://www.wired.com/2007/10/safety-last-for/

snewman 1 day ago 8 replies      
Tesla comes off extremely well in this report. For one thing, the 40% statistic cited in the headline appears to be well supported by the NHTSA report (section 5.4) and actually manages to frame the incident in a very positive light:

ODI analyzed mileage and airbag deployment data supplied by Tesla for all MY 2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package, either installed in the vehicle when sold or through an OTA update, to calculate crash rates by miles travelled prior to and after Autopilot installation. Figure 11 shows the rates calculated by ODI for airbag deployment crashes in the subject Tesla vehicles before and after Autosteer installation. The data show that the Tesla vehicles crash rate dropped by almost 40 percent after Autosteer installation.

I had hoped to see more information about this specific incident. For instance, any data on whether the driver had his hands on the wheel, what steps the car had taken to prompt his attention, etc. But that doesn't seem to be included.

xenadu02 1 day ago 1 reply      
For those who don't want to signup to Scribd just to download a publicly available PDF: https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF
stale2002 1 day ago 7 replies      
Oh, hey, will you look at that.

The imperfect, incomplete, beta, level 2 self driving cars that were supposed to be the "dangerous" area of self driving are ALREADY better than human drivers.

Can we stop the politics and deploy all the real self driving cars to the road immediately, since the government has proven that even the shitty variety is safer than humans?

sxp 1 day ago 3 replies      
The 40% number isn't very informative. The report has multiple notes about it:

ODI analyzed data from crashes of Tesla Model S and Model X vehicles involving airbag deployments that occurred while operating in, or within 15 seconds of transitioning from, Autopilot mode. Some crashes involved impacts from other vehicles striking the Tesla from various directions with little to no warning to the Tesla driver.

ODI analyzed mileage and airbag deployment data supplied by Tesla for all MY 2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package, either installed in the vehicle when sold or through an OTA update, to calculate crash rates by miles travelled prior to[21] and after Autopilot installation.[22] Figure 11 shows the rates calculated by ODI for airbag deployment crashes in the subject Tesla vehicles before and after Autosteer installation. The data show that the Tesla vehicles crash rate dropped by almost 40 percent after Autosteer installation.

21 Approximately one-third of the subject vehicles accumulated mileage prior to Autopilot installation.

22 The crash rates are for all miles travelled before and after Autopilot installation and are not limited to actual Autopilot use.

So the actual rates of crashes for Teslas using Autopilot vs Teslas not using Autopilot aren't reported.

randomstring 1 day ago 2 replies      
Waiting for the headline: "Human Fails to Prevent Accident, Outraged Public Calls for Banning of all Human Drivers"

The obsession with perfection in self-driving cars is misplaced, they just need to be demonstrably better than humans.

This is obviously the future.

huangc10 1 day ago 5 replies      
Can anyone who is in the industry comment on how Autopilot performs in poor weather (ie. flash floods, thunderstorms, snowstorms etc...)

All I can find from the article about weather was in section 3.1:

> The manual includes several additional warnings related to system limitations, use near pedestrians and cyclists, and use on winding roads with sharp curves or with slippery surfaces or poor weather conditions. The system does not prevent operation on any road types.

lz400 21 hours ago 2 replies      
I think there's sometimes a lot of marketing and hand waving in this type of argument "crashes go down with autopilot". Most car accidents are caused by drunk or old people, and they drag the average up. if you tell me a Tesla autopilot beats a drunk guy it won't surprise anyone. Now as a non-drunk, young(ish) driver but experienced and careful, my statistics don't look anything like the average, they look at lot better. You have to convince this demographic, not beat the averages otherwise it's not rational for me to buy the feature. I'm guessing this goal post is a lot harder.
cbr 1 day ago 1 reply      
This is really good news. A major worry with driverless cars has been that companies would be harshly punished for accidents, even if there was a dramatic reduction in crashes overall.
brilliantcode 1 day ago 0 replies      
For your reference, a level 4 automated car will look something like this:


Imagine you are too fucked up to drive. Your car will be able to pick you up. Do you need Pepto Bismol too? Your car will pick it up from a drive through billed through your license plate. I'd give roughly 15~20 years for this to take place.

bcaulfield 1 day ago 0 replies      
So I'm far less likely to crash if I use this, and I have something to blame if I do. Everybody wins! (Except the engineers).
themgt 1 day ago 0 replies      
I don't always like Gladwell, but his piece on the Ford Pinto and the NHTSA philosophy towards auto safety more generally is quite worth the read [1]. I hadn't considered the intersection of this and self-driving car tech, but I wonder if NHTSA will basically take the position that as long as self-driving tech saves lives overall, a few "bugs" where the car kills the driver are an acceptable trade-off.

[1] http://www.newyorker.com/magazine/2015/05/04/the-engineers-l...

em3rgent0rdr 19 hours ago 0 replies      
I only want to use open-source code for something that my life depends on. That way it is open for anyone to inspect, so people can independently determine if the code is behaving as desired.
mrtron 1 day ago 1 reply      
What other car company can even recover the airbag deployment rate per mile?
ridiculous_fish 22 hours ago 0 replies      
This means that autopilot must be engaged at least 40% of the time (Amdahl's law!). Tesla owners, is that realistic?
ChuckMcM 1 day ago 0 replies      
That is a pretty remarkable report. It essentially holds Tesla up as an exemplar of the standard other car makers will be expected to achieve.
brighton36 5 hours ago 0 replies      
The truck was visible for at least 7 seconds prior to the crash in the full report - another article here:


Strangely enough, giving more people autopilot would probably be better than letting people drive. I think Tesla's picked the right time to enable it, since the cross-over point where autopilot is better than humans in general use cases has been reached.

Call it a beta if you want, but it's a pretty damn promising beta.

schraitle 1 day ago 0 replies      
Does anybody know what the "Population" field indicates at the top of the report?
zekevermillion 1 day ago 0 replies      
Impressive. 2/5 reduction is a lot of lives saved.
tn13 1 day ago 1 reply      
The 40% figure is meaningless unless the absolute numbers are reported. How do we know if this difference is statistically significant?
NamTaf 1 day ago 0 replies      
I've railed on about the safety issues of autopilot before and how I'm not entirely comfortable with the pace they've developed compared to the considerations of human-machine interfaces and driver attentiveness, particularly given my (moderate) exposure to these sort of problems in other industries. Thus I'm particularly interested in that section of the article.

What I found interesting is that figure 10 shows that as you jack up the independence of the machine, the level of driver distraction accordingly increases. Adaptive Cruise Control (ACC) shows a significantly higher percentage of shorter-duration off-road glances than Limited-Ability Autonomous Driving Systems (LAADS). Additionally, countermeasures help to alleviate some but not all of that increase in distraction. Importantly, this is coupled with the point that the duration in which drivers have to react to most impending collisions is under 3 seconds. This may seem obvious but it's a critical set of data to help objectively demonstrate the risks involved with losing or even reducing alertness.

It goes on to say that Tesla has addressed the risks of mode confusion, distraction, etc. and has implemented solutions to address this unreasonable risk, which they in turn define as abuse that is reasonably foreseeable. In this, they're talking about the reasonably foreseeable risk of e.g. the driver not understanding if they're in autopilot or not. It goes on to mention that Tesla has also changed its driver monitoring strategy to promote driver attention, which I take to mean detecting hands on the steering wheel.

Either way, Tesla's main approach to dealing with driver alertness is testing for hands on the steering wheel. My concern is that this doesn't consider the alertness of the driver to their surroundings, particularly other vehicles that may be approaching them or the process of anticipating hazards (approaching an intersection where there's a blind corner and adjusting focus to pay more attention to what may come from it, for example). I don't see how Tesla's countermeasures address this.

The physical act of manually driving causes drivers to maintain alertness not only to where they're going, but also the situational alertness of what's around their vehicle. Specifically, it's the process of random actions that requires taking an input, making a decision and executing the appropriate action that maintains this alertness. If the driver isn't having to make those random decisions and take action then their alertness drops. Autopilot, even with hands on the wheel, eliminates much of that random decision-making and reacting.

When you drive, you mentally note the vehicle over your shoulder that is in the lane next to you, and subconsciously consider that they may do something insane. You consider those blind corners as you approach them and that vehicles may spontaneously appear out from them. You see a truck on the road which is approaching a bend and give it a wider berth because its centre throw may cause it to cut the corner into your lane slightly. These are all tasks that you do, that you may not do as well or at all when autopilot is steering, because you are not as engaged with the driving process.

Critically, I don't see how ensuring hands are on the steering wheel causes these alertness tasks to continue as frequently as manual steering. The driver may be in the physical location to quickly take over, but they may not be in the mental location to do so. This is the major issue I have with the rapid autopilot development based on my experience in related areas where maintaining situational alertness proved to be very difficult when the person was engaged with only a limited scope of requirements to prove their conscious presence.

I feel like the report doesn't really drill in to this as much as it needs to. It begins to touch on it around Figure 10 but sort of hand-waves it away saying 'Tesla considered it discharged their responsibility to make sure drivers stay focused by implementing countermeasures', but I believe it's more nuanced than that. It investigates the extent to which Tesla's system is good at ensuring drivers are physically present (that is, their hands aren't on the passenger seat making breakfast) but it doesn't really look at the mental presence that delivers situational alertness.

That mental alertness is the major sticking point for me. I don't really have a solution beyond "drive manually", which isn't reasonable, because this technology is here to stay and will continue to grow, but it's why I've always been bearish about the rapid pace of rollout of these driverless technologies, particularly when advertised as 'beta'. As I've said before, no amount of disclaimer and 'hey, you should do this' really changes how drivers behave once the equipment is placed in their hands.

sandworm101 1 day ago 3 replies      
Great. There is no doubt that driver assists cut down on crashes. But what tesla has on the road is far from a total eyes-closed autopilot. That is an inflection point with this tech that nobody has dared to test on the public road. I remain unconvinced pending those trials.

Also, I still haven't seen any autodrive handle off-road driving, such as boarding a car ferry, or navigate a construction zone manned by an inattentive flag person.

battlebot 1 day ago 0 replies      
I don't completely trust the NHTSA and I'm skeptical about auto-piloting cars but accept that more and more of those will be on the roads. I will never ride in a vehicle that lacks an override mechanism.

In general, I think we are moving way too fast towards these self-driving vehicles because certain factions want to try and replace long and short haul truckers with robotic systems that are cheaper and damn the consequences.

dkonofalski 1 day ago 3 replies      
I don't really know why this is surprising. Computers are already better than humans at most tasks that involve a limited set of behaviors and they have infinitely better response time than humans (and continue to get better). How could anyone think that a report like this was going to end up any differently?
Removing Python 2.x support from Django for version 2.0 github.com
709 points by ReticentMonkey  1 day ago   384 comments top 26
Flimm 1 day ago 3 replies      
The next release, Django 1.11, will be a long-term support release, and the one after that, Django 2.0, will no longer support Python 2.


I've grown to highly respect the Django project for its good documentation, its healthy consideration for backwards compatibility, security, steady improvements and all-round goodness.

yuvadam 1 day ago 3 replies      
This call was made a while back, and it makes perfect sense. Python 2 is slowly being EOL'd and if you're starting a brand new Django project there's no reason on earth you should choose Python 2 anymore.

Sure legacy projects still need support and for that they get the 1.11 LTS, but otherwise it's really time to move on.

rowanseymour 1 day ago 0 replies      
I'm glad they are making a clean break from Python 2 and I hope this pushes other projects in the ecosystem to fix those remaining libraries without Python 3 support. It does get a bit frustrating when things break between Django releases, but they have a good system of deprecating things for a couple of releases beforehand. And at the end of the day, Django is for people who want to build websites, not life support machines... and I think they're doing a decent job of striking a balance between breakage and stagnation.
nodamage 1 day ago 8 replies      
I have a Python 2.7 project that has been running smoothly for many years now and I'm having trouble finding a reason to upgrade to Python 3. The project uses the unicode type to represent all strings, and encodes/decodes as necessary (usually to UTF-8) when doing I/O. I haven't really had any of the Unicode handling problems that people seem to complain about in Python 2.

Can someone explain what benefit I would actually gain from upgrading to Python 3 if I'm already "handling Unicode properly" in Python 2? So far it still seems rather minimal at the moment, and the risk of breaking something during the upgrade process (either in my own code or in one of my dependencies) doesn't seem like it's worth the effort.
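One concrete, if modest, answer to the question above, offered as an illustrative sketch rather than a full accounting: even Python 2 code that handles Unicode carefully relies on implicit str/unicode coercion never going wrong, whereas Python 3 turns any accidental mixing of bytes and text into an immediate error at the mixing site instead of a latent decode failure far from the bug:

```python
# In Python 2, 'abc' + u'def' silently coerces via ASCII and only blows up
# later, when non-ASCII bytes show up. Python 3 separates the types outright.
data = "café".encode("utf-8")    # bytes, as if just read from a socket or file

try:
    greeting = data + "!"        # mixing bytes and str
except TypeError as e:
    print("caught:", e)          # fails immediately, where the mixing happens

# The fix is an explicit decode at the I/O boundary:
greeting = data.decode("utf-8") + "!"
print(greeting)                  # café!
```

If a codebase already decodes/encodes strictly at the boundaries, the gain is admittedly smaller: mostly that the interpreter now enforces the discipline instead of trusting it.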

stevehiehn 1 day ago 5 replies      
Good. I've been getting into Python a bit because I have an interest in data science. I'm mostly a Java dev. I have to say the Python 2/3 divide is a real turn-off. Many of the science libs I want to use seem to be on 2.7 with no signs of moving.
oliwarner 1 day ago 0 replies      
A whole pile of people complaining about upgrading Django highlights two things to me:

Not enough people are using tests. A decent set of tests makes upgrades super easy. The upgrade documentation is decent so you just spend 20 minutes fixing broken things until it all works again.

People pick the wrong version. I've seen people develop and even deploy on -dev and it makes me cry inside because they'll need to track Django changes in realtime or near enough. Pick an LTS release and you get up to three years on that version with security and data-loss fixes and no API changes.

misterhtmlcss 1 day ago 1 reply      
Is anyone going to talk about what this means for Python and Django? I read the first 30-40 comments and they are all about off topic stuff related to Django, but still the core premise is the committed move to Python 3.x going forward.

What do people think of that?! I'm a newer dev and I'd really really love to hear what people think of that and what it means for the future rather than side conversations about how bad their API is, how good it is, how good their Docs are and how bad they are.... Blah blah.

Please!! This community is filled with some of the most brilliant minds and I for one don't want to miss out on this chance to hear what people think of this change.

Please please don't reply that you disagree with my POV. That's irrelevant, but please do reply if you are interested in the initial topic. I'd be very excited to hear your thoughts.

So Django moving to Python 3.X Go :)

erikb 1 day ago 0 replies      
There are only two possible opinions here:

A) You mostly have Python 3 projects: Then you like it because you know more resources will be spent on your pipeline, and having more Py3 packages is also helpful.

B) You still have Python2 projects: You hate it, because it pushes you out of your comfort zone.

But I have to say, we want our languages to develop as well. We want our packages to get attention. And there was lots of time to switch and experiment with switching. Ergo, it should happen. Even if you don't like it as much, that's where things are heading. Deal with it, move on. Let the community help you, if necessary.

gkya 1 day ago 0 replies      
This is a nice patch [1] to review for Python coders. Seems to me that most incompatibilities are provoked by the unicode transition.

[1] https://patch-diff.githubusercontent.com/raw/django/django/p...

karyon 1 day ago 0 replies      
The related django issue is here: https://code.djangoproject.com/ticket/23919

there are lots of other cleanups happening right now. It's a real pleasure to look at the diffs :)

myf01d 1 day ago 1 reply      
I hope they just find a way to support SQLAlchemy natively like they did with Jinja2 because Django ORM is really very restrictive and has numerous serious annoying bugs that have been open since I was in high school.
gigatexal 1 day ago 0 replies      
This is great news. It will help move people off their python 2 code bases even more. Kudos to the Django team.
Acalyptol 1 day ago 2 replies      
Time to introduce Python 4.
karthikp 1 day ago 2 replies      
Oh boy. And here I am still using Py2.7 with Django 1.6
mark-r 1 day ago 1 reply      
I was surprised to see the elimination of the encoding comments, I thought that the default encoding would be platform dependent. After a little research I found PEP 3120 which mandates UTF-8 for everybody, implemented in Python 3.0. It also goes into the history of source encoding for 1.x and 2.x. I wonder why there aren't more problems with Windows users whose editors don't use UTF-8 by default?
romanovcode 1 day ago 1 reply      
Good, it's about time this nonsense ends.
ReticentMonkey 1 day ago 1 reply      
Can we expect the async/await introduced in Python 3 for async request handling or maybe some heavy operations? Something like Sanic: https://github.com/channelcat/sanic
hirokiky 23 hours ago 0 replies      
Say good bye to django.utils.six. yay
gojomo 1 day ago 0 replies      
Because incrementing version numbers is free, Django might as well bump the Python-3-requiring version number to Django 3.0.

Lots of beginners and low-attention devs will find "Django 3 needs Python 3" easier to keep straight than "Django 2 needs Python 3".

alanfranzoni 1 day ago 7 replies      
So, after a poor evolution strategy that led the Python world to be split in two, forcing maintainers to offer two versions of the same library and upstream maintainers to support two different Python versions, the same is happening for Django!

I speculate that the latest Django 1.x will remain used - and possibly the most used - for a lot, lot of time.

daveguy 1 day ago 1 reply      
Seriously? The entire change to "unsupport" the majority of Python code is a mass delete of from __future__ import unicode_literals and utf-8 encoding? Is that really the extent of the "too difficult to maintain" code? There will be a split.
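For context on what that mass delete amounts to, here is a hedged sketch of the kind of module header such a cleanup removes (the module contents are hypothetical, not from the actual Django patch), and why both lines are redundant under Python 3:

```python
# -*- coding: utf-8 -*-                  # redundant: PEP 3120 makes UTF-8 the default
from __future__ import unicode_literals  # redundant: literals are already text (str)

# Under Python 3 both lines above can simply be deleted; string literals
# are Unicode text by default, no u'' prefix or future import required.
label = "vue d'ensemble"       # a str (text), even with accented input
assert isinstance(label, str)
print(type(label).__name__)    # str
```

Both deleted lines are no-ops on Python 3 (the future import is accepted but does nothing), which is why the diff looks trivial; the harder part of dropping Python 2 is everything the six shims and dual-version test matrix were papering over.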
scrollaway 1 day ago 2 replies      
Oh my god stop. You're all over this thread. What bit you?

This is the price you pay for staying on an old version. You do not get to stick to an old version AND demand that others do too.

You CAN stay on Python 2. You CAN stay on Django 1.11. It's LTS. So is Python 2.7. You get to use both until 2020 with no issues. After that, not upgrading is a technical debt that will start to accrue, faster and faster as you can no longer use recent versions of various software.

You are free to make your infrastructure immutable; you then become responsible for it of course. And the money you're not willing to spend porting to Python 3 today will be money you spend on costs related to being on outdated infrastructure, years in the future. That's a tradeoff. Banks do it a lot I hear. A bunch of companies still use ancient hardware and technologies nobody would think of starting a business with today. These companies make billions.

You know what the employees of these companies aren't doing? They're not bitching on HN that the tech they're using is no longer supported.

jdimov11 1 day ago 3 replies      
Says who?? Someone with delusions of grandeur, obviously. Because that's not up to anyone to say. Python 2 is obviously NOT going away any time soon. You can't just look at reality and claim the opposite just because it pleases you. Python 2 is here to stay and is in MUCH better shape than Python 3, in terms of actual production usage globally. Python 3 is a bad joke that someone wants to force down people's throats for NO good reason at all.
belvoran 1 day ago 0 replies      

Yea, I know, shouting is not the best thing, but this is really good news.

jonatron 1 day ago 0 replies      
Django was designed for making content based sites and CMS's quickly. It wasn't designed for webapps and REST APIs, and it can be used in those cases, but it's not great. I'd look at other options.
U.S. sues Oracle, alleges salary and hiring discrimination reuters.com
478 points by monocasa  2 days ago   466 comments top 34
freditup 2 days ago 16 replies      
That means we so far have:

* Palantir sued for not hiring enough Asians [0]

* Google sued for not turning over compensation data [1]

* Oracle sued for hiring too many Asians

While it's possible that discriminatory processes have happened at all these places, it seems these lawsuits can be targeted at whoever one wishes. It's always going to be possible to find data that indicates discrimination, unless companies hire in exact quotas (which would also be discriminatory really).

[0]: http://www.usatoday.com/story/tech/news/2016/09/26/palantir-...

[1]: http://www.usatoday.com/story/tech/news/2017/01/04/google-su...

tabeth 2 days ago 8 replies      
Couple points:

First, this is where the meat is: https://www.dol.gov/newsroom/releases/ofccp/ofccp20170118-0


"...Oracle nevertheless preferred Asian applications over other qualified applicants in the Professional Technical 1, Individual Contributor Job group and in the Product Development job group at statistically significant rates." [1]

[1] https://www.dol.gov/sites/default/files/newsroom/newsrelease...


Perhaps I'm naive, but I'll say it again and again until someone with influence hears me: large companies should do anonymous interviewing. I've interviewed with Oracle, Cisco, and many of the "old corporate-y" companies. There's ZERO reason the interview process can't be completely anonymized. Their interviews (from my limited experience) are completely impersonal and done on an ad-hoc basis anyway.

That being said, it seems this issue may be more of an H-1B issue, which inherently cannot be made anonymous.

swframe2 2 days ago 2 replies      
I worked at Oracle in 1992-1998. I hired a white male employee who complained later that I was hiring too many Indian employees. I went back to look at the applicants and noticed that 90% of them were Indian and that I'd contacted every non-Indian applicant. I'm not white or Indian so I was very eager to make sure I was being fair. My non-scientific conclusion after working at a lot of other companies is that very successful companies paid a lot better and had much more interesting work than Oracle (remember Oracle missed the .com shift in a major way); I feel Oracle gets so few applications from whites because most of them had much better jobs and opportunities. In that situation, it might require Oracle to pay more to retain white employees.

I left Oracle in 1998 and returned in 2003. I did notice a dramatic shift in the employee demographics at that time. Areas that used to be mostly white or mixed were now entirely Indian. I'm not sure of the reasons, but I've worked at places that are much more successful than Oracle and I suspect it is not Oracle that is discriminating as much as it is a lot of better companies enticing talent away. I've seen many of Oracle's brightest employees working at more successful companies.

CodeSheikh 2 days ago 4 replies      
"Oracle was far more likely to hire Asian applicants - particularly Indian people - for product development and technical roles than black, white or Hispanic job seekers."

Can the DOL back up this claim with salary data to see if Oracle is abusing the H1-B visa system by purposefully keeping wages low?

Most of the time it is just easier to hire Asian/Indian employees because they are readily available (a larger proportion of the population entering the tech field via education or change of career).

It would be interesting to see how far DOL can stretch this.

Edit 1: Improvement

jiggliemon 2 days ago 2 replies      
I feel that I can provide a small yet worthy piece of context. We have a difficult time interviewing ANYBODY for our open reqs. And like every other technology company I've ever worked at, a majority of resumes I've reviewed had Indian names. This seems especially weighted given how the Oracle brand and database are still desirable properties and tech in India.

Your average white dude in Silicon Valley may have a fair to negative view of Oracle. While (from my experience) your average Indian dude's opinion of Oracle is more favorable.

temp246810 2 days ago 4 replies      
Question: as an employer what exactly am I supposed to do if when I put a req out for an engineer, >>>>50% of applicants are asian?

Is the burden really on me to dig through the remaining applicants for good ones?

These laws really sound like people with no skin in the game demanding things for which they don't know all ramifications.

Infinitesimus 2 days ago 7 replies      
> The department also said that during its investigation, which began in 2014, Oracle refused to provide relevant information about its pay practices.

That's the part that doesn't help Oracle at all in this.

If you really don't have anything to hide/cleanup, why avoid providing useful information for 2+ years? (Granted, 'relevant information' might be loosely defined here)

Tangent: I know several employers use the concept of a "salary band" for roles. I.e. entry level "Software Engineer" can make between $70k and $85k depending on some metric. I wonder how often our preconceived biases/prejudice are used as an excuse to put people in lower salary bands.

Is there even merit to these? On one hand, people have different skill levels even in a role. On the other, shouldn't the salary be tied to title?

mooooooooo 2 days ago 4 replies      
Is "culture fit" illegal?

Let's say that an Indian manager hires more Indians because he feels like they are easier to work with. Now, he doesn't just hire Indians, but he is statistically more likely to do so over a large quantity of hires. Is that against the law?

ausjke 1 day ago 0 replies      
I now sue NBA for having too few whites and Asians.

The goal is to make sure everywhere people are distributed on their population and color and gender and age ratio, instead of merits, after this micro-level-equality is accomplished everywhere, let's then work on making sure everyone has their fair share of fortune, finally, communism will be realized and the world will be a peaceful place, starting from US!

in_cahoots 1 day ago 0 replies      
When I started my first job, I was shocked at the amount of racial self-selection there was. Although the company itself was diverse, many teams were 80%+ white/Chinese/Indian. If you looked closer you saw that they had all gone to the same schools (albeit in different years or even decades) or had worked together previously.

This wasn't a top-tier company, so my theory was that candidates with options went elsewhere, while those without options stayed here and hired people like themselves. I could easily see the same thing happening at a place like Oracle.

okreallywtf 2 days ago 0 replies      
Quotas and affirmative action seem to me like hack workarounds to try to solve the real problem, and that is that our society itself is highly unequal.

None of these businesses can tackle the real problems on their own, but even assuming that one "race" generally has an advantage because of educational background or some other factor that makes them more attractive hires, simply hiring them and ignoring everyone else (that isn't exceptional) only perpetuates the problem.

Therefore, attempts are made to resolve the issues in ways that an individual organization with a limited reach can: quotas, diversity hires. The problem is that the problems that create a disparity between the hires are long term, where as most of these companies have to think of the relative short term.

Because we as a society can't (or won't) tackle these issues in a holistic way, we're always going to have some hack workarounds that feel like BS to a lot of people, because they are. If all public schools were well funded[1] and well taught, it wouldn't be such an issue (in a few decades maybe), but instead we've got a model that simply perpetuates the problem. Even if we throw money at the schools it won't fix the problem because a lot of kids growing up in poor areas have problems at home [2]. We know how to fix these problems but we can't even agree as a society that we even have a problem.

We're going to continue having this conversation for 100 years because that's how long it's going to take our half-assed measures to work, if they work at all. In the richest country on earth we can't agree that all public schools should be equally funded (hell, we're about to have to fight for the existence of public schools at all).

[1] http://www.npr.org/2016/04/18/474256366/why-americas-schools...
[2] https://www.edutopia.org/blog/how-does-poverty-influence-lea...

sqlplus 2 days ago 2 replies      
If you walk the floors of Oracle's office in California, you are not going to feel like you are working in the US. It almost feels like an Indian enclave. In my opinion, this is a bad thing.

In the Database division, the division that also makes most of the money for Oracle, most people, including Directors, VPs, SVPs, etc., are Indians. Many are Chinese too. They have been in this company for the last 10 to 20 years. They understand all the politics of the company inside out and use it to their advantage to rule (exploit?) their subordinates. It is almost as if they have set up their kingdom in this company. You might be wondering how all of this relates to salary and hiring discrimination.

* Most of these Indians at the top of the ladder, seem to be hiring only Indians.

* These Indian SVPs and VPs hire Indians for cheap. A unique cultural thing about Indians is that they are obsessed with saving money, e.g., lowballing prospective candidates; sometimes the negotiation can last for 2-12 months before an offer is made!

* It helps these VPs if there are Indian engineers or managers under them. Indians don't counter-question their superiors much. So when these VPs find insane ways of saving money, e.g. not spending the budget for team outings, project parties, etc., the Indian engineers seem to oblige. One team activity or team outing only once in 2 years is not unheard of in Oracle, while our neighbouring teams go for such outings every quarter. Indian VPs rejecting employees' requests for stationery is also not unheard of. It is necessary to hire Indians so that they don't question when VPs reject reasonable requests.

* Another thing unique about Indian culture is utter lack of respect for schedules. The SVPs define schedules for a product release in a random fashion without ever consulting the managers or the engineers. Guess what? The schedule is not met. The company remains in a never ending loop of reschedule, miss schedule, repeat. The same goes for meetings. Meetings start late. They end late. Nobody cares that there might be another meeting that people may have to go to. A few Americans might look upset but who cares about the minority! They can get away with this kind of disrespectful scheduling when the majority are Indians.

If someone claims that they hire Indians because they are more skilled than Americans, then I call bullsh*t. It is true that there are more Indian engineers than American ones, but it is also true that, for the number of people Oracle needs, there are as many skilled Americans available as Indians. So if the hiring were fair, one would expect to see an equal number of Americans and Indians.

Disclaimer: I work for Oracle. I am Chinese. I love Indians. I have many Indian friends.

eva1984 2 days ago 1 reply      
> technology company systematically paid its white, male employees more than other workers and unlawfully favored Asian applicants in its recruiting and hiring efforts.

Wow...this is big.

exabrial 2 days ago 2 replies      
The real solution to the problem is for everyone to share their income level, as well as their performance reports, with their co-workers. And don't be afraid to leave!
necessity 2 days ago 0 replies      
You know what actually is hiring discrimination? Quotas. And racism, if they are racial quotas.
aswanson 2 days ago 4 replies      
Who in their right mind would want to work for oracle in 2017? That's like fighting for a gig at DEC circa 1997. They're doing candidates a favor by refusing them entrance on the Titanic.
ioda 1 day ago 0 replies      
I am Indian (I used to work for Oracle). If I were to hire, given that everything else is equal, I would prefer an Indian to a non-Indian. You can attribute it to cultural bias, or whatever.

But hardly is 'everything else' ever equal. I hire people who can get 'shi* done' and who are easy to manage. Period. And I assume most managers do the same. So if Indians are easy to manage and are getting hired, it is not racism, because they are being hired for being easy to manage, not for their nationality.

The issue of white males getting paid more is certainly racist. But the sad part is that one cannot verify it, because one cannot isolate the 'merit' part of the salary from the part attributed to 'race'.

(By the way, I never hired anyone while working at Oracle.)

dqv 1 day ago 1 reply      
Serious question:

If it's against the law to have a disproportionate number of people from a certain group employed (or paid a certain way), but it's also against the law to discriminate in the hiring process (and in the termination process), how is a company like this supposed to comply?

* Hire more of x group - it's illegal because you'd be discriminating against a group to meet the employment quota

* Fire x group to equalize the number of people working - it's illegal because you're targeting a specific group of people to fire

There doesn't seem to be any way to meet these requirements without some form of discrimination at some step along the way.

Does the government have "legalized discrimination" policies to allow companies to become compliant?

elastic_church 1 day ago 0 replies      
Oracle America in Redwood Shores H1B salaries, for reference


lgleason 2 days ago 0 replies      
Irrespective of one's opinion: the part relating to H1B visas is likely to be targeted under the new administration. The part about the white males, not as likely.

asher_ 1 day ago 1 reply      
As someone unfamiliar with US law, could someone explain whether it's illegal for companies like Oracle to pay white, male workers more (as stated in the article) or if it's only illegal to pay them more because they are male and white?

The former seems obviously absurd. So, assuming it's the latter, has the government supplied any evidence that this has happened?

jza00425 1 day ago 1 reply      
I assume they are gonna sue those Chinese restaurants too?
tu7001 1 day ago 1 reply      
This is silly. Obviously, if I'm hiring somebody, I discriminate against the others.
supergeek133 2 days ago 0 replies      
I'm interested to see how this ends up, and how other large companies (Google, Amazon, etc.) haven't fallen victim to the same targeting by the DOL.

In this case is it different because of direct hire? The company I work for seems to just contract out quite a bit to emerging markets.

patmcguire 2 days ago 2 replies      
How easy is it for the new DOL administration to back out of this? Trivial, right? No real consequences.
udkl 1 day ago 0 replies      
"Oracle was far more likely to hire Asian applicants - particularly Indian people - for product development and technical roles..."

Another employer with significant use of such hiring practices is Cisco.

falsestprophet 2 days ago 3 replies      
This is the first legal test of the pervasive practice of Indian nationals hiring predominantly or exclusively other Indian nationals from their same region in India.


From the Department of Labor complaint:

"...Oracle nevertheless preferred Asian applicants over other qualified applicants in the Professional Technical 1, Individual Contributor Job group and in the Product Development job group at statistically significant rates."

Who is favoring Asian applicants for this job group?

This sort of bias is a danger when hiring decisions are made at a team level rather than a systematic company-wide process like at Google.

phkahler 2 days ago 2 replies      
How does the US government have standing in such a case? If such practices are not legal, shouldn't the law specify a remedy? If not, how has the government been harmed?
nameisu 2 days ago 0 replies      
Not surprising. Pay less and still get the work done. I have had the same experience in the automotive sector: GM, Ford, FCA.
davidf18 1 day ago 1 reply      
A woman, Safra Catz, was co-president and is now co-CEO, which means that they are accusing a woman of discriminating against women.
muninn_ 2 days ago 3 replies      
SO much this.
nanistheonlyist 2 days ago 4 replies      
richard___ 2 days ago 1 reply      
Why can the US sue a private company for this reason? I thought private companies were exempt from these laws
titomc 2 days ago 2 replies      
Why am I not surprised? Yet another H1B discrimination story to rant about. (Though the article does not say anything about the 'Indian' people's visa status, isn't it obvious?)
Amazon Web Services in Plain English (2015) expeditedssl.com
621 points by apsec112  20 hours ago   73 comments top 22
michaelbuckbee 19 hours ago 15 replies      
Hey, Author here. This is old and I haven't added some of the new services that AWS has released since I first wrote it.

Whenever this list comes up there's generally a group of people who dislike it for trying to be at least mildly humorous (the whole concept for it started with my developer friends and me joking about some of the names and how opaque they were, so I'm not sure what I'm supposed to do).

There were a couple substantial edits I made to it where a few funny lines were cut in favor of better explaining what/how something worked.

I also started fleshing out some of the services with slightly more in-depth articles about them (such as this discussion of AWS Buckets, where I compare Amazon's CTO to a character from 28 Days Later - https://www.expeditedssl.com/aws-s3-buckets-of-objects).

I've sometimes thought that I should try and make it into an ebook or something, but there's always been something more interesting to work on. Thanks to everyone who has enjoyed it, shared it with their friends and hopefully took their first steps to messing around with AWS.

dcw303 20 hours ago 2 replies      
This is really useful for a layman like me who doesn't have a lot of exposure to AWS.

Anything similar for Azure? I would really like to understand the difference between the different types of app services, and especially how they relate to the project templates in Visual Studio.

beefsack 20 hours ago 1 reply      
Calling S3 "FTP" is a bit misleading, I would have just called it "File Storage" and explained it along the lines of FTP instead.
pyreal 15 hours ago 0 replies      
I just discovered a bunch of interesting stuff that I had no idea AWS provided thanks to the cryptic names. Most notable is Elastic Beanstalk - had no idea that was a PaaS!
tobych 5 hours ago 0 replies      
This answers one of my big pet peeves, which is the useless word salad written by marketing pinheads that you have to decipher whenever you're comparing products or evaluating which tier of some service you need, e.g.: "Upgrade from Basic to Ultimate if your stakeholders need to leverage content analytics and optimize dynamic competencies". No customer can ever sue you for misrepresentation if no one can ever quite figure out what you were claiming your software could do.
velodrome 19 hours ago 0 replies      
GCP does not really need one of these. It is a lot easier to understand. The only time it gets confusing for people are the papers or projects they were based on (e.g. StackDriver, BigTable, etc).
dsmithatx 12 hours ago 0 replies      
I would highly recommend this resource for further reading.


ankurdhama 15 hours ago 1 reply      
Does anyone know who the geniuses are that come up with the names and decide what to use? The world deserves to know about them.
noer 12 hours ago 0 replies      
It's worth noting on the SES explanation it says:

> You could use it to send a newsletter if you wrote all the code, but that's not a great idea.

You actually can use a self hosted solution like Sendy to send marketing emails & newsletters via SES & only pay for the emails you send using SES

tharibo 17 hours ago 1 reply      
Do we have the same for Microsoft products? Even as a developer, I can't understand half of what they're proposing. Like what the hell is Sharepoint anyway?

Or what is SAP?

z3t4 8 hours ago 0 replies      
cygned 19 hours ago 2 replies      
Still wondering why AWS does not provide a solid PaaS solution like Heroku does (they are on AWS, though) - or am I just overlooking it? I would like to host a few node.js/Clojure apps but I don't want the hassle of virtual machines/IaaS.
dangle 16 hours ago 0 replies      
Thanks. <3 Can you also rewrite all tutorials please?
frostymarvelous 12 hours ago 0 replies      
This is very annoying to read on my S6 in chrome 55.0.2883.91
roomey 16 hours ago 0 replies      
Wonder what the vmware service on Amazon will be called - maybe Elastiware
sukruh 18 hours ago 1 reply      
It would be cool if someone did a similar thing for the Apache Big Data projects.
a012 20 hours ago 1 reply      
> Code Deploy
> Should have been called: Not bad

The one service name that's self-explanatory.

kasparsklavins 20 hours ago 1 reply      
Broken on mobile.
dustinmoris 17 hours ago 0 replies      
Someone needs to teach expeditedssl.com how to build a website in 2017 that can also be consumed on mobile without content being cut out.
howfun 16 hours ago 0 replies      
Machine Learning: should have been called Skynet.
slightlyCyborg 19 hours ago 1 reply      
Updoot, because I was thinking just the other day how absurd Amazon's naming scheme is. Did an engineer think of that sh#!? If Amazon was run by Musk instead of Bezos, a harsh email would have been sent to the employees to cut that sh#! out.

Source: Acronyms seriously suck https://twitter.com/davejohnson/status/602951117413216256

Too much sitting, too little exercise may accelerate biological aging sciencebulletin.org
455 points by devinp  2 days ago   301 comments top 31
jknoepfler 2 days ago 1 reply      
If the headline read "people with advanced cellular age exercise less" no one would read it, but it's a more plausible interpretation of the data.
Yaggo 1 day ago 3 replies      
I sit ~10 hours daily, including 2+ hours driving. I'm naturally skinny but haven't done much physical exercise in recent years ("busy"). Now, at the age of 34, I'm starting to realize that my body won't last forever, and I finally started to exercise a few times per week, after I found a motivating enough guide [1]. It's amazing how much more energized it makes you feel.

[1] https://www.julian.com/learn/muscle/intro

finid 1 day ago 2 replies      
For almost a year I spent practically all day sitting in front of a PC monitor (used to work at home), with no exercise.

Then I developed pain in both knees, worse on the left knee. I figured that it was because of too much sitting, so I got a standing desk. In about a month, knee pains gone, completely.

Never even bothered to go see a doctor.

After standing for too long, I started having pains in my lower back. Solution? I got a bar stool, so I now alternate between standing and sitting.

What I learnt is that our bodies' joints (knee, elbow, waist, ankle, etc.) were not designed to remain in one position for too long. Movement lubricates them.

david___sh 2 days ago 16 replies      
My view might be classified as 'odd', but this is how I look at this problem, and this is what I ended up doing:

I do not mind living a decade shorter than those who are more active than me.

I have tried to be active, but could not fit it into my mentality. Note that I did not say my lifestyle; I said my mentality. I am unable to be active. I was not an active person as a kid, and I am not one as an adult.

If I aggregate all those activity hours together with all the stress about being more active (including following news about the effectiveness of standing desks), it could maybe be worth a couple of years of my life. I subtracted those years from that one decade, and decided that I am OK with not having that part of my life.

I continue programming 14 hours a day in the sitting style.

reasonattlm 2 days ago 0 replies      
Telomere length as presently measured in white blood cells is a measure of immune health before all other factors: how often are new cells turning up with long telomeres (thus how well is the thymus and bone marrow stem cell population doing), how often are existing cells dividing and shortening their telomeres while doing it (how much stress is the immune system under, how much war is it waging), how many senescent cells are hanging around lowering the average, that sort of thing.

Exercise is very well associated with better immune function. So not a surprising result beyond the fact that they actually managed to get a result at all, as telomere length measured this way is actually a pretty terrible metric of aging. The correlations with aging only show up in large populations, and even there you'll find as many failures to identify associations as successes in the literature. For an individual knowing your immune cell telomere length won't tell you a great deal that you don't already know, and nor will changes over time. The numbers will be all over the map, and won't compare usefully at all with other individuals in your circumstances, unless you have a few thousand of them to compare with.

If you want a decent biomarker that might actually prove to be actionable, DNA methylation patterns after the model pioneered by Horvath et al look fairly promising. (There's even a Florida company offering an implementation as a service these days).

The sitting question is generating a lot of ink in the research community. All sorts of large epidemiological studies have tackled the topic. I think it remains unsettled as to whether it is the sitting or whether it is the inactivity: decent arguments from data could be made either way. If pulling in data from the broader context, however, the inactivity looks more compelling. Accelerometer studies are becoming more common since the miniaturization and cost reduction that came with cell phones, and these are showing that even very low levels of exercise appear to make noticeable differences to outcomes in later life - at the level of housework and puttering around the garden.

Philipp__ 1 day ago 9 replies      
I am fairly young, 22, mostly sitting in front of the computer, even sitting while studying. So it could be said I sit around 10 hours per day.

When I was younger, up until the end of high school, I used to do sports, but since I got to college, I simply stopped caring. My body is skinny, really skinny; I weigh 61 kg, and I have had that weight for the last 7 years. But one day I thought maybe I should do something, not for the looks, but for the feel of my body, and physical exhaustion just feels healthy sometimes.

So I thought running might be what I am looking for. I am sure someone here is doing it (with lower body weight); any advice or thoughts on that? Btw, when it's not winter I skate; especially in summer, I sometimes skate all night, for 5-6 hours.

rokhayakebe 2 days ago 2 replies      
Here is something most people can do that will have a great impact on your overall health: take a walking break at work. Besides the health benefits, it will help you clear your head and possibly find a solution to that problem, get some sun, chat with a co-worker about life, etc.
Question1101 1 day ago 9 replies      
What's the best exercise just to stay healthy? And what frequency? I would hate to exercise for years just to find out it actually harmed my health.
winter_blue 1 day ago 5 replies      
I work on personal projects, read HN, or do other things on my computer while I'm at home. I'm sitting at a desk and using my computer for 12+ hours a day. I rarely ever exercise, or even move my body much besides my fingers.

This is really terrible news for people like me.

blueside 2 days ago 4 replies      
I switched to a standup desk 5 years ago. I got a dog 3 years ago. I don't sit anymore and she forces me to take her for two walks almost every day.

I don't exactly have more energy and I don't feel like I'm going to live forever, but I have noticed I am aging slightly slower than my colleagues who are forced to sit down in their offices for over 12 hours a day.

MrQuincle 1 day ago 2 replies      
The ROI:

Spending time on mindless exercise: 30 minutes a day is 1/48 of a year, and 1/48 of 70 years is 1.46 years.

If true that's around 8 more years in return for every 1.5 you put in. Nice ROI. :-)

pazimzadeh 2 days ago 2 replies      
This seems like one of the things that VR/AR might fix. Speaking as someone who tried VR for the first time at the mall yesterday (HTC Vive).

Reminds me of Bret Victor's Seeing Spaces http://worrydream.com/SeeingSpaces/.

umberway 1 day ago 0 replies      
These reports have an empiricist slant to them and there's usually no attempt to explain the result. No doubt there's truth to be found here, but note that the mere act of sitting cannot be bad for health, otherwise seated meditation would be bad for health too (which it isn't).

I conjecture it is the combination of sitting and hard focus on work or video games which is relevant. The brain withdraws attention from the body (which, being supported, isn't required much beyond breathing and digestion). The result seems to be bad things like inflammation, poor lymph circulation, etc, though I think this isn't understood. I remember it was said that, a few decades back in the UK, bus drivers (seated) had more heart attacks than bus conductors (standing).

johndoe4589 1 day ago 0 replies      
It could also very well be that those participants in the study who didn't "exercise" were lonely, were more or less alone or single, had no one to go take a walk with, have lost their closest relatives, have lost their significant other, etc. All those could factor in as much if not more than the "exercise". (and hence spend much more time indoors, probably watching the telly, and then things go from bad to worse in old age as health problems cascade)

And I quote "exercise" because it's not clear from the article if it's something like gymnastics, or the participant reported 30 min of daily walking and other activities like cycling. Point being, you could be unhappy and exercise a little bit every day and still be worse off than someone who's happy and loves to sit and watch the bird sings. Who knows.

I'd wager these days the quality of our food, and the bonds we have with the people around us, have far more bearing on our lifespan.

Here I am thinking of the less explored perspective:



> "One of the tragic outcomes of loneliness is that people turn to their televisions for consolation: two-fifths of older people report that the one-eyed god is their principal company."

db1 1 day ago 2 replies      
Anecdote: I used to work from home and pretty much sat down the whole day. I would still go for a run once or twice a week, but I would quickly experience a sharp pain in my lower back that made it really hard to run. Some Googling led me to believe that my tight hip flexors might be causing my back pain, and that sitting down all day could lead to tight hip flexors. I've since started working for a couple of hours per day at a ghetto standing desk and my back pains are pretty much gone.

Another thing that I really highly recommend is Joe DeFranco's stretching programme, Limber 11 (https://youtu.be/FSSDLDhbacc). You can do the whole thing in about 10 minutes, and it leaves your whole body feeling nice and relaxed.

EternalData 1 day ago 0 replies      
My solution to this is to set myself weekly goals in terms of steps. It changes your routine when you do that. At 12.7k steps a day, I basically have to elongate my walking commute and find creative ways to stand up and move all day.
EGreg 2 days ago 3 replies      
So they found that the group which sat for 10 hours but did 30 mins of exercise had cells just as robust as the group which ... what, sat around for 2 hours?

Where are the details?

Don't be fooled. Other studies showed that exercise doesn't negate the damage done by sitting:



Get off your butt, especially if you work on a computer. Take breaks to alternate between mental and physical activity.

ergot 1 day ago 0 replies      
I do yoga for my back and ensure there's good lumbar support on any chair I sit on. They say sitting is the 'new cancer' and prevention is often the way to go. Here's an interesting article on some yoga exercises you can try for back pain: http://www.buzzle.com/articles/yoga-exercises-for-back-pain....
kingkawn 1 day ago 0 replies      
This article (only a summary in the link below because we still live in the dark ages of research paywalls) from the Annals of Internal Medicine claims via meta-analysis that exposure to sitting will harm your health regardless of exercise.


pizza 1 day ago 0 replies      
So, telomere length as a measure of aging is something that keeps popping up from my lay-person perspective. It seems important to know about, and I have little awareness of my lack of awareness. So, those of you who live and breathe telomere research: could you shed some light on my ignorance?

- what "big-picture" information governs the "big-picture" processes?

- critical details swept under the rug with my un-nuanced understanding of the hypothesis, (e.g. "telomeres do not shorten uniformly/monotonically/predictably")?

- how does empirical research concerning telomeres compare to the state (or even future) of our understanding/control of aging?

conjectures 1 day ago 1 reply      
For the time constrained: go running. You can do it almost anywhere, with minimal equipment. It's a means of transport. It's very efficient in terms of calories per minute. Downside is you end up looking like a runner not a buff dude. You can level up the intensity if you have less time etc.
iask 1 day ago 0 replies      
I bought a teeter hangup about a week ago. So far it helps. After a long day, I spend 5 minutes inverted.

What I also do is running, in the office. Yes! Our building is pretty big; whenever I go to one of the offices on the far side, I do some running. I even sprint up the stairs when I can.

taohansen 1 day ago 1 reply      
This doesn't hold up across all sedentary modes of living. Monks sit all day but appear in some cases to have cells absurdly younger than their chronological baseline.
lightedman 1 day ago 1 reply      
I plan to live to be as old as the mountains, this is why I rip apart mountains, manually.
tomerific 1 day ago 0 replies      
All, this isn't a legitimate news source -- heck they're not even a legitimate scientific group. It is a dude that is republishing some data.
AhtiK 1 day ago 1 reply      
Jumping rope: 10 minutes of jumping can be as effective as 30 minutes of running. Super cheap. Can travel with you anywhere. Does not depend on weather.
roryisok 1 day ago 1 reply      
I have a standing desk for work, and I feel it has improved my health noticeably, but I still am sceptical of studies like this.
bbarn 1 day ago 0 replies      
Except for a few exceptional cases, eating right and exercising helps almost everyone feel and look better.
jMyles 2 days ago 4 replies      
I'm reluctant to be this commenter, but this honestly reads like "sky is blue" kind of thinking.

Worse: it is likely that nobody knows the "best" formula for physical-activity-unto-staying-young. Think about how difficult a problem this is.

And even if there were a plausible, scientifically defensible answer in terms of the effect of specific routines on a sample population, it's still very difficult to generally apply it.

There's room for good common sense here: do what makes you feel good, what seems to increase your fitness, and what you can do without causing serious injury.


intrasight 1 day ago 0 replies      
Stand or sit? Which is worse for my butt? If I sit all day, I'll have one of those squashed flat butts. But if I stand all day, will I have droopy butt?
rubicon33 2 days ago 3 replies      
"We found that women who sat longer did not have shorter telomere length if they exercised for at least 30 minutes a day"

In other words, you can't exercise away an unhealthy lifestyle.

That's very similar to diet. In fact, the big 3 (diet, exercise, and sleep) taken together are far greater than the sum of their parts.

As a developer with a full time job, and side projects, I can easily find myself sitting in a chair for 12+ hours a day. I recently purchased a standing desk, and will be using a timer to remind myself when to break for 15-20 minutes for outdoor exercise.

I have forced myself to limit eating out to once per week. And use a sleep tracker to track my sleep, holding myself to getting the right amount, on average, every month.

Rust vs. Go ntpsec.org
475 points by ingve  2 days ago   562 comments top 53
wiremine 2 days ago 12 replies      
From ESR's conclusion:

> For comparison, I switched from Rust to Go and, in the same amount of time I had spent stuggling [sic] to make even a <100 LOC partial implementation work, was able to write and test the entire exterior of an IRC server - all the socket-fu and concurrency handling - leaving only the IRC-protocol state machine to be done.

I don't think this is an unusual experience. I'd consider myself a journeyman programmer with 16 years of experience and probably a dozen languages under my belt. Learning Go was unique: after 3 days I felt productive; after 2 weeks I sort of looked around and said "is this it?" Go gets out of your way for the most part, and performs as advertised.

For a lot of programmers, for a lot of projects, that's exactly what you want.

I've attempted to learn Rust a few times, and it always feels like I'm moving uphill.

Rust is awesome for a number of problem spaces, so I'm not knocking it. And Go _isn't_ awesome in a lot of ways (vendoring, I'm looking at you). But it feels like there are a lot of core, language-level things Rust needs to improve to attract more non-genius developers.

pcwalton 1 day ago 2 replies      
The concurrency complaint is strange, as almost nothing in it is correct.

1. Rust never "course corrected" to implement CSP, and as far as I can tell ESR is making that up.

Rust actually implemented channels and tasks as builtins first, with no mutexes or shared state available--those came later. At some point we saw the opportunity to reduce the complexity of the language and move the language closer to the metal by moving channels and threads to the library instead of keeping them as primitives. Mutexes and reader-writer locks followed a year or two into the language's development--in fact, they were partially implemented on top of channels at first. Indeed, the motivation was essentially "hey, look at this neat thing I can do with borrowing: safe mutexes!", not "let's get rid of channels".

2. The idea that channels are going away is also untrue. I've heard a couple of people wish that channels could be removed from the standard library and moved to the nursery, since they're a fairly complex implementation and embedded users might want a slim libstd. But nobody has ever floated the idea of channels just vanishing outright in favor of shared state and mutexes: it's absurd. Moreover, the standard library is stable and must be supported going forward, so we can't make that change even if we wanted to.

3. Go has mutexes as well. They're right in the library in the "sync" package. They're fairly idiomatic too: embedding a mutex in your structure as an anonymous field so that your structure can inherit the Lock() method is common.

4. ESR needs to be clear about exactly what defects can arise with mutexes. If he were to enumerate them, he'd find that Rust's type system is carefully designed to avoid them.

mcguire 2 days ago 3 replies      
Rust and Go are not readily comparable. Rust is a systems language aimed at the same domain as C++ and, to an extent, C. Go is more akin to Java and Python. ESR almost admits this:

"Latency and soft-realtime performance

"Again, zero-overhead abstractions and no stop-the-world GC pauses give Rust a clear +1.

"It is worth noting that Go came nearest disqualifying itself entirely here. If it were not possible to lock out Gos GC in critical regions Rust would win by default. If NTPs challenges tilted even a little more towards hard real time, or its critical regions were less confined, Rust would also win by default."

As an aside, if ESR couldn't get Rust's ownership model, I wonder how long it took him to internalize how not to write memory bugs in C?

SwellJoe 2 days ago 4 replies      
I'm not quite willing to dismiss Rust merely for being hard to learn, but after poking around in both Go and Rust for a few weeks, I feel like I could build a project in Go (slowly, and with a lot of googling), but I don't feel like I could build anything in Rust without a lot more learning.

Rust has the same problem as C++ (to a lesser degree; maybe it'd be more fair to compare it to Java), in that if you're a casual coder who dips into a bunch of different projects in a bunch of different languages without ever really specializing, it can be overwhelming. Go is much more readily accessible; partly because it is less ambitious, and partly due to philosophical differences. I can read most Go code, today, having only worked through "A Tour of Go" on the website; I can't even begin to read Rust code after similar time/effort invested.

So, yeah, Rust really is hard. It may be necessary complexity to solve the problems they wish to solve. But, it means I'm unlikely to ever have the time to invest to make it a part of my toolbox, unless it becomes my primary job; whereas I'm already almost there with Go. I'll probably build something real in Go within the next month or two. It can be something I do for fun, in my spare time, and I can expect to get useful results.

I want to like Rust. I'm just finding it hard to get to know Rust.

didibus 2 days ago 1 reply      
I keep going back to the Simple Made Easy talk by Rich Hickey, creator of Clojure.

Easy vs Hard as in:

- Easy to learn
- Familiar looking
- I know most of it already

vs

- Hard to learn
- Nothing works as I'm used to
- It's all new to me

Simple vs Complex:

- Its parts are self-contained
- The parts don't depend on each other
- Quick to get something done once you understand it

vs

- Can't separate the parts from the whole
- Each part depends on other parts
- Slow to get something done even if you understand it

Most programmers and businesses value Easy over Hard because they think about themselves, not the delivery of great-quality software. If you focused on delivering great-quality software, you'd hopefully conclude that easy vs hard doesn't matter as much as simple vs complex.

Having said that, I think Go is easy, and for the most part, it is also pretty simple. While Rust I'd say is hard, but also very simple. It's simple in Rust to make correct software, but not easy. C is easy, but complex. C++ is kinda hard, and also complex.

So both Go and Rust are great improvements. Go chose to still be easy, at the detriment of how far it could innovate, but it still innovates. This approach is pretty smart, because what will probably happen is that a lot of people will move to Go, as it is not a very hard change from what they know, and then they will move to Go + 1, and then Go + 2, and eventually they'll end up at Rust, or something similar. Doing it that way is easier.

Ericson2314 2 days ago 1 reply      
> Learning curve

No pain, no gain, but sure.

> Translation distance

62K lines sounds like typical C with no code reuse to speak of. A clean-slate rewrite, I'd hope, would be much smaller, though admittedly the talk of corrode and c2go points in the other direction.

> Concurrency

No mention of Rust being data-race-free and Go not; unacceptably lax.

> > While I give the Rust designers credit for course-correcting by including CSP, Go got this right the first time and their result is better integrated into the core language.

Wat. Rust was originally all about CSP too, but they moved away from that because there was no need to stay. You can still do all the channel stuff you want, but there's little benefit to forcing that idiom. Go's special support for this is like... special-cased generics, right?

> epoll

Mio? (Sure, futures-rs is not yet ripe, but mio alone is, last I checked.) Overall, ESR seems to sprinkle in a lot of hatred of dependencies (bloated standard libraries are such a crutch around not having a good ecosystem), which I personally find laughable, as the brightest future of FOSS is the large-scale code and abstraction reuse that Haskell, Rust, and the like are just making mainstream.

hannibalhorn 2 days ago 1 reply      
It's worth noting that the author is Eric S Raymond, who we all know of...

I'd definitely agree that at this point Go is better for network services, and the learning curve / documentation / infrastructure of Rust is a bit scattered. I believe that as Rust matures it'll be better for the non-server stuff (no GC, etc.) and eventually become competitive with Go in this space. I'm quite fond of both languages, though!

lhnz 2 days ago 2 replies      
I think this really overemphasises the difficulty of writing code in a different programming language. The hard part is generally finding a good solution to the problem you're working on...

When I've worked with Rust, 90% of the time I found it just as easy as working with something like JavaScript or Python. However 10% of the time it becomes much much much more difficult. And those times I often end up going in the wrong direction, reading documentation and trying to apply a solution which doesn't fit (i.e. no syntax that will make things work because I am fundamentally doing the wrong thing).

For example, the other day I tried solving a problem in which I had to do an in-place matrix transposition on some slices and I was getting a bunch of compile-time errors for everything I tried. I googled later on and found this [0], which makes it seem like I would have needed to use `unsafe`, however maybe that's just a misunderstanding and there is a safe way.

People are making it out to be harder than it actually is. The semantics aided by the compile-time errors make most problems quick to solve. It's very expressive and gives types which help to avoid fragile code (e.g. `Result` and `Option`). The only times that it becomes difficult are when you need to reach for a solution but do not know what to search for, and I think this could be improved if some of the errors linked to related topics, and once the IDE support improves.

[0] https://athemathmo.github.io/2016/08/29/inplace-transpose.ht...
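
To illustrate the point above about `Result` and `Option` (a small sketch of my own, not from the comment): the types force the empty and malformed cases to be handled explicitly, which is where the fragility usually hides in other languages.

```rust
// The type says "maybe no answer" -- no silent NaN, no exception.
fn mean(xs: &[f64]) -> Option<f64> {
    if xs.is_empty() {
        None
    } else {
        Some(xs.iter().sum::<f64>() / xs.len() as f64)
    }
}

// collect::<Result<_, _>>() stops at the first unparseable number,
// and `?` propagates that error to the caller.
fn parse_and_mean(input: &str) -> Result<f64, std::num::ParseFloatError> {
    let xs: Vec<f64> = input
        .split_whitespace()
        .map(str::parse)
        .collect::<Result<_, _>>()?;
    Ok(mean(&xs).unwrap_or(0.0))
}

fn main() {
    assert_eq!(mean(&[]), None);
    assert_eq!(mean(&[1.0, 3.0]), Some(2.0));
    assert!(parse_and_mean("1 2 x").is_err());
    assert_eq!(parse_and_mean("1 2 3").unwrap(), 2.0);
}
```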

kibwen 2 days ago 0 replies      
The latter half of the post is pasted from a previous blog post last week, which has been discussed at length at https://www.reddit.com/r/rust/comments/5nl3fk/rust_severely_... and https://news.ycombinator.com/item?id=13385530 .

TL;DR: we have some concerns. :P

didibus 2 days ago 0 replies      
I find his conclusion strange, he counted the score as 3 points for Go in ease of learning, ease of converting a C code base to it and concurrency. He also gave Rust 3 points in zero-overhead deployments, better latency and real-time performance, and better security and safety.

I agree 100% with this scoring. I'm not sure though why he weighted the 3 pros of Go as more important for NTPsec, since the goals of the project are: "Our goal is to deliver code that can be used with confidence in deployments with the most stringent security, availability, and assurance requirements." Given this goal I would personally say that safety, security, deployment ease and performance are critical, which given his pros and cons would make Rust a better candidate.

Animats 2 days ago 2 replies      
One big advantage of Go is that core packages exist for most of the things you might want to do on a server. Those packages are mostly maintained by Google employees and are used within Google, so the packages are exercised and maintained. Rust, like Python, has a large collection of packages maintained by random people and not under organized maintenance or testing.

"Corrode" needs to get a lot better before it is useful. Look at the Rust it generates.[1] It transliterates C into unsafe Rust. That's not too helpful. Doing such a translation well is a hard problem. The translator will need to figure out exactly what needs to be mutable where, which means building a graph of the program across function boundaries.

[1] https://www.reddit.com/r/rust/comments/4rv0uh/early_stage_c_...

dom96 2 days ago 3 replies      
It seems that the author places a lot of emphasis on the maturity and future prospects of various libraries. I'm surprised that Rust was so quickly dismissed as tokio seems to have a bright future ahead of it.

What I do think is a fair point is Rust's learning curve. With that in mind I would like to suggest a third alternative: Nim.

Support for epoll/kqueue/IOCP has been in Nim's standard library for a while now and is already very mature. The learning curve is probably somewhere between Rust and Go. In addition Nim also offers a GC that is predictable and controllable.

tree_of_item 2 days ago 1 reply      
I don't understand why esr thinks his opinion after 4 (four!!) days of Rust is interesting. Am I the only one here who spends a lot longer with a language than that before passing judgement on it?
tptacek 2 days ago 4 replies      
This is confusing. By far the most words of any concern in this piece are dedicated to a root concern that Rust doesn't have a select/poll abstraction, and that idiomatic network code simply allocates a task per socket. But that's true of Golang as well; not just true but distinctively true, one of the first things you notice writing Go programs.
squiguy7 2 days ago 0 replies      
> By four days in of exploring Go I had mastered most of the language, had a working program and tests, and was adding features to taste.

I felt this way too but then realized the advanced topics of Go really take time to get right. Writing correct concurrent code is non-trivial as you need to tailor your solution to the problem at hand. There is no one size fits all way to write concurrent code. In some cases a mutex makes sense whereas channels work better in others.

Programmers must be diligent when using these tools as well because of deadlocks, data races, or go routine leaks. I think this is an area Rust excels in with its ownership model that eliminates these problems.

jupp0r 2 days ago 0 replies      
I find the argument about not being productive in a language within a matter of a few weeks to be quite weak. Programming languages are tools. Tools take time to learn.

Nobody would consider a flute to be an instrument superior to a violin because it can be learned faster. Rust is a tool that takes time to learn. Is it worth it? How could ESR decide if not in hindsight?

My personal experience is that understanding and applying Rust's borrowing and ownership made me a much better systems programmer in C and C++. Would I write a production NTP server in Rust or Go? Definitely Go, but that's no reason to dismiss the concepts and the remarkable engineering of Rust.

eridius 2 days ago 0 replies      
ESR seems to care really strongly about concurrency things being "primitives" in the language. Why? There's nothing wrong with them being part of the crates ecosystem. Maybe in 10 years the crate you chose won't be under active development, but it's not like crates will just mysteriously break once they reach a certain age. As for the language itself being stable, sure, a program you write today might not compile using the latest compiler in 10 years, but it should still compile using the last 1.x release, so it's not like you won't be able to compile your project in 10 years.
vog 1 day ago 1 reply      
I'm surprised this article is from 2017. It seems that esr doesn't account for the latest (huge) improvements of Rust during the last year.

Given that Rust is younger than Go, ignoring a year's worth of development makes for an especially unfair comparison.

The issues he cited were from 2014 and 2015, regarding Rust for networking code. Moreover, on his blog post (http://esr.ibiblio.org/?p=7294) he received a clear answer that he seems to have ignored:

 Stefano: [...] For epoll: There is a crate. You could also use future-rs or tokio. So you have three possibilities.
Finally, the author of another networking project, TRust-DNS, used these Rust features and was quite satisfied with the elegance of the resulting code:


justinsaccount 2 days ago 1 reply      
> For comparison, I switched from Rust to Go and, in the same amount of time I had spent struggling to make even a <100 LOC partial implementation work, was able to write and test the entire exterior of an IRC server - all the socket-fu and concurrency handling - leaving only the IRC-protocol state machine to be done.

That's basically my experience with rust and go.

go is stupid, but I can write it. It's stupid needing to write the same for loop over and over again, but it's an obvious for loop that doesn't have any surprises, and everyone's for loop looks the same.

rust is smart, but I can't write anything in it (yet). Some of rust's features would be nice to have in go.

eridius 2 days ago 1 reply      
Something I'm surprised ESR didn't consider is the fact that Rust lets you do a piecemeal translation. You can rewrite individual functions in Rust and call them from C very easily. So you don't need to rewrite all 62KLOC at once.
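
A sketch of what that piecemeal approach looks like (the function name `ntp_clamp_offset` is made up for illustration, not real NTPsec code): a single rewritten function exported with the C ABI, which the remaining C code declares as `int64_t ntp_clamp_offset(int64_t, int64_t);` and calls like any other object file when the crate is built as a staticlib or cdylib.

```rust
// Hypothetical example of one C function rewritten in Rust.
// #[no_mangle] keeps the symbol name C-visible; extern "C" uses the C ABI.
#[no_mangle]
pub extern "C" fn ntp_clamp_offset(offset_us: i64, max_us: i64) -> i64 {
    // Ordinary safe Rust inside; the C side just sees
    // `int64_t ntp_clamp_offset(int64_t, int64_t)`.
    offset_us.max(-max_us).min(max_us)
}

fn main() {
    // Callable from Rust too, which makes incremental testing easy.
    assert_eq!(ntp_clamp_offset(500, 100), 100);
    assert_eq!(ntp_clamp_offset(-500, 100), -100);
    assert_eq!(ntp_clamp_offset(42, 100), 42);
}
```
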
pimeys 2 days ago 1 reply      
I disagree with many points in this article, but having been writing Rust at my company since summer, I can definitely say it is much harder than any other language I've used. Just when I'd learned to build a synchronous service with it, Tokio was released, and I've been banging my head against the wall trying to grasp its concepts.

But then again, it just takes time. And I have some crazy two month uptimes with my rust services eating that constant 8 megabytes of ram...

alkonaut 2 days ago 1 reply      
This resonates with me in a lot of ways. I really object to a lot of Go's design, but I also feel Rust is too much of a struggle at the moment.

When I write performance critical code in Rust I can see how I'm rewarded for all the mental gymnastics and token salad of Rust.

But when I'm writing something that's difficult or complicated in the business/algorithm sense but not performance sensitive, then I just can't let the syntax obscure the semantics.

The code has to look as simple and readable as ruby/go/Swift/c#/Python/kotlin/Nim -- or the language has failed me.

Right now I feel it's often the opposite. When I write low level Rust code the type system is usually ok. For higher level code I usually find I need more noise and type overhead (more explicit lifetimes, more Box<Rc<T>>). That I think is a big problem.

Rust, like C++ is "you pay for what you use" - but what they mean is performance. I wish it was readability. I almost wish the default was a higher level language with lots of heap allocation etc, and you explicitly declare when you don't want that (c.f C# unsafe/stackalloc and similar)

k__ 2 days ago 1 reply      
When I first read about Rust and Go they were both advertised as C/C++ killers.

But I have the feeling Go became more of a Python/Node.js alternative.

general_ai 1 day ago 1 reply      
I wrote less than 2 KLOC of Go in my life (at Google, because I needed a web server for an internal dashboard, and Java was too much pain), and one thing is for sure: while I did not like the language per se, I did not hate it either, and I was able to get used to it in about 3 days, which is how long it took me to write the code. I literally knew nothing about it at the outset, and I felt comfortable in it after writing less than 2KLOC. I can't think of another high performance language with such a short learning curve.

I've tried Rust and concluded that it solves problems I don't have. I work mostly in C++ (more like C+, I tend to keep my code pretty simple), and with unique_ptr<>s, shared_ptr<>s and judicious use of concurrency primitives I can avoid pitfalls well enough that I maybe encounter a self inflicted concurrency or memory management issue once every six months, if that. And it pays amazingly well, and there's an enormous number of libraries available. That said, when I need a high performance web server, I'll use Go without hesitation. When performance doesn't matter, Python and Flask are pretty indispensable.

CalChris 2 days ago 2 replies      
Rust is hard to write but I'll grant that once written it is relatively easy to understand. That's kinda the opposite of C++ where it's easy to write and hard to understand (although it's easy to think you understand).

I appreciate the safety+performance value proposition in Rust. More cathedral, less bazaar.

mike_hearn 2 days ago 4 replies      
I wonder why it was only Rust or Go he considered.

He could also use Java. It has a 'pauseless' GC in Fedora Core's OpenJDK (Shenandoah). It has an epoll abstraction in the core library. It has libraries for doing things like async DNS requests already.

It would be nice sometimes if these A vs B comparisons were a bit wider.

jeffdavis 1 day ago 0 replies      
I am learning rust. To me, the amazing thing is the lack of runtime.

If you are writing a library to render the latest web image format, or implement the latest secure network protocol, then your choices are limited and Go is not an option.

Sure, you could write it in Go and then Go programmers could use it.

But if you write it in C or rust, it could be the de facto library for nearly all languages.

The article is a good one, but doesn't really influence my personal opinion much.

merb 2 days ago 6 replies      
well actually he says that DNS lookups etc need to be asynchronous, but aren't Go's channels blocking?

I mean he also says that Rust has no good epoll/select abstraction, which is wrong. And also "a welter of half-solutions in third-party crates but describe no consensus about which to adopt". I think he didn't research this well enough.

I mean it would be the same as saying that there is no good way to create a custom non-blocking DNS server with pure Java/Scala (no external library used). Rust has https://tokio.rs/ and everybody who even looked into network services with Rust has stumbled across it.

Well, maybe for his goals and contributor base Go is preferred, and that is ok, but this article is more or less written as a rant against Rust by someone who does not agree with the general design/documentation/whatever of Rust.

In fact I also think that the tooling of Rust sucks, even though Rust is a language which would shine with extremely good tooling. I mean, Rust has a godlike package manager, and all the tooling gets better and better.

Compared to Go, where the package management just sucks but the tooling, even IDE completion, is extremely great.

P.S.: I would not want to have a garbage-collected NTP server. But for a lot of things I think Go would be a good fit, even when it would not be my tool of choice.

spullara 2 days ago 1 reply      
They are two entirely different use cases. Rust is for replacing C/C++ with a safer alternative. Go is basically for developers that never fell in love with Java or C# but need the same kind of language.
deathanatos 2 days ago 1 reply      
> things that should be dirt-simple in Rust, like string concatenation, are unreasonably difficult

This is how hard adding String objects is in Rust:

 a + b
A more full example shows only small complexity, really, and I include it only for completeness and fairness:

  fn main() {
      let a = String::from("Hello,");
      let b = " world.";
      let c = a + b; // this is the only part that concats, really.
      println!("{}", c);
  }
The String::from call converts from a &str, which is effectively a pointer and length to UTF-8 data, to a managed string, more akin to std::string in C++; &str doesn't implement +, I presume because it doesn't know what the resulting type should be (and doesn't presume).

A C programmer should easily comprehend why you can't simply add two char *s together; compare:

  char *new_string = malloc(strlen(a) + strlen(b) + 1);
  if (!new_string) { abort(); }
  stpcpy(stpcpy(new_string, a), b);
  return new_string;
(that's just the `a + b` line from Rust, in C, essentially)

(hopefully I got that right. stpcpy avoids an extra iteration through a that strcat cannot; I do know about strcat.)

Essentially, in all of Python/Go/Rust, if you have the right types, string concatenation is `a + b`.

> Contemplate this bug report: Is there some API like "select/poll/epoll_wait"? and get a load of this answer:

> > We do not currently have an epoll/select abstraction. The current answer is "spawn a task per socket".

It would need to be cross-platform; so I can understand why Rust isn't there yet in the standard library. There are third-party libraries out there that do this. (And C has nothing here, either)

You can always call epoll directly, and write the required abstraction yourself. (There are even third-party wrappers for calling epoll, so you don't even have you write that yourself either; see the excellent "nix" library.) If you want a 10 year solution

> Relatedly, the friction cost of important features like the borrow checker is pretty high.

At first, this is true. Then I started realizing that all the "friction" of the borrow checker was it pointing out serious bugs in my code.

Regardless, NTP would be better off in either Rust or Go instead of C, IMO.

jnwatson 2 days ago 1 reply      
I'm curious if some of this is the fact that a lot of senior folks are learning new languages for the first time in a long time. I think we unnecessarily constrain language design if they have to look like everything that came before.

I remember learning Java was a piece of cake. Go was too. Both were remarkably similar to previous languages I knew. Rust was/is a lot more difficult, but nowhere close to the mindfuck that is Haskell or ML.

I mean, is it that bad that it takes a month to really learn a new language if it can provide worthwhile benefits?

luckystarr 1 day ago 0 replies      
It's true, coding in Rust makes you feel like a total moron at first. The experience actually reminded me of the time when I first picked up programming.

It only takes around a week or so though, after that you can be surprisingly productive. I never would allow myself to write the code in C that I wrote in Rust. I'm just not clever enough.

VexorLoophole 1 day ago 1 reply      
The main problem I have with Rust is that there is no real book for beginners. Sure, there is the Rust Book and Rust by Example, but I need something to guide me. I don't want to slap things together; I want to learn how to do it right. All the Rust resources I found only told me how to do XXX in Rust, nothing along the lines of: this could be a real-world example for YYY. In examples, you will always only find excessive .unwrap() use and functions which don't borrow anything.

Had the same feeling with Go. Looking into big projects like syncthing helped a bit, but first I have to get a better hang of the language. Books like "Atomic Scala" or "Clojure for the Brave and True" take your hand and guide you through the language. Even when taking a peek into books like "21st Century C" I learned more than when I simply forced myself through Go's "Gopl" or Rust's Book. I know I am probably not the right target group, but I really want to learn one of those languages in my free time, and I had a really hard time doing so.

After understanding Go better, I still want to switch to Rust. The build system with cargo pleases me and looks somehow cleaner than in Go. I just create a project with 'cargo new XXX --bin' and start coding and building some kind of lib in my project. No problem. In Go there are 100 different ways to start a project, and most of the time you will only find people complaining about vendoring etc.

shmerl 2 days ago 0 replies      
> The amount of complexity and ritual required by Rust's ownership system is high and there is no other language I know of that is really good preparation for it.

C++ helps, if you are familiar with RAII usage in it.
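
A small sketch of that parallel (my own illustration): Rust's `Drop` is essentially a C++ destructor, and moving a value is like `std::move`, except the compiler rejects any later use of the moved-from value.

```rust
struct Connection {
    name: String,
}

impl Drop for Connection {
    fn drop(&mut self) {
        // Runs deterministically when the owner goes out of scope,
        // exactly like a C++ destructor (RAII).
        println!("closing {}", self.name);
    }
}

fn consume(c: Connection) {
    // `c` is moved in; it is dropped at the end of this function.
    println!("using {}", c.name);
}

fn main() {
    let conn = Connection { name: String::from("db") };
    consume(conn);
    // `conn` was moved above; using it here would be a compile error,
    // which is the part C++'s moved-from objects don't enforce.
}
```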

iopq 2 days ago 0 replies      
This isn't anything different from the other two articles he wrote. What's the point of writing a third one saying the same things?
gwenzek 1 day ago 0 replies      
The title should have been "Rust vs. Go, by a Go developer who doesn't know Rust". Would have saved me 5 minutes.
julian_1 2 days ago 0 replies      
Stronger type systems are better for production systems. They give more guarantees and mean you need to write fewer unit tests. Rust's sum/algebraic types are a move in the right direction, and so is pushing resource-usage invariants into the type system.
jug 1 day ago 0 replies      
I think Go and Rust are interesting languages because both tackle the "C successor" problem and both feel like good solutions, yet so different from each other that it almost feels like they aren't meant to be used for the same problems. How could that happen?

I think it's largely because C and C++ have been used in such a wide variety of applications as "good enough" languages, but ones where there was so much room for improvement, for modernization. We're talking about an old assembly-language layer and an object-orientation kludge on top of that layer. As soon as you do improve on these, you observe that the application space to cover for successors is huge, so huge that there is plenty of room for two different languages.

Developing in Go doesn't feel like Java or C# to me. It feels more like C (and definitely not C++) and I love that. It's so simple, yet it produces native code. It's like what you'd get if you took C and from the very start decided the productivity-vs-performance trade-off favored a garbage collector, preferring productivity. Then looked at how common and underutilized multicore CPUs are, and at the same time how hard synchronization is to get right. If you then work from those problems and realize the GC is your Achilles' heel and simply try to optimize the heck out of it, I guess you get something like Go.

Meanwhile, Rust during its design phase must have been in a very similar place to where Google engineers were when designing Go. Once again, C/C++ were flawed and unsafe, cumbersome to work with. They took the very same productivity-vs-performance trade-off and this time saw that, no, we don't want garbage collectors; we prefer performance, supporting time-critical systems above all in the spirit of C, yet with safety. Then they looked at common crash issues and went by that, and then they got Rust.

Personally I prefer Go. It's the most fun for me to work with and feels like a very pragmatic language. Rust feels more like an academic language to me: ideas and exciting concepts allowed to materialize. It also has its place, but seems aimed more at niche scenarios like real-time systems and device drivers: low-level, critical stuff like that.

onionjake 2 days ago 0 replies      
I hope that the measure of whether something is good enough for the embedded space is not Android.

I have quite a few OpenWrt routers running with substantially fewer resources than the worst Android phones.

I don't know if Go would be able to meet the CPU/memory constraints in that environment... from the blog post it seems like that wasn't even measured?

scotty79 1 day ago 0 replies      
Semi-related, side-by-side comparison of how common things are done in rust and go:


disordinary 2 days ago 0 replies      
One of the interesting things about Rust is that it can link against C libraries (and expose C-compatible functions itself), so you can move a code base from C to Rust one module at a time.
adelarsq 1 day ago 0 replies      
Funny article. This is something like emacs vs vim by someone that knows only one editor.
yellowapple 2 days ago 0 replies      
I'm curious about whether and how Rust's "dealbreaking" situation compares to C/C++. Seeing as how Rust is designed to be a replacement for those two (so is Go, but it seems to be at a much higher level), I reckon that'd be the more fair comparison.

In particular, it's my understanding that epoll and select are system calls. In that case, just call them. Or is there really no means to do so from Rust (doubtful)? Sure, you'd have to wrap stuff in "unsafe" calls, but that's par for the course when interfacing with non-Rust software.

gok 2 days ago 1 reply      
So the author doesn't care about memory usage or throughput, except for a few "soft realtime" sections? Why would either Rust or Go be better for this task than something like Java or Python?
charlesholbrow 1 day ago 0 replies      
In the last post, didn't he say that he spent 15 years writing in 'C' and learning its ins and outs?

I don't think 4 days is enough to really evaluate a language. I'm not going to dismiss Rust based on this analysis.

maxekman 1 day ago 0 replies      
My favorite feature of Go is the immediate productivity that almost any developer can achieve within days of seeing the language for the first time. That adds tremendous business value to any project.
noway421 2 days ago 0 replies      
It feels like this guy will end up converting the codebase to both languages.

Which would ultimately be fun to follow

hoodoof 2 days ago 0 replies      
Hey wow this new technology will solve all our problems with that old technology!
EugeneOZ 2 days ago 4 replies      
Rust fans are trying to avoid "vs" articles; it would be great to see such an endeavor from Go fans too.
camus2 1 day ago 1 reply      
The problem with Rust is that Rust cannot be understood by someone who never had to do manual memory management. So people coming from Java, Ruby, Python, JavaScript, or PHP who have never used C or C++ will never understand why Rust handles variables the way it does. Rust cannot be popular among these people because Rust doesn't solve their immediate problems in terms of performance.

The Rust team often pretends to be open to suggestions on how to make Rust easier to learn, but that's impossible given how memory management is done in Rust, especially with regard to the people I talked about in the previous paragraph.

Only a developer familiar with C or C++ can appreciate Rust semantics. The others cannot.

hnbro 1 day ago 0 replies      
it's unfortunate that rust seems barely more than a compiler. it's... empty. the "ecosystem" feels like a namespace-free dogpile of college spring break experiments. a wasteland of pre-pre-pre-pre-alpha stuff. it'll be 10 years before it improves to even being underwhelming.

in go, you can download it and have the pieces to actually do useful stuff right away. legit productivity. not to mention putting together graphs or graph-like structures and other similar things aren't the obtuse sphinx riddles they are in rust.

user5994461 2 days ago 3 replies      
There are real world jobs requiring Go (and it's growing slowly). There are no jobs requiring Rust.

End of the comparison. That's all one needs to know if he's interested in a career in distributed systems.

hubert123 2 days ago 5 replies      
I cannot agree more with his conclusions about Rust; you can overlook a lot of problems, but the language itself is just such an arcane pain to deal with, and the author really brought this out: even just writing and printing a few lines of strings has possibly hours of gotchas in it. Add a little IO and you're looking at days. It's not a question of a lack of documentation; I just think the language is weird. I truly wanted Rust to be great, and I keep looking at it wanting it to be better, but it just isn't. Isn't it funny how in the real world, suddenly the so-lauded "type safety" really just doesn't matter that much in practice. A simpler language overall would have made a much bigger impact than any generics.
Stepping into math: Open-sourcing our step-by-step solver socratic.org
532 points by shreyans  1 day ago   172 comments top 17
analog31 1 day ago 8 replies      
This seems interesting because it addresses the issue of "show your work." Many years ago, I spent a semester teaching the freshman algebra course at the nearby Big 10 university. This is the course that you take if you don't get into calculus. My students were bright kids -- they were all admitted to the state flagship school -- but not mathematicians.

There was huge variation in the preparation that kids brought with them from high school. In particular, very few of them understood what "show your work" means. They were told "show your work," but nobody told them what it really entails. Is it just to provide evidence that you did some work, to deter cheating, or is it something else? Many of my students were taught "test taking skills" such as the guess-and-try method. So on one exam, a question was:

x^3 = 27

One student's work:

1^3 = 1

2^3 = 8

3^3 = 27

Answer = 3

I asked the professors to tell me what "show your work" means. None of them had a good answer! These were the top mathematicians in the world. I wanted to talk with my students about it, but I'm not even sure that my own answer was very good.

But if we did well in math, then we just know what it means. It's not just evidence that you did the work. It doesn't mean "turn in all of your chicken scratch along with the answers." It means something along the lines of supplying a step-by-step argument, identifying the premises and connecting them with the conclusion, in a language that is "accepted," i.e., that mimics the language of the textbook / teacher. In fact, the reason to read the textbook and attend lectures, is to learn that language. (It's not so different in the humanities courses).

At least, that's my take on it, as just one teacher with one semester's worth of experience.

In my view, a problem solving tool that actually addresses the process of building the argument and not just determining the answer, would be beneficial to students.

tgb 1 day ago 7 replies      
Has anyone done a study to see if this kind of aided solving actually helps students learn? I'm worried that "Eh, I'll just write this solution down today, I'm sure I'll learn it tomorrow" is what happens.

Awesome software though.

jorgemf 1 day ago 1 reply      
Some years ago I tried to do something a bit more complex: http://telauges.appspot.com/mathsolver/

My idea was to use planning and A* search to solve any type of math problem, even create proofs for things like the quadratic equation https://en.wikipedia.org/wiki/Quadratic_equation . I gave up after I learned that the search space was so big that it was impossible to solve. If I had to do it today I would explore deep learning as a heuristic, but I think it probably won't work.

I always like to see these types of projects; I hope they succeed where I failed.
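
A much smaller cousin of that search idea can be sketched in a few lines. This is a toy, nowhere near the generality the parent attempted: it breadth-first searches over rewrite steps restricted to equations of the form a*x + b = c (assuming a != 0), and the path found doubles as the step-by-step "shown work".

```python
from collections import deque

# State (a, b, c) encodes the equation a*x + b = c, with a != 0.
# Each move is an algebraic rewrite; breadth-first search finds the
# shortest sequence of rewrites, which doubles as the "shown work".

def solve(a, b, c):
    start = (a, b, c)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (a, b, c), steps = queue.popleft()
        if a == 1 and b == 0:
            return c, steps            # solved: x = c
        moves = [("subtract %s from both sides" % b, (a, 0, c - b))]
        if a != 1:
            moves.append(("divide both sides by %s" % a, (1, b / a, c / a)))
        for desc, nxt in moves:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + [desc]))
    return None

x, steps = solve(2, 3, 11)
# x == 4.0; steps == ["subtract 3 from both sides", "divide both sides by 2"]
```

Replacing the breadth-first queue with a priority queue ordered by a learned heuristic is essentially the A* variant described above.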

yequalsx 1 day ago 2 replies      
It's a nice program and I can see it being both helpful and harmful. From my perspective, as a teacher of mathematics at a community college, students are unwilling to engage in thought about a problem. If they can't see the solution in a few minutes then they want to look at a complete solution. Mostly they are not willing to struggle through a problem.

I vacillate on whether, with the advent of computer algebra systems, it is necessary for students to master algebraic manipulations. I've started to think that conceptual questions are better.

For instance: give me an example of an equation with no solution. Explain how a baseball player can have the highest batting average in the first half of a season and in the second half of a season, but not have the highest overall average. Draw the graph of a function defined on [0, 1] that has no maximum or minimum.

Students can't do those types of problems either. They are very frustrating problems for students because they require you to really think about what the words mean and to consider extreme situations. So I've reverted back to the traditional style of teaching math: manipulation of symbols.

stdbrouw 1 day ago 0 replies      
Worked on something like this as a hobby project a while ago, but to avoid the complexities associated with solving arbitrary exercises, I instead set it up as an algebra exercise generator: you start with the solution, which you then (algorithmically) obfuscate by splitting terms and recombining things for a couple of rounds. Never got around to finishing it, but the neat thing is that you've already generated one possible way to solve the problem: it's just how you generated the exercise, in reverse.

Another thing that's quite easy to do is to check intermediate steps in a solution for equivalence. You don't even really need a CAS; just brute-force the problem by probing the equations: set all variables to randomly chosen values n times, and if the sets of results are the same for both equations, you're good.
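
A hypothetical sketch of that probing check, for the single-variable case. It is not a proof of equivalence, but a false positive would require every random probe to land near a root of the difference, which is wildly unlikely for typical algebra mistakes:

```python
import random

# Probe both expressions at n random points and compare within a
# tolerance scaled to the magnitude of the values.

def probably_equivalent(f, g, n=20, lo=-10.0, hi=10.0):
    for _ in range(n):
        x = random.uniform(lo, hi)
        if abs(f(x) - g(x)) > 1e-6 * max(1.0, abs(f(x))):
            return False
    return True

# A valid expansion vs. a classic student error:
ok = probably_equivalent(lambda x: (x + 1) ** 2,
                         lambda x: x * x + 2 * x + 1)   # True
bad = probably_equivalent(lambda x: (x + 1) ** 2,
                          lambda x: x * x + 1)          # False
```

The same trick extends to several variables by drawing a random value per variable on each probe.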

Anyhow, Socratic looks great and a great deal more advanced and useful than what I came up with, so kudos!

benbristow 1 day ago 4 replies      
I'm jealous of kids these days... homework would've been so much easier with this.

You could always use a calculator but the whole 'show your own working' catch meant you had to do it all manually. Not any more!

equalunique 20 hours ago 0 replies      
My academic math journey stopped at pre-calc, and I had been a C student for quite a long time. HS Algebra II would never have happened for me if I hadn't discovered XMaxima, an emacs-based CAS. Fortunately I took a Discrete Math course before dropping out of college, and it gave me a new admiration for math.

In spite of my weak math background, this has been the most enjoyable comments section on HN I've read so far.

therealmarv 1 day ago 1 reply      
Does anyone know if there is a good open source library for making equations (Latex, MathML) out of pictures like in their demo?
chriswarbo 1 day ago 0 replies      
Very interesting work, and well-explained in the post.

Like many others here, I suppose that in its basic form this would mostly be used for cheating on homework; although it would certainly be useful for those (few?) students who are truly motivated to self-learn the material, rather than just pass the tests.

One thing which springs to mind is "Benny's Conception of Rules and Answers in IPI Mathematics" ( https://msu.edu/course/cep/953/readings/erlwanger.pdf ), which shows the problem of only focusing on answers, and on "general purpose" problem sets. Namely, that incorrect rules or concepts might be learned, if they're reinforced by occasionally giving the right answer.

I think it would be interesting to have a system capable of some back-and-forth interactivity: the default mode would be the usual, going through some examples, have the student attempt some simple problems, then trickier ones, and so on.

At the same time, the system would be trying to guess what rules/strategies the student is following: looking for patterns, e.g. via something like inductive logic programming. We would treat the student as a "black box", which we can learn about by posing carefully crafted questions.

Each question can be treated as an experiment, where we want to learn the most information about the student's thinking: if strategies A and B could both lead to the answers given by the student, we construct a question which leads to different answers depending on whether A or B were used to solve it; that gives us information about which strategy is more likely to be used by the student, or maybe the answer we get is poorly explained by A and B, and we have to guess some other strategies they might be using.
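
A toy rendering of that experiment-design idea (the two "student strategies" here are my own illustration, not from any real system): given two hypothesised strategies, search for the cheapest question on which they predict different answers.

```python
from fractions import Fraction
from itertools import product

# strategy_a adds fractions correctly; strategy_b commits the common
# add-the-tops-add-the-bottoms error. To find out which one a student
# is using, search for the first question where the two disagree.

def strategy_a(p, q, r, s):
    return Fraction(p, q) + Fraction(r, s)      # correct addition

def strategy_b(p, q, r, s):
    return Fraction(p + r, q + s)               # the "mediant" error

def distinguishing_question(candidates):
    for p, q, r, s in candidates:
        if strategy_a(p, q, r, s) != strategy_b(p, q, r, s):
            return (p, q, r, s)                 # an informative probe
    return None

q = distinguishing_question(product(range(1, 4), repeat=4))
# q == (1, 1, 1, 1): on 1/1 + 1/1 the strategies predict 2 vs. 1
```

In a real system the candidate strategies would themselves be inferred from the student's past answers, and the search would prefer questions maximizing expected information gain rather than the first disagreement.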

Rather than viewing marking as a comparison between answer and a key, we can instead infer a model of the domain from those answers and compare that to an accurate model of the domain.

We can also use this approach the other way around, treating the domain as a black box (which it is, from the student's perspective) and choosing examples which give the student most information about it.

Steeeve 1 day ago 1 reply      
Now... the only thing remaining is to translate this to common core :).

I say that in jest, but doing so would make common core much easier for parents AND teachers to grasp. There's an enormous divide between those who get it and those who hate it, and providing parents/teachers with something that would help them understand the benefits of common core concepts would be a gigantic win.

aidos 1 day ago 0 replies      
That's so cool.

Reminds me of how different the learning experience is now. When we were at school (80s/90s), there was nowhere to turn if you didn't have the answer. My parents had an Encyclopedia Britannica set, so at least there was a paragraph to go on. It's amazing how good you became at fleshing out that paragraph into an essay :-)

gravypod 1 day ago 1 reply      
Now that this exists I think it's worth creating an open-source version of the TI-Nspire for engineers & mathematicians. Something based on cheap hardware that runs Linux and can implement this plus a theorem prover, to basically make the most handy lab calculator.
poseid 1 day ago 0 replies      
That feels like a nice application of AI in a way. We often use a computer that can help in making a plan (e.g. a kind of map or "steps", as here). This might be nice for helping to understand problem solving in general. Also, nice to see the project is in JavaScript; that means quite a few non-professional programmers could learn from it.
JotForm 1 day ago 0 replies      
This is such inspiring software.
MichaelBurge 1 day ago 1 reply      
People here keep saying this will change learning and be good for students, but the only real difference is that it's open source. You can already get step-by-step solutions for more types of problems from Wolfram Alpha, and you can already get API access if you're a third-party developer who needs it:


I don't think it will have any real effect.

GrumpyNl 1 day ago 0 replies      
It looks like Sheldon came through.
PyTorch Tensors and Dynamic neural networks in Python pytorch.org
444 points by programnature  2 days ago   83 comments top 18
Smerity 2 days ago 5 replies      
Only a few months ago people were saying that the deep learning library ecosystem was starting to stabilize. I never saw that as the case. The latest frontier for deep learning libraries is ensuring efficient support for dynamic computation graphs.

Dynamic computation graphs arise whenever the amount of work that needs to be done is variable. This may be when we're processing text, one example being a few words while another is paragraphs of text, or when we are performing operations against a tree structure of variable size. This problem is particularly prominent in certain subfields, such as natural language processing, where I spend most of my time.

PyTorch tackles this very well, as do Chainer[1] and DyNet[2]. Indeed, PyTorch's construction was directly informed by Chainer[3], though re-architected and designed to be even faster still. I have seen all of these receive renewed interest in recent months, particularly amongst many researchers performing cutting-edge research in the domain. When you're working with new architectures, you want the most flexibility possible, and these frameworks allow for that.

As a counterpoint, TensorFlow does not handle these dynamic graph cases well at all. There are some primitive dynamic constructs but they're not flexible and usually quite limiting. In the near future there are plans to allow TensorFlow to become more dynamic, but adding it in after the fact is going to be a challenge, especially to do efficiently.
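
A minimal pure-Python illustration of what "define-by-run" means (my own sketch, not code from any of these frameworks): the graph is simply whatever operations the program executes, so differently shaped inputs build differently shaped graphs.

```python
# The "graph" here is a trace of the operations actually executed;
# control flow (recursion over the input) decides its shape per input.

def tree_sum(node, trace):
    """Recursively reduce a nested tuple; `trace` records the op graph."""
    if isinstance(node, (int, float)):
        return node
    left = tree_sum(node[0], trace)
    right = tree_sum(node[1], trace)
    trace.append(("add", left, right))
    return left + right

t1 = []
tree_sum((1, 2), t1)                 # builds a 1-node graph
t2 = []
tree_sum(((1, 2), (3, (4, 5))), t2)  # builds a 4-node graph

# A static-graph framework would need these padded or unrolled to a
# fixed structure up front; in define-by-run frameworks the recursion
# above effectively *is* the graph construction.
```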

Disclosure: My team at Salesforce Research use Chainer extensively and my colleague James Bradbury was a contributor to PyTorch whilst it was in stealth mode. We're planning to transition from Chainer to PyTorch for future work.

[1]: http://chainer.org/

[2]: https://github.com/clab/dynet

[3]: https://twitter.com/jekbradbury/status/821786330459836416

smhx 2 days ago 1 reply      
It's a community-driven project, a Python take on Torch (http://torch.ch/). Several folks are involved in development and use so far (a non-exhaustive list):

Facebook, Twitter, NVIDIA, Salesforce, ParisTech, CMU, Digital Reasoning, INRIA, ENS

The maintainers work at Facebook AI Research

spyspy 2 days ago 3 replies      
This project aside, I'm in love with that setup UI on the homepage telling you exactly how to get started given your current setup.
programnature 2 days ago 1 reply      
It's actually not clear if there is an official affiliation with Facebook, other than some of the primary devs.
tdees40 2 days ago 1 reply      
At this point I've used PyTorch, Tensorflow and Theano. Which one do people prefer? I haven't done a ton of benchmarking, but I'm not seeing huge differences in speed (mostly executing on the GPU).
taterbase 2 days ago 1 reply      
Is there any reason this might not work on Windows? I see no installation docs for it.
EternalData 2 days ago 0 replies      
Been using PyTorch for a few things. Love how it integrates with Numpy.
theoracle101 2 days ago 2 replies      
Most important question: is this still 1-indexed? (Lua was 1-indexed, which means when porting code you need to be aware of this.)
vegabook 2 days ago 6 replies      
Guess there's no escaping Python. I had hoped Lua(jit) might emerge as a scientific programming alternative but with Torch now throwing its hat into the Python ring I sense a monoculture in the making. Bit of a shame really because Lua is a nice language and was an interesting alternative.
rtcoms 2 days ago 3 replies      
I've never fiddled with machine learning, so I don't know anything about it.

I am wondering if CUDA is mandatory for Torch installation? I use a MacBook Air which doesn't have a graphics card, so I'm not sure if Torch can be installed and used on my machine.

jbsimpson 1 day ago 1 reply      
This is really interesting, I've been wanting to learn more about Torch for a while but have been reluctant to commit to learning Lua.
baq 2 days ago 0 replies      
Very nice to see Python 3.5 there.
gallerdude 2 days ago 3 replies      
What's the highest level neural network lib I can use? I'm a total programming idiot but I find neural nets fascinating.
ankitml 2 days ago 3 replies      
I am confused by the license file. What does it mean? Some rights reserved and copyright... Doesn't look like a real open source project.
0mp 2 days ago 0 replies      
It is worth adding that there is a wip branch focused on making PyTorch tensors distributable across machines in a master-workers model: https://github.com/apaszke/pytorch-dist/
aaron-lebo 2 days ago 3 replies      
Is this related to lua's Torch at all?


shmatt 2 days ago 0 replies      
I've been running their dcgan.torch code in the past few days and the results have been pretty amazing for plug-and-play.
plg 2 days ago 2 replies      
Every time I decide I'm going to get into Python frameworks again, and I start looking at code, and I see people making everything object-oriented, I bail

Just a personal (anti-)preference I guess

How Discord Stores Billions of Messages Using Cassandra discordapp.com
394 points by jhgg  1 day ago   136 comments top 22
niftich 1 day ago 2 replies      
These kinds of write-ups offer valuable insight into a popular project's requirements and decision-making, and are some of the most instructive resources one can find: these show not only the kinds of challenges one has to face at scale, but also how architectural choices are made.

It's far more valuable to understand why Discord uses Cassandra than to merely be aware they do.

Out of curiosity, did you consider HBase and Riak? Did you entertain going fully hosted with Bigtable? If so, what criteria resulted in Cassandra winning out?

jakebasile 1 day ago 5 replies      
I use Discord a fair amount, and something that annoys me about it is that everyone has their own server.

I realize this is a key part of the product, but the way I tend to use it is split into two modes:

- I hang out on a primary server with a few friends. We use it when we play games together.

- I get invited to someone else's server when I join up with them in a game.

The former use case is fine but the latter annoys me. I end up having N extra servers on my Discord client that I'll likely never use again. I get pings from their silly bot channels (seemingly even if I turn notifications off for that server/channel), and I show up in their member lists until I remove myself.

I wish there was a way to accept an invite as "temporary", so that it automatically goes away when I leave or shut down Discord. Maybe keep a history somewhere if I want to go back (and the invite is still valid).

Aside from that, it's a great product and really cleaned up the gamer-focused voice chat landscape. It confuses me that people will still use things like TeamSpeak or (god help you) Ventrilo when you can get a server on Discord for free with far better features.

Now that I posted this, I realize this has little to do with TFA. Sorry.

edit: formatting, apology

ve55 1 day ago 3 replies      
Discord seems to me like it has a very polished user experience, and it's no surprise that users are trashing programs like Skype in favor of Discord when it is better in every area.

Discord seems to take security seriously, as they should, but I'm curious about their stance on privacy and openness. For example, I wonder if they would consider:

- Allowing end-to-end encryption to be used between users for private communications

- Allowing users to connect to Discord servers using IRC or other clients (or, at least having an API that easily allows this)[1]

- Allow users to have better control over their own data, such as providing local/downloadable logs so that they can search or otherwise use logs themselves

Discord is definitely succeeding within the gaming market, but I'm curious what other markets they would like to take a stab at.

[1] I'm aware Discord has an API, but if I understand it correctly, normal users cannot easily use Discord from anything other the official Discord apps, as this API is specifically for Discord 'bots'. I see there's a discord-irc bridge, but not much more than that. I may be incorrect on this.

pilif 18 hours ago 3 replies      
> While Cassandra has schemas not unlike a relational database, they are cheap to alter and do not impose any temporary performance impact

in most relational databases, the schema is cheap to alter and does not impose a temporary performance impact.

In fact, all of their requirements (aside from linear scalability) could also be met with a relational database. Doing so would gain you much more flexible access to querying for various reports, and it would reduce the engineering effort required for retrieval of data as they add more features (relational databases are really good at being queried with arbitrary constraints).

I think people tend to dismiss relational databases a bit too quickly these days.

maktouch 1 day ago 2 replies      
It's really interesting to see that you're using Cassandra for this. IIRC, Cassandra was created by Facebook for their messaging, and they realized that eventual consistency was a bad model for chat, so they moved to HBase instead. (source: http://highscalability.com/blog/2010/11/16/facebooks-new-rea...)

The tombstone issue was really interesting! Thanks for sharing.

flyingramen 1 day ago 4 replies      
It is fascinating that more and more people are using Cassandra. DataStax believes they have fixed the problems with their prior consistency-guarantee claims that were exposed by Jepsen, but there has been no official Jepsen testing since.

On the topic of looking at Scylla next, I wonder why the team did not just start out with it to begin with. Also, are there people with experience running both? How is the performance? And what is the state of reliability?

alfg 1 day ago 1 reply      
Love Discord. Most of my friends and I have switched over from using Mumble and it's been great.

I run a small Mumble host [1] and I've always thought of the idea of wrapping the Mumble client and server APIs to function like Discord/Slack as an open source alternative. Mumble is great and all, but the UI/UX appeal of Discord is so much better.

Keep up the great work!

Also, is this is the same Stanislav of Guildwork? Ha, I remember when Guildwork was being formed back in the FFXI days.

[1] https://guildbit.com

jjirsa 23 hours ago 0 replies      
Wildly biased Cassandra person, but I find this very well written and explained, and I'm especially happy that when you bumped into problems like wide partition and tombstone memory pressure, you didn't just throw up your hands, but you worked around it.

The wide partition memory problem should be fixed in 4.0, for what it's worth.

mahyarm 1 day ago 4 replies      
Discord missed an opportunity a year or two ago to become something like Slack for large companies. HipChat's performance is horrible, and Slack couldn't scale to 20k+ users a year ago. Managing a Mattermost instance requires staff and is more outage-prone.

It's really too bad that they didn't take advantage of it, since they were actually scalable compared to their competitors and had good voice chat. Slack has started becoming more scalable recently, so I don't know how much the opportunity is still there.

sparrish 1 day ago 2 replies      
If you're deleting often, I recommend running a full compaction (after your repair) to free up space and rid yourself of those tombstones once and for all. Repairs without compactions make those SSTables grow and grow. It's amazing how much space a compaction clears up.
joaodlf 1 day ago 1 reply      
Not surprised to see other companies facing issues with Cassandra and tombstones. Don't get me wrong, I understand the need for tombstones in a distributed system like Cassandra... It doesn't make it any less of a pain though :).
Globz 11 hours ago 0 replies      
I love Discord and use it on a daily basis. One of the main concerns for my gaming group is the voice latency compared to TS, Mumble, or Ventrilo, but this is mainly due to the inability to host your own server.

One of the big missing features we would like to have in Discord is the ability to assign special permissions to our group leaders so they can communicate over voice chat with other group leaders in other channels (global voice chat).

When we play PvP MMOs and have 40+ users all in the same channel calling shots, it's impossible to coordinate properly.

What we normally do is split the group into four, so 10 players in 4 different channels, and each group leader calls shots independently BUT can also communicate via voice chat with the other group leaders. Basically there's a global voice chat for group leaders that no one else can hear but them.

Other than that Discord is amazing!

beck5 19 hours ago 1 reply      
Serious question: how do you back up a Cassandra database of that size? Do you even back it up, or just rely on replication to prevent data loss?
mastax 8 hours ago 0 replies      
For a bit more information about the tombstone issue from the perspective of the person who caused it: https://www.reddit.com/r/programming/comments/5oynbu/_/dcnxy...
jolux 1 day ago 3 replies      
Discord is great but I have intermittent performance issues with it that make it almost unusable in comparison to Slack which never has any noticeable latency.
smaili 1 day ago 1 reply      
Does anyone know what protocol/transport Discord uses? XMPP, web sockets, JSON, etc?
treenyc 22 hours ago 3 replies      
I'm curious why people would use closed-source software when you can use something like https://riot.im

Please let me know. I may be missing something.

simooooo 19 hours ago 1 reply      
What's an upsert?
cookiecaper 1 day ago 2 replies      
I'm one of the people who nagged you on the redis post, and particularly expressed skepticism that such a transition would've been necessary. I haven't read this yet, but I just want to say thanks for actually following up to that thread and posting it. Looking forward to it!


EDIT: Just read the post, and while it provides a good perspective on Discord's rationale to introduce Cassandra in the first place and does a great job pointing out some unexpected pitfalls, it doesn't specifically respond to replacing Redis with Cassandra due to clustering difficulty, per the prior thread. [0] Redis is only specifically called out as something they "didn't want to use", which I guess is probably the most honest answer.

The bucket logic applied to Cassandra seems like it could've been applied to redis + a traditional permanent storage backend nearly as easily. The biggest downside here would be crossing the boundary for cold data, but that's a pretty common thing that we know lots of ways to address, right? And Cassandra effectively has to do the same thing anyway, it just abstracts it away.

Again, I'm left wondering what specific value Cassandra brings to the table that couldn't have been brought by applying equal-or-lesser effort to the system they already had.

I also found it amusing that they're already contemplating the need to transition to a datastore that runs on a non-garbage-collected platform.

[0] https://news.ycombinator.com/item?id=13368754

marknadal 18 hours ago 0 replies      
Wow! This is an incredible article. I do research and development for systems like this at GUN, and this article nails a lot of important pieces, particularly their ability to jump to an old message quickly.

We built a prototype of a similar system that handled 100M+ messages a day for about $10; there's a 2-minute screencast here: https://www.youtube.com/watch?v=x_WqBuEA7s8 . However, this was without FTS or mentions tagging, so I want to explore some thoughts here:

1. The bucketing approach is what we did as well; it is quite effective. However, a warning to outsiders: this is only effective for append-only data (like chat apps, Twitter, etc.) and not good for data that gets a lot of recurring updates.

2. The more indices you add, the more expensive it gets. If you are getting 100M+ messages a day and you then want to update the FTS index and mentions index (user message index, hashtag index, etc.), you'll be doing significantly more writes. And you'll notice that those writes are updates to an index; this is the gotcha and will increase your cost.

3. Our system by default backs up / replicates to S3, which is something they mention they want to perhaps do in the future. This has huge perks to it, including price reductions, fault tolerance, and less DevOps - which is something they (and you) should value!

Their backend team is amazingly small. These guys and gals seem exceptionally talented and are making smart decisions. I'm looking forward to the future post on FTS!

lightedman 1 day ago 1 reply      
You're storing messages; how are you guaranteeing the safety of those messages when it looks like one can seemingly just blast through your API calls to find messages even when one isn't on that server?
Caching at Reddit redditblog.com
349 points by d23  2 days ago   165 comments top 14
chime 1 day ago 1 reply      
> When you vote, your vote isn't instantly processed; instead, it's placed into a queue.

I remember looking into this a while ago and was bewildered to find that when I upvoted or downvoted, there was no XHR call to the backend! There was no hidden iframe/image, no silent form post. Absolutely no network activity. Yet when I refreshed, my vote was shown correctly. I thought I was going crazy.

This was long ago so I'm a bit fuzzy on the details but after a bit of digging, I found the most elegant data collection technique I've ever seen. Instead of sending network data when I voted, a local cookie was set with the link id and vote value. Then when I went to another page, my browser naturally sent the cookie to the server, where I believe it was processed, and then a fresh cookie was sent back to my browser. I could vote on 10 links, the local cookie would get large and then on the next page refresh, the backend would receive my batch of votes, process them, and send me a fresh cookie again.

I don't think they do that now and I've never seen anyone do something like this. Even HN just makes an XHR call on voting. After twenty years on the web, it's not often that I am surprised so this was quite a thrill.

slavik81 2 days ago 20 replies      
> Performance matters.

It took 8.4 seconds to load the Reddit front page on my phone. Hacker News took 1.1 seconds. This feels like advice from the overweight gym teacher on how to do pushups.

The desktop Reddit site took 2.2 seconds over the same connection, by the way. It seems like it would be much more valuable to optimize whatever is taking up >75% of page time on mobile.

sciurus 2 days ago 0 replies      
The pain of static slab allocation is real! Changing usage patterns causing problems can be tricky to track down too; mcsauna looks helpful for this. Upgrading to memcached 1.4.25 and running with "slab_reassign,slab_automove,lru_crawler,lru_maintainer" was a huge improvement for our primary memcached cluster at Eventbrite.
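
For reference, those features are enabled via memcached's extended options at startup. A plausible invocation (the -o feature names come from the comment above; the memory and connection numbers are made-up placeholders):

```shell
# Illustrative memcached startup line (1.4.25+). -m is memory in MB,
# -c is max connections, -d daemonizes; the -o list enables slab
# reassignment/automove and the LRU crawler/maintainer.
memcached -d -m 4096 -c 4096 \
  -o slab_reassign,slab_automove,lru_crawler,lru_maintainer
```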
jjoe 2 days ago 1 reply      
The slowness of page load mentioned by folks here is the reason why I think caching at the HTTP level (e.g. Varnish) is much more efficient than caching at the service level (e.g. Memcached), which is much further down the stack and is bound to be latency-sensitive: it's much less entangled in your code and deep in your infrastructure (less technical debt). A hybrid approach can work too, but only if it's light and unobtrusive.

By the way, and I'm going out on a limb with my shameless plug, I built a Varnish-as-a-Service kind of infrastructure called Cachoid ( https://www.cachoid.com ). But in my own defense, I'm putting my energy, time, and money where my mouth is.

jrowley 2 days ago 1 reply      
The permacache approach seems pretty clever to me.

> For example, when new comments are added or votes are changed, we don't simply invalidate the cache and move on; this happens too frequently and would make the caching near useless. Instead, we update the backend store (in Cassandra) as well as the cache. Fallback can always happen to the backend store if need be, but in practice this rarely happens. In fact, permacache is one of our best hit rates: over 99%.

They basically have their application state duplicated in both places. Interesting architectural choice.
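
A sketch of the write-through pattern in that quote, with hypothetical stand-ins for Cassandra (a dict as the durable store) and memcached (a dict as the cache): updates go to both, so reads almost never fall through to the store.

```python
class WriteThroughCache:
    def __init__(self, store):
        self.store = store
        self.cache = {}
        self.misses = 0

    def write(self, key, value):
        self.store[key] = value      # durable write first
        self.cache[key] = value      # then update (not invalidate) the cache

    def read(self, key):
        if key in self.cache:
            return self.cache[key]
        self.misses += 1             # rare fallback to the backend store
        value = self.store[key]
        self.cache[key] = value
        return value

db = {}
c = WriteThroughCache(db)
c.write("thread:42", ["first comment"])
c.write("thread:42", ["first comment", "second comment"])
assert c.read("thread:42") == ["first comment", "second comment"]
assert c.misses == 0                 # the hot path never touched the store
```

The duplication buys hit rate at the cost of double writes and the need to keep the two copies in sync, which is the architectural choice being pointed out.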

bluedino 2 days ago 4 replies      
I wonder how much better/worse the site would run if they had their own hardware like StackExchange. And I wonder how StackExchange would run if it were on AWS.
eunoia 2 days ago 4 replies      
Some back of the envelope math for their caching costs:

54 x R3.2xlarge EC2 instances

On demand = $314,571.6/year

w/ 1 year term = $166,860/year

w/ 3 year term = $110,340/year

w/ convertible 3 year term = $150,174/year

Is that a lot? Seems like a lot

bbeausej 2 days ago 0 replies      
Thanks for sharing the details. It's impressive to see the memory allocation and pool size for a site handling this much traffic. I would love to get some more information on Reddit's overall platform traffic volume, as I feel this would complement the discussion nicely.
nodesocket 2 days ago 2 replies      
I wonder if switching from memcached to redis would make a bottom line difference in terms of the number of instances needed (cost) and performance?
QuercusMax 1 day ago 1 reply      
Ironic timing: Reddit is undergoing a major outage right now...
egonschiele 1 day ago 0 replies      
What's the difference between mcrouter vs something like haproxy?
Florin_Andrei 2 days ago 1 reply      
Are those visualizations done in Grafana?
ksec 2 days ago 2 replies      
Slightly off topic: as of 2017, what's the advantage of Memcached over Redis? I thought we were basically in the era of Redis.
akjainaj 2 days ago 2 replies      
Taking into account how reddit performs, I'll take this as a guide on how not to use cache.

(This is a joke, please understand it as such. I know reddit has the problems it has because it is severely understaffed.)

Ask HN: Are we overcomplicating software development?
616 points by ian0  2 days ago   361 comments top 115
SatvikBeri 2 days ago 9 replies      
Many of these practices are popularized by Google/Facebook/Amazon but don't make sense for a company with 100 or even 1,000 people. I try to focus on whether a practice will solve a concrete problem we're facing.

Switching from Hadoop to Spark was clearly a good idea for our team, even though it required learning a new stack, but there isn't a strong reason to switch to Flink or start using Haskell.

Agile makes sense when your main risk is fine-grained details of user requirements, but not when you have other substantial risks, such as making sure a statistical algorithm is accurate enough.

Microservices probably reduce the asymptotic cost of scaling but add a huge constant factor.

Relational databases are the right choice 95% of the time; non-relational stores require a really specific use case.

TDD is good for fast feedback in some domains, but for others, manually investigating the output or putting your logic into types is better. E.g. a lot of my time goes to scaling jobs that work on 10GB of data but crash on 1TB; TDD is not that helpful there.

Continuous integration mostly makes sense when you're making a lot of small changes and can reliably expect a test suite to catch issues.

In short, ask the question "when is practice X useful?" instead of "is practice X a good idea?"

PaulHoule 2 days ago 5 replies      
Continuous integration is a good thing. Back in the bad old days you'd have three people working on parts of the system for 6 months and plan to snap them together in 2 weeks and it would take more like another 6 months.

Agile methods are also useful. If you can't plan 2 weeks of work, you probably can't plan 6 months.

When agile methods harden into branded processes and there is no consensus on the ground rules within the team, it gets painful. The underlying problem is often a lack of trust and respect. In an agile situation people will stick to rigid rules (never extend the sprint, we do all our planning in 4 hours, etc.) because they feel they'll lose what little control they have otherwise. In a non-agile situation people can often avoid each other for months and have the situation go south suddenly. In agile you wind up with lots of painful meetings instead.

Also I think it is rare for one language to really be "best for a job". If you want to write the back end of a run of the mill webapp, you can do a great job of that in any mainstream language you are comfortable in.

BjoernKW 2 days ago 6 replies      
No, it's not just you and yes, we often do overcomplicate software development.

It's been that way long before agile methodology or microservices though. Complexity-for-the-sake-of-complexity EverythingHasToBeAnAbstractClass frameworks have been plaguing the software development business since at least the 1990s and I'm sure there are similar stories from the 80s and 70s.

It's hard to find a one-size-fits-all easy method for not falling into that over-engineering / over-management trap. I try to focus on simple principles to identify needless complexity:

- There is no silver bullet (see "microservices"): If the same design pattern is used to solve each and every problem there probably is something amiss.

- Less code is better.

- Favour disposable code over reusable code: Avoid the trap of premature optimisation, both in terms of performance and in terms of software architecture. Also known as "You aren't gonna need it".

- Code means communication: By writing code you're entering a conversation with other developers, including your future self. If code isn't easily comprehensible, again, there's likely something wrong.

corysama 2 days ago 4 replies      
One of my fav tech talks ever (and I watch a lot of tech talks) is Alan Kay's "Is it really 'complex'? Or, did we just make it 'complicated'?" It addresses your question directly, but at a very, very high level.


Note that the laptop he is presenting on is not running Linux/Windows/OSX and that the presentation software he is using is not OpenOffice/PowerPoint/Keynote. Instead, it is a custom productivity suite called "Frank" developed entirely by his team, running on a custom OS, all compiled using custom languages and compilers. And the total lines of code for everything, including the OS and compilers, is under 100k.

majewsky 2 days ago 2 replies      
1) False dichotomy. Developer familiarity is one of the most important metrics for choosing "the best tool for the job".

2) Conway's Law applies in reverse here: If your organization consists of a lot of rather disjoint teams, then microservices can be quite beneficial because each team can deploy independently. If you're one cohesive team, there is not much benefit, only cost.

3) Depends. If you have a well-designed distributed system, it can be amazingly resilient and reliable without introducing much administrative overhead. (From my experience, OpenStack Swift is such a system. Parts may fail, but the system never fails.) There are two main problems with distributed systems: a) Designing and implementing them correctly is really hard. b) Many people use distributed systems when a single VM would do just fine, and get all the pain without cashing in on the benefits. See also http://idlewords.com/talks/website_obesity.htm#heavyclouds

4) Continuous integration was not meant to help with complexity. Its purpose is to reduce turn-around time for bugfixes and new features. If your release process is long and complicated, the increased number of releases will indeed be painful for you. Our team sees value in "bringing the pain forward" in this way. Your team obviously puts emphasis on different issues, and that's okay.

mhotchen 2 days ago 1 reply      
Many of the programmers I have worked with actually love complexity, despite trying to convince others (and most likely themselves) that they hate it.

Advice tends to be cherry-picked to suit an agenda they already have (with your example on microservices, the vast body of advice saying they're very difficult, should be driven by a monolith-first approach, and solve a specific set of problems is largely swept under the rug).

I think because our industry moves so fast there's a fear of becoming irrelevant. Ironically, companies are so scared of not being able to employ developers that they're also on board with complicating their platform in the name of hiring and retention. I think this is down to the sad truth that most developer roles offer very little challenge outside of learning a new stack.

DanielBMarkham 2 days ago 0 replies      
You've thrown together a bunch of buzzwords and asked if we are overcomplicating things.

Buzzwords can mean freaking anything. I've seen great Agile teams that don't look anything like textbook Agile teams. Microservices can be a total clusterfuck unless you know what the hell you're doing -- and manage complexity. (Sound familiar?) CI/CD/DevOps can be anything from a lifesaver to the end of all life in the known universe.

So yes, we are overcomplicating software development, but the way we do it isn't through slapping around a few marketing terms. The way we do it is not understanding what our jobs are. Instead, we pick up some term that somebody, somewhere used and run with it.

Then we confuse effort with value. Hey, if DevOps is good, the more we do DevOps, the better we'll be, right? Well -- no. If Agile is good, the more Agile stuff we do the better we'll be, right? Hell no. We love to deep dive in the technical details. If there aren't any technical details, we'll add some!

Software development is too complicated because individual developers veer off the rails and make it too complicated. That's it. That's all there is to it. Throw a complex library at a good dev and they'll ask if we need the entire thing to only use 2 methods. Throw a complex library at a mediocre Dev and they'll spend the next three weeks writing 15 KLOC creating the ultimate system for X, which we don't need right now and may never need.

It has nothing to do with the buzzwords, the tech, or software development in general. It's us.

mschaef 2 days ago 0 replies      
Continuous Integration is (with a reasonable test suite) one of the few elements of software development that I would consider almost essential for any long-running project. It's just too useful to have continual feedback on the quality of the system under construction. (And this is before bringing in microservices or any other complicating architectural pattern.)

Where I might agree with you more are on points 3 and 4: 'Advanced reliability' and 'Microservices'. While I have no doubt that these are useful to solve specific problems, I think as a profession we tend to over-estimate the need for these things and under-estimate the costs for having them. To me this implies that there needs to be a very clear empirical case that they support a requirement that actually exists. I'd also make the argument that the drive for microservices within an organization has to come from a person or team that has the wherewithal to commit resources over the long-term to actually make it happen and keep it maintained. (ie: probably not an individual development team.)

sebringj 2 days ago 1 reply      
It never seems complicated when I am doing my own side work for some reason. There are no design meetings, no hours tracking, no arguments on best practices, no scrum, no testing frameworks, dev ops, etc. I do use git and minimally create bash scripts to simplify repetitive tasks for deployment, but it's just a huge contrast to working in teams where something simple takes about 50 times longer.

I think keeping things as simple as possible and always going for that goal will increase velocity overall. Everything should be subject to scrutiny for promoting productivity and open to modification or removal. I know there is a balance where you have to increase complexity in a team environment but keeping friction as low as possible in terms of process and intellectual weight couldn't hurt.

The most productive place I've seen so far is a huge athletic brand I worked for, where they kept teams at a max of 5 people in mini projects. This forced the idea of low overhead and kept the scale of management needed small. The worst place I worked in terms of unnecessary complexity is a well-known host (although it is the best place to work in terms of people); they hired offshore teams with a one-size-fits-all mentality and layered in as much shit as possible, slowing development to a mud crawl. I don't buy into process over productivity.

rolodato 2 days ago 0 replies      
I think the "learn to code" movement as well as overly-technical interviews for developers are partly to blame for this. It's well-known that developers are tested on how to do something that's considered technically difficult, such as abstract CS problems or a complicated architecture, but they are rarely asked why certain tools, practices or architectures should or should not be used. Comparative analyses to make objective recommendations between different solution alternatives are also rare in my interviewing experience, but they are one of the most valuable skill a competent software engineer should have.

I don't agree on point 4 though - CI can be something as basic as running a monolith's tests on each commit, which makes sure that builds are reproducible (no more "works on my machine").

brilliantcode 2 days ago 2 replies      
1997: I created my first website on Netscape Navigator. I was 10.

2007: I created a textbook trading RoR web app. I was 20.

2017: I'm struggling to create my first front-end website on Chrome and I haven't decided on the back-end. I'm 30.

The barrier to entry is indeed very high and no signs of slowing. I blame the explosion of low-interest capital from VC's fueling this fracturing.

bluejekyll 2 days ago 0 replies      
No. You are correct. Honestly I think you can solve a lot of that by following on from one of Dijkstra's core principles: Separation of Concerns.

When you practice good separation of concerns, specific choices in different areas can more easily be changed later. It requires having decent APIs and being thoughtful on the interaction of different components, but it helps immensely in the long run.

Microservices are one way to practice separation of concerns, but it can be practiced in monolithic software as well, by having strong modular systems (some languages are stronger at this than others).
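A toy illustration of that point (hypothetical names): a module inside a monolith that owns its state and exposes a narrow interface, so callers never depend on its internals and it could later be carved out into a service:

```python
# A monolith with strong module boundaries: BillingModule owns all
# billing state; the rest of the app talks to it only via its API.
class BillingModule:
    def __init__(self):
        self._invoices = {}                   # private to this module

    def charge(self, user_id, amount):
        self._invoices.setdefault(user_id, []).append(amount)
        return sum(self._invoices[user_id])   # running total

class Application:
    """Wires modules together; never touches _invoices directly."""
    def __init__(self):
        self.billing = BillingModule()

app = Application()
assert app.billing.charge("u1", 10) == 10
assert app.billing.charge("u1", 5) == 15
```

Because `Application` only goes through `charge`, swapping the module for a remote service later means changing one seam, not every caller.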

marcosdumay 2 days ago 0 replies      
Well, yes, we are overcomplicating it. Except on the parts we are undercomplicating... And I still haven't found anybody who can reliably tell those apart, but the first set is indeed much larger.

1 - Do not pick a new language for an urgent project. Do look at them when you have some leeway.

2 - Yep.

3 - There's something wrong with your ops. That happens often, and it is a bug, fix it.

4 - If CI is making your ops more complex, ditch it. If less complex, keep it. In doubt, choose the safest possible way to try the other approach, and look at the results.

5 - Do not listen to consulting experts, only to technical experts. The agile manifesto is a nice reading, read it, think about it, try to follow, but don't try too hard. Ignore any of the more detailed methodologies.

halis 6 hours ago 0 replies      
1) Choosing JavaScript for a math-heavy project would likely be a mistake. There are plenty of other examples of picking the wrong language for the job. That's where this statement falls apart.

2) Depending on how you bring them all together, yes this can be true. If you have something like AWS API Gateway, then microservices may be manageable. If you're rolling your own custom solution with something like nginx or haproxy, you're probably wasting a ton of cycles.

3) Again, I tend to agree with this. Premature optimization seems to be the norm these days. Especially when you get devops people involved. Do we need every single layer in our stack to be "highly available" if we have zero users? The answer is NO.

4) Well, this sounds clever, but I'm not sure it really means anything. Setting up something like Jenkins to watch your GitHub repos, build the branch and run the tests can alert you to issues early and really isn't that difficult to set up.

5) Nothing wrong with TDD as long as you don't go overboard. Nothing wrong with standups, planning or retros. Nothing wrong with short sprints.

dwc 2 days ago 1 reply      
Much of the problem in the things you mention is that those things are specific solutions that have been confused with goals. I.e., "we're supposed to build microservices" is a horrible idea, as opposed to "given this particular situation a microservice is a great fit".

Understanding the possible benefits and drawbacks of any solution is important. It's important in whether or not that solution is selected, but also to make sure that the implementation actually delivers those benefits.

It's very common in our industry to use "best practices" without understanding them, and therefore misapplying the solutions.

beat 2 days ago 1 reply      
1. What problem are you optimizing for? "The job" encompasses code, but it also encompasses staffing. It's a lot easier to hire Java developers than Scala developers. In a leadership role, your responsibility isn't just the day-to-day code - it's the whole project.

2. Microservices vs monoliths is a see-saw. You build a monolith, find it's a brittle, incomprehensible hairball, and you break out microservices. You build microservices, find that operational headaches are killing you, and start consolidating them into monoliths. Which kneecap do you want the bullet in?

3. Fix what breaks.

4. Continuous integration is vital. But it needs to be evolved along with the system. There's this thing I say... "Have computers do what computers do well, have humans do what humans do well". Handling complex and repeatable behavior (i.e. builds and test suites) should absolutely be automated as much as possible. Think continuous integration sucks? Try handing it off to humans for a while! You'll learn whole new levels of pain.

5. All process is about (or should be about) specific, discrete communications issues.

emeraldd 2 days ago 1 reply      
- 1) Choose languages that developers are familiar with, not the best tool for the job

95% of the time, a language that your developers are familiar with is the correct tool for the job simply for that reason! There are cases where it is not, but those involve special-case languages and special-case systems. If you don't know what special case means, then your situation is almost certainly on that list.

- 2) Avoid microservices where possible, the operational cost considering devops is just immense

"If your data fits on one machine then you don't need hadoop ..." Same thing applies here. Microservices have place and putting them in the wrong one will bite you bad.

- 3) Advanced reliability / redundancy even in critical systems ironically seems to cause more downtime than it prevents, due to the introduction of complexity to dev & devops.

Then there's probably something wrong or limited with the deployment that needs to be reviewed (a 2-node cluster when you need a 3-node cluster, bad networking environments, etc.). If you have a reasonable setup with solid tech under it, deployed per specs, then this should not be true. If, on the other hand, something is out of whack (say, running a 2-node cluster with Linux HA and only a single communication path between them) you're going to have problems, and the only way to fix them is to get it done right.

- 4) Continuous integration seems to be a plaster on the problem of complex devops introduced by microservices.

I'm not sure about this, but if your deployment system requires CI, you have a problem. An individual, given hardware and assets/code, should be able to spin up a complete system on a fresh box cleanly and in a reasonable timeframe. (Fresh data restores can take longer of course, but the system should be runnable barring that.) If this requires something like a CI script or an ansible/chef/etc. script (i.e. it can't reasonably be done manually), then your deployment process is probably too complex and needs to be re-evaluated.

- 5) Agile "methodology" when used as anything but a tool to solve specific, discrete, communications issues is really problematic

Agile is commonly used to gloss over a complete lack of structured process, or a broken one. Even with Agile there should be some clear process and design work that goes into things, or you're hosed.

hd4 2 days ago 0 replies      
For me, the trinity of development as a solo developer seems to be:

1. Writing code while using as many useful libraries and tools as possible to avoid recreating wheels

2. Continuous integration set up early on to handle the menial work and to let me concentrate on 1.

3. Constantly evaluating and researching what technology is available and newly appearing to give me an edge, because having an edge is never a bad thing in this field.

Agree with some of what OP said, especially about methodologies becoming hindrances and HA tools becoming points of failure.

justinlaster 2 days ago 1 reply      
1. I think this is rather obvious: work with what you have. Maybe think about hiring specifically for areas your team is lacking in, as long as the team as a whole will see decent benefit from it.

2. I hate to say you're doing microservices "wrong" but I'd really question project structure and practices being the culprit behind the cost of doing devops with microservices.

3. This seems like an engineering fault, rather than some implicit principle behind those concepts causing more downtime.

4. How is CI a plaster on the problem of microservices? CI is useful with or without microservices.

5. Agile was always meant to be a guideline, not an end all and be all. It's meant to get your team to figure out how it wants to work, and write code before process. See: http://agilemanifesto.org/

The problems you are describing seem like big problems with your team, engineering and management. No amount of process and technology is ever going to fix a dysfunctional (sorry if that's too blunt) team. What I get from this is that, instead of having processes in place that make it easy to move code out, you're removing tooling to slow things down intentionally, with the superficial result of "stabilizing" the entire development effort. The solution appears to be to get your team to write less code, and force management to bow down to the new reality of these "stabilizing" changes. Both of those can and sometimes should be done regardless of the processes and tooling in place.

The best code is the code you don't write. But don't blame the tooling for making it easy for a team to be lazy and remove the all-important characteristic of a team self-critiquing (i.e., "Do we really need this feature?", "That'd be nice to have but right now we're managing to get things done.", "Did I actually test my code, was it reviewed, or am I just counting on the fact that I can shove something else out later while our redundancy systems pick up the slack?")

kekub 2 days ago 0 replies      
I am working at a huge non-IT company as a software developer. I guess that is what gives me a totally different point of view on your lessons:

1) Without a unified technology stack and a common framework we would not be able to build and maintain our applications. We decided on C# as it works best for us. Currently we are 5 developers. Not a single one of us had ever written a line of C# code before entering the company - learning the language from the ground up enables us to pick up patterns that our colleagues who joined the company earlier found to be best practices.

2) If you are not introducing a whole new stack with every microservice you develop, the devops costs are quite low.

3) I agree with you on that - I think redundancy always introduces more complexity. However there are systems that handle that job quite well (e.g. SQL Server). For application servers we use hot spares and a load balancer that only routes traffic to them when the main servers are not reachable. This works for us, as all our applications are low-traffic applications.

4) Continuous integration works brilliantly for our unified stack. In the last two years we went down from a 1d setup + 20min deploy to a 10min setup + 20s deploy.

5) We use agile methodology whenever possible and it works like a charm. However, we learned a lot along the way. Most recent example: always have at least one person from each of your target groups in any meeting where you try to create user stories.

Planning our software architecture has been a key element in my teams success and I do not see a point where we are going to cut it.

rb808 2 days ago 3 replies      
I've seen the addition of unit testing be a big cause of complexity. Previously simple classes now have to be more abstracted in order to unit test: add mocks, testing classes & test frameworks. Some unit tests are handy, but I don't think they justify the additional complexity. For the apps I write, I'd like to see more emphasis on automated integration testing and fewer unit tests - so we can write simple classes again.
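For illustration, the style of test the parent favours: exercise a plainly written class directly, no mocks or injection scaffolding introduced purely for testability (the class and numbers are made up):

```python
# A simple class kept simple: no interfaces or mocks added
# just so it can be unit tested.
class PriceCalculator:
    TAX = 0.08   # 8% flat tax, for the sake of the example

    def total(self, prices):
        subtotal = sum(prices)
        return round(subtotal * (1 + self.TAX), 2)

# Test the real behaviour end to end, not mock call counts.
def test_total():
    assert PriceCalculator().total([10.00, 5.50]) == 16.74
    assert PriceCalculator().total([]) == 0

test_total()
```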
adamnemecek 2 days ago 2 replies      
There's this book that I've been mentioning around here called Elements of Programming https://www.amazon.com/Elements-Programming-Alexander-Stepan... that makes exactly this claim, that we are writing too much code.

It proposes how to write C++-ish code (an extremely minimal subset of C++ proper) in a mathematical way that makes all your code terse. In this talk, Sean Parent, at that time working on Adobe Photoshop, estimated that the PS codebase could be reduced from 3,000,000 LOC to 30,000 LOC (=100x!!) if they followed ideas from the book: https://www.youtube.com/watch?v=4moyKUHApq4&t=39m30s

Another point of his is that the explosion of written code we are seeing isn't sustainable and that so much of this code is algorithms or data structures with overlapping functionalities. As the codebases grow, and these functionalities diverge even further, pulling the reins in on the chaos becomes gradually impossible.

Bjarne Stroustrup (aka the C++ OG) gave this book five stars on Amazon (in what is his one and only Amazon product review lol): https://smile.amazon.com/review/R1MG7U1LR7FK6/

This style might become dominant because it's only really possible in modern successors of C++ such as Swift or Rust that have both "direct" access to memory and type classes/traits/protocols, not so much in C++ itself (unless debugging C++ template errors is your thing).

jMyles 2 days ago 0 replies      
My first reaction to your (very thoughtful) review is that #4 seems out of place.

CI can be a way of enforcing the simplicity of the others - it can be a way of funneling the build process into assuredly straightforward steps and preventing individual team members from arbitrarily (or even accidentally) adding their own complications into build requirements.

Other than that, I think you are definitely on to something here.

nojvek 1 day ago 0 replies      
As everyone is saying: do what is reasonable and useful.

E.g. let's make an online shop.

It has browsing, purchasing and admin sections.

Browsing is simple: query the DB and show HTML. It's probably the most used as well and needs to be reliable. Having admin as a different service means the admin section could break while users are still able to browse. Same for payments - sometimes it's crazy complicated. I think of microservices as big product feature boundaries that can work independently. A failure in one doesn't affect the other.

Continuous integration: once you have your tests and some auto-deploy scripts, you have an engine. You push code, tests auto-run, a live staging environment is created for the latest code, you play with it. Looks good? Merge with master. It's deployed to production. The idea is that deployment is effortless and you can do it multiple times a day, just like git push. Tests don't have to be just unit tests: we run integration tests on dummy accounts periodically from different regions of the world, on production. This means you are alerted as soon as something breaks. Fast deployment and great telemetry mean you can always revert to the last known good state easily.

Investing in tests is a pain but it pays off in the long run. Especially if you have other developers working on same code base.

Just don't overdo it. I believe these ideas came from pain developers actually faced, and they used them to solve it. If you're not feeling the pain, or won't feel it, then you don't need the remedy.
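The flow described above boils down to a small pipeline skeleton (purely illustrative; stage names and hooks are made up, and the manual "looks good? merge" gate is elided):

```python
# Push -> tests -> staging -> production, with "last known good"
# kept around so reverting is cheap.
def run_pipeline(commit, run_tests, deploy, last_good):
    if not run_tests(commit):
        return last_good             # fail fast; nothing ships
    deploy("staging", commit)        # live staging for a look
    deploy("production", commit)     # effortless once it's green
    return commit                    # new last known good

deployed = []
good = run_pipeline(
    "abc123",
    run_tests=lambda c: True,
    deploy=lambda env, c: deployed.append((env, c)),
    last_good="old999",
)
assert good == "abc123" and ("production", "abc123") in deployed
```

A failing test run simply returns the previous known-good commit, which is the cheap-revert property the comment is after.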

josephv 2 days ago 0 replies      
The only way to have any sense of a good or solid development platform or lifecycle is, to me, to look at your specific situation and tailor everything to your deliverables and needs. Doing anything because of industry trends or academic pontificating will lead you towards the solution someone else had success with in a different circumstance.

Microservices work fine in some situations, agile works fine in some situations, but until you find that you are in one of those situations, trying to bend your deliverables to meet a sprint cycle or some other nauseating jargon will cause, as you put it, over-complication or just poorly targeted effort. (It can also cause enough stress to dramatically affect your health; I know better than most.)

Those moments of solidarity between product and effort are real gems that I've only recognized in hindsight.

solarhess 2 days ago 0 replies      
You are right. Agile, languages, CI, devops are all tools not solutions to problems. Blindly applied, they will not get the results promised.

First focus on identifying the primary job to be done: build a valuable piece of software with as little effort as possible given your current team and existing technology.

Second, consider how valuable the existing software is and whether it really needs to be rewritten at all. Prefer a course that retains the most existing value. It is work you won't have to repeat.

Third, choose tools that maximize the value produced per hour of your team. CI, Devops, Microservices, Languages all promise productivity and reliability benefits but will incur complexity and time costs. Choosing the right mix is part of the art of software management.

TheAceOfHearts 2 days ago 1 reply      
I think in many cases complexity just comes from lack of experience and poorly understood requirements.

I've had my fair share of cases where I ended up implementing something needlessly complicated, only to later realize my approach was terribly misguided. I'd like to think I'm slowly improving on this as time goes on.

The software world has a big discoverability problem. Even though I know there's probably prior art of what I'm working on, I don't always know where to look for it.

ChuckMcM 2 days ago 1 reply      
Yes, we are overcomplicating it, but that is primarily about trying to take what is essentially an artistic process and turn it into a regimented one (a known hard problem).

Rob Gingell at Sun stated it as a form of uncertainty principle. He said, "You can know what features are in a release or when the release will ship, but not both." It captured the challenge of aspirational feature development, where someone says "we have to have feature X" and so you send a bunch of smart engineers off to build it, but there is no process by which you can start with an empty main function and build it step by step into feature X.

That said, it got worse when we separated the user interface from the product (browser / webserver). And your rants about microservices and continuous integration are really about releases, delivery, and QA (the 'delivery time' of Gingell's law above).

These are complexities introduced by delivery capabilities that enable different constructions. The story on HN a few days ago about the JS graphics library is a good example of that. Instead of linking against a library on your computer to deliver your application with graphics, we have the capability of attaching to a web service with a browser and assembling on demand the set of APIs and functions needed for that combination of client browser / OS. It's a great capability, but to pull it off requires more moving parts.

mhluongo 2 days ago 0 replies      
You're right, though you should end most of your comments with "for us".

We've been burned by the microservice hype, and it took a while for us to realize that most of the touted benefits are for larger organizations. These "best practices" rarely include organizational context.

harwoodleon 2 days ago 0 replies      
Fatal problems that hit startups seem left-field, but they are baked into the design choices we make, often without discussion - because they seem part of "current accepted wisdom".

My major issue with startup software development is that software is often developed too discretely - with a utopian 'final version' in mind. Developers don't think holistically enough - they focus on details at the expense of design. "Current accepted wisdom" is intangible, ever shifting, whereas the failure of a system is very real and can lead to loss of income etc...

Lots of start up companies don't design systems with humans in them, they write code as if it was a standalone thing - they often leave out the human bits because they are hard to evaluate, measure and control - variety of skill, ideas, approaches, mistakes, quality of life etc.

In my experience, this variety (life) often comes back to bite companies that can't handle eventual variance because of poor system design - not because of a choice of platform / provider / software etc.

I have been reading a lot around the viable system model (VSM) for organising projects. It seems to fit with my view on this. I am currently trying to implement a project using this model.


QuantumRoar 2 days ago 0 replies      
I think you wanted to say: "Are we simplifying things in software development?" All of the points you have made are actually simplifications of what might be the optimal solution.

Imagine the solution space as some multidimensional space where there is somewhere an optimal solution. The dimensions include the habits of your programmers, the problem you are trying to solve, and the phase of the moon. Microservices, a special form of redundancy, continuous integration, agile development are all extreme solutions to specific problems. Solutions which are extreme in that they are somewhere in the corner of your multidimensional solution space.

They are popular because they are radical in the way they conceptualize the shape of the problem and attempt to solve it. Therefore they seem like optimal solutions at first glance, when really they only apply well to specific toy models.

Take e.g. microservices. Yes, it's really nice if you can split up your big problem into small problems and define nice and clean interfaces. But it becomes a liability if you need too much communication between the services, up until the point where you merge your microservices back together in order to take advantage of using shared memory.

Don't believe any claims that there is a categorically better way to do everything. Most often, when you see an article about something like that, it is "proved" by showing it solves a toy model very well. But actual problems are rarely like toy models. Therefore the optimal solution to an actual problem is never a definite answer from one of the "simplified corner case scenarios" but it is actually just as complex as the problem you are trying to solve.
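The boundary cost mentioned above (communication between services versus shared memory) can be shown with a toy sketch; the function names and payload are made up, and a real service would add network latency, retries, and auth on top of the serialization shown here.

```python
import json

# Direct call: data stays in-process; no serialization cost.
def enrich(order):
    order["total"] = sum(item["price"] for item in order["items"])
    return order

# The same logic behind a "microservice" boundary: every hop pays
# a serialize/deserialize round trip on entry and exit.
def enrich_service(request_body):
    order = json.loads(request_body)      # deserialize on entry
    order["total"] = sum(item["price"] for item in order["items"])
    return json.dumps(order)              # serialize on exit

order = {"items": [{"price": 3.0}, {"price": 4.5}]}
direct = enrich(dict(order))
via_service = json.loads(enrich_service(json.dumps(order)))
print(direct["total"], via_service["total"])  # 7.5 7.5
```

Both paths compute the same answer; the difference is that the second one does it through two format conversions per call, which is exactly the overhead that makes "merging the microservices back together" attractive for chatty workloads.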

sydd 2 days ago 0 replies      
I agree with you, but not fully.

1) Well, this is only the case if the project is short enough that it's not worth switching. Learning a new technology takes a team months; only switch if the project will take years.

2) Again, only use them for bigger (>2-year lifecycle) projects.

3) Depends on what you need. We built a full-stack app with around 99.95% uptime (a few hours of downtime/year) in around 3 months of architecture dev time. Getting more nines would have hugely increased dev time, but this number was good enough for us.

4) Disagree. You can build simple CI pipelines in a matter of weeks, which will pay for themselves in a few months thanks to better uptimes, happier employees, and shorter release times. Again, it's only needed if your project lasts for more than a year.

5) Disagree. Agile is very good if someone knows it well (it takes a few days to learn). It's not needed for very small teams (<6 people); they can self-manage.

But I think there are problems:

- People get hyped about the latest trendy stuff. Use bleeding-edge/new tech for hobby projects, not for money-making ones.

- Do not switch technologies unless really needed; don't fall for the hyped library of the week.

- Do not use a dynamic language for any project that will have more than 5K LOC in its lifetime.

- Do not overengineer. For example, if the code is clean and works but has that ugly singleton pattern, it's OK. Don't introduce the latest fancy IoC framework just because you read in the clean code book that it's better.

- Unit tests are overhyped. Use them for critical components on the server, and that's it. IMO the hype around them is because dynamic languages scale so badly that you need tests, otherwise you're fucked. Rather, choose a well-proven statically typed language and a good IDE, and take code reviews seriously.

solipsism 2 days ago 0 replies      
1) No way. Absolutely not. Not if what you're building is intended to last. Any language/ecosystem you choose has costs and benefits. You will continue to pay the costs (and reap the benefits) long after your developers could have become fluent in a language.

Certainly the language your developers already know is better than one they don't, all things being equal. But your rule is way too simplistic.

2) Of course. Avoid every complex thing where possible.

3) This means the cost/benefit ratio was not considered closely enough when planning these features. Again, avoid every complex thing where possible.

4) This is a strange one. Most people doing CI are not building microservices. CI is really more about whether you have different, independently moving pieces that need to be integrated. Could be microservices, could be libraries, could be hardware vs. software. If you only have a single active branch everyone's merging into regularly, you're doing CI implicitly. You just might not need it automated.

5) Take what you can from the wisdom of agile, and then use your own brain to think. And don't confuse agile with scrum.

bluestreak 13 hours ago 0 replies      
It isn't news when I say that it is hard to come up with a simple solution.

In most cases people tend to work under pressure, which ends up with the problem nicely fitted to the tool at hand. You can hardly blame anybody for that. What we are not doing enough of is going over the "solution" again and again. Solving a problem the second time around is always easier.

biztos 2 days ago 0 replies      
1) Sounds like there's a lot more to the story.

 * Was the "best tool" what the devs thought it was?
 * Was it something they would hate using? Say, Java for Perl devs?
 * Was there a steep learning curve? An obscure language?
2) How big is the system? How complex is the business? How ops-friendly are the devs to start with?

3) You (or someone) must know how much system failure would cost.

4) CI can help with your devops, but its main point is to help with your software quality. See #2.

5) Totally agree, though you can also try being agile about "Agile" and taking just whatever parts work for you.

My $0.02 anyway.

(Aside: years ago I worked on a team doing ad-hoc semi-agile, which worked pretty well. I'm 99% sure I could have doubled our output and launched a management-consulting career if I could have credibly held the threat of Real Corporate Agile Scrum over their heads. But that was before the flood. One of them works for Atlassian now, ironically enough.)

yawz 2 days ago 0 replies      
"Perfection is Achieved Not When There Is Nothing More to Add, But When There Is Nothing Left to Take Away" - Antoine de Saint-Exupery

IMHO, it takes technical and personal maturity to come to the conclusion above. Good architecture (or software or dev process or anything) should only have/contain the simplest things that are necessary.

msluyter 2 days ago 1 reply      
Though perhaps it's considered a component of 2), one could add Docker/containerization. I've watched folks spend weeks and weeks getting Docker set up for a service that probably didn't need to be containerized at all. And then once it's Dockerized, introspection, debugging, etc. seem to become much more difficult.
outworlder 2 days ago 2 replies      
> Avoid microservices where possible, the operational cost considering devops is just immense

Is it, though? There's more complexity due to more moving parts, sure. But being able to solve issues by just issuing a "scale" Kubernetes command in the CLI is priceless. As is killing pods with no drama.

However, what are we talking about here? Small business ecommerce? Your monolithic app is probably going to work just fine.

> Advanced reliability / redundancy even in critical systems ironically seems to cause more downtime than it prevents due to the introduction of complexity to dev & devops.

Systems can and will fail. If you can eat the downtime, by all means forget about that.

> Continuous integration seems to be a plaster on the problem of complex devops introduced by microservices.

Could you stop singling out microservices? We have deployed continuous integration with old-school Rails apps before and it was extremely valuable.

Agree about agile.

apeace 2 days ago 0 replies      
I think #5 is the most problematic here, and was stated perfectly.

One method I have used successfully is sending surveys to people outside engineering. Send it to department heads and anyone else who seems interested in what engineering does. Ask them if they feel engineering is transparent, and whether they feel important bugs/features get followed up on. Let the responses guide you, and make the minimal process changes you need to in order to satisfy people's real concerns.

One other piece of advice: if certain people seem obsessed with process, it's possible they are poisonous to your whole organization and should be let go. Some people want process to be there to give them work (e.g. "managing the backlog" or "writing stories"), instead of doing actual work like programming or product research.

conradfr 2 days ago 0 replies      
As someone working at a Scrum company transitioning from PHP "monoliths" to DDD microservices shielded by Node.js gateways and APIs, with even CQRS/ES on the horizon, I will answer yes.

But I guess that'll look cool in our resumes.

I must say I sometimes envy our mobile developers, who are a bit immune to all that.

contingencies 2 days ago 0 replies      
Choose languages and frameworks that developers are familiar with.

Microservices are fine if you can rely on shared CI/CD infrastructure and automate execution properly, maintaining rapid build/test cycle times. They start to suck if people aren't familiar and everyone's laptop has to run its own parallel multi-topology service stack for regression tests every time you change a line of code... developer focus, flow, and efficacy will be reduced.

I agree that redundant HA systems are usually not required. In the past it was expensive to get. However, tooling is now so good that with reasonable developers and reasonable infrastructure design, you can get it very, very cheaply if your services are packaged reasonably (CD-capable) with basically sane architecture and your infrastructure is halfway modern. This truly is excellent, because gone are the 1990s of everyone-relies-on-grizzled-sysadmin-and-two-overpriced-boxes-with-failover.

I don't think CI is a plaster, it is a great way to work, but like any tool or workflow is not appropriate in all situations.

We do over-complicate. Methodologies are too meta: programmers are already operating at max concurrent levels of abstraction. Better to incrementally adjust workflow (CI/CD on the workflow for the CI/CD of the workflow!). That's not to say that there's no value to some people thinking at this level some of the time, but Yoda told me "desk with agile literature much, sign of untidy mind be". I think he was right.

alex_hitchins 2 days ago 0 replies      
I agree with most of your comments. I think that, as a fairly new profession, we are still finding our feet when it comes to best practices. I don't think there is one system that will work across the board for all trades. I mean, I would think it took longer than 30-40 years to work out the best way to plumb or wire a house.

Sometimes when estimating work, I think how long would the same project take to build 5, 10, 15 years ago. It's not often that time spent coding today is any quicker than before.

Arguably we get better quality software now with unit tests, better compilers, and better tooling. Perhaps I've just got some massive rose-tinted glasses on!

buzzybee 1 day ago 0 replies      
There's a great discussion to be had in scaling your practices to the human factors.

For a solo developer, just breaking things out into modules and massaging the formatting is likely to be a net negative - something you might do once you've accumulated months of cruft and are ready to start handing it off to others or repurposing it for a new project, but also a chore that will get in the way of thinking about the job in front of you right now, a temptation to think top-down planning will come to your rescue. Your advantage is in being able to change direction immediately, and there are a lot of ways to give that up by accidentally following a practice for a larger team.

As a team gets bigger, it's more important to be cautious because of momentum; any direction you pick for development will be hard to stop once it gets going.

At the same time, there are processes and automations that help at every scale, and at the small scale they're just more likely to be little scripts and workflow conventions, not ironclad enforcements.

EdHominem 2 days ago 0 replies      
No, but your rules don't resonate with me even though I feel the same overall.

1) Not the best language, but not the worst either. There's no excuse except microcontrollers for C these days (even though I still like it) and the fairly decent JVM can't excuse Java. I think people can come up to speed in a new language pretty easily. It's paradigms that are hard to learn, not syntax.

2) Sounds like you don't have devops. That's a solve-it-once sort of problem. And you have to solve it soon enough for some pieces so it shouldn't be put off. You need to be good at it.

3) It certainly can. It is increasing the size of your system considerably - not just the original system, but also the debugging rules for that system plus (as noted) the debugging rules for the debugging rules, ad infinitum. But what do you propose as a solution? Perfectly trained humans on call? A procedures manual as detailed as the hypothetical code?

4) Well, lack of CI seems insane regardless of what sort of architecture you have. It's a symptom of not understanding the tools.

5) Capitalized anything is always bunk. But if I hear agile as meaning "short-term goals inside long-term goals, and continuous re-evaluation" then it makes perfect sense and has helped as a consultant and in industry.

cthulhuology 2 days ago 0 replies      
Honestly, it is probably just you (and your peers).

Quite frankly chances are the team you have sucks at operations, lacks the necessary experience to design complex systems, and probably doesn't do the fundamental engineering to make a reliable software product.

1 - false dichotomy, the best tool is one you have mastered, your team has individuals with 20+ years of development experience on it right? (Probably not)

2 - micro services are supposed to have small areas of concern and small functional domains to minimize operational complexity. Your services are programs that fit on a couple screens right? (Doesn't sound like it)

3 - redundancy's goal is to remove single points of failure, you should be able to kill any process and the system keeps working. (The word critical suggests you have spfs)

4 - CI is a dev tool to avoid merge hell by always merging. CI is often used by orgs with massive monoliths because of the cost of testing small changes, and too many cooks trying to share a pot. Ultimately, if you don't have well-defined interfaces, CI won't save you. (You had well-defined, published, versioned interfaces, right?)

5 - agile is a marketing term for consulting services to teach large orgs how to act like small effective teams of experts. (Hint you need a team of self-directed experts with a common vision and freedom to execute it, you got that right?)

Most problems in tech are related to pop culture. Because we discount experience (because experienced developers are "expensive") we get to watch people reinvent existing things poorly. Microservices, soa, agile, ci, these things are older than many devs working today. The industry fads are largely just rebranding of old concepts to sell them to another clueless generation.

Computers are complex systems, networks of computers are complex systems. Complex systems are complex. Some complexity is irreducible, and complex system behavior is more than just a mere aggregation of the parts. People tend to over complicate their solutions when they don't understand their actual problem. They see things they are unfamiliar with as costly and overly complicated (as in your examples above).

Your problem is a culture that doesn't value experience and deep understanding. You and your team will over complicate things because you don't know better yet.

hacker_9 2 days ago 0 replies      
All problems revolve around structure, and as customers want more features and capital builds, the structures get more complex. So we build even more complex structures to offset the complexity, but now things that were once simple get brought along and become more complex. Eventually the company hits a breaking point and reinvents its structures to better suit its needs, but these grow in complexity once again given time. It is a never-ending battle, and every business is at a different point in its complexity cycle.
MaulingMonkey 2 days ago 0 replies      
A certain amount of complexity or complication is required to solve problems. Sometimes, you will undershoot the mark, and not fully solve the problem. Other times, you will overshoot the mark, and create problems in the form of overcomplicated answers.

> 1) Choose languages that developers are familiar with, not the best tool for the job

It's a tradeoff - rampup time vs efficacy once ramped up. It's probably okay to let your devs rock it old school with vanilla Javascript for your website frontend - it's probably not okay for them to try and write your website frontend in COBOL, even if CobolScript is apparently a thing, just because they don't know Javascript.

> 4) Continuous integration seems to be a plaster on the problem of complex devops introduced by microservices.

CI is great plaster for all kinds of problems, not all of which you'll be able to solve in a reasonable fashion. Of course, you may have problems which would be better to solve that you're using CI as a crutch to avoid solving - or to simply deal with the fact that you haven't gotten around to solving those problems yet.

In game development, I use CI to help 'solve' the problem of my coworkers not thoroughly testing all combinations of build configurations and platforms for each change. 5 configs and 6 platforms? That's already 30 combinations to test, so it's no wonder...
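That combinatorial explosion is easy to make concrete in a few lines (the config and platform names here are hypothetical stand-ins):

```python
from itertools import product

# Hypothetical build configurations and target platforms.
configs = ["debug", "release", "profile", "asan", "shipping"]
platforms = ["win64", "linux", "macos", "ios", "android", "switch"]

# Every change in principle needs all combinations built and tested,
# which is exactly what a CI build matrix automates.
matrix = list(product(configs, platforms))
print(len(matrix))  # 30
```

No human reviewer will rebuild 30 combinations by hand on every commit, which is why a CI matrix pays for itself here even without microservices in the picture.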

> 5) Agile "methodology" when used as anything but a tool to solve specific, discrete, communications issues is really problematic

On the other hand, other companies rock "flat" management well past the point it's effective, and may lack any kind of methodology to keep progress on track - which is also problematic.

Azkar 2 days ago 0 replies      
I feel like all of this just comes back to judgement calls. You can't pick technologies in a vacuum, and you can't generalize technology choices.

It's not very fair to make these claims without knowing all of the details around the situation. Microservices CAN be a pain, but it might offset a greater pain of trying to coordinate a monolithic deployment. It depends on things like team size, budget, and technology available to you.

This is where I see the disconnect between employers and most developers. "Programming" isn't a job. Your employer doesn't pay you to write code. They pay you to solve problems. The good employers don't care what tools you use to solve the problem, just that you solved it. The bad employers will force you to use technologies and buzzwords that probably don't apply to your situation. You should be able to defend all of your decisions and have good reasons for them.

On the flip side, not everything you try will work - that doesn't mean that it's a bad option, just that it didn't work for your situation. You don't need to have a redundant low-priority memo system because you don't get enough value out of it to justify the overhead of maintaining it.

YZF 2 days ago 0 replies      
The rule is there are no rules. The answer is "it depends".

If the only language your developers are familiar with is Ruby and you're developing a real-time, high-performance, system, then you shouldn't write it in Ruby.

If you need the kind of availability/scalability/encapsulation that microservices provide in your application/use-case, then you should use them. Don't break your application into micro-services just because everyone says it's a good idea. An Angry Birds app on an iPhone doesn't need to be split up into micro-services running on said iPhone.

If you don't have redundancy and you lose your server then you're hard down. If you're OK with that fine. If you want to continue operation with one server down then you need redundancy. Redundancy doesn't necessarily add as much complexity as you seem to imply.

Continuous Integration is usually a good idea regardless of all other variables. If you have more than a single developer working on a system it's a good idea to keep building/testing this system with every change so you can catch issues earlier. You start very light-weight though with a small team. Even a single dev can do CI, it's not that hard.
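The light-weight CI described above is essentially just an ordered list of checks that stops at the first failure. A minimal sketch (the step names and lambdas are stand-ins; a real setup would shell out to build and test commands and check exit codes):

```python
def run_pipeline(steps):
    """Run named CI steps in order, stopping at the first failure."""
    for name, step in steps:
        if not step():
            return "FAILED: " + name
    return "PASSED"

# Hypothetical steps standing in for real build/test/lint commands.
steps = [
    ("build", lambda: True),
    ("unit tests", lambda: True),
    ("lint", lambda: False),   # the first failure stops the run here
]
print(run_pipeline(steps))  # FAILED: lint
```

Even a solo developer gets value from this shape: the point is not the tooling, just that every change runs the same ordered checks before it lands.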

Agile is just a buzzword but it doesn't hurt to familiarize yourself with the Agile Manifesto while making sure you're aware of the context in which it arose. It's really mostly about understanding that requirements often change and that we're dealing with humans. Again different projects, team sizes, situations will require somewhat different approaches. Sometimes the requirements are well understood and will change very little. Sometimes you know nothing about what the software will do when you're done.

peterwwillis 2 days ago 0 replies      
I think you confused trends for wisdom.

It used to be wise to wear bell bottom jeans and perm your hair. It also used to be wise to wear colored suspenders, or pocket protectors. And shoes with lights in them, and color changing shirts.

Granted, those same weird misguided trends were probably followed by the same people who accomplished everything we have today. I think it's the effort you put into the work that determines its output, not the details of its development.

lolive 2 days ago 1 reply      
Point 5 is really insightful. When you read it carefully, it implies that agile "methodology" will soon become the prevalent methodology. Because a successful project is all about managing a massive amount of "specific, discrete, communications issues". And doing so on a daily basis is the best option.

Off-topic note: point 5 is also the way to go with your wife/husband/girlfriend/boyfriend, your kids, your friends, etc.

steven777400 2 days ago 0 replies      
Keep in mind, as others have said, the "accepted wisdom" is coming out of high-income, high-velocity, technology companies. However, a lot of development is done at companies whose primary business is not software (or not technology). Additionally, many established businesses care less about velocity than hungry startups.

In that case, I think a different set of wisdom applies:

1. Choose languages that are easy to hire for, easy to train for, have lots of 3rd-party support, and easy for junior developers to use (and that your developers already know). This generally means Java, C#, or Python (on the backend).

5. First it's important to define "agile" in your context. Agile, in terms of the agile manifesto, is almost always beneficial to the project, although it won't speed it up. Agile, in terms of cargo culting specific artifacts, is often just a waste of time and source of confusion. If your organizational definition of agile is "no project manager needed", then you're in trouble. Good project managers are essential.

6DM 2 days ago 0 replies      
I think microservices were your big issue. But yes, getting into the politics of pure scrum, kanban, whatever is a big drag.

DevOps has its merits and will work well if your team can stop trying to develop newer, better scripts and learn when to say it's good enough. I saw one team revise their scripts over and over for a whole year when they could have been using that guy for new features/bug fixes.

StreamBright 2 days ago 1 reply      
Well, I think there are 6 points to answer here:

>> 0, Are we over complicating software development?

Yes, in many cases we are overcomplicating software development. I think a large part of OOP is too complex to produce reliable services easily, though it's still possible. Simplicity is not as popular among developers as it should be. I often run into complex code that can be replaced by a 10x smaller code base that is much clearer than the original.

1, Sure

2, I am not sure why you think that; you need services that do a few things well and are individually scalable units. This used to be SOA (service-oriented architecture), and micro-services lately. There are cloud vendors out there who make it super easy to run such services for a reasonable price on their platforms without a devops team.

3, See point 0. Complex systems fail more than simple ones. Failure isolation and graceful degradation should be properties at design time. The best is to have stateless clusters (no master/slave service or registry required for correct operation) where you can scale capacity with the number of nodes.

4, Continuous integration is way older than the term microservices. It contains patterns that companies figured out by shipping code that had to be reliable, and it is optimised for frequent changes, i.e., when you are developing a new service or product. It is just a way of giving instant feedback to developers.

5, There are so many talks and videos on the web about how agile, used bluntly, is harmful that I think this is a well-understood question. Use a method that works for the team and provides insight to the business into what they are doing, and you are good. I have used Kanban for almost 10 years with distributed teams (software and systems engineering) and it works perfectly for us.

+1 for simpler code and simpler software
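The graceful degradation mentioned in point 3 can be sketched as a small fallback wrapper (all function names here are hypothetical): when the primary dependency fails, serve a degraded but valid answer instead of failing the whole request.

```python
def with_fallback(primary, fallback):
    """Wrap a flaky dependency so failures degrade instead of propagate."""
    def call(*args):
        try:
            return primary(*args)
        except Exception:
            return fallback(*args)
    return call

def personalized_recs(user):
    # Hypothetical flaky dependency, e.g. a recommendation service.
    raise TimeoutError("recommendation service down")

def popular_items(user):
    # Degraded but always-available answer.
    return ["bestseller-1", "bestseller-2"]

get_recs = with_fallback(personalized_recs, popular_items)
print(get_recs("alice"))  # ['bestseller-1', 'bestseller-2']
```

The design choice is that the failure boundary is decided at design time, as the comment argues, rather than discovered in production when the dependency first times out.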

Sammi 2 days ago 1 reply      
You are basically saying much the same as Dan North in his newest take on Agile: https://www.youtube.com/watch?v=iFLBG_bilrg

Agile is dead, long live Agile. The difference now is that we understand trade-offs. There's no silver bullet and there are no absolutes.

rymohr 2 days ago 0 replies      
Can't agree enough!

I actually wrote an article [1] last week exploring single-tenant SAAS architectures because I was annoyed with how complicated our multi-tenant plans were. Was bummed the HN post [2] didn't get any traction because I was really hoping for some critical feedback.

For me, the holy grail is a cost-effective system that doesn't back you into scaling issues down the road and is simple enough to be run by a single developer (on the side) rather than a dedicated team of sysadmins. Pipe dream? Maybe. But it's worth a shot.

[1]: https://hackernoon.com/exploring-single-tenant-architectures...

[2]: https://news.ycombinator.com/item?id=13385474

mbrodersen 23 hours ago 0 replies      
No, you are absolutely right. 95% of problems in software development are created by the software developers themselves. At least that is my experience, having worked in software for 20+ years in companies all over the world.
grok2 2 days ago 0 replies      
From what you are describing, it seems like there is more of a problem with your team than with anything else. Maybe you don't have the right set of expertise on the team, and people tend to work better with what they are comfortable with. Microservices, redundancy support, and CI fundamentally increase the complexity of how you go about things, but they do have benefits. They require a way of thinking and developing that should be a cultural fit for the team, or it will feel like you are constantly fighting the system. One way to get there is to add these incrementally after the primary project is done. When tackling one thing at a time, things end up being simpler, the need for these things works its way into everyone's habits, and they are no longer fighting the system.
buro9 2 days ago 0 replies      
devops is good stuff. Just apply to the developers the same standards (and typically answers) as you would to your deployment world. You should be able to answer questions like: "How does a new developer get going within 5 minutes?" in the same way that you answer "How do we build and deploy a new app?" and both the local developer and remote system should be debugged and monitored in the same way.

devops isn't bad, and will speed up onboarding new staff, growing, and helps your devs and ops people immensely.

On the rest I'd largely agree with you... other answers may only apply at a certain scale, or complexity, or some other set of parameters that may not apply to you now.

Solve the problem you have now, and the problem you'll definitely have in the next 6 months.

The rest is for the future.

vcool07 2 days ago 0 replies      
You need to work in an environment devoid of any practices like agile, CI, etc., and then you would know the difference. They might slow down your progress, but they make up for it with consistency and discipline, eventually leading to better (more reliable) software!
Ericson2314 2 days ago 2 replies      
It's all about long-term vs short-term. Everyone architects software for the short term, I'd say the industry at large has collectively lost/never had the vision and wisdom to do anything else.

Now maybe if you are a tiny-ass start-up, sure, but for a big established company, this is just bad economics.

Why do we talk about "disrupting" the "behemoths"? Why is everything done in tiny-ass largely-parallel teams? Very few companies have had serious thoughts about programming at scale.

I don't dispute that doing things the right way is often a huge up-front initial investment, but you do eventually get over the hump.

seangrogg 2 days ago 0 replies      
From my perspective, the problem is that people believe everyone else is caught up in the same trends they are. If someone starts to proselytize something - whether that's build management, microservices, or even pairing React with Redux by default - individuals start to think it's the "new thing" and adopt it rather than think critically about it.

Personally, I tend to shy away from tools unless they seem to do something of significant value for me that outweighs their cost on my development process. The "best tool for the job" is the one that allows me to finish a project in a timely manner, not one whose memory footprint is 10% lower.

zitterbewegung 2 days ago 0 replies      
If you keep on following every hype-train yea you will get over complicated software development.
Animats 2 days ago 0 replies      
I've used microservices, but on QNX, where you have MsgSend and MsgReceive, which make message passing not much harder than a subroutine call, and not much slower. UNIX/Linux was never designed for interprocess communication. You have to build several more layers before you can talk, and the result is clunky.

If you're crossing a language boundary, it's often better to use interprocess communication than to try to get two languages to play together in the same address space. That tends to create technical debt, because now two disparate systems have to be kept in sync.

oblio 2 days ago 0 replies      
Continuous integration on any project which will be developed for more than 1 year by more than 1 person should provide a positive return on investment.

The rest are debatable, but I feel that the point above is close to an axiom these days.

zzzcpan 2 days ago 0 replies      
I think these problems are not about software development as such; they are infrastructural and architectural. Lack of good people to handle those things is certainly a problem. But you do need quite a bit of infrastructure for microservices, for resilience, for continuous integration, and all of that paired with some good architectural decisions. Resilience is probably the hardest among them, as it requires expertise in distributed systems, operations, and infrastructure; otherwise you risk spending a lot of engineering effort on something that has almost no impact.
mybrid 1 day ago 0 replies      
Another way of saying this is it is not science.

Usability needs to be applied to more than just the end-user experience: it should cover the entire SDLC experience.

kmicklas 2 days ago 0 replies      
> 1) Choose languages that developers are familiar with, not the best tool for the job

This is probably true, but I think it's also the root cause. Enough developers aren't familiar with the right tools and abstractions (modularity, abstraction, purity, reproducibility, etc.) that we just keep rehashing the same bad ideas in a never-ending stream of new languages and frameworks that push the same decades-old ideas.

protomyth 2 days ago 0 replies      
At this point, I think a lot of software development problems are complicated because we are building on a platform that really isn't designed for the apps we want. The web makes everything a lot more frustrating and hard. It complicates testing and requires a lot more process than is justified by the apps. At some point the era of the web will come to an end, and then maybe we will get a net GUI (probably based on messaging) that will hopefully take the lessons of the web to heart.
makmanalp 2 days ago 1 reply      
I think this is where a lot of varied work experience (small / large / old / new companies) is key, because it gives you perspective. You can then ask yourself, "why does this process suck so much, and why didn't it when I worked at X?" In my experience, people who come from a monoculture background usually don't question dubious software, architecture, and methodology choices that end up killing productivity and sanity.
GFK_of_xmaspast 2 days ago 0 replies      
I don't do anything approaching microservices but a good CI setup combined with a good test suite is an absolute blessing that verges on a 'must have'.
raverbashing 2 days ago 0 replies      
1) The best tool is useless if people can't avail themselves of its power

2) True. Microservices are usually premature optimization

3) True

4) CI is a good idea regardless of using microservices or not

5) You might want to elaborate on this item

chetanahuja 2 days ago 0 replies      
No it's not just you. In general, "follow latest trends blindly" has never been a winning strategy in software development at any point in computing history. Now, that is not to say that you never change your tools or methodologies once you've mastered your existing tools. But the new tools/techs need to pass a very high bar before you subject your team to these.
wickedlogic 2 days ago 0 replies      
Yes, we are over complicating software development.
draw_down 2 days ago 0 replies      
Ultimately, though complexity is a real thing, the word is mostly used to mean "what I personally don't like".
rb808 2 days ago 2 replies      
The biggest issue I have is the current fashion for functional languages resulting in mixed style code bases. I've been working on established applications written in Java/C#/Python that have OO, imperative and now functional code all mixed together.

If I had my way, we'd choose one or the other, but no one can agree on the best way to write code.

swift 2 days ago 1 reply      
I'd like to push back on continuous integration being over-complicated. It's easy to do using off-the-shelf software and it makes life a lot less stressful when you have confidence that your changes are good before landing them in production. It's such a win that I'd set it up even with a 10 person team.
bjourne 2 days ago 0 replies      
Yes. What you have discovered is the same epiphany most developers have as they get more experienced and better at their jobs.
koolba 2 days ago 0 replies      
> 1) Choose languages that developers are familiar with, not the best tool for the job

The language that you're familiar with generally is the best tool for the job. Most software work can be equally done well (or at least greater than acceptably well) in a number of languages. Not having to learn a new one (or a new framework) is a plus.

pmontra 2 days ago 0 replies      
Most of the time we're creating complexity when we can avoid it and we're often proud of it.

The problem is that it's very difficult to find the right compromise between time, cost, and an architecture that can support the growth of the service, so we either build something too thin or something too complicated.

lukaszkups 2 days ago 0 replies      
Are you a front-end developer? :D

Yes, I think very often we over complicate even simple things. But sometimes it pays in the long run.

bassman9000 2 days ago 0 replies      
Could also be interpreted as: "devops is not yet mature/lacking tooling".

Don't get me wrong, complexity has grown. Agile is a joke. But, e.g., build systems have been maturing for 30+ years. Their cousins, deploy systems, have a long way to go.

Gurrewe 2 days ago 0 replies      
Not all development teams or products are the same. Sometimes microservices can improve quality, and sometimes the opposite.

It is important to know why you do some things, instead of applying Hype-Driven-Development.

Do what is best for you and your team, instead of what is best for someone else (with a different product, problem, and team).

greyman 2 days ago 0 replies      
Continuous integration is also necessary for bigger projects with many inter-dependent parts. I worked on such a project, which had about 100 developers, and I just can't imagine how it could have been efficiently developed without CI. But for small projects it may not be as critical.
agentultra 2 days ago 0 replies      
Just linking my comment to the other thread in response to this post:


tldr; simplicity is a great virtue and difficult to achieve in practice.

diminoten 2 days ago 0 replies      
Just to add to this a bit, what do you all think of the idea that "code is a code smell"?

In other words, if you're writing code, make sure you actually need to write it, and can't otherwise find someone else who's written/released/maintains it.

eikenberry 2 days ago 0 replies      
> 1) Choose languages that developers are familiar with, not the best tool for the job


Programmers have affinities for languages. They will work better with some languages than others and they know which languages fit them well. Those are the best ones to use.

usgroup 2 days ago 0 replies      
... I'd agree. Put briefly, if you're trying to save the day, people first.

But when you stop needing to save the day and want to build something with particular properties, you may find that process has to come first.

joelthelion 2 days ago 0 replies      
I think one cause of the problems is that what is good at Google scale is not necessarily relevant for a team of ten people.

I think the lesson here is be critical of "best practices" and think about what will work in YOUR context.

TurboHaskal 1 day ago 0 replies      
Hey, we gotta eat. If people won't pay for software licenses then we'll make them pay for training and consulting services.
tboyd47 2 days ago 0 replies      
You are correct. We cargo-cult Google and Facebook so much that we forget to apply lessons learned decades ago. People and interactions over processes and tools. There is no silver bullet. You Ain't Gonna Need It.
user5994461 2 days ago 0 replies      
So... is that big list the lessons learnt at the startup or at the big company? It's really not clear to me.

Same problem with all the comments that begin with "at my last company". Which kind was it?

dood 2 days ago 0 replies      
The issue is that solving real problems is hard, but making things complicated is easy, fun, and looks a lot like solving real problems if you aren't paying careful attention.
jasonlotito 2 days ago 0 replies      
> lack of communication

You can't talk about lack of communication and blame "devops" at the same time. If there was a lack of communication, you aren't "doing devops."

camus2 2 days ago 0 replies      
1/ What language did they choose? Why? What made them think language X or framework Z would give them a competitive advantage in the first place, and what was the result of that choice?
GnarfGnarf 2 days ago 0 replies      
Software development goes off the rails because there are no physical materials involved, so there is no built-in limitation to prevent costs from going out of control.
sapeien 2 days ago 0 replies      
1) Doesn't always work if you want to target embedded systems or need performance, and all you know are scripting languages with huge overhead like Ruby, JS, Python, etc. Some languages really are better than others.

2) Could say avoid distributed computing if your problem is not distributed. This is more about being a blind follower of the latest hype.

3 & 4) Complicated DevOps are a bad idea in general. Stuff that seems to simplify things on the surface like Docker are actually hiding tons of complexity underneath.

5) To most people, Agile = JIRA = Sprints = Scrum. It's corporate mentality codified, so it's no surprise that a lot of startups avoid it.

d--b 2 days ago 0 replies      
Truth is: you're young and you're becoming an experienced developer... You somehow have to go through these stages. In the end, you'll be all right.
devdad 2 days ago 0 replies      
Hi, I'm happy to be posting anon right now. Can someone ELI5 the difference between libraries and packages and a microservice?
pknerd 2 days ago 0 replies      
Some people have got so used to complicated architectures and workflows that they find your questions odd. Just check the comments.
graphememes 2 days ago 0 replies      
I have seen Microservices be the death of a lot of startups / corporations. Proceed with caution.
r4ltman 1 day ago 0 replies      
As a guy whose idea was successfully pitched to a successful tech company, to which I'm still connected, I'm going to say yes. The classifications that come with specialty training keep the process from being as fluid as it needs to be in order to be truly game-changing, rather than merely meeting whatever expectation is expected.

I know this sounds different to everyone, so here's the point:

The user needs to use it. The focus is always on everything else. Only when there's been some success does the user (and by user, I mean the entire field the program is for) become an influence. This lack of empathy keeps any leadership from ever happening when everything is based on the past successes of other companies rather than on trying to lead effectively.

z3t4 2 days ago 0 replies      
I think the best way is to start backwards from the future: what are the requirements? Then plan towards today: what do you need, and when? That's how I planned my training program as an athlete. The most important question is: what do I need (to do) right now?
madhadron 2 days ago 0 replies      
Figuring out how to do things simply is remarkably hard. After twenty years of this, I feel like I'm beginning to be able to design simple systems some of the time.

The problem with much "currently accepted wisdom" is that it doesn't explain exactly what is being balanced. "Works for my organization" is the equivalent of "works on my machine." For example,

1) "Best tool for the job" when applied to languages nearly never is a question of the intrinsic merits of a language design. There have been quite a few discussions recently on Hacker News on the virtues of a boring stack, that is, one that everyone else has already beaten on so much that you can expect to hit fewer issues.

2) Microservices are a tradeoff. If you have an engineering team of five hundred shipping a single software as a service product, one of your biggest issues is coordinating releases among all those people without having your services ping-ponging up and down all the time. Microservices are an answer to that. At that scale you've already had to automate your operational troubles, so it doesn't impose that much additional operational cost. If you have an engineering team of ten, then none of this applies to you.

3) High availability, like all concurrency, is hard. Try to write your own code so that it scales horizontally by simple replication and depends on stock components such as Kafka, Zookeeper, etcd, or Cassandra to handle orchestration. In many cases your reliability budget may be such that you can run a single system, automate some operations around it, and be just fine. It's only when your reliability budget doesn't allow that, or your workload forces you to orchestrate parallel work, that you have to go this route.

4) Yes. Nearly all discussion of agile software development that I've seen focuses on rituals without the applied behavior analysis underlying them. For example, a standup meeting has a small set of goals: establish a human connection between everyone on the team on a regular basis; air things that are blocking individuals in a forum where they are likely to find someone who can unblock them quickly; have everyone stand up and take responsibility for what they are doing in front of their team; and serve as a high bandwidth channel of communication of important information (the build is going to break this afternoon for an hour, etc.). If those outcomes are being achieved in other ways by your group, then there's no reason to have a standup. If you're doing a standup and it's not accomplishing one or more, you need to revise how you do it. Human behavior and interaction is something to be designed and shaped in an organization. What works in a team of three with excellent communication may not work in a team of ten or fifty or five hundred.

hmans 2 days ago 0 replies      
Yes, we are; no, it's not just you. Next question.
lngnmn 2 days ago 0 replies      
looking at some react-todo-demo and its dependencies - complicating? not at all!

J2EE will soon look like a reasonable thing.

shitgoose 2 days ago 0 replies      
it is not just you, but we are hopelessly outnumbered.
bbcbasic 2 days ago 0 replies      
Horses for courses.
btilly 2 days ago 0 replies      
There is a lot of BS in software development. Always has been, probably always will. Everything is a tradeoff. Understand the tradeoffs that you are taking, listen for the principles, and you can ignore most of the noise.

On to your questions.

1) Choose languages that developers are familiar with, not the best tool for the job

How familiar developers are with the language is part of what determines what is best for the job at hand in a real organization.

It isn't the only factor. For example, if you're doing something new (to you), doing it in the language used by whatever resource you're learning from makes sense, because you'll be more likely to get help through complex issues.

That said, do not underestimate the support advantage of using a consistent toolset that everyone understands.

2) Avoid microservices where possible, the operational cost considering devops is just immense

See https://martinfowler.com/bliki/MonolithFirst.html for emphatic support.

If you go the microservices route, think ahead about predictable challenges such as debugging failures 3 calls deep, and plan in advance for monitoring and similar tooling to solve them.

3) Advanced reliability / redundancy even in critical systems ironically seems to cause more downtime than it prevents due to the introduction of complexity to dev & devops.

As the old saying goes, DBAs are the primary cause of databases going down. Reliability is not something that you just plaster on top blindly. And systems are good at finding failure modes that you never thought of.

4) Continuous integration seems to be a plaster on the problem of complex devops introduced by microservices.

No. Continuous integration is actually a fix for developers checking in clearly broken code and then nobody discovering it later. That said, it does little good without a number of other good practices that are easy to ignore.

5) Agile "methodology" when used as anything but a tool to solve specific, discrete, communications issues is really problematic

This one generated the most discussion. I would say sort of, but you went too far.

Any set of poorly understood principles, dogmatically applied, is going to work out badly. Agile is actually a set of good principles that addressed a major problem in the common wisdom back in the day. But the pendulum has swung and it is often applied poorly.

That said, there are other problems in organizations which are prone to, "poorly understood principles, dogmatically applied"...

Welcoming Fabric to Google googleblog.com
357 points by mayop100  2 days ago   125 comments top 33
s_dev 2 days ago 4 replies      
Fabric was a much better analytics tool than Google Analytics.

It's better because:

Instant crash reporting reduces release-time anxiety. iTunes and Google Analytics need a 24-hour collection period.

Fabric offers Fastlane and Beta -- toolkits that help deal with distributing builds to testers and releasing apps. Google has nothing to compete with here.

However Fabric defines active users and sessions, it seems to produce more accurate reporting, while Google's numbers, integrated with the exact same app, come out higher and more ego-stroking.

huangc10 2 days ago 3 replies      
The title is a bit ambiguous. From my understanding, Google (Firebase) acquired Fabric from Twitter (not necessarily "joining").

Fabric is one of the top dev tools I use for iOS. I wonder what kind of change is in store...

niftich 2 days ago 0 replies      
Huh, interesting. I once responded to speculation that Google would buy Twitter by saying [1] they'd be better off neutering Twitter's ad network and data mining ambitions that were largely buoyed by Fabric and Crashlytics, and now I figure this will largely accomplish that. It also disproves my point that Twitter could sustainably pivot into this space [2] by leaning on Fabric, Digits, and Magic Pony, and answers my more recent musing about how Fabric would fare after Twitter's recent downsizing [3].

[1] https://news.ycombinator.com/item?id=11913828#11914620
[2] https://news.ycombinator.com/item?id=11937756#11942293
[3] https://news.ycombinator.com/item?id=12784274#12784473

danielhooper 2 days ago 11 replies      
Knowing Google's reputation with abandoning developer tools and services, can anyone offer suggestions as to possible alternatives? Or counterpoints as to why I shouldn't fear this service degrading over the next year or two? We're currently using Fabric and Crashlytics for our iOS app where I work and this news has prompted us to research alternatives.
fharper1961 2 days ago 1 reply      
Seems to me like the main value for Google is the data from all the apps that have Fabric SDK integration (e.g. Crashlytics).

Quote from the Fabric blog post https://fabric.io/blog/fabric-joins-google

"Fabric has grown to reach 2.5 billion active mobile devices".

nstj 2 days ago 2 replies      
Having experienced some rather concerning keychain issues with Digits a few months back and seeing the writing on the wall with Twitter and their stockholders calling for blood I dropped all of my Fabric dependencies in November.

For crash reporting the best alternative I found (superior to Crashlytics IMO) is Sentry[0]. It's been awesome so far and allows you to easily federate crash reporting over multiple platforms. I have no association with the company.

[0]: https://sentry.io/

atarian 2 days ago 2 replies      
Twitter just lost one of their crown jewels; Fabric is probably the biggest mobile analytics platform out there.
rjvir 2 days ago 2 replies      
What does this mean for Twitter Digits? Will their free SMS authentications continue?

It would be a perfect fit if they phased in Twitter Digits as a Firebase authentication provider - I suspect many developers (including myself) already use these 2 services in tandem.

jrowley 2 days ago 2 replies      
Silly me - I was concerned they had somehow acquired the Fabric python library.


mxstbr 2 days ago 1 reply      
I hope this isn't Twitter saving the good parts of the company before they crash and burn...
closed 2 days ago 0 replies      
(This Fabric is not the popular python library for deploying things from the command line.)
pkamb 2 days ago 1 reply      
So weird to see the mostly dead http://crashlytics.com and its iOS 6-era design (linen texture!) as it's been superseded by "Fabric" and now "Firebase". Seems like such a strong brand name to kill in favor of these very generic alternatives.
gorkemcetin 2 days ago 1 reply      
I wonder whether China will start banning Fabric servers, as they'll literally be owned by Google. If that were the case, I can't imagine the mess Chinese developers would face. Assuming they have access to Fabric services now.
sulam 2 days ago 0 replies      
How things change: three years ago Fabric was a key part of Twitter's platform strategy. This has to feel like a big letdown, unless you're Jeff Seibert.
mindcrime 2 days ago 0 replies      
Well, this is probably good for Google and Fabric, but I'm less sure about it being good for Twitter. My opinion has long been that Twitter needed to double-down on being developer friendly and developer focused. This seems like the exact opposite of that.

It strikes me as being about as smart as Sears selling off the Craftsman brand. At least, to me, this feels self-defeating.

nnd 12 hours ago 1 reply      
Any recommendations for cloud-based CI/CD services? I'm currently using Bitrise, then delivering to Fabric via fastlane.
orbitur 2 days ago 2 replies      
I'm happy that Crashlytics will live on, as that was something I was concerned about in light of Twitter's recent poor performance.

However, I'm really not looking forward to the eventual Firebase-ifying/Google-ifying of the UI/design. The Firebase/Google Console interfaces are terrible. Just awful. I cringe thinking about what could happen to the Crashlytics UI.

sapeien 2 days ago 1 reply      
I haven't heard of Fabric before. Fabric seems to have an ambiguous name and their marketing website is equally ambiguous. Something to do with mobile app analytics? I find this trend in developer tool marketing to be appalling.
Sujan 2 days ago 0 replies      
What about Fastlane?
Hydraulix989 2 days ago 0 replies      
Glad to see this under a better umbrella. The Twitter developers kept blowing off everyone's requests for Crashlytics to support the gradle-experimental plugin (which was necessary to use the NDK within Android Studio for the longest time).
stevepotter 1 day ago 1 reply      
I'm an active user of Fabric and have found it rather convoluted and limited. For example, there is no way to easily get device UUIDs from Beta testers. Given they also sponsor Fastlane, I'm blown away they haven't provided a way to automatically register UUIDs. This is possible with HockeyApp through their API and some Fastlane scripts.

Crashlytics is their killer feature. The rest is mediocre. Hopefully Google will improve it. I'm not holding my breath.

sebleon 2 days ago 5 replies      
Unlikely that we'll see improvements in Fabric services for a while... presumably, engineering resources will be focused on integrating with Firebase :(

Anyone recommend alternatives for Crashlytics and Digits?

guelo 2 days ago 1 reply      
Huh? If Twitter's future is not ads and analytics then what is it?
KerryJones 1 day ago 0 replies      
More of a comment on Firebase: as someone who used it pre- and post-merge, a whole bunch of small things got notably worse.

They said "great adoption", but it was actually forced (my previous company held on to the old Firebase as long as we could). From poor naming specs on export, to base URL structures changing, to the live view handling fewer data points, and a number of other small things, it was a bit of a letdown.

Hoping this goes better.

alex4Zero 1 day ago 0 replies      
The only thing I'm worried about is Digits. I hope Google won't close it.
zero-x 2 days ago 0 replies      
Best of luck, really dig the platform.
troymc 2 days ago 0 replies      
I guess they don't mean this Fabric: http://www.fabfile.org/
relics443 2 days ago 1 reply      
While Firebase is still awesome, there was a definite downgrade in support when Google integrated them. Hopefully this will go better.
andy 1 day ago 0 replies      
Let's hope it doesn't end up like Adwhirl. :(
KayL 2 days ago 0 replies      
I use it in all my iOS development. Will it be 2nd class?
tn13 2 days ago 0 replies      
What does this mean for MoPub?
xugo 2 days ago 1 reply      
What about Stripe?
jondubois 2 days ago 4 replies      
Based on my experience with Firebase, it doesn't reduce complexity; it just shifts it around and adds extra costs (both financial and performance costs) to your system.

For any serious app, you still need to have a backend server on the side and your Firebase service often becomes bloated and inefficient. Sometimes you want to store the Firebase data inside your main DB as well and so you end up with two sources of truth and Firebase ends up becoming a third wheel to your project (just a bloated data transport layer).

It's not surprising that Firebase has been sliding in terms of popularity: http://www.alexa.com/siteinfo/firebase.com

It's good for rapid prototyping/MVPs but not for any serious use case.

I think the big lesson in the framework/devtools space is that the more opinionated the tooling is, the less flexible it becomes and the fewer use cases it covers.

My Go Resolutions for 2017 swtch.com
323 points by mitchellh  2 days ago   195 comments top 21
munificent 2 days ago 8 replies      

 > Part of the intended contract for error reporting in Go > is that functions include relevant available context, > including the operation being attempted (such as the > function name and its arguments).
I know the Go folks don't like exceptions, but this is an example of them learning the hard way about one useful thing they lost by deciding not to do exceptions.

Exceptions give you stack traces automatically. All of that context (and more) is there without library authors having to manually weave it in at every level of calls.

 > Today, there are newer attempts to learn from as well, > including Dart, Midori, Rust, and Swift.
For what it's worth, we are making significant changes to Dart's generics story and type system in general [1]. We added generic methods, which probably should have been there the entire time.

Generics are still always covariant, which has some plusses but also some real minuses. It's not clear if the current behavior is sufficient.

Our ahead-of-time compilation story for generics is still not fully proven either. We don't do any specialization, so we may be sacrificing more performance than we'd like, though we don't have a lot of benchmark numbers yet to measure it. This also interacts with a lot of other language features in deep ways, like nullability, whether primitive types are objects, how lists are implemented, etc.

[1]: https://github.com/dart-lang/dev_compiler/blob/master/STRONG...

bsaul 2 days ago 4 replies      
Along with generics, they should probably also reconsider algebraic data types, such as enums with associated values. This is the best feature Swift brings to the table, hands down, and it seems to me that it's pretty orthogonal to the rest of the language (although it carries a lot of other features with it, such as pattern matching).

They wrote that they considered it to be redundant with interface programming, but really I don't understand why. Interfaces are about behavior, not data. An int doesn't "behave" like one, it is one. And something that's either an int or an array of strings doesn't "behave" like anything you'd want to describe with an interface...

As an example, one should see how protobuf "oneof" messages are dealt with in Go: a switch on arbitrary types followed by manual typecasting. That's just gross...
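To make the complaint concrete, here is a hand-written sketch (not actual protobuf-generated code) of emulating "either an int or an array of strings" with an interface and a type switch; all names are invented:

```go
package main

import "fmt"

// Value is a "sealed" interface standing in for a sum type -- roughly the
// shape of protobuf's generated oneof code.
type Value interface{ isValue() }

type IntValue struct{ V int }
type StringsValue struct{ V []string }

func (IntValue) isValue()     {}
func (StringsValue) isValue() {}

// describe must type-switch and manually unwrap each case; nothing forces
// the switch to be exhaustive, which is the complaint above.
func describe(v Value) string {
	switch v := v.(type) {
	case IntValue:
		return fmt.Sprintf("int: %d", v.V)
	case StringsValue:
		return fmt.Sprintf("%d strings", len(v.V))
	default:
		return "unhandled case" // a forgotten case fails at runtime, not compile time
	}
}

func main() {
	fmt.Println(describe(IntValue{42}))
	fmt.Println(describe(StringsValue{[]string{"a", "b"}}))
}
```

With real sum types and pattern matching, the compiler would reject a non-exhaustive switch instead of silently hitting the default case.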

ekidd 2 days ago 3 replies      
I've always wanted to like Go, but every time I get ~1,500 lines in a project, I remember my pain points. I totally see why other people like the current version of Go, but as it stands, it's not an ideal match for my brain.

Dependency management is a big pain point for me. I'm really glad to see several of my pain points on the list for this year, including another look at generics.

Generics are genuinely tricky: They allow you to write many kinds of useful functions in a type-safe manner, but every known approach for implementing them adds complexity to the language. C#, Java and Rust all bit the bullet and accepted (some) of this complexity. Maybe Go will find a sweet spot, preserving its simplicity but adding a bit of expressiveness?

Anyway, it pleases me to see that the Go team is thinking hard about this stuff. At the bare minimum, I'm going to be contributing code to other people's open source Go projects for the foreseeable future. :-)

nine_k 2 days ago 1 reply      
Posts like this really restore my confidence in the future of Go the language.

I very much wish Go to succeed. It's built on a few nice ideas, but as it currently stands it has a number of usability impairments that stop me from wanting to work with it.

But I see that these impairments are seen as problems by key developers, and work is underway to eventually fix these problems. (And this is besides the "routine", incremental but very important improvements, such as GC or stdlib.)

stouset 2 days ago 1 reply      
> Not enough Go code adds context like os.Remove does. Too much code does only

 if err != nil { return err }
Is anyone else surprised that forcing programmers to do the tedious, repetitive, and boring work of being a manual exception handler overwhelmingly results in people doing the least amount of effort to make it work?

I feel like so many of the headaches of go could have been avoided had the developers spent any time whatsoever thinking about the programmers using it.
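For contrast, a minimal sketch of the bare pattern next to hand-woven context (`removeBare`, `removeWithContext`, and the path are invented names, not from any real codebase):

```go
package main

import (
	"fmt"
	"os"
)

// Bare propagation: the caller learns only what os.Remove says.
func removeBare(path string) error {
	if err := os.Remove(path); err != nil {
		return err
	}
	return nil
}

// With context: the operation and its argument are woven in by hand --
// exactly the repetitive manual work the parent comment objects to.
func removeWithContext(path string) error {
	if err := os.Remove(path); err != nil {
		return fmt.Errorf("removing config %q: %v", path, err)
	}
	return nil
}

func main() {
	fmt.Println(removeBare("/no/such/file"))
	fmt.Println(removeWithContext("/no/such/file"))
}
```

The second version is what the post asks for; the first is what tedium produces.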

throwaw199ay 2 days ago 3 replies      
> Not enough Go code adds context like os.Remove does. Too much code does only

Well, the error interface is { Error() string }, and gophers were told to use errors as values, not errors as types, because supposedly "exceptions are bad". By providing context you are just reinventing your own mediocre exception system. Why use errors as values in the first place if you need context? Just put exceptions in Go, so that people don't need a third-party library to wrap errors in order to trace the execution context.

davekeck 2 days ago 2 replies      
Regarding error context: I'd advocate simple error-chaining using a linked list. If a function fails, it returns an error wrapping the underlying error as the cause, and so on up the stack. The top of the stack can inspect or print the error chain ("A failed because B failed because ..."), or pinpoint the error that was the root cause.

I would love for Go to include something like this:

  type Error struct {
      Description string
      Cause       error
  }

  func NewError(cause error, descriptionFmt string, args ...interface{}) error {
      return Error{
          Description: fmt.Sprintf(descriptionFmt, args...),
          Cause:       cause,
      }
  }

  func (me Error) Error() string {
      if me.Cause == nil {
          return me.Description
      }
      return fmt.Sprintf("%v: %v", me.Description, me.Cause.Error())
  }

  func RootCause(err error) error {
      if err, ok := err.(Error); ok && err.Cause != nil {
          return RootCause(err.Cause)
      }
      return err
  }

brightball 2 days ago 1 reply      
> In the long-term, if we could statically eliminate the possibility of races, that would eliminate the need for most of the memory model. That may well be an impossible dream, but again I'd like to understand the solution space better.

Unless I'm mistaken, this is an impossible dream as long as shared memory exists. It's the core tradeoff that distinguishes the Erlang runtime from the Go runtime (there are others, but they all stem from this).

Your goals are either memory isolation for better distribution/concurrency/clustering/fault tolerance/garbage collection or shared memory for ability to work with large datasets more efficiently.

It's one of those details that changing it would essentially create a new language. You'd have code, packages and libraries that either worked that way or they wouldn't.

IMO, this is an area where Go gets into the dangerous territory of trying to be all things to all people. Be great at what you're good at, which is the "good enough, fast enough, portable enough, concurrent enough, stable enough" solution for backend services in most standard web architecture.

If people need distributed, fault tolerant, isolated, race proof, immutable run times that aren't quite as top end fast and aren't ideal for giant in RAM data structures...there's already a well established solution there by the name of Erlang (and Elixir). They made the tradeoffs already so you don't have to reinvent them.

geodel 2 days ago 1 reply      
Excellent read. Official package management looks more a matter of 'when' than 'if' now. As someone in the Java world who never graduated to Maven/Gradle and stuck with Ant, I hope it will be minimalistic and immediately useful to Go users.
sbov 2 days ago 1 reply      
> all the way up the call stack, discarding useful context that should be reported (like remove /tmp/nonexist: above).

It's simple. With exceptions, we got used to "errors" that are, by default, debuggable. But Go got rid of debuggable-by-default errors, and programmers are lazy.

zyxzkz 2 days ago 0 replies      
I really enjoyed reading this thoughtful post. He addresses a lot of the pain points I've encountered when writing Go.

I know there probably won't be immediate fixes, but it gives me confidence in Go's future.

jimbokun 2 days ago 1 reply      
"I don't believe the Go team has ever said Go does not need generics."

I think that's true, but I do think it's been said by a number of Go users and advocates, which is where the perception comes from.

vendakka 2 days ago 0 replies      
The go vet integration with go test looks interesting. I'm currently using github.com/surullabs/lint [1] to run vet and a few other lint tools as part of go test. It provides a nice increase in productivity for my dev cycle. Having vet + other lint tools integrated into my dev+test cycle has caught a number of bugs before they hit CI.

[1] https://github.com/surullabs/lint

Disclaimer: I'm the author of the above library.

maxekman 2 days ago 0 replies      
I would really like to see best practices in the documentation on how to include the right amount of error context, as mentioned in the article.

Also, what to put and not put in context objects is really important to document, as it could easily snowball into a catch-all construct and be totally misused after a while.

vbernat 1 day ago 0 replies      
Happy to see that being able to not use GOPATH is at last considered seriously! For years, the Go people wanted to force everyone to work their way. You can still see this state of mind in the associated bug report: https://github.com/golang/go/issues/17271.
kibwen 2 days ago 3 replies      
> Test results should be cached too: if none of the inputs to a test have changed, then usually there is no need to rerun the test. This will make it very cheap to run all tests when little or nothing has changed.

I'll be curious to see how this pans out, because it sounds like a very deep rabbit hole. Is there any precedent for this in other language toolchains? I've seen some mondo test suites in Java that could desperately use it.

vorg 2 days ago 1 reply      
> it would be nice to retroactively define that string is a named type (or type alias) for immutable []byte

Perhaps an array is better than a slice, so `immutable [...]byte`. Also, the for-range loop would have to behave differently, so I guess it's a version 2 change. And if semantics are changing anyway, I'd prefer a `mutable` keyword to an `immutable` one.

jaekwon 2 days ago 0 replies      
Re dependencies:

Glide takes our team 90% of the way, but is a bit glitchy (need to wipe ~/.glide sometimes) & lacks a command for `npm link` type functionality.

jnlknl 2 days ago 0 replies      
Great!
BuuQu9hu 2 days ago 0 replies      
It is nice that Go is trying to learn from Pony and Midori. I wonder whether any Gophers have started to learn about object-capability theory and the reasoning behind why so many values are immutable in Pony, Midori, Monte, and other capability-safe languages.

To expand on this a bit, in ocap theory there is a concept of "vat", an isolated memory space for objects which has its own concurrent actions isolated from all other vats. In a vat model, data races are nearly nonexistent; in order to race, one would have to choose a shared variable in a single vat, and then deliberately race on it. But this is not common because ocap theory enforces heavy object modularity, and so shared variables are uncommon.

Additionally, a "deliberate race" is quite tricky. Vats prepare work with a FIFO queue. In the Monte language:

  # A flag. We must start it as `false` because Monte doesn't allow
  # uninitialized names.
  var flag :Bool := false

  def setFlag(value :Bool) :Void:
      flag := value

  # Set the flag to `true` immediately.
  setFlag(true)

  # Set the flag to `false` on the next turn.
  def first := setFlag<-(false)

  # Set the flag to `true` on the next turn.
  def second := setFlag<-(true)

  # And finally, when those two actions are done, use the flag to make a choice.
  when (first, second) ->
      if (flag) { "the flag was true" } else { "the flag was false" }
You might think that this is non-deterministic, but in fact the delayed actions will each execute in order, and so the flag will be set first to `false` and then to `true`.
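A rough Go analogy of that turn ordering (my sketch, not from the comment; Go channels give per-channel FIFO delivery, but none of the memory isolation a real vat provides):

```go
package main

import "fmt"

// runVat owns `flag` in a single goroutine; every mutation arrives as a
// message on a FIFO channel, so the writes apply strictly in send order.
func runVat(values []bool) bool {
	msgs := make(chan bool)
	done := make(chan bool)
	flag := false
	go func() {
		for v := range msgs {
			flag = v // only this goroutine ever touches flag
		}
		done <- flag
	}()
	for _, v := range values {
		msgs <- v
	}
	close(msgs)
	return <-done
}

func main() {
	// Mirrors the Monte example: set true now, then false, then true,
	// each on a later "turn". The final value is deterministic.
	fmt.Println(runVat([]bool{true, false, true})) // prints: true
}
```

As in the Monte example, what looks like a race is deterministic because all mutations are serialized through one queue owned by one actor.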

xiaoma 2 days ago 2 replies      
Japanese toilet industry agrees to standardize complex bidet controls theverge.com
282 points by prostoalex  1 day ago   225 comments top 18
bjackman 1 day ago 16 replies      
> The government has recommended removing the Buddhist manji symbol from maps aimed at foreigners, for example, for fear of unintended associations with the Nazi swastika.

Does anyone else think this is a shame? When I've been to Asia and seen "swastikas" everywhere, I've found it joyful, in a way. The hate symbol has no power here, I thought; it's a positive thing. Why should one culture change its own iconography just because it was perverted in a different culture? It seems like a mild example of self-inflicted cultural imperialism.

Maybe I'm just being very philosophically naïve.

patio11 1 day ago 11 replies      
If you want an illustration of the problem, here's one sourced by the very scientific method of finding the closest men's restroom:


There are over 20 individual controls on that unit (which is, FWIW, common and reasonably expensive). If you do not read Japanese, good luck finding flush... and finding it will not help you find it on the next machine you use.

tunesmith 1 day ago 2 replies      
A year ago I bought a Luxe bidet attachment for my wife as a half-joke. We basically fell in love with it. Bought it for friends of ours to be funny. They thought it was hilarious, waited months to install it, then installed it and fell in love with it. Now they want to buy it for friends. We bought it for family members this Christmas. Uproarious laughter, and then... you guessed it, they fell in love with it, and are thinking of who they might buy it for. For Americans, this is truly one of those things where you come away thinking "Holy moly, why did we not do this sooner?"

It's $60 for the deluxe version, cheaper if you don't want hot water (and you actually don't need the hot water). It's a subtle-but-massive improvement in quality of life.

piyush_soni 1 day ago 6 replies      
Oh, how badly I wished Americans had a clean way of cleaning their ... bottoms during my 8 years of stay there. Thankfully, Amazon had nice portable bidets which you could fit in your home toilets, but everywhere else it was still the same.
dguest 1 day ago 1 reply      
I was hoping that along with the standardization we'd get an explanation (in english) as to what these symbols mean.

I mean it's nice to know that we can standardize to:

- a pair of line wobblers

- two tornadoes of different intensity

- two different fountain rides with different camera angles and zooms

- getting mauled by a three-toed sloth, and

- the all important black box

but maybe for people with less inductive skill some words would help too.

wapz 1 day ago 3 replies      
For those who don't know, you don't actually have to use the bidet when you go to Japan. The most complicated thing about toilets in Japan is that the flush button is on the wall sometimes (and the emergency button is often on the wall, too).
ericdykstra 1 day ago 2 replies      
It's nice that these companies have come together to standardize their icons.

I don't see the intrigue in this story, though. Can someone explain it to me? Is it the iconography design? The "wow Japanese toilets are complicated!" reaction? People just upvoting anything that has to do with Japan?

dcow 1 day ago 0 replies      
I think the real problem is lack of exposure to such miraculous devices outside of Asia-Pacific.
IE6 1 day ago 0 replies      
As an American who has used Japanese toilets I can testify that they are solving a very real problem.
agumonkey 1 day ago 1 reply      
My favorite thing is not the complexity, but that you can operate the toilets without touching them. I wish foot controls for the cover, flush, faucet, and soap were mandatory everywhere.
allengeorge 1 day ago 2 replies      
At the risk of sounding completely uncultured, what exactly is a rear spray, and how does it differ from the bidet functionality? Is the bidet targeted and the rear spray a delightful misting?
jzl 1 day ago 5 replies      
Serious question: WHY can't the computer industry do this for USB-C cables and ports? It's desperately needed and shameful that they haven't done this.
dronedronedrone 1 day ago 0 replies      
this is tantamount to cultural genocide /s seriously though, one of the peculiar joys of being in japan and having a very poor grasp of the language is the inescapable urge to play with the bidet buttons. you will inevitably start spraying water all over the bathroom, get yelled at by a nice robot voice, and panic a great deal.
cm2187 1 day ago 8 replies      
Are these used outside of Japan?
codeddesign 1 day ago 0 replies      
To me... for an industry standardization, those icons are pretty elegant. Just think of standard restroom signs and then look at these.
binarynate 1 day ago 0 replies      
This standardization makes it clearer that there's a button that blasts you into the air.
petepete 1 day ago 0 replies      
What are the chances that they'll standardise on the Three Seashells system?
homakov 1 day ago 0 replies      
So they watched Why him?
How Do You Measure Leadership? ycombinator.com
356 points by craigcannon  1 day ago   141 comments top 38
freddyc 1 day ago 5 replies      
Over the years a test I've often used is asking "how does this person respond to being challenged/questioned?" A great leader tends to embrace the fact that someone is asking "why" and uses it as an opportunity to learn and potentially convert the questioning party (if they're questioning something in the first place, then you haven't nailed it 100%). A weak leader who doesn't have confidence in their abilities sees the challenge as a personal attack and reacts in a knee-jerk fashion (often, though not always, resulting in a termination). If you can't reconcile differing opinions and convert those with opposing views to you then you're doomed as a leader and odds are your company/team will experience high turnover.

Obviously there's a whole range of other traits that make great leaders, but I've found people that fail this test are almost always terrible leaders who others don't want to work for.

swombat 1 day ago 2 replies      
How do you measure a great car? There are three factors I've observed: great cars can accelerate, great cars are fun to drive, and great cars have steering wheels.

Sorry, but leadership cannot be reduced to these three factors. There are many excellent leadership frameworks out there which provide great insight into how different leaders operate, and what the great ones have in common. Look up topics like Spiral Dynamics, the Action Logic leadership framework, Kegan & Lahey, and the Integral framework, and you'll have some good starting points on models of adult development that correlate to effective leadership.

edw519 1 day ago 6 replies      
I've had 80 bosses. 77 of them sucked. I would march through hell to help the other 3 get something done. For me that pretty much sums it up. All the rest is fluff.

FWIW, OP's 3 metrics:

  1. Clarity of Thought and Communication
  2. Judgment about People
  3. Personal Integrity and Commitment
Those should be necessary but not sufficient characteristics of every person in your organization.

EDIT, response to walterbell & el_benharneen about what made the 3 different (in no particular order):

  - They always told the truth (to everybody).
  - They knew their stuff (tech, system, user domain).
  - They figured out the right thing to do.
  - They communicated often and flawlessly.
  - They did whatever it took to get the right thing done.
  - They smiled almost all the time.
  - They made each person feel special.
  - They made work fun.
  - They were always teaching something.
  - They called bullshit instantly.
  - They protected their team.
  - They inspired us by showing how good things could be.

claar 1 day ago 2 replies      
Also a great read along these lines is "The 21 Irrefutable Laws of Leadership" by John Maxwell, which I'm close to finishing currently.

Maxwell claims that leadership is influence, not authority. When I became a co-founder, I thought that made me a leader. But as PG's excellent post and Maxwell affirm, leadership is quite distinct from positional authority -- and is much more difficult to attain.

Speaking directly to this post, I found that rating myself against Maxwell's "21 laws" was a sobering and likely accurate gauge of my leadership ability.

ChuckMcM 1 day ago 2 replies      
It is always interesting when someone who believes themselves to be a great leader discovers that they are not. And since many of the traits that make great leaders (self-awareness, humility, honesty, etc.) are missing in these folks, the world around them sort of explodes when that realization hits. In my experience it is a time when they are most likely to embrace 'leadership through politics.' It is always a strong signal that it is time to distance oneself from the faux leader's area of influence.
remarkEon 21 hours ago 0 replies      
This is a hard question to answer, and my personal opinion is based on my years of experience in the Army. The best leaders I encountered managed to somehow bring out the best in the people they led. That can manifest in a lot of ways: improvements in subordinate performance, increases in technical proficiency, a more disciplined approach to their work. Those are all good metrics, but the best leaders managed to get their subordinates to actually want to improve on their own, without goading from their leaders. Most of that, therefore, lands in the realm of understanding group dynamics, behavioral economics, and leadership psychology.
jonathanstrange 16 hours ago 0 replies      
What about the study mentioned in Kahneman's Thinking, Fast and Slow, according to which there was only a very slight correlation between the success of a company and the qualifications of its CEO?

Don't get me wrong, CEOs have my utmost respect and I don't claim that it's an easy job. I just wanted to point out that there are reasons for believing (at least as a possibility) that, from the point of view of a realistic assessment, the choice of a leader and his or her personality, qualifications and ambitions do not have much to do with the performance of a company, and that the many apparent examples to the contrary are mostly based on selection bias and on biases towards oneself, such as regarding one's own success as an achievement rather than chance while seeing others' as chance, estimating your own social status higher than others', believing you're less biased than others, etc.

Maybe the best qualification for leadership is being at the right place at the right time, and nobody else really wants to do it?

If that sounds too negative, let me stress again that I think CEOs and people in certain kinds of leadership positions often (though not always) do some difficult work that I generally respect. I just don't buy the claim that the successful ones are little geniuses. A decent amount of intelligence (smartness), some generic business knowledge and being good with social relations seem to suffice.

prewett 1 day ago 0 replies      
Leadership is people development. So, how many people have you developed? How many times have you reproduced yourself?

If you want to grow your company, you are going to have to reproduce yourself so that the new you is doing the old role and you can step into the new one, or perhaps relieve yourself of excess roles. That role may or may not have the title you had when you were doing it, however. You might be titled "CEO" when you are leading a team of 5 people, but you will reproduce yourself as "Team Leader" as you start adding teams.

Merely having clarity of thought and integrity does not make you a leader, it makes you a great team member. Merely having good people judgement makes you a good manager, not necessarily a good leader. Developing people makes you a good leader. It's hard to do that without the other three, though.

Cyranix 1 day ago 1 reply      
Re: "Clarity of Thought and Communication": I have worked at a couple of places that put a lot of effort into internal communications, selling employees on upcoming product changes they'll be working on, but failed to acknowledge the existing significant problems that everyone saw and that were repeatedly punted on. Being able to give a slick pitch is not sufficient for this leadership criterion; the narrative must be "credible" (as mentioned rather briefly in the article). Is it just me, or do other people find themselves frustrated by internal messaging that is self-consistent but not grounded in reality?
curiouslurker 1 day ago 2 replies      
Great read, but did Steve Jobs really have personal integrity? He was famously two-faced, manipulative, and as petulant and petty as a child, often settling personal scores with business decisions.
unabst 16 hours ago 0 replies      
Two words that weren't repeated enough in this essay, especially one with a focus on trust.

Empathy.

Great leaders have empathy towards their customers, their employees, and above all, towards their cause, which is what is contagious.

This is an emotional connection that garners an emotional response. The person that initiates the connection is leading. The person responding is following. When this pattern repeats itself, it strengthens the form and function of the relationship.

Responsibility.

Taking responsibility is not to be confused with taking blame, because they are opposites.

Responsibility is taken before the mistake, and doesn't go away after the mistake. When the mistake happens, you apologize, then fix it, because you're still responsible. Blame is only taken after the mistake. It ends with an apology or a legal defense, possibly an acceptance of punishment, and afterwards we forget it all happened. One is progressive. The other is regressive.

There was also one word that wasn't even mentioned.

Promise.

A leader makes promises, and delivers on them, until everyone succeeds. They make promises to clients, to customers, to partners, to investors, and to employees.

You cannot be a liar and keep promises. You cannot be incompetent and keep promises. You cannot make excuses and keep promises. You have to be aware, proactive, and capable to even know which promises to make.

And with every promise you keep, you've just given everyone another excuse to trust you, depend on you, and follow you.

In a nutshell, if they can promise to be responsible for delivering on a cause they deeply believe in, they're a leader.

Bahamut 1 day ago 0 replies      
For leadership principles & qualities, I am biased towards the list that the Marine Corps has put out: http://www.tcsnc.org/cms/lib010/NC01910389/Centricity/Domain... .

The Marine Corps may not be a paragon in efficiency in some ways, but I have found that these qualities hold strikingly well for good leaders in the civilian world as well.

treenyc 22 hours ago 0 replies      
Before we can measure leadership, maybe we ought to first figure out what we mean by leadership.

Often there has been a mix-up between leadership, management, and a bunch of other stuff that has nothing to do with leadership.

If people are interested in what leadership is and how it is effectively exercised, take a look at this paper: https://ssrn.com/abstract=1392406

visarga 19 hours ago 1 reply      
Good leaders optimize for more than immediate profits, don't consider themselves detached from the common population, and act in the interest of the greater good. It's a case of game theory: we need to cooperate, even at the cost of a personal loss, for the greater good; otherwise we all lose.
pjmorris 1 day ago 1 reply      
A leader is best when people barely know he exists, when his work is done, his aim fulfilled, they will say: we did it ourselves.

- Lao Tzu quote opening 'Becoming a Technical Leader' by Jerry Weinberg

Macsenour 1 day ago 1 reply      
Being a boss and being a leader are two very different things.

That may seem obvious to those that understand it; those that don't will think I'm nuts. As a scrum master I have been a leader at every company where I have worked. I have never had anyone report to me in those same companies, i.e., not a boss.

ktRolster 1 day ago 1 reply      
There's kind of a difference between a manager and a leader.

Manager - Makes sure things get done. If someone quits, finds a replacement, etc. We should all be managers of ourselves.

Leader - A person that employees are willing to follow. Makes the group into a team, working together. Actually cares about the members in his team, protects and defends them. Fights to get them raises, etc.

zzalpha 1 day ago 1 reply      
All the qualities they identify, here, are, in my mind, absolutely necessary (though not sufficient) for someone to be a good leader.

But, despite the title of the article, none of them are objectively quantifiable.

6stringmerc 1 day ago 0 replies      
Leadership can be measured by simply stripping away all external factors that could distort the ability to quantify the Individual Leadership Quotient. A few such elements would include, but are not limited to: A) Talent and Aptitude of Followers, B) Macroeconomic Conditions, C) Luck, D) The Weather... Basically, I think the notion of Leadership is very elastic and, more often than not, highly circumstantial.

What is good Leadership for a bunch of grunts storming a beach in combat isn't objectively comparable to good Leadership for a bunch of teenagers in a classroom environment. There are some "Characteristics" I think that can be described and discussed as a useful musing on the concept, but it has to be qualitative not quantitative from my perspective.

ImTalking 1 day ago 0 replies      
I think you can overcomplicate this question but a leader is someone that, over time, people follow. Why they follow is up to the individual.

Gandhi was a leader, but then conversely, Hitler was also a leader. There is no morality in leadership, but I would say a common trait would be charisma.

laurex 1 day ago 0 replies      
These all seem like good things in a leader, but it does miss a vital quality, which is being able to guide those around the leader to perform at an elevated level, usually because the leader has the ability to both convey the importance of the mission and to be a good "whisperer," i.e. someone who listens to their team, understands what makes them tick, and supports them in performing at their best.
kogus 12 hours ago 0 replies      
The proof is in the pudding. Measure leaders by how many follow them, giving greater weight to leaders who are followed by other leaders.
eruditely 1 day ago 0 replies      
You should probably follow Nassim Taleb's idea: rather than trying to measure x (leadership) itself, measure the output of leadership, f(x), i.e. the exposure and how it impacts things. Since the most significant effort has probably gone into probability theory and trying to get a measure of x, that's probably the place to look.

And you would NOT try to measure it as a point estimate; as many have reminded us, you would try to set lower and upper bounds.

mempko 1 day ago 2 replies      
A leader is not a position, but a role anyone can play at any given time.
calinet6 21 hours ago 0 replies      
Having integrity, being able to judge people, and being a smart thinker? That's how you measure the leaders of an organization which was heavily influenced by the management principles of W. Edwards Deming? Have you even read Creativity Inc? Ed Catmull himself said, "As we struggled to get Pixar off the ground, Deming's work was like a beacon that lit my way."

Get a used copy of this book and read it cover to cover: https://www.amazon.com/Leaders-Handbook-Making-Things-Gettin...

Chapter 2: The New Leadership Competencies:

- Competency 1: The Ability to Think in Terms of Systems and Knowing How to Lead Systems

- Competency 2: The Ability to Understand the Variability of Work in Planning and Problem Solving

- Competency 3. Understanding How We Learn, Develop, and Improve; Leading True Learning and Improvement

- Competency 4. Understanding People and Why They Behave as They Do

Those sound a tad more concrete and believable, don't they? That's an understanding of reality that might help you be a better leader of an organization that actually works. Dismiss the surface-level personality games and get yourself into the scientific reality of organizations, and you have a hope of leading one well. There are no missing parts; the whole system is important. That's the leadership secret.

My bet is that the leaders described in this post are better described by the above characteristics, and they more reliably predict leadership success, than any of their individual traits or abilities. Certainly Ed Catmull, who was himself a big believer in Deming's way of managing companies, fits that model, and Steve Jobs was heavily influenced by Deming and Juran in creating a system able to produce extraordinary quality. In fact, the whole Pixar team this post is about was more heavily influenced by Deming's concepts than any trite personality fluke, yet that influence is entirely ignored here.

This is forgivable: it's attribution bias. We instinctually want to attribute to the greatness of the individual that which was actually more nuanced, the outside factor in this case being a great body of knowledge about management and leadership that led them to be extraordinary.

Now you know. Read Peter Scholtes' Leader's Handbook, read Creativity Inc., and keep thinking about it. There's way more to it than just having integrity, being able to judge people, and being a smart thinker. If excelling at those were all it took, we'd be up to our necks in extraordinary leaders. Must be something else, then.

treenyc 22 hours ago 0 replies      
Hmm, do we distinguish leadership from management?
arca_vorago 1 day ago 0 replies      
Leadership is intangible, hard to measure, and difficult to describe. Its quality would seem to stem from many factors. But certainly they must include a measure of inherent ability to control and direct, self-confidence based on expert knowledge, initiative, loyalty, pride and sense of responsibility. Inherent ability cannot be instilled, but that which is latent or dormant can be developed. Other ingredients can be acquired. They are not easily learned. But leaders can be and are made. General C. B. Cates, 19th Commandant of the Marine Corps

Ingrained in my brain from my Marine Corps days is the acronym JJDIDTIEBUCKLE as the list of leadership traits, and it has served me well since, although in the civilian world I have had to lower my expectations of those around me having even a fraction of such traits.

Relevant reading for those curious about how the Corps approaches leadership: http://www.tecom.marines.mil/Portals/120/Docs/Student%20Mate...

alfonsodev 1 day ago 0 replies      
Two things:

By the profesional/personal growth of each team member and by the harmony of the group.

ThomPete 1 day ago 0 replies      
You don't. You experience it.
benkitzelman 1 day ago 0 replies      
Look behind them and see who is following (following.... not just obeying)
z3t4 1 day ago 0 replies      
By how many people literally follow him/her.
losteverything 1 day ago 3 replies      
Getting people to do things they don't want to do.

I believe from Jack Welch

ajmarsh 1 day ago 0 replies      
By the output of the employees that are led/managed?
perseusprime11 11 hours ago 0 replies      
Here's my short list:

1. Listen to your people and look after them.
2. Delegate work because your people can do it better than you.
3. Make sure your team is working on the right things.

imh 1 day ago 5 replies      
I'm sad not to see an emphasis on giving a shit about the lives of those people you're leading. Personal development, career development, family, fun, etc. These are all hugely important to people outside of whatever widgets they are contributing to. A good leader should care about helping the people they lead achieve their goals, and not just in the sense of finding people who are willing to pretend their goals align with the widgets.
rebootthesystem 19 hours ago 0 replies      
Well, there are others factors at play today. Here are a couple of videos that discuss the general topic:



I have seen and continue to see some of the behaviors described in these two videos and it is deeply disturbing.

Attempting to lead people with deep social challenges is an exercise in frustration and futility. Leadership, in this context, is a very different thing than in what I'll call more traditional settings. It almost has to be reduced to appeasement and coddling. Lattes and ice cream.

We have a generation of adults who behave as petulant children half their age did in prior generations. Except they are in a 25 year old body. Some of these 25 year olds today would be slapped out of the building by 25 year olds a generation or two ago. They are weak, oversensitive, self-serving, entitled, delicate and disconnected from reality.

This is how you end-up with some of the crazy stuff coming out of outfits like Facebook and Google. They are completely devoid of real world social and business skills yet interact and affect the personal and business lives of millions.

One example that comes to mind is account suspensions and cancellations without even a shadow of customer care or service offered. If you can't swipe or click a problem away, the option to actually engage with a real human being and resolve the problem simply isn't there.

How do you lead these people? Well, first they have to grow up. I suspect that will happen once they get to 35 or 40 years of age and finally understand reality. What will the consequences of such dysfunction be a few decades from now? Not sure.

sbierwagen 1 day ago 2 replies      

 It is based on observations I made when working closely with four leaders that I consider extraordinary: Ed Catmull (Pixar's founder), Steve Jobs (Pixar's CEO), John Lasseter (Pixar's Chief Creative Officer), and Bob Iger (Disney's CEO).
All four of these guys were involved in wage-fixing, which cost their companies $415 million. https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_L...

So, "extraordinary" in the sense of being extraordinarily unprofitable.

Staying with the US Digital Service mattcutts.com
325 points by Matt_Cutts  1 day ago   230 comments top 27
1888franklin 1 day ago 5 replies      
Howdy Matt - will you build a Muslim registry if Trump orders it or Congress passes it?
jconley 1 day ago 3 replies      
Just a warning for everyone from California, Colorado, Washington, Oregon, etc. They do require you to certify for the background screening that you have not done any federally illegal drugs in the last 12 months, and possibly do drug tests. Might not want to spend too much time on it if that applies to you.


Matt_Cutts 1 day ago 8 replies      
Let me know if folks have any questions, but I left Google on Dec. 31, 2016.
verst 1 day ago 3 replies      
Congrats Matt. It was a pleasure to meet you at the USDS ice cream social last fall. Today was my last day at 18F; I'm returning to the private sector, for now.

I'm proud of what we accomplished in building cloud.gov. I think 2017 will be a big year for the platform.

TheAceOfHearts 1 day ago 0 replies      
One of the best things to come out of USDS / 18F is their design guidelines [0]. It's very comfortable to read, and completely accessible! I think my only complaint is that the components are all a bit gigantic. But it's understandable, since they have to cater to so many people.

Considering government and tech got me wondering: what software do our leaders regularly use? What measures are taken to ensure it's safe? How much of this information is readily available? Hopefully this won't land me on an FBI watch-list :).

[0] https://standards.usa.gov/

lazyasciiart 1 day ago 3 replies      
https://www.usds.gov/join#who says they are only accepting US citizens. However https://www.usds.gov/join#application only asks if you are authorized to work in the US, and says 'For some positions US citizenship is required'. So are there any positions that do not require US citizenship, or should that application form just ask if you are a citizen?
Keverw 1 day ago 2 replies      
I wonder how USDS is different from 18F? They both seem to be doing the same thing; why not just merge them?
akmiller 1 day ago 2 replies      
Matt, are you not concerned that the USDS could be removed with the transition to President Trump? If I'm not mistaken it's part of the executive office, so not only could it be undone, it could be undone swiftly with the stroke of a pen. Seems like a risky move at this time!
new_hackers 1 day ago 2 replies      
What is the mood like with Trump coming into office? Is USDS going to remain alive and well?
tvanantwerp 1 day ago 0 replies      
Since the election, has there been any dropoff in people staying at or applying to join USDS? I do agree that now is probably a good time to stick with things--but I'm sure others view it differently. I'm curious if there are enough such people to see a difference in staffing.

Edit: Also, as a programmer and DC person, I love the work USDS does and I hope you guys keep working indefinitely.

brandonb 1 day ago 1 reply      
Just wanted to say: thank you for your service!
coleca 1 day ago 0 replies      
If you haven't seen Mikey Dickerson's presentation at last fall's Velocity Conference it's worth a quick watch: https://m.youtube.com/watch?v=LGSAyU2RZDo

Watching that video will make you understand why someone like Matt would feel the call to service he clearly does. USDS is a true example of everything that is right about civil service, and has nothing to do with partisan politics despite being part of the White House.

general_ai 1 day ago 1 reply      
Great to see there are actual adults ready to take a pay cut and serve the country in spite of maybe not getting the president they like. Far too many people have fallen victim to feigned outrage and pearl clutching, and excluded themselves from the conversation completely. That's not how you effect change, folks. _This_ is how you do it. You go out there and put in the hard work.
usds_app_throw 1 day ago 3 replies      
As a new grad, the USDS was my top choice but I was constantly stonewalled by recruiters saying that I wasn't experienced enough.

For the USDS engineers on the thread--does the work really require 5+ years of work experience? Is there really no place for a passionate and moderately-experienced junior engineer in the organization? I would quit my job in a heartbeat if I got an offer.

mi100hael 1 day ago 1 reply      
Anyone know who the guy in the video giving the speech with his shirt buttoned all the way up is?

Edit: wow I don't recognize Jobs with a beard.

saycheese 1 day ago 1 reply      
>> "I'm the head of the webspam team at Google."

Just a heads up that the above is on your HN profile: https://news.ycombinator.com/user?id=Matt_Cutts

therealmarv 1 day ago 1 reply      
Very inspiring to learn about the US Digital Service. I wish Germany had something similar.
saycheese 1 day ago 1 reply      
Now that you are officially no longer with Google, is there anything that you have put off talking about that you now feel is the right time to do so?
akytt 1 day ago 1 reply      
Fellow public servant and former Skyper here. I can fully see where you are coming from. Keep up the good work; a lot of folks are looking up to what the USDS is doing!
eikenberry 1 day ago 2 replies      
Why do they not have a remote work option?
edblarney 1 day ago 1 reply      
I applaud your commitment to civic duty.

I wish more of the big tech players would encourage leaves of absence to partake in these important tasks.

pryelluw 1 day ago 1 reply      
What's the interview process like?

What technologies are common?

jgalt212 14 hours ago 0 replies      
I said this in an earlier thread, but it also applies to Matt's (and other Googlers') work for the USG.

AT&T, through Bell Labs, tried to be indispensable to the US Govt throughout the Cold War so that it would allow its monopoly to continue to exist. Google is following the same game plan. The problem with this game plan is that by Google seeming too cozy with USG it makes it harder for it to operate in foreign markets. AT&T did not have this problem as foreign markets during the Cold War were much smaller and more closed.

bogomipz 1 day ago 1 reply      
Can someone explain briefly how 18F relates to USDS? I guess I thought that 18F was the group behind healthcare.gov in the US, and clearly I have that wrong. Thanks.
throwaway9865 1 day ago 4 replies      
Hi. Looking at the salary, and given that I'm in Baltimore making $135k as a Sr. UI/UXer (designing/coding sites and apps for Govt agencies), I'm not sure the salary range of up to $163k would be worth it. That's probably not for UI/UXers, or is it?

My rent here in Baltimore with utilities is $2k. If I moved to D.C., a lot of the additional salary would be absorbed by my rent doubling or tripling.

Thus I wonder why remote positions, or a mix of remote and one day a week on site, are not being offered?

codeonfire 1 day ago 1 reply      
Trump is going to shut this down, unfortunately. I don't think people have really come to terms with what is going to change. If there is anything digital that needs doing, some Republican-backed defense contractor is going to do it for 100x the price.
iblaine 1 day ago 3 replies      
> Working for the government doesn't pay as well as a big company in Silicon Valley. We don't get any free lunches. Many days are incredibly frustrating. All I can tell you is that the work is deeply important and inspiring, and you have a chance to work on things that genuinely make people's lives better.

Pretty easy thing to say when you already have millions in your bank account.

Dont Tell Your Friends Theyre Lucky nautil.us
307 points by dnetesn  1 day ago   278 comments top 38
colanderman 1 day ago 8 replies      
The article (and some comments here) seem to conflate luck and what I will call lot. "Luck" I define as random happenstance during one's life. You can manage luck. Doing so is the central theme of many board games. You can increase your luck "surface area" by taking more chances. Entire industries (e.g. insurance) exist to manage luck.

Your "lot", on the other hand, I define as what you were born with. How you were raised, where you grew up, what kind of education you got -- everything you can't control that does have a significant impact on your life's outcomes. You can work to improve your lot, or minimize its impact on your life, but it's very difficult.

Of course there's some correlation: those with a good lot often learn early how to manage luck, and those who manage luck well can negate a poor lot.

Hence I begrudge no-one with seemingly good "luck": often (possibly more than not), their fortune is simply a byproduct of how they managed their luck. Good for them!

But those born into a good lot? They're the true "lucky" ones.

ergothus 1 day ago 15 replies      
My father and I have somewhat productive political conversations: He's fiscally conservative, I tend towards the liberal side of the scale.

Drilling in to find what we really disagree about, it seems to boil down to two concepts: (1) I view success as a matter of luck that your effort can make better or worse. He views effort as the single most important deciding factor in success in life (2) I'm willing to tolerate an amount of "unfairness" in people getting help they "don't deserve", while he finds this very offensive.

I honestly feel that if he considered luck to be a larger factor and effort a lesser one, his political stances would change pretty dramatically (and the same applies to me in reverse). I wonder how much the social willingness to accept luck as a factor impacts popular political positions. (Perhaps not much, as the author in the article promotes a consumption tax, which is generally seen as more regressive.)

dv_dt 1 day ago 3 replies      
I think this touches upon one of the biggest weaknesses of the current economic system. We systematically waste the human capability of millions of people because the system essentially randomly gives much better opportunity to some over others. Meritocracy somewhat exists but mostly to the extent that people can maximize the opportunity they've drawn as their lot in life.

I like the idea of Basic Income, but it's a somewhat limited solution that caps how far down someone can fall in society. What would really supercharge a future economy is opening up avenues to truly distributing equal opportunity. Wealth inequality suppresses this strongly. When people receive a better margin of income over the bare minimum, they can allocate their own wealth according to their personal outlook in multiple ways, including starting businesses that may change the world.

kyleschiller 1 day ago 5 replies      
Debating the actual importance of luck seems a lot less important than developing the proper attitude towards luck.

Pretending luck doesn't exist can lead to arrogance and a lack of empathy for people who haven't succeeded. On the other hand, believing that luck controls everything can lead to fatalism.

It might seem best to find a happy medium, but being wishy washy about this whole thing just gives you opportunities to blame your own failure on circumstances outside your control, while continuing to take credit for success. In the general case, looking for balance between opposing ideologies makes no guarantee that you'll walk away with the best parts of both instead of the worst.

In practice, it's probably best to drop the determinism/indeterminism dichotomy completely and just focus directly on the desired end attitudes.

On a side note, the reason American society is obsessed with meritocracy has nothing to do with a belief about the nature of luck. Denying luck as the path to success is just a way to make people work harder.

downandout 1 day ago 3 replies      
It's certainly true that you need to be very lucky to become a billionaire - generating wealth at that level usually involves tremendous numbers of other people loving whatever business you have decided to create. But if you're reasonably intelligent, at least in the US, it's quite possible to become a millionaire without much luck, through decades of hard work and discipline.

Examples: software engineers at large companies that stick around for decades (usually through options), doctors (at least specialists, such as cardiologists and anesthesiologists), and lawyers that go to the best schools and are able to land jobs at top-flight firms. Even tradesmen that stick to their craft, such as master electricians or plumbers, can quite reasonably expect to achieve millionaire status over the course of their lifetime assuming that they manage their money well.

So yes, luck plays a huge role in the creation of enormous sums of wealth. But if you live in a country with abundant economic opportunity such as the US, there's no reason to be poor unless you have been extremely unlucky (health problems, accidents, etc have befallen you), you are unwilling to work, or you've made extremely poor life/financial decisions.
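The "decades of hard work and discipline" claim is easy to sanity-check with compound-growth arithmetic. A minimal sketch; the $15k/year contribution and 6% annual return are illustrative assumptions, not figures from the comment:

```python
def future_value(annual_contribution: float, rate: float, years: int) -> float:
    """Future value of a fixed contribution made at the start of each year,
    compounding annually at `rate`."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + annual_contribution) * (1 + rate)
    return balance

# Under these assumed numbers, 30 years of saving $15k/year at 6% clears $1M.
print(f"${future_value(15_000, 0.06, 30):,.0f}")
```

Changing either assumption shifts the result substantially, which is rather the commenter's point: the "without much luck" path still takes decades of steady contributions.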

phkahler 1 day ago 5 replies      
Progressive consumption tax is ridiculous. It requires your tax rate at the point of sale to depend on all your purchases up to that point in time. That's just not practical. Or it requires every purchase you make to be recorded so that you pay the taxes at tax time. Either way it requires the government to know every purchase you make, or at least the price. This is not something anyone should want.
chrishacken 1 day ago 1 reply      
Maybe I'm naive, but I don't think anyone denies the role luck plays in one's success. However, to completely discard effort and determination is selling everyone short. I'm running a successful company partially because of "luck" (I happened to start it at the perfect time) but also because I pour every ounce of money and time I have into it. My nights and weekends don't exist. Some people aren't willing to put in the time to turn luck into success.

Telling people that success is just a matter of luck will only reinforce the thoughts of unsuccessful people to believe they're "unlucky". You are able to make your own luck to an extent.

cmurf 1 day ago 0 replies      
Veil of ignorance. There's a significant part of the upper end (wealth-wise) of the population that likes our classist society just the way it is, or thinks it should be even more classist. Everything should be a rent, there should be no public lands, everything is to be exploited, and if you're on the short end of the stick it's merely unfair, not a wrong or a failure of society. Or the more extreme versions of this: higher-class folk have better money, better ideas, better genes; they make and sell better things. They are better than others. Democracy and socialism are threats to these notions.
real-v 22 hours ago 0 replies      
This reminds me a little bit about one of my favorite philosophers, Alain de Botton. Sometimes, he discusses meritocracy and meritocratic societies.

Basically, in a meritocratic society, such as the US, people tend to believe everyone's lot in life is deserved; luck is not considered a big factor. This creates a problem where the poor believe the rich made it through their hard work, while the rich believe that poor people deserve to be poor because they are lazy or stupid. People are where they are because they deserved to be there.

I used to place a high value on the concept of a meritocratic society, but experience is convincing me that the lack of compassion such societies exhibit is not worth the trade-off.

jeffdavis 1 day ago 3 replies      
Just as people trying to sell you something call it an "investment", people trying to implement government spending programs call it "spreading opportunity".

Some government programs really do spread opportunity, but that requires close examination and criticism; I don't just buy into it because a politician calls it opportunity. Is college an opportunity? It can be a huge opportunity to get ahead in life; but it can also just subsidize a partying lifestyle and a phony major for four years. It depends on the college, the student, and the structure of the opportunity.

It's hard to tell the difference between spreading opportunity and spreading results. It often requires looking at the details, measuring along the way, and it is often different for different people.

jartelt 1 day ago 0 replies      
I think a lot of people do not realize that you are lucky if you are born into a middle class or upper class family. Having parents with some savings allows you to take extra career risks because you know that you can likely get help from your parents if none of the risks pay off. It is more difficult to make the decision to work at a startup or buy a house if you are totally on your own when things go south.
luckystartup 11 hours ago 0 replies      
> Then at the end students got a bonus for their participation in the experiment, and they were told that they could donate some or all of it, any fraction of their bonus, to one of three charities, their pick, just by saying so to the experimenter. What she found was that people who had listed external causes of the good thing happening donated about 25 percent more of their bonus to charity than the people who had listed things they had done to cause the good things to happen. The control group was roughly in the middle of those two.

> There have been many experiments that have shown if you prime people to feel the emotion of gratitude, they become much more generous toward others, much more willing to pay forward to the common good.

> If you want people to think about the fact that they've been lucky, don't tell them that they've been lucky. Ask them if they can think of any examples of times when they might have been lucky along their path to the top.

That's the gist of the article. People get defensive when you say "you're lucky", because they interpret this as "you don't deserve your success". By reframing the message and asking people questions about times where they were lucky, then this can make them feel more generous.

Very practical advice for anyone who is delivering a speech at a fundraiser.

slitaz 1 day ago 1 reply      
"luck" is not a good choice as a word here.They mean something like a chaotic event that ended up being positive to them.

Also, just waiting for such a positive chaotic event to happen to you, is probably not the best strategy.

If you make good social interactions that you maintain, then those positive chaotic events are more likely to come your way.

ChuckMcM 1 day ago 0 replies      
If you get a chance to experience an "exit", where a number of people suddenly have much more wealth than others around them who are essentially doing the same things but joined the company at a different time, you will get to see all the different ways that people internalize that event (both positively and negatively).

Luck is very much a part of success, and a big part of the way the Vikings talked of sailing with successful leaders ("they have a lot of luck"). And most importantly, luck has no bearing on character. But internalizing that can be hard when someone you despise gets rich, or someone you really care about fails to get the rewards that others in the same place have.

baldfat 1 day ago 0 replies      
I am an anti-determinist, and Soren Kierkegaard (founder of existentialist thought) so inspired me that I named my son Soren. The fight between the two schools of thought is huge, bigger than Windows vs OS X.

> Jean-Paul Sartre:

"What is meant here by saying that existence precedes essence? It means that, first of all, man turns up, appears on the scene, and, only afterwards, defines himself. If man, as the existentialist conceives him, is indefinable, it is because at first he is nothing. Only afterward will he be something, and he himself will have made what he will be."

Society sees luck in terms of fairness. This article used the word fair or fairness zero times. Fairness is a HUGE issue in deterministic thought especially dealing with how we perceive others around us.

Karlozkiller 20 hours ago 1 reply      
So if I believe in luck I will be more inclined to pay high taxes? I don't think that's how it works.

I mean, of course people with money who realise that not everyone who is poor is a lazy bum will be more inclined to help a "poor person" than they would if they believed all poor people are lazy bums. But does that mean they will accept high taxation? Say I am rich and narcissistic: I believe I'm better than everyone and that my skill put me on top. I then realise that others who are skilled are poor, and I want to help them become richer. Do I believe that paying the government to use my money for welfare is the most effective way to fulfil this end? Probably not, in that case.

Furthermore, some people are lucky, some unlucky. This does not mean that no effort but mere luck goes into building an empire. If luck were the only factor then sure, this argument for taxation might hold. But there's a lot more than luck to it, and much of that is in the control of the individual.

tabeth 1 day ago 4 replies      
I'm a strong determinist. Effort, hard work, and skill are irrelevant (any relevance comes from the fact that you're already in your statistical band for expected success and are trying to maximize within that). I believe most of your success is determined before you even take one step on this planet. Step one is acknowledging the truth: your initial circumstances dictate your future. Once this is acknowledged, we as a species can begin focusing on making the initial conditions ideal for everyone.

Note: I am not saying you shouldn't work hard. I am just saying that it's not doing as much as you think. Individual examples of success (I've done decently despite two parents who didn't finish elementary school, live in inner city, etc) are not of relevance for planning the future of the human race. The world is chaotic, so there will be outliers in spite of the "determinist property" of the world.

Parents' own desperation to "set their children up" for success is anecdotal confirmation of this fact.


Some examples:

Socioeconomic status v. Education: http://www.apa.org/pi/ses/resources/publications/education.a...

Health v. Education: http://www.nber.org/digest/mar07/w12352.html

Health v. Socioeconomic Status: http://www.apa.org/pi/ses/resources/publications/work-stress...

Parent education v. child long-term success: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2853053/

Skin color v. attractiveness: http://journals.sagepub.com/doi/abs/10.1177/0095798405278341

Height v. success: http://www.independent.co.uk/life-style/health-and-families/...

Weight (at birth) v. success: http://ns.umich.edu/new/releases/5882

Attractiveness v. success: https://www.psychologytoday.com/blog/games-primates-play/201...

Gender v. success: https://www.historians.org/publications-and-directories/pers...

Eye color v. alcoholism: http://www.sciencedirect.com/science/article/pii/S0191886900...

Geography v. socioeconomic success: http://www.cid.harvard.edu/archive/andes/documents/bgpapers/...

emodendroket 1 day ago 0 replies      
This seems to take a sudden leap from a relatively uncontroversial (I'd think) proposition into a political program. I wonder about this bit:

> The price of the average American wedding in 1980 was $10,000. In 2014, the most recent figure I had, was $31,000.

According to a random inflation calculator I checked online $10k in 1980 would be worth almost $30k today. https://data.bls.gov/cgi-bin/cpicalc.pl?cost1=10000&year1=19...
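The check is a one-line CPI ratio. A sketch using approximate annual-average CPI-U values (82.4 for 1980, ~240 for 2016, assumed for illustration) rather than the calculator's exact figures:

```python
# Approximate annual-average CPI-U index values (assumptions, not exact).
CPI = {1980: 82.4, 2016: 240.0}

def inflate(amount: float, from_year: int, to_year: int) -> float:
    """Scale a dollar amount by the ratio of CPI index values."""
    return amount * CPI[to_year] / CPI[from_year]

print(round(inflate(10_000, 1980, 2016)))  # ~29k, i.e. "almost $30k today"
```

Which supports the commenter's observation: the 1980-to-2014 rise in average wedding cost is roughly what inflation alone predicts.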

charles-salvia 1 day ago 1 reply      
In the United States, at least, poverty tends to be concentrated geographically in inner-cities and rural areas instead of being evenly spread out. This would seem to indicate fairly conclusively that location and environment affect opportunity and wealth more so than an individual willingness to work hard. In fact, being born into an environment of concentrated poverty like this molds your mental state and perception of the world, to the extent that the idea of breaking out of poverty may not always even appear as a possibility, thus discouraging you from even believing that hard work might pay off.
dang 1 day ago 1 reply      
This topic always reminds me of a line of pg's from years ago: https://news.ycombinator.com/item?id=1621768.
coka 16 hours ago 0 replies      
A few of the commenters here mention meritocracy, and it seems to me that they value it, or think that it is something we should strive for. I would just like to point out that the term "meritocracy" originally carried a negative connotation, with a very elitist endgame[1].

[1] https://en.wikipedia.org/wiki/Meritocracy#Early_definitions

nisse72 1 day ago 1 reply      
Tangentially related: I find it interesting that we often call people lucky when something very bad happened to them but they somehow managed to survive the situation or land on their feet. We aren't as keen to describe people as lucky who avoided danger entirely.

Lone survivor in a plane crash? Lucky. Took a cruise instead? Meh.

Personally I think it's preferable to not be in the crash, than to have survived it.

jrs235 1 day ago 0 replies      
I believe https://news.ycombinator.com/item?id=13437977 ties in with this in that many "lucky" people prepared so that when luck struck things were aligned to take off.
kartan 1 day ago 0 replies      
> The whole process of constructing life narratives is biased in ways that almost guarantee that people won't recognize the role of chance events adequately.

This is also a cultural thing. Here in Sweden it is easier for people to play down their achievements, so as not to look like they are bragging, and to accept that chance is part of life. I do that myself. And I feel better and less stressed recognizing that luck is part of why I have what I have.

So I work not too hard (that can be bad for my health and a bad long-term investment) and not too little (work is needed to achieve anything, and you have to do your part so as not to let others down). You work lagom.

thret 16 hours ago 0 replies      
It's hard to detect good luck - it looks so much like something you've earned. - Frank A. Clark
aresant 1 day ago 1 reply      
Ben Franklin has a great line on this topic - "Diligence is the mother of good luck."

The author illustrates this major point with an example of the "TOP" cellist in the world:

"One [cellist] earns eight or nine figures a year while the cellist who is almost as good is teaching music lessons to third graders in New Jersey somewhere. . . The person who is eventually successful got there by defeating thousands, maybe tens of thousands, of rivals in competitions that started at an early age. . . [but] the luckiest one . . [is] that person who is going to win the contest most of the time."

E.g., you need to put in the hours of preparation and subject yourself to competition of the highest order to even have a chance at being the "luckiest" in your field.

jrs235 1 day ago 0 replies      
"It takes 10 years to achieve overnight success."


djyaz1200 1 day ago 1 reply      
There is a pretty good book that addresses some of the business aspects of this: "Competing Against Luck" by David Duncan and Clayton Christensen (the same guy who wrote "The Innovator's Dilemma"). I'm not done with it yet, but so far it goes into some interesting detail about how to reframe everything people pay for as "jobs", and argues that building a successful business is about understanding the job to be done and mastering it.
Mendenhall 14 hours ago 0 replies      
Don't tell your friends there is no such thing as luck. There are only factors that are too numerous for you to account for them all.
jagtodeath 1 day ago 3 replies      
Not super related but I cant resist. The guy in the article looks almost EXACTLY like Steve Jobs.
MikeTLive 1 day ago 0 replies      
The first thing that impacts your future success is the luck of the conditions of your birth. You have no control over this. Hard work MAY make up for it; however, having a "better" birth condition plus this same hard work does not negate the value of that starting position.

This is lost on many successful people who wrongly attribute the entirety of their success to their own efforts and presume that anyone who is not successful has simply not worked hard.

swolchok 1 day ago 0 replies      
Related reading: Fooled By Randomness, by Nassim Nicholas Taleb.
minikites 1 day ago 3 replies      
I think a lot of people are emotionally unable to deal with a world that is as dramatically unfair as ours is, so they fall back to the childish notion that people who have fallen on hard times deserve it and successful people controlled their own destiny to get there, because the alternative is too uncomfortable to think about.
pier25 1 day ago 1 reply      
This reminded me of the film "Match Point" directed by Woody Allen. IMO his best film.
chrismealy 1 day ago 1 reply      
Frank is a terrific writer and his books are excellent (rare for an economist).
rbanffy 1 day ago 0 replies      
Why would I? They live in this timeline.
jgalt212 15 hours ago 0 replies      
Here's my formula (and I've been around the block a few times).

decent level of success = 0 units of bad luck + 3 units of skill + 2 units of hard work

yuge level of success = 5 units of good luck + 3 units of skill + 2 units of hard work

andrewclunn 1 day ago 2 replies      
A lot of this "luck" can be traced back very easily to causes like "had two parents who gave a damn" or "had enough to eat growing up." The people pushing this narrative that you're not really responsible for your failure/success want it both ways. They want to make you admit that you benefit from living in a peaceful, stable society with infrastructure, while also not wanting to hold parents accountable for having too many kids too early, or admit the impact that divorce has on young children. It always comes down to pushing some narrative that is meant to justify further state intrusion into our lives and the dismantling of the family unit, all with pseudo-scientific (see "the gray sciences") justifications and emotional appeals. Spare me the bullshit, I ain't buying it.


Looking for another example of this obvious propaganda? Try the latest episode of RadioLab:


Introducing ProtonMail's Tor hidden service protonmail.com
307 points by vabmit  1 day ago   92 comments top 14
ergot 1 day ago 4 replies      
For those wondering how to create your own custom Tor onion address, look no further than: https://timtaubert.de/blog/2014/11/using-the-webcrypto-api-t...

And for those who think Protonmail are the only service with a custom address, think again, because Facebook has one too: https://facebookcorewwwi.onion/

You can find a tonne more at this list:


And staying on topic, Mailpile has their own .onion


mike-cardwell 1 day ago 4 replies      
This is not quite as good as riseup.net's onion support as it doesn't include SMTP services. See:


 mike@snake:~$ torsocks telnet wy6zk3pmcwiyhiao.onion 25
 Trying
 Connected to wy6zk3pmcwiyhiao.onion.
 Escape character is '^]'.
 220 mx1.riseup.net ESMTP (spam is not appreciated)
So if your mail service supports onion addresses, then you can just replace "@riseup.net" in a users email address with "@wy6zk3pmcwiyhiao.onion".

Alternatively, your mail service could have explicit configuration in place to identify @riseup.net addresses and route them to wy6zk3pmcwiyhiao.onion instead of the normal MX records. I do this with Exim by utilising Tor's TransPort+DNSPort functionality and then adding the following Exim router:

 riseup:
   driver = manualroute
   domains = riseup.net
   transport = remote_smtp
   route_data = ${lookup dnsdb{a=wy6zk3pmcwiyhiao.onion}}
Obviously this would be better if there was a way to dynamically advertise the onion address in the DNS instead of having to hardcode it in Exim.
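The routing trick described here reduces to a per-domain lookup with a fallback to ordinary MX resolution. A minimal Python sketch of that idea (only the riseup.net mapping comes from this comment; everything else is illustrative, and as noted a real deployment would discover onion addresses dynamically rather than hardcode them):

```python
# Map of clearnet mail domains to their advertised .onion equivalents.
# Only the riseup.net entry comes from the comment above; in practice
# this table would be populated dynamically, not hardcoded.
ONION_MX = {
    "riseup.net": "wy6zk3pmcwiyhiao.onion",
}

def route_recipient(address: str) -> str:
    """Return the hostname the SMTP transport should connect to."""
    _local, _, domain = address.rpartition("@")
    # Fall back to normal MX resolution (represented here by just
    # returning the domain) when no onion mapping is known.
    return ONION_MX.get(domain.lower(), domain)

print(route_recipient("user@riseup.net"))   # wy6zk3pmcwiyhiao.onion
print(route_recipient("user@example.com"))  # example.com
```

The Exim router above does the same thing declaratively: match the domain, then hand the transport a different hostname.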

[edit] - If they co-ordinated, Riseup and Protonmail, and potentially other similar privacy respecting mail services could send all their traffic over each other via Tor. If you work for either of these companies, please consider the possibility of looking into this sort of relationship.

tptacek 1 day ago 2 replies      
If you are so threatened that you feel the need to use a Tor hidden service to reach your email provider, you should know that email --- "encrypted or not" --- provides the worst protection of all possible encryption messaging options. Don't use email for sensitive communication, and certainly don't rely on the security features of any email provider for your own safety.
a3n 1 day ago 3 replies      
From ignorance, why would I (a non-interesting person in a nominally free country, with non-interesting interests that could nevertheless become interesting depending on political shifts and shit) want to use this hidden service, rather than plain old ProtonMail?
jron 1 day ago 1 reply      
Last I checked, ProtonMail required SMS verification for account creation.

Edit: When using Tor

_eht 1 day ago 3 replies      
Can anyone speak to their like/dislike of ProtonMail vs Fastmail. I currently use Fastmail and I'm happy, but always looking for something better.
dgiagio 1 day ago 3 replies      
Could someone expand on how an email service over Tor helps when the messages you send to others still go through the SMTP protocol (even with TLS) and are stored/relayed in/to unprotected servers?
tghw 1 day ago 2 replies      
If only ProtonMail could import old mail, I would be giving them money.
benwilber0 1 day ago 1 reply      
I always get the feeling that these kinds of services are NSA honeypots. Whether intentionally or unintentionally.
ortekk 1 day ago 1 reply      
I wish ProtonMail would offer more email aliases with its paid plans - credential reuse is what often allows snooping on someone's online identity. That would really boost its value in terms of privacy.
akerro 1 day ago 1 reply      
dogma1138 1 day ago 0 replies      
I wouldn't recommend accessing email over TOR, especially not a paid account.

Infact I would not recommend accessing any public service that requires a unique account authentication over TOR.

This at least is somewhat more useful than facebook over TOR but unless you are accessing only free throwaway accounts (and never use those to communicate with anyone you know) using this somewhat defeats the purpose of TOR.

lazyeye 1 day ago 2 replies      
Why the funny domain name? Is there any technical reason why they cant use protonmail.onion?
gjjrfcbugxbhf 20 hours ago 1 reply      
Has anyone thought of DNS for onion addresses?
The Infrastructure Behind Twitter: Scale blog.twitter.com
305 points by kungfudoi  21 hours ago   63 comments top 12
niftich 9 hours ago 0 replies      
The day before, Discord did a high-quality write-up on why they chose Cassandra [1], and now this post hits, explaining how one of the world's most popular and trafficked services has engineered its infrastructure; it's like a dream.

I'll echo the praise I wrote earlier, that insights like this aren't only some of the best content to hit HN, but become some of the most valuable resources for designers who have yet to face a scaling issue, but know they will soon.

Since you have developed custom layers on top of open-source software to fit your particular usecase and load profile, and host all this in-house, have you considered monetizing your infrastructure for outsiders who may have similar needs?

Today, one has limited, unpleasant choices: either pay out the nose for something like AWS or Google Cloud to get elastic scaling and the captive storage systems that can be made to handle these kinds of workloads and still have to write a fair bit of custom glue to get all pieces to play nice, or you can build out the servers yourself, but have to employ dedicated talent with the requisite expertise. Either way, the barriers are fairly steep; you could tap into an under-served market should you choose to sell IaaS (edit: or, more accurately, PaaS). Has this conversation come up in the past?

[1] https://news.ycombinator.com/item?id=13439725

burgreblast 16 hours ago 6 replies      
100,000s of servers for 100,000,000s of messages/day?

I understand that half the servers aren't even doing messages, but, isn't WhatsApp doing 2 orders of magnitude more messages with 3 orders of magnitude (?) fewer servers?

Is that right? I'm curious how one would justify 10,000X worse?

So for each message, 10,000X more equipment is needed?

atcole 20 hours ago 5 replies      
This is a fairly technical analysis, and the terminology used in many cases is above what I know about networking. But the one quote that will stick is this.

"There is no such a thing as a temporary change or workaround: In most cases, workarounds are tech debt."

rollulus 19 hours ago 4 replies      
It strikes me that so much of the components they use (e.g. under "Storage") are in-house built (several dbs, blob store, caches, etc). Is that because at that time equivalent solutions didn't exist? Is that because Twitter suffers from NIH?
tabeth 18 hours ago 3 replies      
I know nothing about storage, so I'm a bit confused about why Twitter needed:

1. Hadoop

2. Graph

3. Redis/Memcache

4. Blobstore

5. SQL variants

(and a few others).

I do see that the post has a short snippet briefly describing what they're storing, but I'd be curious to know why (speed, cost, latency, space tradeoffs/constraints).

Also, if any more experienced folks want to chime in: Elixir/Erlang is "built for concurrency" as they say. I'd love to hear people's opinions on their one sentence simplifications of what kind of situation Hadoop/SQL/Redis/etc should be used for (similar to how Erlang is best used for situations where concurrency and fault tolerance is desired). In particular, is there a "Code Complete" type book for storage?

ashayh 1 hour ago 0 replies      
| We have over 100 committers per month, over 500 modules, and over 1,000 roles.

| we were able to reduce our average Puppet runtimes on our Mesos clusters from well over 30 minutes to under 5 minutes.

This isn't just tech debt... it's poorly designed, poorly thought out, poorly architected, and poorly managed in the first place.

Is it because Twitter cannot find good talent because of its falling stock?

jzl 20 hours ago 0 replies      
Lots of CIO-type buzzwords and acronyms in this, including many coined by Twitter themselves. More worthy of a skim than a rigorous reading. But interesting for a bird's eye view of how complex Twitter is behind the scenes.
mnutt 20 hours ago 2 replies      
They mention that they used to base their geo routing on the location of the client's DNS server but have moved to BGP Anycast. I've heard that there can potentially be routing issues for long-running connections using anycast to end users, is anybody else doing something like this and do these issues happen in practice?
seanmccann 19 hours ago 2 replies      
> Fast forward a few years and we were running a network with POPs on five continents and data centers with hundreds of thousands of servers.

That seems high given Twitter's size and the hardware distribution pie chart they showed. Does anybody have an idea how this compares?

turbohz 5 hours ago 0 replies      
If Twitter can manage to operate at profit, then I might be interested in Twitter's infrastructure.
therealmarv 18 hours ago 1 reply      
No Ansible? In an older (2014) Ansible video they claim Twitter is using Ansible but I only see Puppet mentioned.
marknadal 17 hours ago 1 reply      
I've talked/pitched (full disclosure: got a "no") some of the bigger names behind the product at Twitter. I was a little disappointed because they seem to be proud of their at least $15M+/month server costs which is partly driving their company into the ground (the user facing product hasn't improved despite "re-engineering" of their backend, at non-substantial price differences, and lack of innovation for consumers have made them lose everyone to Snapchat or anti-censorship sites).

Of tweets alone (about 200bytes per tweet), over the last decade, they probably have about 3 petabytes. Unknown to them (because of the aforementioned pride) they have 1.5 petabytes a month of free storage/caching they aren't even touching. If they switched to a P2P model like IPFS or (full disclosure: I work here) http://gunDB.io/ , but Twitter seems determined to stay as a centralized monolith. Which is too bad, because that has now become their own death - the regime change is happening and decentralized services will win instead.

Edit: Compare against 100M+ messages (1000 bytes each) for $10 a day. 2 min screencast here: https://www.youtube.com/watch?v=x_WqBuEA7s8 . Even if you multiplied the feature set by 10K times, you would still be saving $12M+ a month. At this rate the Discord guys (pushing 120M+ messages a day) are doing way better - their post is on top of HN right now too, see my comment there as well. And they only have 4 backend engineers.

Little Snitch 3 Protect your privacy obdev.at
333 points by ergot  12 hours ago   196 comments top 35
tedmiston 11 hours ago 4 replies      
> Research Assistant

> Have you ever wondered why a process you've never heard of before suddenly wants to connect to some server on the Internet? The Research Assistant helps you to find the answer. It only takes one click on the research button to anonymously request additional information for the current connection from the Research Assistant Database.

I'm so glad they built this feature.

The hardest part about using Little Snitch is trying to figure out whether processes that look like system processes or daemons are making legitimate connections.

diggan 12 hours ago 12 replies      
Why are OSX applications in general so bad at telling website users which platforms they support? Like always, I have to keep digging around in the website, just to find out that it only runs on OSX...

Does anyone know a similar utility for Ubuntu/Linux systems? Paid or free, doesn't matter.

zitterbewegung 12 hours ago 3 replies      
This is a prime example on how to make a landing page for a product. I understand what you are selling and why I would want it. The product looks great and I think I'll try it out after work.
lazyjones 11 hours ago 5 replies      
I tried an earlier version of this and was a bit disappointed by the (apparent?) lack of information regarding these connections from applications, since there's so much going on on OS X and it's hard to tell what's legitimate and what isn't. It would be great if we could record traffic on a per-application/process basis and display it comfortably, or even have some built-in heuristics to identify common tasks like "Firefox update check" or "iCloud authentication".

It's very similar to the venerable "Spybot S&D" on Windows (the "TeaTimer" functionality, now apparently called "Live Protection": https://www.safer-networking.org).

noja 12 hours ago 4 replies      
Excellent product, but needs some kind of rule sharing feature. There are so many network requests from different components that it can be overwhelming knowing what to allow.
Hernanpm 9 hours ago 3 replies      
I noticed no one mentioned https://www.tripmode.ch/ I used to use Little Snitch before but it was too complex for what I wanted to do (allow/disallow internet access to certain apps); TripMode does the trick in the simplest way I've ever seen.
rbritton 11 hours ago 2 replies      
Not related in any way, Little Flocker[0] is a similar program but for file access. It's a little rough around the edges but has been improving steadily.

[0]: https://www.littleflocker.com

vijucat 12 hours ago 4 replies      
Please steal this idea and make a product; I'll be your first paying customer:

Data Loss Protection (DLP) for retail consumers.

DLP (see http://whatis.techtarget.com/definition/data-loss-prevention... for a definition) goes beyond what Little Snitch does and does packet inspection to ensure that credit card numbers (for example) are never sent out from your network / box. Ideally, you can add regular expressions to define other PII that shouldn't be allowed to be sent out (your name, address, etc;).

DLP products exist for corporate use, but I don't know of any lightweight + inexpensive one for personal use.

WireShark, Fiddler or Charles can incorporate this functionality, if I am not wrong. Not sure how one would MITM SSL with WireShark, though.
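For a sense of what the inspection core of such a personal DLP tool might look like: a textbook starting point is a loose regex for candidate card numbers combined with a Luhn checksum to discard random digit runs. This is a toy sketch under those assumptions, not code from any product mentioned here:

```python
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: weeds out most random digit strings."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Candidate card numbers: 13-19 digits, optionally separated by spaces/dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def find_card_numbers(payload: str) -> list:
    """Return digit strings found in `payload` that pass the Luhn check."""
    hits = []
    for m in CARD_RE.finditer(payload):
        digits = re.sub(r"[ -]", "", m.group())
        if luhn_ok(digits):
            hits.append(digits)
    return hits

# "4111 1111 1111 1111" is the classic Luhn-valid Visa test number.
print(find_card_numbers("order ref 1234, card 4111 1111 1111 1111"))
# -> ['4111111111111111']
```

A real DLP product would additionally have to terminate TLS before it could see payloads at all, which is exactly the MITM difficulty raised above.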

Khaine 49 minutes ago 0 replies      
Little Snitch is great. You need to have a strong understanding of networking and the apps that you use, to use it successfully. It is great at opening your eyes to what apps are trying to connect where, and by capturing a pcap you can investigate what they are sending.
bsmartt 11 hours ago 1 reply      
why was this posted today? I bought Little Snitch 3 in January 2013. I was thinking maybe this was a new major version but it's not.
djsumdog 9 hours ago 1 reply      
There's a great Defcon talk about someone breaking Little Snitch:


jstoja 11 hours ago 1 reply      
> A firewall protects your computer against unwanted guests from the Internet.

> But who protects your private data from being sent out?

A firewall? No kidding, a firewall is not supposed to only block incoming traffic...

problems 5 hours ago 1 reply      
Does Little Snitch catch process injections (ie: I am currently running in EvilMalware, I open up Chrome, create a new page, write my code into it and create a new thread in it), or is it vulnerable to the same problems of Windows firewall applications before LeakTest and the like. The good Windows firewalls now are able to catch this kind of thing.
mostafah 11 hours ago 1 reply      
I've been using this happily for a long time. For those taken aback by the endless prompts on the first run: that's only for the start. Select "forever" for connections you trust and you'll soon have far fewer prompts.

On a side note: the developers also have Micro Snitch, an app that warns when the camera or the microphone on your mac is in use.

mellamoyo 12 hours ago 8 replies      
Any similar software recommendations for Windows?
thehashrocket 1 hour ago 0 replies      
Little Snitch reminds me of Zone Alarm from back in the day.
koolba 12 hours ago 2 replies      
How does this work? Does it override the networking DLLs to proxy the socket creation calls?
rwinn 9 hours ago 0 replies      
First thing I install on any new system, couldn't recommend it more!

And the ability to do per-application captures and open them in wireshark is excellent for debugging.

iends 12 hours ago 13 replies      
Those of you who own Little Snitch...do you regularly block outgoing connections from applications you regularly use?
alphonsegaston 12 hours ago 0 replies      
Little Snitch is at once both great and horrifying. If you watch the day to day stuff that happens on MacOS, you'll see that Apple's reputation for security and user privacy is a pretty low bar. Aside from the constantly pinging Apple defaults, so many third party apps are just all the time phoning home to corporate servers when they're not even in use. Chrome can really just look for updates when I open it, not check in with Google about god knows what every thirty minutes.
therealmarv 12 hours ago 1 reply      
Serious question: Can I use only profiles (e.g. no connection until VPN is connected) and the rest of the time Little Snitch should behave like it's not installed? I'm not a big fan of watching every connection... have done this in the distant past with Zone Alarm and Windows and it was more bothering than anything else. I also doubt it increases my personal security a lot.... especially when I think about my normal Android phone which is sitting beside my PC.
bisby 8 hours ago 0 replies      
4-5 years ago when I last used a mac for work, there was a program that had an unlimited evaluation period and was just setup to nag on launch (like winzip). Using Little Snitch just blocked the nag (literally all the license did was remove the nag, so it didn't affect functionality). In the end, I wound up not using the program anyway - I really was just trying to evaluate it without the nag. For some reason Sublime Text comes to mind? I think I wound up just going back to vim.

Installing little snitch, I got overwhelmed by how much stuff was trying to make calls in and out. It really does serve its purpose, but you also have to have an idea of what you should be letting out, you can easily break things and if you just "allow all" it somewhat ruins the point of having it.

jedisct1 12 hours ago 5 replies      
Little Snitch is a fantastic way for people to shoot themselves in the foot.

Most people using it have no clue what they are doing, block random things, and prevent software from working as expected. Not only can this make things less secure by breaking features such as automatic updates, it also makes developers' lives miserable by having to provide support to people running their software in a half-broken environment.

andrenotgiant 12 hours ago 1 reply      
I wish something like this could run at the router level. I am certain my low-end IoT devices are sending out data I don't know about.
mkj 11 hours ago 0 replies      
Objective Development (the developers) are a nice company, also providing V-USB - a bitbanging USB implementation for AVR microcontrollers without USB support. https://www.obdev.at/products/vusb/index.html
twsted 9 hours ago 0 replies      
I think these features should be included in every OS nowadays, like we have firewalls.

Anyway, I will probably buy this app, even if I share some concern others have about its own network calls.

Sykox 11 hours ago 2 replies      
Is there one absolutely similar to windows? Closest i found was GlassWire
FullMtlAlcoholc 5 hours ago 0 replies      
If anyone is looking for a simpler application that won't inundate you with so much information, try Radio Silence
libeclipse 12 hours ago 4 replies      
Something like this would be brilliant on Android. Anyone know anything related?

It'd be great if it was for non-root too, but I'm not sure if it's possible.

benologist 8 hours ago 0 replies      
One day consumer rights protection agencies are going to scrutinize what we are doing in the background just like they're starting to do to ads.
icanhackit 12 hours ago 0 replies      
Long time LS user and love it - yes the constant notifications will tax your Qi but once you've set up the bulk of your rules it'll give you a lot of peace of mind. Also grab Lingon X if you're serious about control.
mattcoles 12 hours ago 3 replies      
Is it open source? Couldn't find anything on their site which is disappointing.
lwfitzgerald 12 hours ago 1 reply      
I'm currently using LS, but one of the problems I have is that it doesn't support wildcard domain rules. This means ephemeral hosts quickly build up a large number of rules which soon become redundant.
admax88q 10 hours ago 3 replies      
Protect your privacy by running this proprietary application!
teaearlgraycold 9 hours ago 2 replies      
This seems like a joke given that it's not open source.
Uber Hires Former Google Search Chief Amit Singhal as SVP of Engineering techcrunch.com
256 points by leothekim  9 hours ago   120 comments top 11
oculusthrift 8 hours ago 12 replies      
I'm really confused on what to think about Uber. My personal thinking/logic is really bearish on them, similar to the post on the front page yesterday [1]. However, I keep seeing extremely smart/accomplished people joining it which makes me second guess my intuition.

[1] https://news.ycombinator.com/item?id=13437414

tyingq 8 hours ago 8 replies      
There's a number of things that have happened in the organic search area of Google that seem to suggest a declining interest in quality organic results.

There's Matt Cutts' long leave of absence, his departure, and the announcement that he's not really being replaced. A much lower volume of communication from Google on initiatives in the space (they used to talk endlessly about Panda, Penguin, etc.). Amit's original reason for departure was "his next journey will involve philanthropy"... that seems to have changed.

My guess is that two things are driving the declining interest...

a) The marketshare battle is done. Google won. No competition.

b) Their various initiatives to push organic results down the fold (more ads, knowledge graph, various widgets, and so forth) has made the quality of the organic results not as important. Good enough is the target.

ChuckMcM 8 hours ago 1 reply      
This statement -- "Those computer science challenges for a computer science geek are just intriguing... you give a geek a puzzle, they can't drop it; they need to solve the puzzle. That's how it felt to me."

When I've been asked what keeps me going this is it, I really like interesting puzzles and I'm sitting there stuck trying to solve it.

It also says a bit about what Uber thinks their big problems are (or where their value add will be). I was expecting them to go with someone more operations focused like Urs Hoezle.

r_sreeram 8 hours ago 1 reply      
Amit joining Uber after a year's break coincides with the common "1 year no-solicitation" clause in employment contracts. I wonder if we are about to see some top people in Google get poached. Not that there's anything wrong with that.
sAbakumoff 9 minutes ago 0 replies      
Btw - What are the responsibilities of SVP of engineering?
inverse_pi 5 hours ago 0 replies      
Kevin Thompson, another VP from Google also joined Uber very recently: https://www.linkedin.com/in/kevinthompsontech
cornchips 57 minutes ago 0 replies      
"The destiny of search is to become that 'Star Trek' computer and that's what we are building..." -Amit Singhal


1qaz2wsx 6 hours ago 1 reply      
Have people considered that this may eventually lead to Google acquiring Uber? There is the "advising CEO Travis Kalanick" bit in there.
carussell 5 hours ago 3 replies      
I'd like to see Uber get into mapping. Besides Uber's core business that everyone focuses on, they've got a self-driving cars program that's halfway off the ground and they do food delivery through UberEATS. In either case, they've got a vested interest in making sure high-quality mapping data is available: higher quality than what Google provides.

Given their deals with tons of local businesses through UberEATS, they've got operating hours and location data that's fresher than what anyone else can provide on the scale that they're operating on. Would be nice to see them improving the OSM dataset and partner with e.g. Maps.me.

oh_sigh 6 hours ago 1 reply      
Why would a person worth billions work for someone else?
killbrad 3 hours ago 0 replies      
Uber is a middle man that takes money from existing and potential cab drivers' pockets, puts it into their own, and artificially reduces consumer costs.

Cab companies aren't innocent bystanders, but the drivers generally are. But All Hail Uber anyways, I guess.

Curl hearts Mozilla haxx.se
264 points by olsgaard  2 days ago   45 comments top 13
bch 2 days ago 1 reply      
> Also, when talking and comparing brands and their recognition and importance in a global sense, curl is of course nothing next to Mozilla.

You're too humble, Daniel. cURL might not be on the lips of of the general public as much as Mozilla/Firefox is, but curl is an important piece of code touching the lives of probably everybody, whether they know it or not.

hoodoof 2 days ago 5 replies      
This is such a non issue that it didn't even warrant the blog post.
stevoski 1 day ago 3 replies      
For prior art of using :// in branding, here's the IT consulting company I used to run in the early 2000's: https://www.sunesis.com.au/

They haven't changed their website since I sold out in 2003, quite an astonishing thing for a website.

You can see, though, that the graphic designer got a bit confused and reversed the order to //:. He insisted on leaving it that way because design reasons.

ma2rten 1 day ago 0 replies      
> I'm Daniel Stenberg, lead developer of curl and employed by Mozilla.
cooper12 2 days ago 1 reply      
Nice to see them making it clear that there's no conflict at all. Is it just me or does the "L" in curl's logo look too much like a "1"? I think a font with an "L" that's curved like an "S" would be more distinct and would match the monospaced look.
wodenokoto 1 day ago 2 replies      
Wow, people are really hating on this in the comments. Makes me wonder, are there any examples of a new brand identity that was generally well received at first?

The big ones I remember are Uber, Instagram, Google and Yahoo, and I don't remember those being followed by any nice words.

Looking back I consider both Google and Yahoo to have been an improvement. I'm on the fence about Instagram. Not sure if I like the new one, but also not sure if sticking with the old one would make the service look dated. I still think Uber is really bad.

jschulenklopper 1 day ago 0 replies      
TIL that curl _had_ a logo. I've been using it almost daily for quite a while... but apparently never visited the project website after the new curl logo / wordmark.

BTW, Mozilla's logo is a bit smarter, by using the colon instead of an "i" and the slashes instead of two "l"s. In hindsight, they picked the right name in 1998 for this logo.

jve 1 day ago 1 reply      
I've read it as "hurts" and reading the article couldn't understand whether it was sarcasm or what. Then I thought the HN headline was wrong. Then I re-read it as hearts :)
chaosfox 1 day ago 0 replies      
curlill hearts moz
GogoAkira 1 day ago 0 replies      
they're just trying what other ideas such as nations are already doing, it's called nationalism, well this is browseronism, they are asking us to love the browser, it's gonna be a little bit harder since we weren't born in Mozillandia, that would help, then you could tell people about their ancestors etc.
blablabla123 1 day ago 0 replies      
This is so romantic ;)
kentor 2 days ago 0 replies      
At least curl is not using symbols for leetspeak.
hartator 2 days ago 0 replies      
Until in 5 years, Mozilla sues Curl.
Microsoft Azure in Plain English expeditedssl.com
263 points by handpickednames  17 hours ago   46 comments top 15
Swinx43 15 hours ago 2 replies      
This and the AWS in Plain English are both awesome. Is there an equivalent for Google Cloud Platform?
chucknelson 15 hours ago 0 replies      
This is cool - one item I think is wrong/misunderstood is Big Data > Data Lake Store.

It has nothing to do with ETL, it's basically just "HDFS in the cloud" [1] and a successor to using blob storage/regular old storage accounts for distributed/Hadoop-ish workloads.

[1] https://azure.microsoft.com/en-us/services/data-lake-store/

sumitgt 8 hours ago 1 reply      
I don't think Service Fabric is like AWS Lambda. Azure Functions is AWS Lambda.
jsingleton 12 hours ago 0 replies      
Re-post from the AWS thread (https://news.ycombinator.com/item?id=13442022).

That's a good high-level list, although the comparisons don't always match up. For example, I'd say Traffic Manager is more like Route 53 than ELB (which only works within a region).

If you're after something a bit more in-depth (but covering less services) then I wrote a three part series last year. It may be a little out-of-date, but most of it still applies. Azure now supports MySQL, for example.

1: https://unop.uk/on-aws-vs-azure-vendor-lock-in-and-pricing-c...

2: https://unop.uk/on-aws-vs-azure-vendor-lock-in-and-pricing-c...

3: https://unop.uk/on-aws-vs-azure-vendor-lock-in-and-pricing-c...

Edit: Should that "puts da" be on that page?

davidmichael 11 hours ago 0 replies      
Microsoft themselves publish a comparison document of services with AWS: https://docs.microsoft.com/en-us/azure/guidance/guidance-azu...
yread 16 hours ago 0 replies      
It seems that Azure naming is a lot better than Amazon, perhaps so much so that this guide is not even needed
expertentipp 15 hours ago 5 replies      
If only it was easy to start playing around with Azure. Just to activate the account, they require a proper, bank-issued credit or debit card. They explicitly refuse to accept prepaid cards even though they are VISA/MasterCard (BTW the same problem with Google Compute)... or am I doing something wrong?
klausjensen 16 hours ago 0 replies      
This. This is absolutely brilliant. I have worked with Azure for years, and mostly love it - but I learned about a few services, that I never knew what were.

Great work, ExpeditedSSL

youdontknowtho 14 hours ago 2 replies      
"Cloud services" should not be named "Azure IaaS" because Azure IaaS is named Azure IaaS.
viach 13 hours ago 0 replies      
I love how the slogan on the main page starts with "Bam! ..." It grabs your attention and you actually read further. Nice small trick.
andysinclair 15 hours ago 1 reply      
Very good overview. One point that I disagree with, Cloud Services: "Run stuff but worry a fair amount about configuration and patching." We run a bunch of cloud services and MS are responsible for patching; I would describe it more as PaaS than IaaS.

We built this in our product to help visualise how the services fit together:https://my.sharpcloud.com/html/#/story/f7522de0-98ff-4d02-8e...

m0d0nne11 12 hours ago 0 replies      
Very useful, as is the one for AWS. Yay! though if these things are being touted as "plain English" they should probably steer rigorously clear of smart-ass insider references (no matter how full of cheer the writer may be feeling at the moment) because that's probably how these titles and terms came to be so opaque in the first place. But, again: yay!
kevingibbon 5 hours ago 0 replies      
Azure in REAL plain English: Microsoft AWS
k__ 15 hours ago 2 replies      
so service fabric is API Gateway and Lambda in one product?

Sounds good and removes a bunch of complexity, I guess.

itaysk 10 hours ago 3 replies      
There are so many fundamental mistakes here that I don't even know where to start... Nice idea though. (I am a cloud solution architect with Microsoft)
Lavabit Reloaded lavabit.com
290 points by ycmbntrthrwaway  5 hours ago   99 comments top 19
bigbrooklyn 4 hours ago 10 replies      
If you NEED encryption, don't use email.

From: https://blog.fastmail.com/2016/12/10/why-we-dont-offer-pgp/

What's the tradeoff?

If the server doesn't have access to the content of emails, then it reverts to a featureless blob store:

 - Search isn't possible
 - Previews can't be calculated
 - If you lose your private key, we can't recover your email
 - Spam checking on content isn't possible
 - To access mail on multiple devices, the private key needs to be shared securely between them

tastythrowaway2 0 minutes ago 0 replies      
this vs protonmail.ch?
codehusker 4 hours ago 2 replies      
Is there any person as trustworthy as Ladar Levison for a service like email or chat?

To my knowledge, he is one of the few that has gone to the mat for his users.

tinkersec 4 hours ago 1 reply      
Code for Magma Mail Server: https://github.com/lavabit/magma

Code for DIME (Dark Internet Mail Environment): https://github.com/lavabit/libdime

betolink 32 minutes ago 0 replies      
I consider this article relevant to this discussion: "Hackers can't solve surveillance" http://www.dmytri.info/hackers-cant-solve-surveillance/
jimnotgym 3 hours ago 0 replies      
Whatever did or didn't happen in the past, I for one am pleased to see another organisation attempting to make email more secure. Especially when governments have gone surveillance crazy. Good luck, Lavabit.
akerl_ 4 hours ago 1 reply      
Trustful seems like a strange way to refer to the insecure mode. It is indeed full of trust, but not in the way a normal read would suggest: it requires full trust in Lavabit's hosting provider and administrator.

If you're going to operate in "trustful" mode, Lavabit isn't offering any real security wins over any other mail host.

MaymayMaster 4 hours ago 2 replies      
>Lavabit believes in privacy and will always ensure your digital freedom.

>Asks for your credit card information on the same page.

Wew, at least let us use buttcoin, Levison.

macmac 2 hours ago 1 reply      
Why would they ask for name, address etc?
MichaelGG 2 hours ago 0 replies      
Last I looked, DIME was just org level trust. That is, your domain determines what level of verification you get as far as knowing you have the right key for the recipient.

So if you used, say Gmail and they did DIME, you'd still be trusting them totally. Am I misunderstanding?

And still no admitting he was selling a fundamentally critically flawed service in the first place. If that's not even being mentioned, it really removes confidence from their new service.

As far as hardware HSM, that's cool. I very much enjoyed reading about how an HSM, the Luna CA3, was cracked:


mike-cardwell 3 hours ago 1 reply      
So they're using a HSM to protect the SSL key this time. Makes me wonder how many HSMs out there are already backdoored.
tptacek 4 hours ago 2 replies      
In August 2013, I was forced to make a difficult decision: violate the rights of the American people and my global customers or shut down. I chose Freedom.

Shouldn't that "or" be an "and"?

smoyer 4 hours ago 1 reply      
How do we know who's controlling the Lavabit domain?
advisedwang 4 hours ago 1 reply      
The explain document doesn't describe how key distribution works. How do I get a public key for somebody that I want to email, and how can I know that I am getting the right key?

This is the hard part of a modern cryptosystem and the usual source of weakness.

zymhan 4 hours ago 4 replies      
Any reason I shouldn't sign up right now?

edit: Signed up. Half off for life is a sweet deal.

truebosko 1 hour ago 0 replies      
Is this the right space to ask for opinions about Fastmail and its privacy? I just switched on trial after being on Gmail. I'm happy but I switched primarily to get part of my life away from Google.
Arallu 3 hours ago 1 reply      
What's the difference between Standard and Premier?
DKnoll 3 hours ago 0 replies      
I can finally get my old mail back. :)
satysin 4 hours ago 0 replies      
No trial is a shame.
Security researchers call for Guardian to retract false WhatsApp backdoor story techcrunch.com
244 points by sidcool  13 hours ago   168 comments top 20
msravi 10 hours ago 5 replies      
> The design decision referenced in the Guardian story prevents millions of messages from being lost

So it's a classic tradeoff - convenience vs. security, and the Guardian story correctly reported it.

> and WhatsApp offers people security notifications to alert them to potential security risks.

which is not enabled by default.

Sounds to me like apart from the hyperbolic use of the word "backdoor" (which apparently has now been removed), The Guardian is in the clear here.

mbgaxyz 9 hours ago 2 replies      
Bruce Schneier:

"How serious this is depends on your threat model. If you are worried about the US government -- or any other government that can pressure Facebook -- snooping on your messages, then this is a small vulnerability. If not, then it's nothing to worry about."


jakobegger 7 hours ago 1 reply      
When Apple said that iMessage uses end-to-end encryption, everyone started complaining that it's not real end-to-end since we have to trust Apple for key exchange.

Now we have the same thing with WhatsApp: we have to trust them with the key exchange. It's marginally better, since they have an optional way to enable notifications after a key has changed.

I applaud the Guardian for running with the story. Whether to call it a back door / flaw / trade-off is just quarreling over semantics, when the important part is that you need to trust a central service.

If you want to be sure that no one can intercept your messages, use PGP or S/MIME (preferably encrypting your message on an air-gapped computer).

Saying that we shouldn't worry about state-level actors is a bit naive after PRISM was revealed.

If you are doing something that someone in power might dislike, you should not rely on WhatsApp.

tptacek 9 hours ago 6 replies      
A reminder, for people scratching their heads over how any legitimate-seeming criticism of the trade-off WhatsApp took could generate this much ire from this many experts:

WhatsApp has something like a billion users. Virtually none of them asked for end-to-end encryption. They don't know that they want it, but they got it anyways when Whisper worked with WhatsApp to add Signal Protocol to WhatsApp.

But just because they don't want it doesn't mean they don't need it. Many WhatsApp users badly need messaging security that works better than their alternatives. They don't know it, but with the flip of a switch, they got that security. More than anything else, this is why Dan Boneh and the RealWorldCrypto steering committee awarded Moxie Marlinspike and Trevor Perrin the Levchin crypto prize this year.

Comes now The Guardian with a story saying "WHATSAPP BACKDOORED, BAD, SWITCH". None of WhatsApp's users are close to being able to evaluate what that means. But they know what "bad" and "switch" means. And in the global-scale game of telephone we're all playing now, that's what they're doing. You don't have to wonder: Zeynep will tell you it's happening.

Nerds on Twitter are puzzled. Isn't this a good thing? Signal is even stricter about security than WhatsApp. Wouldn't it be better for all these people to be on Signal? WhatsApp users will in fact probably try installing Signal. They'll even use it for a couple minutes. But their peers aren't on Signal, and they're switching immediately to messengers where they can find their friends. Not WhatsApp, though: The Guardian (or their shrill uncle on Facebook) told them not to. Nope, they're switching to SMS.

State security services could not be happier about this. You can't buy this kind of PR for money; you have to spend security researcher vanity to get it. Zeynep will tell you about this too: there are state telecom and security apparatuses right now signal-boosting The Guardian's irresponsible report. And there are activists circulating warnings to switch from WhatsApp to other messengers. It's a disaster: the lie is outrunning us.

The Guardian must retract this story, clearly and loudly. There's a way to report on the tradeoffs WhatsApp made, but this wasn't it; this was "VACCINES MAY CAUSE AUTISM". Don't take my word for it: look at who signed the letter, including Matthew Green, Bruce Schneier, Matt Blaze, Steve Checkoway, Chris Palmer, Dave Adrian, Bart Preneel, Jonathan Zdziarski, Steve Bellovin, and Emin Gün Sirer.

idlewords 10 hours ago 2 replies      
Note that the Guardian has published multiple stories about this fake issue, and seems to be doubling down on its coverage: https://www.theguardian.com/technology/whatsapp

The list of names at the end of Zeynep's article is pretty much a who's who of people you don't want to be publicly called wrong by when reporting on security.

roddux 8 hours ago 4 replies      
I'm not much for conspiracy theories, but it's interesting to note that The Guardian actually recommends that people concerned about surveillance stop using WhatsApp, without offering any alternatives.

>If you use WhatsApp as a way to avoid government surveillance due to its end-to-end encryption service, you should stop using it immediately.

In the wake of the UK snoopers charter having sailed through parliament, this seems odd. Occam's razor tells us that it's coincidence and bad reporting, but still.

michel-slm 8 hours ago 2 replies      
As a Guardian supporting member, I'm forwarding this directly to their editorial complaints. There are other publications that deserve my money if the Guardian refuses to stop being sensationalist.
jknz 9 hours ago 3 replies      
The logic behind this seems off?

Signal and whastapp have different behaviors regarding this. (Signal does not have the issue of re-sending messages as was previously reported here).

This letter signed by a lot of very serious security cryptographers means that there is a consensus among the community about the "best" behavior in terms of security, trade-offs, etc.

If there is indeed a consensus about what the "best" behavior is, then both Whatsapp and Signal should adopt this "best" behavior.

However, WhatsApp and Signal have not adopted the same behavior. So the consensus does not seem to be there; otherwise both WhatsApp and Signal would have adopted the "best" consensual behavior.

So by first order logic... There is no consensus on this?

_Codemonkeyism 5 hours ago 0 replies      
Most comments here are like arguing that HTTPS certificate problems/MITM leakage shouldn't be reported because it's better to have HTTPS than not.

I wonder what people here would say if browsers would act with certificates the way WhatsApp handles key renewal.

We even discuss certificate pinning etc. in the web space.

tgsovlerkhgsel 5 hours ago 0 replies      
I'm not sure if I understood the issue fully: Assuming both parties have the "show security warnings" setting enabled and take it seriously, but ignore the lack of "message delivered" checkmarks, can the attacker snoop on one message or multiple ones?

As soon as the message is delivered it cannot be resent anymore, but could the attacker refuse to provide the delivery confirmations, then perform the attack (getting all messages that weren't yet marked as delivered, potentially over a large timeframe, while also showing the warning)? If so, I'd say it is a thing to worry about.

A smart attacker could wait until one party is switching phones, so that the warning is not considered suspicious, and since they could swap the new-correct key in immediately afterwards, users would be likely to dismiss the missing checkmarks and double key-change notification (at least before this news was published).

Also, for phone calls, WhatsApp only shows the warning after the call.

I don't believe these are backdoors, but I'm surprised WhatsApp isn't taking it more seriously and trying to address it.

tlogan 8 hours ago 1 reply      
First Question:

Let's suppose I'm a human rights activist in Egypt. Say I have enabled security notifications in WhatsApp, and the other person's phone is captured by the authorities, but they cannot unlock it, so they make it appear that the user lost their phone. Now, if I send a message to that user, is it received by the other party with a warning that it is not encrypted, or is it not delivered at all?

Second Question:

Is there any guarantee that server cannot change settings on the client?

feral 9 hours ago 1 reply      
There's a split in the HN community on this issue.

The split seems to be:

- Some people are OK with compromise for usability

- Others think being uncompromising is the only way to eventually achieve security. (This group is also, rightly imo, suspicious of WhatsApp/Facebook or any centralized product)

I do understand the latter mindset. Often the only way to get high security is dogged attention to detail, letting nothing slide. Attackers love to promote products which provide the illusion of security, but contain flaws or backdoors; and often the illusion of security is worse than nothing.

But I'm with the group that favors usability compromise here. Open source projects have successfully built high security products, but rarely gotten mass consumer adoption, precisely because of an unwillingness to make concessions to usability.

Without usability concessions, we end up with 30 character random login passwords - written on stickies on the terminal.

Even if you don't agree with the particular compromises in this case, please engage with those who do. There's no reason to think they are shills trying to undermine security. Favoring usability here is at least reasonable, with the same shared end goal of increased security for end users - this should be acknowledged, especially given the mass adoption success of WhatsApp/OWS.

Dan_JiuJitsu 8 hours ago 1 reply      
The vulnerability in WhatsApp was correctly described by the Guardian. Signal is more secure and does not have this vulnerability. How, exactly is suggesting users migrate to a more secure messaging platform misleading in any way?
WhitneyLand 8 hours ago 1 reply      
The Guardian made a mistake. They mischaracterized the issue. Now they're trying to correct it and offering a forum for rebuttal. The comparison with vaccines is actually offensive to me as someone negatively affected by that controversy.

In general I want potential issues like this to be noted (when properly defined and characterized) and debated.

These experts are arguing that WhatsApp makes the best possible trade offs given their user base. I don't agree and think it's worthy of discussion. The tradeoff they refer to is really a UX/product design decision.

electic 7 hours ago 0 replies      
back door



noun: backdoor

1. the door or entrance at the back of a building.

2. a feature or defect of a computer system that allows surreptitious unauthorized access to data.

Seems like a backdoor to me. The reality is that the way it is implemented allows a foreign government to see messages it is not supposed to see... and that is a defect. I thank the Guardian for taking the lead on this and bringing this issue to light.

_Codemonkeyism 5 hours ago 1 reply      
Reading this thread I want to be the Guardian; half of the comments assume >1 billion people read the Guardian and act on its articles.
bostik 7 hours ago 0 replies      
I posted this in the other[tm] thread earlier today: https://news.ycombinator.com/item?id=13442653

Basically, everyone in this thread who is NOT a signatory to the open letter could spend their time a lot worse than by listening to the segment with Alec Muffett.

I may disagree with some of his opinions on the desired UX, but the technical details and threat model considerations are very thorough and thought out.

robrenaud 9 hours ago 2 replies      
How do you know what a closed source app is doing? How do you know that they won't just go and change the code to send plain text messages to a log somewhere?
stefantalpalaru 9 hours ago 2 replies      
From the open letter on http://technosociology.org/?page_id=1687 :

> The behavior described in your article is not a backdoor in WhatsApp. This is the overwhelming consensus of the cryptography and security community. It is also the collective opinion of the cryptography professionals whose names appear below. The behavior you highlight is a measured tradeoff that poses a remote threat in return for real benefits that help keep users secure, as we will discuss in a moment.

What real benefits are gained from making it easy for ISP-level attackers to mount man-in-the-middle attacks? Security from your spouse snooping on your phone? What's the threat model here and why are these experts so adamant in minimizing the security risks?

Moxie went as far as to ignore the opt-in aspect of the (very benign looking) key change notification, but he's on the payroll. What's the motivation of the other experts in this sudden "overwhelming consensus"?

kahrkunne 9 hours ago 1 reply      
The Guardian is a fake news website. Of course they won't pull it.
How Do I Declare a Function Pointer in C? fuckingfunctionpointers.com
273 points by jerryr  1 day ago   103 comments top 14
petters 1 day ago 4 replies      
Just use the typedef. Even if you personally find the other variants readable, chances are that your peer reading your code doesn't.
userbinator 14 hours ago 4 replies      
The easiest and best way to learn the syntax is to not memorise specific cases but the grammar itself, which IMHO is no more difficult than the existing concept of operator precedence. Everyone using C should hopefully already know that multiplication has higher precedence than addition, so likewise function call (and array subscripting) has higher precedence than pointer dereference. Thus this table should make it clear that combining the two operators creates pointer-to-function:

  T x;         T
  T *y;        pointer to T
  T f();       function returning T
  T (*g)();    pointer to function returning T
and the alternative, T *h(); , is parsed as T *(h()); and thus becomes "function returning pointer to T".

The apparent struggle I see with this syntax has always somewhat puzzled me, because I don't see the same level of complaints about e.g. arithmetic expressions (like 6+3*4/(2+1)) which are parsed with precedence in much the same way. K&R even has a section on writing a parser that recognises this syntax, so I suspect it's really not that hard, but the perception spread by those who didn't learn the syntax but only memorised the "easy cases" is making it appear more difficult than it really is.

TheAdamist 1 day ago 1 reply      
The new C++ alternative function syntax talked about here: https://blog.petrzemek.net/2017/01/17/pros-and-cons-of-alter...

mentions replacing function declarations like

 void (*get_func_on(int i))(int); 

with

 auto get_func_on(int i) -> void (*)(int);
which looks a lot more readable to me.

dnquark 1 day ago 3 replies      
The trick to reading crazy C declarations is learning the "spiral rule": http://c-faq.com/decl/spiral.anderson.html (here are more examples, with nicer formatting: http://www.unixwiz.net/techtips/reading-cdecl.html)
cestith 1 day ago 1 reply      
For anyone unable or unwilling to access that domain name for work or filtering purposes, the linked page lists this alternative: http://goshdarnfunctionpointers.com/
int_19h 23 hours ago 3 replies      
Every time I have to deal with the declarator syntax in C or C++, I can't help but ponder what K&R were thinking when they designed this. It's not like there weren't other languages back then with a saner approach.

It looks like what they did was take the syntax from B:

 auto x[10];
and generalize it such that the type name ended up before the variable name, as in Algol. But in B this worked much better, because it didn't have array types (or pointer types, or function types) - everything was a machine word. So [] in a variable declaration was just to allocate memory to which the variable would refer; the variable itself would still be a word. When they made [] part of the type, and added pointers and function types, the result was a mess.

theophrastus 1 day ago 1 reply      
Or if one doesn't have cdecl installed there's an online version[1] which has proven as a useful check on several occasions

[1] http://cdecl.org/

bstamour 1 day ago 3 replies      
This is one of those cases where I prefer C++

 template <typename Func> using function_ptr = add_pointer_t<Func>;
and now declarations are a bit more sane:

 void foo(function_ptr<void (int)> callback);

hzhou321 1 day ago 3 replies      
I never got used to having variables sandwiched inside a type. I know I am not supposed to suggest out-of-the-box, but why can't we add a new syntax, e.g.:

 return_type Fn(parameters) var;
 typedef return_type Fn(parameters) TypeName;
where Fn is a new keyword -- or not, if the compiler understands dummy syntax -- (I would suggest λ when using Greek letters in code becomes the norm).

It simplifies the C syntax a lot IMHO.

PS: now I am out-of-the-box, maybe this is better:

 Fn{return_type, param1, param2} *var;

shmerl 19 hours ago 0 replies      
The syntax is atrocious, but there isn't much C can do about it.
porjo 23 hours ago 0 replies      
noscript shows a nasty looking XSS warning when I click any of the 'example code' links.
kruhft 1 day ago 0 replies      
One of the only reasons I had "The C Programming Language" on my desk when I was a C coder. The only thing I could never remember...
cmrdporcupine 14 hours ago 0 replies      
Needs more profanity. The whole syntax is profane.
2016 JavaScript Rising Stars js.org
251 points by gulbrandr  2 days ago   118 comments top 24
fhoffa 2 days ago 4 replies      
Since they didn't publish their data source, let me add a useful note: How to count the number of stars using GitHub Archive and BigQuery.

Naive query:

  #standardSQL
  SELECT repo.id, ANY_VALUE(repo.name) name, COUNT(*) as num_stars
  FROM `githubarchive.month.2016*`
  WHERE type = "WatchEvent"
  GROUP BY repo.id
  ORDER BY num_stars DESC
  LIMIT 1000
But let's fight "star fraud". There is an easy way to register "fake" stars - if you star and unstar a project repeatedly, each time this will register as a WatchEvent on the GitHub Archive log.

Better query, removes duplicates:

  #standardSQL
  SELECT repo_id, ANY_VALUE(name) name, COUNT(*) as num_stars
  FROM (
    SELECT repo.id repo_id, ANY_VALUE(repo.name) name, actor.id
    FROM `githubarchive.month.2016*`
    WHERE type = "WatchEvent"
    GROUP BY repo.id, actor.id
  )
  GROUP BY repo_id
  ORDER BY num_stars DESC
  LIMIT 1000
If we put all together, these are the real results:

* https://docs.google.com/spreadsheets/d/1aDlXrk3U1z5s0-1Is8KH...

Projects like 'fivethirtyeight/data' lose -864 stars (23%), going down 230 places in the ranking, while projects like 'FormidableLabs/nodejs-dashboard' lose less than 1% of their stars, going up 49 places.

When I said 'star fraud' I'm not presuming malice, but with these star rankings we do create an incentive :).

Disclaimer: I'm Felipe Hoffa, and I work for Google Cloud http://twitter.com/felipehoffa

wyuenho 2 days ago 3 replies      
I'll probably get downvoted for this, but I think counting GitHub stars probably only reflects visibility, not actual usage. To give you an example, cf-ui is counted as one of the "best" React UI components out there on bestof.js.org, but this project is not currently meant for general consumption. This fact is reflected in npm stats.

I think we as a community should be very wary of these "scorecard" websites and their methodologies. These things tend to be a self-fulfilling prophecy. I'm not sure I want to see a couple of brand-name companies start monopolizing our eyeballs and technical conversation. I'm also very wary of people making decisions based on these sites. While not everyone is completely different from others, I'd like people to make their own decisions based on their own thinking process rather than just jumping on the new and shiny from the big names all the time. Not that this appears to be an issue right now, just a few words of caution.

falloutx 2 days ago 1 reply      
Fascinating to see Vue.js and Inferno both doing so well last year. Vue is definitely a lot easier to learn than React for new developers, but I could be wrong since I haven't tried it a lot. Don't know how it scales up for large applications.

Preact is also another dark horse. This year we are going to see a few more "react-like" UI libraries.

In Node, really happy to see Feathers catching up with other more popular frameworks like Express. The first time I reached for Feathers was by literally searching "Firebase Alternative".

AVA the test runner I've really never heard of at all. Maybe I am too far behind in the "Test Runners" category. I only use Mocha.

Also, they should have added a category for graphics libraries like Three.js, Fabric.js, Paper.js etc.

dehef 2 days ago 0 replies      
It isn't a good indication of the framework's value itself. A star on GitHub is like "meh, I heard a lot about react-thing, that looks complicated but can't be that bad, I should take a look one day". It's self-persuading buzz.
insin 2 days ago 1 reply      
Where are the star counts from?

Create React App was created in 2016 and has more than 18k stars but is shown as having gained 5.6k stars in 2016 (I think it got more than that in its first week!)

silvaben 1 day ago 1 reply      
Excited to see Vue doing so well on this list. I have been using it for the last few months at my day job, and I have been pleasantly surprised by it. I have tinkered with React & Angular in the past and I can second that, compared to the other frameworks, getting started with Vue is a lot easier.

The ecosystem around it is also quite mature - VueRouter & Vuex are well-tested solutions in case you have a use for them.

Another advantage that it has is its documentation & guides. It is quite exhaustive and easy to follow. My only gripe was that it doesn't go into the details of building a complete "single-page-app" that uses the accompanying tools (vue-cli, vuerouter, vuex etc).

Based on my learnings over the past few months, I have started writing a small ebook that goes over the process of building a full-fledged app.

I have created a small subscription form - http://eepurl.com/cvUk5D. You can add your email here to get notified when I launch this book and also get access to the early release.

anm89 2 days ago 4 replies      
Really Sad to me that Ember.js gets so little love. It is hands above Angular 1 or 2 in my mind and the only JS front end framework that I feel productive in.
lacker 2 days ago 4 replies      
Hmm, Create React App is listed as "+5.6k stars", but it was launched in 2016 and has over 18,000 stars right now. Perhaps an error in the data processing?
michaelrambeau 1 day ago 0 replies      
Hello there, this is Michael Rambeau, the writer of JavaScript Rising Stars. Thank you, everyone, for your comments. It's very nice to see people talking about things related to my project. As some people mentioned, after the initial release there was an issue with the count of stars for some projects. I'm sorry about that; it has been fixed during the following releases. I will try to take into account the ideas discussed here when things calm down. Thank you a lot!
ArtDev 2 days ago 0 replies      
Who says that github "stars" are a quality indicator?

I use stars to mark projects I am interested in but haven't tried yet. I am sure many people do the same.

juice_bus 2 days ago 2 replies      
Honestly, I was surprised to see Vue at #1 and not React.
cygned 2 days ago 3 replies      
Interesting that Angular 2 doesn't seem to be very popular - yet?
novaleaf 1 day ago 0 replies      
I don't understand why Hapi doesn't get much love these days. It's a monolithic framework, which means it's extremely opinionated in how things are done, but also it means you don't need to go hunt down 3rd party packages to cover features that every server dev needs out of the box.
tabeth 2 days ago 2 replies      
Somewhat off-topic, though it may be relevant.

Is it that much more difficult/complicated to have a server side rendered app with pure JavaScript sprinkled in versus SPA vs. no JavaScript?

The situation I'm thinking about is the following:



If you have an SPA then you literally can just have separate SPAs for each sub domain served statically. You could also have everything rendered server side using whatever back-end you're using. Finally, you can have server-side rendering and add JavaScript when necessary, but this seems to add complexity as your team would now need to know whatever templating language your server-side framework uses plus the front-end framework. Am I missing anything?

minimaxir 2 days ago 2 replies      
gcp 2 days ago 2 replies      
Looks similar to http://stateofjs.com/
franciscop 2 days ago 2 replies      
I love that this made it to the report in the "React Boilerplates": https://github.com/tj/frontend-boilerplate

> A boilerplate of things that mostly shouldn't exist.

the_wheel 2 days ago 0 replies      
Meteor should be on the list of Node.js frameworks (absent a more appropriate category).
paulddraper 2 days ago 1 reply      
#4 in TOC is React Boilerplates.

Perhaps Javascript is catching up to its big brother.

k__ 2 days ago 1 reply      
The React part is especially interesting, since Inferno, Preact and React share the React API, which basically means React blew every other framework away.
Mizza 2 days ago 0 replies      
This is awesome, somebody please make one for Python!
Raphmedia 2 days ago 0 replies      
Happy to see Aurelia in that list. It's my discovery of the year. I have a lot of fun with it!
andrethegiant 2 days ago 1 reply      
Sublime Text not on the list of notable IDEs?
carsongross 2 days ago 0 replies      
On the front end, intercooler.js[1] actually gained more stars in 2016 (~3000)[2] than Mithril (which I like, this is not to take anything away from the Mithril guys!) This was in large part due to a big HN bump in November.

I know it's too contrarian and idiosyncratic to get a mention in a JS survey, but I have to get the word out somehow...

[1] - https://github.com/LeadDyno/intercooler-js

[2] - http://www.timqian.com/star-history/#LeadDyno/intercooler-js...

Who is Anna-Senpai, the Mirai Worm Author? krebsonsecurity.com
265 points by chopin  1 day ago   18 comments top 5
ploggingdev 1 day ago 4 replies      
Previous discussion : https://news.ycombinator.com/item?id=13428824

Mods, the current url points to somewhere in the middle (contains a #). Consider editing the url to point to https://krebsonsecurity.com/2017/01/who-is-anna-senpai-the-m...

The article makes for a fascinating read (could form the basis of a Social Network style movie), and also brings up the topic of IoT security. IoT devices in usage are only going to increase in number, so if the manufacturers don't get their act together, multi TBps DDoS capable botnets operated by teenagers will become the new normal.

Links worth mentioning:

AnnaSenpai 5 days ago on reddit (story adds up) : https://www.reddit.com/r/AskReddit/comments/5nqq3c/serious_p...

Chat between AnnaSenpai and a victim: https://krebsonsecurity.com/wp-content/uploads/2017/01/annas...

fennecfoxen 1 day ago 3 replies      
Because the article doesn't mention it all, and because it's interesting to ponder what fictional dystopian futures are sufficiently of interest to virus authors and the like that they use names from those works:

"In a dystopian future, the Japanese government is cracking down on any perceived immoral activity from using risqué language to distributing lewd materials in the country, to the point where all citizens are forced to wear high-tech devices called Peace Makers (PM) at all times that analyse every spoken word and hand motions for any action that could break the law. A new high school student named Tanukichi Okuma enters the country's leading elite "public morals school" to reunite with his crush and student council President, Anna Nishikinomiya.

... After being accidentally kissed by Tanukichi, she develops an obsessive love for him but due to lack of knowledge on "immoral" subjects she ends up expressing her love in extreme tendencies. These include pursuing him relentlessly and attempting to rape him, endangering Kosuri and Ayame when she sees them with Tanukichi, and becoming far more harsh and strict on her surveillance, believing that by doing "justice" and "good things" she will be loved by him."


And that's Anna-senpai, the fictional character.

Apocryphon 1 day ago 1 reply      
Based on this article, are a majority of DDOS-prevention firms really just hacker outfits who are launching attacks on rival firms?
Dolores12 1 day ago 0 replies      
If I were anna-senpai, I would take my own anti-DDoS servers down to avoid suspicion. Hence, here is a question:

Have any of ProTraf's servers been hit by the Mirai botnet?

throw2016 1 day ago 1 reply      
Let alone the US, the security services of nearly any state can take care of this. But no, they want to access and use these services with plausible deniability, and so let them exist and extort others.

I don't think anyone imagines the NSA, the russian or chinese security services do not have the ability to put a stop to this, at least those parts that are in their control.

       cached 21 January 2017 03:11:01 GMT