Hacker News with inline top comments | 3 Jul 2017 | Best
Silicon Valley Women, in Cultural Shift, Frankly Describe Sexual Harassment nytimes.com
1156 points by coloneltcb  2 days ago   709 comments top 76
rl3 2 days ago 14 replies      
This entire article is a smorgasbord of cringe:

>During the recruiting process, Mr. McClure, a founder of 500 Startups and an investor, sent her a Facebook message that read in part, "I was getting confused figuring out whether to hire you or hit on you."

One would think having the phrases "hire you" and "hit on you" in the same sentence when communicating with someone undergoing recruitment at your company would be reason enough to take pause for a moment, and maybe ponder why PR suicide seems like a good idea.

>Mr. Canter, in an interview, said that Ms. Dent "came on strong to me, asking for help" and that she had "used her sexuality publicly." He said he disliked her ideas so he behaved the way he did to make her go away.


>Lindsay Meyer, an entrepreneur in San Francisco, said Mr. Caldbeck put $25,000 of his own money into her fitness start-up in 2015. That gave Mr. Caldbeck reason to constantly text her; in those messages, reviewed by The Times, he asked if she was attracted to him and why she would rather be with her boyfriend than him. At times, he groped and kissed her, she said.

That one's just downright pathologically creepy in the extreme.

CydeWeys 2 days ago 5 replies      
This is a watershed moment in the VC industry. The dam has finally burst, and we're now seeing the establishment of a new norm in which women who are being harassed go public rather than feeling compelled to hide it. Expect to see many men who were operating under the old norms getting ousted.
tuna-piano 2 days ago 6 replies      
Wow, this seems terrible. For these women (and from the sounds of it, many more) to always have to second guess if an investor likes them for good business reasons or whether it's just because they're pretty. To always dread meeting a new business contact, knowing there's a small but substantial chance he'll make some awkward comment.

Some thoughts:

1. Is this more common in silicon valley than elsewhere? (I've never seen any remotely similar male->female sexual harassment in the non-valley places I've worked)

2. Is this more common among high-powered people than low-powered people (seems definitely so)?

3. There's definitely a double standard here. I once worked (in tech) with a good looking former male model. There were many very suggestive comments made to him and about him, and they made him feel obviously uncomfortable. Of course, the women making these comments did so pretty openly and with humor, as female->male is not really considered wrong for sexual harassment.

4. Obviously some of these remarks are worse than others. But many people do end up dating and marrying their coworkers (Bill Gates, Sergey Brin, Phil Knight...). I think there is not a black-and-white line for suggestive comments to people you are connected with at work. The line is thick and grey, but wow do some of these guys not have any self-awareness.

ig1 1 day ago 3 replies      
I feel deeply uncomfortable about 500 Startups' position on this (https://500.co/making-changes-at-500/) for four reasons:

1) 500Startups considered these issues to be serious enough to remove McClure from his role but didn't publicly disclose this. This meant that he presumably could continue to meet female founders and other WIT while still appearing to be in a position of power.

2) A number of LPs in 500 have stated that they were not told this change had happened and the reasons why. This suggests that 500 were trying to keep this as quiet as possible.

3) McClure continued to represent 500Startups in his official role after his apparent removal. A few weeks ago he was deeply involved in the launch of 500Melbourne in Australia.

4) Eight days ago Tsai tweeted "Binary Capital's Justin Caldbeck accused of unwanted sexual advances towards female founders. Where's the outrage?" while at the same time being fully aware that 500Startups was not disclosing McClure's behaviour.

500 Startups has done good work in the past on diversity, but this appears to extend beyond McClure. They need to adopt full disclosure and address what, from the outside, could look like an attempt to cover up inappropriate behaviour.

notacoward 2 days ago 3 replies      
News flash: finance guys have predatory attitudes toward women.

OK, not news at all. VC is part of the finance industry. No, it's not "disruptive" or in opposition to that industry in any way. Just look at who's putting all that money in before it's doled back out to entrepreneurs. Look at who has the power to force someone like McClure out. Yep, just a differently decorated branch office of Wall Street.

VC partners hew to the norms of their own industry, not the industries they invest in. The finance industry is a notorious bastion of the old boys' club preying on everyone else, with much of the dirty work done by hyper-aggressive young bloods vying to be among the very few elevated into the inner circle as its older members die off. Of course VCs behave this way, just like TV/movie producers and all the other specialized branches of the finance industry. How could anyone have expected or believed otherwise?

refurb 2 days ago 5 replies      
For everyone who is wondering why these people act the way they do, the answer is that their actions work some of the time. Otherwise, why would they do it if they continually strike out?

What one person might see as harassment, another might see as an exciting "chase". I've known women who have had guys be very aggressive (it's a fine line) and they were quite taken by it.

This of course is not excusing the behavior whatsoever. If you lack the social skills to see when you've crossed the line, the best approach is to avoid the behavior altogether.

ChuckMcM 2 days ago 1 reply      
I give a lot of credit to women like Susan Fowler who were brave enough to speak out, starting what feels a bit like a tidal wave. I hope we see a similar light shone on institutionalized racism.
austenallred 2 days ago 5 replies      
I had always heard the constant murmur of sexual harassment happening in the Valley, but I was never sure if there was any merit to it. I anecdotally had never seen anything that would even come close to a situation like those described, so I kind of brushed it off.

But then articles like these come out. Wow. These are some of the kingpins of Silicon Valley, shown with hard, factual evidence to be sexually harassing women. I had no idea.

auganov 2 days ago 2 replies      
"Apologies" like Chris Sacca's [0] don't make you too hopeful.

 Often I have committed as their first limited partner and encouraged them to use my name and participation to attract other investment. I've introduced them to my fund's most loyal investors and made sure they have had the opportunity to make their case rather than get lost in an inbox. I've also connected them to our trusted service providers, saving them the time and frustration I experienced when trying to get my first fund off the ground. Above all else, I've rolled up my sleeves and spent the time mentoring many of them in how to approach the business itself and how to navigate the inevitable challenges that arise.
Yea, that's what VCs do? Just apologizing and taking a bit of time to rethink stuff won't cut it; nobody's going to be reassured by lame platitudes.

[0] https://medium.com/@sacca/i-have-more-work-to-do-c775c5d56ca...

Apocryphon 2 days ago 18 replies      
How is it that the old money industries of Wall Street and Hollywood, no vanguards of gender egalitarianism themselves, seem to have less flagrant sexual harassment issues? Do they keep them under wraps or are they just more mature by now?
coloneltcb 2 days ago 2 replies      
Buried lede: Dave McClure implicated and is out of daily operations at 500 Startups
Mz 2 days ago 2 replies      
> Saying anything, the women were warned, might lead to ostracism.

The problem is that women are already being ostracized. When you can't get hired or you can't get VC money or you can't get business connections because the only time men will talk to you is to hit on you, you are already basically dead in the water as a business person.

How do we get out of this dead end?

rdlecler1 2 days ago 4 replies      
Has anyone seen a policy on how to handle sexual harassment allegations in the workplace? If it comes down to "he said, she said", do you fire someone even though you don't have evidence? On the other hand, if you don't fire someone and new allegations come up later, it makes the employer look like an enabler.
tiredwired 2 days ago 2 replies      
Harassment goes both ways. I have worked in Silicon Valley for 8 years. I have heard women managers and even a CEO say and do inappropriate things. Not only did the woman CEO say horrible things about her employees, she was also a bad CEO.
gcatalfamo 1 day ago 0 replies      
As I was trying to say in the TechCrunch submission, without defending the real assholes, my honest takeaway from the whole sexual-harassment climate in the Bay Area, or the US in general, is that you could never end up together with, let alone married to, a colleague in that paranoid atmosphere.

Which is exactly what I did where I live in Italy. Happily married, and yes, after saying things to a colleague of mine that would have gotten me accused in the Bay Area.

1024core 2 days ago 0 replies      
On the one hand, I cringe when reading such accounts, and wonder which of the people around me are engaging in such behavior.

On the other hand, I'm glad these women are going public, as nothing scatters cockroaches like sunlight.

smmsnsks 2 days ago 3 replies      
Avoiding this situation is why I never meet women 1:1 outside of conference rooms, yet Mike Pence was roundly mocked for doing this.
goatcurious 2 days ago 1 reply      
I remember coming across a 2011 article on similar behavior by a New York investor, where I could not figure out whether the writers were condoning the behavior, in an almost fawning tone, or highlighting a problem. http://observer.com/2011/11/charlie-odonnell-women-in-tech-d...

Just read it again, it is beyond cringeworthy.

nashashmi 1 day ago 2 replies      
Isn't there an inherent culture among men that dates all the way back to middle school, high school, and college that somewhat sets the tone for inappropriate behavior later in life?

These kinds of behaviours happen a lot and begin at a very young age. And they seem to be okayed by nearly everyone then. Why don't these things stop before they begin? I almost feel sorry for the men in these stories because the rules seem to have changed on them at some point and nobody told them when.

Let's be clear! These actions are not ok in any work setting at any work level in any industry in any situation of diversity or lack thereof. Further, these actions are not ok in college, in high school, or middle school and in any area of education.

Today the focus is on the investment industry, but it happens everywhere, from bars to restaurants to company meetings. And these articles need to generalize the environment beyond Silicon Valley. Everyone should be put on alert. Even bystanders.

CSMastermind 1 day ago 0 replies      
I've known Marc Canter for almost two decades now. I've been to his house, out to dinner with him, in classes he's taught, and at startups he's been a part of. I have never once seen him hit on anyone. He's also not a particularly powerful person in silicon valley. He is very socially awkward, however.

I'd, personally, like to see the proof before I condemn the man.

notadoc 1 day ago 0 replies      
I wonder how ubiquitous this behavior is? And how often it applies to reversed gender and same-sex scenarios as well? Do general demographics play a role? In any given male or female dominated industry, is the less dominant gender more likely to experience harassment?

Many people I know (both female and male) have been come onto in a work environment by someone they are not interested in, or experienced an inappropriate scenario or comment, or experienced unwanted flirting or propositions. I think most people simply don't talk about it outside of their friend or social group.

snikolic 2 days ago 0 replies      
Made me think of this NYT article from 2010 which mentions the fundraising experience of a female CEO/Founder. http://www.nytimes.com/2010/04/18/technology/18women.html

Some gross anecdotes about Bay Area VCs showing naked pictures of themselves, asking about her husband's sexual performance, etc.

(For the record, the company she was trying to fund has been quite successful.)

likelynew 1 day ago 1 reply      
I don't know if it is appropriate to say, but I think overt preference in hiring females is one of the biggest causes. Almost every tech company hires women with lesser qualifications to improve diversity. I know it is for the best, but in my personal experience, it is something some males talk about a lot. A few interpret, inside their heads, that the females are there for them. I think it is evident even in snippets of this article: "I was getting confused figuring out whether to hire you or hit on you." It kind of implies that she is not qualified to be hired, according to him, but... I don't know, it is a complex relationship.
nhumrich 1 day ago 0 replies      
I always knew women had an unfair disadvantage in this industry. But I never realized it was this bad. This is a whole different level than I imagined, and it makes me want to throw up. On behalf of my gender (I'm a male, if that wasn't obvious from my lack of awareness) I would like to sincerely apologise. Women, please keep telling us these stories. Please have the strength to "go public" with this information. It helped me realize the severity of the situation; hopefully it will spread.
RangerScience 1 day ago 0 replies      
As a man, I really value these clear, blunt accounts. It's like a bug report with details - here's what I need to know to identify the problem, and begin addressing it.
naiveattack 1 day ago 0 replies      
Scroll down to the bottom comments for real personal anecdotes and experiences; and contrasting perspectives.

The bottom comments are important to complete this discussion if you are willing to entertain different thoughts.

I won't say more here, because I want people to actually be able to find this comment. Unlike the ones at the bottom.


taytus 2 days ago 1 reply      
Wow, think for a second how hard it is to be an entrepreneur. All the shit we have to deal with at so many different levels. These women go beyond that, literally putting everything on the line. Much respect.
s73ver 2 days ago 0 replies      
I hope this becomes more and more common, leading to actual protections against it, and not the "We're totally serious about combating harassment and sexism, this time, for reals" response that seems to have been the norm for the past several years.
sethbannon 2 days ago 0 replies      
It takes such incredible bravery to call out this kind of behavior in a place that can feel too much like an old boys' club. I'm in awe of these women, from Susan Fowler to Niniane Wang to Leiti Hsu & Susan Ho to the ones in this article. I don't know any of them, but they are the catalysts of progress. It's up to all of us to make sure this is a turning point for an industry that needs to do so much better.
zebraflask 2 days ago 0 replies      
I would find it highly ironic if startup culture ends up adopting some variation of the "Mike Pence Rule" (which he got from Billy Graham) as a result of these incidents. Some of the comments sound like they're suggesting similar things.
edwinyzh 1 day ago 0 replies      
Visiting the page I'm getting this error: Invalid URL

The requested URL "http://%5bNo%20Host%5d/2017/06/30/technology/women-entrepren..." is invalid. Reference #9.360e4cdb.1498896362.1047f8d

james1071 19 hours ago 0 replies      
This is what happens when people get rich quickly: lots of them behave badly.

It is hardly unique to men - there are plenty of rich bitches in female-dominated industries.

As for women being sexually harassed - that would stop immediately if they all shunned the rich guys.

But, that's never going to happen, is it?

lebanon_tn 1 day ago 2 replies      
> Lindsay Meyer, an entrepreneur in San Francisco, said Mr. Caldbeck put $25,000 of his own money into her fitness start-up in 2015. That gave Mr. Caldbeck reason to constantly text her; in those messages, reviewed by The Times, he asked if she was attracted to him and why she would rather be with her boyfriend than him. At times, he groped and kissed her, she said.

> "I felt like I had to tolerate it because this is the cost of being a nonwhite female founder," said Ms. Meyer, who is Asian-American.

As abhorrent as Caldbeck's behavior was, what difference does being nonwhite make? It's not as if white female founders don't get sexually harassed.

DrNuke 2 days ago 0 replies      
Douchebags are always one too many but Silicon Valley, like every other human environment based on ambition and greed, is surely not the safest / shiniest congregation on planet Earth?
tomcam 2 days ago 1 reply      
Have things become worse or is Seattle different? I saw none of this behavior during my tenure at Microsoft (1996-2000). Admittedly I am a dude, though a lifelong feminist.
rini17 1 day ago 0 replies      
No one noticed the photos with rather pin-up poses?
tawayyy 1 day ago 0 replies      
One thing that always makes me very sad is to see women working near the top males in a company who look nothing like normal women, but more like top models. Sometimes this happens just because a very beautiful woman is also very smart, but given how frequent it is, I bet there are cases of a conscious bias toward selecting hot women. Hiring beautiful women on purpose is a pretty good recipe for disaster.
idibidiart 2 days ago 1 reply      
If this is what happens in Silicon Valley, I wonder what happens in Wall St firms.
rargulati 1 day ago 0 replies      
Here we are, a moment where history changes, but human nature doesn't.

It is true: these women are brave and correct in action. The men failed in their professional responsibilities. Silicon Valley has the advantage of being the keeper of a market where the individuals in the labor pool have greater rights than their counterparts in Hollywood and Wall Street (where similar goings-on are typically kept hush-hush).

Yet what will be the outcome? Will the men with money now be afraid to take a professional meeting with a woman for fear of their own desires OR for fear that something negative will come of it? If the risk of a meeting may now include the risk of a lawsuit / losing your position and reputation, the calculus doesn't work out.

This could lead to a strange dystopian outcome: segregated funnels. Women VCs / Male VCs - some overlap may occur, but most end up meeting with their own gender.

Or not. Just a thought experiment on incentives and outcomes. Most likely we'll go back to regularly scheduled programming in a few months time. Some folks get kicked out, and a new batch will be there to replace 'em.

NelsonMinar 2 days ago 1 reply      
I'm ashamed of my colleagues and my industry.
louithethrid 1 day ago 0 replies      
I deeply, and with all the moral conviction I can muster, detest people who have so much power they can abuse it, while I, having no power and no chance to abuse it, at least get the chance to redecorate my inability and envy as moral outrage and principles. That is detestable. Very.
ngneer 2 days ago 2 replies      
These are horrible stories.

Interesting to see if folks on HN have had the experience of being attracted to an interviewee/interviewer, or investee/investor, and how they dealt with it given the complexities involved.

pfarnsworth 2 days ago 0 replies      
Good. Hopefully this change sweeps across all occupations and areas of the US and abroad.
LordHumungous 2 days ago 1 reply      
Wealth protects you in America, and these VC's know it.
partycoder 1 day ago 0 replies      
Takes a lot of courage to do this. People who fight their employers can sometimes be seen as problematic during background checks and can be dismissed for BS reasons (or no reason at all).
itengelhardt 1 day ago 0 replies      
> Lindsay Meyer, an entrepreneur in San Francisco, said Mr. Caldbeck put $25,000 of his own money into her fitness start-up in 2015. That gave Mr. Caldbeck reason to constantly text her; in those messages, reviewed by The Times, he asked if she was attracted to him and why she would rather be with her boyfriend than him. At times, he groped and kissed her, she said.

Nice! Here's grade-A material for the 46th President of the United States of America /s

edit: added /s because it apparently wasn't obvious to some out there. Further clarification: this is a sarcastic comment on the fact that the USA has a president who has publicly admitted to sexually harassing women. With that person at the helm of the nation, how can anyone be surprised by others behaving similarly?

fjfkdjfjfjd 1 day ago 3 replies      
This is a little off-topic so mods, please flag/move as needed. (Wasn't sure if I should make this an Ask HN or not).

I feel I may be a victim of a false accusation of sexual harassment. I am really not sure how to proceed at this point. I have not put up a fight for fear that I could lose my job (which may already be in the works at this point). Management has not used the term "sexual harassment", only the verbiage that "I have made a female employee feel uncomfortable".

As I am currently in a temporary internship position due to end in a few months, I am not so concerned about this job, but this experience has made me very concerned about being falsely accused in the future of sexual harassment or rape. Should I just avoid all interaction with women at work completely? How would I do this with a female boss? (refusing to meet with her in private?)

Any advice is welcome.

austincheney 1 day ago 0 replies      
Yet another reason to NEVER move to silicon valley.


Pros:

* if you land a job at a Fortune 50 company you could end up making $300k after 5-6 years if you are a badass. These are all big ifs.

* lots of sunshine

* easier access to venture capital


Cons:

* high taxes

* real estate is 10x inflated compared to most of the rest of the country

* actually, everything costs more there. The valley also has the highest gasoline prices in the country. Even food is more expensive there.

* the top employers have, in the past, colluded to illegally not hire each other's developers

* illegal sexual harassment appears to be a cultural norm

* housing shortages

* weird local politics (social justice warriors are actually a thing there)


In summary the valley looks like a great place for founders (if male) but horrid for employees.

honestoHeminway 1 day ago 1 reply      
Clearly the valley has no problem with socially isolated tech types rising to power without advancing socially. Sorry, this is not just a moral problem; it's the problem of a socially self-isolating caste being brought back into human contact by its rise to power and then abusing it.
bdamm 2 days ago 12 replies      
I have a confession to make.

With few exceptions, I cannot work with a woman without thinking about having sex with her. It's not like I'm trying, it's more like the thought is a blinking red light and I can't help but look at the thought. Then I realize the thought is absurd, but it's already happened. This normally doesn't leak out, but for some women, I will give them "eyes" and usually feel bad about it afterwards. But that can be persistent, and for the woman, must be kind of creepy feeling.

I'd love for this to not be the case, but after years of it, I'm at a loss as to how a man changes this somewhat foundational part of the brain.

The women for whom I don't have that thought almost always follow remarkably strict professional conduct to a T. The equivalent for a man would be top-button done up and formal slacks every day, never smiling. I actually really like working with these women because it's kind of a relief.

mirimir 1 day ago 3 replies      
Yes, this is it, I think. I knew this guy, many years ago, who bragged about walking up to random women, and asking if they'd like to have sex. He said that he got slapped a lot, but also had lots of sex. I think that he was just BSing, but who knows?
watwut 1 day ago 3 replies      
> a volunteer organization who went directly from "I don't think I can see a movie tonight" to "This situation is making me uncomfortable" with nothing in between.

She tried to tell you gently, in the way girls are taught to reject men while still being nice to them. If you reject a dude another way, some tend to get insulted anyway.

You did not get it, so she informed you flat out about how she really feels about the situation. You got angry :).

Don't force us to walk a super-tight balancing act where being nice is too subtle, telling it flat out is wrong, and there are maybe two magical phrases (different for every guy) that are allowed. For one, not every girl has such super-high social skills. For another, there is no way to win in that situation.

bt4u2 1 day ago 2 replies      
This is silly. Don't reduce me to some risk averse weakling because I'm not willing to put my neck out for strangers for little to no reward. If you're like that, and you regularly go to Africa to help people, more power to you. I'm not. Most people are not.
BadassFractal 2 days ago 1 reply      
It's an explosive combination of socially maladjusted nerds who suddenly ended up in positions of power in a location/industry with rather few women for them to influence with that authority + a new wave of progressive women trying to break into a male dominated space.

It's going to be a rickety ride for sure.

On a personal note, part of me was refusing to believe that someone like the partner at Binary was even possible in this day and age, I had never run into this myself and concluded this must have been a caricature at best. I must admit I was wrong to assume that, turns out those people are real. Hopefully there are very few of them, but I might have to change my mind on that one too soon.

pjwal 1 day ago 1 reply      
Let's use this opportunity to clean out the stink. Including the fact that https://www.linkedin.com/in/jsmarr/ made Friendster (somewhat) popular by taking users' registrations to hack into other users' accounts.

Register your email and password here: Now we will use your registration details on the major sites to scrape all the data we can. Huge violation that Joseph Smarr has never answered for.

drenvuk 2 days ago 1 reply      
How much of this is actually illegal? In workplaces you have legal recourse but in this situation it doesn't seem like there's anything you can do except blast their name out there for everyone to see.
azylman 2 days ago 2 replies      
I really hope that stories like this will let us stop blaming specific companies like Uber and accept that this is a problem with our industry. We're never going to be able to fix this as long as we keep sticking our heads in the sand and pretending it's not a systemic issue.
rokhayakebe 2 days ago 2 replies      
HN women: I am a male and I am genuinely curious. Is the following SH on its own, or does the position of power the person holds make it SH, or does the environment (the workplace) make it SH?

"I was getting confused figuring out whether to hire you or hit on you."

microcolonel 2 days ago 1 reply      
PSA: If you are an honest man who manages or works with women in any capacity, please keep complete audio records of your time at work. File storage is cheap, a false charge will cost you your livelihood and damage the reputation of your entire industry. If you live in a jurisdiction which requires more than one present party to consent to recording, run for your life.

P.S. not at all insinuating that the accusations here are necessarily false.

Milestone: 100M Certificates Issued letsencrypt.org
575 points by okket  3 days ago   183 comments top 18
koolba 3 days ago 8 replies      
SSL certificate from a traditional provider valid for a year: $10.

SSL certificate from a traditional provider valid for two years: $20.

Automated SSL certificate generation and deployment via LetsEncrypt with zero human intervention and more importantly zero human intervention to renew it going forward - priceless.


That's the real value for me. At $10/cert, that's not even a rounding error. But manually generating a new CSR, uploading it via a crappy web form, waiting a random amount of time, proving domain ownership by responding to an email (sent in plaintext), waiting a different random amount of time, downloading the new cert (again usually sent via plaintext email), and finally copying it over the old cert and reloading the SSL conf... now that costs some serious time, and time is money.

tyingq 3 days ago 1 reply      
This is an interesting situation where "public good" happened to align well with business goals of some deep pockets.

Particularly Google and Akamai, two of the biggest LE sponsors. They both retain good visibility into user behavior (like specific URLs visited) because of things like GA, MITM proxying, etc. But ubiquitous availability of that is taken away from ISP operators.

Which is a good thing. Makes me curious if there's anything else like this that could be achieved. Are there other net public good projects that align well with deep pocket potential sponsors?

jagermo 3 days ago 6 replies      
I think they nail their point with "it illustrates the strong demand for our services."

Letsencrypt is cheap (free) and easy to use.

Even people without a lot of experience can secure their sites and apps, and it just works. Yes, you have to renew every three months, but given the price and the excellent documentation, that's worth it.

Before Let's Encrypt I always wanted to secure my blog with HTTPS but never got around to it, because it just looked super complicated and error-prone.

Then my provider built its own tool for LE, and it's just so easy to implement.

TekMol 3 days ago 5 replies      
I still wonder how Let's Encrypt works.

I understand the problem they solve: a user wants to get the public key for a certain domain, so he knows he is talking to a server run by the domain owner and not some man in the middle.

So he asks a third party whose public key he already has. In this case, Let's Encrypt. Ok.

But how did Let's Encrypt get the public key from the domain owner?

I know they make the domain owner install some software on his server. But how does that ensure they are talking to a server run by the domain owner and not a man in the middle? Does the software include some private key, so they can send a "hey, encrypt this" message to prove the server in fact has that private key?

If so, why is the Let's Encrypt software so complicated and not just a 5 line script or something?
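For what it's worth, the core of the domain validation really is small. In the ACME HTTP-01 challenge (the draft protocol Let's Encrypt implements), the CA hands the client a random token, and the client must serve a "key authorization" string, derived from that token and the client's account key, at http://&lt;domain&gt;/.well-known/acme-challenge/&lt;token&gt;. Only someone who controls both the domain's web server and the account's private key can do that. A minimal sketch of the computation, with made-up token and JWK values for illustration:

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # ACME uses unpadded base64url encoding throughout
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def jwk_thumbprint(jwk: dict) -> str:
    # RFC 7638 thumbprint: serialize the (required) JWK members with
    # sorted keys and no whitespace, then SHA-256 the result
    canonical = json.dumps(jwk, sort_keys=True, separators=(",", ":"))
    return b64url(hashlib.sha256(canonical.encode("ascii")).digest())

def key_authorization(token: str, account_jwk: dict) -> str:
    # This exact string must appear at
    # http://<domain>/.well-known/acme-challenge/<token>
    return token + "." + jwk_thumbprint(account_jwk)

# Hypothetical token and (truncated) RSA account key, for illustration only
token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
jwk = {"kty": "RSA", "n": "0vx7agoebGcQSuu...", "e": "AQAB"}
print(key_authorization(token, jwk))
```

So the validation step itself could fit in a short script; most of the bulk in clients like certbot is account management, CSR generation, web-server integration, and renewal plumbing around it.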

gator-io 3 days ago 6 replies      
The biggest issue we've had is the short expirations. We have 51 certificates in our organization and do not want to rely on auto-renew.

As our community project, we built a totally free public service to monitor certs and alert you when they get close to expiration, become invalid, etc.:


Feel free to use it for any certs.

asenna 3 days ago 2 replies      
Genuine question: Are the other smaller cert-issuing services going out of business? If not, what has been their response to LetsEncrypt?

Not that all of them should survive; there are a lot of crappy services that deserved this. But I'm just trying to place myself in their CEOs' position and wondering what the game plan should be.

rmc 3 days ago 0 replies      
Weren't the Snowden releases part of the motivation for Let's Encrypt? If the web is "encryption for everything by default", then dragnet surveillance by the NSA (etc.) is much harder.
doublerebel 3 days ago 0 replies      
Shameless open-source plug: if you need an automatic LetsEncrypt module that works with your containerized environment, try ten-ply-crest [0]. It works best with Consul, where it automatically registers based on service tag, and can securely store certs in Vault, which makes it a great match for Fabio [1].

This way you can run multiple load-balancers and keep your existing infrastructure while still enjoying all the benefits of LE. Written in JavaScript but works with any stack.

Based on the great rawacme, and we'll keep this updated as ACME hits 2.0! [2]

[0]: https://github.com/nextorigin/ten-ply-crest

[1]: https://github.com/nextorigin/ten-ply-crest/issues/27

[2]: https://github.com/AngryBytes/rawacme-node/issues/2

linkmotif 3 days ago 0 replies      
How timely. I got my first LE cert two days ago! I used caddy, a really nice http server and proxy that's HTTPS first [0].

[0] https://caddyserver.com

tscizzle 3 days ago 0 replies      
The graph of HTTPS percentage has some sudden massive dips a little bit ago. Does anyone know the reason for those?
corford 3 days ago 1 reply      
I love letsencrypt but I wish they would hurry up with deterministic dns challenges. It would make securely automating certificate renewal vastly simpler and easier (especially for certs that are being used for non www serving endpoints e.g. mail servers etc).

I worry that at the moment there are probably a lot of systems out there that leave DNS API keys lying around on endpoints because it's easier to automate the renewal than write a convoluted two stage ansible role/play that delegates sensitive actions to the controller running the play.

michaelbuckbee 3 days ago 0 replies      
LE really fills a big need for cert issuance for massive web hosts with custom domains (WPEngine, Hubspot, Shopify are all on LE) as it both greatly reduces the support burden of setting up certs for all of their sites and sidesteps some of the limitations of the legacy cert structure.
binocarlos 3 days ago 0 replies      
I've been using kube-lego (https://github.com/jetstack/kube-lego) to automate certs for Kubernetes ingress for the past 9 months. It is a joy to just add a domain to a manifest, kubectl apply and then hit a browser with a certificate working. Thank you LetsEncrypt for making the Internet more secure.
pas 3 days ago 1 reply      
And most of that is because LE [still] doesn't issue wildcard certs, nor does it really plan to :(
id122015 3 days ago 1 reply      
I'd like to have the list of those 100 million websites; I didn't know there are so many.
halloij 3 days ago 1 reply      
Shame they can't issue wildcard certs. I don't see why they cannot.
pulse7 3 days ago 6 replies      
I would like to get a certificate with a 3-year lifetime. I know that 90 days will limit the damage from a key compromise, but I don't want to automate...
mavdi 3 days ago 10 replies      
Nearly 20K of them are for PayPal phishing sites, and who knows how many for others. While the intention is noble, one can't ignore the damage they've done.
An easter egg for one user: Luke Skywalker einaregilsson.com
788 points by einaregilsson  4 days ago   133 comments top 20
dankohn1 4 days ago 2 replies      
This is similar to the story of the guy who pranked his roommate by buying Facebook targeted ads that only targeted him [0]. Perhaps in the future, the AIs will know us all so well that every interaction will be filled with little in-jokes.


carrier_lost 4 days ago 1 reply      
This is clever! Also worth praising: This site is free, loads quickly, doesn't require a user account, and has an easy-to-understand privacy policy.
scandox 4 days ago 6 replies      
Well Hamill is a better man than I.

I would hate this. If I imagine being him, this is like having to turn up to the office at 7am on a Saturday for a meeting. It's work. In analogy we're watching Mark Hamill get out of bed on a weekend, groan, drink some Pepto-Bismol and get his best shit-eating grin on his face.

comice 4 days ago 5 replies      
Must admit that this seems a bit creepy to me! I guess he's just amused but I wonder if it starts him thinking, is this game monitoring me personally? How else can I be targeted?
eeks 4 days ago 1 reply      
That's just too cool. I have not read such a faith-in-humanity-inspiring moment in a long time.
kbutler 4 days ago 1 reply      
Wonder how many people got "Dad" and wondered why in the world I'm playing against Darth Vader?!?
bcg1 4 days ago 0 replies      
Tried it out to see the easter egg... not only does it work, but the game is pretty awesome too
erikb 4 days ago 1 reply      
And this is how you do marketing the right way.
bojo 4 days ago 1 reply      
I suppose the more interesting question here is what the author did to actually detect the avatar.
cwbrandsma 4 days ago 1 reply      
Next time he needs to put in Batman.
mysterydip 4 days ago 0 replies      
That's great! I haven't put any real easter eggs into one of my games yet, but I definitely will with my next one. You never know who will play your game (well I guess you do if they tweet about it, but you get what I mean).
devgutt 4 days ago 0 replies      
I wish they could have allowed Mark to change the third set by passing his hand over
msimpson 4 days ago 1 reply      
This is why we love Mark Hamill.
_e 4 days ago 0 replies      
If only the easter egg showcased one of his other roles such as cocknocker from Jay and Silent Bob Strike Back [0].

[0] http://m.imdb.com/name/nm0000434/filmotype/actor

lpgauth 4 days ago 0 replies      
Hopefully, this doesn't get shutdown by Hasbro.
simonhamp 4 days ago 0 replies      
Totally worth it!
dmitripopov 4 days ago 4 replies      
It should be noted that there was a time when Mark Hamill was incredibly pissed off that people only saw Luke in him, and his acting career was actually hurt by this role. But now he is older and it looks like he is just fine with it. Or maybe it's just Prozac that makes it look that way.
mikeash 4 days ago 1 reply      
Wow, this place sure is filled with curmudgeons.
kstenerud 4 days ago 1 reply      
It saddens me to see all the comments ascribing victimization where none has occurred.

This is how SJWs operate. Don't do that.

Magic-Wormhole Get things from one computer to another, safely github.com
734 points by lelf  5 days ago   177 comments top 52
ohhhlol 5 days ago 2 replies      
> The wormhole library requires a "Rendezvous Server": a simple WebSocket-based relay that delivers messages from one client to another. This allows the wormhole codes to omit IP addresses and port numbers. The URL of a public server is baked into the library for use as a default, and will be freely available until volume or abuse makes it infeasible to support.

why not make use of https://docs.syncthing.net/users/strelaysrv.html ? lots of servers http://relays.syncthing.net/

bhenc 5 days ago 6 replies      
> Copying files with ssh/scp is fine, but requires previous arrangements and an account on the target machine, and how do you bootstrap the account?

Assuming that you have openssh and rssh installed, you bootstrap like this:

  useradd -m -g users -s /usr/bin/rssh tmp
  passwd tmp
  edit /etc/rssh.conf and uncomment allowscp

Share the password with the party you want to exchange data with. Make sure your ports are open.

See: https://serverfault.com/questions/197545/can-non-login-accou...

The use case I see for wormhole is if you're working purely in the python ecosystem. That's it.

You're free to disagree of course, but I prefer ssh, since it's peer-to-peer end-to-end encrypted, and extends to cover other use cases much more easily (rsync, VNC, etc.).

bob1029 5 days ago 3 replies      
I would highly recommend looking into this (seemingly-obscure) technique for NAT hole punching: https://samy.pl/pwnat/

It would allow for a "magic wormhole"-style system without the need for a MITM (trusted or otherwise).

schoen 5 days ago 2 replies      
This reminds me of the great tool http://www.fefe.de/ncp/, which seems like the same thing only without the cryptographic authentication!
chx 5 days ago 3 replies      
I like using https://file.pizza/ for this.
allworknoplay 5 days ago 1 reply      
The security model here is pretty great assuming you trust the rendezvous server.

Maybe consider an optional challenge/response prompt (when your pal enters the code, their client generates a second code that they give back to you) to make sure nobody has intercepted the request before them, odds aside; if someone got your initial code somehow, they could otherwise man-in-the-middle the request.

sw1sh 4 days ago 1 reply      
More people should use Keybase; this would be even easier within its filesystem https://keybase.io/docs/kbfs
nsxwolf 5 days ago 1 reply      
I would use this just to cut and paste text from my host machine to my VMs, because I've never been able to get that seemingly simple concept to work reliably.
nattmat 4 days ago 1 reply      
I see you guys arguing about what is easier: wormhole, syncthing, ssh. I'll argue that Keybase is by far the easiest. Just put the files in /Keybase/private/person0,person2
etanol 5 days ago 0 replies      
> Copying files onto a USB stick requires physical proximity, and is uncomfortable for transferring long-term secrets because flash memory is hard to erase. Copying files with ssh/scp is fine, but requires previous arrangements and an account on the target machine, and how do you bootstrap the account? Copying files through email first requires transcribing an email address in the opposite direction

I had similar motivations in 2006 to write a tool to copy files "point to point". So here's my shameless plug:


In my case, cryptography was not a requirement, though.

llamataboot 5 days ago 1 reply      
Waiting for the security nits, but this looks awesome and I have use cases for it every week
pflanze 4 days ago 0 replies      
Interesting to see the various approaches. I've been using a pair of simple scripts myself for ages:


(they use a few other scripts from the same repo) used as follows (only practical when you can copy-paste; my use case is copying things between servers without needing ssh authentication between them, while having open ssh sessions into both from the same desktop/laptop):

  chris@a:/tmp/chris$ echo Hello > World
  chris@a:/tmp/chris$ netoffer World
  --Run:--
  echo jxqtrb7xfq2e4dqy3uitc7986ydj56w59iqu84b | netfetch 15123
  chris@b:/tmp/chris$ echo jxqtrb7xfq2e4dqy3uitc7986ydj56w59iqu84b | netfetch 15123
  chris@b:/tmp/chris$ cat World
  Hello
(Uses gpg symmetric encryption underneath.)

api 5 days ago 0 replies      
A similar LAN or virtual network oriented tool:


... and another less magical one:


zimbatm 4 days ago 0 replies      
I'm surprised https://datproject.org/ didn't come up already.

It also allows you to share any data but also deals with incremental updates. The main use-case is to share big scientific datasets that update over time.

kyberias 4 days ago 1 reply      
My life is so much happier with Windows. Accessing files on any computer is very easy. Wannacry and that other nasty worm prove it to the world. :)
jedisct1 5 days ago 2 replies      
grizzles 5 days ago 1 reply      
Cool. Does the data transit the server, or does it do NAT / UPNP type stuff for direct comms after the initial rendezvous?
pveierland 4 days ago 2 replies      
https://transfer.sh/ is another neat service which allows you to upload a file easily using a tool such as curl and get a shareable link. There was one time when I only had Chrome Remote Desktop access to a machine without root, where this was a convenient way to share some files.

  $ curl --upload-file ./hello.txt https://transfer.sh/hello.txt
  https://transfer.sh/66nb8/hello.txt

mmargerum 2 days ago 0 replies      
Some of the golang folks are working on a very interesting project.


mihaifm 4 days ago 0 replies      
Great collection of tools here. I'm adding here my own approach, aimed at getting around proxies/firewalls: the files are encrypted and sent in the body of a HTTP request. The receiving end is a simple nodejs http server that can be started on the fly.


nathan-osman 4 days ago 0 replies      
Users may also want to look into NitroShare. Although it only works on the local network, it uses IP broadcast for automatic peer discovery (next version will use mDNS). It includes installers for Windows and macOS. Debian, Ubuntu, and Fedora include it in their respective package archives. There is also an Android app. (Disclaimer: I am the maintainer.)
jpillora 4 days ago 0 replies      
See also https://www.sharedrop.io/ (WebRTC-based AirDrop clone)
foota 5 days ago 1 reply      
How is the 16 bit wormhole code secure against brute force?
daxorid 5 days ago 2 replies      
So, basically, scp (with a 16-bit session key that has to be exchanged oob)?
aidenn0 4 days ago 1 reply      
Anybody know how the author generated the word-list? I generated one for my own passphrase use; I took the 2000 most common english words and used metaphone to prune any similar sounding words, which got me down to around 600; I then truncated the list to 512 yielding 9 bits per word.
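A rough sketch of that pruning process (using a crude homemade phonetic key as a stand-in for Metaphone, so it's illustrative rather than my actual code):

```python
# Prune a frequency-ordered word list so no two entries share a phonetic
# key, then truncate to a power of two for whole bits per word.
import math

def phonetic_key(word):
    """Very rough stand-in for Metaphone: treat c/k alike, drop vowels
    after the first letter, and collapse repeated consonants."""
    word = word.lower().replace("c", "k")
    key = word[0]
    for ch in word[1:]:
        if ch in "aeiou":
            continue
        if key[-1] == ch:
            continue
        key += ch
    return key

def build_wordlist(common_words, target_size=512):
    """Keep the first (most common) word seen for each phonetic key,
    then truncate so each word encodes a whole number of bits."""
    seen = {}
    for w in common_words:
        k = phonetic_key(w)
        if k not in seen:
            seen[k] = w
    return list(seen.values())[:target_size]

# Tiny demo: "kat" and "burd" sound like earlier words, so they drop out.
words = ["cat", "kat", "dog", "bird", "burd", "fish", "frog", "toad"]
wl = build_wordlist(words, target_size=4)
print(wl)                  # ['cat', 'dog', 'bird', 'fish']
print(math.log2(len(wl)))  # bits per word: 2.0 (for 512 words it's 9.0)
```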
astrodust 5 days ago 0 replies      
Extending this to a small text-based GUI with chat could make it a lot more useful, too. Sort of like a person-to-person BBS.
mbonzo 4 days ago 0 replies      
Hey everyone! I just did a quick video on Magic Wormhole showing how it works: https://www.youtube.com/watch?v=1SXCLAlpD70
curtis 5 days ago 0 replies      
> The receiving side offers tab-completion on the codewords...


Aissen 4 days ago 0 replies      
Looks like another application of the key exchange algorithm used in Mozilla Weave/Firefox Sync. They used to do that to associate new devices and send them the keys, but they killed it because the UX was more complex than user/password.
abhinai 5 days ago 0 replies      
Interesting. How does this compare to WebRTC based peer-to-peer file transfer like perhaps https://simplewebrtc.com/filetransfer as an example?
jbergens 4 days ago 0 replies      
This kind of tool should probably be distributed as a binary but then it would be a bit harder to know it doesn't do anything strange.

I would probably write it in go to make it easy to compile for different platforms.

popey 4 days ago 0 replies      
This is so handy. I pushed it to the snap store. So if you're on a snap supported Linux distro you can just run "snap install wormhole" to get it and future updates.
dak1 5 days ago 2 replies      
How does this compare to AirDrop (besides obviously being cross platform)?
mcrmonkey 5 days ago 0 replies      
There are some handy docker images for this over on the Docker Hub if you want to try them and don't have/want the needed Python setup but just happen to have docker installed.
dagw 4 days ago 0 replies      
This would be so much more useful if it was written in a language that made it easy to distribute the whole thing as a single statically compiled binary.
dang 5 days ago 0 replies      
consultSKI 4 days ago 2 replies      
Make mine pushbullet. Love it. https://www.pushbullet.com/
QuamVideri 4 days ago 1 reply      
Seems likely the use of the "Rendezvous" term is going to get you a cease-and-desist just like Apple got over what is now known as "Bonjour" (aka mDNS)


interfixus 4 days ago 1 reply      
Strictly for internal networks, and no crypto in transit, but dead easy - also for grandma - and covering phones as well as the three pc platforms: Dukto. I use it all the time.


reacweb 4 days ago 1 reply      
When I want to share things, I use ssh to put them in the static part of my website, in a directory with a random name, then I send the URL by mail. My sftp client is already configured with ssh keys. When what I have to send is a collection of JPEG files, I use fgallery.
JepZ 4 days ago 0 replies      
Not exactly the same, but for those Linux/KDE users who look for a tool to connect to another machine within the same LAN, I can recommend KDE-Connect (file-transfer, notifications, shared-clipboard, etc.).
Thespian2 5 days ago 0 replies      
Trusting the baked-in rendezvous server would seem to be the most obvious security "nit," which could be addressed by compiling and running your own server. But out-of-the-box, that would seem to be a weak-point for MitM attack.
hossbeast 5 days ago 0 replies      
> The wormhole library requires a "Rendezvous Server": a simple WebSocket-based relay that delivers messages from one client to another.
the_cat_kittles 5 days ago 1 reply      
the motivations are great:

"Moving a file to a friend's machine, when the humans can speak to each other (directly) but the computers cannot

Delivering a properly-random password to a new user via the phone

Supplying an SSH public key for future login use"

I had never even thought about those, and this is a great solution afaik.

devindotcom 5 days ago 0 replies      
Nice, this looks useful and extensible! I popped it up on TC, I like the idea of transferring files wizard-style.


ReverseCold 5 days ago 1 reply      
I expected something local, like airdrop, but cross platform. (someone, please)
cestith 5 days ago 1 reply      
What's wrong with S/MIME? Email doesn't have to be unencrypted.

Also, you only need to send the public part of an SSH key to the remote end to set up future keyed connections to that system.

This seems like a solution with very narrow problems to solve.

Frogolocalypse 4 days ago 0 replies      
Very simple, handy, and yet powerful utility. Thanks.
dexzod 5 days ago 0 replies      
Interesting. Would it be possible to use it on Android and iOS?
draw_down 4 days ago 0 replies      
Cool, seems like AirDrop which I find very useful.
TDD did not live up to expectations microsoft.com
572 points by kiyanwang  3 days ago   425 comments top 97
nostrademons 3 days ago 16 replies      
TDD failed for economic reasons, not engineering ones.

If you look at who the early TDD proponents were, virtually all of them were consultants who were called in to fix failing enterprise projects. When you're in this situation, the requirements are known. You have a single client, so you can largely do what the contract says you'll deliver and expect to get paid, and the previous failing team has already unearthed many of the "hidden" requirements that management didn't consider. So you've got a solid spec, which you can translate into tests, which you can use to write loosely-coupled, testable code.

This is not how most of the money is made in the software industry.

Software, as an industry, generally profits the most when it can identify an existing need that is currently solved without computers, and then make it 10x+ more efficient by applying computers. In this situation, the software doesn't need to be bug-free, it doesn't need to do everything, it just needs to work better than a human can. The requirements are usually ambiguous: you're sacrificing some portion of the capability of a human in exchange for making the important part orders of magnitude cheaper, and it's crucial to find out what the important part is and what you can sacrifice. And time-to-market is critical: you might get a million-times speedup over a human doing the job, but the next company that comes along will be lucky to get 50% on you, so they face much more of an adoption battle.

Under these conditions, TDD just slows you down. You don't even know what the requirements are, and a large portion of why you're building the product is to find out what they are. Slow down the initial MVP by a factor of 2 and somebody will beat you to it.

And so economically, the only companies to survive are those that have built a steaming hunk of shit, and that's why consultants like the inventors of TDD have a business model. They can make some money cleaning up the messes in certain business sectors where reliability is important, but most companies would rather keep their steaming piles of shit and hire developers to maintain them.

Interestingly, if you read Carlota Perez, she posits that the adoption of any new core technology is divided into two phases: the "installation" phase, where the technology spreads rapidly throughout society and replaces existing means of production, and the "deployment" phase, where the technology has already been adopted by everyone and the focus is on making it maximally useful for customers, with a war or financial crisis in-between. In the installation phase, Worse is Better [1] rules, time-to-market is crucial, financial capital dominates production capital, and successive waves of new businesses are overcome by startups. In the deployment phase, regulations are adopted, labor organizes, production capital reigns over financial capital, safety standards win over time-to-market, and few new businesses can enter the market. It's very likely that when software enters the deployment phase, we'll see a lot more interest in "forgotten" practices like security, TDD, provably-correct software, and basically anything that increases reliability & security at the expense of time to market.

[1] https://dreamsongs.com/RiseOfWorseIsBetter.html

atonse 3 days ago 10 replies      
In the earlier days of the ruby community, I feel TDD was seen as gospel. And if you dared say that TDD wasn't the way (which I always felt), you'd feel like you were ostracized (update: just rewording to say, you'd worry that it would hurt you in a job search, not that people were mean to you). So I never spoke up. I feel like I was in the TDD-bad closet.

I absolutely think _tests_ are useful, but have never found any advantages to test-DRIVEN-development (test-first).

But part of that is probably my style of problem solving. I consider it similar to sketching and doodling with code until a solution that "feels right" emerges. TDD severely slows that down, in my experience, with little benefit.

What I've found works really well is independently writing tests afterwards to really test your assumptions.

agentultra 3 days ago 6 replies      
Maybe the title should be: TDD did not live up to my expectations?

I too, like the author, have been practicing TDD for > 10 years. Test, implement, refactor, test... that's the cycle. If you follow that workflow I've never seen it do anything to a code base other than improve it. If you fail on the refactor step, as the author mentions, you're not getting the full benefit of TDD and may, in fact, be shooting yourself in the foot.

I've read studies that have demonstrated that whether you test first or last doesn't really have a huge impact on productivity.

However it does seem to have an impact on design. Testing first forces you to think about your desired outcomes and design your implementation towards them. If you think clearly about your problem, invariants, and APIs then you will guide yourself towards a decent system.

The only failing I've seen with TDD is that all too often we use it as a specification... and a test suite is woefully incomplete as a specification language. A sound type system, static analysis, or at the very least, property-based testing fill gaps here.

But for me, TDD is just the state of the art. I've yet to see someone suggest a better process or practice that alleviates their concerns with TDD.

dcherman 3 days ago 2 replies      
I've also never found TDD to really be very beneficial except for all but the most trivial utility libraries.

Most of the time, I have an idea of where I want to go, but not necessarily exactly what my interface will look like. Writing tests beforehand seems to never work out since nearly always there will be some requirement or change that I decide to make that'd necessitate re-writing the test anyway, so why write it to begin with?

The extent of my tests beforehand these days (if I write any before code) is generally in the form of (in jasmine.js terms):

  it('should behave this particular way', function() {
    fail();
  });

Basically serving as a glorified checklist of thoughts I had beforehand, but that's no more beneficial to me than just whiteboarding it or a piece of paper.

That said, all of my projects eventually contain unit tests and if necessary integration tests, I just never try to write them beforehand.

willvarfar 3 days ago 6 replies      
No article about TDD, particularly one that shouts out to the respected Ron Jeffries http://ronjeffries.com/, is complete without mentioning the TDD Sudoku Fiasco :)

Ravi has a nice summary: http://ravimohan.blogspot.se/2007/04/learning-from-sudoku-so...

Peter Norvig's old-fashioned approach is excellent counterbalance: http://norvig.com/sudoku.html

austenallred 3 days ago 6 replies      
The problem with TDD is that we flawed humans are writing the tests in the first place. If I suck at writing code there's no reason to believe I wouldn't suck at writing tests to check that code.

I use it on occasion as a good sanity check to make sure I didn't break anything too obvious, but this idea that TDD is a panacea where no bugs ever survive didn't ever make sense to me in the first place.

ryanmarsh 3 days ago 2 replies      
My day job is teaching TDD.

Just like other agile rhetoric I've found the benefits are not what the proponents advertise.

I teach it through pairing and here's what I find.

TDD provides two things.

1. Focus

Focus is something I find most programmers struggle with. When we're starting some work and I ask, "ok, what are we doing here" and then say "ok, let's start with a test," it is a focusing activity that brings clarity to the cluttered mind of the developer neck deep in complexity. I find my pairing partners write much less code, and much better code (even without good refactoring skills), when they write a test first. Few people naturally possess this kind of focus.

2. "Done"

This one caught me by surprise. My students often tell me they like TDD because when they're done programming they are actually done. They don't need to go and begin writing tests now that the code works. They like the feeling of not having additional chores after the real task is complete.

chubot 3 days ago 2 replies      
The tests get in the way. Because my design does not have low coupling, I end up with tests that also do not have low coupling.

Not to be smug, but I feel like this is a rookie mistake I learned 10 years ago immediately after starting TDD.

The slogan I use in my head is that testing calcifies interfaces. Once you have a test against an interface, it's hard to change it. If you find yourself changing tests and code AT THE SAME TIME, e.g. while refactoring, then your tests become less useful, and are just slowing you down.

Instead, you want to test against stable interfaces -- ones you did NOT create. That could be HTTP/WSGI/Rack for web services, or stdin/stdout/argv for command line tools.

Unit test frameworks and in particular mocking frameworks can lead you into this trap. I've never used a mocking library -- they are the worst.

There are pretty straightforward solutions to this problem. If I want to be fancy then I will say I write "bespoke test frameworks", but all this means is: write some simple Python or shell scripts to test your code from a coarse-grained level. Your tests can often be in a different langauge than your code.

The last two posts on my blog are about this:

"How I Use Tests": http://www.oilshell.org/blog/2017/06/22.html

"How I Plan to Use Tests: Transforming OSH": http://www.oilshell.org/blog/2017/06/24.html -- I want to change the LANGUAGE my code is written in, but preserve the tests, and use them as a guide.

And definitely these kinds of tests work better for data manipulation rather than heavily stateful code. But the point is that testing teaches you good design, and good design is to separate your data manipulation from your program state as much as possible. State is hard, and logic is easy (if you have isolated it and tested it.)

Summary: I use TDD, it absolutely works. But I use more coarse-grained tests against STABLE INTERFACES I didn't create.
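A bare-bones example of what I mean by a bespoke, coarse-grained test: drive a stable argv/stdin/stdout interface from a simple Python script. Here the system `sort` stands in for the program under test:

```python
# Coarse-grained test against a stable interface (argv/stdin/stdout).
# The test pins down observable behavior, not internal interfaces, so the
# implementation (and even its language) can change freely underneath.
import subprocess

def run(cmd, stdin_text):
    """Run a command, feed it stdin, return its stdout."""
    result = subprocess.run(
        cmd, input=stdin_text, capture_output=True, text=True, check=True)
    return result.stdout

out = run(["sort"], "banana\napple\ncherry\n")
assert out == "apple\nbanana\ncherry\n"
print("coarse-grained test passed")
```

The same harness works unchanged whether the tool behind it is shell, Python, or C.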

sevensor 3 days ago 0 replies      
TDD has worked well for me exactly once: porting a library from to Python to C. I had a very clear idea of what every function was supposed to do and I could write tests first. Due to the nature of the library I was able to write tests that generated lots of random inputs and check the properties of the outputs. This was a great experience --- it was very easy to change the internals without fear of breaking something. Ordinarily writing C is a bit of white-knuckle experience, but this made it quite pleasant.
norswap 3 days ago 1 reply      
TDD never worked for me, I believe because of the nature of my work: research code, very explorative in nature. I do not know in advance how the interface will turn out, so it's hard to anticipate it in my tests (or it leads to a lot of waste).

Nowadays I mostly test with "redundant random generation testing": generate random but coherent input, run logic, then... Either I can reverse the logic, and I do that and verify that I get back the original input. Or I can't and then I simply write a second implementation (as simple as possible, usually extremely inefficient). This finds bugs that classical unit and integration testing would never find.
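A toy illustration of the idea (run-length encoding stands in for the real research logic; the names are illustrative):

```python
# "Redundant random generation testing": random coherent input, then
# either reverse the logic (round-trip) or compare against a second,
# deliberately dumb implementation.
import random

def encode(xs):
    """The 'real' logic under test: run-length encode a list."""
    out = []
    for x in xs:
        if out and out[-1][0] == x:
            out[-1] = (x, out[-1][1] + 1)
        else:
            out.append((x, 1))
    return out

def decode(pairs):
    """Inverse of encode, used for the round-trip check."""
    return [x for x, n in pairs for _ in range(n)]

def encode_naive(xs):
    """Second, deliberately simple implementation for cross-checking."""
    out = []
    i = 0
    while i < len(xs):
        j = i
        while j < len(xs) and xs[j] == xs[i]:
            j += 1
        out.append((xs[i], j - i))
        i = j
    return out

random.seed(0)
for _ in range(1000):
    xs = [random.choice("ab") for _ in range(random.randrange(20))]
    assert decode(encode(xs)) == xs          # the logic is reversible
    assert encode(xs) == encode_naive(xs)    # redundant implementation agrees
print("all random checks passed")
```

The random inputs explore corner cases (empty lists, long runs) that hand-written unit tests tend to miss.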

dkarl 3 days ago 1 reply      
When you look at the TDD evangelists, all of them share something: they are all very good, probably even great, at design and refactoring. They see issues in existing code and they know how to transform the code so it doesn't have those issues, and specifically, they know how to separate concerns and reduce coupling.

I think one of the selling points of TDD, and something I hoped for from TDD, was that causation went the other way, and writing tests would result in code being refactored to separate concerns and reduce coupling. Sadly, I've seen that it is possible to write code that is highly testable but is still a confused mess. What's more, TDD as promoted encourages people to confuse the two, resulting in testability being used as a reliable indicator of good design, which produces poor results because it's much easier to make code testable than it is to make it well-designed.

I've also seen people mangle well-factored but untestable code in the process of writing tests, which can be a tragedy when dealing with a legacy codebase that was written with insufficient testing but is otherwise well-designed. A legacy codebase should always be approached with respect until you learn to inhabit the mental space of the people who wrote it (always an incomplete process yet very important), but TDD encourages people to treat untested code as crap and start randomly hacking it up as if it were impossible to make it worse.

This unfortunate (and lazy) habit of treating testability as identical with good design is not a mistake that good TDD practitioners would make, but I think they did make a mistake in the understanding of their process. My guess is that when those people invested effort into refactoring their code for testability, they were improving the design at the same time, as a side effect of the time and attention invested combined with their natural tendency to recognize and value good design. They misunderstood that process and gave too much credit to the pursuit of testability as naturally leading to better design.

I do think the idea of TDD is not entirely bankrupt, because the value of writing tests is more than just the value of having the tests afterwards, but I think its value is overblown, and people who believe in the magical effect of TDD end up having blind confidence in the quality of their code.

gkop 3 days ago 2 replies      
There are at least several other advantages to TDD the article misses:

* Faster development feedback loop by minimizing manual interactions with the system

* The tests are an executable to-do list that guides development, helping you stay focused and reminding you what the next step is

* Provides a record of the experimentation taken to accomplish a goal, which is especially useful when multiple developers collaborate on work-in-progress
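
The "executable to-do list" point can be sketched concretely. Everything below (the `slugify` function and its tests) is a hypothetical illustration, not from the article: each test is written before the code that makes it pass, and the still-failing ones are the remaining to-do items.

```python
# Hypothetical example: a slugify() driven by its tests.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

def test_lowercases():
    assert slugify("Hello") == "hello"

def test_replaces_spaces():
    assert slugify("hello world") == "hello-world"

def test_strips_surrounding_whitespace():
    assert slugify("  hello ") == "hello"

# Still red, i.e. the next item on the to-do list:
# def test_drops_punctuation():
#     assert slugify("hello, world!") == "hello-world"

for test in (test_lowercases, test_replaces_spaces,
             test_strips_surrounding_whitespace):
    test()  # all green: this slice of the to-do list is done
```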

namuol 3 days ago 1 reply      
The main benefit of TDD:

It strongly encourages you to think of your code as several input/output problems.

When you apply this model of thinking at scale it tends to lead to a much simpler (read: less-complex) codebase.
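
That input/output framing can be made concrete with a small sketch (the pricing rule here is invented for illustration): the interesting logic is a pure function, so testing it is trivial and nothing else needs to exist.

```python
# Invented pricing rule, framed as a pure input/output problem:
# 1% off per loyalty year, capped at 10%. No state, no I/O.
def apply_discount(subtotal_cents, loyalty_years):
    rate = min(loyalty_years, 10) / 100
    return subtotal_cents - round(subtotal_cents * rate)

assert apply_discount(10_000, 0) == 10_000  # no discount
assert apply_discount(10_000, 5) == 9_500   # 5% off
assert apply_discount(10_000, 25) == 9_000  # capped at 10%
```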

zorked 3 days ago 1 reply      
And another generation of programmers learns that there is No Silver Bullet[1].

This will happen again and again.

[1] http://faculty.salisbury.edu/~xswang/Research/Papers/SERelat...

sametmax 2 days ago 1 reply      
Another reason for the failure of TDD is the fact that you need to be a very good programmer to be productive with it. Indeed, it requires you to be able to think through your general API and architecture ahead of time.

Junior and 9-to-5 programmers suck at this. They are much better at tinkering until it forms a whole, then shaping that into something that looks decent and works well enough.

And we live in a world where they represent a good part of the work force.

You can't expect everyone to be a passionate dev with 10 years of experience and skilled in code ergonomics and architecture design while being good at expressing him/herself. That's delusional. And harmful.

dyarosla 3 days ago 3 replies      
tl;dr: TDD is not, on its own, effective. If code is highly coupled, the tests become highly coupled and a nightmare. The author advocates that learning to properly refactor and write loosely coupled code should be a first priority, ahead of following TDD blindly.

IMO, nothing really profound here.

exabrial 3 days ago 0 replies      
TDD takes discipline and planning, and creates stability; I think in this day and age of "rewrite it using the latest framework" it just doesn't coincide with developers' insatiable thirst to use the latest bleeding-edge XYZ.
wyldfire 3 days ago 1 reply      
Sorry for the aside, but I find it humorous that the headline that should read "TDD--, Refactoring++" instead shows "TDD—, Refactoring++".

This is emblematic of that frustrating AutoFormat behavior that replaces double dashes with an em dash. Probably not a coincidence that this appears on MSDN -- perhaps it was drafted in Outlook or Office or some other tool with this same AutoFormat.

This feature is responsible for countless miscommunications between colleagues, à la "I copied and pasted your command just as you had it in the email"...

mledu 2 days ago 0 replies      
If anything I think this article makes a great case for TDD. If your developers aren't good at design and refactoring and that is showing up in your tests, that is an indication that your design needs to be refactored to be less coupled. TDD isn't a panacea, developers have to have some level of sense and see the signs of a less than optimal design. Pain in test creation is a great way of showing that as it simulates client code.

I also don't understand people thinking that you have to write the entire test suite up front. You build your test along with your code. You start simply and build up, this way if you don't have concrete specs your tests are helping you with the design by thinking about consumption as well as implementation.

thewoolleyman 3 days ago 0 replies      
Having done TDD mostly full-time for well over a decade, I have to agree, and it hits home with some past experiences.

You can end up with fully tested, TDD'd code, that is not well-designed - i.e. unnecessarily coupled, and not cohesive. Cohesion and coupling are the basis of most everything in good software design - e.g. all the letters in SOLID boil down to those two things.

The premise of TDD is that it's supposed to make that too painful to do to a damaging extent. But, if you keep ignoring the pain, perhaps in the name of a "spike" solution, or because you just don't have the experience or background to know what good design is, you will end up with a tested mess of spaghetti.

And that's even harder to untangle and refactor than untested code, because you have to figure out which tests are useful, useless, or missing. That just slows you down as you work towards a better design.

In these situations, scrapping the whole module, including tests, and starting over is sometimes faster in the long run than trying to refactor incrementally with the safety net of existing tests (another of the main values of TDD).

-- Chad

(edit: typo)

fanpuns 3 days ago 3 replies      
I appreciate that many of you (including the article author) are coming at this question with a lot of experience. I, however, knew very little about coding a year ago and learned with TDD as part of how I build almost every project. Although I think it's always the case that I might "be doing it wrong", it's hard for me to imagine now writing code without first writing tests. Part of this is, admittedly, that I'm still uncertain if my solutions or code will even work and writing tests helps me to both organize what I want to do and also verify that I haven't made silly syntax mistakes.

Was it harder to learn this way? Absolutely (at least I think so, but my sample size is 1). I can't tell you the state of despondency I was sometimes in learning test suites and trying to figure out how to test certain things all while knowing that I could just write the stupid code and inspect the output to see if it was right.

Also, I love to refactor. How do you refactor if you don't have tests to catch you when you break something?

dccoolgai 3 days ago 0 replies      
Contract-driven development is the best model for building stable and reusable systems at scale. The flaw in TDD is that it tries to make the tests be the contract instead of supporting the contract.
flashdance 3 days ago 0 replies      
I develop software for radio towers. This was a very confusing headline and article. I only figured out halfway through that I was thinking of the wrong TDD.

Test driven development is one thing. Time division duplexing is very different. I'll have you know that the latter did in fact live up to its expectations!

Showerthought: I wonder if our TDD codebase is TDD?

romanovcode 2 days ago 1 reply      
Oh, finally the TDD fad is dying. Never got into it and always thought it is a complete waste of time.

I advocate to write tests only for critical algorithmic calculations and nothing else.

Integration tests matter 100x more (at least in webdev).

pwm 2 days ago 0 replies      
I think testing itself is the second-best thing, short of proving correctness. It won't guarantee that your code is correct, but ideally it greatly helps reduce bugs. TDD, as far as I understand, seems to promise more than just the benefits of testing. It promises an emergent design that produces a solution to your problem. I think this works well for some problem domains, like a lot of web/CRUD/LOB apps, and not so well for others, e.g. the Sudoku solver mentioned in this thread. On the other hand, a lot of real-world problems can be successfully solved by solutions that are adequate but not optimal, i.e. good-enough solutions, and TDD seems to be a viable strategy for these. I personally yearn to work on problems where TDD-based emergent design is not enough and human ingenuity/intelligence/creativity is needed. Sometimes I bump into these, but at the same time I realise that most of my day-to-day job involves problems that are solvable by TDD alone. That said, while all my production code has extensive tests, probably less than 50% has a test-driven design, and I'm content with that.
js8 2 days ago 0 replies      
I think people should in fact test data, not code. Looking at it from a purely functional point of view, functions should either be proved to do what they should, or be asserted and QuickCheck-ed. But what really needs to be tested is that the input parameters (i.e. data) conform to the "hidden" assumptions we had when we wrote the functions. Because what changes as we modify the program, and often the very reason we modify it, is that these assumptions have changed.
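
A rough sketch of that idea, using the stdlib `random` module as a stand-in for QuickCheck (the `merge_sorted` function and its asserted input assumptions are hypothetical): the asserts make the "hidden" data assumptions explicit, and a randomized loop checks the property over many arbitrary valid inputs.

```python
import random

def merge_sorted(a, b):
    # The "hidden" assumption, made explicit and testable: both
    # inputs must already be sorted ascending.
    assert all(a[i] <= a[i + 1] for i in range(len(a) - 1)), "a not sorted"
    assert all(b[i] <= b[i + 1] for i in range(len(b) - 1)), "b not sorted"
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

# QuickCheck-style property check: for arbitrary valid inputs,
# the result equals the fully sorted concatenation.
rng = random.Random(0)
for _ in range(200):
    a = sorted(rng.randint(0, 99) for _ in range(rng.randint(0, 20)))
    b = sorted(rng.randint(0, 99) for _ in range(rng.randint(0, 20)))
    assert merge_sorted(a, b) == sorted(a + b)
```
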
alkonaut 2 days ago 0 replies      
His argument is basically that with tight coupling, TDD is too hard and time consuming to pay off.

But part of the point of TDD is ensure that all code is testable, and testable means loosely coupled.

So you can't start TDD'ing on a bad and tightly coupled legacy codebase. You can do it on a greenfield project however. Greenfield is very much the "lab environment" he talks about. You control everything.

With greenfield projects comes another reality though: you often have to explore and sketch a lot. TDD does not work well for writing a dozen sketch solutions to something and throwing out eleven.

And that to me is the main drawback of TDD: it works poorly for very young code bases and it works poorly for very old ones (that weren't loosely coupled to begin with). It's a very narrow window where you can start using TDD in a codebase and that's when the architecture is first set, but the codebase hasn't yet grown too coupled. Such a narrow window means it's not very popular, for good reason.

briandear 3 days ago 1 reply      
I disagree with the assertion that TDD takes more time. TDD takes less time if you factor in the reduction of errors TDD helps prevent.

This article should be renamed "TDD doesn't work if you don't do it right."

This article seems to argue for Big Design Up Front. OK, if you do that, then why not write the tests for those designs? After you make the design, the code you write conforms to the design.

I don't think anyone advocates that writing tests is the same as the design process. Tests are the result of design, not the other way around. The gray area is not whether to design up front but how much design up front.

borplk 3 days ago 1 reply      
Unfortunately TDD has become a band-aid for lack of constructs that should be a part of the language in the first place.

A language that allows you to express the spec could be so much more useful.

cyberpanther 2 days ago 0 replies      
Sometimes lowering your expectations is a good thing for everyone. Now we know the pros and cons and can use it appropriately. No one particular habit is going to solve all your problems.


alexeiz 2 days ago 0 replies      
This looks more like "Misunderstood TDD did not live up to expectations." It's obvious from the article that tests were written after the code. That explains the excessive coupling in the code, and the tests being hard to write and constantly getting in the way of code development. This is not "test driven development". This is bolting tests on top of already (poorly) designed and implemented code. Tests are an afterthought. They did not drive the design. No wonder it doesn't work well. The high coupling in the code has to be repeated in the tests, and it's a rather painful and fruitless exercise. Had the tests been written first, it would have been clear which design led to lower coupling: the one it's easier to write tests for.
maruhan2 3 days ago 0 replies      
"The tests get in the way. Because my design does not have low coupling, I end up with tests that also do not have low coupling. This means that if I change the behavior of how class <x> works, I often have to fix tests for other classes.Because I dont have low coupling, I need to use mocks or other tests doubles often. Tests are good to the extent that the tests use the code in precisely the same way the real system uses the code. As soon as I introduce mocks, I now have a test that only works as long as that mock faithfully matches the behavior of the real system. If I have lots of mocks and since I dont have low coupling, I need lots of mocks then Im going to have cases where the behavior does not match. This will either show up as a broken test, or a missed regression."

Simply comment them out temporarily?

"Design on the fly is a learned skill. If you dont have the refactoring skills to drive it, it is possible that the design you reach through TDD is going to be worse than if you spent 15 minutes doing up-front design."

I don't quite understand how TDD means you skip up-front design.

richardknop 2 days ago 0 replies      
Would it still be TDD when talking about functional tests, i.e. no mocking? Or does the strict TDD definition only include unit tests?

Because the general principle of writing a test case and then writing/editing code still applies with functional / integration tests.

And I have always preferred using functional tests to test bigger components/packages through their public interface rather than writing a unit test for every small function inside the package.

Unit tests seem to be much more useful in situations where you know exactly what your inputs and outputs should be, for example if you are writing a function to transform data from one type/object to another. This is where unit testing shines.

But a lot of development usually involves integrating/gluing together several higher-level components and passing data between them, and I much prefer functional tests there.
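
For instance, a transform with exactly known inputs and outputs is the case where unit tests shine (all names below, the row shape and `to_public_user`, are illustrative, not from the comment):

```python
# A data-shape transform: known input, known output, no I/O.
def to_public_user(row):
    """Map a hypothetical internal DB row (dict) to a public API shape."""
    return {
        "id": row["user_id"],
        "name": f'{row["first_name"]} {row["last_name"]}'.strip(),
        "active": row.get("deleted_at") is None,
    }

assert to_public_user(
    {"user_id": 7, "first_name": "Ada", "last_name": "Lovelace",
     "deleted_at": None}
) == {"id": 7, "name": "Ada Lovelace", "active": True}
```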

pif 3 days ago 1 reply      
A.k.a: no methodology will turn a coding monkey into a great developer. What a surprise :-)
peterburkimsher 3 days ago 0 replies      
TDD supports the paradigm of Software Engineering as an Engineering field. Design, plan, build, test.

Chartered Engineers have qualifications for their skills to do this - whether it's building bridges, designing circuits, or making cars.

Most programming is not Engineering. It's scripting. Hacking together a quick solution to meet the user's immediate needs.

Huge businesses (including the company I work for) have some really weak points in their production flow. They're planning factory operations using some shoddy macros in Microsoft Excel thrown together by some businessperson with no programming experience. Management won't change it, because "it works".

Other fields of Engineering (civil, electronic, mechanical) have serious life-threatening consequences if they fail. Software rarely has that risk. (Insert comment about healthcare systems and WannaCry here).

For times when software carries serious risk, then TDD is still important! The rest of the time, it's a burden.

he0001 2 days ago 0 replies      
For me, TDD is mainly three things.

Firstly, it's about testing myself: that I've understood the task properly before writing any code at all. In this step I can also tell if my code is doing too much, and therefore whether my method or function conforms to the single responsibility rule.

Secondly, it's about maintainability and logical reasoning. In a codebase where I don't know, or have forgotten, what the code is supposed to do, I can always rely on the tests to skip the parts that aren't interesting, letting me move faster.

Thirdly, it's about the ability to refactor, and therefore evolving design. Even though this is a step in TDD, you will always need to refactor since requirements change over time. The solution you had is no longer optimal, so you must refactor anyway. Evolving design is a strength where you continuously strengthen your code while getting work done faster and faster, since you can offload the reasoning onto the tests, which cover your back.

AFAIK TDD is actually the only way to produce code in a systematic way. When reading TDD-driven code I can make certain assumptions which I cannot make with randomly produced code (there's no system to the code). TDD code is developed with tests in mind from the start and is always a lot easier to test when you need to, and you always need to. (I would argue that there is code which is not testable as is, unless you refactor, and then you don't know whether the code does the same thing as it did before.)

If your tests get coupled to the code, I'd say it's because either your method/function is doing too much or it's a language problem, not giving you the necessary tools to ignore implementation details, which mocks usually are an indication of.

Since TDD is a systematic way of producing code (at least more systematic than not doing it), code which isn't produced with TDD won't play well with TDD-produced code, since it won't follow the same conventions, designs and possibilities.

TDD doesn't automatically make the code more bug-free, but I don't believe that TDD causes more bugs just because you use TDD.

If programmers cannot learn or deal with TDD, you have a different problem on your hands.

lucidguppy 3 days ago 0 replies      
I'm not really convinced by this article or by the comments.

You can do exploratory code and TDD at the same time - you just have to write down what you expect the code to do first.

These criticisms of TDD are very weak because they don't spell out an alternative - every critic's vision of proper testing is different - and will respond "that's not what I meant".

thegigaraptor 3 days ago 0 replies      
I hope nobody uses this article as ammo against TDD. The benefits are not felt immediately; they come when it's time for maintenance/updates. I'm working on my second port with a company. The first app had fantastic testing and I was confident in the work I delivered. This second app, however, was led by a developer who "needed to get things done", and I now have to wrap the v1 app in functional tests to validate that I'm delivering a solid port. If the company had enforced better practices sooner, they would have saved the time I'm spending on retesting the original app. This second iteration is test-driven; hopefully the next dev has a better experience.

Also testing helps alleviate QA's workload by ensuring developers have not broken any tests and regressed functionality before we hand off to QA.

If you're hacking on an idea or learning, I can understand not testing, but if someone is paying to deliver code, deliver it with tests, period.

hdi 2 days ago 0 replies      
I like the general assumption that TDD failed.

Failed at what exactly? Who would think 1 methodology would give them all they need to be a great software engineer?

If TDD supposedly failed, can we hear the process that succeeded at what TDD failed? Please extend an olive branch and enlighten the rest of us.

Because I tell ya, I can't even count the number of "senior software engineers" I've encountered who deploy untested production code daily to systems that help you buy your coffee in the morning and manage your money and pensions. Oh yeah, and they all seem to think TDD is bollocks too.

When that percentage decreases and engineers like that become a rare occurrence, then we can talk again. Peace.

kevwil 3 days ago 1 reply      
This logic is flawed, and I'm not surprised it's coming from Microsoft. If the expectation is that (repeatedly?) making blind guesses quickly and (optionally) cleaning up the mess later is better than expressing an understanding of the problem domain before writing 'real' code, then yes TDD will not live up to those expectations.
makecheck 2 days ago 0 replies      
It's important to not assume that tests and code will be written by the same person.

When tests are being created early, it's actually a good excuse to have at least a couple of minds looking at the same problem, instead of bottling it up into one person who ends up quitting next month. It's an excuse to not just discuss the approach to use but have some code, where each person may realize that they hadn't really thought about the whole problem or maybe didn't understand it at all.

Other criticisms in this thread are still fair. It is certainly possible to waste a lot of time on tests, for instance, and to build something that is too restrictive. Ultimately though, if you're more than a one-person project, some form of "sketch it out first" is a good thing.

S_A_P 3 days ago 2 replies      
One thing I've noticed: TDD takes longer. It just does. You can argue that you are racking up less technical debt in the long term, but every consulting gig I've been on where TDD was the "directive" deteriorated, because the business does not want to factor in the 40 to 100% extra time that proper TDD coding requires. They want the same somewhat arbitrary, bonus-driven deadlines that they always do, and in order to meet them we usually end up tossing out TDD halfway through and reverting to just having skilled developers get the job done as quickly as possible.

THIS is the economic reality of TDD failing. A manager wanting to reduce quarterly spend so he gets his bonus doesn't care that TDD will cost him less over 5 years; he cares that he can get a project delivered on time and under budget...

AngeloAnolin 2 days ago 0 replies      

"developers are quick to pick up the workflow and can create working code and tests during classes/exercises/katas and then failing in the real world."

The anathema of TDD is that people equate having well-defined test cases with a solid product that delivers the solution the user wants. I have seen far too many software projects boasting >85% test coverage but still failing spectacularly.

TDD failed because it was assumed to be the magic wand that aligns the end product with what's on spec and the process it was meant to cover, but the notion that human behavior can simply be coded into test cases is far from reality.

colomon 3 days ago 4 replies      
It always startles me when people assume that one programming technique either works for every type of programming or doesn't work for every type.

Working on Perl 6 compilers, an extensive set of tests was our best friend. It was (probably still is, I haven't had time to help the last few years) utterly routine to write tests first and then write the code to make them work. It was a perfect way of working on it.

On the other hand, one of my personal projects in Perl 6 is abc2ly. It has lots of low level unit tests, great. But almost everything really interesting the program does is really hard to test programmatically. How would I write tests to make sure the sheet music PDF generated has all the correct notes and looks nice? That problem is significantly harder than generating the sheet music in the first place!

crucialfelix 3 days ago 0 replies      
Some code is perfectly suited to TDD. Other code not so much.

I was just working on something with a bunch of tangly functions that measure and remap data and prepare it for sonification as sound events.

I keep jest running and the workflow is much quicker and more satisfying than any other way of hacking.

paupino_masano 3 days ago 0 replies      
I think it all depends on the context of what you're writing. For example, we use TDD when making additions and modifications to a tax engine. For this use case it's incredibly useful as the relative inputs and outputs are both predictable as well as repeatable.
velox_io 3 days ago 0 replies      
TDD is a nice idea; however, it can add quite a bit of overhead, upfront overhead, before writing any code. This isn't a problem if it is justified/needed. Plus, the tests can become more cumbersome when code changes: more baggage to carry.

Testing often becomes a KPI, and is therefore commonly gamed: doing the bare minimum to tick '100% code coverage!'. I'm a fan of contracts to test the spec and the boundaries (whether human or other applications) of software.

TDD requires discipline and experience; you could spend an infinite amount of time writing tests and never deliver, or at the other extreme become incapacitated fighting bugs.

Our first priority should be crash-free software, THEN start to think about making it bug free.

_Codemonkeyism 3 days ago 0 replies      
I love unit tests, they give me a good vibe and find some kinds of bugs - mostly b/c I think differently about a problem.

TDD never worked for me, b/c the code goes through many refactorings until I'm happy and it always felt tedious refactoring the TDD tests.

corpMaverick 3 days ago 0 replies      
Title should be: "TDD did not live up to MY expectations".

TDD is sort of an art more than a science. You have to know when and how to do it. You also have to know how much as the marginal utility diminishes very fast.

kmicklas 3 days ago 0 replies      
Most developers are bad at refactoring because most of the tools for it are terrible. Even at Google, something as trivial as renaming a function can be a monumental task.
didibus 3 days ago 0 replies      
I remember an old Microsoft analysis where they measured empirically time to delivery and actual defects and found TDD to not reduce defects while increasing time to delivery. Can't find it anymore.
gedrap 3 days ago 0 replies      
Personally, I found that TDD works very well for small modules / classes, etc, when there's little design to be done. In this case, you can focus on writing down the spec (test cases) and be fairly confident that it works by the time all test cases pass. Also, I agree with the author about complexity involved if one decides to go down the TDD way for large, complex systems. So, essentially, it boils down to picking the right tool for the job, and TDD is just a tool, like any others.
PaulKeeble 3 days ago 1 reply      
Microsoft's own study of TDD showed that it definitely improved defect rates, and they went fully into developing with it for Vista, which is part of the reason for the delays. Nowadays large chunks of the API are automatically tested, and this has allowed Microsoft to release changes much more often with a lot less manual testing.

So while this individual is finding that his local team isn't getting the full benefits, Microsoft as a whole appears to be, judging by its own report on the technique and its changed outward software release cycle.

weberc2 3 days ago 2 replies      
The main benefit for TDD in my mind is that it mostly makes it painful to write spaghetti code. When I'm reviewing something that looks too integrated, I just ask for a unit test for that particular piece of functionality, and the author is effectively forced to go back and refactor. After going through this a few times, they learn to think about their design before they write code. Of course, many dynamic languages defeat this by offering hacks like Python's mock.patch which let you nominally test spaghetti code...
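
A minimal illustration of that last point: patching hacks let tightly coupled code pass tests without ever forcing a refactor. This sketch uses `mock.patch.object` rather than `mock.patch`, and the `Report` class is invented for illustration.

```python
from unittest import mock

class Report:
    def fetch_total(self):
        # pretend this talks to a database or the network
        raise RuntimeError("no network in tests")

    def render(self):
        # hard-wired internal call: the dependency is not injected
        return f"total: {self.fetch_total()}"

# Patching papers over the coupling: the test goes green without
# ever pushing the author to separate I/O from formatting.
with mock.patch.object(Report, "fetch_total", return_value=42):
    assert Report().render() == "total: 42"
```
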
kgilpin 3 days ago 0 replies      
Also, when a test relies on mocks it doesn't test the real thing, and doesn't guarantee proper behavior in the real world. I suppose this is obvious from the nature of mocks. And yet, if you can figure out a clean and fast way to test something without mocks, I think you're better off.

Along with the coupling problem mentioned in the article, these are the two reasons why I am writing a lot fewer mocked tests (e.g. Rspec) than before.

garganzol 3 days ago 0 replies      
TDD works exceptionally well. The secret sauce is to find the correct scope for tests. In my experience, integration tests are the most suited kind of tests for successful TDD.
StevePerkins 3 days ago 2 replies      
Any programming language suitable for business application development is going to have static analysis tools that can reveal your percentage of test coverage.

As long as 95% (or whatever) of your logical branches are covered by tests, I don't really care whether you wrote the tests beforehand or after the fact.

However, TDD being hard is not a justification for not writing the test coverage at some point in the dev cycle. Too many developers, and managers, make that fallacious leap.

perlgeek 3 days ago 0 replies      
When reading the title, I had hoped for data, like when Microsoft analyzed their own developers' data to find out whether remote work impacted productivity or bug counts.

Instead, it's just another piece of anecdote. Sure, anecdotes from 15 years, but still not what I hoped for.

Doesn't Microsoft have hundreds of dev teams, and can compare things like development speed and bug counts, and correlate with whether those teams practice TDD? I'd read that article immediately!

krmboya 3 days ago 1 reply      
What about a kind of middle ground, doing a 'spike' when figuring out stuff, what kind of thing you should build, what the design should look like, etc then follow up with TDD to stabilize the identified interfaces and produce tests that act as a system health check?

Ok, for consultants, perhaps they end up doing the same kind of things for different clients to the extent that they can just jump in doing things TDD from the very beginning.

Debugreality 3 days ago 0 replies      
I've seen TDD used once really well in a university setting where it was used only for shared libraries/services that could be used by multiple other teams or departments but not on individual (front facing) projects.

Probably because only the best developers on the team worked on the shared services it eliminated the refactoring issue as well as ensuring shared services could be a lot more reliably and safely updated.

MichaelMoser123 1 day ago 0 replies      
If requirements are not clearly understood, then the tests will not be complete. Now, does that invalidate the need to write tests? I don't think so.

Now this problem is amplified when writing tests on top of mocks; if you don't understand the requirement of the next level (mocked level) then your tests will be very incomplete.

Still, having unit tests that are run with each build is much better than having no tests at all.

deweller 2 days ago 1 reply      
I grant the premise that TDD has drawbacks. But are they really worse than the drawbacks of not writing tests?

Code with no test coverage will have more defects and will be more prone to regressions.

For many projects TDD is the best we've got until something comes along that replaces it.

blackoil 3 days ago 0 replies      
TDD should not be taken as religious dogma. The way I like it: central business logic as pure functions, which have tonnes of unit tests, while integration with other services and components sits at the edges, covered by integration tests rather than unit tests. If I have a key piece of code I want to test but it would require lots of mocks, it is time to refactor.
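
That split, pure business logic in the core with integration pushed to the edges, might look like this (the pricing functions are hypothetical):

```python
# Pure core: no clock, no DB, no network; heavily unit-testable.
def price_order(items, tax_rate):
    subtotal = sum(qty * unit_price for qty, unit_price in items)
    return round(subtotal * (1 + tax_rate), 2)

# Thin edge: wires I/O callables around the pure core; this is the
# layer covered by integration tests rather than unit tests.
def handle_order(order_id, load_items, save_total, tax_rate=0.08):
    total = price_order(load_items(order_id), tax_rate)
    save_total(order_id, total)
    return total

# Unit tests hit the core directly; no mocks needed.
assert price_order([(2, 9.99), (1, 5.00)], 0.0) == 24.98
assert price_order([(1, 100.0)], 0.08) == 108.0
```
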
luord 3 days ago 0 replies      
This article made me feel good. Ever since I started doing TDD, I refactor a lot more and my code looks nicer.

Hopefully, I'm not falling into the other trap he mentions and getting into design that would be worse than 15 minutes up-front.

Sadly, I can't comment on anything else as TDD isn't practiced at all in my area.

99_00 3 days ago 0 replies      
>The tests get in the way. Because my design does not have low coupling, I end up with tests that also do not have low coupling. This means that if I change the behavior of how class <x> works, I often have to fix tests for other classes.

At this point you should be realizing your code is untestable and needs to be refactored.

faragon 3 days ago 0 replies      
Tests are good for detecting code that is not working as expected; think of them as an investment/insurance bought on a budget. However, in my opinion, TDD is often more of an obsessive-compulsive religion built on the wishful thinking that programmers will reach excellence by writing tests ad nauseam.
pcarolan 3 days ago 1 reply      
Defining the interface before you write the code is the major advantage of test-driven development, and what it added to that way of thinking was very valuable, especially to novices. It also makes your code more modular and reusable. Writing code as if other people were going to use it is something we don't talk about enough.
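
One way to pin down the interface before any implementation exists is with `typing.Protocol`; the `RateLimiter` interface and `AllowN` implementation below are invented for illustration.

```python
from typing import Protocol

class RateLimiter(Protocol):
    """The interface, fixed before any implementation exists."""
    def allow(self, key: str) -> bool: ...

def check_contract(limiter: RateLimiter) -> None:
    # Written against the interface only, so any implementation works.
    assert limiter.allow("alice") is True

class AllowN:
    """Simplest concrete implementation satisfying the interface."""
    def __init__(self, n: int):
        self.n, self.seen = n, {}

    def allow(self, key: str) -> bool:
        self.seen[key] = self.seen.get(key, 0) + 1
        return self.seen[key] <= self.n

check_contract(AllowN(1))
limiter = AllowN(2)
assert limiter.allow("alice") and limiter.allow("alice")
assert not limiter.allow("alice")  # third call is over the limit
```
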
alexandercrohde 3 days ago 0 replies      
Tl;dr: When we treat unit tests as an end in themselves, we end up writing clunky tests for clunky code. If an engineer doesn't understand modular, reusable code, then that engineer won't be able to write code that can be tested easily. Thus understanding design is a prerequisite to effective TDD.
henrik_w 3 days ago 0 replies      
One of the key benefits for me is the mindset (fostered by TDD) to make as much as possible of the code (unit) testable. This naturally leads to less coupled code, because otherwise it is not possible to test it in isolation. So the fact that you start with the aim of unit-testability leads to better designs.
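
A minimal sketch of that effect (hypothetical names, not from the comment): making a unit testable in isolation forces the collaborator to be injected rather than hard-wired, which is exactly the reduced coupling described above.

```python
class PriceService:
    """A tightly coupled version would call an HTTP API directly inside
    apply_discount, making it untestable without a live server."""

    def __init__(self, fetch_price):
        # Injecting the collaborator decouples the logic from the network.
        self.fetch_price = fetch_price

    def apply_discount(self, sku, pct):
        price = self.fetch_price(sku)
        return round(price * (1 - pct / 100), 2)

# In a unit test, the dependency is a trivial stub -- no network needed.
service = PriceService(fetch_price=lambda sku: 200.0)
assert service.apply_discount("ABC", 25) == 150.0
```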
remotehack 3 days ago 0 replies      
Software obeys its own dynamics. Some things work well, some not quite the same. It's nice to see someone admitting that testing is good while rigid TDD, like rigid Agile or rigid Waterfall, is bad. Corollary: what our parents said still holds true; too much of a good thing is bad.
partycoder 2 days ago 0 replies      
TDD forces you to design with testing in mind, also known as construction for verification.

If testing gets in the way, it's because the design doesn't emphasize testing.

Then, if you have high coupling, you have high complexity and a stronger reason to test.

tomelders 2 days ago 0 replies      
Off topic: but who's in charge of typography at Microsoft? The typesetting of their blogs and documentation is horrific. And what little I see of Windows looks just as bad.
buckbova 3 days ago 5 replies      
Seems a little arrogant to say most developers don't know how to refactor or do it poorly. Maybe it's true. I really can't say one way or the other because what I see is most devs believe they don't have time to refactor.
draw_down 3 days ago 0 replies      
I don't really have solid arguments against it, I just never found it particularly helpful, or ended up with a result that seemed to justify always working this way, or even defaulting to working this way.
hughperkins 3 days ago 0 replies      
Since when did TDD fail? Which is not to say it needs to be applied systematically to everything. But there are often bits of code which are better off being correct, and TDD works well for those.
baybal2 3 days ago 1 reply      
As I remember, Microsoft was one of the original TDD pushers. One person who worked there for over 20 years told me that "peak TDD" at Microsoft was right around the time of the Windows ME release.
jv22222 3 days ago 0 replies      
I'm not sure if the OP is advocating against using tests and CI completely, or just against the process of writing tests first and then code... Anyone got any thoughts on that?
AdmiralAsshat 3 days ago 1 reply      
Stylistic critique of the article: is it too much to ask that you spell out your acronym at least once in the entire article? The acronym "TDD" appears 16 times throughout the article, and not once do we get "test-driven development" spelled out.

I get that it's a technical blog, but "TDD" isn't exactly a household name. You can't utter it in the same breath as SSL or RSA and expect people to know what it means without context.

As a test (no pun intended), try reading the article with the premise that you have NO idea what "TDD" stands for. Can you reasonably infer it from the rest of the article?

throw7 2 days ago 0 replies      
too bad about the RE on the REOI. But TDD really failed because it didn't identify SLG parameters. I don't know if I'd call it a failure though, SLG parameters are usually hard to know before the start of a project and, even, throughout.
cphoover 3 days ago 1 reply      
TDD failed? What? No it didn't. When was the last time you depended on an untested library for a major project?
dc2 3 days ago 0 replies      
Writing TDD tests made me a better developer because it pointed out just how coupled my applications were.
zubairq 2 days ago 0 replies      
I wish I could upvote this article X10000000... I totally agree!
jasonkostempski 3 days ago 0 replies      
Could have figured that out much sooner if someone had written tests for the expectations.
emperorcezar 3 days ago 0 replies      
Gonna throw the baby out with the bathwater because bad programmers write bad tests.
haskellandchill 3 days ago 0 replies      
TDD works for me and the rest of Pivotal Labs, maybe 1000 or so people. YMMV
matchagaucho 3 days ago 0 replies      
Salesforce.com Developers are the highest paid in the IT industry and TDD is hardcoded into their development process. (All code is required to have 75% unit test coverage).

Correlation is not causation... but maybe it is?

24gttghh 3 days ago 0 replies      
Test Driven Development? All these acronyms and not once is it spelled out in the article or the discussion! Is it like saying HTTP or DNS to most people? I'd honestly never heard of it...but the concept seems logical from a high level.
mdpopescu 3 days ago 1 reply      
TLDR: TDD done badly doesn't work well.
dreamdu5t 3 days ago 1 reply      
TDD: For people who've never used a decent type system.

Writing a Solidity test to add two integers today really drove home the point.

soared 3 days ago 0 replies      
Yeah I could google it, but its common practice to spell out an acronym the first time you use it.
geebee 2 days ago 0 replies      
With over 405 comments, I am late to the party here, especially since this is just my 2 cents.

I think TDD "failed" largely for creative reasons. And it didn't actually fail.

The reason I was willing to use the word failed, in quotations, is that I do think that TDD is dead in the sense that it's a stick that can be used to beat people into submission. There was a long series of debates on youtube with DHH and a few TDD advocates, titled "Is TDD Dead", and it's funny that I think DHH largely won the debate considering that I believe the answer is, clearly, "no, TDD is not dead". TDD remains relevant and useful.

And yet, I think the TDD proponents suffered a severe setback in that debate, severe enough that I'd consider it a pretty bruising defeat.

Why? Because the debate showed that debate is reasonable. That the position that TDD is dead can actually be defended. Here's the thing - a lot of TDD proponents denied the existence of a legitimate debate. There was right, and wrong. Blog posts saying that people who even question the value of TDD should be unemployable, that TDD might rightly one day have the force of law behind it, that questioning TDD is the modern day equivalent of medieval doctors denying the importance of sanitary conditions and washing hands. That sort of thing. I think that by the end of the debate, there were too many cracks in the TDD argument to deny that not doing TDD may, in fact, be a good way to write software. TDD may not be dead (or even close), but that sort of browbeating certainly was put to rest.

Some of it was what DHH called test driven design damage. But the biggest one was creative. TDD may simply not work for the creative flow required for many types of software development. It's like, to contrive an analogy of my own, requiring a writer to outline the next page before writing the current one. It's just too disruptive. You can justify it a hundred ways from Thursday, but if people doing it can't write software as well as people who don't, TDD will lose.

None of this is to understate the importance of test coverage. But write some code, write some tests, repeat - yeah, I think that works. Trying to force everyone to do TDD through a campaign of shaming and intimidation was a horrendous fail. That was the outcome of the YouTube debates - TDD advocates actually did defend the practice quite well, but they fell far short of a standard that would mean DHH shouldn't be employable because he questioned TDD.

Perhaps not all TDD advocates were that extreme, but it was a strong enough faction in the TDD movement that I don't think I'm finding extremists and using them as a straw man. That sort of browbeating really was part of the TDD culture, and I think that even the good parts of TDD, the parts we should keep and even evangelize as developers, are harder to defend because of these early tactics.

mncharity 3 days ago 0 replies      
Boston used to have a software craftsmanship meetup. One month, on the train going home, a few of us discussed "how to describe TDD". Someone had a teaching gig coming up. That night, I attempted to distill the views expressed by this couple of experienced TDD folks. Here it is, FWIW.

# What is TDD?

TDD is JIT-development, built on tests.

It's not developing things before you need them. That's too easy to get wrong. It's not built on reviews and approvals. They're too slow and fragile.

## TDD's JIT development with tests is:

1. Live in the present.

Focus effort on what is clearly useful progress now, not speculation.

Don't do planning or development before you need to. Because later, you will better know what is actually needed, if anything. Be restrained but thoughtful in judging how much of what needs to be done now.

Don't put off integration. Until then, usefulness and spec are only speculative.

2. JIT-spec

Capture each behavior you care about as a test.

Keep them simple, small, and clear. A new spec is a failing test. A passing test means "done with that -- next!".

Don't stuff your mouth. Don't do lumps. Keep it bite-sized.

Don't spec it until you need it, even if you speculate that you know where you are going later.

Don't worry about the spec having to change later. They usually do. If the speced behavior is clearly useful now to make progress, that's good enough. If it's something you don't really care about long-term, you can remove it later.

3. JIT-implementation

Keep implementation minimal.

Don't create speculative code. Do reactive implementation and refactoring. If you "might need it later", write it later, when you have clearer need and spec, and more tests available.
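
A concrete micro-cycle for the JIT-spec/JIT-implementation loop above (a hypothetical example, not from the meetup discussion):

```python
# Step 1: a new spec is a failing test -- written before the code exists.
def test_slug():
    assert slugify("Hello World") == "hello-world"

# Step 2: minimal implementation -- just enough to pass, nothing speculative.
def slugify(title):
    return title.lower().replace(" ", "-")

# Step 3: the passing test means "done with that -- next!"
test_slug()
```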

## About tests.

Programs have a few behaviors you care about, and many more that are implementation details. Test the behaviors you care about.

Tests redistribute development flexibility and speed.

* Behaviors pinned down by tests are harder to change, because you have to update the tests too. They're transparent but rigid, and change is slower.

* All other behaviors become much easier to change. Because everything you care about is tested, implementation changes can be done energetically, without careful cautiousness and fear of accidental hidden breakage. They're opaque but flexible, and change is faster.

Distribute your transparency and flexibility wisely.

Some topics I'm notably unclear on include test refactoring and management.

* when does one delete tests?

* how are lines of development pivoted?

* how are different classes of tests handled? (eg: external spec commitment; less critical spec I still care about; sentinel spec, which I don't mind changing, but I don't want it to happen accidentally/silently; spec that's transient development scaffolding, and should be removed later; and so on)

Opportunities include:

* broader coverage of the strategy, test, and implementation layer activities

* description of how test suites and implementations change longer-term

* specific discussion of cross-cutting issues like risk mitigation

* tighter characterization of core (eg, all tests and code are a burden, and start with a high time discount, so create and retain only those which are clearly and currently useful)

programminggeek 3 days ago 0 replies      
This is why TDD failed...

People don't like doing it.

So they don't.

The end.

joeblau 3 days ago 2 replies      
im_down_w_otp 3 days ago 0 replies      
PBT-centric TDD has worked very well for us.
NYPD is canceling its Palantir contract buzzfeed.com
479 points by tefo-mohapi  2 days ago   258 comments top 18
svendisnigh 2 days ago 21 replies      
Palantir has an outdated software stack (Java/Swing). Their genius lay in creating a "mythical image" and cleverly overselling it to less proficient government agencies (apparently the NSA was smart enough to see through the ruse), and then in turn selling that "spooky/mysterious" image to a bunch of naive Stanford/Berkeley undergrads and convincing them to work for peanuts.

Sadly, in 2017, their tech stack is falling apart, they have no meaningful ML/AI strategy, the agencies/corporations cannot be fooled any longer, and employees have suddenly realized that the stock (without an IPO in sight) might be worthless and have begun leaving.

Finally, the political environment, along with the association with Thiel, isn't helping them make any new friends.

Having known several people who work or were hired at Palantir, I can assure you it probably has the worst overconfident brogrammer culture. So much so that I truly hope no US government agency is using them for critical functions, because it's nothing more than hot air.

bane 2 days ago 1 reply      
Conjecture? Palantir is known to be wildly expensive. The available information on the internet, and from people I know who've encountered the technology directly, describes their "product" as a mix of a very outdated tech stack and expensive integration services with expensive on-site support. [1]

My guess is that the NYPD did an analysis of the life-cycle cost of the system over another contract period, and probably determined they could do something cheaper some other way. Since they've built another system based on a competitor's technology (with some special home rolled glue sauce) they're obviously getting some benefit from the basic approach, but given a choice of close enough tech and more badges or the status-quo they've decided to move forward.

1 - I've basically heard their software described as a big graph backend with a document store and an ancient Java client for mucking around with the graph. Most of their tech has been subsumed by very free and highly scalable technologies.

The most recent video shows just a normal run-of-the-mill web app that any decent team of 3 or 4 good web devs could crank out in a few weeks.


They actually don't have any video up of anything recent that really shows their core product. The video here shows a Java client, https://www.youtube.com/watch?v=5OYy_UtINo4, but it's from 2015.

Kind of a shame considering how much money and talent they've vacuumed out of the market. If I'm right, it's amazing for IBM to actually come in at a lower price than some competitor.

krona 1 day ago 1 reply      
Everyone is focusing on Palantir. As someone who knows how much IBM has thrown at this over the past 2 years, it's really IBM and their historically strong relationship with the NYPD that is the real (but far more mundane) story.

Ultimately, if you piss off the IBM execs enough (which is what Palantir is exceptionally good at), then IBM will mobilize a significant chunk of the company to obliterate anything in its wake. Even if that means making a huge loss.

They then try to use that win as a template for future success. Unfortunately, IBM is incapable of that, since the harsh reality is its software division is just a bunch of incoherent acquisitions that can't generally collaborate.

osrec 1 day ago 2 replies      
I remember a friend of mine interviewing there. He was flown from London to SF for the interview. He got through the technical questions fairly easily. At the end, he had an interview with one of the co-founders, who asked him "As an FDE, what do you think your role is?". My friend totally BS'd his way through the response, thinking he's messed up the interview. The founder however ate up everything he said, and added his own flavor of fluff to the conversation. That's when he realised that Palantir was not really selling software, as they originally made out, but consulting, much like Accenture or McKinsey. Needless to say, he was truly disappointed by the experience - Palantir was quite simply not what it was made out to be.
praulv 1 day ago 0 replies      
I can't quite place them. On the one hand I don't see what their product offers that existing BI dashboard tools such as Tableau can't deal with.

On the other hand, 2 of my former, incredibly smart, non-douchey, genuine, down to earth colleagues (both non-white male if it matters) work for them so I'd question the brogrammer stereotype as I doubt they'd stick around if they were feeling out of place.

kirillzubovsky 1 day ago 0 replies      
Interesting to see the aim taken at Palantir's supposedly bad Java stack. It might be uncool, and even bad from the programmer's POV, but many government agencies have regulations that actually prevent them from using "cool" technology. I know of orgs selling government and medical software, for example, and they use Java and .NET because those things are easy to sell. Doing anything in Ruby requires jumping through a thousand additional hoops. At the end of the day, the language for them is just the best business decision.
jarym 1 day ago 0 replies      
When was the last time you heard of a set of patched-together IBM software being called 'intuitive', let alone 'more intuitive' than Palantir's built-for-purpose product?

I'm experienced with both IBM software and Palantir and I am truly surprised by that bit in the article - imo IBM haven't put out anything intuitive since OS/2!

tyingq 2 days ago 2 replies      
NYPD gets their source data, but not the algorithm. Hard to say if that's unfair without knowing the nature of the contract.

It's certainly not unusual to sell analysis without selling the "exactly how".

mola 1 day ago 0 replies      
Had a really bad experience at a Palantir recruiting event. First, I got an unsolicited email, all black and mysterious. Annoying. When I got there I felt nauseous. They actually employed a bunch of good-looking women to talk with us techies. So demeaning for all parties involved. I drank their cocktails and ran away.
tp3z4u 2 days ago 3 replies      
Wouldn't it be hilarious if their secret sauce analysis turns out to be a secret backchannel to the CIA/NSA and not some fancy algorithm.
nl 2 days ago 2 replies      
With no technical details it's hard to know what is going on. "Analysis" can mean many things, and sometimes one person's open analysis data is another's proprietary visualisation.
jimjimjim 1 day ago 0 replies      
I have no love for Palantir, and Java makes me slightly sick, but a lot of comments here are of the type "they use Java, that's ancient, blah blah".

Although they are in SV, they are NOT a "start-up". They are an old-school government/3-letter-acronym IT supply/service company; think IBM and SAP.

In this environment, there is a good chance their customers may be using IE6. Java is what their market expects, not the what-has-the-cat-dragged-in-now JS framework.

CRUDmeariver 2 days ago 0 replies      
I was talking to someone recently that had worked there and they said the first step for integrating with any client is to convert their data into a proprietary format called PXML, which I assume is XML with the added feature of vendor lock-in.
kelukelugames 2 days ago 4 replies      
How does buzzfeed get the scoop on this? I didn't know they did tech reporting.
jacquesm 2 days ago 1 reply      
At least the software ran on premises of the NYPD. There are many deals with governments where data gets pumped back-and-forth with external parties doing the analysis on their infrastructure.
kendallpark 2 days ago 7 replies      
> The department has created a new system to replace Palantir.

> The new system, named Cobalt, is a group of IBM products tied together with NYPD-created software.

You really have to wonder about the quality of this internal product. From what I've gleaned about Palantir over the years, they supposedly have one of the hardest dev interview processes. People that work there are on top of their game.

My gut feel is that NYPD does not have as robust of an internal dev situation. Or perhaps they shopped it out to contractors? Either way... skeptical that the system is better. But who knows! It's possible the subset of features a police department actually needs from Palantir might be much smaller than the feature set they're paying for (and not too hard to construct themselves). Whatever is going on, it's clearly a cost-benefit issue.

Although, if they're fighting for Palantir's analysis, it seems they need something that Palantir does that they can't do themselves from the data. There really isn't much one can conclude from this article. Only speculation.

EDIT: From responses on this thread it seems that the "hardcore super secret analysis" image Palantir promotes might be more bark than bite. What if NYPD actually did create something better? Intriguing.

eeZah7Ux 1 day ago 0 replies      
A hundred comments criticizing the software stack and nobody discussing the ethical concerns around the product.

Perhaps the discussion around rugged individualism in the current thread about poverty is accurate.

fensipens 1 day ago 0 replies      
> Big data helped New York's cops bust Bobby Shmurda

So glad Palantir helped get Bobby Shmurda (best known for the Shmoney Dance) off the streets.

Is it unethical for me to not tell my employer I've automated my job? stackexchange.com
669 points by Ajedi32  4 days ago   512 comments top 113
imh 4 days ago 10 replies      
This question is a beautiful example of typical incentives workers feel and how screwed up they are. On HN people talk often enough about how if you have a worker who gets their job done in 30 hours instead of the company's usual 40-60 hours, you should give them 30-100% more responsibility, but much more rarely "and 30-100% more pay." Butt-in-the-chair hours are super important culturally and it's been that way for a while. Incentives are screwed up enough we're getting questions like this.

If you've ever thought "I'm done for the day, but I'm going to hang out a little longer to leave at a more respectable time," then you're feeling (and doing) the same kind of thing.

afandian 4 days ago 11 replies      
When I was a student I took a work-from-home job manually generating HTML pages for an online furnishing shop. They somehow had a successful web presence but all sales were done by phone (early 2000s).

They wanted to pay me by the hour, but I negotiated paying by the page instead.

Of course, I automated the job. And surprisingly, at least to naive me, they were annoyed that I automated it. Even though they got the same result for the same money, and we had explicitly agreed to do it by output, not by time.

I learned something that day, though I'm not sure what.

ohyes 4 days ago 4 replies      
I think the real question is whether this person is salaried or hourly.

If they're hourly, then yeah, billing 40 hours a week when you only did 2 is fraud. If salaried, I think it's okay.

Here's why: Individuals in the company will be good or bad, ethical or unethical. The company itself will (likely) be largely amoral and driven solely by a profit motive.

So when this person announces that he's automated himself out of a job, it sounds like it won't be a matter of 'great work, here's a cut of all the money you saved us and some more interesting work.' It'll likely be a matter of 'thanks, here's your contractually required severance.'

That is what it is, but if the company is allowed to be driven by profit motive, he should be too. It is within the best interests of his profit motive to continue with the automated work. For some reason when the person is an employee, it's no longer okay to be a sociopathic profit-motivated machine, we're actively disgusted by this type of behavior.

It seems like there should be a fairness principle in this situation when making a decision about things such as this that treats the employer and the employee as equals in a contractual obligation.

simonh 4 days ago 2 replies      
If he was a company providing a service this wouldn't even be a subject for discussion. It would be a non-issue. Also he's been pretty much instructed by the company not to rock the boat. It seems pretty clear they really don't care as long as the work gets done.

I've known plenty of sysadmins that have significantly automated most of their work and mainly just monitor and maintain, good for them. Nobody ever criticised them for this, in fact it's good practice. Finally he's not really being paid for hours worked. If it took him every hour of that time the first few months, but then he got better at it and later it took 30 hours instead of 40, nobody would care. In fact I'm sure the company fully expects something like that to happen; again, they just don't care.

He should stop introducing errors though.

hessproject 4 days ago 3 replies      
There are two ethical lines the poster may have crossed.

> I even insert a few bugs here and there to make it look like its been generated by a human.

As a few others in the original post pointed out, this seems to be the biggest issue. He is intentionally misleading his employer as to the nature of the work he is doing. The automation itself isn't immediately unethical, but the intentional misdirection could be.

The second issue depends on whether he is paid for his time or to fulfill his job duties. In the case he is being paid to fulfill his job duties, he is doing the job he was hired to do adequately, he is meeting the deadlines expected of him by his employer, at the price he negotiated when he was hired. However, if he is being paid for time, it seems clearly unethical to bill the company for 38 more hours than he worked.

renlo 4 days ago 2 replies      
I graduated in the recession without any real skills or an applicable / usable degree (lib arts in a language I could barely speak).

The first job I got after college was for data entry where I was expected to go to an email inbox which received some automated messages with some strings in them and to copy these strings and paste them into an Excel spreadsheet.

I was expected to do this for ~6 hours a day every day. Sitting there, copying and pasting strings from some email. Then this spreadsheet would be forwarded to my boss who would forward it to some other people (I don't remember who these people were, probably for auditing of some kind).

After a couple of weeks of this I really started to hate it. I had taken a class on spreadsheets when I was a kid and knew that there was a way to automate it all, so I did a couple of Google searches and figured out a way to copy all of these numbers automatically. It was done using some VB script IIRC and some spreadsheet formulas.
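
A job like that is only a few lines in any scripting language. A hypothetical Python equivalent of the VB approach the commenter describes (the string format and file names are made up for illustration):

```python
import csv
import re

def extract_codes(bodies):
    """Pure extraction step: find the code-like strings in each message."""
    pattern = re.compile(r"\b[A-Z]{3}-\d{4}\b")  # assumed code format
    return [code for body in bodies for code in pattern.findall(body)]

def write_sheet(codes, path):
    """Write the extracted codes into a spreadsheet-friendly CSV."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["code"])
        for code in codes:
            writer.writerow([code])

# Six hours of copying and pasting becomes:
codes = extract_codes(["Order ABC-1234 confirmed", "Ref XYZ-9876 shipped"])
assert codes == ["ABC-1234", "XYZ-9876"]
```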

I stupidly told my boss. So now he had me doing other stupid and mind-numbing work for those 6 hours I would have been copying and pasting strings from the emails (like manually burning hundreds of CDs one after the other with Windows XP and a CD-burner which only worked half of the time).

I quit a week or two later, but learned a valuable lesson. Don't tell your boss. Side note: this is how I became interested in pursuing programming as a profession.

It would be great if there was a means for people to sell technology like this to their employers, for those rare cases where someone goes above and beyond the expected solution. In reality employers don't care because they own your output regardless so why do they need you?

protonfish 4 days ago 4 replies      
I did not expect to see so much contention about this. A company is paying him to do a job, and he is doing that job. Is the problem that he isn't miserable? This baffles me.
bjourne 4 days ago 2 replies      
What's wrong with us workers? Do you think the Apple executives have some secret message board where they ask questions like: "Is it unethical for us to sell iPhones for $800 when they only cost $20 to produce?" Capitalism is what it is; you play the game and shouldn't feel bad the (few!) times you win.
smallgovt 4 days ago 1 reply      
This may be going against the grain, but I think the real question the OP needs to ask is whether he really cares whether what he's doing is ethical or not.

Evaluating decisions like this really comes down to understanding your values and owning them.

Values are the measuring sticks by which people quantify success in life. Whether or not we realize it, we constantly measure our actions against our values, and how we 'measure up' determines our self-worth.

In this specific example, there are two conflicting values: integrity and family. They are in direct conflict, which is putting the OP in a stressful situation -- acting in the most honest way here will lead to a worse life for the OP's family. Creating the best life for his family requires that he must lie.

So, the OP needs to ask himself: Do I value integrity? Do I value my family? If I value both, which do I value more?

Personally, I don't think there's really a right or wrong answer to these questions. There's no intrinsic value in the universe -- but assigning value is part of the human condition and we feel fulfilled when we lead a life of purpose (however arbitrary). When faced with difficult decisions like this, it's important to be aware of what your values are. The 'right' decision for you will be apparent.

Bakary 3 days ago 1 reply      
This ethical question seems bizarre in a world where large blocks of the economy rely on effective misrepresentation or information asymmetry (advertising, etc.) and wealth itself is concentrated in the hands of a few. Those are stereotypical and clichéd statements to make but I don't think that makes them less relevant.

As far as I am concerned, this person can provide for his family, and has given the company the results they want. I don't see how it's a problem.

The "late-stage" power imbalance in favor of companies does provide interesting ethical arguments in my opinion.

danjoc 4 days ago 2 replies      
You should tell the company. There's probably a $20 gift card in it for you.


hedgew 4 days ago 1 reply      
The way I see it, this is the employer's problem. In a good company, what benefits the company, also benefits the employee. In this case the employee and the company have different incentives, and the company does not care enough to solve the problem of incentives.

There are many, easy ways the employer could solve this problem so that both parties benefit. The employee does not have the same ability to pursue mutually beneficial solutions, and is acting like a normal profit-seeking business would.

jstanley 4 days ago 1 reply      
Neither he nor the company would be better off if he were still doing it manually.

I don't know whether I think it's ethical or not overall, but it's at least a more optimal situation than if he had continued spending 8 hours a day updating spreadsheets by hand.

He's doing a better job than he was before, for the same price, and he gets more free time. Everyone's a winner. Admittedly, he is a winner by quite a bit more than they are, but he would have been perfectly within his rights to continue doing the work manually. Then they'd be paying the same price as they are paying now but getting work with more mistakes in it. Why would they want that?

pmoriarty 4 days ago 1 reply      
What is considered ethical or unethical always depends on who you ask.

Ask most slave owners a few hundred years ago if it was ethical to whip slaves (or even own slaves, for that matter) and you'll get one answer, but quite a different answer from the slaves themselves.

You'll get different answers to this question whether you ask it of employers or employees, capitalists, socialists, or communists, people who feel exploited or the exploiters themselves, and so on.

I'm not sure how much one could make out of such a survey other than on controversial issues there are great differences of opinion.

gsdfg4gsdg 4 days ago 1 reply      
It's horrifying to see people questioning the morality of their perfectly legal and adequate method of collecting a paycheck. Their employer has 0% loyalty to them. Their employer would stab them in the throat for 50 cents. And here is the employee asking if his sweet arrangement is ethical.

That's how hard Americans have been brainwashed into the idea of corporations and business as "Good" -- that a man is asking whether spending 38 extra hours a week with his son is built on an "unethical" foundation.

dgut 4 days ago 1 reply      
You get to spend more time with your son. In a country with a terrible maternity and paternity leave policy, it's morally right to do whatever possible to spend more time with one's children. You are doing a great service to the country, as your child will turn out to be a more mentally healthy adult. Just for that reason (besides that you are providing value to your employer), keep going!
peterburkimsher 4 days ago 0 replies      
Is it unethical to keep a chair warm when my boss didn't give me new tasks to do?

For other areas of life (immigration), I need to get more years of continuous relevant work experience.

I come to an office every day, but my boss just doesn't have enough to keep me busy. My job title is "Project Engineer", which is vague enough to cover everything from DLL debugging to Node.JS programming to network monitoring to evaluating Advanced Planning systems. The latest task is to do some online course in machine learning, even though he didn't specify how the company will need it.

On bad days, I feel useless. But I reconcile the situation to myself by saying it's basically a "basic income" (the salary is not high; the minimum that people on my visa can have). I could think about changing after I have the years of work experience, but years just come with patience, not with productivity. I feel like my situation isn't "fair" because my friends are so much more stressed, but I need the years, not the results.

I also do a lot of side projects and post them online (e.g. learning Chinese - http://pingtype.github.io ), but my contract and visa specifically state that I can't have any other paid work. So all my projects must be free and open source.

If the author of the automation scripts wants to comfort his conscience, I suggest reading more about Basic Income theories.

xenophonf 4 days ago 2 replies      
Once, long ago, I did two weeks' worth of work for a multi-person team in about a day, thanks to a little sed/awk magic. The work would have gotten done a lot faster if I hadn't had to deal with the completely shitty X-over-dialup remote access setup they forced me to use. The project manager was actually upset with me because now we couldn't bill the client for 80x5 hours worth of work or whatever it was. Needless to say, I quit that job the following week. It's one thing to have a little downtime now and then to recharge oneself. It's quite another to be bored because there's nothing fun/interesting/useful to do.
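The sed/awk-style batch cleanup described above is the bread and butter of this kind of automation. As a rough illustration (the pipe-delimited input format, field layout, and function name are all invented for this sketch, not taken from the story), collapsing a repetitive reformatting task into a few lines of Python might look like:

```python
import csv
import io

def reformat_records(raw_text):
    """Turn raw pipe-delimited export lines into clean CSV rows.

    The input format (pipe-delimited NAME|DATE|AMOUNT lines) is a
    made-up stand-in for whatever the client's export actually was.
    """
    rows = []
    for line in raw_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comment lines
            continue
        name, date, amount = (field.strip() for field in line.split("|"))
        rows.append([name, date, f"{float(amount):.2f}"])  # normalize amounts
    out = io.StringIO()
    csv.writer(out).writerows(rows)
    return out.getvalue()

example = "# export 2017-06-30\nSmith, J. | 2017-06-01 | 42.5\nDoe, A. | 2017-06-02 | 7\n"
print(reformat_records(example))
```

A script like this still leaves a human verification step, which matches the thread's point that the automator remains part of the loop.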
mikekchar 4 days ago 0 replies      
I love questions like this, especially when the person replies to the reactions. I'm less interested in the answers than in the question, "Why did you post the question?" There are lots of people saying that they think it is unethical, and the OP has taken time to respond to these reactions with a rationalisation.

In other words, the OP feels guilty and is seeking permission to continue with the course they have already chosen. They feel they won't get it from their employer, so they feel the need to find the permission from random strangers on the internet.

I've done a lot of process improvement in my career and this is always the trickiest bit. People make decisions and build elaborate walls to protect them. Exposing the decision does nothing to remove the walls -- it only prompts the builder to design even more elaborate walls. It pays to be sensitive to this!

odammit 3 days ago 0 replies      
I worked in data entry at a large hospital in the late 90s. I automated the entry of reports that someone was printing from Excel and that I was keying into another system.

My boss walked by one day while I was reading the Monstrous Compendium and asked "what are you doing?" To which I responded, "uh, reading the Monstrous Compendium"... then explained that I had automated my data entry by having the people upstairs bring down floppies with the spreadsheets on them instead of printing the reports, "to save paper".

Curiously I didn't sign any paperwork when I started regarding intellectual property and I'd written the app on my computer at home... sooooo, I got a bonus and a promotion to the IT department!

They fired the rest of the data entry team :(

dewiz 4 days ago 0 replies      
A company automating jobs and firing people is called progress. A person automating his job without being fired is sustainable progress.
timthelion 4 days ago 0 replies      
Recently, I paid two guys $150 to carry 4 tonnes of gravel up some stairs into my garden and level it. I figured it was a day's work for the two of them, and that the price was reasonable. But then they ran the whole time and did it in half the time I had estimated for the job. And emotionally, I felt really ripped off, because I was paying twice the going rate for workers in my country. But WHY SHOULD I FEEL RIPPED OFF? It is wrong to feel ripped off in such a situation. Their doing it quickly saved me time and stress as well.
logfromblammo 4 days ago 0 replies      
If you can trust the company to act in an ethical manner rather than a purely profit-seeking manner, there should be no problem in telling them you have automated your own job out of existence.

They pat you on the back, license the software from you for 0.5x your former salary every year, move the folks that formerly did that same work to other projects, and put you on retainer to update the program if it ever needs it. Then they offer you different work, to see if you can work more magic.

That said, I would only trust one of the companies that I have ever worked with to do that. The rest would screw me over good and hard, giving one excuse or another.

By the Hillel principle ("If I am not for myself, who will be for me? But if I am only for myself, who am I? If not now, when?") you have to consider the impact on yourself as well as upon others. Will the company fire me? Will it keep me and fire my co-workers, since I can do all of their work for a week in a single day? Will it pay me more to do so? Do I have a duty to act in the company's best interest if that conflicts with my own? What if it is best for myself and the company, but ruinous for innocent bystanders?

Clearly, if this is a typical US company, the ethical course of action is to not inform the employer. This is an unfortunate loss for the economy as a whole, but it is the only appropriate response to the modal behavior of business management. Maybe also file a patent on the method of automation, if able.

awjr 3 days ago 0 replies      
This reminds me of this joke https://www.buzzmaven.com/2014/01/old-engineer-hammer-2.html :

The Graybeard engineer retired, and a few weeks later the Big Machine, which was essential to the company's revenue, broke down. The Manager couldn't get the machine to work again, so the company called in Graybeard as an independent consultant.

Graybeard agrees. He walks into the factory, takes a look at the Big Machine, grabs a sledgehammer, and whacks the machine once, whereupon it starts right up. Graybeard leaves and the company is making money again. The next day the Manager receives a bill from Graybeard for $5,000. The Manager is furious at the price and refuses to pay. Graybeard assures him that it's a fair price. The Manager retorts that if it's a fair price, Graybeard won't mind itemizing the bill. Graybeard agrees that this is a fair request and complies.

The new, itemized bill reads:

Hammer: $5

Knowing where to hit the machine with hammer: $4995

dragonwriter 4 days ago 0 replies      
It is unethical to deliberately introduce errors. If you have broad discretion about how you do the job, it may not be unethical to refrain from actively calling your employer's attention to your automation, though (setting aside the deliberate introduction of errors) with a reasonable employer it should be beneficial to do so.
stmaarten 4 days ago 0 replies      
This case is a microcosm of a fundamental tension. Namely: how should we divide the pie between capital and labor, if baking the biggest pie requires devaluing labor? There are explicitly positive and normative components to that question. Positive analysis can't resolve normative questions, and vice versa.

Personally, I'm not interested in questions such as whether the OP has been dishonest, or what the status-quo legal regime would prescribe. I am interested in the underlying economic reality. The OP has developed a technology with real and quantifiable value. He created wealth. So: who should keep it?

At the macro level, I think it's pretty clear that the existing economic and legal regime would have these gains accrue to capital owners. After all, markets (when they function) do a good job of allocating resources according to value signals. But that's just a default allocation; that doesn't tell us "who should keep it".

Posibyte 4 days ago 0 replies      
I think the problem is actually deeper than whether or not it's ethical: the structure we place people in gives them more incentive to hide their improvements than to expose them and help the company flourish. Why should OP ever reveal it to his boss? Ethics? What do those matter against his bottom line? He could be fired or disciplined. His experience might be positive, but judging from the comments and how people are reacting, I wouldn't be very sure of that.

In this case, I think there needs to be a positive incentive of some sort to give the employee a reason to reveal this automation. People shouldn't be afraid to tinker and learn for fear of punishment.

lr4444lr 4 days ago 0 replies      
There's nothing unethical about this situation as the poster describes it. Is it unethical not to tell prospective buyers of your house what other offers you've received? Is it unethical not to tell the other side of a legal trial what character, logical, and emotional arguments you intend to use to sway the jurors?

No, some relationships have an inherently adversarial zero-sum component, and maintaining informational asymmetry could only be unethical if the other party will bind him or herself equally to not taking advantage of your sharing it. And speaking realistically, there isn't a snowball's chance in hell a middle manager of a large company with legacy systems would not fire this guy if this information got out and he were told to directly or was generally pressured to keep down department costs.

outworlder 4 days ago 0 replies      
I'd say, get rid of the intentional errors. If anyone asks, it's because they became so good at the job that the work is now spotless. Which, frankly, is the truth: the business requirements were discovered in such detail that automation could be performed.

I don't think it is stealing. In fact, they are getting exactly what they asked for: the job is getting done. The fact that it is taking less work (but it is still taking some work; he still needs to do cleanup before running the automation) should be irrelevant if it is his only task.

This is assuming there are no specific instructions on how the work should be performed.

If it were a silicon valley-type company, then it is possible that this contribution would be properly recognized and the employee offered another position due to the demonstrated skills. From the looks of it, it's unlikely to happen.

So here are the choices:

Not disclosing means getting into philosophical arguments about whether or not they are being overpaid. Depending on the complexity, this is the kind of thing that consulting companies thrive on and charge big bucks for. So, in fact, they may even be UNDERpaid if this is eventually disclosed and becomes company property (maybe when they decide to leave the company?).

Disclosing will force some tough conversations. They will probably want the software, which they are entitled to, as it was written on the company's time. And once they have it, there's nothing preventing them from firing the person.

And, to be fair, companies do that sort of stuff all the time. They may start doing things manually for customers, figure out some monetary value they should charge to cover costs, plus profit. Eventually things get automated. Do they reduce their prices? Of course they don't. Cost optimization and the like.

EDIT: typo (also, using gender-neutral pronouns is tough)

empath75 4 days ago 1 reply      
Five years ago, I had an entry-level overnight NOC position at a big company, and within 6 months I had scripted almost everything and was watching Netflix most of the night; I didn't make any particular effort to hide that I had nothing to do.

I got rewarded for it with a promotion, and then I did the same thing and got another promotion, and another. I'm making more than twice what I was making before, and now my job is telling other people how to automate their jobs away.

I keep scripting annoying tasks because I'm lazy and get rewarded for it with more annoying tasks and more money.

If he had just told his boss and put what he did on his resume, I'm sure he'd be making more money today and have more interesting work than he'll get by hiding it.

hsod 4 days ago 1 reply      
Ethical behavior generally requires honesty and forthrightness. If you are only concerned about your own ethical behavior, you should tell them. Keeping it a secret is effectively lying.

If you want to do some ethical calculus, you can probably quite easily determine that your employer (or the general economic system) is less ethical than you keeping this a secret, which may give you some "ethical leeway" when dealing with them.

Furthermore, you could determine that your employer is likely to behave unethically towards you if you told them, in which case you may be able to determine that keeping it a secret is a net-positive ethically speaking.

But yes, it is unethical to lie to your employer about how you're doing your job.

whoami24601 4 days ago 0 replies      
People here seem to be generally in favour of the OP, unlike the top-contributors on stackexchange. I too think that employers "naturally have the advantage of a power imbalance" (by keinos) and in my opinion they often take that advantage.

As OP I would think about two options in that situation - although I'm not sure I can judge it well, since I don't have a child. In both cases, though, I would stop faking bugs.

1) Once OP stops pretending to work a full-time job, the employer might be smart enough to realize that OP has spare capacity and provide him with more work. From my point of view it's not the employee's fault that the employer doesn't know about that capacity. If they don't give you enough work, why should you pretend to work?

2) OP could be proactive and inform the employer of his increased capacity. Maybe they'd provide OP with new work.

It might be that the employer requests the automation tool later on, but it could also be that the employer overlooks the free capacity as well.

SubuSS 4 days ago 0 replies      
All software rots: if they had the ability to run the script this engineer built, there's a high chance the same folks would've noticed the automatability of the job themselves.

IOW - I don't understand how the user thinks this would be taken away from him. He seems to be a core part of the execution of said script, considering he has to adapt it to new data rules etc. once in a while.

IMO - the fact that he is spending 2 hours and billing 40 is deception, though. In an ideal world, the company would totally notice if an assembly line they estimated to produce X items in Y hours ended up producing 2X items.

Now, whether you can engage in said deception, whether everyone else is directly/indirectly doing it, your family situation, etc. all lie in the zone of subjectivity. You just gotta trust your gut and go with it. But one thing is sure: if you get caught, you're getting a reaction - fired, or possibly worse. The HR/human ego is far too fragile to let this go in 99.9% of cases.

nthcolumn 4 days ago 1 reply      
As software developers we implement change, which often means others lose their jobs. I've worked myself out of more jobs than I care to remember, automating ruthlessly, fixing things even when it meant I was redundant as a result. To not do so would make me feel guilty about all those systems I wrote that made others redundant. That's really what I.T. was, for years. There was a time I was like some horrible spectre: if you saw me, that meant yo' ass. Once I interviewed some users about some task they were meant to be rekeying; they hadn't done it for months, as the old requirement had gone. I followed it back to the person who was sending the first part, a nice little old lady, and told her gleefully she didn't need to do that onerous first collection task anymore, whereupon she informed me that it was literally all she did. I just left.
pinewurst 4 days ago 1 reply      
How is this the smallest bit unethical?

The person is paid to do a job and that job is done, and seemingly well. End of discussion.

Spooky23 4 days ago 1 reply      
If you're a salaried employee assigned a task, you fulfill the task, and are available to respond to requests during business hours, there's no ethical issue here.

If you're hourly and you are spending 3 hours a week and billing for 40, you're in a bad ethical place in my mind.

teekert 3 days ago 0 replies      
If you were a company, no one would bat an eye, they'd say your employer is free to scour the market for the best options, they found you and are happy. I think many companies provide services that are easily automated and customers don't realize how little human labor is actually involved.

You could offer something like: "Hey, I can rewrite the entire system and make it completely automated. This will cost you ((time_it_takes_find_good_job + some) * your monthly pay). After that I'll be gone." That you already did the work doesn't really matter imo, and you leave your boss better off, and hopefully yourself too.

baron816 4 days ago 1 reply      
I had pretty much automated a past job and when I quit to try to become a software engineer, they asked me for advice for hiring my replacement (what they should be looking for, etc.). I told them not to hire anyone and that I had hardly been doing anything for months. They hired someone to replace me anyway.
mnm1 4 days ago 0 replies      
The only ethics in business are those enforced by the state. Is it unethical for the employer to ask a salaried employee to work more hours? No, it isn't, so I see no difference here. The exact same principle applies equally in both cases. Hours are meaningless if he's salaried. I think it'd actually be unethical to himself if he told on himself to his employer. Our first ethical duty, after all, is to ourselves.
lordnacho 4 days ago 3 replies      
I'm surprised nobody has suggested he takes a second remote job. He's looking to send his kids to college after all.

I don't see the problem with doing your job super efficiently. Adding bugs is just "a duck", not a real productivity loss.

I don't see how you can claim the company wants hours rather than work done.

EdgarVerona 4 days ago 0 replies      
Reading this, it felt pretty strongly like the person wasn't asking for genuine input as much as they were asking for permission from strangers. He defends the "don't tell" side of the options with a fervor that strongly suggests he's already made up his mind but needs peer approval to assuage his guilt.

I can't blame the guy. I've lived in areas where tech jobs are thin on the ground. But what I would do if I were him would be to start looking, and try to find a new job as quickly as possible so as to minimize the amount of time in this state. I can understand a fear that it may take a while to find a new job - and if he has that fear, he should start looking now instead of assuming that coasting like this is okay.

whiddershins 4 days ago 0 replies      
I think the discussion ignores the labor law regarding salaried employees. At least in New York, a salaried employee must be paid in full for any day he works any part of; at least, that's my understanding. In general, the law tends toward the position that salaried employees who are exempt from overtime are also exempt from being docked pay for missing an hour of work in any given week. (Although I believe the employer can take it out of your vacation time, etc.)

So, at least from a legal standpoint (IANAL) my understanding is as long as the poster takes even five minutes a day to verify his work, he is performing his duties as a salaried employee. It is up to the employer to determine if he is worth his salary or not.

carlisle_ 4 days ago 0 replies      
OP asks "is it unethical?" in his question and proceeds to ignore the ethical issues raised by commenters. Sounds like he was just looking for validation to keep doing what he's doing. I would concur with the person that labeled it more humblebrag than question.
mch82 4 days ago 1 reply      
Just for fun let's flip this around: Is it ethical to keep doing the job manually if you know how to automate it and not tell your employer?
notadoc 4 days ago 0 replies      
I don't see much of a problem.

You are paid to do a task. Is the task getting done, and at the expected quality level? That is what matters, is it not?

Aside from that, if you can automate your job, you could likely create a service or product to sell that automation to that employer...

keksicus 3 days ago 0 replies      
How ethical is it to tell your boss and then lose all that time with your son? What ethics do you care more about: making sure your boss gets all the time he thinks he's paying for out of you, or your son getting as much time as you can give him? Are you more loyal to your boss than to your son? If your ethics are driving you to cuck-out and screw yourself, it's time to delete your "ethics" and install new ethics. There is such a thing as fear of success; I'm hoping you don't have that fear.
greggman 3 days ago 1 reply      
I'm just guessing, but there is a legal concept called "duty of loyalty". https://www.google.com/search?q=employee+duty+of+loyalty

The short version is that the duty of loyalty requires an employee to refrain from behaving in a manner contrary to his employer's interests. That probably means what he's doing isn't okay, but of course laws are different in every area and IANAL.

It would arguably be different if he were a contractor, I'm guessing.

richpimp 3 days ago 0 replies      
Does it really matter if it's ethical or not? I mean, we're not talking about the ethics of, say, killing a tyrannical despot or allowing a terminal sufferer to commit assisted suicide. If it were me, I'd keep on collecting that paycheck while picking up a second job and double my pay. Again, is it ethical? No. But who cares? This person's mom is right, he has a free lottery ticket. Keep cashing it in. You can keep your ethics while I laugh my way to the bank. Just don't get caught :)
auserperson 4 days ago 2 replies      
Is capitalism ethical though? Isn't every employee an exploited person?

What is the robbing of a bank compared to the founding of a bank? - Bertolt Brecht

dlwdlw 3 days ago 0 replies      
One thing to realize is that at the higher levels, it's accepted that salary is basically a retainer. It's payment for the option to ask for work but not an obligation. This is truer the more "creative" or "strategic" your job is. It is known that specific tools need to be used in specific ways at specific times.

However work culture is so ingrained that things devolve into chaos if this is openly said. Games are created so that everyone has something to do mainly so everyone feels equally important.

The behavior itself is OK as long as the game isn't threatened. As long as you aren't actively destroying something anyone above you has created and can produce when called upon, do what you want.

In practice this may mean appearing to do nothing all day but this being OK because you give a script automating seminar every 2 weeks. Or maybe changing work spaces every now and then so when missing you have the benefit of the doubt.

If your level in the org is so low though that you have 5 managers above you all believing in the 12hr work day scheme then you are very limited and will most likely be punished. In most software orgs though this isn't an issue as the "new" culture around thinking work is more accepted.

drej 3 days ago 0 replies      
You know John Oliver's "cool" remark, when he's being sarcastic? That pretty much describes every single response I got when I told my superiors I had automated (a part of) my job.

I don't expect them to pop champagne, but they could say something along the lines of "this is interesting, what else could we automate?" Instead it's usually more like "cool, here's more work".

It doesn't deter me from telling them in the future. Maybe someone will appreciate it one day. Maybe not.

holografix 4 days ago 0 replies      
In my point of view, no it's not.

Your employer's sole purpose is to make a profit for its shareholders. Unless you are one of the shareholders or will be paid more for increased production, you have no incentive to produce more.

Bar your personal relationships of course, if your boss is a great person and you feel like doing them a favour then that's an incentive.

If you believe you could be paid more for increased production, via a raise or promotion, discuss this with your boss in the form of:

"I have an idea, which I need to spend X hours working on and I'm fairly certain I can get it to work and it would provide Y% more productivity. If I do raise my productivity by Y% what would this mean to me?"

If they state something attractive as the outcome, get it in writing. I interpret this as basically the company paying you for your IP, specially if your automation can be replicated to other employees.

Now if your sole job is automating stuff / increasing productivity at the organisation... then that's a whole other story.

Just remember that if YOU automated your job, the organisation could ALSO do it and not need you anymore - so maybe use the extra time to find a job not easily automated.

getdatpapersong 3 days ago 0 replies      
Using a throwaway because people are going to throw a hissy fit.

Dude, you have family and your _ONLY_ responsibility is towards them. Period. How is this even a question? Get paid, during your "working time" learn something extra and increase your earning potential. You're in a very unique advantageous position, seize the opportunity.

You're doing your job, you don't have to be some schmuck too.

peterburkimsher 4 days ago 0 replies      
An appropriate comic from Poorly Drawn Lines:


"Welcome to work. You'll spend your time here in two ways: overwhelmed and underwhelmed."

"Is there a third option?"

"Well, there's 'whelmed'. But I'm not sure if that's a word. So no."

scarface74 4 days ago 0 replies      
I have mixed feelings about the post and what I would do.

1. In my current situation - having a wife with a job that has pretty good family health insurance, living in an area (not SV) with a great job market, and having in-demand skills - my first thought is that I would look for another job and explain that I automated myself out of the old one. That would be like saying I was laid off from Netflix because they didn't need me anymore after I led the transition from hosting servers on-prem to AWS.

2. But he isn't in that position. He needs to work from home to stay with his kid and according to him

Most likely they can walk out of their Silicon Valley office and shout "I want a job" and get 3 offers to start the next day. Unfortunately, there are places in the country that just aren't like that. I'm not trying to have a go, I'm just saying that the situation absolutely does matter.

If I were in that position would I voluntarily tell them I've automated the whole thing? I'm not sure. Hopefully I would not intentionally add bugs. I would definitely be using the time to study and keep my skills up to date.

nemo44x 4 days ago 1 reply      
He should approach management and offer to automate this process for a lump sum of x years' pay. If they plan on using this method for 5 years, then offer 4 years of salary for the tool.

Everyone wins. They save a year's salary and don't have to deal with data entry errors. He wins because he is paid and can continue to earn more.

This is an example of automation and capitalism revealing their best features.
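Taken at face value, the arithmetic behind this proposal is simple. A toy calculation (the salary figure is invented; the 5-year horizon and 4-year lump sum come from the comment above):

```python
salary = 50_000          # hypothetical annual salary
years_of_use = 5         # how long the company expects to run the tool
lump_sum = 4 * salary    # the proposed one-time payment for the tool

manual_cost = years_of_use * salary   # cost of keeping the job manual
company_savings = manual_cost - lump_sum

print(company_savings)   # the company comes out one full salary ahead
```

Under these assumptions the company saves exactly one year's salary, which is the "everyone wins" split the comment describes.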

DogPawHat 3 days ago 0 replies      
Right, look: this guy is doing everything that is being asked of him for the price agreed. None of the wonkier ethics of the situation change that, and I don't subscribe to the belief that he owes the company anything more than his fair-priced labor.

The only thing he is doing wrong is under-utilizing his own talents and potential productivity, for which the optimal solution is to find a better job. As he seems to indicate that the current options are to keep working 1-2 hours a week or be unemployed, I believe he should seek to preserve his employment in whatever he does about disclosure, and wait for a better opportunity to present itself.

If it's ultimately a choice between providing for himself and his son or not, it's pretty much no choice at all, ethics be damned. I know which outcome I would prefer.

mcrad 4 days ago 0 replies      
I have worked at enough places where "appearing busy" is rewarded far more than being efficient and truly productive. This seems to be where we're headed as a middle management society and it sucks. Ethical behavior would mean doing whatever you can to reduce such perverse incentives (venturing a guess that means keep your mouth shut in this case).
janxgeist 3 days ago 1 reply      
So far (10 years) these rules have always worked out for me in the long run:

1) Your loyalty belongs to your company. Always do what is best for your company.

2) Always share your knowledge freely.

3) Never strategize in order to "secure your job".

4) Always pick the project or job where you will learn the most (grow the most as a person).

I would guess 90% of people I have met ignore this and start strategizing at some point. They seem to always lose in the long run.

"The company treated me wrong, so why should I work as efficient as I can?"

"I can't teach him EVERYTHING or my job won't be as important/secure any more."

"I will pick this project, because I have done something similar already, so it will be easy work."

When sticking to 1-4, relevant people will notice eventually and your trajectory will go up.

When ignoring points 1-4, relevant people will lose respect for you. And even worse, you will lose respect for yourself.

This is just my opinion or my experience so far.

stuaxo 3 days ago 0 replies      
I did exactly the same thing in a data entry job, after the 2001 internet bubble burst.

Semi-automated a highly repetitive job that took 5 minutes per document to process down to under 1.

When I went to the IT department, they were not happy with people outside their department automating things, and they had a similar project already.

In the end, a year later, my manager was replaced by his ex-wife, and suddenly the fact that I was wearing trainers to work was an excuse to let me go (though a lot of data entry people were let go too).

It may or may not have been down to the fact that, with my automation, her department would potentially be 1/5th of its size.

That company no longer has their large offices in the town I was in, with inefficient manual processes involving lots of paper.

The good thing that came out of it was that it pushed me towards software development.

buremba 4 days ago 1 reply      
If he's getting paid for the result, then it's fine, but most probably they pay him based on the hours he works, and he fakes it in order to get his full wage. That's not ethical: stealing is unacceptable even if your children are starving.

I would probably tell my employer that I wrote the software during weekends and finished it last week (since I would probably get fired if I told them the truth, and I can live with the unethical side of this); that also sidesteps the human verification, since they could just get rid of the verification step. I would start a company around the software I built and offer them a monthly subscription fee to get their work done. You would still get paid, and you could also sell the software to similar companies.

If he doesn't want to deal with starting a company and would rather spend time with his children, then he can find a business partner to handle everything other than the product.

nehushtan 4 days ago 0 replies      
Like others have said, there's unethical deception involved in inserting arbitrary errors - especially to make the output "look like it's been generated by a human".

But my feeling is that in addition to paying the OP to "do a job", the company is also paying him/her (him from now on) to "be on call". Yes, they want X results, but they are also paying a salary so that they can tap him whenever they need to. This aspect of the job is what he refers to when he says there "might be amendments to the spec and corresponding through email".

To some companies (especially those with very little other in-house expertise) having "the computer guy" on call to handle all of that mysterious stuff is worth a great deal of money. The company could consider it their insurance against catastrophe.

Nevertheless I would say the OP should come clean at the next performance review.

the_watcher 4 days ago 0 replies      
In terms of feeling uncomfortable with it ethically, but also being concerned about finding another job: couldn't the OP just dedicate some of the free time to finding another job that lets him keep the remote lifestyle he wanted when he took this one? Then he could tell his current company that he's created software to do the job he was hired for and has been testing it over the past X months to iron out the bugs. He could give them the option of either keeping the software without him (with no one employed who can fix any bugs that might arise), or continuing the current arrangement (perhaps at a lower compensation figure, in exchange for him running everything in an hour or two a week while supplementing his income with the new job).
matt_s 4 days ago 0 replies      
The poster there is referring to the current state of things being automated. He is the expert on that particular system. If there are changes upstream, his automation will fail, and if he isn't there, then what?

When upper management changes and someone would like to change the system or business process it supports they will need him.

mythrwy 4 days ago 0 replies      
I don't believe this is an "ethics" problem. No reason for existential angst.

This is a practical problem. What do you want from the company long term? How do you want to spend your days?

Consider the longer term. What happens if they find out you automated something and didn't tell them, but rather milked it? Do you even care what their reaction might be? Is this company at all important to your future? (There aren't "right" and "wrong" answers here; it depends on what your goals are.)

How do you see this ending? How can you extract maximum advantage from the situation while preserving what you want from the company (including, possibly, continued employment)?

As far as I'm aware Moses didn't say anything about these types of situations so you are on your own. But don't be a short term thinker.

kyberias 1 day ago 0 replies      
This is a wonderful thread for all the employers out there that want to see how ethical the people applying for their jobs are. If they have a HN account, just read the comments here.
Para2016 3 days ago 0 replies      
I wouldn't tell the employer. I'm guessing they don't care about you and they will take your idea, use it, maybe get rid of you without any compensation for creating the automation.

The job you've been hired for is being completed by a tool you made, and you're getting paid. Maybe look for something more appropriate to your skill set like another post suggested.

Oh and if you're feeling guilty you can read this story about Alcatel stealing IP and forcing a guy to work like a slave.


Hasz 4 days ago 0 replies      
They want x amount of data processed and are willing to pay $y to do it. If the OP has found a way to deliver x at a much lower cost than estimated, good for him.

People make shitty deals all the time; he is under no obligation to tell them to fix it. The relationship is contractual and nothing more.

hateful 4 days ago 0 replies      
This describes my first job. We ended up automating everything, and I ended up getting hired as a programmer.
mathattack 4 days ago 1 reply      
From a legal standpoint the company owns the automation. You need to tell them. They pay for your time and the IP you create.

An enlightened company would entertain your offer to deliver the same value as a fee for service at a discount to them. (You would incorporate)

rayiner 4 days ago 2 replies      
I'm surprised nobody suggested going back to doing it manually. If he can live with the moral compromise of what he's already done, he can eliminate any concern on a going-forward basis simply by deleting his script and doing things the old-fashioned way.
rcthompson 4 days ago 0 replies      
I think the practical thing to do is for them to assume that this won't last forever and start using some of that spare time to improve their employment prospects, i.e. looking for another job that isn't in danger of evaporating if anyone looks at it.
bayesian_horse 3 days ago 0 replies      
My recommendation would be to ask a lawyer if the employer has the rights to the software he wrote to automate his job. Depending on the answer, either offer the software or offer to write the software as a negotiated one-time purchase.

To find a basis for the negotiation, take the salary for the foreseeable future and compute the present value of that income.

The result would most likely be a happy employer, and an ex-employee with a lot of money in the bank who is now free to find any other job or move wherever he/she wants. Maybe even with the same employer.

nandemo 4 days ago 1 reply      
If you believe in the Gervais Principle,


then OP is a "loser", and most answers divide into "losers" ("coming clean will be worse, so just keep doing the bare minimum") or "clueless" ("it is unethical to mislead your corporate masters"). I'd like to see what's the "sociopath" answer.

slim 4 days ago 0 replies      
If he had a boss reselling his work and making those margins instead of him, he would not have this ethical dilemma. Somehow the ethical aspect of work disappears when there is even the slightest layer of sales above it.
thirdsun 3 days ago 0 replies      
I understand that the company told him not to mess with the system, but why not show them that he found a way to automate the process, without admitting that he's been using it for a long time?

Maybe I'm too optimistic or naive, but after successfully showing them that it works and saves time, the conversation could move on to optimizing other tasks and problems the company surely encounters. Instead of letting the guy go, I could easily see them finding additional value in him in other areas.

toast42 4 days ago 0 replies      
As a follow-up to this question, imagine a job where a significant portion of time is spent waiting on a computer (rendering animations, code compiling, etc).

Two contractors are hired, one with a modern laptop and the other with a 10 year old machine. The older machine takes at least twice as long to process the work.

Is it A) ethical to bill for time spent waiting for the machine to process and B) ethical to use the older machine? Assume the contractor using the older machine is using the best equipment currently available to them.

ryanmarsh 4 days ago 0 replies      
In business this is called innovation.

If a business found such an internal optimization, would it tell its customers what a killing it's making, or keep the profit and grow the business?

Telling the boss is peasant thinking.

flukus 4 days ago 0 replies      
It feels as though everyone is focusing too much on the specifics and not considering that there might be a bigger picture. If this person has a spouse to support, kids to feed and clothe, and a mortgage to pay, then they also have an ethical responsibility not to risk their income by coming clean. Even if it's just themselves, there is an ethical responsibility to provide for themselves.

I think it's probably unethical behavior, but probably for entirely ethical reasons.

anovikov 4 days ago 0 replies      
Of course don't tell them. It is always stupid to leave money on the table, and you win nothing by telling in any case. They will fire you; they will own your app, because after all you are a programmer and you coded it on the job, so it belongs to them anyway; and others in the company will hate you because you hacked through shit they had to do manually for years. And after all that, they will even think you scammed them.
jarym 4 days ago 1 reply      
Seriously... OP could quit his job, then tell his employer he'll maintain the system for a fixed monthly subscription.

They'll have less headcount, and OP will be free to pursue other activities.

35bge57dtjku 3 days ago 0 replies      
Ethical or not, I'd be more concerned about what I'd do if the current job ended, regardless of why it ended. What do you tell the company you're interviewing with about what you did at your last job? And I don't mean discussing this with them, I mean what do you say about the projects you worked on over the past n years when there's only this one automation?
fisknils 3 days ago 0 replies      
No. You're performing the job you are paid to do. You could hint that you could handle more work if you think it would benefit you, but how you perform your job, as long as the end result is the same, is not something you want to "bore your busy employer" with.

Especially not if it means saying "You could do this without me now"

venture_lol 4 days ago 0 replies      
If you are paid according to time and material, you could be sailing in bad legal waters. Ask an attorney about fraudulent billing.

If you are paid like an FTE, then as long as your employer is satisfied with your level of productivity, it really does not matter how long it took you to produce results.

Nevertheless, it's shady to insert bugs into your products. "My work is my pride" is what matters in the end.

kyberias 3 days ago 1 reply      
Wow. How can anyone be confused about this? This is clear cut. The stackexchange answers are correct and this HN thread is filled with really unethical, almost childish advice. What the guy is doing is basically fraud. The employer expects him to do efficient work. If he can automate it, that automation is owned by his employer.
hysan 4 days ago 0 replies      
I know this is an ethical question, but I wonder, would it be possible to have him license the script to the company for some annual fee and then offer the company a support contract as well in case new quirks are found that need to be updated? Combined license + support contract cost == his current salary. Or does the automated script he created already belong to the company?
s73ver 4 days ago 0 replies      
Considering that they're just as likely to fire you as they are to promote you, I would say it's perfectly ethical to not tell them.
bhgraham 3 days ago 0 replies      
I counter with another question. Is it ethical for a company to pay you the same to do more work? If you tell them you have automated your job, I guarantee that the reaction will be to give you more work. You will not get more free time or more pay.
Markoff 3 days ago 0 replies      
It's simple: how long do you plan to stay with this company? 3 years? So ask for 3 years' salary for your program, or even more for ongoing support and updates.

If they don't seem interested in this, just keep doing what you are doing.

It's the same as comparing the performance of employees: some of the smarter among us learn workarounds to make our work more efficient. Is it mandatory for us to share our findings? What would be my motivation to share, unless I get a significant bonus or a share of the company profit gained through the higher productivity?

Now, if you just started and you are in your 20s, I can see how you might still have a naive, idealistic attitude and let yourself be abused, helping the company fire people (including you); if you are older, you are less prone to this bullshit.

suls 3 days ago 0 replies      
There was a very similar story a while ago on HN: "Kid Automates Work, Is Fired, Hired Back, Automates Business" https://news.ycombinator.com/item?id=4167186

History repeating itself?

aey 4 days ago 0 replies      
Quit! Start your own company that sells your job as a service.

If you do it right your current boss can be your first customer.

danthemanvsqz 4 days ago 0 replies      
I think the only unethical thing is holding back on the results. It's great to automate tasks, and the company doesn't need to know about it unless that's explicitly required. But your obligation is to deliver high-quality results as fast as you can.
punnerud 4 days ago 0 replies      
This is why I like the new innovation-project type in the EU, where you are allowed both to work for a company and to sell a project to them at the same time. The question here is how you could sell them the system, and whether that's even allowed in the US.
nsxwolf 4 days ago 0 replies      
I can't help but think the ironic day will come when someone in the organization gets the idea to automate his job and brings someone in to do it, and that's how he'll finally be let go.
erkkie 4 days ago 0 replies      
The interesting question here, to me, is how to align incentives in a manner that works out best for both the employer and the employee. Automate your job and then move on to higher-level problems, rinse and repeat. Profit share?
omginternets 3 days ago 0 replies      
Whether you answer "yes" or "no" to this question basically amounts to whether you're an entrepreneur or an employee at heart.

I'm only half kidding.

m83m 3 days ago 0 replies      
The best solution is to become a contractor with this employer, and charge a flat rate per result or per week/month.
ryanmarsh 4 days ago 0 replies      
Companies treat us so unethically why are we so gracious to them?
tfont 2 days ago 0 replies      
So you mean "Is it ethical for me to tell my employer I've automated my job?"
ceejayoz 4 days ago 5 replies      
They haven't just automated the job - they've deliberately inserted errors to make it look like a human made them. That's a big step over the line into fraudulent behavior.
methodin 4 days ago 0 replies      
Willfully putting bugs in code is ridiculous; that alone would be grounds for firing. The OP's concern about ethics has led him to act unethically in ways he otherwise would not have.
aj7 4 days ago 0 replies      
Congratulations. You are supporting yourself on a monopoly rent. Don't be a fool and give it away for nothing. You've already said too much.
TheBaku 3 days ago 0 replies      
I wonder: if OP told the company about his script and the company demanded it, would he be forced to give it to them?
crawfordcomeaux 4 days ago 0 replies      
I'd argue everything they're doing could be portrayed as ethical in some context.

If they aren't actively looking to replace the job they feel the need to fraudulently accomplish, I'd argue that's the unethical component. I don't think they mentioned anything about looking for more work.

It's one thing to be in a situation where the only options you can perceive as valid are fraudulent ones. It's another thing to choose to stay in it instead of choosing to extract yourself.

NoCanDo 1 day ago 0 replies      
Nah, it's fine.
Vektorweg 3 days ago 0 replies      
Having an employer-employee relationship is already considered unethical by socialists.
myrloc 4 days ago 0 replies      
My question is: will he ever tell the employer about the program, even after his employment ends?
vacri 4 days ago 0 replies      
If the author is an employee, it's pretty clearly unethical to withhold information from the company. The real question is not whether or not it's unethical, but whether the author is okay with behaving unethically.
alkoumpa 3 days ago 0 replies      
Similarly, on a larger scale, one could ask whether deep learning is unethical for automating millions of jobs (if not yet, then certainly in the future).
adamzerner 3 days ago 0 replies      
There's no universally accepted "right" answer to questions of ethics. See https://en.wikipedia.org/wiki/Normative_ethics#Normative_eth... for some approaches.

I'll provide a few perspectives.

Act consequentialist ("hardcore"): Is the world as a whole better or worse off after you take that action? Probably better off. By taking that action, there'll be less money in your company's pockets. That money may trickle down a bit to Average Joes, but will probably go mainly to rich people who don't need it. On the other hand, you'll have more free time, you'll be happier, and your son will get to spend more time with you.

Rule consequentialist: Evaluating the costs and benefits of this particular action is error prone, so you're better off just following a good rule of thumb. In this case, I think a good rule of thumb is to abide by your contract. Your contract as a full-time salaried employee is, basically: give the company your time for 40 hours a week and work reasonably hard. If your contract were some sort of fixed-price freelance gig, things would be different, but by signing the contract you did, you gave them your word that you would work reasonably hard for 40 hours a week, and keeping your word is a good rule of thumb.

Rule consequentialist: Evaluating the costs and benefits of this particular action is error prone, so you're better off just following a good rule of thumb. In this case, I think a good rule of thumb is to be honest, and tell your boss.

Deontologist: You have a _duty_ to follow your contract. You should do it _because it's your duty_, not because you think it'll lead to good consequences.

Deontologist: You have a _duty_ to be honest.

Deontologist: You have a _duty_ to be the best possible father you can be, no matter what it takes.

Virtue ethicist: You should follow your contract, because doing so is sticking to your word, and sticking to your word is virtuous. You shouldn't be sticking to your word because you think following that rule-of-thumb will lead to good consequences, you should be doing it simply because it's virtuous.

Virtue ethicist: You should do what is best for your son, because being a good father is virtuous.

Personally, I believe in consequentialism, and I believe that you can use your judgement to decide whether or not to use act or rule consequentialism, based on whether you think you have a decent grasp of the trade offs. If you don't have a good grasp of the trade offs, you can expect a rule-of-thumb to do a better job than your attempted analysis, and should go with the rule-of-thumb. Otherwise, go with the results of your analysis.

In this situation, it seems to me that the trade offs are relatively clear, and that you could go ahead and keep it to yourself. But I wouldn't blame someone for taking the position that the trade offs aren't actually too clear, and it'd be better to fall back on a "be honest" rule-of-thumb.

Note: I expect that if you told them, they would take the program and either a) use it and fire you, or b) maybe keep you around as a contractor or something to add to the program. You wrote the program during work hours, presumably on a work computer. So legally, it is their intellectual property, assuming you don't have some atypical clause in your contract.

thinkfurther 4 days ago 0 replies      
note: before posting I realized logfromblammo said what I'm trying to say and more much shorter and better: https://news.ycombinator.com/item?id=14657981 but now that I already rambled so much I don't want to just throw it away either, so here goes nothing.

> Is this the kind of example you want to set for your son?

Yes. I can nearly touch the very smart and decent person behind that post (which I didn't fully read because you bolded this and I had to get my opinion out before reading on :P)

Use a lot of your time on that son, and some of it on helping people here and there who don't have much time. Spend little money and lots of time! You can answer your son's questions, you can play with him.. don't sacrifice that luxury light-heartedly. Don't spend that penny without turning it over lots, it's the first of that nature you got, and many people don't even know a person who had one.

Of course, as others said, also learn interesting things and keep your eyes peeled for a job that would have meaning to you you can be 100% straight about to everyone involved. But I assume you're already doing that anyway.

This stroke of luck might not last forever, but it is a stroke of luck IMHO, from the sound and content of your post I'd say a nice thing happened to a nice person who put in the work to deserve it. Nothing unethical I can see about it. If they want it automated, they can hire a programmer. Wanting to have it automated by someone for data-entry wages, now that's unethical. So if you want ethics, calculate a generously low programmer salary for 6 months, then coast along some more until they paid you this much.

One thing I'm sure, suffering 40 hours a week when there is no need is kind of the worst example you could set for your son. IMO, of course. His father at least for a moment is free from bondage, but also free from delusions that often come with "aristocracy" (for lack of a better word, I just mean most people who "live the easy life" pay with it dearly in ways they don't even register). That's as rare as it is beautiful. Take the advice of anyone who never tasted this with a grain of salt. Especially if you use free time to seek out things you can do or create that are interesting to you -- I don't believe in relaxation or entertainment that much, I love being focused and busy, but I believe in autonomy and voluntary cooperation.

Everybody should... well, okay, 2 hours a month wouldn't be enough by a long shot, but I do believe that a light work life and above-starvation living standards for all people on Earth could and should be compatible with a dignified, strong personality. But we're really programmed to not even want that, to not even recognize that as the minimum responsible adults should settle for, but rather to belittle it as utopian. Yeah, it's a hard problem, but it doesn't get easier by working on unrelated gimmicks instead.

As you said yourself, the company already gets the end result it wanted out of you for that money. Now they get the bonus of you improving yourself and the world and spending more time with your son than you otherwise could. At least on a human level, anyone who doesn't see this as an added bonus to be happy about is petty. This makes the world much better than saving the company a job would, which often is just pissing down the drain. You didn't get this job with the intent of automating it, and you probably started trying without even knowing whether it would work, because you like coding. And then you knew that they wouldn't just say "good on you, enjoy the time with your son". I know I'm trying a bit hard here, but if you squint you might say you have to "lie" to get them to "do the right thing".

> You cannot strengthen one by weakening another; and you cannot add to the stature of a dwarf by cutting off the leg of a giant.

-- Benjamin Franklin Fairless

This is true. And yet, if you would let them, they would do it. To be fair, I know none of the people involved, but for a general "they" this is too often true. And nothing would be gained, only something would be lost, and you would have lost the most.

I say you got lucky, it's yours. Use a lot of it selflessly, but use it! Maybe ask a lawyer for advice, don't be reckless of course. But if your only danger to this is your conscience being infected with the general pathology of society, rectify that. Fuck survivor guilt, you know? Good for everyone who gets as far away from the prison system (in the sense of System of a Down) as they can. Don't leave us in the ditch, but never get dragged back in either.

_RPM 4 days ago 0 replies      
Workplace.stackexchange.com makes me cringe. It seems every post is written by socially challenged people with absolutely no social awareness or confidence. I had to stop subscribing to it.
Another Ransomware Outbreak Is Going Global forbes.com
504 points by smn1234  5 days ago   409 comments top 43
Animats 5 days ago 8 replies      
Maersk is down. Their main site says:

 Maersk IT systems are down

 We can confirm that Maersk IT systems are down across multiple sites and business units due to a cyber attack. We continue to assess the situation. The safety of our employees, our operations and customer's business is our top priority. We will update when we have more information. [1]
Maersk is the largest shipping company in the world. 600 ships, with ship space for 3.8 million TEU of containers. (The usual 40-foot container counts as two TEUs.) If this outage lasts more than a few hours, port operations worldwide will be disrupted.

[1] http://www.maersk.com/en

willstrafach 5 days ago 7 replies      
FYI to Sysadmins: Paying the ransom at this point will be a waste of money, as the contact e-mail address has been blocked.

https://posteo.de/blog/info-zur-ransomware-petrwrappetya-bet... (German)

https://posteo.de/en/blog/info-on-the-petrwrappetya-ransomwa... (English)

jannes 5 days ago 7 replies      
This is even more proof of how powerful a 0-day in the wrong hands can be.

All of the affected companies should be considered compromised by the NSA.

Actually, every single Windows PC with an internet connection that was in use before March 14 should be considered irrevocably compromised. Ransomware is much more visible than spyware. Think about all the spyware-infected PCs/networks that nobody knows about.

elcapitan 5 days ago 3 replies      
secfirstmd 5 days ago 1 reply      
(Sorry for the repost but I feel the pain of sysadmins so it might be useful to some people as everything melts down around them this evening)...

Hey, FWIW we had to do some response for ransomware cases recently.

There was a lack of decent stuff out there for how IT teams should deal with it. So we contributed to putting together this quick checklist:


Would be great if more people wanted to add to it.

bkor 5 days ago 2 replies      
The Netherlands and various other countries have created laws under which their version of the NSA and/or the police can hoard 0-days to be used for hacking.

This massive outbreak is so widespread that at this stage it appears that it either was a very recent 0day or something which only recently was fixed by a patch.

Instead of having loads of countries hoarding security problems I highly encourage a focus on security instead. Seems much better for the economy overall.

110011 5 days ago 4 replies      
Can someone provide a simple (but not overly so) explanation of how the current generation of ransomware operates, i.e., A) spreads and B) locks up the computer? Does it always require human intervention for A? Thank you.
maddyboo 5 days ago 4 replies      
Does anyone know if any tools exist on Linux that can be used for early detection of ransomware?

Something that monitors file access, disk activity, etc. for suspicious behavior and can trigger some action or alert?

I think I remember some discussion about using a 'canary file' - some innocent looking file with known contents which should never be modified. If a modification is detected, you know something fishy is going on.
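A minimal canary-file check along those lines might look like the following sketch (the decoy contents and polling approach are illustrative assumptions, not an existing tool; a real deployment would hook inotify rather than poll, and would place canaries in directories ransomware is likely to sweep):

```python
import hashlib
import os
import time

def write_canary(path):
    """Create an innocent-looking decoy file and return its SHA-256."""
    data = b"2017 invoice draft - do not delete\n" * 64
    with open(path, "wb") as f:
        f.write(data)
    return hashlib.sha256(data).hexdigest()

def canary_intact(path, expected):
    """False if the canary was deleted, truncated, or rewritten (e.g. encrypted)."""
    if not os.path.exists(path):
        return False
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == expected

def watch(path, expected, interval=5.0):
    """Poll the canary and raise the alarm on the first sign of tampering."""
    while canary_intact(path, expected):
        time.sleep(interval)
    print("ALERT: canary %s modified - possible ransomware activity" % path)
```

The alert action here is just a print; in practice it could shut down network shares or kill suspicious processes before more files are lost.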

nlte 5 days ago 2 replies      
This isn't yet the cyberattack "the world isn't ready for" (https://www.nytimes.com/2017/06/22/technology/ransomware-att...), is it?
mbaha 5 days ago 5 replies      
A friend sent me the bitcoin address; they've already collected $2,600.

[EDIT] Now $3,230.

Source: https://blockchain.info/address/1Mz7153HMuxXTuR2R1t78mGSdzaA...

vldx 5 days ago 1 reply      
Interestingly, WPP is mandating that all its employees shut down their computers, irrespective of the OS.

> As a precaution, WPP is mandating that everyone immediately shut down all computers, both Macs and PCs. This applies to you whether you are in the office or elsewhere. Working on an office computer remotely is not an option. Please leave your computers turned off until you hear from us again.

> Many thanks for your co-operation and patience.

> Best regards,

dz0ny 5 days ago 0 replies      
Public analysis is tracked here https://otx.alienvault.com/pulse/59525e7a95270e240c055ead/

Seems that payload servers are in Germany, France, and Malaysia.


mihaifm 5 days ago 3 replies      
> with WannaCry it was alleged a nation state was likely responsible for spreading the malware: North Korea

Is there any evidence for this? Looks like another fake rumor.

the_cat_kittles 5 days ago 6 replies      
I said this before and it was met with mostly hostility, but I'm still wondering... Bitcoin has enabled ransomware, so it's a boon to crooks. What has it done for non-crooks? I don't mean conceptually (no Fed! decentralized! etc.), I mean since it came into being, what has it done for you personally? For me: I bought a VPN subscription, anonymously, which probably wouldn't have been as easy without BTC. But I would personally trade that for not having ransomware attacks. Thoughts?
voidmain0001 5 days ago 1 reply      
Kaspersky wrote about Petya 16 months ago. https://blog.kaspersky.com/petya-ransomware/11715/ Has the delivery changed causing it to resurface again?
tudorconstantin 5 days ago 4 replies      
Maybe this is the year of Linux on desktop.
memracom 4 days ago 0 replies      
Note that having a good multi-generational backup system in place for all machines, servers and laptops, would render this kind of ransomware harmless.
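
The multi-generational idea can be sketched as rotated snapshots: even if today's snapshot copies in already-encrypted files, the older generations survive. A minimal illustration (the function and layout are hypothetical; real setups would use rsync hard-links, tape, or other offline media rather than plain copies):

```python
import os
import shutil
from datetime import datetime

def take_snapshot(src_dir, backup_root, keep=7):
    """Copy src_dir into a new timestamped generation under backup_root,
    then prune so only the newest `keep` generations survive.

    Ransomware that encrypts src_dir today cannot rewrite yesterday's
    generation, assuming backup_root lives on separate (ideally offline
    or read-only) storage."""
    os.makedirs(backup_root, exist_ok=True)
    # Microsecond-resolution stamp keeps names unique and lexicographically
    # sortable, so sorted() order equals chronological order.
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S-%f")
    dest = os.path.join(backup_root, stamp)
    shutil.copytree(src_dir, dest)
    # Prune everything but the newest `keep` generations.
    for old in sorted(os.listdir(backup_root))[:-keep]:
        shutil.rmtree(os.path.join(backup_root, old))
    return dest
```

Run from cron, `keep=7` daily generations give a week-long window to notice an infection and roll back.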

But the state of IT has deteriorated so badly these days because management doesn't care any more. After all why care when you can just take your severance pay and get an increase in salary and more responsibility at another company. Rinse and repeat.

It used to be that the primary job of system admins was to keep the data safe from loss. That was more important than keeping the systems running. How did we lose this?

hackrack 5 days ago 1 reply      
Idea: What if the purpose of these WannaCry style ransomware attacks isn't to get people to pay in Bitcoin, but to drive up the price of Bitcoin?
HIBC2017 5 days ago 0 replies      
If you're infected, don't pay the ransom. The email address that's used has been blocked by the email provider.


nuclx 5 days ago 3 replies      
As someone affected by the ransomware - did anyone else notice empty console windows popping up from time to time the days before the ransomware triggered the encryption?
kuon 5 days ago 0 replies      
Those attacks are still "gentle": if you have a read-only backup (and you should), you can recover with near-zero data loss.

What I fear is a cancer-like virus: not wiping or encrypting data at time T, but introducing subtle errors over a longer period. You would be contacted by hackers saying your last 6 months of data contain errors. That's scary.
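
One defense against that slow-corruption scenario is a content-hash manifest taken while the data is known good and stored somewhere the attacker can't reach; later, any silently altered file stands out. A minimal sketch (function names are illustrative; legitimate edits would of course also show up, so in practice you would refresh the manifest after each intended change):

```python
import hashlib
import os

def build_manifest(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            manifest[os.path.relpath(path, root)] = digest
    return manifest

def find_tampered(root, manifest):
    """Return relative paths that are missing or whose bytes changed."""
    tampered = []
    for rel, digest in manifest.items():
        path = os.path.join(root, rel)
        try:
            with open(path, "rb") as f:
                if hashlib.sha256(f.read()).hexdigest() != digest:
                    tampered.append(rel)
        except FileNotFoundError:
            tampered.append(rel)
    return tampered
```

Tools like Tripwire and AIDE apply the same principle at the filesystem level.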

tonyplee 5 days ago 1 reply      
Wonder if they manage to disable the UK's Trident Nuclear Submarine this time.

"Windows for Warship" https://en.wikipedia.org/wiki/Submarine_Command_System

"Want to Nuke someone, please send Bitcoin to unlock the systems."


mighty_warrior 5 days ago 1 reply      
Your own fault if you didn't patch out EternalBlue. No sympathy for hacked orgs.
jz10 5 days ago 2 replies      
My friend's work laptop is a victim of this same attack... all the way here in the Philippines.

There was a company wide email blast to disconnect all workstations from the internet at once.

Fascinating development

r721 5 days ago 0 replies      
>Cyberattack hits entire Heritage Valley Health System, shuts down computers

>A cyberattack is affecting the Beaver and Sewickley hospitals and all other care facilities in the Heritage Valley Health System on Tuesday.


jl6 5 days ago 1 reply      
Could someone write a whitehat worm or virus to get into all those vulnerable Windows systems and close the door behind them by patching the hole?
andruby 4 days ago 0 replies      
I always replay the end-game of Uplink [0] in my head when I read news like this.

Great game with great music.

[0] https://www.introversion.co.uk/uplink/

runeks 5 days ago 0 replies      
> Ukraine's government, National Bank [..]

Now there's a new attack target: the central bank. Send 100 BTC to this address and I will decrypt the balances stored by your central bank, so you, again, know how much money you own.

kator 5 days ago 1 reply      
emersonrsantos 5 days ago 0 replies      
Does anyone know if this decrypt app still works?


gmisra 5 days ago 1 reply      
Is anyone aware of an entity that attempts to objectively quantify the economic impact of an event like this (ransoms paid, data lost, labor hours lost, new security costs, etc)?
strictnein 5 days ago 2 replies      
On a related note, I don't understand the reason behind transactions like this:


Is there something special about using numerous senders like that?
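For context on why a single payment can appear to have "numerous senders": a Bitcoin transaction spends multiple previous outputs (UTXOs) as inputs, and wallets routinely combine many small UTXOs to cover an amount. A toy sketch of the greedy coin selection a wallet might perform (real wallets also account for fees and privacy, which this ignores):

```python
def select_inputs(utxos, target):
    """Greedy coin selection: spend the largest UTXOs first until the
    target amount is covered. Returns (chosen_inputs, change)."""
    chosen, total = [], 0
    for value in sorted(utxos, reverse=True):
        if total >= target:
            break
        chosen.append(value)
        total += value
    if total < target:
        raise ValueError("insufficient funds")
    return chosen, total - target
```

For example, `select_inputs([5000, 1200, 800, 300], 6000)` combines the 5000 and 1200 inputs and returns 200 as change, which is why even a simple payment can show several "senders" on a block explorer.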

zvrba 4 days ago 0 replies      
So... ransomware authors want payments in Bitcoin. The obvious counter-attack from governments would be to target and shut down all services exchanging bitcoins (or other digital money) for real money. Heck, they can hack them and delete all data, so they shut down on their own.
r721 5 days ago 0 replies      
>We have confirmed U.S. cases of Petya ransomware outbreak


paulpauper 5 days ago 3 replies      
Store important stuff on external hard drives.

Never download suspicious stuff, especially from emails.

faragon 5 days ago 1 reply      
Why are ransomware authors not yet in jail? /cc FBI CIA BND MI5 FSB
kronos29296 5 days ago 0 replies      
I remember reading something about a guy warning about intrusions on his company during Wannacry to steal company data and install malware. Now we have this. This is giving me goosebumps.
agumonkey 5 days ago 1 reply      
How can one quickly check if their OS is vulnerable? I know MS pushed updates, but sometimes updates get stuck, fail to install, or are delayed by the user... so.
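One rough way to answer this is to check the installed-update list for the KBs that carry the MS17-010 (EternalBlue) fix. The sketch below is hypothetical: the relevant KB identifiers differ per Windows version, the three listed are examples only, and the installed list would come from something like PowerShell's `Get-HotFix` or `wmic qfe`; verify the exact KB numbers against Microsoft's MS17-010 security bulletin.

```python
# Example KBs that shipped the MS17-010 fix on some Windows versions.
# These are illustrative only -- confirm the right ones for your OS.
MS17_010_KBS = {"KB4012212", "KB4012215", "KB4013429"}

def has_ms17_010(installed_kbs):
    """True if any known MS17-010-carrying update appears in the
    list of installed hotfix IDs (case-insensitive)."""
    return bool(MS17_010_KBS & {kb.upper() for kb in installed_kbs})
```

Feed it the hotfix column of your inventory tool's output; an empty intersection on a machine exposing SMB is a red flag worth investigating.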
athenot 5 days ago 2 replies      
Do these attacks affect anything else beyond Windows?
jdc0589 5 days ago 0 replies      
FYI, looks like this is still using EternalBlue.
butz 5 days ago 0 replies      
One of my clients got some strange emails around 12:00 GMT with links to probably infected websites. Is this related to ransomware?
agumonkey 5 days ago 3 replies      
Were these ships also oil tankers?
dagaci 5 days ago 2 replies      
I'm afraid that this attack demonstrates that the old PC architecture (side-loading any app, userspace, privilege escalation, low-level file sharing functionality) just isn't fit for purpose.

If malware can exploit a 0-day, 100-day, or 1000-day security hole in a corporate network of 2000 machines, it's too easy for that malware to share itself across the network and send email attachments to AllUsers (every single company I've worked for still allows Everyone to send anything to Everyone).

Microsoft's next XP patch should be to remove SMB functionality or just outright disable it (and probably remove IE and other nonsense installed by default too).

And when Windows 7 expires, the final patch should be a severe lockdown too.

Scott's Cheap Flights: Growing a small side project into a booming business indiehackers.com
541 points by bkidwell  2 days ago   222 comments top 31
chrisballinger 2 days ago 8 replies      
Something I've noticed that's lacking from most aggregators like Kayak and SkyScanner (my current fave), is the ability to increase the price of the ultra-budget carriers by adding their carry-on baggage fees and whatnot. I don't care about checked bag fees, but some of the carry-on fees are ridiculous ($60 per overhead item?). These budget carriers are cheating the system by looking like the cheapest option, when in reality one of the more expensive carriers is actually cheaper once you compare them apples-to-apples.
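The apples-to-apples comparison the commenter wants is straightforward once the mandatory fees are known; a hypothetical sketch (the carriers and fee amounts below are made up for illustration):

```python
def effective_price(fare):
    """Base fare plus unavoidable extras (here: one carry-on bag)."""
    return fare["base"] + fare.get("carry_on_fee", 0)

def rank_fares(fares):
    """Sort offers by true out-of-pocket cost, not headline price."""
    return sorted(fares, key=effective_price)
```

With a $99 budget fare carrying a $60 overhead-bag fee versus a $140 legacy fare with the bag included, the legacy carrier correctly ranks first ($140 vs $159), which is exactly the reordering the aggregators fail to do.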
jmarbach 2 days ago 11 replies      
Disclosure: I built a competing tool for finding cheap flights, https://concorde.io.

I think the secret of Scott's success is his incredible writing ability. Finding the cheap flights is the easy part. Communicating with users in a way that consistently wins over their hearts and minds takes a high level of consideration and creativity. This writing ability combined with his co-founder's understanding and application of direct response marketing has produced fantastic results. I am glad to see their success.

natch 2 days ago 3 replies      
Ouch, their site triggers one of my pet peeves about travel sites: listing prices without making clear whether the deal is one-way or round trip.

 NYC to Paris: $260 Normal Roundtrip Price: $900
So did people who got that deal save $380? Or did they save $640? Which is it?

As feedback to Scott and Brian, this actually makes me hesitate to sign up for premium, because I don't know what level of discount we're talking about here. A 2x difference (one way versus RT) is significant.
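The two readings differ by exactly the price of the second leg; a quick sketch of the arithmetic:

```python
def savings(deal_price, normal_roundtrip, deal_is_roundtrip):
    """Savings vs the normal roundtrip price. If the advertised deal
    is one-way, a full roundtrip costs twice the deal price."""
    trip_cost = deal_price if deal_is_roundtrip else 2 * deal_price
    return normal_roundtrip - trip_cost

# NYC -> Paris example from the comment:
# roundtrip deal: 900 - 260   = 640 saved
# one-way deal:   900 - 2*260 = 380 saved
```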

jwong_ 2 days ago 4 replies      
One thing I'd like to vent about is that Scott seems to be making a lot of money selling e-mail addresses.

I signed up for a couple different contests using unique e-mail addresses generated specifically for each contest (Yes, they were separate e-mails for separate contests; No, I did not double-dip my entries). I was then signed up for 2-3 e-mail lists for 3rd party companies over the course of a couple weeks. These companies weren't even doing anything tangentially related to what Scott writes about (cheap flights). I wish it was more transparent that he was going to sign you up for these random companies.

csallen 2 days ago 0 replies      
Scott also came onto the Indie Hackers podcast this week to talk a little more about his business. For example, about why they don't do any paid advertising to acquire customers: https://www.indiehackers.com/podcast/020-scott-keyes-of-scot...
thebiglebrewski 2 days ago 4 replies      
This is amazing but is it the kind of business that can last for more than a few hundred thousand users? At a certain point people get none of the free deals, because too many are trying to book at once, right?
smaili 2 days ago 3 replies      
I hate to be the one to ask, but how much moat does this actually have? Couldn't Kayak literally just add a subscribe or alert me later input and have their massive infrastructure automate this very thing?
Huhty 2 days ago 1 reply      
What fascinates me with this story is how low tech everything was. A simple landing page, email list/newsletter, and "value" in the form of travel deals. Anyone can do it, and the barrier of entry is minimal.
joelrunyon 2 days ago 1 reply      
Good to see Brian & Scott on here. Crazy explosive growth over the past 18-24 months!

Great evolution of an MVP into a legitimate, mid-size business without taking tons of funding to do it.

triangleman 2 days ago 4 replies      
So, Scott originally accumulated all those miles working for ThinkProgress? How does that work? You buy the flights and expense them back to the company, pocketing the miles?
Gys 2 days ago 3 replies      
Reminds me of: www.holidaypirates.com

With special websites for most bigger European countries. It's more or less doing the same, with lots of affiliations, an app, a WhatsApp group, etc.

SadWebDeveloper 2 days ago 2 replies      
Just wondering if the guys can automate the scraping part; it seems unlikely, since this is usually information that needs to be handpicked or "moderated" by one or more humans.
joering2 2 days ago 1 reply      
If you receive Scott's emails for longer than one week, you will quickly have the answer to your question "why not to scrape".

I think enough people turn to premium membership exactly because of the personal touch of Scott and his team. You get the feeling this is not a simple aggregator -- I always enjoy reading Scott's little tips: "when you buy a ticket to a certain city, don't forget to visit this specific point of interest". I end up researching those and always come up with fun info, making me believe Scott is a pro and knows what he is doing/researching for me. A scraper would feel inhuman, you would quickly realize that, and it would most likely convert 1% of what they convert.

Scott - I am very curious how did you initially advertise your newsletter? First weeks/months you were live - how did you get your initial traffic?


kingosticks 2 days ago 1 reply      
I recently subscribed to the UK version of this (https://www.jacksflightclub.co.uk). I didn't know there was a US version, I guess there might be lots of similar services. I wonder who came first.
knownothing 2 days ago 0 replies      
So now people are paying other people to browse FlyerTalk for them? At least if you're part of the forum you can contribute back to the community.
tmaly 2 days ago 1 reply      
I just listened to the podcast for Scott's Cheap Flights.

Fantastic job Courtland. Your podcast is one of my favorites in this space.

pgodzin 2 days ago 1 reply      
It's interesting that they employ a dozen flight searchers who manually look all day. Are there any flight prices APIs that can be scraped, looking for significant outliers?
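One simple outlier test a fare-watching script could use: flag any price that drops well below the recent average. A sketch (real fare data is noisier, so the window and threshold would need tuning):

```python
from statistics import mean, stdev

def find_deals(prices, window=7, threshold=2.0):
    """Indices where a price sits more than `threshold` standard
    deviations below the mean of the preceding `window` prices."""
    deals = []
    for i in range(window, len(prices)):
        prior = prices[i - window:i]
        mu, sigma = mean(prior), stdev(prior)
        if sigma > 0 and prices[i] < mu - threshold * sigma:
            deals.append(i)
    return deals
```

On a series hovering around $900, a sudden $260 fare is many standard deviations below the recent mean and gets flagged; the human searchers presumably add the judgment a naive statistic lacks (mistake fares, bad routings, basic-economy restrictions).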
notadoc 2 days ago 0 replies      
Great read. The indiehackers site is full of interesting interviews and stories, well worth all of them as there is always something to learn.
Taylor_OD 2 days ago 0 replies      
I've been on the list for a little while. I haven't booked any flights from the deals yet because I've got a fair bit of traveling planned already this year, but I'm starting to look at flights for next year. Check out the list if you haven't. I live in Chicago and flights anywhere are usually $500 or less.
kitcar 2 days ago 0 replies      
Interesting that they have been able to raise prices over time - a fixed number of cheap seats available at any moment means the more users on the email list, the less likely an individual user will be able to extract value from the list - hence list growth actually reduces the value it delivers.

I guess fear of missing out is a strong sales tool!

jpindar 2 days ago 2 replies      
OK, so how many of us are wondering what other kinds of products this would work for?

I've seen similar newsletters for books, for Amazon Subscribe & Save, and for low cost Amazon items.

jpster 2 days ago 2 replies      
>We actually don't do a ton of A/B testing or worry much about open and click rates, and here's why: Our revenue model is subscription based. We don't take any commissions or have any ads in the premium emails. Our only incentive is to keep premium subscribers happy.

I find this a surprising statement -- aren't the emails the primary channel for trying to convince a free user to upgrade to paid? Why not try A/B testing to try and boost that percentage from the ~10% to something higher?
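For what it's worth, measuring such an upgrade experiment is standard statistics: with list sizes in the hundreds of thousands, even a small lift in the free-to-paid rate is detectable. A sketch of a pooled two-proportion z-test (|z| > 1.96 is significant at the 5% level; the conversion counts below are invented for illustration):

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic comparing two conversion rates using a pooled
    standard error (variant B minus variant A)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

A hypothetical 10% vs 12% upgrade rate on 5,000 recipients per variant gives z of roughly 3.2, comfortably past the 1.96 cutoff, so even modest A/B tests on the upgrade email would yield usable answers.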

jly 2 days ago 0 replies      
Awesome writeup. It's great to see companies that can make this work without taking any funding. I had no idea there were so many people behind this.

I have nothing to add except that I've been a very happy paid customer for several months now. These guys run a fantastic service that has been worth every penny. Yes, some travel companies that have extensive infrastructure could probably do what is being done here, but they don't.

IPS3c 1 day ago 0 replies      
Hidden fees and baggage charges are the worst. Airlines need to be more up front with their charges.

This is a cool business model though.

benjaminbeck 2 days ago 0 replies      
Amazing how much they did without too much technology or upstart cost!
desireco42 2 days ago 0 replies      
Definitely great business and great example of side business.
losiiiii 2 days ago 0 replies      
I'm not a fan of the fact that on their website the testimonials are the same (with the currency changed) no matter what location I choose.

The destinations in the locations are of course also the same, so it just kinda seems like they are trying to trick me. Kind of reinforcing the feeling I generally get from this kind of business. If they have that many happy users, they could at least get real testimonials from each place.

lavezzi 2 days ago 1 reply      
Still don't see the appeal. I get better and quicker results from theflightdeal.com and flyertalk.
BryanBryce 2 days ago 0 replies      
The system is down
bhyam 2 days ago 0 replies      
Really cool story.
The .feedback scam everythingsysadmin.com
555 points by 0x0  3 days ago   159 comments top 49
wodenokoto 3 days ago 9 replies      
Wow, the author is not exaggerating when saying it is a scam.

Looking at the .feedback page for Stack Overflow, it says at the top, in fairly large letters "We make Stack Overflow, where the world's developers get answers, share knowledge & find jobs they love. Also proud builders of the @stackexchange Q&A network."

Then at the absolute bottom, in small, washed out print it says "Disclaimer: This site is provided to facilitate free speech regarding Stack Overflow. No direct endorsement or association should be conferred."

So, users are not supposed to confer that a page claiming to be by the makers of stack overflow, are associated with SO? Beyond any reasonable doubt, the people behind that site are trying to scam visitors.

If the creators of the .feedback pages are also the TLD owners, it seems obvious to me that they should face legal charges and be stripped of the TLD.

finnn 2 days ago 1 reply      
The whois data for domains under this TLD is kind of interesting

whois stackoverflow.feedback returns a phone number with a CNAM of STACK OVERFLOW, and the same address listed on https://stackoverflow.com/company/contact

whois google.feedback returns the phone number +1.1978600872, which is not a valid US area code last I checked. The address appears to be a PO box in Seattle.

facebook.feedback has the same bogus phone number and the same PO box as google.feedback.

myaccount.feedback (where they send you if you try to vote on anything, presumably other things too) has a residential address on Mercer Island, WA and a Google Voice number listed.

Calling the Google Voice number results in a voicemail where a person identifies themselves as "Jay" (presumably Jay Westerdal[0] of Top Level Spectrum, Inc who owns the .feedback TLD[1])

Another thing of interest is that myaccount.feedback encourages you to login with Google/Facebook/Twitter/LinkedIn

[0]: https://icannwiki.org/Jay_Westerdal
[1]: https://icannwiki.org/Top_Level_Spectrum,_Inc.

xg15 2 days ago 2 replies      
Comment from the article:

> I would like to add that IMHO all new TLDs are scam. Brand owners are forced to register their name in many of them.

.feedback is just the tip of the iceberg.

I think this is the core issue here. I remember that a few of ICANN's new TLDs caused similar issues. The idea of selling TLDs to companies for use at their discretion is horrible enough, but they also seem to be completely ignorant of this problem.

This feels as if ICANN is trying to become the new FIFA.

(I don't really have much empathy for Google etc. having to pay $600 a year, but the same problem could also hit lots of small sites. Also, with the current trend, TLDs seem to lose all structure and meaning and just turn into another brand vehicle or trade asset.)

phantom_oracle 2 days ago 1 reply      
On the topic of scams, has anyone ever discovered what ICANN did with the enormous amount of money it raised by selling these TLDs at a premium?

I recall .blog being purchased by Automattic for 15-20M (that being just 1 example).

ICANN is to tech what the SEC is to finance: A corporate revolving-door where you join to do your corporate-masters bidding and then move back to your 7-figure job.

emidln 2 days ago 2 replies      
I worked for a feedback company in the past. Nobody cares even when they pay for a feedback widget and get the reports emailed/pushed to their managers. Most of the clients didn't even login to the site or collect the feedback (sign up for notifications, download a csv, etc). AFAICT, companies paid for the feedback widget to check some box on a yearly powerpoint along the lines of "manages user feedback". It's a good racket to be in once you realize that nobody cares about your product and just wants a good-looking widget that is effectively write once. You just build good sales and relationship managers then hit up enterprise.

I'm 90% sure that the entire concept of the feedback form/widget is only still with us as a way of channeling user rage into non-social platforms. "Here, complain into this void that won't hurt our public opinion!"

jommi1 2 days ago 0 replies      
Looks like they were already found out in March...? (0) How are they still up??

The company behind this (1), has .realty .forum .contact .pid and .observer and all the "sell" pages lookalike. Holy fuck this looks dirty.

(0) https://www.icann.org/uploads/compliance_notice/attachment/9...
(1) http://www.topspectrum.com

SimonPStevens 2 days ago 1 reply      
The real scammers here are ICANN and the atrocious way they handled the generic TLDs.
toni 2 days ago 0 replies      
The .feedback sites also load a "fingerprint" script which tries to gather all kinds of info from your browser.


captainmuon 2 days ago 0 replies      
I wonder how they can get away with this, and not be sued into oblivion.

On an amusing note, the pages barely load for me. Could this be the first time the HN hug of death took down a whole TLD?

speedplane 2 days ago 0 replies      
GTLDs were introduced only a few years ago, but it's clear the implementers were conflicted in reconciling the freedom of the "earlier internet" with what the internet has become today. They wanted to honor the freedom of the DNS system. But they also recognized the internet has changed enormously. Today, the internet is largely ruled by large brands, just as most mainstream media is.

The GTLD system is a pretty poor compromise between those two positions. Users don't have the freedom of the early DNS system, and brands now have a huge enforcement burden to manage. Registering sites like coca-cola.fun and coca-cola.shopping is almost certainly not allowed, and they will be shut down eventually, but it now costs Coke a sizable sum to monitor and take down the infringing websites.

accountyaccount 2 days ago 3 replies      
They also own .sucks... for which they were charging $2500 per domain. I don't know why these manipulative TLDs are allowed without considerable regulation. This latest example seems to be more direct extortion, and is likely illegal in many places.
_jal 2 days ago 1 reply      
Ick. In a way, it is a sort of corporate version of those mugshot sites.

I understand a lot of other scams. (Not condone, but I see the attraction of running it.) This one just seems like a lot of work to put into something that I really don't see the targeted companies choosing to go along with. Seems like borrowing a ton of grief - at the very least, they'll be hearing from a bunch of crabby lawyers.

ryan-c 2 days ago 0 replies      
Someone appears to have figured out how to inject arbitrary javascript into the .feedback pages, so be careful visiting.
gumoro 2 days ago 1 reply      
Go to http://feedback.feedback/ and write a sternly-worded review, that'll teach'em.
dasil003 2 days ago 2 replies      
Reminds me of GetSatisfaction back in the day. It was never as much of an outright scam as this, but they definitely had that protection racket vibe about it.
Marat_Dukhan 2 days ago 4 replies      
Interestingly, amazon.feedback redirects to amazon.com, so Amazon did pay
r1ch 2 days ago 1 reply      
Aren't all ICANN accredited registrars bound by the UDRP[1]? It seems like any domain registered "on behalf" of a company could easily be taken down / claimed by the UDRP process.

[1] https://www.icann.org/resources/pages/help/dndr/udrp-en

chadcmulligan 2 days ago 1 reply      
just lodged a complaint on icann.feedback, should fix it.
thinbeige 2 days ago 0 replies      
Slightly OT: This reminds me of Glassdoor. Glassdoor is more subtle but has a similar business model.

Pay for the 'employer branding' package way more than 600/year (rather per month) and you get rid of the worst employee feedbacks.

ollybee 2 days ago 0 replies      
This seems a similar model to the .sucks TLD. They offered domains to company owners for $2500 before offering them to the public.
blazespin 2 days ago 2 replies      
You can't do "or pay $600/year to take the web site down". There are laws against that. It's called extortion. That being said, they could be more subtle like Glassdoor / Yelp / etc. People would actually have to find .feedback domains useful for that to happen, however. My sense is that this will just go nowhere, no business will be made, and it's all a lot of sound and fury over nothing.
UnrealIncident 2 days ago 0 replies      
They're also fingerprinting every user. I noticed because it caused Firefox to hang for over a minute.
publicopinionsa 2 days ago 1 reply      
I would like to add that IMHO all new TLDs are a scam. Brand owners are forced to register their name in most of them.
rrauenza 2 days ago 3 replies      
Could the major browser providers just start blacklisting these kinds of TLDs? Or grey-listing them with huge warnings?
jacobmalthouse 2 days ago 0 replies      
Tangential but important. Some new domain endings are really focussed on an ethical approach to using what we view as important public infrastructure (meaningful words + DNS = impact). At home.eco we embedded ethics into our governance (ICANN Contract) our corporate DNA (B Corp) and our product (profiles.eco). We think there is real potential to leverage the DNS for good.
sergiotapia 2 days ago 0 replies      
Fucking genie's out of the bottle now isn't it. :)

A bit late to call party foul when absurd tlds are available more and more. Ocean of piss and all that. Embrace the chaos.

warent 2 days ago 0 replies      
An unethical and shamefully pathetic extortion scheme. This is really a disappointing and completely uninnovative, destructive direction to take the internet in
babuskov 2 days ago 0 replies      

Looks like they got some great feedback.

BLanen 2 days ago 0 replies      
Couldn't companies do a class-action on the fact that they pre-registered domains with trademarks belonging to other companies but not giving those domains to those companies?

There's precedent for getting a domain based on trademark from someone else.

yellowapple 2 days ago 0 replies      
I'm normally thrilled to hear about new TLDs.

This is an exception. It'd be great in theory, but the preregistered scam domains are absolutely ridiculous.

I hope the likes of Google and Facebook come down on these sites, and come down on them hard.

snakeanus 2 days ago 1 reply      
This is why people should move to things like OpenNIC or to a DNS-less address scheme like the ones used in namecoin/tor/i2p. The ICANN scammers should be stopped.
crispytx 2 days ago 1 reply      
Scammers like this give capitalism a bad name.
kierenj 2 days ago 0 replies      
I hit the "Claim" button, and it's gone to a page asking for my CC details. I wonder if anyone could claim it..
martin-adams 2 days ago 0 replies      
This feels very similar to Trustpilot in my opinion. The format and description of the company (in the first person) is extremely similar. The only difference is branding. Trustpilot don't have a domain looking like the company, but companies can pay to take control of their brand on the platform.
dabber 2 days ago 0 replies      
The fine folks at .feedback seem to be Kurt Vonnegut fans.

cdn.feedback serves up this on an otherwise empty html doc:

 <!-- There's only one rule that I know of, babies - God damn it, you've got to be kind -->

TeMPOraL 2 days ago 0 replies      
See the thread here[0] for info on who's likely behind it. It seems to be operating since at least 2015 already.

[0] - https://news.ycombinator.com/item?id=14669058

jheriko 2 days ago 0 replies      
This is the price of having no (or nearly no) regulation or standards.

As much as I appreciate the arguments for why that is the case, it's important to recognise the cost of that philosophy in practice.

Still, I hope there is some legal action that comes against them.

SippinLean 2 days ago 0 replies      
If you click through "Claim this site" there doesn't seem to be anything in the form of verification, seems that just anyone can claim them. The price for SO was $750 though, not $600.
metaphor 2 days ago 0 replies      
> If they do discover it, they are given a choice: Pay $20/month to receive the feedback, or pay $600/year to take the web site down.

Corporate-driven lawsuit for trademark violation?

pawy 2 days ago 0 replies      
Doesn't Trademark protect them ? I thought that a simple request could take down such domain names. (1000 buck per name if I recall)
ara24 2 days ago 0 replies      
Browsers should just s/.feedback/.com/, problem solved.
King-Aaron 2 days ago 0 replies      
There's some stellar reviews popping up there already
natch 2 days ago 0 replies      
Can the TLD just be blackholed by major DNS providers?
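For a resolver operator this is technically a one-liner: refuse the entire zone at the recursive resolver. For example, with Unbound it might look like the sketch below (verify the option name and semantics against your resolver version's documentation):

```
# unbound.conf -- refuse all lookups under .feedback
# (sketch; check your Unbound version's docs for local-zone types)
server:
    local-zone: "feedback." refuse
```

Whether any major public DNS provider would actually do this is a policy question more than a technical one.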
Beltiras 2 days ago 1 reply      
www.amazon.feedback redirects to amazon.com login page. They paid the extortion?
babuskov 2 days ago 0 replies      
I wonder, how does this compare to Yelp?
Dot_Feedback 2 days ago 2 replies      
Hello, I am Jay, the CEO of .Feedback.

I wanted to correct a few facts.

First, while it has been reported that the Registry pre-registered 5,000 domains, this is incorrect. We have not registered the sites you mentioned. You can check the whois and look up the owners.

Second, the pricing referenced is out of date and not accurate. Prices can be found as low as $5 for a .feedback domain. Check out Crazydomains.com.

lemagedurage 2 days ago 2 replies      
A site that provides uncensored free speech aimed at companies is considered a huge scam? It looks like the comments are not manipulated, and I appreciate that authenticity more than practices happening around Google, YouTube, Facebook etc.
oh_sigh 2 days ago 2 replies      
How is this a scam? The result is obviously just a ratings site which never purports to be the sites referenced in their URL...why does it matter if you type in google.feedback or feedback.com/google or google.feedback.com into your browser? No reasonable person could be misled to believe that google.feedback/ was at all related to Google.

edit: Okay, I guess it is a pretty messed up website. If you go to reply to something, it gives you the ability to "officially" reply (as, say Google) for a mere $29 per message. This doesn't seem like extortion, but it is a pretty horrible business idea.

Analyzing Cryptocurrencies Using PostgreSQL timescale.com
513 points by akulkarni  4 days ago   185 comments top 15
inlineint 4 days ago 4 replies      
A nice analysis, and it shows how SQL makes it easy to quickly explore data.

However, it seems like the plots of the results of the queries were done manually by writing some code to make each plot.

I can't stop but mention that using Apache Zeppelin Notebook [1] with Postgres interpreter for Zeppelin [2] (Spark SQL should provide comparable analytical capabilities as well, but this comment is not about it) it is possible to show graphical representation of query results without writing a single line of code.

[1] https://zeppelin.apache.org/

[2] https://zeppelin.apache.org/docs/latest/interpreter/postgres...

adamnemecek 4 days ago 24 replies      
So what exactly is the current attitude towards cryptocurrencies? E.g. a coin named Diamond went up 200% since yesterday. https://coinmarketcap.com/currencies/diamond/ However it's not obvious what's driving it.

Also, on Saturday, there was an ICO of this coin called TenX (https://www.tenx.tech/) which sold 100,000 ETH (~$30M) worth of TenX in like a minute. Is it all pump and dump?

I can't imagine that anyone has definite answers but I'm interested to hear some opinions.

RobAtticus 4 days ago 0 replies      
This was work done by our intern over the past few days using TimescaleDB. Our team is around to answer any questions!
gthtjtkt 4 days ago 2 replies      
Decent SQL skills for an intern, but I don't really see any "analysis" here. As someone who's been buying and following cryptocurrencies intently for the past few months, I can confidently say there's nothing in this article that even a novice investor would be interested in.

The TL;DR is "Prices went up, prices went down. Some more than others." There's nothing actionable in that.

And a lot of the currencies in your dataset have basically zero volume. AMIS -- your "most profit in a day" -- appears to have a 24h volume < $1,000 and isn't even traded on a single major exchange. Again, that information serves no purpose, and coins like that should probably be excluded as outliers. The fact that they were not only included but highlighted tells me that the author knows nothing about the market they're attempting to analyze.

This looks more like clickbait / advertising, to be honest. "Hey, we need someone to write an article on cryptocurrencies because it's a hot topic. Just give me 2,500 words and a few graphs ASAP. Doesn't matter if it's relevant."

cupcakestand 4 days ago 4 replies      
OT and hijacking the thread:

"TimescaleDB (the OPs product) is a new open source time-series database built up from PostgreSQL."

Do you know good alternatives or which distributed databases are generally well suited for huge volumes of time-series data? Cassandra?

mrb 4 days ago 1 reply      
"Turns out that if you had invested $100 in Bitcoin in July 2010, it would be worth over $5,000,000 today."

Actually if you had invested $100 three months earlier, in April 2010 (when BTC started trading at $0.003 on BitcoinMarket.com) it would be worth $87M today.
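The figure follows from simple arithmetic; a sketch (the mid-2017 BTC price used here is an assumed round number):

```python
def btc_return(invested_usd, price_then, price_now):
    """Value today of a dollar amount invested at an earlier BTC price."""
    coins = invested_usd / price_then
    return coins * price_now

# $100 at $0.003/BTC buys ~33,333 BTC; at an assumed ~$2,600/BTC
# (roughly the July 2017 level) that is ~$87M, matching the comment.
```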

leot 4 days ago 2 replies      
Assume near-perfect liquidity among x-coins. Assume, also, that software makes starting a new kind of coin as easy as starting a small business. This implies that the crypto-currency market cap will be divided among far more than "21M".

So, given near-perfect liquidity, what makes any particular coin valuable? For shares it's earnings per share. Will we need something akin to "earnings per coin"?

spreadstreet 4 days ago 0 replies      
If you need additional datasets, head over to https://spreadstreet.io where you can download over 3,000+ datasets across hundreds of digital currencies.

Nice article! Had a good time reading it.

partycoder 4 days ago 0 replies      
placeybordeaux 4 days ago 0 replies      
Props for providing the scraped data set as well!
mbonzo 4 days ago 1 reply      
In case anybody here finds this useful, I made a screencast following the blog for setting up and loading the data: https://www.youtube.com/watch?v=RSFC24FMxy4&feature=youtu.be
irrational 4 days ago 0 replies      
I remember when Bitcoin first came out years ago. Back then it was still possible to generate coins on your slow desktop machine without too much work. I considered doing so, but figured they would never go anywhere, so why bother. Sigh...
EGreg 4 days ago 1 reply      
Where can we get access to this dataset and time series?

I would like to run my own analysis too!

smaili 4 days ago 8 replies      
Slightly off topic, but what are good CoinBase alternatives for buying/selling that cover currencies outside of just Bitcoin, Ethereum and Litecoin?
riston 4 days ago 0 replies      
Well written article and great analysis.
Take Naps at Work nytimes.com
518 points by aarohmankad  5 days ago   272 comments top 48
rectang 4 days ago 10 replies      
For the last 8 years at work, I napped for 12-40 minutes basically every afternoon. I headed out into the parking lot and slept in the back of my car. (I bought the model I own after testing to make sure the back was nappable.)

My most productive, creative time was the hour or two in the late afternoon following the nap.

All my co-workers knew the deal and so did my boss. Nobody else napped, but my napping was normal and accepted.

The idea of giving that up if I ever go back to a "normal" butts-in-seats company seems stupid and uncivilized.

beilabs 4 days ago 2 replies      
When I worked in China (a 5,000-engineer company), we'd have our lunch in the canteen, and I found that many returned to their desks for the last 15 minutes of their lunch break.

Out came the pillows, all lights were dimmed, and calming music played through the PA system. No one spoke or made noise during this time; a quick power nap of 15 minutes did the trick for a lot of people. Definitely something I approve of; I have a small mattress in my office just for such occasions.

have_faith 4 days ago 9 replies      
>Sleeping on the job is one of those workplace taboos like leaving your desk for lunch

At what kind of company is it taboo to leave your desk for lunch?

peterburkimsher 4 days ago 1 reply      
Lunchtime naps are common in Taiwan. Someone will turn off the light in the office at the start of the lunch break, and turn it back on at the end.

That change of lighting encourages everyone to take a rest or leave, because staying will disturb the people who want to rest. It also saves a little electricity, I suppose.

freeflight 4 days ago 3 replies      
I'm really envious of people who are actually able to take naps, because it just doesn't work for me. By the time I actually "phase out", which takes me at least 20-30 minutes, most of my lunch break would already be over, and I'd wake up all groggy and irritated.
daxfohl 4 days ago 10 replies      
Better: eat less lunch. Ever since I've started having no more than a moderate bowl of low-cal (not "diet", just not "vitamix'd pizza") soup, the second half of the day is just as productive as the first. If I eat enough for a food coma, I go home; I'd provide no useful anything for the rest of the day, nap or no, so no sense hiding it by merely being around.
oneeyedpigeon 4 days ago 0 replies      
> Naps at work can make you more productive. Maybe don't be this obvious about it, though.

That image caption sums up the prevailing attitude for me, really: don't be obvious that you're being more productive, especially if it goes against the grain. Better to toe the line and be less productive. There are many examples of the same phenomenon, beyond just napping.

innopreneur 4 days ago 0 replies      
In India, this seems to be common in households. You'll find family members taking an hour or so of nap after lunch. Even during festivals or other occasions, the host arranges a nap session for guests. In offices, though, it's seen as unprofessional. Until a few years ago I thought of those people as lazy and unproductive, but now my perspective seems to be changing.
suneilp 4 days ago 1 reply      
I like meditation better. I want a meditation room at work, complete isolation, with options for sitting and lying down. Deep meditation has become useful for me to relax and recharge. Also, when you get good at it, it helps you solve things faster when you get stuck on a problem.
lz400 4 days ago 1 reply      
The difficult part for me is avoiding napping during 50% of my conference calls.
overcast 4 days ago 0 replies      
I work about 5 minutes from my house. I get to go home, in silence, grab some food, and nap in my own bed. It makes ALL the difference in the world.
nstricevic 3 days ago 0 replies      
I nap for 20 minutes every day at work (for 4 years or so at my previous company, and now while working remotely). Here are my steps for taking a nap:

1. Find a good place to nap. Use the same place every day. I used to nap under my desk on a lazy bag at my last job.

2. Quickly find a comfortable position. Quickly fix everything that bothers you (like watch on your wrist or anything else that's making you uncomfortable).

3. Start breathing from your stomach, not your upper torso. Your stomach should rise and fall, not your chest.

4. Relax your whole body. In the beginning, relax one region at a time. First your toes. Then your lower leg, then your upper leg. Then the other leg... until you've relaxed your whole body. It should feel as if your mind is separate from your body, like it could go out of it. Your body should be completely numb. Later, as you progress, you will be able to relax your whole body with a few breaths, as if some force flows from your stomach and removes tension from your body as you breathe out.

5. Start removing thoughts from your brain. As you start thinking about something, just stop. Another thought comes in. Kill it. Just kill thoughts. You can think only about your breathing. Nothing else.

That's it. With these steps, I'm able to fall asleep in just a few moments. I use this all the time.

Bonus: I have a special position that I "developed" that mitigates office sounds. I nap on my back, slightly turned on left side. I put my left ear on the pillow or a lazy bag. I put my right hand over my right ear and over my head. That way, a pillow isolates my left ear, while my right biceps isolates my right ear from sounds. I found this to be very effective.

Good luck napping.

spike021 4 days ago 0 replies      
At my job, I always feel like it's frowned upon to take a nap. In most cases we're a "as long as you get your work done, we can be flexible with hours" kind of shop. But the moment I close my eyes even for just a few short moments, someone either taps me or later mentions "man, you look exhausted, are you okay?"

Power naps make me feel more refreshed, which I'd assume people would want out of their coworker/employee late in the work day, but maybe I'm wrong.

ck425 4 days ago 0 replies      
I've regularly taken naps at lunch too and found they make a big difference to afternoon productivity. I've found walking has a similar effect, especially on grass. I normally do one or the other depending on how well I slept the night before.

How long do people find most effective? I've found either 12 min or 30 min to be best for me. 30 min if I'm particularly tired, 12 min if not. If I go longer than 12 I get foggy for around 30 min after.

blhack 4 days ago 10 replies      
I don't think I could physically fall asleep at work. Are people really this tired while they're at their offices?

It seems like if you're so tired while you're at work that you actually want to take a nap, then maybe there is another underlying problem.

flor1s 4 days ago 0 replies      
Here in Japan it's very common to take naps during the day. Personally I don't take naps but I try to take enough rest during the night. Taking naps during the day might actually hurt your ability to sleep in the evening. https://www.verywell.com/30-days-to-better-sleep-go-to-bed-o...
drmanny 4 days ago 3 replies      
What if your boss is about to catch you napping under your desk and you have to call a friend and convince him to call in a bomb threat so you can escape?
amelius 4 days ago 0 replies      
I'll make sure that article is visible in my browser while I'm taking that nap.
partycoder 4 days ago 1 reply      
In Japan this is called inemuri. However, there is etiquette associated with sleeping on the job.

I've heard in some cases there are secret nap meetings where all the attendees agree to take a nap.

comstock 4 days ago 0 replies      
My feeling is that apart from really essential meetings (and those are very few) your schedule should be pretty open, and you should sleep/break/nap when you want (ideally at home).

Then use metrics other than bums on seats to measure performance.

Osiris 4 days ago 0 replies      
My office had two small nap rooms in the break room. A small bench-like bed, a blanket, and a sliding door that makes a dark room. It's ideal, really, for taking a 20-minute break to refresh and refocus.
elchief 4 days ago 2 replies      
I wonder how long it takes things like work productivity tips on HN to filter out to the real world?
closed 4 days ago 2 replies      
Before I started drinking coffee, I had a solid nap flow going. Now that coffee is in the picture, I'd probably just lay there for 20 minutes twitching :/.
notadoc 4 days ago 5 replies      
I don't understand napping, never have. Are you really so tired that you must sleep in the middle of the day? Maybe you just need better rest at night, or more exercise, or better food? Unless you're a toddler, napping feels like compensating for something else.

To each their own, if it works for you that's great.

rdtsc 4 days ago 1 reply      
Don't unless you're sure people won't talk behind your back and management won't get the wrong impression. "Look at so and so snoring while we are working hard getting stuff out of the door".

Rationally you'll explain and show the article from NYT, and they'll agree. But irrationally they'll still form an opinion and stick to it.

aluhut 4 days ago 3 replies      
I'm pretty sure that it wouldn't be allowed where I work. Even though it's an US company, it's being led by Germans and in some offices they will already complain if you eat too long. "Take Naps at Work. Apologize to No One." would be a pretty dangerous statement there.
menzoic 4 days ago 0 replies      
NYTimes HQ has 3 nap rooms on different floors. Not sure how many people knew about it.
pmarcelino 4 days ago 0 replies      
I've been taking naps at work for the last year, since I started trying polyphasic sleep. Naps play an essential role in my polyphasic strategy. The 30 minutes I spend sleeping after lunch save me some sleep hours during the night. You can get some data points about naps and their importance in this book (which I recommend): https://www.amazon.co.uk/Sleep-Myth-Hours-Power-Recharge/dp/...
ltwdm 4 days ago 0 replies      
I think age plays a role too. A few years ago I would have laughed at the idea of someone having to take a break and rest in the middle of an 8-hour period to regain energy, but I can now clearly see how that matters.
ksk 4 days ago 1 reply      
Do the companies who allow naps see it as a perk or a necessity? IOW, if you see naps as beneficial would you be OK with people taking them during a product release, or a service interruption?
rosege 4 days ago 0 replies      
There are definitely days when I've wished my workplace had a nap room. Maybe I've had a busy few days and I get a bit more tired than normal after lunch. It makes sense to me: if I'm tired at 2pm, I normally have another 3.5 hours until clock-off, and if I don't nap they won't be very productive. If I could nap, I would lose only half an hour and then be productive for the other 3+ hours. I would probably stay longer anyway, so it's no lost time to my employer.
beat 4 days ago 0 replies      
One of the nice things about working from home, for me, is the ability to take the occasional nap when I'm dragging. It takes me about 15 minutes to get what I need... once I hit a half hour, it becomes counterproductive as I'm starting to "sleep" rather than "nap".

Most nights, I do get sufficient sleep, but I need the occasional nap anyway.

peterkshultz 4 days ago 0 replies      
A study by NASA found that the optimal time for a nap was 26 minutes.

The lengthy study can be found [here](http://www.jetlog.com/fileadmin/downloads/NASA_TM_94_108839....).

SirLJ 4 days ago 0 replies      
This is the greatest benefit of working from home (or the second greatest if you have a lot of traffic to fight)... Around lunch, do a quick 30-minute workout, have a protein shake, lie down, read a bit and fall asleep for 30 minutes, and you'll feel like you have one more productive day in the afternoon...
fixesCrashes 3 days ago 0 replies      
And here I am, on a holiday, pretending to provide IT support to five employees. The other 90+ employees chose not to work on the holiday. I hate society...
baby 4 days ago 0 replies      
I'm all for this, but I think it's also important not to eat too much for lunch and to get some exercise; then you probably won't feel like taking naps during the day as much.
kraftman 4 days ago 0 replies      
Or you could work shorter hours and get a decent nights sleep.
inestyne 4 days ago 0 replies      
This goes back to the days when people drank much more, at work, at lunch, etc...

In a drinking culture, nodding off at your desk is a very, very bad thing, because it means you can't handle your liquor, and that just could not be tolerated. With all the drinking, with management and clients, if you had a problem you were a liability. It's insane, but it was another time. Not my generation, but I've had conversations.

Napping now, with the amount of real work we all do and the stress we're willing to carry, is just not anybody's business anymore.

quotemstr 4 days ago 0 replies      
I find nap rooms useful for those days when I've been there all night and need a semblance of functionality for meetings the next day.
sgspace 4 days ago 0 replies      
Napping is fireable offense at my company. :(
oridecon 4 days ago 1 reply      
My coworker used to take a nap and fart really loudly without noticing. I'm not trying to be funny; I just worry I might do the same.
malkia 4 days ago 0 replies      
Where I work, go/nap is respected. If I see a teammate napping, I would try to be less noisy, and leave him/her to rest.
carrja99 4 days ago 0 replies      
I quit energy drinks 3 months ago; taking a nap during lunch has been immensely important.
deskglass 4 days ago 0 replies      
How do you take naps? It takes me an hour or two to fall asleep. I don't drink coffee.
raoulr 4 days ago 0 replies      
Technique around here is to nap in the car. 15 minutes is usually enough to be refreshed.
bakul 4 days ago 0 replies      
I found that I don't feel sleepy after lunch if I work standing up. But if I sit in a chair I can feel sleepy, and a five-minute nap makes for a more productive afternoon than fighting the sleepiness. Has anyone else observed this effect of curing after-lunch sleepiness by standing up?
bitwize 4 days ago 0 replies      
I'll take "fast tracks to a pink slip" for $800, Alex.
cortexio 4 days ago 1 reply      
nytimes.. i'll skip
Largest-ever study of controversial pesticides finds harm to bees nature.com
369 points by etiam  8 hours ago   100 comments top 17
jfoutz 7 hours ago 6 replies      
Losing bees would suck so bad. Lots of plants co evolved, requiring pollinators. Orchids are the weirdest, with special moths unique to them [1]. But so much stuff depends on bees. A whole bunch of kinds of fruit trees, different kinds of beans, even celery.

Hand pollination is possible, of course, but that seems like such a pita. Perhaps it's possible to automate.

We have a perfectly good, self-optimizing system that constantly moves toward optimality. If we could just lighten up a little, not push quite so hard, or even just do localized trials of intensive-use pesticides and fertilizers, we could find a balance of what the system can support.

Either go slow and look for local optimizations that are then distributed widely, or engineer immunity, or both.

Ugh. I guess if it was easy, it wouldn't be a problem.

nl 5 hours ago 2 replies      
Note that the new HONEST Act is designed to stop the EPA from being able to act on studies like this, because the environmental data can't be independently reproduced.

See https://www.theatlantic.com/science/archive/2017/03/how-to-g... and various other coverage.

kortex 7 hours ago 3 replies      
American farming practices are a perfect storm of detriments to the honey bee. Widespread use of pesticides is, of course, directly bad. The monoculture of crops also exacerbates the accumulation of pesticides, since bees get a smaller variety of food. And due to the heavily managed style of beekeeping, hives are closer together and are inhibited from swarming, leading to even more propagation of Varroa mites.
simonsarris 7 hours ago 2 replies      
Funded by Bayer and Syngenta.

The extremely cynical interpretation is that Bayer patented the first neonic in 1985 and the major one (Imidacloprid) is now off-patent.

Bayer would probably love it to be banned, so that everyone has to switch to some newer, less tested, more patent protected discovery of theirs.

ajarmst 2 hours ago 0 replies      
Did anyone ever assert that neonicotinoids were harmless to bees? I thought the questions were about whether the harm was sustainable at an effective level of use, whether (and how quickly) bee populations (and which species) could adapt to their presence, whether harm to pollinators could be mitigated by changes in the way we use these pesticides. The harm also had to be compared to the benefit of their use (generally in terms of improvements in crop yield and quality). An ideal environment for pollinators has to be measured in terms of the requirement to feed a population of human beings that has doubled in my lifetime, and like it or not, pesticides and other chemicals are part of that. The reason neonicotinoids were originally attractive was reduced toxicity to mammals and birds, so it's not a no-brainer to replace them with something else.
gerdesj 3 hours ago 1 reply      
"Although the study found that neonicotinoids have an overall negative effect on bees, the results arent completely clear-cut: the pesticides seemed to harm bees at the UK and Hungarian sites, but apparently had a positive effect on honeybees in Germany. Pywell notes that the German effects were short lived, and the reason for them is unclear. They might be linked in part to the generally healthier state of hives in the German arm of the trial, he speculates"

Bollocks! Neonics harm bees because that is what they are designed to do - break insects. I'm not an expert but there have been rather a lot of articles in New Scientist describing "confused" bees relating to neonics over the last few years.

exabrial 7 hours ago 1 reply      
I think one important thing to point out here is that this is an _insecticide_ study, not an _herbicide_ study, because they're very, very different things.

That being said, I'd like to see the results replicated independently. As the article points out, there are a bunch of asterisks in the claims... and a lot of conflicting conclusions.

EDIT: Updated to use the correct term, as pointed out to me.

giardini 4 hours ago 1 reply      
Whoaaa! see

"Do Neonics Hurt Bees? Researchers and the Media Say Yes. The Data Do Not. " at


wherein they state:

"One problem: The data in the paper (and hundreds of pages of supporting data not included but available in background form to reporters) do not support that bold conclusion. No, there is no consensus evidence that neonics are slowly killing bees. No, this study did not add to the evidence that neonics are driving bee health problems."


"But based on the studys data, the headline could just as easily have read: Landmark Study Shows Neonic Pesticides Improve Bee Healthand it would have been equally correct."

rgrieselhuber 7 hours ago 1 reply      
This is one area where the scientific method feels like a subpar approach to deciding policy. I'll grant that it's likely better than the alternative in most cases but man does it create some really dangerous blind spots.
andy_ppp 1 hour ago 0 replies      
Pesticides harm your brain too apparently:


alliao 6 hours ago 1 reply      

the last paragraph is rather enlightening...

tptacek 3 hours ago 0 replies      
Here's WaPo on why you want to read more than just the headline on this:


joecool1029 5 hours ago 3 replies      
Time to bring back DDT. It was banned for misinformed reasons, without any evidence showing that accumulation contributed to cancer or anything else nasty in mammals.[1] Neonics are reported to be 5,000-10,000 times more toxic to bees.[2]

A 5-minute walk in the fields around me will lead to at least 3 ticks latching on. We're now facing the Lone Star tick, which causes a life-threatening meat allergy (wtf), and Lyme disease. I've also personally reported one of the first cases of West Nile virus in this state from a found dead bird. Now there's Zika, which seems great for humans. Oh, and we can't forget the re-proliferation of bed bugs. But yeah, we should keep trusting the newer classes of pesticides that don't work as well and haven't been studied as thoroughly.



brookside 1 hour ago 0 replies      
moomin 5 hours ago 5 replies      
Is smoking tobacco harmful to your health? Is lead in petrol poisoning children? Is the earth heating up due to mankind's actions? Are bees being harmed by pesticides?

Seems like all these things have answers that, being honest, we all knew the answer to long before the question was "settled". But there was an awful lot of money to be made prolonging the ambiguity.

notadoc 7 hours ago 2 replies      
singularity2001 7 hours ago 2 replies      
Good, now can they confirm/publish that pesticides cause harm to HUMANS?
Show HN: GreenPiThumb A Raspberry Pi Gardening Bot mtlynch.io
533 points by mtlynch  5 days ago   181 comments top 50
setq 5 days ago 5 replies      
I love these projects. In my experience they're always destined to fail, but they're great fun anyway. I've tried a few times to do similar things. I'll catalog my disasters quickly:

1. Seedlings need blowing around a bit when they pop out or they get all spindly. Cue wiring up two 80mm PC case fans to a chain of a single astable then two monostable 555's to generate an oscillating wind field that ran for 5 seconds in each direction after a delay of 10 minutes. It dried the compost out, blew most of it away and then killed the plants dead. No tomatoes for me!

2. Watering robot version 1. Similar to above but with a 74hc390 dividing down the clock so it only ran once every day. Used an unprotected MOSFET to control a small water pump from ebay. Back EMF blew the MOSFET up and jammed it as a short. Emptied the entire water reservoir into the pot, down the wall and into the carpet.

3. Watering robot version 2. Same as above with the problems fixed. Except that I ran out of bipolar 555's, so I used CMOS ones, which are a little more tetchy about noise. Cue the last 555 getting jammed in an on state and the same thing happening. This time, the tupperware box with the electronics ended up getting wet and the wall wart exploded.

Edit: meant to say to the OP - nice work. This is the spirit of all things interesting :)

crusso 5 days ago 1 reply      
But I'm a programmer, not a gardener. If I had a plant, I'd have to water it and check the plant's health a few times per week. I decided it would be much easier if I just spent several hundred hours building a robot to do that for me. If the plant lives to be 80 years old, I come out slightly ahead.

The mark of the start of any good hobby project is a sense of humor about the time it really takes to accomplish something simple with technology on the first go-round.

exelius 5 days ago 3 replies      
So the reason your moisture sensor project failed is because those types of moisture sensors are really designed for "stick it in, test it, and pull it out" testing. If left powered on in a moist environment, the conductive material on the sensor will quickly corrode (quickly as in the span of a few hours, as is seen on the graph).

However, Vegetronix makes an ultrasonic soil moisture sensor that does not have electrodes, and thus does not corrode. It is far more complex and expensive ($40) but it's designed as a moisture sensor for sprinkler systems and as such is engineered to be left in the ground.

Edit: Link to Vegetronix sensor: http://www.vegetronix.com/Products/VH400/ . I have used it and it works well, but as it turns out, even this is not sufficient to really automate a garden. You need fertilizer. Hydroponics make dealing with that complication much easier -- until you realize that the fertilizers are caustic / acidic enough that you have to flush the lines with water as well...

In other words, there's a pretty good reason you can't buy a kit off the shelf that will grow plants :)

Eduardo3rd 5 days ago 2 replies      
I really like the addition of the web camera - it's a nice bonus on top of the more traditional temperature/humidity/light/moisture readings that most people incorporate into these DIY systems. Much better than what I made the first time I built one of these things.[0]

The one thing that made me scratch my head was your approach to measuring moisture. There are several very reliable methods for measuring soil moisture directly (changes in resistance, capacitance, time-domain reflectometry, etc.) that will give you exactly what you are looking for here.

"Therefore, we felt it was fair to assume that watering based on moisture level is impossible and that GreenPiThumb is doing the best it possibly can, given certain inexorable limits of the physical world."

This just isn't true. The sensor you picked up from Sparkfun should give you decent measurements for a while before degrading gradually depending on your soil chemistry.

[0] I ran a consumer soil moisture IOT company for a few years that was sold to Scotts Miracle Gro in 2016.

danhardman 5 days ago 3 replies      
Did the pot you use have holes in the bottom? It looks like you have it just sat on a desk so I'm assuming it's basically just a bucket?

Your moisture problem could just be that you were relying too much on the water evaporating/being absorbed rather than letting it drain out. Gravel at the bottom of a pot with holes in it helps water drain really well. Alternatively, without the gravel, you could place the pot on a dish, fill the dish and let the water be absorbed from the bottom up.

2III7 5 days ago 4 replies      
I bought this moisture sensor for a gardening project


Haven't had the chance to try it out in soil yet, but reading the comments it looks promising. It uses capacitance instead of resistance and connects directly to the I2C pins of the Pi. So, easier to setup and should be more reliable.

On the software side I use Grafana https://grafana.com/

Not really made for gardening projects, but its monitoring and alerting capabilities are pretty much perfect for this kind of application. Not to mention how easy it was to set it up on the Pi and get all the sensor data (temperature, moisture, light) in.
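
If it helps anyone wiring a sensor like this into Grafana: I normalize the raw capacitance reading to a clamped percentage first, so alert thresholds survive a sensor swap. The dry/wet calibration numbers below are placeholders; calibrate with your own sensor in dry air and in a cup of water.

```python
def moisture_percent(raw: int, dry: int = 200, wet: int = 2000) -> float:
    """Map a raw capacitance reading onto a clamped 0-100% scale.

    `dry` and `wet` are per-sensor calibration points: the raw value
    observed in dry air and fully submerged, respectively.
    """
    pct = (raw - dry) / (wet - dry) * 100.0
    return max(0.0, min(100.0, pct))

print(moisture_percent(1100))  # halfway between the calibration points -> 50.0
```

Push the percentage into whatever datasource Grafana is reading (InfluxDB, Graphite, etc.) and alert on it rather than the raw value; swapping sensors then only means re-measuring two numbers.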

thedaniel 5 days ago 2 replies      
In the water distribution section the author notes that the other gardening projects don't say how they distribute water. There's a longer history of irrigation than there is of embedded software development, so maybe he should have talked to someone who actually does agriculture, or googled 'irrigation system parts' and bought one of the thousands of existing drippers for a buck.

Of course the later heading "the gardening wasn't supposed to be hard" seems to imply that he assumes non-coding skills are easy to figure out or obvious, which is a sadly too-common trend in the tech world.

bfu 5 days ago 5 replies      
Instead of measuring soil moisture try:

 - measuring the air moisture under a small upside-down cup on top of the soil
 - measuring the weight of the whole pot
Or don't measure at all and instead deliver a precise amount of water at precise times (RTC module, medical-grade piezo-electric pump). My similar project is on hold at 20 or so sketches of various types of water pumps.
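
The timed-dose approach is simple to sketch in software; the schedule, dose and flow rate here are made-up numbers (measure your pump's real ml/sec with a measuring cup):

```python
from datetime import datetime, time

SCHEDULE = [time(7, 0), time(19, 0)]  # hypothetical: two doses a day
FLOW_ML_PER_SEC = 2.5                 # assumed pump flow rate; measure yours

def pump_seconds(dose_ml: float, flow: float = FLOW_ML_PER_SEC) -> float:
    """Seconds to run the pump to deliver a fixed dose of water."""
    return dose_ml / flow

def dose_due(now: datetime, last_run: datetime) -> bool:
    """True if one of today's scheduled dose times has passed since the
    last run. A real control loop would call this every minute or so."""
    return any(
        last_run < datetime.combine(now.date(), t) <= now
        for t in SCHEDULE
    )

print(pump_seconds(50))  # 50 ml at 2.5 ml/sec -> 20.0 seconds
```

A loop that checks dose_due once a minute and runs the pump for pump_seconds(dose) covers the control side; the RTC only matters if the Pi has no network time.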

nrs26 5 days ago 3 replies      
I am so excited to see this! I've been working on a very similar project to monitor my outdoor vegetable garden using a raspberry pi and some ESP8266's. Like you, I'm using this as a project to better learn javascript, angular and django. It's in the very early stages, but I'm really loving the experience so far.

Here's a picture of my setup. http://imgur.com/a/BV188

I have an enclosure (that I recently made waterproof) that sits out in my garden and holds the ESP8266 wireless chip, which works very similarly to an Arduino with built-in WiFi. I have it reading data from a soil humidity/temperature sensor, an air humidity sensor, a light sensor, and an air temperature sensor.

That data gets sent back to a simple django webserver that I have running (indoors) off of a raspberry pi. It records all the sensor readings every 10 minutes and registers them to various plots in my garden. And then, if there are any big issues (no light for 2 days, lower than average soil humidity or soil temperature, etc), it texts me.

Eventually I'll connect it to my irrigation system, but I don't trust it enough yet!

I have the exact same problem with the soil humidity sensors you mentioned. I even sprang for some fancy ones (http://bit.ly/2sMNRnD) that claim to be waterproof. I cannot get them to read useful information, and once it rains or I water outdoors, the sensors read 99% for the next few days. It's very frustrating, and it's the missing piece to making all of this work.

Like you, this started as a quick, month-long project and now it's become something a lot bigger :)

I think eventually I'd like to build this out to be a vegetable garden planner, so I can plan my vegetable garden at the start of the season, monitor what's happening with them, and automatically trigger my irrigation system if needed.

Anyway - it was great to read this! I'd love to hear how this project evolves and would be happy to share any of my experiences as I've put this together.

P.S. And, it's a long shot, but if you (or anyone reading this) figures out how to accurately measure soil humidity and temperature in a waterproof package, I would be forever grateful!

grw_ 5 days ago 2 replies      
A Raspberry Pi is the wrong choice for this project from my perspective; the complexity is far too high for something so simple. I looked at building the same thing (minus watering) because it seemed commercial products were way too expensive (Parrot Flower Power is $60!).

The cheapest DIY solution I could think of was an ESP8266 ($2), a Vreg ($0.50), a moisture sensor ($0.50) and a LiPo battery (I have many of these...), but I decided I didn't have the time or inclination to write the software.

I continued looking for commercial products, and ordered one of these: https://www.aliexpress.com/item/Chinese-Version-Original-Xia...

Pros: Cheap ($15). Has temperature, light and 'fertility' (capacitance?) sensors.
Cons: Logs to a phone app (in Chinese) via BTLE instead of WiFi.

After a few weeks it seems to be working satisfactorily and I will probably order a few more units.

zfunk 5 days ago 1 reply      
Love this. I went down a similar path last year, using a Raspberry Pi to water an outdoor vegetable patch. In the end I used a package [1] to control a remote-controlled plug socket. I also hit similar problems with soil moisture, so went down the route of pulling a weather feed: pump water if it isn't going to rain. Easy if you are watering outside!

[1] https://github.com/dmcg/raspberry-strogonanoff

lloydjatkinson 5 days ago 2 replies      
Hmm. A MOSFET for a motor switch might be overkill. Also, there's no flyback diode, so your Pi (or at least the MOSFET) will pretty much let its blue smoke out when you get a flyback voltage.
joshribakoff 5 days ago 1 reply      
As for the faulty soil moisture, check out tropf blumat. It's made in Germany (you know the Germans make good stuff). It uses osmosis and gravity to keep the soil moisture consistent, no electricity. I've had great success with roots even growing up out of the soil towards the drippers https://youtu.be/UWPLr0Selh8
jnty 5 days ago 0 replies      
So great to read. This is exactly the kind of project the Raspberry Pi was designed to enable - doing something a bit odd (and arguably pointless!) but having fun and learning loads along the way.
tp3z4u 5 days ago 1 reply      
The plants look waterlogged; the roots need air to breathe.

Drain the water back into the reservoir (use a simple filter to prevent damage to the pump) and just use a schedule for watering.

I used a mechanical timer switch for 15 mins every hour for my hydro setup. For soil, such tiny plants, and no lights, you would need far less frequency. A general rule of thumb is to allow enough time between waterings for the soil to get a bit dry.

hammock 5 days ago 0 replies      
>GreenPiThumb: a gardening bot that automatically waters houseplants, but also sometimes kills them.

Quite an advanced bot: as close to a human as you can get!

dbrgn 5 days ago 1 reply      
I also thought about doing a project like that; thanks for the write-up. I somehow can't really believe that it's that hard to measure soil moisture.

Are there any industrial plant-watering robots? What approaches do those use?

Edit: I met this guy a few months ago at a faire: https://lambdanodes.wordpress.com/ The project doesn't seem to have advanced since then, but maybe he'll get better results with his epsilon node.

ldp01 5 days ago 1 reply      
I'm not much of a gardener, but the idea of measuring the moisture level seems ill-conceived, namely because you are measuring at only a single point in your 3D region of soil, and you don't know what range of moisture should be maintained without experimentation.

Using our guy's own sawtooth model of watering/drainage, it would make more sense to just water at fixed intervals and experiment with the frequency to see if the plant grows.

Still a fun project!

Dlotan 5 days ago 0 replies      
I have a pretty similar project. But the problem with the mainstream moisture sensors is that they break after some time (or I did not find a good one). For myself, I found the expensive option, the Flower Power from Parrot (http://global.parrot.com/au/products/flower-power/), to be a handy solution. It works over Bluetooth, and they have some documentation. I did some work in Python (https://github.com/Dlotan/flower-master-fab/blob/master/app/...) and get good results over a long period without too many outliers.
INTPenis 5 days ago 1 reply      
As a city dweller who has grown a lot of plants on balconies and in apartments I have also been down that road but I found a much easier solution to the watering. Autopots[1].

With those you can have 6 plants on a single 47l tank and only re-fill it every month (depending on how thirsty they are).

It's a godsend when I want to go away for a few weeks' vacation.

For other house plants that are not connected to a huge water tank, I tend to just turn a 2l plastic bottle over into the soil, after thoroughly saturating the soil with water first.

So with this, the only real requirement I had for my grow op was monitoring, because I have hardwood floors and I don't want them to swell up due to a leaking tank.

1. https://autopot.co.uk/products/

cbanek 5 days ago 0 replies      
As someone who has done a lot of gardening, and automated that gardening, I personally find soil moisture a complete boondoggle to measure. It doesn't tell you if the plant is taking up the moisture, or if the pH is correct (or you have some kind of nutrient salt lockout). The author says they are having trouble because the water isn't evenly distributed.

While it sounds scary, hydroponics is much easier to automate. Use a substrate like grodan blocks that can't be overwatered, and have it drain back into the reservoir you are pumping from. Then it's just a matter of setting your cycle time appropriately, and watering for x seconds every y hours, and changing the water after a set number of days, and adding new nutrients. By using more water than you need without risk, you can ensure an even level of watering over the entire medium.
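The cycle described here ("watering for x seconds every y hours, and changing the water after a set number of days") is simple enough to sketch. A rough outline, where every duration is a placeholder and not a value from any actual setup:

```python
def watering_plan(total_hours, pump_seconds=30, interval_hours=2, reservoir_days=7):
    """Build a list of (hour, action) events for a flood-and-drain cycle:
    run the pump for `pump_seconds` every `interval_hours`, and change the
    reservoir water (plus nutrients) every `reservoir_days` days.

    All default values are illustrative assumptions only.
    """
    events = []
    for hour in range(0, total_hours, interval_hours):
        events.append((hour, f"pump for {pump_seconds}s"))
    for hour in range(reservoir_days * 24, total_hours, reservoir_days * 24):
        events.append((hour, "change reservoir + add nutrients"))
    return sorted(events)
```

In a real setup the returned schedule would drive a relay or pump controller; here it just makes the timing logic explicit and testable.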

It's also much easier if you have a nutrient problem as you can easily flush with a large amount of properly pH'd water to 'reset' your substrate, which is very hard to do with soil.

This doesn't even get into things like potting mix typically has eggs for all sorts of pests, if not pests themselves. If you are lucky and get a clean batch, you are still providing a great environment for pests to live.

While it does cost a bit more to do hydro, it's honestly not that much. If you want to be super cheap you can use pH papers or by a $20 pH pen. A starter nutrient kit from general hydroponics should be under $30.

PS - here's a link to my raspberry pi automated hydro system on hackaday (https://hackaday.io/project/12418-garden-squid)

ShirsenduK 5 days ago 2 replies      
You may also hack this product by Xiaomi.


There are many WiFi-enabled switches for controlling the water pump. Try out the ESP8266 and/or ESP32 in your next project.

Your mind will be blown by the possibilities :D

Also, you could lose the soil and go for hydroponics; that would make this really feel like it's from the future.

splitbrain 5 days ago 1 reply      
I recently built something similar for my balcony, but instead of a $30 Raspberry Pi I used a $9 NodeMCU: https://www.splitbrain.org/blog/2017-06/10-automated_plant_w...

It also just took me a few evenings instead of months.

tylerjaywood 5 days ago 2 replies      
This is awesome. I did something similar at reddit.com/r/takecareofmyplant

I was running through moisture sensors on a weekly basis until I hooked the sensor's power up to the GPIO and only flipped the power on when I was taking a reading. Now I get readings every 10 seconds and haven't had to replace the sensor in over 6 months.
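The power-cycling trick described above can be sketched like this. The GPIO calls are abstracted behind plain callables so the logic runs anywhere (on a Pi you would pass in wrappers around the real pin-toggle and ADC-read calls), and the settle delay is an assumption:

```python
import time

def read_moisture(power_on, power_off, read_adc, settle_s=0.1):
    """Take one soil-moisture reading with the sensor powered only briefly,
    which avoids leaving a resistive probe energized in damp soil between
    readings.

    power_on / power_off: callables that toggle the sensor's supply pin.
    read_adc: callable returning the raw sensor value.
    settle_s: assumed stabilization delay before reading.
    """
    power_on()
    try:
        time.sleep(settle_s)  # let the reading stabilize
        return read_adc()
    finally:
        power_off()           # always de-energize, even if the read fails
```

The `try/finally` matters: a crashed read otherwise leaves the sensor powered, which is exactly the failure mode the trick exists to avoid.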

yellowapple 5 days ago 0 replies      
The traditional answer to automated irrigation for indoor plants (and outdoor plants when you need better water efficiency than with a sprinkler system) is typically a drip system (background info: https://en.wikipedia.org/wiki/Drip_irrigation). Interesting that this project didn't seem to go that direction, seeing as how water distribution is kind of a solved problem in agriculture/horticulture (at least in terms of the mechanical aspects; efficiency can still use improvements, and little projects like GreenPiThumb are definitely steps in the right direction).
option_greek 5 days ago 1 reply      
I had good results using a simple timer board that switched the pump on and off at fixed intervals. The total setup (for 5 potted plants) cost around $25 including the timer board. The results were great, especially with tomato and okra, where we had a hard time collecting all the produce :)

Edit: Was using something similar to this board:http://www.ebay.com/itm/Automation-DC-12V-LED-Display-Digita...

I think an RPi is overkill for this project.

krylon 5 days ago 0 replies      
I dig the writing style! I find that putting in a joke or two every now and then makes it far easier for me to keep up my attention.

Also, this makes me think. I have two Raspberry Pis lying around. One is a glorified video player, the other one is just catching dust right now. I have wanted to do some kind of hardware project with it for a while, but I am kind of lazy and have pretty much no knowledge of electronics.

I have wanted to build a weather station, though, that keeps a long term record of its measurement in some kind of database. A Pi would be well suited for that, so I might get around to it one day after all.
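A long-term record like that needs very little code: Python's standard-library sqlite3 is enough. A minimal sketch, where the table layout and column names are my own assumptions rather than anything from an existing weather-station project:

```python
import sqlite3
import time

def open_db(path):
    """Open (or create) the measurement database."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS readings (
                      ts     REAL NOT NULL,  -- unix timestamp
                      sensor TEXT NOT NULL,  -- e.g. 'temperature'
                      value  REAL NOT NULL
                  )""")
    return db

def log_reading(db, sensor, value, ts=None):
    """Append one measurement; call this from a cron job or sensor loop."""
    db.execute("INSERT INTO readings VALUES (?, ?, ?)",
               (ts if ts is not None else time.time(), sensor, value))
    db.commit()

def average(db, sensor, start_ts, end_ts):
    """Mean value of one sensor over a time window."""
    (avg,) = db.execute(
        "SELECT AVG(value) FROM readings WHERE sensor = ? AND ts >= ? AND ts < ?",
        (sensor, start_ts, end_ts)).fetchone()
    return avg
```

A Pi reading a cheap temperature/pressure sensor every few minutes and calling `log_reading` would build up years of history in a single small file.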

inetknght 5 days ago 1 reply      
One of the better laid out Pi project summaries I've seen. Good work, I say.
rdez6173 5 days ago 1 reply      
I wonder, for a container plant, if you can reliably use weight as a measure of soil saturation. Of course plant growth would add to the weight, but I suspect that could be accounted for.
peter_retief 5 days ago 0 replies      
I used a fish tank pump with surgical tubing and a moisture sensor with an Arduino to switch the pump on when it got dry, but as you say, it's never that easy; the whole thing is lying in a drawer somewhere. It didn't make anything easier. Another project involved load cells and LPG gas bottles. That was a bit more useful, but it needed to be constantly powered and the wires kept falling out, so that's also in a drawer somewhere. I still haven't made a useful project yet.
tanvach 5 days ago 0 replies      
This was exactly the route I wanted to go down until I found out about hydroponics, especially the Kratky method. It's super easy, doesn't need any electronics or pumps, and I've successfully grown lettuces and herbs with it.


un-devmox 5 days ago 1 reply      
> The first time we pumped water into our planter, the tube directed a small stream into one spot, completely soaking that area but leaving the rest of the soil dry.

I use irrigation tubing with drip-line emitters in my garden; that might be a solution for you. The cool thing about the emitters is that they control the flow of water and are easily positioned. I think they start at 0.5 gal/hour and go up to 10 gal/hour.

somecallitblues 5 days ago 0 replies      
I'm building a space bucket at the moment using an Arduino instead of a Pi. It's a lot of fun. There are some amazing projects on spacebuckets.com. https://www.spacebuckets.com/u/POstoned has a good Pi setup with schematics etc. I think his reddit post is very detailed, with source code.
betolink 5 days ago 0 replies      
This project reminds me of an installation at MoMA NY: the same concept, but they used an Arduino. Now there are even kits to do it for a whole garden: https://www.cooking-hacks.com/documentation/tutorials/open-g...
neelkadia 4 days ago 0 replies      
I did almost the same thing to feed the Rat! :D http://www.feedmyflash.in/ and also open-sourced the code at https://github.com/neelkadia/FeedMyFlash
fpgaminer 5 days ago 1 reply      
I vaguely remember that when I read about soil moisture sensors, they all needed to be cleaned and dried after every use. Basically, you couldn't leave them in the soil. Seemed odd to me, but I never dug further into it.

Is that true? If so, that would explain why the sensor doesn't work here, but leaves me wondering why I've seen so many projects try to use the sensors that way.

pavel_lishin 5 days ago 2 replies      
Questions to everyone who's tried similar things:

1. Why use a water pump instead of a gravity-fed system with a valve you could control with a servo?

2. Would a scale be able to measure soil moisture? Dump in X grams of water, wait until the scale registers X/2 before adding more. (Some fiddling would be required to see how much of the water is retained by the plant as building material.)
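Question 2's scheme is easy to express as a sketch. The half-dose threshold comes straight from the question above; the growth-offset parameter is the hypothetical "fiddling" correction for mass the plant itself adds, not a tested value:

```python
def should_water(current_weight_g, dry_weight_g, dose_g, growth_offset_g=0.0):
    """Scale-based watering check: after dumping `dose_g` grams of water
    into a pot whose dry weight is `dry_weight_g`, water again once less
    than half the dose remains.

    growth_offset_g: assumed correction for plant mass added since the
    dry weight was measured.
    """
    water_remaining = current_weight_g - dry_weight_g - growth_offset_g
    return water_remaining < dose_g / 2.0
```

One appeal of this approach over soil probes is that a load cell measures the whole pot, sidestepping the single-point-measurement problem raised elsewhere in this thread.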

bitJericho 5 days ago 0 replies      
The Color Computer from Tandy started because of a project called "Green Thumb". Hurray for gardeners and farmers! :)


theandrewbailey 5 days ago 2 replies      
I'm sorta thrown off by the use of "we" and "our" in the middle of this article. From the top of the article (which doesn't use we/our), I understand that this is just one guy doing this. Is there someone else in the process that I'm missing?
stevehiehn 5 days ago 0 replies      
This is inspiring. I've been thinking about hand-rolling a rainwater-reserve-style irrigation system that essentially just pulls/scrapes a weather forecast and waters my garden. Not nearly as precise as this, but hopefully useful.
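The decision logic for a forecast-driven system like that can be kept separate from the scraping. A minimal sketch, where `fetch_forecast` is a placeholder stub to be replaced with a real weather feed, and the threshold and horizon values are assumptions:

```python
def should_irrigate(rain_mm_by_day, horizon_days=2, rain_threshold_mm=5.0):
    """Skip watering if meaningful rain is forecast within `horizon_days`.

    rain_mm_by_day: forecast precipitation totals, index 0 = today.
    Thresholds are illustrative assumptions, not calibrated values.
    """
    expected_rain = sum(rain_mm_by_day[:horizon_days])
    return expected_rain < rain_threshold_mm

def fetch_forecast():
    """Placeholder: replace with a real weather feed returning daily
    precipitation totals. Hardcoded hypothetical values for illustration."""
    return [0.0, 1.2, 8.0]  # mm/day

if __name__ == "__main__":
    if should_irrigate(fetch_forecast()):
        print("watering: little rain expected")
    else:
        print("skipping: rain is on the way")
```

Keeping `should_irrigate` pure means the scraping code can change (or fail) without touching the watering decision, which is also the part you'd want to test.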
_devillain_ 5 days ago 0 replies      
This was a hilarious read (as well as informative). Fantastic work, fantastic writing!
ajarmst 5 days ago 1 reply      
I dislike the use of "robot" for anything that doesn't have autonomous movement (those were "RC Wars", not "Robot Wars"---although I, for one, would have delighted, at a safe distance, in someone strapping a circular saw to something autonomous). Nor is watering a single indoor potted plant the same thing as "gardening." An actual gardening robot would have been very interesting (I envisioned something trundling around with a spade and actuators for pulling weeds when I saw the title).

I actually love the Pi---one is now my primary computer---but it seems to have created a niche for "let's add a website and database to my really trivial control systems project" that I'm not sure really advances much of anything.

LordKano 5 days ago 0 replies      
I have been thinking about making a homebrew Phototron using an arduino, grow lights, temp+moisture sensors and a water reservoir.

The addition of the camera makes me think that a Pi might be a better solution.

banderman 5 days ago 0 replies      
I am excited to see projects like this. I believe that it will be important to be able to deploy fully autonomous food production facilities to Mars ahead of any colonization effort.
soheil 5 days ago 0 replies      
I once set up something similar, but with solar panels in the Mojave desert, to water a peach tree. Needless to say, that didn't work out as planned.
pjc50 5 days ago 1 reply      
On this subject, does anyone know what the state of the art is in using machine learning to identify (and preferably eliminate) weeds?
water42 4 days ago 1 reply      
the codebase is really elegant. thanks for open sourcing everything!
crb002 5 days ago 1 reply      
Why computer vision of plant stress is better than fancy sensors.
z3t4 5 days ago 0 replies      
it would be interesting to see if it's possible to make a closed system. a black box that gives tomatoes ...
Which word begins with y and looks like an axe in this picture? stackexchange.com
601 points by Gigablah  2 days ago   157 comments top 26
peterkelly 2 days ago 7 replies      
This reminds me of the scene in Silicon Valley where Peter Gregory notices the popularity of Burger King, the number of sesame seeds they use, and remembers that sesame seeds only grow in Myanmar, Brazil, and Indonesia - the former two of which have large Cicada populations that emerge at different times. After some research, he finds that this is about to occur simultaneously in both countries for the first time in a couple of centuries. Apparently Indonesia doesn't have cicadas, so he purchases some surprisingly cheap Indonesian sesame seed futures based on the expectation that the price will spike next year.

Whoever did the detective work in the top answer to this article should get into investing, if they aren't already.

CJefferson 2 days ago 10 replies      
This is the kind of deep investigation of a super specific issue I always enjoy reading. This is what I thought the internet would be, 20 years ago.
armandososa 2 days ago 0 replies      
A long time ago I worked as a "designer" for somebody who owned a chain of medium retail stores in Mexico and also a biggish print shop. When a (printed) product was selling very well, he would come to me and ask me to make one as similar as possible so he could manufacture it and get more of the profits.

He did it all the time and didn't care at all about the quality of the product. At first I tried to make the illustrations myself or actually do an original spin on the product, but he pressured me to just pull clip art and call it a day.

Sometimes I tried to hide in-jokes or inappropriate stuff just to see if anybody noticed. For example, he once got me to copy a whole book on some Catholic saint, in a hurry, and refused to pay someone to proofread it. So I intentionally replaced some words here and there to change the meaning, and I even changed the name of the saint to "Batman" in the middle of the book. Nobody ever noticed.

So my guess is that whoever designed this ball is in a similar situation and put even less effort into it than whoever wrote the third top answer in the Stack Exchange post. Or maybe she even put in a Swedish axe just to see if anybody would notice.

Luc 2 days ago 4 replies      
The top answer is a picture of the use of "Yxa" in a Swedish alphabet learning book, yet there are still people here clinging to the yellow paint tube theory.

Incredible. Not sure what cognitive bias is at play here (or more charitably: perhaps they didn't scroll down).

EDIT: Some say these are pictures of a paint tube and some yellow paint: https://jimthechairmaker.wordpress.com/2014/01/08/my-carving...

The fact that an axe is used in the exact same way in the Swedish book is very strong evidence. There's prior art.

louprado 2 days ago 1 reply      
Assuming this was made in China, there are two sources of confusion that could lead to this mix up.

1. The letter 'A' and an upside-down-Y-shaped character share the same key on a Chinese/English keyboard. This could lead to an unintended subconscious association between these symbol shapes. [1]

2. The orientation of the characters abruptly changes on the ball graphic between W and Q, leaving the orientation of the Y-shaped letter unclear to someone with minimal familiarity with Latin letters. An upside-down Y is also easy to confuse with the letter A if you are not familiar with it.

This all assumes that Axe was the first common-'A' word selected. Perhaps they postponed the choice of the common-'Y' word, given that it is challenging. After mistaking Y for A, someone just added an apple graphic.

[1] https://upload.wikimedia.org/wikipedia/commons/thumb/e/ef/St...

thedrake 2 days ago 3 replies      
I found the BALL manufacturer!!!!https://www.alibaba.com/product-detail/Alphabets-print-ball_... (not able to comment on the original thread on the page bc I do not have the 50 points needed. Pls post a link there to further the discovery)
magic_beans 2 days ago 1 reply      
This was such a delightful read. A quote from the wonderful Okja recently released on Netflix: "Never mistranslate!"... unless you make balls for toddlers!
Udik 2 days ago 0 replies      
You look at the axe and wonder: "why?" :)
yuleanswer 2 days ago 4 replies      
Yule. For Yuletide
ninjakeyboard 2 days ago 4 replies      
Tube of paint squeezing out some yellow sounds reasonable.
ryan606 2 days ago 1 reply      
One of the best guerilla marketing tactics I've seen in a while.
raldi 2 days ago 1 reply      
Now let's see if we can solve one of /r/whatisthisthing's most longstanding mysteries: What does the J stand for on this blanket?


rplnt 2 days ago 1 reply      
I thought the Worm was a Maggot.
rdiddly 2 days ago 0 replies      
Happily I see that my pet theory involving yaks is represented in the answers; I can now go back to work.
awesomebing1 2 days ago 0 replies      
I've always found English SE to be interesting, because it varies between questions like this, with extremely well-thought-out answers, and questions that are so-so.
saimiam 2 days ago 0 replies      
"Y? You Axe..."
jessaustin 2 days ago 0 replies      
We've all been trolled by a toy artist...
racl101 2 days ago 0 replies      
Holy schnikes. I'd love to have that much free time ... and powers of insight.
wallabie 2 days ago 0 replies      
It may still be an axe, but with a silent Y in the front. A 'yaxe', if you will.
flipcoder 2 days ago 0 replies      
Looks like yellow paint being sprayed out of the top of a spray paint can
thought_alarm 2 days ago 0 replies      
And now, back to my thesis.
justinhj 2 days ago 0 replies      
Don't y'all have y'own axes?
omgbear 2 days ago 0 replies      
yellow brick road? Could be a castle at the end.
xattt 2 days ago 1 reply      
jbob2000 2 days ago 2 replies      
The answer is in one of the top comments:

> Yellow. Looks like a squiggle of yellow oil paint squeezing out of a short, fat tube.

Thoughts on Insurance ycombinator.com
505 points by akharris  3 days ago   409 comments top 57
sbarre 3 days ago 14 replies      
> After paying for broker commissions, fronting costs, reinsurance, customer service, claims processing, there's often around 50% of the original premium dollar left to pay claims, which is the primary purpose of an insurance company.

What about shareholders? One of the biggest problem I have with insurance companies as for-profit enterprises is the inherent conflict of interest that comes from trying to service claims and customers as best as possible and turning a profit for shareholders.

I've always felt that insurance companies should be run as not-for-profits, or at the very least co-ops.

Don't get me wrong here, still pay the employees and the executives competitively (you want things to run efficiently and by talented teams so you need to attract top talent), but otherwise the whole enterprise should be working hard to make sure every other dollar goes to helping the customers who pay the premiums, and that's it.

6stringmerc 3 days ago 3 replies      
Disclosure: I've worked in both P&C and Reinsurance for several years. Also on Wall Street for a couple years. Also in Healthcare for a couple years.

Overall I think this is a nice briefing on the state of the insurance market in the modern economic landscape. It is extensively regulated with rates set and various nuances. All of this, of course, comprises part of a grand "data set" that looks quite appealing to modernization.

Unfortunately, I think there should be a strong expectation that the market (industry) will both be openly hostile to "disruption"-oriented attitudes a la Uber and laugh at any ability to raise capital to compete at any meaningful level.

I applaud your interest in perhaps improving a legally sanctioned form of graft (I prefer Mutual Organizations myself). Conversely, my experience leads me to laugh a little because I've seen the numbers and the complexity behind the scenes. I've got no interest in the industry beyond the paycheck it provided, but it is quite fascinating in numerous respects. Just the naming conventions alone once you get to Bermuda is a trip. Good luck.

socrates1998 3 days ago 7 replies      
The biggest problem I have with insurance is the inherent disproportionate power relationship between the company and their customer.

Essentially, the moment a customer becomes more trouble than they are worth, they are dropped. This is true with other types of industries, but if your health insurance drops you when you get cancer, you can't get more health insurance, and you die.

Same with house insurance, car insurance, etc.

And it causes death or financial disaster all too often.

Some would argue that "this doesn't happen" or "it's illegal".

1) It happens ALL THE TIME.

2) It's illegal, but if you don't have the means or education to fight it, you are pretty much done.

It's the fundamental nature of insurance companies to milk healthy customers while dropping unhealthy customers. It's just too tempting and they are too protected by our legal system for them to not do it.

I know this is pessimistic, but as long as you recognize this fundamental imbalance in the relationship between you and your insurance company, you can mitigate it to a certain extent.

But, really, the only way to completely mitigate it is to be so rich that you don't even need insurance.

asr 3 days ago 4 replies      
Before you go thinking about how this applies to healthcare... health insurance in the U.S. is different from other types of insurance, and personally I think we'd all be better off if it were not even called "insurance."

Health insurance is a mix of pre-paying for predictable and certain expenses with tax-free dollars, a transfer/entitlement system to ensure that more people can afford insurance (by design, your premium does not match your expected risk--either you are pooled with others at your employer, or your exchange account is subject to rating band requirements which means, for example, that in many states old people can only be charged 3X more than young people even though old people are likely to be much more than 3x more expensive to insure), and actual insurance. I'm not sure what percentage of your premium reflects the cost of actually insuring you against uncertain future health events, but it's far from 100%.

This is an interesting article, and some of it applies to healthcare in the U.S., but much of it does not.

wyldfire 3 days ago 14 replies      
I've always wondered about US health insurance -- why can't the physician give me quotes about my personal obligation for various treatment options? It's frustrating that as soon as it's time to come up with a bill, poof there it is but prior to the bill being generated all I get is shrugs?

Is it because the insurance coverage algorithms are too complicated? Because the different entities involved in a single treatment plan is too complicated to navigate? Because physicians feel that cost is orthogonal to medicine and they prefer not to be involved/prefer to recommend the ideal treatment based on a predicted outcome? All of the above?

It feels like if there were a particular hospital group / physician group that had this feature, they would attract a lot of attention. Just imagine, "Your initial differential diagnosis will not exceed $150 and we'll discuss treatment options or more conclusive diagnostic tests afterwards."

All I've heard so far are physicians who don't accept insurance but instead have a straightforward "menu" for common items, which is interesting but not what I think most people want.

Animats 3 days ago 2 replies      
Insurance companies have been into information processing in a big way since paper and pencil days. The first company to buy a commercial computer was Metropolitan Life.

In commercial insurance, there's a question of how intrusive the insurance company should be. My favorite insurance company, The Hartford Steam Boiler Insurance Company, established in 1866, was finally bought out by Munich Re a few years ago. Hartford Steam Boiler insures boilers and equipment in industrial plants. Most of their employees are inspectors. If you want to buy a policy from them, they come out and inspect the equipment. They give you a list of what you have to fix. Then they come back to inspect after everything is fixed. Then they sell you a policy. They also inspect again, randomly and unannounced. Cut corners on maintenance, and HSB cancels your policy. The premiums are low, because boilers inspected by Hartford Steam Boiler don't blow up.

Most companies hate that, even though the premiums are lower.

togasystems 3 days ago 1 reply      
After having spent the last 3 years in insurance with Allay trying to take on employer health insurance costs by making it easier for smaller groups to become self insured, I have found that the regulations are cumbersome but not huge blockers. The regulations are there to protect people and for the most part do that job correctly.

The hard part about this industry is that there is no single incumbent to disrupt, but thousands of very small businesses who have personal relationships with their clients. Also, wherever you jump into the process, you have to deal with companies who do not value technology as much as the HN crowd would. These companies still print out PDFs and have automated very little of their business. No matter how fast you make your software, you are at the behest of the companies below and above you in the chain.

If anybody wants to nerd out on the insurance industry, my contact is in my profile.

gwintrob 3 days ago 2 replies      
We're working on the modern commercial insurance brokerage at Abe (https://www.hiabe.com/).

Aaron's right that too much of the industry runs on pen and paper. It's confusing for the buyer and a massive headache for brokers.

Most of what we're building is behind the scenes to make the brokerage way more efficient. If you're an insurance expert with ideas to leverage tech or engineer interested in man+machine symbiosis, I'd love to chat (gordon [at] hiabe.com)!

toddwprice 3 days ago 0 replies      
Healthcare needs to be funded through a non-profit community trust whose first priority is to secure the highest overall health for the population. Insurance is motivated by profit which will always seek to squeeze more money from the system as its first priority.
Spearchucker 2 days ago 0 replies      
I've done a lot of systems design and development in the insurance industry, and it's a vast space with opportunities not just for insurers.

The most recent thing I worked on was a pricing and activation engine that sat behind a web site that acted as a broker for a number of insurers. That isn't new, but it was new for this market - life insurance. As such my employer was a single provider on a panel of providers.

The web site that provided the panel brought a number of innovations - one of them being an underwriting SaaS. Panel participants are able to enter their underwriting crown jewels into a 3rd party web site, secure in the knowledge that their IP wasn't going to be leaked or shared with others on the panel.

There were many more efficiencies that the model enabled, which were never realised (well, not by the time I left) because the SaaS provider couldn't reach financial agreements with some of the providers.

Greed was (is?) something that risked torpedoing the most innovative thing I've seen in life insurance, ever.

All that to say I agree that this market is brimming with opportunity. The market is so incredibly broad, and deep, and so complex... Regulation is definitely a thing, but hardly an impediment. I have, for example, spent many years in this industry, but never worked for (or with) reinsurers. I have a long-standing suspicion, though, that that market is so convoluted that the front-line insurer can conceivably be its own reinsurer, after having passed through, like, 15 other reinsurers...

frabcus 3 days ago 1 reply      
Surprised not to see mention of the fundamental modern tech problem with regard to insurance.

As soon as you have big enough data, and artificially intelligent enough algorithms, the insurance becomes too predictive.

The whole point of insurance is to pay people when rare, bad events happen to them. If an insurer can predict well enough who will be the victims, it can refuse them insurance, and hence remove the entire purpose and benefit of the insurance.

This is the flip side to moral hazard. What does it even mean to offer commercial insurance, if it is only offered to the people who least need it?

This is least bad, for example, with a car predicting you're a bad driver as you can improve behaviour. It is really problematic with data such as gene sequencing, which you can't do anything about.

Only way out I can see is compulsory insurance, levied as a tax. Or maybe non-profit or Government AIs, trained to find a sweet spot between moral hazard and its opposite?

zstiefler 3 days ago 1 reply      
While much of what Aaron wrote is spot on, one thing missing is the importance of distribution. This a challenge for any startup, but is more pronounced in a regulated industry like insurance with largely homogenized products and with serious restrictions on how you can legally distribute your product (see Zenefits).

If you consider how most people and small businesses buy insurance, they typically make purchasing decisions once a year at most. As such, you need to get in front of them at the exact moment they want to purchase. GEICO and Progressive have done this really well, but have effectively bid up the cost of online advertising to make it prohibitively expensive. This is also why agents are such a powerful force in the industry (and because they effectively provide carriers with an initial underwriting screen which they don't need to file publicly).

It's important to get the product right, and there are many flaws with most P&C insurance today (chief among them that the forms haven't really changed in the past few decades), but I'd encourage any entrepreneurs to make sure they have an answer on distribution before spending time on product.

Disclosure: I've spent a lot of time looking at this as founder of a P&C insurance startup a few years ago.

aaroninsf 3 days ago 1 reply      
Some context:

Americans pay 2.5x more than the average first-world single-payer system.


This funds demonstrably and starkly worse outcomes on the metrics we should collectively care about: life expectancy. Incidence of chronic disease. Infant mortality.

The reason we pay much more for much less is simple: for-profit health care optimizes for profits, not outcomes. That is exactly what it has done; exactly what it will do.

The public sector already does single-payer in this country with an order of magnitude less overhead cost than the private sector.

When I discuss these facts with conservatives and libertarians, I usually discover that most of this is decreed "fine," because we disagree about whether or not health care is a right.

For that reason, I've stopped suggesting that it is, and focused on another stark fact:

Failure to provide basic health care to all, through straightforward means, merely means that it is provided through partially hidden means (like ERs) at vastly greater cost, not least because emergency care cannot perform the preventive and chronic care which in many cases would provide better outcomes for two orders of magnitude less cost.

Morality is a matter of instinct and choice. The costs of the existing broken system are however obvious, and render any defense IMO irrational.

SmellTheGlove 3 days ago 0 replies      
Nice writeup. Certainly scratches the surface a bit (and I think that was the intent). I've spent the bulk of my career in the industry, both in P&C and A&H, and there is a ton of opportunity. You correctly conclude that it's not because we're idiots, even though some days I'm kind of an idiot.

Again, tons of opportunity, with some barriers that aren't unique to the industry but maybe unique to what the tech community might encounter (and this is by no means an exhaustive list) -

1. Capital requirements: You need a dumptruck full of money to do much in this industry, at least if your intent is to write or reinsure business, but even to a lesser degree if you're working in ancillary services.

2. Regulatory environment: Assuming US operations, 50 states with 50 different sets of regs need to be okay with what you're doing. In addition, federal law comes into play in certain spaces (GLB, HIPAA, etc).

3. Distribution: The current model compensates every layer of the distribution model very well. It's not as easy as you might imagine to disrupt when entrenched interests are all making a ton of money AND are interdependent upon their neighbors in the value chain to continue to do that. You can't just hack a piece off because that bothers their neighbor, who in other circumstances might otherwise be a competitor, but has a shared interest. The relationships get complex.

None of this is fatal, but you must navigate it and play by many of the rules, particularly with #2. As Zenefits learned very publicly, the insurance industry and its regulators weren't going to let someone do what Uber did to the taxi industry - which was essentially to operate in the grey/black and just ignore the calls to stop. Insurance is a subset of financial services, and financial services is a powerful industry. For any startup, my advice is to build compliance and regulatory relations in from day 1. It's the least exciting thing you'll ever do, but is extremely important. Anyhow, I'm rambling. By no means do I want to discourage anyone from trying, I'm just trying to highlight some of the big things that are "different" in our space.

I'm particularly interested in the data piece because that's what I've done and built a career on in this industry, but it never struck me as sexy enough for YC. Appreciate the article.

gilsadis 3 days ago 0 replies      
It's an excellent article and spot on. Insurance has remained fundamentally unchanged for centuries. In order to really change it, a complete re-architecture is needed. Insurance should be fully digital, hassle free, transparent and fair.

It's hard to really change it when you're not a fully licensed and regulated carrier. An MGA can create a beautiful UI on top of an old insurance product, but eventually, consumers will meet face to face with this old insurance product (for example in claims), and it's going to be the same old experience again.

As many of you stated here, there's an inherent conflict between the insurance company and the insured. Until that changes, the experience will stay the same.

At Lemonade, we are changing that. Would love to get your feedback. Here's how it works - https://www.youtube.com/watch?v=6U08uhV8c6Y&t=9s

Disclaimer: I'm head of product at Lemonade (lemonade.com)

ksar 3 days ago 1 reply      
> For most of the insurance world, the hardest and most important thing to find is effective distribution and customer acquisition.

This is absolutely true. If you look at how regional, family-owned P&C carriers got their start, you'll find they started as brokers. These brokers found a profitable niche (call it motorcycles in California, or non-standard auto in Texas) that they wanted to own, moved on to become MGAs, and then admitted carriers offering multiple lines of business.

If you can find a way to own the customer as a distributor, you own the life-blood of the entire business downstream. As a result, P&C insurance companies are huge spenders on marketing (GEICO spends $1.7B/year). It would be game-changing if this cash was used to provide utility to their customers above and beyond the insurance transaction.

I'm of the opinion that insurance premiums could be the new ad dollars - used to create products that promote lock-in, and much more consumer-centric insurance companies.

joshfraser 3 days ago 1 reply      
My issue with insurance companies is that they're conflicted by design.

As typically publicly traded companies, they have a fiduciary responsibility to their shareholders to maximize profits. And the only way to maximize profits is to deny coverage.

The purpose and value of insurance is amortizing risk. There are very few things that I think should be controlled by the federal government, but a single payer health care system is one of the few that actually makes sense from an incentives perspective.

In the words of Charlie Munger, who's spent more time thinking about insurance than almost anyone: "Show me the incentive and I will show you the outcome."

Why does the US pay the highest prices for mediocre health care? Perverse incentives.

sharemywin 3 days ago 1 reply      
You also might want to check out Guidewire. They're the company that's basically taking over the software side of P&C for carriers.


If you're going to innovate on the software side in P&C you would need to figure out what they're not doing or could not do that would allow you to get ahead of them.

osullivj 3 days ago 2 replies      
The article spends many paragraphs on the complexities of the B2B and B2C players in insurance, their distribution channels, and how they get paid. It spends one sentence on noting how the pricing of risk is based on actuarial models, and then moves on. The pricing of risk is absolutely key to the entire industry, and it is stone age! It's all done in spreadsheets.
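For what it's worth, the core of those spreadsheet models is simple enough to state in code. A minimal pure-premium calculation (illustrative numbers, not from the article) looks like:

```python
def gross_premium(frequency, severity, expense_ratio, profit_load):
    """Textbook pure-premium pricing: expected annual loss grossed up
    to cover expenses and profit. All inputs here are illustrative."""
    pure_premium = frequency * severity  # expected annual loss per policy
    return pure_premium / (1 - expense_ratio - profit_load)

# e.g. 5% annual claim frequency, $8,000 average claim severity,
# 25% expense ratio, 5% profit load -- all invented for the example
premium = gross_premium(0.05, 8_000, 0.25, 0.05)
print(f"${premium:,.2f}")  # $571.43
```

Real actuarial models layer credibility weighting, trend, and development factors on top of this, but the skeleton is the same, which is part of why it all still lives comfortably in spreadsheets.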
Entangled 3 days ago 7 replies      
What if a thousand people get together and put $100 each in the blockchain in order to cover emergencies for the unlucky 10% that may need aid in times of distress?

A society can't function if more than 10% of the population is in distress and that's exactly the purpose of insurance, the many helping the few, not the other way around.

But since insurance moves a lot of money (just like banking) and there is propensity to fraud and corruption, then it is highly regulated making it hard for newcomers to disrupt the market.

Start from the beginning again using technology, allow tech insurers to take from 100 and serve 10, that's it. Who to serve? That's the easy part, those in need.
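Setting the blockchain part aside, the arithmetic of the proposed pool is easy to check with a quick simulation (the 10% distress rate and $100 contribution are the figures from the comment above):

```python
import random

random.seed(42)

MEMBERS = 1_000
CONTRIBUTION = 100     # each member pays in $100
DISTRESS_RATE = 0.10   # roughly 10% need help in a given period

pool = MEMBERS * CONTRIBUTION  # $100,000 total
claimants = [m for m in range(MEMBERS) if random.random() < DISTRESS_RATE]
payout_each = pool / len(claimants)  # split the pool among the unlucky

print(f"pool: ${pool:,}  claimants: {len(claimants)}  payout: ${payout_each:,.2f}")
```

With ~100 claimants the payout is around $1,000 each, i.e. each member's $100 buys roughly 10x leverage against a bad period. The hard parts the simulation ignores are exactly the ones the comment names: fraud, verifying who is genuinely "in need", and regulation.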

kozikow 3 days ago 1 reply      
Unless you start by covering something completely new, I think it makes sense to start as an analytics provider for insurance. Becoming a carrier is expensive, and an MGA can only sell existing policies already created by a carrier. If you sell an existing policy, you are at a severe data disadvantage compared to existing insurance companies, even with superior tech.

Shameless plug: At https://tensorflight.com we are working on P&C insurance and we focus on analytics for commercial properties mentioned in the article. Please get in touch at ( kozikow [at] tensorflight.com ) if you are interested in the subject.

chiph 3 days ago 0 replies      
The article mentions "Personal Cyber Insurance"

Many insurance companies will insure against data loss as part of your homeowner's policy (with lots of exclusions..) and I've been wondering why they don't partner with one of the online backup services (SpiderOak, Backblaze, etc) in order to preserve their customer's data to reduce the risk of a payout, or at least how much they have to pay on a claim.

sonink 3 days ago 1 reply      
I might not know too much about the topic, but imo the American healthcare system has a lot to learn from the Indian healthcare system. My guess is that America should simply copy-paste India's model and it would be good to go.

For the resources it spends on healthcare, the Indian healthcare system is perhaps the most efficient in the world. There is insurance if you want it, but you can also choose your health providers on the free market.

Hospitals, Doctors, Medicines, Tests, Procedures, Post-op care/services - everything can be comparison shopped. And if you have more time than money, you can show up at any one of the almost-free govt. funded hospitals to get treated by who is often a very good doctor.

The inefficiency of the American system may well be a result of extensive litigation around healthcare, but I suspect that it's simply an oligopoly defended by politicians in its pocket.

I would guess that for any hospital expense above a few thousand dollars, for someone who can't afford it, it might make a lot of sense to just hop on a plane to India.

Mz 3 days ago 0 replies      
Re the comments on Data in this piece: Yes, insurance is an industry suffering from perpetual information overload. I worked at a large insurance company for 5.25 years. I processed claims and we relied heavily on systems for looking up endless information. They would totally overhaul the system for doing that every few years. I think it occurred two or three times in the time I was there and I heard from other people that, yes, this seemed to happen every two to three years. It had all kinds of problems. What was supposed to be a new and improved, more user-friendly system meant that all the old timers who had figured out how to find everything were now lost, even with training. It also meant that some things were not compatible and not findable anymore. It was truly bad.

I have a certificate in GIS. As a claims processor, I worked with multiple databases all day long. This included such things as looking up names and addresses for providers (doctors, hospitals, etc -- the people providing medical care). I suggested we create a GIS based system to make this easier. The idea did not fly.

But if you want an area of opportunity in insurance tech, try finding solutions to some of their back end problems. Managing information overload is a serious and ongoing problem in the insurance world. I was there 5.25 years and I only know a thimbleful of information about the industry. Laws and regulations vary by state. In order to sell in all 50 states, you need to get licensed in every state individually and we had to be able to look up "state exceptions" which were laws that impacted how claims were paid in each state and on and on.

People in insurance are absolutely not stupid. It is just overwhelmingly complicated and no one can keep up with all of it. And, yes, it is very fragmented. But if you want a good business opportunity in this space, consider trying to build back end solutions to help them better manage the information in which insurance companies are simply drowning.

EGreg 3 days ago 0 replies      
There's one cool thing about insurance providers, whether private or public:

They are the ones most aligned with YOUR well being.

They pay when something happens.

They get more money if things stay good or get better.

In fact, I would argue that public option / single payer insurance is MORE interested because of lack of competition (induced supply, as it were) that will destroy the profits if costs are driven down.

In other words, a public insurance program actively would try to improve public health, not to beat competitors but to lower its costs and reinvest that into MORE public health innovation.

Want to help the world? Make startups that improve public health (diet apps, exercise apps, whatever with measurable results) and have the insurance companies fund it.

youdontknowtho 3 days ago 1 reply      
Health insurance is just broken. The only way it gets better is if something forces them to view the entire country as one actuarial pool.
sytelus 3 days ago 0 replies      
When you go to a clinic for a simple injection for an allergic reaction, they charge you $1500. The whole process takes a 2-hour stay in the clinic, and the doctor spends about 15 minutes with you. When a healthy woman goes to deliver a normal baby in a hospital, she typically gets charged $2000 per night, while the doctor spends less than 1 hour with the patient and the cost of materials is almost nothing. Does this feel like a data or technology problem to anyone?

In the USA, private entities who own clinics and hospitals have recognized that they can charge virtually anything in the name of providing "gold plated" care. All the while, insurers have recognized that they can charge a person $1500 in premiums a month without anyone noticing, because premiums are taken directly out of people's paychecks by employers. The vast majority of employed people have no clue that they are actually paying for their premiums, which are almost the same as what they would pay as an unemployed person. Instead, people assume that they get low-cost insurance because their employer has some sort of great "group deal". So neither party has any incentive to reduce cost. It's neither a data nor a technology problem; it's how markets are completely eliminated from the equation by a law that says employers need to provide insurance, even though employers simply transfer the cost to end consumers.

If the government created a law that employers must not provide insurance and everyone must buy their own insurance on the open market according to their preferences and budget, I believe the cost of insurance would fall dramatically in a very short amount of time. This is how a lot of developed countries operate, and the cost of equivalent-quality care there is usually 10x lower precisely because of this.

asmithmd1 3 days ago 0 replies      
I am surprised no one has mentioned:


They are doing three things:

1. Innovating on customer acquisition with a mobile app

2. Building out a for-benefit corporation insurance carrier one US state at a time.

3. Innovating on the business model.

The conventional, for-profit side of the business takes a flat 10% of premiums and "buys" insurance from their captive for-benefit carrier. Any money left over in the for-benefit insurance carrier is donated to the charity of your choice.

sharemywin 3 days ago 1 reply      
The biggest thing I can say is don't underestimate underwriting. There's a reason a lot of carriers focus on specific areas. There's a hidden customer acquisition cost: a batch of new customers tends to include a bunch of bad business that needs to be weeded out.

If you think about it, a customer knows their own risk better than anyone. So you're betting that you can judge their risk better than they can, and better than your competitors, over and above your cost to manage paperwork, regulations, money and fraud.
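That bet can go badly wrong through adverse selection. A toy simulation (risk numbers invented) shows how pricing at the pool average drives out the better-than-average risks round after round, until only the worst risk remains:

```python
# Each customer's own expected annual loss, which only they know.
risks = [100, 200, 400, 800, 1600]

pool = list(risks)
while pool:
    premium = sum(pool) / len(pool)              # price at the current average
    stayers = [r for r in pool if r >= premium]  # cheaper-than-average risks walk
    print(f"premium ${premium:,.0f} -> {len(stayers)} of {len(pool)} renew")
    if stayers == pool:
        break
    pool = stayers
```

Each round the average rises, another tier of customers leaves, and the spiral ends with a single customer priced exactly at their own cost. Underwriting exists precisely to break this loop by pricing each risk individually instead of at the average.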

nappy 2 days ago 0 replies      
I'd suggest to the mods that all threads about health insurance be removed and perhaps moved to a new post. Though an interesting discussion, Aaron's article explicitly does not address healthcare. The dynamics of healthcare are so different from P&C that it is a wildly off-topic discussion.
grizzles 3 days ago 0 replies      
Nice article & loved Kyle's article about the thorny issues surrounding customer acquisition.

I grew up around the industry because my father worked his entire career in it. I've always found it fascinating. Actuarial science is so underrated. I'm also an entrepreneur and I've previously thought about entering it to disrupt it. If there is one thing I'd add to the main article it's this:

If you look at the data vs all other industries, Insurance is the most profitable industry out there. That's not bad considering the product is manufactured out of thin air.

This is perpetually interesting because to me it kind of destroys one of the central tenets of capitalism. It's a very old industry. If anyone can conjure stuff out of thin air, you would expect much more competition / lower margins in it by now. But there isn't. So the industry is really effective at erecting moats to keep innovators at bay. In my opinion, this is the main "value" (cough) that the regulators provide in the market. The industry gets away with so much, and it needs reform, but I'd bet against it ever happening. This situation is replicated in every country around the world. That's why it's hard to be a true entrepreneur in the industry. You need an investor with deep pockets to enter the market. You need to verify your brilliant customer acquisition strategy works. Then you need to wage war against the incumbents.

Bonus content for actuarial nerds looking for a good chuckle:http://reactionwheel.net/2017/06/venture-follow-on-and-the-k...

jpswade 3 days ago 2 replies      
My favourite article on insurance is one by the BBC[1], which asks what makes gambling wrong but insurance right. It also points out how the industry's history is steeped in the shipowners and traders who met in Lloyd's of London's coffeehouse in 1863.

1. http://www.bbc.co.uk/news/business-38905963

cmurf 3 days ago 0 replies      
There's a substantial difference between something like automobile, homeowners, or renter's insurance; and health care "insurance" which is arguably wrongly named.

Automobile insurance is mandated by law in all 50 states (Maine lets you self insure but you have to stipulate you can do this, in effect it's still a mandate). Homeowner's insurance isn't required by law but might be required by the mortgage company. And renter's insurance may be required by a landlord. All of these only protect you against almost random events, or at least pretty much unpredictable events.

Health care insurance is weighted by being one part warranty and two parts aging payment plan. How many people use their health insurance every single year in some form or another? Dental? Eye doctor? Cold or flu? That's not insurance. It's a payment plan for predictable services. Even pregnancy is predictable. Things like cancer and congenital disease are not predictable. So we've conflated a bunch of things into a giant payment plan, with nearly a dozen layers of middlemen, every one of whom expects a profit cut. And so in the U.S. we have the most expensive per capita health care cost in the industrialized world, and yet not everyone is covered. Basically we are so stupid that we are willing to pay more for health care services to deny other people being covered, and to jack ourselves off proudly that this is the best health care system in the world and it's (mostly) free market. It's a big fat stupid hand job.

So I would not consider health care insurance in the same category as other insurance types. It's a sick care payment plan, and if it weren't for the government making it illegal now, you'd still see lifetime maximum caps. Get too sick? Go fuck yourself. Die, don't die, go to the ER, charge it to a credit card, declare bankruptcy, we do not care about you. We don't care. We don't give a fuck about you.

That is the American healthcare and insurance system.

TimPC 3 days ago 0 replies      
Attempted tl;dr version:

Insurance is a horrendous industry and someone should reform it because it sucks and could do so much better.

Insurance exists in a horrendous sales market where connections are required to multiple providers across multiple roles in a complex sales process.

Insurance is subject to highly fragmented regulatory complexity making it difficult for any company to scale to multiple markets. This not only affects you, but it affects many of the providers you build partnerships with, making each new market often require new relationships.

So my takeaway is: don't do insurance unless you have something so compelling that you have to do it no matter what. Also expect that the idea and working technology get you 1% of the way there; the other 99% will be building relationships, sales channels and regulatory compliance.

elihu 3 days ago 0 replies      
It seems like for health insurance at least, the problem of trying to optimize insurance premiums based on any risk factor you're legally allowed to discriminate on just sort of goes away in a medicare-for-all style single-payer system. Everyone pays taxes and receives health care services -- you just have to make sure your tax rate is set properly to cover the costs incurred.

(I think governments are sometimes unfairly criticized for inefficiency because they incur higher costs simply by not having the option to cherry-pick their customers the way private companies usually can. There are of course cases where the criticism is justified.)

johnobrien1010 2 days ago 0 replies      
One thing I'd add is that the potential for new technology to better identify risks is also the potential to nullify the need for insurance. It is fundamentally the uncertainty over which house will burn down that creates the need for insurance; if anyone had a 100% accurate model, the folks whose houses were definitely not going to burn down would not need insurance.
hardtke 3 days ago 0 replies      
I'm surprised he didn't call out title insurance as a place for potential innovation. This insurance pays out at something like 1% of premiums collected. Historically, customer acquisition was handled by making a large kickback to the real estate agent. The research involved (looking for contractors' liens and such) seems like something that could be automated using NLP.
honestoHeminway 2 days ago 0 replies      
Denial of service by obfuscation, denial of service after exceeding a certain amount, denial of service by lawyering: this is all information that needs to be compiled from anecdata into solid numbers that anyone can instantly access, and that would allow for a metric of insurance quality.
corford 3 days ago 1 reply      
Would be interesting to know how many of these points map over to the insurance landscape in Europe? I know next to nothing about either but, from talking with Stateside friends, have the (possibly wrong!) impression that the insurance market in Europe is less sclerotic and a little more modern/competitive/innovative.
incan1275 3 days ago 0 replies      
Yonatan Zunger posted a really awesome piece about the origins of insurance, that complements this nicely.


blazespin 2 days ago 0 replies      
Insurance is fundamentally a 'data' problem, if you can numerically analyze human interaction. I know insurance brokers; they definitely set up their book based on what they know about their customers.
kapsteur 3 days ago 1 reply      
creeble 2 days ago 0 replies      
> Each of the players in the structure needs to get paid.

No, they don't. They exist merely as a means for insurance companies to extract more profit out of their customers because they believe they are "overexposed" to certain risks -- despite the fact that exposure to risk is _exactly_ their business.

You may make the argument that reinsurers, ILS buyers, and fronting carriers are all essential to the insurance business because they lower overall costs by spreading risk. But if this is so, then they don't "need to get paid", they are a cost _savings_.

Which is it?

chasely 3 days ago 0 replies      
If you had domain-level expertise that would be relevant to providing analytics for P&C (re)insurance providers/consumers, but didn't know how the insurance market works, where would be a good place to start?
jarsin 3 days ago 0 replies      
- Cyber Insurance -

Lifetime offer: 99% discount if you switch to Linux.

mooneater 3 days ago 2 replies      
Insurance can be a catalyst for a million other things. Amazing how often the answer to "why can't we do innovation X" is: because insurance.
bitJericho 3 days ago 0 replies      
The best solution is the complete outlawing of health insurance. It'll take riots for that to happen.
artur_makly 2 days ago 1 reply      
really dumb idea but..

What if the insurer and healthcare provider were one and the same?

Wouldn't that remove the fraud 'game', increase transparency, and lower premiums?

or will it actually make it 2x worse for the patient?

CptJamesCook 3 days ago 0 replies      
It's surprising to see an article about insurance without references to black swan risks.
iamnotlarry 2 days ago 0 replies      
I have come to believe that the insurance industry in the U.S. has become part of the cause of expensive medical care. I am not opposed to the idea of insurance, but here are some of the problems I see in how it works today.

I do not believe that insurance companies actually negotiate prices down. I have seen too often that the price goes up when I produce proof of insurance. While it may seem counter-intuitive, insurance companies have perverse incentives to push prices up. Even the most well-meaning insurance company will opt for certainty over potential savings. But I don't think they are well-meaning to begin with, and I don't think it is even structurally beneficial to them to reduce costs. For an example of when lower costs benefit insurers, see the price of identical healthcare purchases outside the U.S. Products manufactured inside the U.S. magically become cheaper in Belgium and Nigeria.

The insurance industry has worked with the medical industry to obfuscate the cost and price of healthcare. This is in both industries' interests. If I pay $100 for a doctor to splint a broken finger, I'm happy I had insurance to make it so cheap. I'll let my insurance company pay the other $1700 and I may never even look at the bill that shows a total cost of $1800. I won't see that they charged $25 for the popsicle stick, $25 for the tape, $5 for the pen used to fill out the medical chart, $30 for the receptionist, $75 for the nurse, $300 for the PA, $200 for the xray, $200 for the radiologist, $50 for the ibuprofen, etc. If I actually had to pay that bill, I would argue it line by line. I would absolutely refuse to pay large portions of the bill. $5 pens that they reuse for the next patient are obviously an unethical ripoff. But I don't have to pay those line items, so I don't fight it. I just have to pay a $1500 premium every month. And I'd be crazy to refuse to pay that.

We all know that insurance premiums are high because healthcare costs so much. This is because of lawsuits and freeloaders. This is how price obfuscation shifts the battle. We don't question anybody's integrity over the $5 pen because the discussion is all about lawsuits and freeloaders. If I had to pay the bill myself, I'd ask why I got billed for 4 x-rays, but only received 3. That 4th one that didn't turn out because the radiologist put the cartridge in backwards? I'm not paying for that. I'd ask why the receptionist seems to be making $500/hr. I'd ask what the lab fee was for. I'd comb my bill and question everything. $1800 is a lot of unplanned expense for me. $100? Not so much.

If I had to pay my medical bill in full and then file for reimbursement, I'd fire a company with 90-day turnarounds and switch to the insurance company with 5-day turnarounds. And the next time I got stung by a bee, I'd maybe not run to the emergency room. Yes, that could lead to bad outcomes. But the current practice has its own bad outcomes.

If I had to keep fighting an insurance company for timely reimbursement and started noticing that I pay $1500x12 every year to cover 80% of my ~$6000 medical expenses, I'd start getting all kinds of ideas. I would think about putting $1000 a month into an HSA and look for a $40000 deductible policy. I'd think about pooling my HSA with 100 of my closest friends to start a healthcare co-op where I could borrow up to 1x my balance for 1 year at low interest to pay for big medical bills and then get a $60k deductible policy instead. Then, in five or six years, when I've built up $30k, I'd drop my monthly payment into my HSA to $500, and just let the balance earn interest. Then, when the unthinkable happens, I drain the account, borrow another $30k, file a claim on everything else and spend the next year paying back the $30k and lining up contingencies for any event that will happen before I can build my HSA back up. I'd start daydreaming about pooling my $12k/year with others to build and staff a very small clinic that could take xrays and treat bee stings. Just 100 founders paying the same $1000/month could generate $1.2M per year. That could secure a 15-year $3M bond to build the clinic. It could pay a long way into supplies and staffing, too. One could even imagine that members get free services (ignoring the $1000/month) while non-members could pay $1800 for a broken finger.

In other words, I'd figure out how to rely as little as possible on insurance companies. And the money I now spend to fund their paperwork and profit would stay in my account.
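For what it's worth, the arithmetic in the comment above checks out (all figures are the commenter's, not mine):

```python
# The individual's position: $1,500/month in premiums against
# ~$6,000/year of medical expenses, 80% of which insurance covers.
monthly_premium = 1_500
annual_premiums = monthly_premium * 12   # $18,000 paid in per year
annual_expenses = 6_000
covered = 0.80 * annual_expenses         # $4,800 paid out by the insurer

print(f"paid in: ${annual_premiums:,}  paid out: ${covered:,.0f}")

# The proposed co-op: 100 founders at $1,000/month
coop_annual = 100 * 1_000 * 12           # $1.2M/year, as stated
print(f"co-op revenue: ${coop_annual:,}/year")
```

So in the commenter's scenario the gap between premiums paid in and claims paid out is roughly $13,000 a year per person, which is the margin the HSA/co-op scheme is trying to capture. The catch, of course, is that this only works until someone in the pool has the catastrophic year the high-deductible policy exists for.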

nafizh 3 days ago 0 replies      
Is there any startup focusing on the healthcare insurance industry?
jheriko 3 days ago 0 replies      
> Insurance is fundamentally a data problem.

this is a special sentence. it reads roughly as "i am ignorant of reality" to me. :(

very american.

pasbesoin 3 days ago 1 reply      
Two comments from me (amongst many I might make):

1. A while ago, as a somewhat pointed quip, I started pointing out that in the U.S., for health insurance we pay "whole insurance" rates (yes, that term applies to life insurance, but is an apt comparison in terms of pricing) for what is essentially "term insurance" (which, in the life insurance segment, is quite a bit less expensive, because after the term, you're uninsured).

2. I tried to do everything "right". Carried insurance when I could, when I was younger. Carried it with the best company and policy I could get. Didn't resent that I was paying in much more than I was getting out: it's "insurance", and if I'm healthy, I've already "won". I'm glad the money can go to those who need it (patients, I assumed).

Despite doing everything "right", my last corporate job got off-shored. At a time when the individual insurance market was going from ridiculous to impossible. I managed to hold onto some form of insurance by dint of personal connections, to watch exclusions slapped onto my individual policy (for things the doctor had said he wouldn't treat -- too minor, risk/benefit not worth it -- while I was still on the corporate group policy). To watch my premium rates triple in three years.

The ACA came along just in time to rescue me from the looming vast pool of the uninsured. Which, once you entered, you had a very hard time exiting.

I never received a subsidy from the ACA. It simply allowed me to participate.

And by the way, the biggest injury to it? It was written to make insurers whole for their losses, until they had a handle on their demographics. Well, under Congress, funding is always a separate exercise -- the budget process -- from the laws and activities that that funding in turn enables.

And the Republicans simply refused to produce the funding mandated in the ACA. The numbers I heard from an expert in the field are that insurers were getting about 17 cents on the dollar for losses the ACA law told them would be reimbursed.

In other words, it wasn't simply imperfect and in need of improvement -- a starting point, per those who negotiated and accepted some pretty terrible compromises in order to get it passed and some improvement in coverage started.

It was actively sabotaged.

Ok, I find I have a third point to make right now:

3. When the ACA was proposed, I believe it was not at all a "left" initiative. It was meant to be pretty middle of the road.

Around that time, 2007-8, and for some years prior, businesses were crying bloody murder at the year over year increases in health care costs they were facing. Often well into the double digits, year after year.

And they were saying, 'We can't compete with foreign competition from societies that have universal health care, where their costs are perhaps half ours and are increasing at single digit percentage rates.'

The ACA was meant to address "conservative" business problems as much as "social" individual and community problems.

All that was sabotaged, dealt with in bad faith, for the sake of the political and power agendas of the selfish.

By the way, the ACA has produced substantial improvements on the group insurance side of things. For individuals, and for businesses, by holding cost increases down.

And insurers have made good profits on these changes and their group policy business. Money they don't include in their accounting when they turn around and describe all the losses they've faced under the ACA marketplace plans (which are classified as "individual" insurance plans, not part of the traditional group policies/business).

wyager 3 days ago 5 replies      
> Insurance carriers set their rates based on actuarial models designed to predict the likelihood of future events.

Sure, until the government says that you are no longer allowed to set rates based on actuarial data; maybe you aren't allowed to charge people appropriately if they have some expensive "pre-existing condition", or because of certain "protected" but statistically relevant characteristics. This throws a big fat spanner into the whole expectation value thing, because you're no longer an insurance company but a weird privatized subsidy pool that isn't allowed to make expectimax decisions anymore.

Your clients aren't allowed to expectimax anymore either; if they catch on to the fact that they're actually subsidizing someone else, they're not allowed to form their own rational insurance pool (because it would violate restrictions on "discrimination", e.g. ACA section 1557) and if they choose to opt out of the irrational "insurance"/subsidy market you hit them with a hefty fine.

I encourage everyone to look up expectation values for payin/payout of medical insurance for different customer types under current regulations (start with https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1361028/ ). TL;DR If you're a reasonably healthy man below retirement age, you're getting screwed hard. Maybe society can collectively agree that this is a good idea, but we should at least be honest and stop referring to it as "insurance". Actual insurance markets without coercion are a net positive for all participants, rather than a convoluted scheme to (effectively) transfer money from one demographic to another.

billylo 3 days ago 0 replies      
Insurance is essential. It's like the brakes in your car: you have them not because you want to go slow, but because they enable you to go fast.
pkamb 3 days ago 0 replies      
Any options for renting vintage furniture by owner? I'd like to pick up more nice mid-century pieces when I see them cheap at estate sales, but won't have the space for them for a while...
pcunite 3 days ago 1 reply      
Disclaimer: You may not agree with this. A downvote does not change that. A comment and intellectual conversation go much further, however.

A quote from the article: The insurance industry is built on mitigating downside risk.

Maybe they should have everyone agree to use computers with sensors which can create accurate maps of everything people have ever done, might do, or should do. Citizens will be so much better off with this new system of fairness. We'll call it insurance instead of government.

How I Took an API Side Project to 250M Daily Requests ipinfo.io
449 points by coderholic  1 day ago   152 comments top 36
Tloewald 1 day ago 0 replies      
I'd like to point out that the things he says he's doing instead of marketing are, in fact, marketing. It's "guerilla" marketing, and it's being paid for with the writer's time. Nothing wrong with that, just don't confuse "marketing" with "advertising".
smokybay 1 day ago 3 replies      
The author does not say how much maintaining the service costs or what the long-term plan is. As others have already pointed out, a similar service existed before and shut down for a simple reason: no point in maintaining it at a constant loss with no clear revenue plan.


westoque 1 day ago 2 replies      
Good strategy! That's also how I got my side project (Cookie Inspector, a Google Chrome cookie editor) to 80,000 daily users.


I marketed it solely on Stack Overflow; it kept getting upvotes, and that was all my marketing.


Good reviews are also a big factor. When users like your project/product, they will market it for you.

davidivadavid 1 day ago 13 replies      
I'm not sure why people are proud to do things without spending money on marketing.

What if spending money on marketing had made you grow twice larger? Twice faster?

When people say "I didn't spend money on marketing", the only translation is "I knowingly overlooked massive growth opportunities."

rickduggan 21 hours ago 2 replies      
This is super cool. I use a similar API to provide a client-side service called IP Request Mapper (https://chrome.google.com/webstore/detail/ip-request-mapper/...). Coming soon to a Show HN near you.

What it does is show where every asset on a web page is loaded from. It allows you to visualize how many different requests go into building just one web page. While it's gotten much better, the Houston Chronicle (https://chron.com) used to make about 500 individual requests to build its home page. It's down to about 125.

It's best to run it across two different monitors, with IP Request Mapper on one monitor and your "normal" browser window on another. Then enter any URL and watch the map start populating as it geolocates every request made by the page.

But it's projects like ipinfo.io that make these other things possible. Standing on the shoulders of giants and all that...kudos to you, coderholic.
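The core of that visualization is just grouping a page's requests by host, then geolocating each host. A minimal sketch of the grouping step (the request log is hypothetical; a real extension would capture it from the browser's network events and then look each host up via an IP geolocation API such as ipinfo.io):

```python
from urllib.parse import urlparse
from collections import Counter

# Hypothetical request log captured while one page loads
requests = [
    "https://chron.com/",
    "https://cdn.chron.com/style.css",
    "https://cdn.chron.com/app.js",
    "https://ads.example.net/pixel.gif",
]

# Count requests per distinct host; each host would then be
# resolved to an IP, geolocated, and plotted on the map.
hosts = Counter(urlparse(u).hostname for u in requests)
```

For the Houston Chronicle example above, this counter would have had hundreds of entries spread across many third-party hosts.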

reacharavindh 1 day ago 0 replies      
Happy user here. My GF came up to me and asked if I could somehow get country names for the IP addresses she had of her survey respondents. I Googled and found this neat little API. True, I could have downloaded the raw databases from elsewhere and worried about the SQL I'd need, and whether the data was recent or ancient or even correct. I decided that was overkill for my need, and just used this API in a throttled (1 req/s) mode and left it overnight. If I have this IP-to-location need again, I'd happily pay for this API.
ribrars 22 hours ago 1 reply      
Great overview here on how you solved a problem and built a business around that.

I read that you use Elastic Beanstalk for your server config, but I wanted to ask:

1. What programming language did you use?

2. What, if any, configuration did you have to do to the Elastic Beanstalk config to deal with network spikes and autoscaling?


babuskov 1 day ago 3 replies      
I'm baffled why anyone would use this when you can import the data into a database and run it on your own server.

I mean, you might spend 20 minutes more to set it up, but you are safe from having to rely on a 3rd-party service.

Anyway, kudos to coderholic for creating this and sharing the story.

Scirra_Tom 1 day ago 3 replies      
Where did you get the IP DB from? My understanding is that for most of them you can't resell access.
jacquesm 1 day ago 2 replies      
That's great. Question: does it make money? The words 'profit', 'money', 'income' and 'revenue' do not appear in the article.
unchaotic 22 hours ago 1 reply      
Crowded space. A quick Google search for any of these keywords -- "ip address location api", "ip lookup API", "geolocation API by IP", etc. -- shows:

- https://db-ip.com/api
- https://ipapi.co
- https://freegeoip.net
- ipinfodb.com
- https://www.iplocation.net
- http://neutrinoapi.com
- http://www.ip2location.com
- https://www.telize.com

and a few dozen more. I wonder if collectively they are serving over a few billion requests per day. Microservices & API culture FTW!

larsnystrom 1 day ago 2 replies      
Ipinfo seems to have the exact same logo as Podio (https://podio.com), a service owned by Citrix.
GordonS 9 hours ago 0 replies      
> Our servers use latency based DNS routing to serve over 90% of all requests handled in less than 10ms.

What exactly does that mean, though?

Does it mean that processing time at your server is 10ms, or 10ms to time to first byte, or something else?

Giving it a quick test, I generally get the actual JSON result in around 400ms. The lowest I got was 200ms, the highest around 1000ms. It didn't seem to make any difference if I used the HTTP endpoint instead of the HTTPS one.

WA 1 day ago 5 replies      
I use ipinfo.io mostly to see my own public-facing IP address, for 2 reasons:

- I can somehow remember that domain. I don't have to google "my ip" and dig through weird domains that change all the time.

- The design is clean and simple. Not too much information, no ads, loads fast.

fusionflo 1 day ago 0 replies      
Kudos to you guys for building this. There is always a lot of scepticism from people on "why would anyone pay for this". Reality is, not everyone has the time or resources to build their own kit. There are literally 1000s of businesses on the internet that are in the business of selling "time" or timesavers, and of removing the risk of maintenance and ongoing support.

Keep improving this and with the rise of web personalization, the demand will continue to grow.

firloop 1 day ago 0 replies      
Related, some other adventures while running an API to retrieve IP addresses.


mrskitch 1 day ago 0 replies      
I'm employing a similar strategy for my library https://github.com/joelgriffith/navalia as I couldn't find any solution to manage headless chrome (plus the API for that protocol is pretty heavy).

Building for what folks want, even developers, is so obvious that I think we often forget about it. It's also not as glamorous as self-driving cars or rockets, so it gets dismissed easily.

Sound points though

craigmi 1 day ago 0 replies      
Pretty cool man, I use your site all the time for ASN lookups, although I find your carrier information wildly conflicts with Digital Element's DB.
kasbah 1 day ago 1 reply      
Does anyone know how ipinfo compares to running your own instance of https://freegeoip.net?
diminish 1 day ago 0 replies      
Congrats. I'm not sure, but ipinfo could be very interesting to startups and programmers. So a good idea could be writing attractive articles and posting them on HN, r/programming, and some other subreddits. That would bring more customers with zero marketing spend.

See also: https://news.ycombinator.com/from?site=ipinfo.io

drej 20 hours ago 0 replies      
I see it's still a thing. Back in high school, some ten+ years ago, I coded up an 'ip2country' website. Not sure why, there were dozens of those. I guess I had a free domain and a lot of time on my hands. I put some Google AdSense on it and let it go. I checked my AdSense account some six months later and found out I was cashing $20/month. Easiest money I've ever made.
niko001 1 day ago 1 reply      
This has worked well for me, too. I saw an influx of "How to offer a time-based trial version on Android" on SO and developed a trial SDK as an answer: https://www.trialy.io/
kevan 1 day ago 0 replies      
>90% of our 250 million daily requests get handled in less than 10 milliseconds.

Minor nit, but with that level of traffic I'd expect you to be bragging about P99.99 latency, not P90.
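To make the nit concrete: at 250M requests/day, P90 says nothing about the slowest 25 million requests. A sketch of nearest-rank percentiles over a synthetic sample with two slow outliers among 10,000 requests (illustrative numbers, not Segment's):

```python
def percentile(values, p):
    """Nearest-rank percentile: smallest value >= p% of the sample."""
    ordered = sorted(values)
    rank = -(-len(ordered) * p // 100)  # ceiling division for 1-based rank
    return ordered[int(rank) - 1]

# 9,998 fast requests at 5ms, two pathological 500ms stragglers
latencies = [5] * 9998 + [500] * 2

p90 = percentile(latencies, 90)       # blind to the stragglers
p9999 = percentile(latencies, 99.99)  # catches them
```

Here P90 is a comfortable 5ms while P99.99 is 500ms, which is exactly why high-volume services tend to quote the tail.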

reefoctopus 12 hours ago 0 replies      
Are you just using the max mind data?

What percentage of those 250,000,000 is from paid plans? Even if it's only 20% you'd be doing $xx,xxx per day. Is that in the ballpark?

kpsychwave 1 day ago 1 reply      
Given the fast lookup time, it would be useful if you could provide a JS API for synchronous loading.

Essentially, a blocking script in the DOM <script src="...api.js" /> that prepopulates the window object. With clever error handling, this could improve perceived performance significantly.

A few questions:

1. What differentiates you from ip-api.com and other providers?

2. Do you use MaxMind?

3. Is there an option for no-throttling? 100s of simultaneous requests?

I aggregate multiple IP databases for my SaaS (https://www.geoscreenshot.com) and I need highly performant / reliable IP look ups.

motyar 22 hours ago 0 replies      
Such stories don't let me stay focused on my freelance work.

I got inspired and started researching and building. (Btw, failing so far.)

elnygren 20 hours ago 0 replies      
The author says "I took" even though this was pure luck and coincidence. Attribution bias is strong in this one.

However, it is important to acknowledge that he did put himself into a position where he was available to become lucky (= he built the API and linked to it).

SirLJ 1 day ago 0 replies      
I see the author is posting the same thing every 20 days or so, so here is the "0 marketing"...
pier25 1 day ago 0 replies      
So what's your stack? Still running PHP?
merb 1 day ago 1 reply      
Well, currently my location is basically totally wrong. From https://www.iplocation.net/ I've only seen one service that tracks my location 100% correctly (correct village); all the others are 200 or more km away from my real location.
ge96 22 hours ago 0 replies      
That's crazy, details like lat/long. What if there's a proxy, and where does that data even come from? ISPs? Or do you take the time to build it out yourself, i.e. research? At any rate, cool.
rodionos 1 day ago 0 replies      
Checked one of our static IPs: the country is correct, but the city is 500 miles off.
erikb 1 day ago 1 reply      
How much money do you make per api req?
martin_hnuid 1 day ago 0 replies      
Thanks for sharing.

I am ready to launch a startup and currently trying to figure out what to focus on (so many ideas!).

I posted an "Ask HN" earlier today. Wondering if anyone might have some thoughts or advice on this:


imaginenore 1 day ago 1 reply      
Just some rough calculations. Assuming the worst-case scenario, everyone in the highest tiers (the cheapest per request), 250M daily requests means he makes

400 * 250M / 320K = $312,500 per month.

Or $3.75M per year.

Not counting the expenses.
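Spelling out that rough calculation (a sketch; the $400/month-per-320K-daily-requests tier is the worst-case assumption the parent is making, not a confirmed price):

```python
price_per_tier = 400          # $/month for the assumed highest tier
requests_per_tier = 320_000   # daily requests included in that tier
daily_requests = 250_000_000  # the traffic figure from the article

monthly_revenue = price_per_tier * daily_requests / requests_per_tier
yearly_revenue = monthly_revenue * 12

print(monthly_revenue)  # 312500.0
print(yearly_revenue)   # 3750000.0
```

Which matches the $312,500/month and $3.75M/year above -- again, an upper bound, since free-tier traffic counts toward the 250M too.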

kalal 1 day ago 0 replies      
You are great! My karma goes down, please!
Delivering Billions of Messages Exactly Once segment.com
481 points by fouadmatin  3 days ago   132 comments top 35
newobj 3 days ago 4 replies      
I don't want to ever see the phrase "Exactly Once" without several asterisks behind it. It might be exactly once from an "overall" point of view, but the client effectively needs infinitely durable infinite memory to perform the "distributed transaction" of acting on the message and responding to the server.


- Server delivers message M

- Client processes event E entailed by message M

- Client tries to ack (A) message on server, but "packet loss"

- To make matters worse, let's say the client also immediately dies after this

How do you handle this situation? The client must transactionally/simultaneously commit both E and A/intent-to-A. Since the server never received an acknowledgment of M, it will either redeliver the message, in which case some record of E must be kept to deduplicate on, or it will wait for the client to resend A, or some mixture of both. Note: if you say "just make E idempotent", then you don't need exactly-once delivery in the first place...

I suppose you could go back to some kind of lock-step processing of messages to avoid needing to record all (E,A) that are in flight, but that would obviously kill throughput of the message queue.

Exactly Once can only ever be At Least Once with some out-of-the-box idempotency that may not be as cheap as the natural idempotency of your system.

EDIT: Recommended reading: "Life Beyond Distributed Transactions", Pat Helland - http://queue.acm.org/detail.cfm?id=3025012
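The failure scenario above is why the standard fallback is at-least-once delivery plus an idempotent consumer: the client durably records E keyed by message ID, so a redelivery after a lost ack is harmless. A toy sketch (in-memory dict standing in for the durable record; a real client would commit it transactionally with the side effect):

```python
processed = {}  # durable record of E, keyed by message ID

def handle(message):
    # Perform the side effect E at most once per message ID;
    # a redelivery falls through to the ack without re-processing.
    if message["id"] not in processed:
        processed[message["id"]] = message["value"] * 2  # the event E
    return "ack"  # the ack A may still be lost in transit

# Server redelivers M because the first ack never arrived
handle({"id": "m1", "value": 21})
handle({"id": "m1", "value": 21})
```

After both deliveries, E has happened exactly once even though M arrived twice, which is the "at least once plus dedup" shape the parent describes.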

mamon 3 days ago 5 replies      
"Exactly once" message delivery is theoretically impossible in a distributed environment with a nonzero possibility of failure. If you haven't received an acknowledgement from the other side of the communication within the specified amount of time, you can only do one of two things:

1) do nothing, risking message loss

2) retransmit, risking duplication

But of course that's only from the messaging system's point of view. Deduplication at the receiver end can help reduce the problem, but can itself fail (there is no foolproof way of implementing that pseudocode's "has_seen(message.id)" method).

alexandercrohde 3 days ago 1 reply      
Here's a radical solution. Instead of becoming a scala pro akka stream 200k engineer with a cluster of kafka nodes that costs your company over $100,000 of engineering time, technical debt, opportunity cost, and server costs, just put it all in bigtable, with deduping by id....

Enough of resume-driven engineering; why does everyone need to reinvent the wheel?

bmsatierf 3 days ago 1 reply      
In terms of connectivity, we deal with a similar problem here at CloudWalk to process payment transactions from POS terminals, where most of them rely on GPRS connections.

Our network issues are nearly 6 times higher (~3.5%) due to GPRS, and we solved the duplication problem with an approach involving both client and server side.

Clients would always ensure that all the information sent by the server was successfully received. If something goes wrong, instead of retrying (sending the payment again), the client sends just the transaction UUID to the server, and the server might either respond with: A. the corresponding response for the transaction or B. not found.

In the scenario A, the POS terminal managed to properly send all the information to the server but failed to receive the response.

In the scenario B, the POS terminal didn't even manage to properly send the information to the server, so the POS can safely retry.
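That protocol can be sketched as a toy server (in-memory dicts standing in for the real transaction store; names are illustrative, not CloudWalk's actual API):

```python
responses = {}  # transaction UUID -> stored server response

def process(txn_id, amount):
    # Idempotent processing: a retry with a known UUID returns
    # the stored response instead of charging the card twice.
    if txn_id not in responses:
        responses[txn_id] = {"status": "approved", "amount": amount}
    return responses[txn_id]

def status(txn_id):
    # Scenario A: we saw the transaction -> return its response.
    # Scenario B: we never received it -> the POS can safely retry.
    return responses.get(txn_id, "not found")
```

The POS only ever retries the full payment after a "not found", so the duplicate window is closed from both sides.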

falcolas 3 days ago 1 reply      
So, a combination of a best effort "at least once" messaging with deduplication near the receiving edge. Fairly standard, honestly.

There is still a potential for problems in the message delivery to the endpoints (malformed messages, Kafka errors, messages not being consumed fast enough and lost), or duplication at that level (restart a listener on the Kafka stream with the wrong message ID) as well.

This is based on my own pains with Kinesis and Lambda (which, I know, isn't Kafka).

In my experience, better to just allow raw "at least once" messaging and perform idempotant actions based off the messages. It's not always possible (and harder when it is possible), but its tradeoffs mean you're less likely to lose messages.

travisjeffery 3 days ago 1 reply      
Kafka 0.11 (recently released) has exactly once semantics and transactional messages built-in.

- Talk from Kafka Summit: https://www.confluent.io/kafka-summit-nyc17/resource/#exactl...

- Proposal: https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+E...

StreamBright 3 days ago 1 reply      
"The single requirement of all data pipelines is that they cannot lose data."

Unless the business value of the data is derived only after applying some summary statistics, in which case even sampling the data works, and you can lose events in an event stream without changing the insight gained. Originally Kafka was designed to be a high-throughput data bus for an analytical pipeline where losing messages was ok. More recently they are experimenting with exactly-once delivery.

ju-st 3 days ago 3 replies      
52 requests, 5.4 MB and 8.63 seconds to load a simple blog post. With a bonus XHR request every 5 seconds.
skMed 3 days ago 0 replies      
Having built something similar with RabbitMQ in a high-volume industry, there are a lot of benefits people in this thread seem to be glossing over and are instead debating semantics. Yes, this is not "exactly once" -- there really is no such thing in a distributed system. The best you can hope for is that your edge consumers are idempotent.

There is a lot of value derived from de-duping near ingress of a heavy stream such as this. You're saving downstream consumers time (money) and potential headaches. You may be in an industry where duplicates can be handled by a legacy system, but it takes 5-10 minutes of manual checks and corrections by support staff. That was my exact use case and I can't count the number of times we were thankful our de-duping handled "most" cases.

openasocket 3 days ago 0 replies      
So there's a lot of talk on here about the Two Generals Problem, so I thought I'd chime in with some misconceptions about how the Two Generals Problem relates to Exactly Once Messaging (EOM). WARNING: I'm going mostly on memory with this, I could be completely wrong.

EOM is NOT strictly speaking equivalent to the Two Generals Problem, or Distributed Consensus, in an unreliable network. In distributed consensus, at some given point in time, A has to know X, A has to know B knows X, A has to know B knows A knows X, ... It has to do with the fact that the message broker is in some sense the arbitrator of truth, so the consumer(s) don't need full consensus. In an unreliable network, you can have EOM. http://ilpubs.stanford.edu:8090/483/1/2000-7.pdf gives some examples of how that works.

HOWEVER, you can't have EOM when the consumers can fail. If a consumer fails there's no way, in general, to tell if the last message it was working on was completed.

There are a couple of edge cases where you can still have EOM. For instance, a system where you have a message broker A, and a bunch of consumers that read messages x from that queue, compute f(x), and insert f(x) onto message broker B, where f(x) may be computed multiple times for the same x (i.e. if f is a pure function or you don't care about the side effects). This system can implement EOM in the presence of an unreliable network and consumer failures (I think it can handle one or both of the message brokers failing too, not 100% sure) in the sense that x will never be in broker A at the same time as f(x) is in broker B, f(x) will never be in broker B more than once for the same x, and any y in B had some x that was in A such that y = f(x).

pfarnsworth 2 days ago 1 reply      
Sounds very cool. A couple of questions I had:

1) What happens if they lose their rocksdb with all of the messageIds?

2) Is their kafka at-least-once delivery? How do they guarantee that kafka doesn't reject their write? Also, assuming they have set up their kafka for at-least-once delivery, doesn't that make the output topic susceptible to duplicates due to retries, etc.?

3) >Instead of searching a central database for whether weve seen a key amongst hundreds of billions of messages, were able to narrow our search space by orders of magnitude simply by routing to the right partition.

Is "orders of magnitude" really correct? Aren't you really just narrowing the search space by the number of partitions in kafka? I suppose if you have a hundred partitions, that would be 2 orders of magnitude, but it makes it sound like it's much more than that.

siliconc0w 3 days ago 1 reply      
Was thinking a 'reverse bloom filter' could be cool to possibly avoid the RocksDB for situations like this -- turns out it already exists: https://github.com/jmhodges/opposite_of_a_bloom_filter

I love it when that happens.

squiguy7 3 days ago 2 replies      
I wonder how they partition by the "messageID" to ensure that the de-duplication happens on the same worker. I would imagine that this affects their ability to add more brokers in the future.

Perhaps they expect a 1:1 mapping of RocksDB, partition, and de-duplication worker.

sethammons 3 days ago 0 replies      
"Exactly Once"

Over a window of time that changes depending on the amount of ingested events.

Basically, they read from a kafka stream and have a deduplication layer in rocks db that produces to another kafka stream. They process about 2.2 billion events through it per day.

While this will reduce duplicates and get closer to Exactly Once (helping reduce the two generals problem on incoming requests and potentially work inside their data center), they still have to face the same problem again when they push data out to their partners. Some packet loss, and they will be sending out duplicate to the partner.

Not to downplay what they have done as we are doing a similar thing near our exit nodes to do our best to prevent duplicate events making it out of our system.

philovivero 3 days ago 0 replies      
tl;dr: Clickbait headline. Exactly-once delivery not even close to implemented. Typical de-duping, as you've seen and read about hundreds of times already, is what they did.
incan1275 3 days ago 0 replies      
To be fair, they are upfront in the beginning about not being able to adhere to an exactly-once model.

"In the past three months weve built an entirely new de-duplication system to get as close as possible to exactly-once delivery"

What's annoying is that they do not get precise and formal about what they want out of their new model. Also, their numbers only speak to performance, not correctness.

On the plus side, I think it's awesome to see bloom filters successfully used in production. That sort of thing is easy to implement, but not easy to get right for every use case.

ratherbefuddled 3 days ago 0 replies      
"Almost Exactly Once" doesn't have quite the same ring to it, but it is actually accurate. We've already discovered better trade-offs haven't we?
iampims 3 days ago 1 reply      
If the OP doesn't mind expanding a little on this bit, I'd be grateful.

> If the dedupe worker crashes for any reason or encounters an error from Kafka, when it re-starts it will first consult the source of truth for whether an event was published: the output topic.

Does this mean that "on worker crash" the worker replays the entire output topic and compare it to the rocksdb dataset?

Also, how do you handle scaling up or down the number of workers/partitions?

qsymmachus 3 days ago 0 replies      
It's funny, at my company we implemented deduplication almost exactly the same way for our push notification sender.

The scale is smaller (about 10k rpm), but the basic idea is the same (store a message ID in a key-value store after each successful send).

I like the idea of invalidating records by overall size, we hadn't thought of that. We just use a fixed 24-hour TTL.

wonderwonder 3 days ago 3 replies      
Would something like AWS SQS not scale for something like this? We currently push about 25k daily transactions over SQS, obviously nowhere near the scale of this; just wondering about what limitations we will bump into potentially.
linkmotif 3 days ago 0 replies      
It's worth noting that the next major Kafka release (0.11, out soon) will include exactly once semantics! With basically no configuration and no code changes for the user. Perhaps even more noteworthy is this feature is built on top of a new transactions feature [0]. With this release, you'll be able to atomically write to multiple topics.

[0] https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+E...

ggcampinho 3 days ago 1 reply      
Isn't the new feature of Kafka about this?


robotresearcher 3 days ago 0 replies      
> [I]ts pretty much impossible to have messages only ever be delivered once.

IIRC, it's provably impossible in a distributed system where processes might fail, i.e. all real systems.

jkestelyn 3 days ago 0 replies      
Relevant to this topic: Description of exactly-once implementation in Google Cloud Dataflow + what "exactly once" means in context of streaming:


(Google Cloud emp speaking)

kortox 3 days ago 0 replies      
With deduplication state on the worker nodes, how does scaling up, or provisioning new machines, or moving a partition between machines work?
vgt 3 days ago 0 replies      
Qubit's strategy to do this via streaming, leveraging Google Cloud Dataflow:


majidazimi 3 days ago 1 reply      
What is so exciting about this? There is still possibility of duplicates. You still have to put the engineering effort to deal with duplicates end-to-end. If the code is there to deal with duplicates end-to-end, then does it really matter to have 5 duplicates or 35? Or may be they just did it to add some useful cool-tech in to CV?
gsmethells 3 days ago 2 replies      
Why do I get the feeling this is repeating TCP features at the message level? There must be a protocol that can hide this exactly-once need away. TCP doesn't generally produce downloads that are corrupt and fail their checksum test; the packets that make up the file are not duplicated.
luord 3 days ago 0 replies      
This is interesting work. But I think I'll continue relying on at least once and idempotency. Exactly once is impossible anyway.

> In Python (aka pseudo-pseudocode)

This annoyed me probably more than it should have.

spullara 3 days ago 1 reply      
This isn't the solution I would architect. It is much easier to de-duplicate when processing your analytics workload later and you don't need to do so much work.
PinguTS 3 days ago 0 replies      
That reminds me of the safety-related protocols we use since years in embedded electronics like rail-road signaling, medical, and other areas.
stratosgear 3 days ago 3 replies      
Site seems to be down. Any ideas how big these HN hugs of death usually are? How big of a traffic spike brings these servers down?
mooneater 3 days ago 0 replies      
Awesome story.What I would like to hear more about, is the people side. The teams and personalities involved with coming up with this new system and the transition.
throwaway67 3 days ago 1 reply      
... or they could have used BigQuery with a primary key on message ID.
As the U.S. fantasizes, the world builds high speed rail thetransportpolitic.com
391 points by jseliger  1 day ago   504 comments top 41
serhei 1 day ago 8 replies      
For those who think it's because of geography / hyperloop is better technology anyways / any other red herring besides Brezhnevian political stagnation:

It's not completely implausible that, 30 years from now, most of Europe and Asia are connected by hyperloops while the US has built nothing and Internet commentators are arguing that hyperloop is old news compared to yet-unproven teleportation technology, and anyways the population density of the US doesn't support hyperloops.

twblalock 1 day ago 5 replies      
The problems affecting high-speed rail in the US are the same problems that prevent low-speed rail, streetcars, subways, and buses from being more common -- people don't see themselves using such things and so they don't want to pay for them.

In the Bay Area, the Caltrain commuter train runs from San Jose to San Francisco, through the downtown areas of most major cities in between. It is currently so popular that it is significantly over capacity every day. Yet it is still a constant political battle to get funding to improve the system, even though the Bay Area is one of the most educated and politically liberal parts of the country, where support for public transit is higher than many other places.

Sometimes I think we would have much more transit funding overall if we set aside part of the transit budget to send Americans to other countries on vacation, so they will return knowing how good public transit can actually be.

gokhan 1 day ago 1 reply      
I'm on vacation in Italy and just used one this morning, from Florence to Bologna with Trenitalia. A lot of positive things: the ride was 35 minutes long, there's no security theater, you can arrive at the station 10 minutes before departure and hop on in a couple of minutes; it's comfortable, roomy, and goes from city center to almost city center, and more. And the train continued all the way to Turin, visiting many cities including Milan, in 2-3 hours. The cost was 16ish euros, I guess (deduced from the total payment for four people).

Doing the same through air travel would add at least 2-3 hours to the whole thing. I don't know about the cost comparison, but the user satisfaction is there.

tptacek 1 day ago 4 replies      
This topic comes up routinely on Hacker News, and it's no surprise why: there are a lot of Europeans interacting with a lot of Americans here, and European high speed rail is an enviable asset for the continent.

But last time we talked about this, it seemed to me that if you looked into the details, it was clear why we don't have HSR in the US. Even assuming we built a network that operated at Shanghai Maglev speeds, at the distances the network would need to operate, air travel would remain significantly more economical.

In a thread 3 years ago, I made a list of the top US cities by GDP, and then broke out the crow-flies ground distances between them:


Of the 55 edges on this graph, only 6 were 700 miles apart or less. Several of those are already served by the Acela.

There's a definite advantage to rail over air, in that rail can deposit you right in the middle of the city you're heading to. But that advantage can't make up for the fact that no train is going to compete with a plane for trips between the largest US cities.

chroem- 1 day ago 17 replies      
Unpopular opinion, but I really don't see why we should want high speed rail in the first place. It's slower than flying on a commuter airline, but the tickets cost nearly as much, and it's also enormously expensive to build. Then there's also the issue of throughput and last mile logistics. You're limited to putting people in a few train cars, as opposed to a continuous stream of people on a freeway. Being a high speed train, stops are necessarily few and far between, so once you arrive at the station you still have to figure out how to travel tens of miles to where you really want to go.

My perception is that it's a huge money pit for something that's quite frankly inferior to our current infrastructure. We would be much better off improving our current interstate system.

dghughes 1 day ago 1 reply      
I envy mainland Europe with its rail system. I wish my region of Canada would build a rail system. What we have is old and was mainly for transporting ore, steel, and grain. Even regular rail, not high speed, any speed, is preferable in a snowstorm.

My region is small and would be perfect for a light rail system mainly because it's got few people scattered over a wide area with no direct route.

Bombardier even makes rail cars for many countries so it's a home-grown resource we could use.

I think south eastern Canada and north eastern US could have a great interconnected rail system. I'm only 800 km (~500 miles) from Boston but I may as well be on the dark side of the moon.

Like NY City before its subway system: people were crowded in the city, but when rail was expanded people could live in the suburbs and work in the city. I think a US and Canadian rail system would open up travel and trade on the eastern coasts of both countries. Day trips to cities you'd never even think of visiting now, or that you couldn't reach in a day.

mc32 1 day ago 8 replies      
Rail is not cheap. It's feasible when we have population density. It would make sense along the DC-Boston megalopolis and perhaps SF-San Diego, maybe some stretch of Texas. It makes little sense in the rest of the country.

That said, where it would make sense, like DC-Boston, we definitely should build it out. Build up the cities as the countryside is absorbed (as seen in Japan, and elsewhere in Asia) and let it become viable. Its deployment would definitely affect how cities and other communities grow and also depopulate, so we'd need to anticipate that and prepare for it.

Three things China has going for it vis-a-vis the US:

-Pop density

-Command economy (gov't can just move things through with little debate, displace 1MM people, if necessary.)

-Costs (in labor, materials, regulation, etc.)

notadoc 1 day ago 2 replies      
I wonder if the USA will ever build and modernize its infrastructure? We're still coasting on what was built 50, 60, 70+ years ago.

Then you travel to the rest of the developed world, and wow, what a difference in infrastructure.

armenarmen 1 day ago 1 reply      
Well, the fact that the government subsidized the automotive and oil industries with the Federal Highway Act in '56 led to our deprioritization of rail. Had this not been the case, chances are we'd have European-equivalent rail, and the small local trolley systems that dotted America's small and medium-sized cities would never have been torn up.
closeparen 1 day ago 3 replies      
The US has a strong inter-city travel network in the airlines.

The TSA severely limits its effectiveness, so it could be tempting to build a rail network just to bypass the TSA, but there's no reason to think the same screening procedures won't apply to HSR after the first incident (or just threat).

sdiq 1 day ago 3 replies      
Every time I hear or read something about the US, I realize the country is in many ways far behind Europe. When it comes to healthcare (in terms of accessibility), education (in terms of cost), infrastructure, etc., America seems to be doing much worse than European countries. Yet, ironically, America still leads them (and the rest of the world) in most other spheres.
jamespitts 1 day ago 1 reply      
We have a serious problem with retrogradism in this country.

A large number of people are suffering from changes outside of their control, and they are disconnected from those at the forefront of social and technological progress. Many of these people have lost trust in the system, and even in progress itself (outside of progressions that are accessible and affordable such as games or phones). As a result, there is little enthusiasm for investing in major improvements to systems or building any major infrastructure enabling progress.

"High speed rail? What is in it for me? I work part time and can't afford these medications. I want the life I used to have back."

Perhaps a good place to start is understanding the experience of people who are voting for an imagined retrograde society. This can be difficult for those of us who have had the privilege of a better education, or better opportunities in the cities, or even all of our needs met as we build what we build. The privileged must try, and must succeed in understanding what is happening here. This is because the votes of those within what is essentially a ghetto lead to major consequences, including underfunding high speed rail. The result isn't just ridiculously under-qualified and intellectually isolated politicians that are easy to make fun of.

The underprivileged will keep voting in this way until their concerns are answered (or not).

We at the technological forefront know more about what needs to be done in terms of advancing progress, possibly even to the point of solving half of all social problems. However, we must also pay heed to the immediate, harsh reality of the people left behind. Our environment -- natural, political, or infrastructural -- depends on this.

If the ethical demand to listen and react appropriately to the suffering of others does not convince us to strongly act, watching the destructive results of their votes should.

faragon 1 day ago 0 replies      
Maybe the US is doing the wiser thing. I live in Spain, where high speed rail was pushed in the era of the housing bubble, and in my opinion it is not that big a deal, except for connecting the two biggest cities of Spain (Madrid and Barcelona). Lower capacity routes run at a deficit, and I'm very skeptical about their long term viability.
thisrod 1 day ago 1 reply      
There is another question here. How the hell did the French build 300km of high speed track, going through central Paris, for only 10 billion dollars? If Australia could do that, the cost benefit analysis on Melbourne, Sydney and Brisbane would look very different.

Melbourne to Sydney is worth doing now, though it's a close thing. But the benefits come as time savings for rich businessmen, and Australia told them that if they really wanted it they could pay for it themselves.

d--b 19 hours ago 0 replies      
Something that is never debated when we talk about high speed trains in the US: would they bring development the same way the iron horse brought development in the 19th century?

I mean, yes the US is sparsely populated (in the middle), but isn't it also because it doesn't have fast and easy transport system?

Wouldn't a high speed line between San Fran and Portland develop the very rural regions of Northern California?

High speed trains also mean high speed cargo transport; isn't that driving some economic development?

These are not rhetorical questions; I seriously have no idea of the answers, but it would be nice to see what experts think about this.

rmoriz 1 day ago 0 replies      
You can't even compare the population density of France with Germany's, so why always apply the "high speed rail" idea to the US? Imagine a high speed rail system between large cities where people still have to own and use a car and drive 200 miles to/from the nearest station. Also, either the train stops at every small town or it is an express train that leaves the rural areas behind.

IMHO a high speed railway network is not the start but an evolution of an existing regional rail/public transportation system that acts as feeder and commuter infrastructure.

The US lacks those public transportation systems even in mid-size towns. That's a bigger problem IMHO.

hassancf 1 day ago 0 replies      
Even third world countries such as Morocco are building rail tracks for bullet trains...
bsaul 1 day ago 1 reply      
A big difference between Europe and the US is also the fact that people in Europe tend to live inside the cities, not just go there to work. The train is considered faster than the plane here in France because you can get to the train station by subway and board immediately, whereas you need to leave your place two hours before your flight.

I have the feeling that this advantage would be lost in the US, where people live in suburbs far more, so any trip starts with at least a 45 minute drive (not to mention that parking at an airport is probably more convenient than in a city center).

a_imho 18 hours ago 0 replies      
From the little data I've gathered, trains/mass transit seem to be much more efficient than cars regarding greenhouse emissions. YMMV depending on your stance on climate change, but I find it sad to see people dismissing rail so easily in this thread.
ptr_void 1 day ago 0 replies      
'Why Trains Suck in America' : https://youtu.be/mbEfzuCLoAQ
ravenstine 1 day ago 0 replies      
When other countries play a bigger role in securing the global economy with military might, maybe we can start building a high speed rail infrastructure. Otherwise, I don't see an actual need for it. It would be an improvement, for sure, but all I see is people looking at much smaller countries and assuming that America is stupid for not doing everything they do.
ams6110 1 day ago 2 replies      
In the U.S., President Obama's initiative was met by Republican governors elected in 2010 who, for reasons that had little to do with sanity, resisted free federal money to fund the completion of intercity rail projects their (Democratic) predecessors had developed

It's not insane. Federal money is never "free": it's taken from the people and always comes with strings attached.

mnm1 1 day ago 1 reply      
I think it's too late for rail. Yes, we lost a generation of development in rail. We also lost a generation of development for pretty much every other transportation industry, and thus our whole infrastructure. The article briefly touches on it. Transportation in general hasn't been a priority for at least thirty years. I'm not worried about rail. Rail is dead in the US and has been for a long time. I'm worried about our highways. That's our infrastructure core, without which the US cannot survive. Not building rail projects in the US is pretty normal and on par for the downward trajectory we're on.

Not building and maintaining highways and bridges shouldn't even be an option. While some upper-class, rich people can afford to live in our cities, they are a huge minority and most people rely on cars and highways. Outside of a couple of cities, good city public transportation simply doesn't exist in the US and won't exist anytime soon.

I think we need to be realistic as to what is possible in the US. High speed inter-city rail isn't possible. And even if it is, can it compete with the price of plane tickets? Doubtful. Giving our cities good public transportation isn't possible. It may have been possible in the past, but not the last few decades. Having room inside a city for all who want to live there most certainly isn't possible. Building roads and bridges has now become almost impossible in many places. I have to wonder what is the plan for the US transportation infrastructure. As far as I can see, the plan is to let it deteriorate until it doesn't exist anymore. At least in that sense, it's consistent with education, social programs, and the rest of our crumbling society.

ortusdux 1 day ago 0 replies      
I still resent Rick Scott's decision to reject federal funds for Florida's high speed rail. It would have been the first high speed rail in the country.


mieses 22 hours ago 0 replies      
Rail is a bad idea wrapped in shiny engineering. Read Randal O'Toole: http://ti.org/antiplanner/.
em3rgent0rdr 19 hours ago 0 replies      
High speed rail doesn't actually provide the eco-benefits over planes that proponents think it will. And high speed rail is only competitive against planes and cars at distances less than 500mi. Unfortunately the US is too sparsely populated and the big cities outside of the coastal corridors are too spread out for high speed rail to be economically and ecologically sensible. http://www.newsweek.com/why-high-speed-trains-dont-make-sens...
mickronome 1 day ago 0 replies      
Several comments in this thread almost appear to be constructed to prove the author right about how the debate derails. It's not much of my concern, but still I couldn't help noticing something that felt like an unusual occurrence here on HN, or maybe I'm simply seeing ghosts. I am rather tired, to be honest!

Anyway, my flawed observation:

Some sort of deadlock where, instead of discussing how to improve the situation, the discourse gets stuck debating the correct reason for not doing anything rather than trying to come up with improvements?

Several times the argument is made that a non-existent technology will make current investment pointless in the future, so no investment should be made now. Isn't that the argument implied by the title of the article?

Obviously, it could be true that future inventions will make it pointless, but that certainly is not something you can calculate or know off the cuff, if it's even worth speculating about. Building a high speed rail network takes long enough that all those possible avenues can probably be explored in excruciating detail before the first shovel hits the dirt, a decade from now even if everything moves quickly.

People are sceptical of hyperloop, which is understandable in many ways. But what if it worked? Wouldn't it be worth investing quite a lot of money simply to figure out whether it could?

Obviously it could potentially only solve a very specific part of the transportation puzzle, but one that could have quite some positive effects.

Positioning cars and aircraft as more-or-less the only viable ways of communication for the foreseeable future sounds like an awfully odd position to me, even for a very sparsely populated country. While, the correct solution might not be high-speed rail, some variation of it it could still be the best solution in several instances.

Maybe someone would come up with something like a tethered electrical ground-effect aircraft/train that could take advantage of the sparse population if they knew there was money to be made, instead of massive resistance and cartloads of red tape.

cartercole 1 day ago 0 replies      
So because every other country is subsidizing the shit out of stuff, we should too? Economics drives our country, not the pipe dreams of people who want the taxpayers to foot the bill for their new hotness.
pmurT 1 day ago 0 replies      
Even if we had high-speed rail the gov would regulate it to death like everything else - they'll make it just as painful as flying. Imagine the TSA salivating for the mission creep.
bpodgursky 1 day ago 5 replies      
IMO electric and self-driving car technology is advancing rapidly enough that investment in high-speed rail is going to end up like landline telephones -- it had a time when it was useful, but countries that missed the boat will end up doing A-OK without them.

Rail is convenient, but it will never ever be as convenient as having a car take you where you want to be, carry your kids, and carry your stuff around. As soon as self-driving technology eliminates the hassles of parking and clean solar-electric tech eliminates the environmental concerns, ridership is going to tank on all the fixed train lines. It might be 10-15 years out, but I would be shocked if any of the investments made today in rail ever pay off.

giardini 1 day ago 2 replies      
What's the justification for spending on high speed rail vs roads vs air travel vs doing nothing (Google Car is coming, remember)?
flimflamvir 1 day ago 0 replies      
The first one went OK, the second collapsed the Japanese rail industry. All require huge subsidies.

America is smart!

ensiferum 1 day ago 8 replies      
For Americans, a train is socialism. They need their V8s for crawling at walking speed on the 4-lane highway, burning a ton of fossil fuels. Actually, the bigger the truck the better, since it means freedom (or something). ;-)
anjbe 1 day ago 1 reply      
I'm going to share my experience with commuter (not high-speed) rail: the New Mexico Rail Runner. https://commons.wikimedia.org/wiki/File:Trainroadrunner.jpg

The Rail Runner was built in 2006 primarily due to Governor Bill Richardson's efforts. It essentially covers two cities, Albuquerque (~500,000 people) and Santa Fe (the capital, ~70,000 people), which are already connected by Interstate 25.

I love trains; I recently took Amtrak to LA and back. And I love the Rail Runner. It's my favorite way to get to Santa Fe by far. Once I arrive, being without a car is not too bad: Santa Fe is a fairly walkable city, Albuquerque has a decent bus system, and a bicycle (which I can take on the train) makes things a lot easier.

The big problem with the Rail Runner is its cost.

Richardson originally was very vague about the cost, and initial estimates were (it turns out, wildly optimistic) sub-$100 million in initial capital. The state took out a loan to pay for construction. The total cost is now estimated to be about $800 million. Currently the state Department of Transportation pays about $25 million a year on the loan; as currently structured, that will slowly increase to $35 million per year until 2025 and 2026, when the payments jump to $110 million (per year!).

New Mexico is currently in a budget crisis (not just due to the Rail Runner). (http://fortune.com/2016/12/04/new-medico-budget-crisis/) There have been special legislative sessions called this year to sort things out, and there's conflict between all three branches of our state government. I have no idea where the DOT will find $80 million in its budget over the next ten years, at least not without serious cuts to our already underfunded highways.

Then there are the operating costs. This is not so bad. Revenues only cover about 10% of operating expenses. But at least the rest is covered (at the moment) by county taxes, federal grants, and payments for use of the track by Amtrak and BNSF.

I'll be cynical: my personal belief is that Richardson intentionally hid the costs and pushed the Rail Runner as a short-term publicity stunt for his 2008 Presidential run, without a care as to what it would do to the state ten years later. It is very like him. (Don't even get me started on the spaceport.) https://www.abqjournal.com/news/state/602848nm10-16-07.htm

The legislature sponsored a study to determine the feasibility of selling the Rail Runner (https://lintvkrqe.files.wordpress.com/2015/11/final-hm-127-s...). It concluded that nobody would be willing to buy it due to low revenues, high operating costs, and the plethora of exclusivity agreements that would need to be renegotiated (with BNSF, Amtrak, the pueblos, and the federal Department of Transportation). And selling it wouldn't help, since it wouldn't absolve us of the requirement to pay off the debt. At this point I don't foresee a solution other than refinancing the loan (again) to avoid those $100 million cliff payments, at the cost of further interest payments.

Like I said, I love the Rail Runner, and I really want to see it (or passenger rail of some sort) succeed in New Mexico. I do think the way the Rail Runner was handled (intentionally hiding the costs and having no concrete plan to cover operating costs) is completely unconscionable.

Not that it has to operate at a profit; after all, highways lose money too. But the Rail Runner loses so much money, and we're already a poor state. It is valuable to connect New Mexico's capital with its largest city. I just feel like there has to have been a better way to do it. I hope the proposed train from Las Cruces, NM to El Paso, TX (http://www.lcsun-news.com/story/news/local/2017/06/28/study-...) will learn from the mistakes made with the Rail Runner.

Whew. After all that, I'm curious: what successful rail projects have you seen, and what makes them successful?

exabrial 1 day ago 0 replies      
This sounds like a title written by someone who's never visited anywhere but New York or LA. The USA is very large, and we don't have high population density except on the Eastern seaboard (where rail seems to work pretty well).

Roads are a much better, cheaper, faster, flexible option. We just need a 10x revolution in: storage density, fast charging, or efficiency.

taw55 1 day ago 0 replies      
What about the logistics industry?
ableton 20 hours ago 0 replies      
Interestingly, a private company is trying to build a high speed train from Dallas to Houston, TX. The great thing is that it would be privately owned, so if it's a flop taxpayers aren't on the hook.
scythe 1 day ago 0 replies      
http://en.wikipedia.org/wiki/Brightline appears to be for real in Florida and the Acela now carries a majority of traffic on some parts of the Boston-NY corridor and service to Washington. Texas has much cheaper gas and stronger car culture than anywhere in Europe or Canada. It's really just California that's lagging behind expectations; the other two projects could be better. And Florida / Texas / California / Northeast wraps up all of the locations in the US that are viable for high-speed intercity passenger rail. The only truly underserved corridor in North America is Toronto - Detroit/Windsor - Chicago, but most people don't even recognize it as a possibility because it crosses a border.

So instead of "what's wrong with the US" we should ask "what went wrong with CA HSR?".

graycat 1 day ago 0 replies      
How much money does Amtrak in the US Northeast Corridor make each year, in ballpark, round numbers? $100 million? $1 million? $1?
graycat 1 day ago 0 replies      
A guess: Now that Trump is talking about "infrastructure", the passenger train people are coming out of the woodwork again looking for big subsidies from the US Federal Government.

Some years ago, for a while I was a prof in Ohio. Well, there was a group all hot on connecting all the Rust Belt cities -- Chicago, Detroit, Cleveland, Columbus, Dayton, Cincinnati, Muncie, Akron, Indianapolis, South Bend, Youngstown, Toledo, etc. with passenger trains. They were really hot.

Look, guys, the US had a very good passenger rail network. You could go from one tiny crossroads to any other, all by train. And people did that. But soon that whole thing was killed off by, and may I have the envelope please? Right, the Model T, etc. Private cars. A lot of the tracks grew up with weeds.

Soon after WWII, for trips up to 1000 miles, say with the whole family, people would rather just take the family car. Just after WWII the passenger trains were still running, but, no thanks, people would rather take the family car, e.g., from Florida all the way to Grandma's near Buffalo, NY. As soon as I got married, my wife and I went to her family farm for Christmas, 900 miles, by car, car packed with stuff. Plane? Train? Bus? No thanks.

Gee, guys, now with the TSA, no way do I want to take a carload of luggage, toys, Christmas presents, etc. past the TSA. No way.

For me, for anything like family travel, public mass transportation, no matter how fast, how roomy, how cheap, how safe, due to the TSA and all the luggage handling problems, lack of privacy, being legally under the thumb of a lot of people, rules, bureaucrats, various cases of police, being subject to being forced to wait in my seat for four hours while whatever is going on, etc., the answer is no, no way, never, don't bother to ask again.

There are a lot of people and projects there in the woodwork eager to come out with lots of publicity, reasons, and excuses and eager to scarf up Federal subsidies. A LOT of people/projects. Clearly there is a whole industry of this stuff. They are always back in the woodwork, and as soon as they smell money, and they are good at smelling money, out they come, big publicity drives, etc.

mattfrommars 1 day ago 11 replies      
People keep on forgetting the immense size of the continental USA.


My own private basic income opendemocracy.net
414 points by deegles  4 days ago   264 comments top 31
maerF0x0 4 days ago 13 replies      
> I lucked into money.

The basic point is that he gets paid more while not doing more "work". For those for whom "work" is the only lever they can use to convert into money, inequitable work:cash ratios look "unfair" (a morality statement).

Market economies do not function on work, but on value. He did the same work in a high value scenario. A glass of water provided in the middle of a developed city is worthless and thus free. A glass of water provided at the right time in a desert is invaluable and thus expensive.

My takeaway is this: always meet the highest value need you can, and create additional value by helping others meet higher values than they currently do. Low placed people may not be able to "work" their way to the upper echelons, but they can invent, intuit, or otherwise create high value leverage of their own and others' work. I don't see it as unfair that a person making $1 a day is unlikely to become a deca-billionaire in their lifetime (total mobility); instead I want to ensure they have some mobility so that they can always better their situation, maybe by an arbitrarily selected 2x multiplier.

erentz 4 days ago 5 replies      
Wow, I am astounded that all the comments so far are trying to tear down this person and what he is saying, and very transparently so, because there seems to be something threatening about the idea that he got where he is through luck. Why do people find this idea so threatening?

I got where I am today due to luck. I did work hard. But I know other people who work hard too. Who are just as capable. They just didn't get assigned to the class with the really awesome, motivating teacher. Or randomly picked out of the pile by a recruiter for a call. Or happen to wind up working somewhere where they made friends with a guy who was well connected. Etc.

To discount luck's role in how people get successful is a huge blind spot.

rrggrr 4 days ago 7 replies      
Gosh. OP made me sad with his thesis.

"1. For owners, work is optional."

... As an owner, I wish this were true. For me and most owners I know, work is mandatory, and it's 60 hour weeks and a lot of sleepless nights. There is no safety net. There is no unemployment. I get no workman's compensation. I have no employment law protections. It's, as one HN comment put it some time ago, "a suffering contest" a lot of the time.

"2. ...your stuff will keep on making money forever."

... Again, I wish this were true. Lifetime income producing investments are hard to find where volatility, fees, inflation, taxes and life's circumstances don't erode value. Sure, the truly wealthy are set. But most business owners are not truly wealthy, and the exit strategies just aren't there in many industries.

"3. We can get entrepreneurship without the enormous rewards to ownership we have today"

... Anyone who's dealt with the day-to-day grind of owning a business would be especially offended by this comment. A firehose of pain, suffering, and risk flows to the owner in the form of litigation, regulators, employees, vendors, customers, bank officers, etc. I've equated it to a "lazy susan" dispensing aggravation in every conceivable form. It's the reward - if you can get it - that makes it worthwhile. Try just getting divorced as an owner and you become an instant convert to rewarding ownership.

Yes, rent-seeking monopolies are bad. Yes, more needs to be done to create opportunities for wealth creation by employees - to give them the freedom to say "no". No, punishing ownership isn't the answer.

UPDATE x2: I'm really not confusing types of owners. The author's thesis is broad, not only in the article but in his works in general. Access to passive income investments (e.g. REITs) is among the few places small business owners can go to rent-seek. Rent seeking is bad under a few sets of conditions but doesn't deserve the author's indictment.

jacknews 4 days ago 0 replies      
Owning land is certainly a private basic income.

Consider that land prices (and therefore rents) basically reflect the surrounding economy. By buying land you are essentially profiting from the work of everyone else in that economy, forever, while paying only minuscule property taxes.

IMHO https://en.wikipedia.org/wiki/Henry_George had the correct solution: tax land (though not the structures developed on it) at its full market rental value. And the concept should apply to ownership of all resources.

matt_wulfeck 4 days ago 8 replies      
> My first big lucky break happened in 2009 when Georgetown University hired me as a philosophy professor on their campus in Qatar

Apparently going to high school, studying and passing the SAT, applying and being accepted into university, studying, passing, and graduating university is just luck.

Have we reached peak luck yet? Is there nothing we can ever do to improve our circumstances and our lives?

This is like a new age of economic predestination, where everything has been predecided for you by God and there's nothing you can do. Calvinists and BI proponents apparently have a lot in common.

OJFord 4 days ago 7 replies      
> [in the US] people who don't need their income are taxed less than people who do

I'm not American; I don't know the tax structure. But I can't even imagine how this could possibly be true, or what actual fact the author means by it.

Without looking it up, I would assume that the USA subscribes to progressive taxation, under which the richer pay more tax in both absolute and relative terms.

Even if it has a flat rate of tax, (a structure, incidentally, that makes far more intuitive sense to me) the richer are still paying more tax in absolute terms, and equal in relative terms.

What Earthly metric is the author using?

bigtunacan 4 days ago 1 reply      
Can someone clarify this piece for me? I don't understand how the author is able to completely avoid taxation on the rents.

>My wife and I don't need the money we make from owning most of the business. We live off the salaries of our jobs, and reinvest virtually our entire share of the business. These reinvestments count as losses, and so officially we have never made any income or paid income taxes on our share of the business.

greedo 4 days ago 2 replies      
Ignoring the logical fallacies the author makes, one thing jumped out at me. He's not buying the average home in South Bend. If he's only paying $180/year in property tax, the assessed value of his average home is around $9K. A quick look at Google shows the median price (from Zillow, so a grain of salt is recommended) of $65K. If he and his brother are buying up $9K homes, his average mortgage is under $50/month. I'm sure that he's doing the socially responsible job of only charging his tenants just a little over his full costs...
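The arithmetic in the comment above can be checked back-of-the-envelope. The ~2% effective tax rate and the 5%/30-year mortgage terms below are assumptions on my part, not figures from the comment:

```typescript
// If $180/year in property tax implies a ~2% effective rate (assumed),
// the assessed value works out to about $9K.
const annualTax = 180;
const assumedRate = 0.02;
const assessedValue = annualTax / assumedRate; // 9000

// Standard amortization: M = P*r / (1 - (1+r)^-n), assuming 5% APR, 30 years.
const principal = 9000;
const r = 0.05 / 12;
const n = 30 * 12;
const monthlyPayment = (principal * r) / (1 - Math.pow(1 + r, -n)); // ~$48
```

Under these assumed terms the payment comes out just under $50/month, consistent with the comment's claim.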
Mz 4 days ago 1 reply      

A guy in a financially comfortable position feels guilty, decides his education, willingness to live in Qatar, etc. shouldn't really be worth this much, and completely discounts the idea that anything actually matters: it is all merely "luck" and that's it.

Sounds like an existential crisis, not really a good commentary on the concept of basic income.

stuaxo 4 days ago 1 reply      
Seems to state the obvious to me, but looking at the comments on the article, I'm in a minority.
objectivistbrit 4 days ago 1 reply      
He talks about the unfairness of "our economic system" but he makes his income from Qatar's economic system. Qatar likes to spend their oil money on western professors to give themselves an air of legitimacy.

As for renting out property, it's never effortless or risk-free. If it was, the rate of return would drop to that of other minimum risk assets (e.g. treasury bonds).

brianwawok 4 days ago 1 reply      
Hey, South Bend made an article on the internet and it wasn't even about our mayor!

Rental income here is a bit weird because you have in general a very poor town, with a very expensive private university with old money funding houses for students. I bought my house here for a fair bit less than it would cost to rent something half the size.

obstinate 4 days ago 3 replies      
On some level, this is basically "capitalism 101." If I don't need my resources now, I can put them to work so that I can later benefit from them. This leads to me having productive assets, which pay me some return.

If I could not acquire productive assets, there would be much less reason to save. And it's unclear how one would save, as banks would likely not exist either. You can solve this problem somewhat by having a centrally planned economy. But then you have the problems that hit those.

tiku 4 days ago 1 reply      
So this is part 2 of the "how to retire at 34" article from a few years back...
edpichler 4 days ago 1 reply      
The article is about something that should be obvious, capitalism: money goes to the capital owner.

Of course this has a lot of problems, but it's the best we have today. In this system, the rules of the game are: work and spend less than you earn, and use the money to accumulate more capital, not to buy more things.

DailyDreaming 4 days ago 0 replies      
I agree with the author's points. There have been, almost certainly, Einsteins who have lived and died in poverty without ever having tasted a book. I feel like a basic income for everyone is supportable and would give people falling to an unknowable fate the lift they need to go beyond surviving. Less work seems to need to be done with tech advances today, but the only thing a job-replacing robot seems to do is take the incomes of two people and give them to the person who owns the robot. The slippery slope is that we will eventually do this for most jobs, if we survive long enough as a species, and I agree we need change to evolve as our society does.
dlwdlw 3 days ago 0 replies      
I'm having a lot of fun lately thinking about everything in cryptotokens:

Everyone gets 3 basicMealTokens and 1 basicFun token.

However, in SF, there are only sfMealTokens which require 20 basicMealTokens.

In Vietnam, 1 vietnameseChickenPlatterToken is only half a basicMealToken.

Tech workers are paid in techWorkTokens which can be exchanged for 200 basicMealTokens each or 100 basicFunTokens, or 1/1000th of a Tesla token.

Token transfers have friction so it is advantageous to maintain a surplus of tokens most easily transferrable (least hops) to what you want.

To ease this, SF maintains a sfBasicToken and makes the market to transfer tokens from all over the world to sfBasicTokens. sfBasicTokens have an advantage in that you can pay for things easily instead of paying 67.34 redditStatusTokens.

Some old geezer is willing to sell his house for a legacy usdToken, a token so old it had a physical representation and no historical memory.

The basicFoodToken idea doesn't pan out. SF starts giving out sfBasicFoodTokens instead. However, large numbers of people notice an arbitrage mistake and live like kings in Thailand, taking advantage of the buying power of sfTokens due to Thai people wanting to make it big with their startup dreams in the Bay Area.
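The friction idea in this comment can be sketched as a toy conversion model. The exchange rates come from the comment; the 1% per-hop fee is an invented figure:

```typescript
// Toy model: every conversion hop keeps only 99% of the value (assumed fee),
// so the more hops between the token you hold and the token you spend,
// the more value you lose in transit.
const KEEP = 0.99;

// Rates from the comment: 1 techWorkToken = 200 basicMealTokens,
// and 1 sfMealToken = 20 basicMealTokens.
const techToBasicMeal = (tech: number): number => tech * 200 * KEEP;
const basicMealToSfMeal = (basic: number): number => (basic / 20) * KEEP;

// 1 techWorkToken -> 198 basicMealTokens -> 9.801 sfMealTokens,
// versus a frictionless 10: the two hops cost about 2%.
const sfMeals = basicMealToSfMeal(techToBasicMeal(1));
```

This is why, in the comment's world, it pays to hold a surplus of whichever token is fewest hops from what you actually buy.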


dugluak 4 days ago 0 replies      
It's not two big lucky breaks; it should actually be three. Being born as a human is the first lucky break every human being on earth gets. If you think about it, 'merit' is man-made. Nature, most of the time, works on pure chance.
imgabe 4 days ago 0 replies      
> No one is going to give tens of thousands of dollars every year to some guy who owned a couple of houses and said he knew how to manage more,

Well, not no one. There are hard money lenders who will loan money to "some guy" to do more or less that.

hartator 4 days ago 2 replies      
I think it's awesome to make criticisms about how capitalism is bad from Qatar, where basically no freedom exists and everything is controlled by the state. Not even mentioning the actual slavery.
etergri 1 day ago 0 replies      
You all will talk and conclude absolutely nothing. IQ distribution is cancer and you're all getting confused with one another.
tunesmith 4 days ago 2 replies      
What I got out of this was the suggestion to fund Basic Income by taxing unearned income. I'd like to read more on how the numbers would actually work out. If that were the only funding source of BI, then how much impact would that be on the rich, and how much benefit would that be for the people that could use BI?
arwhatever 3 days ago 0 replies      
Well success is either based entirely on luck or entirely on merit, and I'm sure we'll figure out which of the two it is real soon now.
ableton 3 days ago 1 reply      
If you live in America you can do what you want. You can study hard and get a scholarship to college even if you are really poor as a kid. I know a poor Mexican immigrant whose kids all got college degrees for free. It doesn't really matter if you grow up poor (in America). There are many people who are very rich precisely because being poor as kids drove them to work hard.

What does matter is if you grew up in a stable family. If you don't have that you get really whacked out psychologically, and it can affect people for their whole lives. People look to the government to tax and spend to solve social problems. However, oftentimes the government is the cause of such problems in the first place. For example, we see a huge number of people in prison today. Well, studies show that 60% of people in prison grew up without a father in the home. The government today actively promotes divorce and children out of wedlock through paying single mothers. The "compassion" has raised a generation of young people who are lost.

America has more money than ever but our families are falling apart, torn up by divorce and pornography, and our young people are paying the price. The government should focus on supporting strong families, and you would watch so many major social problems evaporate. I know that I am where I am today because of my dad's positive influence on me. Without his support I would be nowhere right now. Every kid needs a loving family. It's time the government starts trying to promote that.

Of course in Qatar they have serious human rights issues that need to be resolved. I'm just talking about in America.

pavement 4 days ago 1 reply      

> nor is there a combination of bad choices that could conceivably put me in their position from my starting point.

Guess again.

dredmorbius 4 days ago 0 replies      
Read this essay (which is, as it goes, pretty good), then go back and take a look at, say, the first book of Adam Smith's Wealth of Nations. In particular the prices of commodities, labour, stock, and rent, as well as the factors influencing wages of labour (chapter 10).


It doesn't hurt to consider Ricardo as well.

From Ricardo, we get the Iron Law of Wages, and the Law of Rent. Key to understand is that these move in opposite directions:

* Wages tend to the minimum subsistence level, all else equal.

* Rent rises to claim the surplus value afforded.

That is, wages are based on the input costs, whilst rent is based on the output value (use value). The third element, price (sometimes "exchange value"), is what's at question.
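Ricardo's Law of Rent, as summarized above, can be stated in one line: a plot's rent is its produce minus the produce of the best available rent-free (marginal) land. A sketch with invented yields:

```typescript
// Invented yields for three grades of land; "marginal" is the best land
// still available rent-free.
const produce = { fertile: 100, average: 80, marginal: 60 };

// Law of Rent: rent = produce of the plot - produce of marginal land.
const rentOf = (plotYield: number): number => plotYield - produce.marginal;

const fertileRent = rentOf(produce.fertile);   // 40: the surplus goes to the owner
const marginalRent = rentOf(produce.marginal); // 0: marginal land bears no rent
```

Note how this matches the bullets above: improve the surrounding economy and every plot's yield advantage over the margin (the rent) grows, while wages are pinned to input costs.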

(I distinguish cost, value, and price as three distinct concepts. This is a long-standing question, and in my view, a grossly confused one, in economics. They're related, but not deterministically. In the long term, C <= P < V. Bentham's "utility" is an exceptionally red herring. More, very much in development: https://redd.it/48rd02)

Note that this means that rent is determined by pricing behaviour. If you're a "business owner" but you're not capable of extracting rents, you're either selling commodities or labour; you're not collecting rent. The term here is in the sense of economic rent. Simply "owning a business", without the economic circumstances which confer rentier power, isn't sufficient -- don't confuse what it is you're doing with the systemic construct in which you're doing it. Widerquist emphasises this point specifically; several commenters here clearly haven't grasped it.

Another elided discussion: rents are associated with access control, and can be thought of as authority over some (virtual or physical) gate. They're a natural element of any networked structure, in which nodes and links exist, most especially where some nodes have higher value, or control more flows. I believe though can't yet show that all cases of rent involve a fundamentally networked structure, again, virtual or real. My concern is that this may be a reflexive definition, I'm trying to determine that it is or isn't.

Another element of this is compensation for labour. Smith lays out five elements determining this, and I find them durable and comprehensive. In Widerquist's case, the combination of the requisite skills, and the comparative unattractiveness, of teaching in Qatar allows him to claim both a high wage and favourable working conditions (including a lighter-than-typical workload). This falls straight out of Smith. That is, he earns his salary "by doing a job few others are both willing and able to do".

If you're looking at the macro view, realise that these don't scale. That is, the innate and acquired capabilities to teach at University level are not widely distributed through the workforce, and the lack of appeal of various sorts of work is sufficient to dissuade (or prevent, or disqualify) others from taking part in it.

There are other elements here as well: the complementarity of time and skills, on the one hand, with money, on the other (the classic business partnership). Tax structures (and who they benefit). The relationship of wealth and political power. Smith again: "Wealth, as Mr Hobbes says, is power." One of the shortest and clearest sentences in WoN, incidentally.

Spooky23 4 days ago 0 replies      
This guy sounds like a real shithead.

He makes his living teaching the children of totalitarian aristocrats at a faux university, is saving for the future on the backs of the poor back at home, and gets a kick out of having indentured servants at his beck and call.

I wouldn't shed a tear when his little empire of dirt collapses.

samnwa 4 days ago 0 replies      
There can only be so many people at the top.
the_cat_kittles 4 days ago 1 reply      
for the article's description: "how [the economy] rewards people who own stuff rather than people who do stuff"

i can't believe i never summarized my own feelings that succinctly. of course the reality is it does both, but i think it's way out of balance towards ownership rather than doing.

moeymo 4 days ago 2 replies      
Property tax is $15 per month in South Bend for a house? Wrong. Capital gains is taxed lower than income? Wrong -- sometimes it is, sometimes the other way around. Multiple factual errors.
moeymo 4 days ago 1 reply      
Property tax is $15 per month for a house in South Bend, IN? lol. Capital gains is taxed lower than income? Wrong ("it depends"). I stopped reading there.
Flawed reporting about WhatsApp theguardian.com
351 points by Calvin02  4 days ago   122 comments top 16
te_chris 4 days ago 5 replies      
For those who don't know, newspapers often have a Readers' editor whose job it is to criticise and be the voice of the readers inside the newspaper. In this case, this is written by that person for the Guardian after what would appear to be a thorough investigation of the matter.

There are a lot of people here saying this should have happened faster; they're likely right. But given how extensive and thorough this is, it is more likely an example of how old-school editorial rigour just takes a lot of time.

gtf21 4 days ago 5 replies      
I think this is a really thorough mea culpa which is quite impressive, given the frequent failure of other newspapers to publish a prominent apology when they have got things far more wrong than this.
idlewords 4 days ago 0 replies      
It astonishes me that the Readers' Editor, someone with long experience in journalism, thinks retracting this story would mean taking it offline as if it never happened.

Frankly, I think this is a weak response. There is nothing in this investigation they could not have cleared up in January; instead, they dawdled and now they equivocate.

acchow 4 days ago 2 replies      
Pretty much every single person I know outside of the Bay Area and not working in tech believes that the government and the corresponding corporations running the service are reading all of their messages on:

* WhatsApp
* FB Messenger
* iMessage
* Hangouts

They also all believe that the police can look at their Facebook posts because they have special access.

This is precisely why there was minimal reaction to the Snowden revelations - what revelations?

ngrilly 4 days ago 1 reply      
The linked Guardian's article doesn't really explain why they were wrong. This article by Moxie, designer of the Signal Protocol, is great: https://whispersystems.org/blog/there-is-no-whatsapp-backdoo....
lern_too_spel 4 days ago 1 reply      
Maybe one day they'll issue a correction for their PRISM reporting too. The solution is exactly the same as the solution in this case: the editors should demand that the journalists verify their claims with experts.
chicob 4 days ago 0 replies      
Speaking of security, the new possibility of a Google Drive backup for WhatsApp messages and files has been quite overlooked imo.

This backup is not e2ee, which means that if the other party is backing up data in Google Drive, then at least part of your WhatsApp history is not e2ee. Yes, it might be encrypted within Google Drive by whatever secure methods Google chooses, but not by you.

EternalData 4 days ago 0 replies      
Good on them for admitting to all these flaws. I'm especially interested in the fact that government officials seem to be citing articles to push people to certain communication channels.
jancsika 3 days ago 0 replies      
From the open letter:

"People believe that you perform due diligence on matters critical to their lives and safety."

And at the bottom of the open letter many security experts have signed in support. That is, "signed" in the colloquial sense.

Small digression-- let's say a person tasked with reviewing a story in the Guardian is not an expert in security. They would really love some way to start with one or two security experts they know and trust and "fan out" to other experts based on their relationships.

Is there a quick and easy way for the journalist to do that by looking at the names of cryptographers listed at the bottom of a webpage?

Also: can someone explain what "due diligence" means? Is it the expectation here that a journalist not only report what would look reasonable to a non-journalist reader, but also use their considerable skill to ensure that they present their readers with verifiable facts, to the best of their ability? Even if it takes a considerable amount of time and effort on their part? Even if verifying the evidence relies on clunky, cumbersome tools that no one wants to spend time using?

omnifischer 2 days ago 0 replies      
Sad that the writer (calling herself an investigative journalist - https://twitter.com/manisha_bot ) of the flawed article does not even mention the amended article on her Twitter account.
vzaliva 4 days ago 0 replies      
It is funny how, while reading this article establishing the Guardian's screw-up, I was nevertheless asked twice to give them money.
danjoc 4 days ago 4 replies      
This is not the original Title. Submitter is editorializing via title. Please don't do that on Hacker News.
majewsky 4 days ago 2 replies      
Hint: if the site is empty for you, remove "amp." from the domain name.
eehee 4 days ago 3 replies      
This entire conflict just seems completely absurd to me - why on earth are the 72 "experts" who signed the open letter so quick to trust WhatsApp without access to the source code?
Why We Chose Typescript redditblog.com
376 points by darwhy  2 days ago   367 comments top 30
TheAceOfHearts 2 days ago 7 replies      
If you want runtime assertions with flow you can use flow-runtime [0].

Babylon merged TypeScript support yesterday [1]. This means that in the future it should be easier to setup Babel with TypeScript.

I agree with the decision to go with TypeScript. It has drastically better community support. Most third-party components won't have flow annotations. Flow would've been a lot more successful from the start if it had started out with DefinitelyTyped support. Heck, even now I'm still wondering why they don't do that.

[0] https://codemix.github.io/flow-runtime/#/

[1] https://github.com/babel/babylon/pull/523#issuecomment-31172...
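For readers unfamiliar with what "runtime assertions" buy you here: static types vanish at runtime, so a library like flow-runtime generates checks that run on live values. A hand-rolled TypeScript equivalent (a sketch, not flow-runtime's actual API) might look like:

```typescript
// Minimal hand-rolled runtime check. TypeScript's types are erased at
// compile time, so data from JSON.parse, fetch, etc. must be validated
// against live values by hand (or by a library that generates checks).
function assertNumber(x: unknown): number {
  if (typeof x !== "number") {
    throw new TypeError(`expected number, got ${typeof x}`);
  }
  return x;
}

const ok = assertNumber(JSON.parse("42")); // passes: 42 is a number
// assertNumber(JSON.parse('"42"')) would throw: the string survives parsing.
```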

martin_drapeau 1 day ago 13 replies      
Is Typescript necessary for front-end Javascript?

In the many years I've done front-end work with JavaScript, type-related bugs were very, very rare. The textbook logic for why to use strong types makes absolute sense. Yet in the real world, I've had to fight with logic, DOM, UX, framework-incomprehension and other types of bugs - not types. Type issues were anomalies among bugs. When they came up, they were the easiest to fix.

Am I alone here? Does anyone in the community have concrete examples of type-related bugs that cost so much time that they justified using something like TypeScript? Can anyone quantify that?
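For what it's worth, the canonical example people reach for is implicit coercion: values read from the DOM are strings, and `+` silently concatenates. The snippet below uses `any` to simulate untyped JavaScript; with a real `number` annotation the compiler rejects the assignment:

```typescript
// Values from <input> elements are strings; `any` here simulates plain JS.
const priceFromForm: any = "19";
const quantity = 2;

// Silent concatenation instead of addition: "192", not 21.
const total = priceFromForm + quantity;

// With an annotation, TypeScript catches the root cause at compile time:
// const price: number = "19"; // error: Type 'string' is not assignable to 'number'
```

Whether bugs of this shape are frequent enough to justify a type system is exactly the question the comment above raises; this only shows what such a bug looks like.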

tqkxzugoaupvwqr 2 days ago 10 replies      
> Using a typed language in our frontend has already paid dividends: our code has fewer type-related bugs, we are more confident making large refactors, and our inline documentation is focused around concepts instead of object shapes and function parameters.

Sounds like they learned from their mistake of using Python on the server-side. Dynamically typed languages for large code bases are terrible.

sotojuan 2 days ago 3 replies      
It's hard to take these kinds of posts seriously when Reddit's mobile site is one of the worst-performing SPAs I've ever used.
jondubois 1 day ago 5 replies      
I've worked with TypeScript on and off for a couple of years now. I don't like it.

The compilation step is a major pain. Even after using it for several months straight, I feel like I'm in a constant battle with the compiler. It's slow, and difficult/annoying configuration problems keep coming up from time to time. It slows down my debug cycle and the compilation delay makes me lose my train of thought. I used to love using console.log() to quickly test an assumption in JavaScript; you cannot do this with TypeScript (it's not practical given the 5-to-20-second compile time); you have to use the debugger every time and step through stuff (even when you have a very good idea about which specific variable you want to check). It's extremely cumbersome.

I have gone back and forth from dynamically typed languages to statically typed languages many times for years and I've spoken with engineers who used to be Java developers for many years, then switched to Javascript, then TypeScript and they shared the same thoughts as I did. TypeScript is slow and restrictive in a way that is unnecessary. It's got Microsoft all over it.

Also it forces all developers to use bulky commercial IDEs like WebStorm because you rely more on code completion to help you figure out the right types. You can say goodbye to Atom, Sublime and the rest... Atom's TypeScript plugin is not good enough unfortunately.

At my previous work, even developers who said that they liked TypeScript secretly didn't like it because they used the 'any' type Everywhere.

I wonder if the people who are making this decision have actually tried TypeScript themselves for any decent amount of time on a decent sized project. I don't think they know what they're getting into.

I decided not to renew a lucrative contract at a finance firm as a front-end developer in part because I did not enjoy using TypeScript every day.

opvasger 2 days ago 2 replies      
I'm coming at this from the Elm camp, and my first impression (and largely why I think languages like Elm are promising) is that languages implemented as supersets of other languages have the potential to be as bad as their subsets.

The example that I was given was C++ and C, but I think TypeScript with its gradual-typing approach is forced to remain potentially as bad as JavaScript itself - that is, if you're feeling weak and want to "get shit done", you can bypass all the goodness that TypeScript undeniably offers you.

For a language like Elm, the type-system is invariably gonna have your back - a value-proposition I think means a lot more in practice than some self-proclaimed pragmatists realize :)

tkubacki 2 days ago 3 replies      
Wondering why Dart was not considered. Large apps are written in it (AdSense UI, AdWords UI, Google CRM). It has good tooling (IntelliJ plugin, WebStorm). It's fairly easy to pick up - Java/C#-like syntax. It's harder to shoot yourself in the foot (it's not a JS superset). It has strong mode. It has superb tooling and a package manager (dartanalyzer, pub).
bjterry 2 days ago 0 replies      
I recently moved to a company using Flow from one using TypeScript, and it seems like the tooling ecosystem for TypeScript is way better than flow. The emacs plugin for flow in particular is practically nonexistent, and doesn't even provide proper syntax highlighting. TypeScript's by contrast is amazing.
maxxxxx 2 days ago 5 replies      
I recently had to do some Node.js scripting. My JavaScript experience was minimal and having worked almost exclusively with typed languages I considered Typescript. I decided against it eventually because I figured you need to know JavaScript first to understand the JavaScript ecosystem even if you are writing your own code in TypeScript.

Does this make sense or is it feasible to skip learning JavaScript and jump directly to TypeScript?

spiderfarmer 1 day ago 0 replies      
Reddit should stop promoting their app on every freaking action you do on their website. I refuse to install because of it. Just build a better website.
flavio81 1 day ago 2 replies      
Javascript is a weakly typed language and no superset like Typescript or Flow will solve this problem, just mitigate it.

However, on the other hand, I think that a good, experienced developer has no problem with that. The bugs that the experienced developer "fears" have nothing to do with type errors, which in the end are rather easy to solve...

erokar 1 day ago 1 reply      
The intellisense you get with TS is quite nice, as is improved refactoring. I also think typing in function signatures is a good thing and increases comprehension.

But the type-safety you get with these kinds of static languages only catches a few trivial bugs. There are also some situations where TS complains where it shouldn't; for instance, it doesn't handle JS's built-in reduce function very well.

In the end, it's a trade-off between the benefits and added costs. It is in no way a given that adding static typing to your JS project will be beneficial when all factors are considered.
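One commonly cited reduce friction point (a sketch of what the commenter may mean, assuming the usual accumulator-inference complaint): with a bare `{}` initial value TypeScript infers the accumulator type as `{}`, so indexing into it is rejected until you supply reduce's type parameter explicitly:

```typescript
const words = ["a", "b", "a"];

// With a bare `{}` initial value, TS would infer the accumulator as `{}`
// and reject `acc[w]`. Supplying reduce's type parameter fixes the inference:
const counts = words.reduce<Record<string, number>>((acc, w) => {
  acc[w] = (acc[w] ?? 0) + 1;
  return acc;
}, {});
// counts is { a: 2, b: 1 }
```

The behavior is correct once annotated; the complaint is that plain JS never needed the annotation at all.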

dom96 2 days ago 1 reply      
As much as I am disappointed that Nim wasn't chosen, I am impressed that it was mentioned at all. Nim's JS backend is still rather young, and tooling is definitely lacking. But you can make some pretty cool things with it[1].

1 - https://picheta.me/snake/

paulborza 2 days ago 2 replies      
TypeScript is an excellent language.
bayesian_horse 1 day ago 3 replies      
Can someone please _prove_ that types make development somehow safer and more productive?

I for example believe that readability matters, and TypeScript is not readable compared to Python or even CoffeeScript.

And I really don't like how all the cool new languages lack significant whitespace.

You can probably tell I'm a Pythonista. A Pythonista always pays his technical debt.

z3t4 1 day ago 0 replies      
upvote should be a global with capital letters. but writing it like that might be a leaky abstraction of the underlying database storage. should the web dev really have to know that some variable is represented by a tinyint, later converted to a float, then "optimized" to a 32-bit int!? why make it into a failure point when there is so little gain in performance and it makes little sense in a high-level, loosely typed language such as javascript. stop writing javascript like it's java!
mdip 2 days ago 0 replies      
Gotta comment ... I dove into TypeScript about a year ago and dropped it. I saw the value but because of a large number of libraries and custom components of my own, switching purely to TypeScript wasn't easy and I was in a hurry.

Fast forward several months and I picked it up again. I've now been writing everything that I would have done in JS in TypeScript and have built several applications using both Angular and React entirely in TypeScript. I've also sold my teammates on switching to the language.

When I was (stuck) writing JavaScript, the frustration factor was high for me. I'd get nebulous errors[0], hunt around the line it referenced, swear a little, and trace the code back to the cause. I would wildly estimate that one out of ten attempts at blindly running my code would succeed[1]. Even unit tests, which had a higher degree of success since they were testing much less, still had a much higher fall-over[2] rate than I get in typed languages that I enjoy. The addition of types, which adds a little overhead, flipped that over. I am still surprised every time I refresh a page that uses code I'm modifying and it loads. The reduction in time spent debugging (and swearing) makes me enjoy the language more every day that I use it. It's even left me longing, when I am writing code in other languages (mostly Java/C# these days), for features I have come to really enjoy (union types, intersection types and, to a lesser extent, the duck-typing nature of the language[3]).

Since crapping on any language, or a feature/lack of feature of a language, tends to become a religious war fought with verbal violence, allow me to admit a few points: Traditionally, I avoided JavaScript and jobs related to it. Personally, I hate the language. This means I've spent considerably less time researching all of the best practices/techniques for surviving those cases where I have to write JavaScript. I started in C and Pascal and prior to a few months ago spent 99% of my time in typed languages.

I am an advocate for unit testing[4], but I find test-driven development requires me to work backward and it's less productive for me. I'd imagine that if I went all in with TDD, I might see fewer of these problems, but many of the best practices for JS development are best practices in the languages I am more proficient in, and despite following these practices, JS's design led to them being less effective at reducing bug frequency. Yes, I could just be yelling 'get off my lawn' because I'm unwilling to change[5].

But I've also worked closely with highly-skilled JS developers who could rapidly produce incredible things as a result of its flexible, dynamic typing. Incredibly, though, one of those 'huge JavaScript advocates' was the one who told me to give TypeScript a chance late last year. Though he would always fight me on the "dynamic vs. static" thing, his argument was that TypeScript's type system was light-weight enough to keep out of his way while strict enough to lower the frequency of self-inflicted foot bullet-holes (paraphrased). Really, though, ... two nulls, asinine boolean implicit conversions necessitating code like double-bang, and === / !== operators[6] should be enough.
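The duck-typing trade-off mentioned above can be sketched in a few lines: TypeScript's structural typing accepts any value that has at least the required properties, and the extras ride along silently:

```typescript
interface Point2D { x: number; y: number; }

// Hypothetical value with an extra field.
const measured = { x: 1, y: 2, z: 3 };

// Accepted: `measured` has everything Point2D requires. The extra `z`
// comes along silently, which is where subtle bugs can hide
// (e.g. serializing `p` emits a field you never declared).
const p: Point2D = measured;
```

(TypeScript does flag excess properties when an object literal is assigned directly, but not when, as here, it arrives through an intermediate variable.)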

[0] Often depending on whatever framework I was using, but I've rarely found one that returns an error that results in a really obvious 'oh, I know what I did to cause that'

[1] I like to check that a component renders visually appropriately and often do a quick check before I've written all of the required unit/integration tests to make sure it's rendering accurately (right data/right result).

[2] As in, something fails badly enough to stop execution rather than just failing on an assert for an incorrect result.

[3] It's a love-hate thing for me -- the result is being able to reduce the boilerplate of making mostly-compatible types interact, but the down-side is that the compiler giving a pass to "A=B" when "A" has all the properties of "B" results in some subtle bugs that have already bitten me more than once.

[4] Though I don't buy that TDD (either before writing the code or after) solves most of the issues of dynamic typing. I've had more than a few tests fail because of a type-related issue...in the test.

[5] Except that I love learning new languages and 'keeping up' and have found that as I've aged, I can pick up new languages far more quickly than I could in my early 20s... [plug]RUST![/plug] I'm also not terribly old, nor terribly sensitive about being called an oldster.

[6] I don't recall who, but someone was once reading code out-loud and said "if action fuckin' equals 'ADD' and payload.Length doesn't fuckin' equal zero". Adding in the fuckin' every time he encountered the "really, really [not] equals" operator. So that is how I mentally read those. I'll never forgive him (sorry for the swears ... and doubly sorry if you end up reading code like this as a result).
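
For anyone who hasn't been bitten by these yet, the quirks complained about above -- the two nulls, implicit boolean conversion, and the loose equality operator -- are easy to demonstrate; a minimal sketch in plain JavaScript (nothing here is project-specific):

```javascript
// The "two nulls": null and undefined are distinct values,
// which loose equality (==) conflates and strict equality (===) does not.
const a = { x: null };
console.log(a.x, a.y);        // null undefined (a.y was never set)
console.log(a.x == a.y);      // true  -- == treats them as equal
console.log(a.x === a.y);     // false -- === keeps them distinct

// Implicit boolean conversion: 0, '', NaN, null, and undefined are all
// falsy, so code uses the double-bang to force a real boolean.
console.log(!!'');            // false
console.log(!!'0');           // true (any non-empty string is truthy)
console.log(!!0);             // false

// Classic loose-equality surprises that === avoids:
console.log(0 == '');         // true  ('' coerces to 0)
console.log(0 === '');        // false
```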

coldtea 1 day ago 0 replies      
>Should work on both the client and the server. SEO is very important to Reddit, so lack of universal rendering is a deal breaker.

This sounds like a total non-sequitur.

vog 1 day ago 2 replies      
From the article:

> Typescript also came with a lot of social proof and better assurances about its longevity. There are several large projects using Typescript (examples include VSCode, Rxjs, Angular, and Typescript itself),

While I agree with the sentiment, I don't understand why they include "Typescript itself" in this list. Isn't that a circular argument?

dcgudeman 2 days ago 3 replies      
I would like to know if they are moving towards a SPA architecture and, if so, what framework they will be using.
slimsag 2 days ago 3 replies      
> Should work on both the client and the server. SEO is very important to Reddit, so lack of universal rendering is a deal breaker.

Are client-side-only JavaScript applications not handled well by the likes of Google et al. today? I was under the impression that they run a full JS interpreter.
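
For context on what "universal rendering" buys you: the same component code produces markup on the server (so crawlers that don't execute JS still see real content in the response body) and renders again on the client. A dependency-free toy sketch -- the `Post` component and `renderToString` helper below are made up for illustration; in React the server half would be `ReactDOMServer.renderToString`:

```javascript
// A "component" is just a pure function from props to a tree description.
const Post = ({ title, score }) => ({
  tag: 'article',
  children: [`${title} (${score} points)`],
});

// Server half: serialize the tree to an HTML string so the markup arrives
// in the initial response. (A real implementation would also escape text.)
function renderToString(node) {
  if (typeof node === 'string') return node;
  const inner = node.children.map(renderToString).join('');
  return `<${node.tag}>${inner}</${node.tag}>`;
}

// The client would run the same Post function to take over interactivity.
const html = renderToString(Post({ title: 'Hello HN', score: 42 }));
console.log(html); // <article>Hello HN (42 points)</article>
```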

macmac 2 days ago 1 reply      
I would have more confidence in the list if they spelled ClojureScript correctly.
noway421 2 days ago 0 replies      
The screenshot in the header is strange. Why would they reimplement an arrayToDict function instead of using lodash's _.indexBy?
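
For reference, it's a one-liner either way; a sketch of a hand-rolled arrayToDict next to the lodash call it replaces (the lodash function is `_.indexBy` in 3.x, renamed `_.keyBy` in 4.x -- the hand-rolled version avoids the dependency):

```javascript
// Hand-rolled: index an array of objects by one of its key fields.
function arrayToDict(arr, key) {
  return arr.reduce((dict, item) => {
    dict[item[key]] = item;
    return dict;
  }, {});
}

const users = [
  { id: 'a1', name: 'alice' },
  { id: 'b2', name: 'bob' },
];

console.log(arrayToDict(users, 'id'));
// { a1: { id: 'a1', name: 'alice' }, b2: { id: 'b2', name: 'bob' } }

// With lodash, the equivalent would be _.indexBy(users, 'id') in 3.x
// or _.keyBy(users, 'id') in 4.x.
```
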
emilsedgh 2 days ago 2 replies      
I personally skipped CoffeeScript, JavaScript generators, and Angular.

And none of them passed the test of time.

So I think I made the right call by not adopting them super early.

I think I'm going to do the same with Typescript. Hopefully static typing will be adopted by ES.Next and then I'll port my programs to it.

iamleppert 2 days ago 0 replies      
From what I know of the Reddit community and their feedback on anything any of these "new devs" have done, it's not going to matter how pretty, well-typed or "scalable" (whatever that means) the code is.

To that community, all the new product work is just awful. I feel sorry for the devs.

rmuratov 2 days ago 2 replies      
Does anyone know of existing open-source projects written with Flow?
amagumori 1 day ago 2 replies      
ok. i get why you chose typescript.

however, why did you choose this weird cursive-ish monospace font? and...where can i get it?

sushisource 2 days ago 5 replies      
Lord that font in the header image is disgusting. Who would want a faux-cursive programming font?
lapsock 2 days ago 0 replies      
But why are you redesigning the site? It looks fine as it is. Let me guess: some designer told you to redesign it in order to justify his paycheck.
revelation 2 days ago 1 reply      
"We picked Typescript because this is what we feel everyone else is using and wow are we late to this party"

Take nothing away from TS, but mobile Reddit is all the proof in the world that no matter the language, the paradigm, or the ecosystem, someone can still use it to turn out a horrible product.

Judges refuse to order fix for court software that put people in jail by mistake arstechnica.com
338 points by kyleblarson  3 days ago   45 comments top 11
wonderwonder 3 days ago 4 replies      
Obviously a case of not enough wealthy people being affected. If the poor are falsely imprisoned, it's just business as usual, and they lack the clout to hire an aggressive, talented attorney. This will likely rectify itself as soon as a substantially wealthy individual is imprisoned improperly.

Just a continuation of the sad state of our legal system where punishment is not so much an issue of guilt but of wealth or more specifically the lack thereof.


rrggrr 3 days ago 1 reply      
>Even if there was standing, the plaintiffs did not establish that they would suffer harm or prejudice in a manner that cannot be corrected on appeal. They also fail to show that they lack an adequate remedy at law, as they may move for correction of erroneous records at any time, the 1st District continued.

Civil and criminal Courts are intentionally blind to the cost, suffering and disruption the process inflicts. It should be obvious that filing an appeal and moving to correct records is expensive at a minimum, and hugely disruptive to people struggling to survive, support families, etc. It may be easier and cheaper to simply do time for alleged offenses, innocent or not.

It's appalling. @wonderwonder's comment is on target.

Overtonwindow 3 days ago 0 replies      
"the public defenders office has filed approximately 2,000 motions informing the court that, due to its reportedly imperfect software, many of its clients have been forced to serve unnecessary jail time, be improperly arrested, or even wrongly registered as sex offenders."

There's the metric. Maybe there's a law firm that's willing to sue on behalf of all of these people. Surely the hassle of that lawsuit would push them to change, and those defendants would have standing.

andrewla 3 days ago 0 replies      
https://news.ycombinator.com/item?id=13069775 was posted as the nature of the problem began to occur, and has much interesting discussion of the underlying problems involved.
downandout 3 days ago 0 replies      
The law does allow for all of these people serving extra days in confinement to seek financial damages for each day. It seems to me like a civil attorney could file boilerplate lawsuits for each of these people, since the underlying facts in each case are nearly identical. Besides the money that both the attorney and his clients would enjoy, that would be by far the fastest route to getting this fixed. It will stay broken until it hits the county in the wallet.
baybal2 3 days ago 1 reply      
In any normal jurisdiction, a prima facie mistrial would've resulted in an automatic disqualification and disbarment of a presiding judge.

America admits no legal liability of court employees over anything except gross miscarriage of justice, and even for such cases Americans have invented insane legal theories that let a few judges walk away from charges ranging from corruption and bribery to selling "freedom for blowjob".

Push for personal liability of judges in mistrials and violations of court protocols.

angry_napkin 3 hours ago 0 replies      
Unit tests, while not the prescription for everything, do tend to be important in some domains.
slang800 3 days ago 0 replies      
Actual arguments used by the judge: https://www.documentcloud.org/documents/3514379-Order-Denyin...

From a brief reading, it seems like the complaint is delays/errors in document entry by clerks (like updates to probation terms or bail postings) and that the search interface isn't connected with other databases.

Just sounds like confusing software, or users that haven't been trained to use it. However, it's not clear if it's causing more or fewer clerical errors than the last system they had.

hvo 3 days ago 0 replies      
How about adding the names of two sitting judges to the list: one appellate judge and one supreme court judge? I am confident the software would be fixed.
WCityMike 3 days ago 0 replies      
As a legal assistant in Illinois, which will start using this software statewide by the end of the year (excluding Cook and some other counties, which adopted other software), this is troubling news.
How I learned to code in my 30s medium.com
408 points by bradcrispin  5 days ago   199 comments top 39
soneca 5 days ago 8 replies      
I started to learn to code last November at 37yo.

Studying about 30 hours a week, in two months I finished the Front End Certificate from freeCodeCamp (I highly recommend the site for starters). Then I decided it was better to build my own projects with the tech I wanted to learn (mostly React), using official documentation and tutorials. This is what I accomplished in around 3 months: www.rodrigo-pontes.glitch.me

Then I started to apply to jobs. After around 4 rejections, last week I started as a Front End Junior Developer (using Ember, actually) at a funded fintech startup with a great learning environment for the tech team.

Very proud of my accomplishment so far, but I know the rough part is only starting.

oblio 5 days ago 2 replies      
Somewhat related, perhaps the most spectacular story of a late coder I've ever heard is that of https://en.m.wikipedia.org/wiki/George_Pruteanu (somewhat controversial Romanian literary critic and politician).

Basically, despite having a major in Romanian literature and spending a lifetime as a literary critic, with almost 0 contact with computers, he decided in his late 40s and early 50s to understand the things behind the internet.

So he picked up on his own: PC usage, internet browsing, PHP and MySQL coding, enough to make his own website and a few apps. That, starting from a point where he could barely use a mouse.

When asked during a TV show how he did it, he replied:

Like I did things for my literary criticism: I read a 1 meter [high stack] of books about the subject.

Every time I need motivation I think about that quote :)

chrisdotcode 4 days ago 4 replies      
I'm sorry, but I can't help but be incredibly cynical and jaded about this, and from reading the comments, nobody seems to have the same sentiment. If this was titled "How I learned to play the piano in my 30s", I don't think anybody would bat an eye: learning an instrument is not like joining some secret cult, and anybody can develop basic music literacy over a year or two. I also do not doubt this man's proficiency, but 30 is not old outside of tech circles. This youth fetishization in tandem with the "everybody's dog should learn to code" meme I think is very short-sighted.

Tech is wildly lucrative, is in current demand, and is not physical labor. That reduces the barrier to entry to anybody who has a laptop and an Internet connection. Honestly, how many people would be so eager to learn to code if you dropped the average tech salary down to $45,000 (matching other professions)? I think far fewer: people seem to want to learn to code to ride the high-pay wave, not for the actual love of code.

Again, let's compare to music. Anybody can go to a guitar store and buy a $200 keyboard. But if I took a 14-week class and afterwards had the gall to call myself a "Music Ninja Rockstar" or some other such nonsense, and started applying to orchestras and bands, I would be called crazy.

Software has eaten the world, and it's here to stay. Increasing general software literacy is no different from saying we should teach everybody how to read (and a good thing). However, throwing each person in a bootcamp telling them "coding is wonderful! you can master it in 5 seconds and make 200k a year!" is no different than holding a similar bootcamp for any other vocation and then wondering why the average plumber can't actually fix your house, but can only use a plunger. I sincerely hope this trend stops. This mindset is broken, and the paradigm is highly unsustainable. Where will we be in 20 years?

brandonmenc 5 days ago 4 replies      
When computers were invented, a lot of the people involved were already adults - plenty in their 40s and above. Before home computers, you didn't get to use a computer until your 20s.

Therefore, the first few waves of programmers included a lot of "already olds."

This is always overlooked as evidence that older people can learn to program.

oweiler 4 days ago 2 replies      
I started learning to code when I was 26, and people told me I was too old and should stay with my shitty job.

Fast forward ten years and I'm a senior software engineer who gives trainings on Spring Boot and microservices and helps companies implement Continuous Delivery and microservice architectures.

You may think I'm gifted, but I'm actually not. I'm a very slow learner and bad at math. I mostly program from 9 to 5 and only work on side projects when I feel like it (which sometimes means not making any commits for months).

But I like what I'm doing and work hard to improve.

projectramo 5 days ago 5 replies      
This is generally a decent article about balancing non-technical skills and exerting effort in learning.

I found it noteworthy that the "hook" in the title is that the person started in (gasp) their 30s. Why should that be noteworthy? Why wouldn't someone start coding in their 30s, 40s or 50s?

Now it is true that starting a new profession late in life may not always make sense because, presumably, you have so little time left that you might as well "ride it out" contributing what you know.

So, yes, it is unusual for a doctor to start learning mathematics in their 40s (though not unheard of: https://en.wikipedia.org/wiki/Endre_Szemer%C3%A9di), but it isn't any stranger to make such a change into computer science than into any other field.

bradcrispin 5 days ago 2 replies      
I once said that "I realize nothing I do in engineering will ever end up on the front page of Hacker News." Feels like a once-in-a-lifetime moment. Thank you
jondubois 4 days ago 9 replies      
I've been programming for 13 years. I started when I was 14 years old and studied software engineering at university. These days, when I take on well-paid contract work, sometimes I find myself working alongside people who only started learning to code at around 25 and never went to university.

It's upsetting for me to think of all the fun I missed out on in my early life because I was learning programming and pushing myself through university and it turns out that it doesn't even get me a higher pay check in the end.

These days, nobody cares that I'm proficient in all of ActionScript 2, ActionScript 3, C/C++, C#, Java, Python, AVR Studio (microcontroller programming), MySQL, Postgres, MongoDB, RethinkDB, PHP, Zend, Kohana, CakePHP, HTML, CSS, Docker, Kubernetes, AWS, JavaScript, Node.js, Backbone, CanJS, Angular 1, Angular 2, Polymer, React, artificial neural networks, decision trees, evolutionary computation, time/space complexity, ADTs, 3D shader programming with OpenGL, 3D transformations with matrices, image processing... I can't even list them all. I could wipe out 95% of these skills from my memory and get paid the same.

It only gives me extra flexibility... Which it turns out I don't need because I only really need two of these languages (C/C++ and JavaScript) and a couple of databases.

analog31 5 days ago 1 reply      
When I was a kid, my mom was teaching high school, and thought that she might get laid off due to declining school enrollment in the rust belt. She took a year of programming courses at a community college. The next year, they asked her to teach the course, which she did.

Most of her students were 30+, many were working in the auto industry, including assembly line workers. At the time, there were a lot of bright people working the lines because it had always been possible to skip college and land a decent middle class job at the car plants. But that was coming to an end.

Her students were taking one year of CS and getting hired into reasonably decent programming jobs.

In fact, I was also interested in programming, and learned it in school. When I went to college, my mom discouraged me from majoring in CS because she literally thought programming was too easy to justify 4 years of classroom training, and she thought that the job market for programmers would quickly saturate.

Let's just say we guessed wrong. ;-)

But at the time, college level CS was still maturing as a discipline. Many of the 4 year colleges didn't have full blown CS major programs. I'm betting it's harder now, but I honestly don't know if programming per se has fundamentally gotten any harder.

Edit: Noting some of the comments, I certainly don't want to disparage the CS degree. After all, I majored in math and physics -- hardly a turn toward practical training. I think these are fields where you have to be interested enough in the subject matter to study it as an end unto itself. Being able to do actual practical work in a so-called real-world setting is always its own beast, no matter what you study.

jarsin 5 days ago 4 replies      
What I always tell people: if you find yourself naturally drawn to it, then you will eventually find some level of success. If you're in it just for the money, you will not stick with it and it probably won't happen.

Same is true for just about most things in life.

This guy found he was naturally drawn to it. End of story.

teekert 4 days ago 1 reply      
I also learned to code after 30. At some point Excel and Origin weren't dealing well with the ever-increasing data sizes in my field (biology). I did a 3-day intro course on Python (2) covering basic Python and some NumPy. Back on the job I immediately switched to Python 3, learned about Jupyter, and was lucky enough to have a job where I could take time to learn (although it doesn't take much time to get back up to Excel/Origin-level data analysis skills with Pandas/Seaborn/Jupyter!).

That combination is still gold for me although bioinformatics is forcing me into VSCode/Bash/Git territory more and more. I can recommend anyone wanting to do data analysis to start with the Jupyter/Python/Pandas/Seaborn combo, the notebook just makes it very easy to write small code snippets at a time, test them and move on. Writing markdown instructions and introductions/conclusions in the document itself help you to make highly readable reports that make it easy to reproduce what you did years ago.

colmvp 5 days ago 1 reply      
> Immersion means 100% focus. If possible, no friends, no drinking, no TV, just reading and writing code. If you take five minutes off to read the news, be aware you are breaking the mental state of immersion. Stay focused, be patient, your mind will adapt. Eliminate all distractions, of which you may find doubt to be the loudest. Immersion is the difference between success and failure.

Certainly, I think deep work requires full concentration. So when in learning mode, I find that keeping focus, instead of going to a website to read news or checking e-mail/messages, is incredibly important in maximizing the incremental process of grasping concepts.

That being said, whereas the author seems to prefer taking a few months to go deep into it, I prefer to immerse myself over a long period of time by learning and practicing a few hours per day (just like an instrument), letting my mind stew in the knowledge during diffuse thinking periods, and then come back to it the next day.

AndyNemmity 5 days ago 1 reply      
I'm 36 and learning how to be a real programmer. Was a Linux Admin, and an architect for my career. Did presales, and became an expert at a lot of different roles within the field.

Never was truly a developer, and decided I wanted to accept a job as one. I've programmed in the past, how hard can it be?

Wow, it's been enlightening. Really hard. I thought it would be straightforward since I've scripted quite a bit in Perl in the past, but being a developer is much more than writing a few scripts to automate a task.

I'm a few months in now, and I am still quite a bit slower than all my colleagues, and the main language I'm working in has already changed, moving from Python to Go.

Even right now, I'm stuck on an issue around pointers and data structures that feels like it should be easy, and I'm just not getting it.

All you can do is keep confidence up, and keep at it. Immersing in it, and knowing that irrational levels of effort will lead to results.

I thought it would be easier though :)

makmanalp 5 days ago 0 replies      
Every time I see stuff like this I think of Grandma Moses, an accomplished artist who started painting at 78: https://en.wikipedia.org/wiki/Grandma_Moses
alexee 5 days ago 2 replies      
My father is 59 and started to learn programming half a year ago. So far I have been giving him algorithmic tasks to learn basic language constructs; he is now comfortable with basic Java and is able to solve most easy problems from programming contests. Any idea where to go from here? I don't think solving more difficult problems (like those involving algorithms or creative thinking) would make sense at this point. I tried to give him a simple GUI project (tic-tac-toe in Swing); this kind of worked with lots of my help, but of course it was badly designed with model and view mixed, and he is unable to understand design pattern concepts at this point.
paul7986 5 days ago 1 reply      
At 31 I took my savings for my house, quit my robotic customer service job and started a startup. I worked on my 1st startup for three years and along the way taught myself front end development and design. Which I now do for a living.

I say do a startup, and if it fails, like 80 to 90% do, you've gained an in-demand skill that you can use to make a nice living.

partycoder 5 days ago 1 reply      
"Learning to code" is somewhat vague.

The "Sorites paradox" is something like: how many grains of sand form a heap? if you remove or add one, is it still a heap?

So, what exactly makes you a programmer? That varies a lot depending on who you ask. Someone said a programmer should be able to detect and report a bug to a hardware manufacturer. Others say that "learning" (partially, because most programmers don't know every single aspect of a programming language) a general-purpose or Turing-complete language makes you a programmer.

I define an "X programmer", where X is backend, frontend, data, whatever, as someone who can not only implement a feature but do it through understanding rather than through a heuristic of trial and error or reusing code. Also, a person that is able to troubleshoot what is going on if one of the underlying systems is not working as expected.

sonabinu 4 days ago 0 replies      
I started in my 30s after an earlier stint in high school. It was a real struggle. I work in a software engineering role now with a focus on data science. My stats and math skills have given me an advantage, but I still feel like a rookie in many ways. It is important for more of us who transition to SW careers to speak about our struggles and techniques for hanging in there. It will give confidence to those who feel alone as they try to find their footing.
hamersmith 4 days ago 0 replies      
Going from not working in the industry to leading a team of developers in just a few years is extremely impressive. I have over a decade of experience as a developer and have not yet made it to that kind of lead position. Was this because your technical skills were superior to your peers', or because you possessed additional soft skills? If so, what advice would you give for moving into Lead Developer/Engineering Manager roles?
ptr_void 5 days ago 0 replies      
As a student trying to make sense of the job space and prospects, there are just too many statements posted on the internet that seem to contradict each other.
dzink 5 days ago 1 reply      
You need more stories like this to show people who wouldn't normally consider CS as a viable, lucrative path to a second career. Areas with high unemployment and people in dwindling old industries may get a second wind in life if they tried his approach. A big change like this also requires multiple exposures to the currently much easier to reach CS education as a possible solution, so I hope more people produce accessible content like this.
jordache 5 days ago 1 reply      
Is a full stack person still realistic with today's web technologies?

I mean, to build up an expert-level skillset, you'd have to really dedicate yourself to learning the particularities of not just the languages but also their runtime environments.

Unless you have no life and only sleep, eat, and code, or are super intelligent and able to absorb and stay current with everything.....

Other than that, I just don't see the full stack mentality working

cafard 4 days ago 0 replies      
I learned to code at 18. I did not fall in love with programming: this owed at least in part to Fortran IV, punch cards, and a Burroughs mainframe that was often under maintenance. But I coded a craps game simulation, and passed.

I relearned to code at 31 or so. There was data over here that I needed in a different format over there, and didn't care to retype. I taught myself some minicomputer assembler from the instruction set reference. At that same job, I learned to write macros in the OS's command-line interpreter. I found that I enjoyed programming. And I went back to school.

That was a while ago, long enough that the second or third language that I learned on my own was Perl 4. I would never have called myself a ninja or a rockstar. Yet I have over the years written some very useful code.

cr0sh 5 days ago 1 reply      
A possibly similar tale is the one unfolding with some former Kentucky coal miners:


digi_owl 4 days ago 0 replies      
I have found that the problem I have with learning programming is not the logic of it, but memorizing and internalizing all the functionality provided by the standard lib etc.
kulu2002 4 days ago 0 replies      
I learnt C, C++, shell scripting, and GNU makefile creation directly on a project. When I did my degree I only knew C, just enough to pass. I was directly exposed to writing device drivers for I2C and SPI the very first day, and someone just dumped 1GB of technical junk on my PC which included some APIs of the RTOS I was supposed to work on! But I would say that that was a really steeeep learning curve... I am amazed and surprised today when I look back at where I started 13 years back :)
logingone 4 days ago 0 replies      
What I found recently with someone who switched from another career to programming is not that they struggled with programming so much as that they struggled with the environment. I had the misfortune of working with an ex-lawyer with two years of programming experience. Hell. He also lacked the ability to have any sort of interesting conversation about programming, as he had no background to reference.
skocznymroczny 4 days ago 0 replies      
I read this as "How I learned to code in 30s" and I thought it'd be a parody of "Learn X in Y" tutorials.
kodepareek 4 days ago 0 replies      
I started learning to code when I was 31. Though I did have an engineering degree, I learnt basically nothing after getting into engg school. Spent most of the 4.5 years worrying whether I was smart enough to do this and setting myself up for very dismal results.

Became an advertising copywriter after college and spent 7 years in the copy mines. It was truly a profoundly uninspiring industry (though I continued to doubt myself and never really got to where I wanted to and should have)

Founded a startup with a friend hoping for a fresh start. Took forever to find a developer so in some strange moment of overconfidence (sanity?) I decided I would take a shot at it and started learning Python. Found myself hypnotized by the codeacademy course and knocked it off in 3 days or less.

After a few starter programs, a developer friend came on board as an advisor and told me to pick up Django. In a few months (with him and another good friend doing all the heavy lifting) I got enough into the thing to be able to scrape data, make API calls, and develop the admin interface.

With everything I learnt I found a block of that constant self doubt melting away. I had never felt so capable and in control in my entire life.

Startup wound up, though, and I had to take a job at a design agency. Though I picked up the basics of HTML and CSS there, most of my work was managing clients (aarghh). Left in a few months to work part-time as a writer at this startup.

But within a month of me joining the CTO quit and the company was in massive flux. I just stepped forward and said I would code. The other developers happily took the help and I got my first job as programmer. The next 1.2 years were just full days of writing scripts to automate our workflow and figuring out this danged JS, Node thingy (which I really love now btw)

When this place wound up too, I studied React, and now I have a big 6-month project at this company helping them automate their workflow with an admin app. I'm writing the full-stack code all by myself, which is so exciting and empowering.

Programming is awesome. It's my one piece of advice to anyone who asks me for advice these days. It changed my life completely. From being a constantly depressed and volatile guy, I am now fairly confident and really slow to anger.

Surprise bonus, I have become far more creatively productive after leaving the creative industry and have written a bunch of songs (that I don't hate) and also started learning to play the Piano, something I always wanted to do.

Next up is Algos and Data Structures the next time I have enough saved for a 3 month immersion. I really do think they are super important. Plus picking up a new language. Suggestions welcome.

chirau 5 days ago 1 reply      
So do bootcamps teach data structures and algorithms?
maggotbrain 5 days ago 1 reply      
Reading that makes me glad to be a network engineer. Ethernet, BGP, and OSPF don't change all that much. I am all for learning the latest Python, NetMiko, NAPALM stuff for network automation. This article reads like masochism.
thinkMOAR 4 days ago 0 replies      
The title implies that you are ever 'finished' learning to code. For anybody thinking about starting: this is a lie; it's a never-ending road :)
mattfrommars 4 days ago 0 replies      
I'm facing the problem of finding a mentor. The space I want to succeed in is being able to do anything with the power of Python!
sAbakumoff 4 days ago 1 reply      
2017: code bootcamps produce an army of amateurs that make the internet of shit.
CognacBastard 5 days ago 0 replies      
This is great advice for someone learning to break into the coding world.
lhuser123 5 days ago 0 replies      
Good inspiring story
minademian 4 days ago 0 replies      
contains a lot of real advice. the sharing of experiences and insight into his process makes this piece really great.
commenter1 4 days ago 2 replies      
LordHumungous 5 days ago 0 replies      
It's not that hard jeez
Why Is NumPy Only Now Getting Funded? numfocus.org
355 points by numfocusfnd  5 days ago   107 comments top 25
rjbwork 5 days ago 2 replies      
We have this problem in the .NET world. Accord.NET is written by a brilliant academic and programmer. It's well written, and has a good API, but it is largely the effort of this one dude, with minor contributions from a smattering of other fellows.

Again, it is great in general, but it has bugs and rough edges here or there, and a lot of people don't trust it for production. I wish there was a way for people to be properly compensated for building and maintaining such vital scientific and mathematical computing software.

jordigh 5 days ago 3 replies      
Also a problem for Octave. Remember this?


It also stings a little when people say that it's completely obsolete because Matlab itself is "legacy" and we should all be abandoning the language, Octave included... and yet, even though I like numpy and Python and matplotlib and Julia and R, I still find myself reaching for Octave whenever I need a quick visualisation of some data.

js8 5 days ago 1 reply      
"The problem of sustainability for open source scientific software projects is significant."

Yeah, William Stein can tell these stories too: http://sagemath.blogspot.cz/2015/09/funding-open-source-math...

Radim 4 days ago 2 replies      
Is the idea that "foundational work" (in any field) can be done without "huge sacrifices" widely accepted?

It sounds a tad unrealistic to me, unsupported by history.

It's as if people want to have it both ways: Create innovative SW, but also don't take risks or make sacrifices.

Offer software "for free" (and belligerently oppose even something like GPL), but also get paid (preferably by the government, so the people actually footing the bill have no say in it) and be long-term sustainable.

What's next: get paid, but also don't pay income taxes? Give away project control, but also keep it? :)

All understandable desires, but a little schizophrenic.

Disclaimer: I am a big fan of open source and NumPy in particular. I mentor students and OSS newcomers, I even pay one full-time dev to work only on OSS. It's just that I try not to kid myself about where the time&money comes from and where it goes, and I try not to have random people pay for my hobbies.

Extremely relevant previous HN conversion on this topic:


wodenokoto 4 days ago 1 reply      
The "Every successful science library has the ashes of an academic career in the making" quote has been mentioned several times in the comments, so I thought I would give a plea to everybody who works in academia to help the people who build the foundational tools of your research by citing them in your papers:


sandGorgon 4 days ago 1 reply      
Because developers generally don't know (or don't like) the outreach necessary to fundraise.

For example, in NumPy's case: https://github.com/numpy/numpy.org/issues/9 That's a request in March 2017 to add a donation button to the website. I'm not sure that, even 6 months back, NumPy was legally structured to receive larger funding. I posted a similar comment (with many more replies) in the context of Octave and its funding: https://news.ycombinator.com/item?id=13604564

Tl;Dr Don't ask for donations - instead sell "gratitude-ware"

There are tons of people who WANT to support these projects, but you have to make it easy and accountable to do that. The best example that I usually give is Sidekiq.

@mperham is awesome that way "This is exactly why I disclosed my revenue: people won't know there's a successful path forward unless it's disclosed. I want more OSS developers to follow my lead and build a bright future for themselves based on their awesome software."

In fact, I believe there's a start-up to be done here. "Stripe Atlas for Open Source software"

danjoc 5 days ago 4 replies      
>the entire scientific Python stack was essentially relying on the free-time work of only about 30 people, and no one had funding!

30 people? I remember a time when a certain fruit company would enter a field, literally hire all 30 of those guys, and put them behind closed doors. Then in 2 years they'd dominate the field for the next decade.

Are these guys turning down offers? Or is the fruit company that poorly managed now?

projectramo 5 days ago 1 reply      
Q: Are we conflating two issues?

Is there a difference between the "sole developer problem" and the "lack of funding" problem.

I mean, even if a project finds funding, does it follow that it will attract more talented developers?

One way to distinguish the two issues is to look at for-profit software. In the cases where there is one primary developer, do they find it easy to keep the software going when the person retires?

I ask this because, I think, beyond the very real monetary issue, there is a question of how development works. Do we need one very talented individual who does the lion's share of the lifting?

travisoliphant 5 days ago 1 reply      
Original NumPy author here. I have a lot to say on this topic, given that it has literally consumed my life over the past 20 years. You can go here for some thoughts about some of this: http://technicaldiscovery.blogspot.com/ There are several articles there that relate but in particular http://technicaldiscovery.blogspot.com/2012/10/continuum-and... and http://technicaldiscovery.blogspot.com/2017/02/numfocus-past...

I knew what I was getting into when I wrote NumPy. I knew there was not a clear way to support my family by releasing open source software, and I knew I was risking my academic career.

I did it because I believed in the wider benefit of ideas that can be infinitely shared once created and the need for software infrastructure to be open-source --- especially to empower the brightest minds to create. I did it because others had done it before me and I loved using the tools they created. I hoped I would inspire others to share what they could.

There have been a lot of people who have helped over the years. From employers willing to allow a few hours here and there to go to the project, to community members willing to spend nights and weekends with you making things work, to investors (at Continuum) willing to help you build a business centered on Open Source.

There are many people who are helping to fix the problem. In 2012, I had two ideas as to how to help. Those who know me will not be surprised to learn that I pursued both of them. One was the creation of NumFOCUS that is working as a non-profit to improve things. The second was the creation of Continuum (http://www.continuum.io) to be a company that would work to find a way to pay people to work on Open Source full-time.

We have explored several business models and actually found three that work pretty well for us. One we are growing with investors, a second we are continuing with, and another we are actually in the process of helping others get started with and ramping down on ourselves.

Along the way, I've learned that open source is best described in the business world as "shared R&D". To really take advantage of that R&D you need to participate in it.

We call our group that does that our "Community Innovation" group. We have about 35 people in that group now all building open-source software funded via several mechanisms.

We are looking for people to help us continue this journey of growing a company that contributes significantly to Open Source as part of its mission. If you are interested, contact me --- I am easy to track down via email.

ssivark 5 days ago 0 replies      
There is a long-standing problem in open source software, which is that there is no "business model" associated with funneling resources to people putting significant effort into it. Setting up a consulting business to monetize software creates the perverse incentive to make software harder to use, but there seem to be some examples where this model has worked out reasonably.

Open source projects are typically started by people working in the field, who have a strong urge to scratch some itch. Even if we find a way to find money for them to work full-time, they often don't have the desire to "productize" software, or to create/nurture/govern an organization around bringing together different stakeholders who might be able to use, or contribute to the software. (We got really lucky with Linus+Linux)

Q6T46nT668w6i3m 5 days ago 0 replies      
I believe I'm one of (or near) the top 30 contributors (I've made substantial contributions to all of the aforementioned packages), and I'm funded to write scientific software. I'm extremely fortunate. Unfortunately, like so many things, I suspect it has everything to do with pedigree (e.g. my lab, my institution, my peers, etc.) rather than my (or my coworkers') exact contributions. In fact, I don't know if any of my lab's grants has ever explicitly mentioned our contributions to one of the discussed packages. However, this could change. I'm extremely encouraged, for example, by the comments from new institutions like OpenAI or the Chan Zuckerberg Initiative about the necessity of funding software.
ArneBab 5 days ago 0 replies      
Another project which is easy to overlook: Think about how many scientists use Emacs for most of their development and writing. But there is (to my knowledge) not even a single paid developer working on it.

( http://gnu.org/s/emacs )

mschaef 5 days ago 0 replies      
It's surprising to me that people are surprised by this.

Even setting aside the fact that the people that can do this work are few in number, the vast majority of people need a way to support themselves and their family. If the number of people that have these skills is low, the subset that is both altruistic enough to donate them for a sufficient period of time and personally able to do so must be vanishingly small. (And the negative feedback a lot of OSS maintainers receive doesn't help.)

Companies have the same issue... there has to be a fairly direct connection between an expenditure (paying developers) and a return on that investment. That can be a very difficult argument to make.

mtmail 5 days ago 1 reply      
If your open source project needs funding there is https://opencollective.com/opensource (currently waiting, I'm not affiliated)
santaclaus 5 days ago 1 reply      
It blows my mind that NumPy is just getting funding. How did the Eigen (used in TensorFlow, among other things) folks keep it going?
mattfrommars 5 days ago 2 replies      
Ok, so the problem seems to be a 'lack of maintainers', or could be stretched to contributors. The article later linked to https://www.slideshare.net/NadiaEghbal/consider-the-maintain... which reminded me of a problem I'm facing.

After getting through the basics of "Introduction to Computer Science Using Python" and the forever-pending goal of becoming a "Python Developer": is anyone here who is experienced in Python willing to be my mentor? In return, free Python labor. :)

teekert 4 days ago 1 reply      
I tried at work to get some money to the maintainer of iRAP, an RNA sequencing analysis pipeline we depend on heavily at the moment. But business sees this as wasted money: it's there for free, why not take it? Reading this, I think I'm going to double down on my efforts again. We get so much value out of a huge pile of FOSS software that we should be donating. Meanwhile we have spent piles of money on Matlab for years, and we aren't even allowed to run Linux on our laptops if we wanted to.
largote 5 days ago 0 replies      
Because it's an important tool for Machine Learning, which makes money from that field (of which there's plenty going around right now) flow into it.
jeremynixon 5 days ago 0 replies      
This is still an important problem for numerical compute in other languages. It's a struggle to do data analysis and write machine learning applications in Scala, Java, C++, etc. due to a lack of Numpy / Pandas style ease of use and functionality.
abousara 4 days ago 3 replies      
Each country has its own taxes to sustain fundamentals; the same thing should apply in the software industry. Software engineers might be paid $100k+ while developing on top of open-source languages/frameworks or libraries. That is not fair; it is like riding a starving horse.

A French point of view (after all, France invented VAT...) would suggest introducing a tax on software engineers' salaries (1%?), redistributing the fund to the most used languages/frameworks/libraries, and using a part of it to sustain new projects.

alannallama 5 days ago 0 replies      
This is exactly the problem Open Collective[0] exists to solve. Oftentimes there are people who want to financially support a given open source project, but there is no channel by which to do so. Creating the financial channel is the first step toward a much-needed culture change where the assumption is that you will support the open source you rely on, especially if you're making money off it.

[0] http://www.opencollective.com

prewett 4 days ago 1 reply      
I thought that Enthought sponsored a lot of NumPy development, kind of like a corporate caretaker or something. Is that not the case?
kem 4 days ago 0 replies      
I appreciate this article being posted, and have the utmost respect for NumPy's developers. The discrepancy between how heavily certain important open-source libraries are used and how little support they get is bewildering sometimes.

As I was thinking about it, though, I'm not surprised NumPy hasn't been funded before. The reasons why say a lot about biases in memory.

It wasn't that long ago that the sorts of things NumPy does were seen as fairly niche, and in the domain of statistics or engineering. It's only with relatively recent interest in AI and DL that this has been seen as within the purview of Silicon Valley-comp sci-type business, as opposed to EE or something different. I still am kind of a little disoriented--the other day, looking through our university's course catalog, I realized that certain topics that would have been taught in the stats or psychology departments are now being seen as the territory of comp sci. Statisticians have written excoriations about being treated as if they don't exist, as comp sci blithely barrels forward, reinventing the wheel.

I'm not meaning to take sides with these issues, only pointing out that I think the world we live in was very different not so long ago. It might seem puzzling that NumPy hasn't had more funding, but I think that's in part because what it's most profitably used for now wasn't really seen as much more than academic science fiction not too long ago.

The other part of it too, is that until relatively recently, if you were to do numerical heavy lifting, you'd almost certainly be expected to do that in C/C++ or maybe Fortran. There's a tension in numerical computing, between the performance and expressiveness that's needed, and Python is on one end of that continuum, far from the end that is traditionally associated with complex numerical computing. Sure, you had things like MATLAB with Python in the same functional role, but those were largely seen as teaching tools, or something that engineers did for one-off projects, having learned to do that in school (I still think the use of python in ML derives from the use of Python as a teaching tool in uni).

I'm not trying to knock Python or NumPy or anything, just kind of trying to convey a different perspective, which is that I can remember a time not too long ago when the use of Python in numerics was seen as primarily didactic in nature, or for limited circumscribed applications.

FWIW, it seems to me Python is kind of on a path similar to what happened with javascript, which was treated as kind of an ancillary helper language on the web, until Google started pushing its limits. Then there was browser wars 2.0, and huge efforts put into javascript, and it became a main player in network computing. To me, there's a similar trend with Python: it really kind of existed as a language for prototyping and scripting tasks, and now finds itself in a different role than it has been used for traditionally, and projects in that area are getting an influx of money accordingly. What I see happening is (1) a blossoming diversity of numerical computing communities (Haskell, Python, Julia, Kotlin, Scala, Rust, Go, etc.), due to competition and variation in application scenarios and preferences, (2) a huge influx of resources being put into Python to make it more performant, or (3) people jumping ship from Python into one of those other platforms to get more bang-for-the buck [or (4) some combination of all of these.]

anigbrowl 5 days ago 0 replies      
Because capitalism is an inherently exploitative economic paradigm?
carapace 5 days ago 0 replies      

> "And if youd like to take action to contribute to project sustainability, consider becoming a NumFOCUS member today."


How to read and understand a scientific paper: a guide for non-scientists lse.ac.uk
318 points by ingve  5 days ago   58 comments top 20
neutronicus 5 days ago 1 reply      
I have a little "hack" that I find extremely helpful for getting a sense of specific research fields.

Journal articles, even review papers, are cramped for space and so tend to be very dense. The author suggests methods for doing battle with this density, but I suggest that, before doing that, you search for a class of document that's allowed to be as expansive as the author desires, and whose authors have recently struggled to learn and understand their content, and so tend to be expansive:

PhD Theses

Find out what research group published the research, find out which graduate students have recently graduated from that group, and read their theses (if the author's command of the language of publication isn't what you'd prefer ... find another graduate student). I guarantee you it will function much better as an introduction to what the group does than trying to parse any of their journal publications. In particular, the "draw the experiment" step will often be solved for you, with photographs, at least in the fields where I've done this.

startupdiscuss 5 days ago 3 replies      
This is a good guide, but I will tell you a trick that is faster, easier, and more effective:

read 2 or 3 papers.

All that effort you would put into doing these steps? Instead, read 1 or 2 other papers that the author refers to in the beginning.

Science is a conversation. When you read the other papers, even if you don't understand them at first, you will get a sense of the conversation.

Also, some writers are abysmal, and others are amazingly lucid. Hopefully one of the 3 papers you read will be the lucid one that will help you understand the other 2.

closed 5 days ago 1 reply      
I love how simple and clear this post is.

As a kind of weird aside, if anyone ever emailed me about any of my journal articles, I would 100% respond to them (assuming they weren't a machine). I think most of my colleagues would do the same (except for articles featured in a newspaper, which might garner a lot of weird emails).

lumisota 5 days ago 0 replies      
Keshav's "How to Read a Paper" [1] is a good guide, though perhaps less in the "for non-scientists" camp.

[1] http://ccr.sigcomm.org/online/files/p83-keshavA.pdf

choxi 5 days ago 0 replies      
> As you read, write down every single word that you don't understand. You're going to have to look them all up (yes, every one. I know it's a total pain. But you won't understand the paper if you don't understand the vocabulary. Scientific words have extremely precise meanings).

That's a great tip. I've found that a lot of papers aren't necessarily complicated, but the vocabulary is unfamiliar (but you experience the same sense of confusion with both). It's interesting that we often conflate complexity with unfamiliarity, my reading comprehension abilities improved quite a bit by understanding the difference between the two.

glup 5 days ago 2 replies      
I don't understand the opposition to abstracts: dense means high information content, so if you know the field you can learn a whole lot (like whether you should read this paper or another one).
ChuckMcM 5 days ago 0 replies      
Oh this is awesome, well presented and clear.

A couple of notes, generally if you email the author of a paper they will send you a copy. Scholar.google.com can be used to evaluate the other papers referenced, highly cited ones will be 'core' to the question, less highly cited ones will address some particular aspect of the research.

For any given paper, if it cites one or two seminal papers in the field, you can build a citation cloud to create what is best described as the 'current best thinking on this big question'. You do that by following up the citations and their citations for two or three hops. (kind of like a web crawler).

With something like sci-hub and some work on PDF translation, it should be possible to feed two or three 'seed' papers to an algorithm and have it produce a syllabus for the topic.

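The citation-cloud idea above is essentially a bounded breadth-first crawl over the reference graph. Here is a minimal sketch under stated assumptions: `fetch_references` is a hypothetical callable (backed by whatever citation source you have access to) that maps a paper id to the ids it cites, and papers reached most often are candidates for the "core" of the question.

```python
from collections import Counter, deque

def citation_cloud(seed_ids, fetch_references, max_hops=2):
    """Follow citations from a few seed papers for up to `max_hops`
    hops, counting how often each referenced paper is reached."""
    counts = Counter()
    seen = set(seed_ids)
    frontier = deque((pid, 0) for pid in seed_ids)
    while frontier:
        pid, hop = frontier.popleft()
        if hop >= max_hops:
            continue
        for ref in fetch_references(pid):
            counts[ref] += 1          # repeatedly-cited papers bubble up
            if ref not in seen:
                seen.add(ref)
                frontier.append((ref, hop + 1))
    return counts
```

With a toy graph `{"A": ["B", "C"], "B": ["C"], "C": []}` and seed `"A"`, paper C is counted twice (cited by both A and B), marking it as the most central of the three.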
deorder 5 days ago 0 replies      
I usually first start reading or glance over papers (and non-story books) from the end to the beginning before I read it the other way around. This has the following benefits for me:

- By knowing about the conclusion first I will better understand the motivation and why certain steps are being taken.

- I find out sooner if the paper (or book) is something I am looking for.

I like to read papers unrelated to my field to learn new things to apply. To be honest, some papers still take me a long time to understand because they usually assume you are already researching the topic (for ex. certain terms, symbols and/or variables are not defined).

nonbel 5 days ago 4 replies      
There is a difference between reading and studying a paper. Many papers I just check the abstract for claims of A causes/correlates B (ie it is a "headline" claim), and look for a scatter plot of A vs B (it is missing).

Then I do ctrl-F "blind" (can't find it), ctrl-F "significance" (see p-value with nearby text indicating it has been misinterpreted). Boom, paper done in under a minute. There is really no reason to study such papers unless they have some very specific information you are searching for (like division rate of a certain cell line or something).

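The ctrl-F triage described above is mechanical enough to automate. A minimal sketch (the keyword patterns are my own choices, not an exhaustive or authoritative screening list):

```python
import re

def triage(text):
    """Quick screen of a paper's full text: does it mention blinding,
    and does it lean on 'significance' language? Flags, not verdicts."""
    return {
        "mentions_blinding": bool(re.search(r"\bblind", text, re.I)),
        "mentions_significance": bool(re.search(r"\bsignifican", text, re.I)),
    }
```

A paper flagged `mentions_significance` but not `mentions_blinding` would, under the heuristic above, warrant the closer look at how the p-values are interpreted.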
olsgaard 4 days ago 0 replies      
About identifying "The Big Question", I have a story from my days as a graduate student, where I failed to do so.

I was asked to help on a project that needed to identify humans in an audio stream. During my literature review, I came across the field of "Voice Activity Detection", or VAD, which concerns itself with identifying where in an audio signal a human voice / speech is present (as opposed to what the speech is).

I implemented several algorithms from the literature, tested them on the primary test sets referenced in the papers, and spent a few months on this until I finally asked myself: "What would happen if I gave my algorithm an audio stream of a dog barking?"

The barking was identified as "voice".

As it turns out, the "Big Question" in Voice Activity Detection is not to find human voices (or any voices), but to figure out when to pass on high-fidelity signals from phone calls. So the algorithms tend to only care about audio segments that are background noise and segments that are not background noise.

sn9 4 days ago 1 reply      
>I want to help people become more scientifically literate, so I wrote this guide for how a layperson can approach reading and understanding a scientific research paper. It's appropriate for someone who has no background whatsoever in science or medicine, and based on the assumption that he or she is doing this for the purpose of getting a basic understanding of a paper and deciding whether or not it's a reputable study.

Better advice for making laymen with zero background in science more scientifically literate would be to tell them to read some textbooks.

Later on in the article, she tells people to write down each and every thing you don't understand in an article and look them up later. And this is excellent advice for people with a background equivalent to an advanced undergraduate or higher, but for people with zero background it would be better to read some textbooks and get yourself a foundation.

Honestly, even when I was in grad school in neuroscience, I asked around for advice on reading papers and the surprisingly universal response from other grad students was that it took 2 years to become reliably able to read and evaluate a research paper well. And this is 2 years in a research environment with often weekly reading groups where PIs, postdocs, grad students, and some undergrads got together to dissect some paper. These reading groups provided an environment in which you had regular feedback on your own ability to read papers by seeing all the things those more experienced than you saw and that you missed. A paper that took me 3+ hours of intense study would take a postdoc a good half hour to get more information out of.

I feel like this article makes reading articles well seem a lighter undertaking than it really is. It's really no wonder we see studies misinterpreted so often on the internet, where people Google for 5 minutes and skim an abstract.

kronos29296 5 days ago 0 replies      
As a student who needs to read research articles for my project, this article gave some new ideas on how to approach those long boring and cryptic pieces of text that just take days to understand. Thanks to the person who posted it.
luminati 5 days ago 0 replies      
A couple things I try to do when reading research papers, inspired by these two amazing [b|v]logs:

[1] https://blog.acolyer.org/

[2] https://www.youtube.com/user/keeroyz

I try to paraphrase the paper into an Acolyer-like 'morning paper' blog post on Evernote while mentally directing a 'two minute paper' video on the paper :)

DomreiRoam 4 days ago 0 replies      
I would like to have a digest or an overview written for an IT practitioner. I went to a SC/IT conference and enjoyed the talks, and I noticed 2 things: 1/ You learn new things and new approaches that can bring value to our jobs. 2/ It seems that the research sector discovers things that are already known in the industry.

I think it would be great to have a journal/blog that would build a bridge between industry and academia.

yamaneko 5 days ago 0 replies      
This suggestion by Michael Nielsen is also very good: https://news.ycombinator.com/item?id=666615
pitt1980 5 days ago 0 replies      
What's odd to me is that lots of professors have blogs in which they write quite a bit in plain language that doesn't require an instruction manual in order to be read
syphilis2 5 days ago 4 replies      
Why don't the authors do these 11 steps for us?
amelius 5 days ago 0 replies      
I'd like an answer to: how/where to ask the relevant community a question about a scientific paper.
minademian 4 days ago 0 replies      
This is a great guide. I wish more writing on the Internet had this blend of substance, message, tone, and grit.
apo 5 days ago 8 replies      
Sensible advice overall, but I completely disagree with these:

> Before you begin reading, take note of the authors and their institutional affiliations.


> Beware of questionable journals.

Institutional affiliation and journal imprimatur should have no bearing in science. These are shortcuts for the lazy, and they introduce bias into evaluation of the paper's contents.

Even more than that, dispensing advice along these lines perpetuates the myth that scientific fact is dispensed from on high. If that's the case, just let the experts do the thinking for you and don't bother your pretty little head trying to read scientific papers.

If the author's approach to reading a paper only works by checking for stamps of approval, maybe the approach should be reconsidered.

Stupidly Simple DDoS Protocol (SSDP) Generates 100 Gbps DDoS cloudflare.com
349 points by riqbal  4 days ago   103 comments top 13
majke 3 days ago 2 replies      
Author here. Allow me to extend the post a bit. It turns out that about 2.4% of the IPs that respond to SSDP queries do so from a weird port number! For example:

 IP > UDP, length 95
 IP > UDP, length 249
The first packet is SSDP M-SEARCH query. The second is a response from my printer. Notice - the source port for the response is not 1900 (but the dst port is okay). I'm not sure what the spec has to say about it, but it's pretty weird. What's worse - these responses won't be matched against "sport=1900" DDoS mitigation firewall rule.

I'm not sure what is the moral here. But if you ever see some UDP packets from a weird port, to a weird port - maybe it's this SSDP case.

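For anyone who wants to observe this behavior on their own LAN, here is a minimal sketch of an SSDP M-SEARCH probe. This is not the author's scanning code; the 2-second timeout and the `ssdp:all` search target are my own choices. Note that it records the source port of each reply, since, per the comment above, it isn't always 1900:

```python
import socket

SSDP_ADDR = ("239.255.255.250", 1900)  # standard SSDP multicast group/port

def build_msearch(st="ssdp:all", mx=1):
    """Build the SSDP M-SEARCH datagram (a small HTTP-over-UDP request)."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",
        f"ST: {st}",
        "", "",  # header block terminated by a blank line
    ]
    return "\r\n".join(lines).encode("ascii")

def probe(timeout=2.0):
    """Send one query, collect (address, source port, first bytes) of replies."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), SSDP_ADDR)
    replies = []
    try:
        while True:
            data, (addr, sport) = sock.recvfrom(65535)
            replies.append((addr, sport, data[:64]))  # sport may not be 1900
    except socket.timeout:
        pass
    return replies
```

Running `probe()` on a home network and checking the second element of each tuple is enough to spot devices answering from non-1900 source ports, the case a "sport=1900" mitigation rule would miss.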
hueving 3 days ago 1 reply      
More casualties from BCP 38 failures. This article mentions it but then dilutes the importance of it by suggesting SSDP is a problem. If IP spoofing did not work on the Internet, none of these UDP reflection attacks would work.

A scheme to strong arm the adoption of BCP 38 is key to stopping these attacks from growing. IoT has shown us that expecting device updates to disable these UDP protocols is a lost battle.

upofadown 3 days ago 6 replies      
>It's not a novelty that allowing UDP port 1900 traffic from the Internet to your home printer or such is not a good idea.

How would this even be possible? Home routers have to NAT everything. Normally you have to set up reverse NAT to get ports forwarded to the LAN.

voltagex_ 4 days ago 1 reply      
It will be years and years until those vulnerable miniupnpd versions are updated. Most are in embedded devices which will never see another update.

I'm glad to see miniupnp is still in active development: https://github.com/miniupnp/miniupnp but I can't work out if it's set to be vulnerable by default.

bsder 4 days ago 4 replies      
Why is IP spoofing STILL an issue? Why?
thomasdereyck 3 days ago 0 replies      
Shameless plug: When I read about SSDP a little while ago I was curious to see if I'd encounter it on many networks. As I was also trying to learn Swift/Apple development, I've written two (non-free) little apps for macOS/iOS to monitor SSDP messages:



Ever since creating it and just checking on some networks, I'm surprised by how many devices are actually using it. I probably saw this in Wireshark before as well, but overlooked it because you're never really looking for it. I wonder if many other such protocols are often used but easily missed...

saurik 3 days ago 3 replies      
> Internet service providers should never allow IP spoofing to be performed on their network. IP spoofing is the true root cause of the issue. See the infamous BCP38.

I don't see how it is at all reasonable to shift blame from a protocol that assumes the world can be trusted to the untraceable goal of "every single network in the entire world should only generate trusted data: then the problem would be solved".

> Internet providers should internally collect netflow protocol samples. The netflow is needed to identify the true source of the attack. With netflow it's trivial to answer questions like: "Which of my customers sent 6.4Mpps of traffic to port 1900?". Due to privacy concerns we recommend collecting netflow samples with largest possible sampling value: 1 in 64k packets. This will be sufficient to track DDoS attacks while preserving decent privacy of single customer connections.

OMFG. Do you want deanonymization attacks? Because this is how you get deanonymization attacks :/. The right form of solution here is not to encourage ISPs to log even more of our traffic (a practice I wish were illegal), but to try to kill off UPNP through every form of leverage possible (even if it breaks things).

I'd say this is "so disappointing", but I guess I shouldn't expect much from the company that tried its damndest to argue that nothing of importance was leaked from Cloudbleed even when you could still recover Grindr requests complete with IP addresses that they had managed to leak well after they tried to claim that data had been scrubbed :/.

gbrown_ 3 days ago 3 replies      

 More on the SSDP servers
 
 Since we probed the vulnerable SSDP servers, here are the most common Server header values we received:
 
  104833 Linux/2.4.22-1.2115.nptl UPnP/1.0 miniupnpd/1.0
   77329 System/1.0 UPnP/1.0 IGD/1.0
   66639 TBS/R2 UPnP/1.0 MiniUPnPd/1.2
   12863 Ubuntu/7.10 UPnP/1.0 miniupnpd/1.0
   11544 ASUSTeK UPnP/1.0 MiniUPnPd/1.4
What on earth is internet-facing and running 2.4 Linux kernels?

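A tally like the one quoted above can be reproduced from raw probe replies in a few lines (a sketch; it assumes the replies are already decoded HTTP-style response strings, which is not necessarily how the original survey was done):

```python
from collections import Counter

def tally_server_headers(responses):
    """Count Server header values across SSDP responses, most common first."""
    counts = Counter()
    for raw in responses:
        for line in raw.split("\r\n"):
            if line.lower().startswith("server:"):  # header name is case-insensitive
                counts[line.split(":", 1)[1].strip()] += 1
    return counts.most_common()
```

Feeding it the text of each reply yields exactly the kind of ranked `count / Server-string` table shown in the comment.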
everdayimhustln 4 days ago 2 replies      
Pervasive IoT device deployment without in-the-wild security considerations and rapid updates is likely to add to DDoS bot farms.
ratinacage 2 days ago 0 replies      
I find it fascinating that the packets per second chart resembles an RC circuit's step response. I wonder if there is a good electrical circuit analogy for packets, packet size, and bandwidth.
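For reference, the RC step response being alluded to is first-order exponential saturation. A sketch (`tau` would have to be fitted to the chart; it is not known from the post):

```python
import math

def step_response(t, y_max, tau):
    """First-order step response: y(t) = y_max * (1 - exp(-t / tau)).
    tau is the time at which the output reaches ~63% of its plateau."""
    return y_max * (1.0 - math.exp(-t / tau))
```

If the packets-per-second curve really follows this shape, reading off the time at which traffic hits roughly 63% of its plateau gives the "time constant" of the attack's ramp-up.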
walterkobayashi 3 days ago 1 reply      
Is it possible that SSDP Protocol can be run on a non-standard port (10000 - 65535) ?
IE6 3 days ago 0 replies      
So only slightly faster than GNU yes
dsl 3 days ago 4 replies      
It is unfortunate that CloudFlare shared enough PoC code to weaponize this.

Edit: for the downvoters, this isn't just my opinion, please read https://en.wikipedia.org/wiki/Responsible_disclosure

2D Syntax racket-lang.org
366 points by mr_tyzic  4 days ago   86 comments top 23
glangdale 4 days ago 2 replies      
I just love this. For some reason, our ways of specifying what we want a computer to do (I don't want to say 'language') remain mired in the same territory we started in with punched cards (which I actually got to use as a 10-year-old, which was fun).

I'm not sure if this particular cut is the right idea, but it's good to see experimentation. A bias I have here is that I think these ideas should be rigorously separated from the concept that some sort of WYSIWYG editor will allow non-programmers to code.

nprescott 4 days ago 0 replies      
Very neat, I really appreciate the Racket community's willingness to experiment with syntax.

Looking at the examples reminds me of Julian Noble's "Elegant Finite State Machine" in Forth[0], which takes a different approach to the same problem of creating a language to better specify a problem (in both cases graphically).

[0]: http://galileo.phys.virginia.edu/classes/551.jvn.fall01/fsm....

hyperion2010 4 days ago 1 reply      
It looks like this (e.g. `#2dcond`) implements a way to directly embed other languages in a racket file [0] and avoids the problems encountered when trying to do it using the `#reader` syntax [1] in a source file. Essentially letting you have multiple readtables in a file (probably not nestable though). I could be wrong about this (need to look more carefully when I have more time), but nonetheless could allow direct embedding of completely alternate syntax with the right setup.

[0] https://github.com/racket/2d/blob/master/2d-lib/private/read...[1] https://docs.racket-lang.org/guide/hash-reader.html

ziotom78 3 days ago 0 replies      
I find this extremely interesting! In recent weeks one of my colleagues has had to work on a legacy application developed using National Instruments' LabVIEW [1]. For those who do not know it, it is a visual language for developing interfaces to scientific instruments. Everything is done visually, including ifs and for loops.

My colleague, who has extensive experience with languages like C# and assembly, is extremely frustrated by this way of working. Everything must be done using a mouse, and even the simplest tasks require some thought to implement properly. (Although I must say that he praises LabVIEW's hardware support and its Visual Basic-like ease in developing GUIs.)

I find Racket's 2D syntax to be far more promising than LabView's approach:

1. You can code it using a text editor: unlike LabView, no mouse is required;

2. Only a few classes of statements are affected by this (LabView forces you to do everything visually, even function definitions and mathematical operations);

3. You use this feature only if you think it helps; otherwise, plain text syntax is always available.

As a side note, I would like to give kudos to the Racket developers for this kind of gems. Racket really seems to be a language which makes language experiments easy to implement and try!

[1] http://www.ni.com/en-us/shop/labview.html

jacobparker 4 days ago 3 replies      
Nicely done!

Different but similar joke for C++: http://www.eelis.net/C++/analogliterals.xhtml

kazinator 4 days ago 1 reply      
I have some reservations about how this is designed.

All we need are columns labeled with conditions. We don't need rows. And the matrix can just have true/false/don't-care entries, with code assigned to rows.

Concretely, say we have these conditions:

  (> x y)
  (stringp foo)
  (oddp n)
Right? Okay, so now we can identify the combinations of these and assign them to code like this:

  (> x y)   (stringp foo)   (oddp n)
  #t                        #t         (whatever)
            #t              #t         (other-thing)
  #t        #f                         (etc)
There could be a way to mark some of the rows as having "fall through" behavior. If they match, the expression is evaluated (for its side effects, obviously), but then subsequent rows can still match.

This could be worked into a straightforward S-exp syntax without any diagramming shenanigans:

  (table-cond
    (> x y)   (stringp foo)   (oddp n)
    #t        ()              #t         (let [...] (whatever))
    ()        #t              #t         (other-thing)
    #t        #f              ()         (etc))
Here, don't cares are denoted using (). Something else could be chosen.

A #f entry means "must be explicitly false". A blank column entry is a "don't care"; that condition is not taken into account for that row.
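
Sketching the proposed table-cond in Python rather than Lisp (the name `table_cond`, the first-matching-row semantics, and None as the don't-care marker are my assumptions, not a spec): each row pairs expected truth values with an action, and the first row whose pattern matches the evaluated conditions wins.

```python
def table_cond(conditions, rows, *args):
    """conditions: list of predicates; rows: [(pattern, action), ...].

    Each pattern entry is True, False, or None (don't care). The first
    row whose pattern matches the evaluated conditions is executed.
    """
    actual = [bool(c(*args)) for c in conditions]
    for pattern, action in rows:
        if all(want is None or want == got
               for want, got in zip(pattern, actual)):
            return action(*args)
    raise ValueError("no row matched")

# The three conditions from the comment above, as predicates over (x, y, foo, n).
conditions = [
    lambda x, y, foo, n: x > y,
    lambda x, y, foo, n: isinstance(foo, str),
    lambda x, y, foo, n: n % 2 == 1,
]

# The decision table: (True/False/None per condition, action).
rows = [
    ((True, None, True),  lambda *a: "whatever"),
    ((None, True, True),  lambda *a: "other-thing"),
    ((True, False, None), lambda *a: "etc"),
]
```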

hota_mazi 3 days ago 2 replies      
While I appreciate and respect Racket's willingness to experiment and innovate, I'm a bit puzzled by this.

Tables are neat to read but pretty annoying to write, especially in ASCII form. It's true that code is read much more often than written, but still, I wonder how useful this really is.

sharpercoder 4 days ago 0 replies      
Tables are generally a very good idea for languages, with SpecFlow/Cucumber as the most notable example, but I can see others benefiting greatly as well.
b123400 4 days ago 0 replies      
It reminds me of Funciton, https://esolangs.org/wiki/Funciton
ooqr 4 days ago 0 replies      
Surprisingly exactly what the title makes it sound like. Very cool!
fao_ 4 days ago 2 replies      
The reason I think this will not thrive, as other projects have not thrived, is that (at least initially) it adds to the mental burden. Scanning a cond is almost instantaneous for me. It took me a couple of very long seconds to figure out what was happening in that table, and even though I recognized it as a truth table, it was not easy to read. The information was too spread out on screen to compare easily.

I think that were programming initially presented as such, this would not be a problem, but I expect that many developers are so finely attuned and specialized to text that other methods will not take off purely because of the learning curve.

igravious 3 days ago 0 replies      
Has nobody mentioned the dependently typed language Epigram yet?

Epigram uses a two-dimensional, natural deduction style syntax, with a LaTeX version and an ASCII version. Here are some examples from The Epigram Tutorial:


The natural numbers

The following declaration defines the natural numbers:

       (         !       (          !  ( n : Nat   !
  data !---------! where !----------! ;!-----------!
       ! Nat : * )       !zero : Nat)  !suc n : Nat)
The declaration says that Nat is a type with kind * (i.e., it is a simple type) and two constructors: zero and suc. The constructor suc takes a single Nat argument and returns a Nat. This is equivalent to the Haskell declaration "data Nat = Zero | Suc Nat".

The project lives here: https://code.google.com/archive/p/epigram/ and the last commit on https://github.com/mietek/epigram2 is five years old, which leads me to believe that the project is abandonware.


oh_sigh 4 days ago 2 replies      
What editor do racketeers commonly use? I like the idea, but this seems like a burden for code editing in most normal editors except for perhaps emacs picture mode.
agumonkey 3 days ago 0 replies      
Reminds me (again) of Jonathan Edwards' research. He built several editors with tables as first-class constructs, so that you can avoid boolean nesting and simplify verification / closure of your boolean mappings.

He published a video, sadly in flash http://www.subtext-lang.org/subtext2.html

Here's an article about "schematic tables" http://aigamedev.com/open/review/schematic-table-conditional...

joshlemer 4 days ago 0 replies      
If anyone wants to see some ascii-art in Scala, they just need to look at the Akka Streams-graph api (http://doc.akka.io/docs/akka/current/scala/stream/stream-com...)
vidarh 4 days ago 0 replies      
I love attempts at visual programming. As a kid I used to pore over the then-fashionable ads for CASE (Computer-Aided Software Engineering) tools in DDJ and elsewhere, and imagined them to do far more than they actually did... Also attempts like Amiga Vision [1].

One of the software engineers I like to go a bit fanboy-ish about is Wouter van Oortmerssen, who I first got familiar with because of Amiga E, but who has a number of interesting language experiments [2], one of which includes a visual language named Aardappel [3] that also used to fascinate me.

There are a number of problems with these that have proven incredibly hard to solve (that this Racket example does tolerably well on, probably because it doesn't go very far):

1. Reproduction. Note how the Amiga Vision example is presented as a video - there is not even a simple way of representing a program in screenshots, like what you see for the examples of Aardappel, which at least has a linear, 2D representation. That made Amiga Vision work as a tool, but totally fail as a language. This is even a problem for more conventional languages on the fringe, like APL, which uses extra symbols that most people won't know how to type. The Racket example does much better in that it can be reproduced in normal text easily.

2. Communication. We talk (and write) about code all the time. Turns out it's really hard to effectively communicate about code if you can't read it out loud easily, or if drawing is necessary to communicate the concepts. Ironically, if you can't read the code out easily, it becomes hard for people to visualise it too, even if the original representation is entirely visual. This example does ok in that respect - communicating a grid is on the easier end of the spectrum.

3. Tools. If it needs special tools for you to be effective, it's a non-starter. This Racket example is right on the fringes of that. You could do it, but it might get tedious to draw without tooling (be it macros or more). On the other hand the "tool" you'd need to be effective is limited enough that you could probably implement it as macros for most decent editors.

I spent years experimenting with ways around these, and the "best" I achieved was a few principles to make it easier to design around those constraints:

A visual language needs a concise, readable textual representation. You need to be able to round-trip between the textual representation and whatever visual representation you prefer. This is a severe limitation - it's easy to create a textual representation (I had prototypes serialising to XML; my excuse is it was at the height of the XML hype train; I'm glad I gave that up), but far harder to make one that is readable enough, as people need to be able to use it as a "fallback" when visual tools are unavailable, or in contexts where they don't work (e.g. imagine trying to read diffs on Github while your new language is fringe enough for Github to have no interest in writing custom code to visualise it; which also brings up the issue of ensuring the language can easily be diffed).

To do that in a way people will be willing to work with, I think you need to specify the language down to how comments "attaches" to language constructs, because you'll need to be able to "round-trip" comments between a visual and textual representation reliably.

It also needs to be transparent how the visual representation maps to the textual representation in all other aspects, so that you can pick one or the other and switch between the two reasonably seamlessly, so that you are able to edit the code when you do not have access to the visual tool, without surprises. This makes e.g. storing additional information, such as e.g. allowing manual tweaks to visual layout that'd require lots of state in the textual representation that people can't easily visualise very tricky.

Ideally, a visual tool like this will not be language specific (or programming specific) - one of the challenges we face with visual programming, or even languages like APL that uses extra symbols, is that the communications aspect is hard if we can not e.g. quickly outline a piece of code in an e-mail, for example.

While having a purely textual representation would help with that, it's a crutch. To "fix" that, we need better ways of embedding augmented, not-purely-textual content in text without resorting to images. But that in itself is an incredibly hard problem, to the extent that e.g. vector graphics support in terminals was largely "forgotten" for many years before people started experimenting with it again, and it's still an oddity that you can't depend on being supported.

Note that the one successful example in visually augmenting programming languages over the last 20-30 years, has been a success not by changing the languages, but by working within these constraints and partially extracting visual cues by incremental parsing: syntax highlighting.

I think that is a lesson for visual language experiments - even if you change or design a language with visual programming in mind, it needs to be sort-of like syntax highlighting, in that all the necessary semantic information is there even when tool support is stripped away. We can try to improve the tools, but then we need to lift the entire toolchain starting with basic terminal applications.

[1] https://www.youtube.com/watch?v=u7KIZQzYSls

[2] http://strlen.com/programming-languages/

[3] http://strlen.com/aardappel-language/

kovek 4 days ago 0 replies      
I wonder what nesting of tables would look like? I guess if you had function calls inside, it would look nicer than drawing a table inside a table.
lispm 3 days ago 0 replies      
Adding tabular display to Lisp is useful, IMHO. I was thinking about using something similar, but mostly where code looks like data or specifically for Lisp data. I would not be very interested to use tables for control structures, but would like more support for auto-aligned layout for control structures.
niuzeta 4 days ago 0 replies      
Okay. I love this and everything about this.

> This notation works in two stages: reading, and parsing (just as in Racket in general). The reading stage converts anything that begins with #2d into a parenthesized expression (possibly signaling errors if the ╔ and ═ characters do not line up in the right places).

I'm cracking up, oh my god.

nemoniac 3 days ago 0 replies      
Neat idea. It draws directly from the approach to learning functions of multiple complex arguments in www.htdp.org

Looking forward to 3D syntax for functions of 3 arguments.

GregBuchholz 4 days ago 1 reply      
Yes, and next I'd love to see big parentheses:

  if ( > (+ a b) ) case ( x        ) cond ( ((> y 2) 'quux) )
     ( (- c d)   )      ( (1 'foo) )      ( (t 'error)      )
                        ( (2 'bar) )
                        ( (3 'baz) )
((http://imgur.com/oI0zVm3) if that isn't rendering for your setup)

kv85s 4 days ago 0 replies      
ebbv 4 days ago 2 replies      
This is not a good idea. That code is gross and illegible. There's definitely better ways to handle anything where you think this is the solution.
Effectively Using Matplotlib pbpython.com
366 points by kercker  2 days ago   75 comments top 21
pweissbrod 2 days ago 2 replies      
I needed Jupyter as a medium for information sharing on my team, but matplotlib has too steep a learning curve to expect everyone to adopt it as tribal knowledge, considering it was not a core part of their job. I found a compromise using the wonderful jupyter_pivottablejs library:


Thus allowing you to tweak visualizations on the fly without touching code. My workflow is:

sql -> dataframe -> pivottable

This is not a dig at matplotlib which is undeniably powerful. More like an alternative for those of us that want to convey good-enough flexible interactive visualizations without getting into the minutia with matplotlib
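
For readers without the interactive widget, the aggregation half of that sql -> dataframe -> pivottable workflow can be sketched with plain pandas; jupyter_pivottablejs adds the drag-and-drop UI on top of the same idea. The sales frame here is a made-up stand-in for the SQL result:

```python
import pandas as pd

# Stand-in for the "sql -> dataframe" step; in the workflow above this
# frame would come from pd.read_sql(...) instead.
df = pd.DataFrame({
    "region":  ["east", "east", "west", "west"],
    "product": ["a", "b", "a", "b"],
    "sales":   [100, 150, 200, 50],
})

# The non-interactive equivalent of dropping df into pivottablejs:
# rows = region, columns = product, values = summed sales.
pivot = pd.pivot_table(df, index="region", columns="product",
                       values="sales", aggfunc="sum")
```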

denfromufa 2 days ago 0 replies      
State of visualization in Python by Jake Vanderplas:


NicoJuicy 2 days ago 1 reply      
I'm picking up deep reinforcement learning and documenting my progress with Jupyter notebooks.

Improving my Python along the way (and getting to know numpy and matplotlib) has been a great experience over the last 2 days, although progress seems "slow" (translating formulas, n-armed bandits, etc. to code). My best tip: download cheat sheets for numpy, pandas, matplotlib, Python, ... — they've been great for getting to know the language and libraries for ML.

So this tutorial/information arrives at a very opportune time ;) Thanks!

jofer 2 days ago 0 replies      
Another useful guide is Ben Root's Anatomy of Matplotlib tutorial: https://github.com/WeatherGod/AnatomyOfMatplotlib

I'm a bit biased, as I wrote this particular section (most of the rest is Ben's work), but the plotting method overview is a very useful cheatsheet: http://nbviewer.jupyter.org/github/WeatherGod/AnatomyOfMatpl...

It gives you a compact visual representation of what the main plotting methods do and the differences between them.

Twinklebear 2 days ago 1 reply      
A cool feature I recently learned about of matplotlib is that it supports LaTeX for text rendering [1]. You can go as far as rendering LaTeX math formatting for titles/labels, or just have the plot fonts match your text and/or figure captions so it fits nicely into your paper.

[1] http://matplotlib.org/users/usetex.html
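
Full usetex rendering requires a working LaTeX installation, but matplotlib's built-in mathtext already parses $...$ in titles and labels with no external dependencies. A small sketch rendered off-screen:

```python
import io
import matplotlib
matplotlib.use("Agg")                      # render off-screen; no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])

# Built-in mathtext parses $...$ without an external TeX install. For true
# LaTeX output (fonts matching your paper), one would instead set
# plt.rcParams["text.usetex"] = True, which requires a working TeX toolchain.
ax.set_title(r"$y = x^2$")
ax.set_xlabel(r"$x$")
ax.set_ylabel(r"$\int_0^x 2t\,dt$")

buf = io.BytesIO()
fig.savefig(buf, format="png")
```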

bitL 2 days ago 3 replies      
matplotlib is an example of unnecessarily complex and confusing "organic" API. That's why there is so much resentment to use it; trivial things need non-trivial internal understanding and confusing boilerplates.
jbmorgado 2 days ago 0 replies      
I want to vouch for Matplotlib. I can see it gets a bad reputation compared to shiny new frameworks like plotly, but it's vastly more powerful.

If you are a researcher and you want to publish in B&W (something still very common in fields like Physics and Astrophysics), no other plotting library for Python comes near.

You can choose fill patterns, line patterns, annotate with LaTeX, etc. And, although it can be hard work, you can make your final product look as polished and perfect as you want (and are willing to take the time to achieve). No other library for Python comes near in these respects.

There are simpler tools and it's easy to get a good enough looking plot, but if you want to get that perfect one exactly as you need, there's no way around Matplotlib (at least amongst the well known Python plotting libraries).
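
A rough sketch of the B&W techniques mentioned: distinguish line series with linestyles and markers rather than color, and bars with hatch patterns. The data is made up:

```python
import io
import matplotlib
matplotlib.use("Agg")                      # render off-screen
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Line plots: vary linestyle and marker instead of color.
xs = list(range(6))
for k, (style, marker, label) in enumerate(
        [("-", "o", "model A"), ("--", "s", "model B"), (":", "^", "model C")],
        start=1):
    ax1.plot(xs, [k * x for x in xs], style, marker=marker,
             color="black", label=label)
ax1.legend()

# Bar plots: vary hatch patterns on white bars with black edges.
bars = ax2.bar(["a", "b", "c"], [3, 5, 2], color="white", edgecolor="black")
for bar, hatch in zip(bars, ["//", "..", "xx"]):
    bar.set_hatch(hatch)

buf = io.BytesIO()
fig.savefig(buf, format="png")
```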

rjtavares 2 days ago 5 replies      
One aspect of matplotlib that is often overlooked is its animation capabilities. There should be more animations in data-sciency stuff (there's a reason small gifs spread so easily on the internet).
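
A minimal FuncAnimation sketch along those lines, assuming a scrolling sine wave as the payload; exporting the gif needs pillow installed:

```python
import math
import matplotlib
matplotlib.use("Agg")                      # render off-screen
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
xs = [i / 20 for i in range(101)]          # x from 0.0 to 5.0
(line,) = ax.plot(xs, [0.0] * len(xs))
ax.set_ylim(-1.1, 1.1)

def update(frame):
    """Advance the sine wave by one step; called once per frame."""
    line.set_ydata([math.sin(x + frame / 10) for x in xs])
    return (line,)

ani = FuncAnimation(fig, update, frames=60, interval=50, blit=True)
# ani.save("wave.gif", writer="pillow")    # gif export requires pillow
```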
analog31 1 day ago 0 replies      
MPL is my go-to graphing tool, but admittedly it's probably because I learned it first and now it's a habit. Almost every Python / Jupyter tutorial starts you out with MPL. But there are two things I like about it:

1. Easy to embed MPL graphics in Tkinter GUI's. Granted, my programs are not intended to be professional looking, but if I want to write stand alone software, e.g., for an automated experiment or industrial test, it invariably needs one or two graphs in a dialog.

2. If what you want is a static graph (no interaction), that's what MPL produces. With other packages that I've tried, every graph is its own JavaScript program running in the browser. A Jupyter notebook with dozens of graphs begins to bog down my computer.

maxs 2 days ago 0 replies      
I used matplotlib for a very long time. Now, I suggest using bokeh


I am finding the API a lot cleaner than Matplotlib's, and it is very nice to have the ability to do integrated interactive plots in Jupyter.

edshiro 2 days ago 0 replies      
This looks like a great resource! I am currently picking up deep learning and one of the things that they understandably don't cover much is how to use matplotlib.

But being able to visualise the problem or your solution is so important to build more intuition and become a better wannabe data scientist.

bravura 2 days ago 3 replies      
Biggest matplotlib frustration:

I've spent hours trying to get matplotlib to render on screen on OSX, and followed instructions from countless Stack Overflow answers and blog posts.

I still can't.

imartin2k 2 days ago 1 reply      
I'm currently learning Matplotlib by visualizing HN activity (very early stages), so this comes in very handy. Thanks for sharing.
hyperpallium 2 days ago 0 replies      
How does matplotlib compare with gnuplot?
eyeball 2 days ago 1 reply      
Anyone know of a good tutorial for plotnine? I'm new to graphing in python and am attracted to this because it should crossover to ggplot2 in R (which I'd also like to learn, but doing python for now). Will ggplot2 tutorials for R be enough to get going with plotnine?
NelsonMinar 2 days ago 0 replies      
A fantastic and sorely needed tutorial for orienting matplotlib into modern usage. I really appreciated his description of the matlab-style API vs the object oriented API. Also how to use it with pandas' shortcut methods.
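
The two styles he refers to, side by side with toy data: pyplot's MATLAB-style state machine versus explicit Figure/Axes handles.

```python
import matplotlib
matplotlib.use("Agg")                      # render off-screen
import matplotlib.pyplot as plt

# MATLAB-style state machine: pyplot tracks a "current" figure and axes,
# and every call implicitly targets them.
plt.figure()
plt.plot([1, 2, 3], [1, 4, 9])
plt.title("state-machine API")

# Object-oriented style: hold explicit Figure/Axes handles, which scales
# cleanly to multi-panel figures.
fig, axes = plt.subplots(1, 2)
axes[0].plot([1, 2, 3], [1, 4, 9])
axes[0].set_title("left")
axes[1].bar(["a", "b"], [2, 5])
axes[1].set_title("right")

# pandas shortcut methods slot into the OO style by taking an Axes:
#   df.plot(ax=axes[0])    # hypothetical DataFrame df, for illustration
```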
emilfihlman 2 days ago 0 replies      
Having the graph go beyond a point with the last axis number under it is annoying as hell and everyone who does that should feel bad.
j7ake 1 day ago 0 replies      
Are there any advantages of using matplotlib versus say ggplot2?
kronos29296 2 days ago 0 replies      
Very informative. This clears up a lot of doubts I had, because I was doing a lot of snippet copying for my plots before.
mynewtb 2 days ago 0 replies      
Everyone, check out toyplot! It is a very easy python module for plotting.
username4444444 2 days ago 3 replies      
From my personal experience, mpl's 3D plotting capabilities are pretty terrible (just try log-scaling your axes), and looking into Mayavi as a replacement has been on my list for a while.
Moderate drinking associated with atrophy in brain related to memory, learning washingtonpost.com
276 points by tuxguy  5 days ago   235 comments top 30
SubiculumCode 5 days ago 2 replies      
In my PhD work I was heavily involved in hippocampal segmentation, and I can say with confidence that FSL FIRST is not a state-of-the-art segmentation method. It belongs to an earlier generation of segmentation methods with poorer reliability, which has contributed to a lot of contradictory results in my field of hippocampal development. I would not use it in my research.




[edit]I had meant to point a link to my chapter on hippocampal development.https://www.researchgate.net/publication/314194708_Hippocamp...

startupdiscuss 5 days ago 6 replies      
But they define "moderate" as 2 drinks/day for a man.

That is 14 drinks a week, which puts you in the second-highest decile for Americans!


codyb 5 days ago 4 replies      
Alcohol is getting a pretty bad rap to me the more I read about it. After reading this I was curious about ways to _increase_ hippocampal function. According to [0] it seems like exercise can increase hippocampal function. Another link I can't find again since I found it on my phone and am now on my laptop indicated things like learning languages, and an omega 3 rich diet can also help.

This is good news for me since NYC is very much a drinkers town and I enjoy partaking, but am also learning Italian, exercising more, and frequently eat with omega 3s in mind.

Here's to hoping they cancel out!

[0] - https://www.nature.com/nature/journal/v472/n7344/full/nature...

ACow_Adonis 5 days ago 3 replies      
As a stats man, before I read the article I said to myself: "bet it's from a survey".

With that kind of methodology, I imagine the "moderate" drinkers are going to include the group of people who said they drink frequently, but not that much. I'd be willing to bet the relationship between reporting that you drink frequently and the negative effects of alcohol will outweigh any attempt to self-report how much you drink, because the latter won't be reported accurately, while people who don't drink will report more accurately and select themselves out into another category. Not only is it easier to self-report whether you are a drinker or non-drinker than whether you are a moderate or heavy drinker; I imagine there's also a confounding effect, given that there generally has to be something exceptional about you to be a non-drinker in our societies.

Self-disclosure: practically a non-drinker, nothing ideologically against it. Might have a beer every two months socially with food.

Thoughts? Especially from anyone reading the actual study?

hamstercat 5 days ago 1 reply      
I'd be interested to know what "3 times more" means. If your chance of brain damage is 0.5% normally and jumps to 1.5% with moderate drinking, that isn't too bad. If it jumps from 10% to 30%, that's another story.
fludlight 5 days ago 1 reply      
Why not link to the actual study?


scottLobster 5 days ago 3 replies      
So just for reference, 14 units of alcohol (the low end correlated with atrophy) is approximately:

 7-9 (US) shots of hard alcohol (assumed 37.5% ABV)
 7 pints of lager (assumed 4% ABV)
 9.3 125 ml glasses of average-strength wine (assumed 12% ABV)

Looks like my 2-5 bottles of beer a week habit is fine. :)
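
The conversions are straightforward once you know that one UK unit is 10 ml of pure ethanol, i.e. units = volume_ml × ABV / 1000. A quick sketch (the serving sizes are my assumptions):

```python
def uk_units(volume_ml: float, abv_percent: float) -> float:
    """Alcohol units (UK definition: 10 ml pure ethanol per unit)."""
    return volume_ml * abv_percent / 1000

# Roughly reproducing the list above:
pint_of_lager = uk_units(568, 4.0)    # imperial pint at 4% ABV
glass_of_wine = uk_units(125, 12.0)   # 125 ml glass at 12% ABV
us_shot       = uk_units(44, 37.5)    # 1.5 US fl oz at 37.5% ABV

# How many of each drink add up to the study's 14-unit threshold.
drinks_per_14_units = {
    "pints of lager":  14 / pint_of_lager,
    "glasses of wine": 14 / glass_of_wine,
    "US shots":        14 / us_shot,
}
```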


hellofunk 5 days ago 3 replies      
I guess you need to decide what is important to you. For example, a few years ago, this was published [0]:

> One of the most contentious issues in the vast literature about alcohol consumption has been the consistent finding that those who don't drink tend to die sooner than those who do


richieb 5 days ago 0 replies      
As a comedian once said "They say that alcohol kill brain cells. Yeah, but only the weak ones!"
nickledave 5 days ago 0 replies      
Haven't seen anyone else comment on this specific statement from the abstract: "No association was found with cross sectional cognitive performance or longitudinal changes in semantic fluency or word recall." Based on other people's comments, seems like the main finding here is that maybe the hippocampus shrinks as we get older and maybe there's an effect on lexical fluency. We already know that, even in "healthy" subjects, the brain shrinks due to aging, and I think no-one can say yet how much atrophy can take place before it impacts memory or learning.
water42 5 days ago 0 replies      
this was posted earlier in the month and most of the top level comments have already been made


carbocation 5 days ago 2 replies      
A Mendelian randomization study that I find convincing suggests that there is no obvious safe level of alcohol intake from a cardiovascular standpoint.

This observational study linked by tuxguy points in the same direction, and it seems ripe for follow-up with genetic work that could support (or help refute) the likely direction of causality.

Cyph0n 5 days ago 9 replies      
I'm always interested to see how people react to a news article or study that criticizes alcohol. I have noticed over time that alcohol consumption is usually a taboo subject to discuss for some reason, and almost everyone I talk to immediately gets on the defensive when I say that I don't drink and never will. Can anyone shed some light on why exactly people don't like it when someone doesn't drink, especially when it's for health reasons?

Anyways, I'm not surprised that most of the comments here are either outright defensive or are just proposing ways to undermine the content of the study. Reading through the comments, arguments include: the one size fits all "correlation is not causation", ad hominem attacks on WaPo itself, half-jokingly accusing the article author of drinking, and arguing the semantics of what constitutes a "drink".

mortenjorck 5 days ago 10 replies      
Study finds correlation. Article mentions caveat that correlation is not causation. Headline explicitly states causation.

This is certainly a very interesting correlation, and demands further study. Perhaps there is a causal link. But it's just (predictably) irresponsible journalism to head such a piece with a factually incorrect headline.

devoply 5 days ago 2 replies      
Well why wouldn't it. Alcohol is not a health food. It crosses the blood brain barrier and is toxic to cells. So yeah put toxic stuff up there, there will be some consequence.
zelos 4 days ago 0 replies      
Did they consider that it might not be the alcohol itself, but correlation with behaviour? People who come home from work and have a drink or two are much less likely to do something that keeps their brain active.
protonfish 5 days ago 0 replies      
My concern about this study is that it used a large number of different measures: multiple structural brain measures and cognitive tests, then swept the negative ones under the rug and reported only positive correlations. This is like the XKCD jelly bean test https://xkcd.com/882/ There is no reason to accept these types of results without replication.
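
The jelly-bean effect is easy to demonstrate with a toy simulation: run twenty tests on pure noise and watch "significant" results appear anyway. This is illustrative only — it uses a normal-approximation p-value, not the study's methods.

```python
import math
import random

def two_sided_p(samples):
    """Normal-approximation p-value for 'the mean differs from zero'."""
    n = len(samples)
    z = (sum(samples) / n) * math.sqrt(n)   # samples assumed ~ N(0, 1)
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(0)
n_measures, n_subjects = 20, 50

# Twenty "outcome measures" that are pure noise, like the jelly bean colors.
p_values = [
    two_sided_p([random.gauss(0, 1) for _ in range(n_subjects)])
    for _ in range(n_measures)
]
hits = sum(p < 0.05 for p in p_values)

# With 20 independent null tests, the chance of at least one p < 0.05 is
# 1 - 0.95**20, about 0.64 -- better-than-even odds of a spurious "finding".
family_wise_error = 1 - 0.95 ** n_measures
```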
spdionis 5 days ago 1 reply      
Yeah but what's the point of having a good memory if you can't have a drink?
kmm 5 days ago 1 reply      
I wonder if there's a difference between acute and chronic alcohol consumption of the same amount. Drinking 14 units in one event every 2 weeks amortizes to one unit per day, but would it have the same effect?
nashashmi 5 days ago 0 replies      
Reminds me of what a nurse once said: wet brain syndrome, or being drunk even when you are sober.
clubm8 5 days ago 0 replies      
What if I drink fourteen drinks once every 4 to 6 months?
KekDemaga 4 days ago 1 reply      
My personal rule still stands "All substances that impair in the short term, when used daily, impair in the long term."
tlholaday 5 days ago 1 reply      
Meanwhile, President Trump is a lifelong teetotaler.
accountyaccount 5 days ago 0 replies      
sure, that's the point
mothers 5 days ago 5 replies      
Would you give a 5-year-old a beer? No? Exactly. Everyone knows drinking is bad for you. They do it anyway, because "reasons" [1]. Anyone who disagrees is just being irrational.

[1] which may include partying and hooking up, as well as removing any awkwardness they have.

melling 5 days ago 0 replies      
Maybe now we can put to rest the "coding while drunk" discussion that occurs on HN from time to time:


B1narySunset 5 days ago 0 replies      
What about binge drinking on the weekend?
coldtea 5 days ago 0 replies      
>Even moderate drinking causes atrophy in brain area related to memory, learning

Well, that's for the better. Seeing that I drink to forget.

TimMurnaghan 5 days ago 5 replies      
Seriously, stop linking to sources that block adblockers. As far as I'm concerned, they've taken themselves off the web.
       cached 3 July 2017 04:11:02 GMT