Hacker News with inline top comments (Best), 18 Jul 2017
Apache Foundation disallows use of the Facebook BSD+Patent license apache.org
1254 points by thelarkinn  2 days ago   346 comments top 52
numair 2 days ago 20 replies      
Finally, people are beginning to realize the insanity of this entire PATENTS file situation!

When I first brought up how misguided people were for embracing React and projects with this license, I was downvoted to hell on HN. But really, everyone, THINK ABOUT IT. This is a company that glorifies and celebrates IP theft from others, and lionizes its employees who successfully clone others' projects. They've built their entire business on the back of open source software that wasn't ever encumbered with the sort of nonsense they've attached to their own projects. And this industry is just going to let them have it, because the stuff they are putting out is shiny and convenient and free?

Having known so many people involved with Facebook for so long, I have come up with a phrase to describe the cultural phenomenon I've witnessed among them: "ladder kicking." Basically, people who get a leg up from others, and then do everything in their power to ensure nobody else manages to get there. No, it's not "human nature" or "how it works." Silicon Valley and the tech industry at large weren't built by these sorts of people, and we need to be more active in preventing this mind-virus from spreading.

By the way, the fact that Facebook is using this on their mostly-derivative nonsense isn't what should concern you. It's that Google has decided, as a defensive measure, to copy Facebook's move. Take a look at the code repo for Fuchsia and you'll see what I mean. Imagine if using Kubernetes meant you could never sue Google?

erichocean 1 day ago 0 replies      
To anyone concerned about React's virtual DOM and diffing, and a potential Facebook patent thereof: in early 2012 I wrote and published (under a GPLv3 license) a virtual DOM implementation with efficient diffing when I forked SproutCore[0] to become Blossom.[1]

So even if Facebook tries to patent that particular invention/innovation, it may not stand up to legal scrutiny depending on the filing date. AFAIK, Facebook didn't do a provisional patent for virtual DOM stuff before July 2012 (long after I released Blossom), because that patent filing would have become public AT THE LATEST on January 1st, 2016 and nothing has come to light that I'm aware of.

So you should be safe (IANAL).

[0] Ironically, given the subsequent popularity of React, the SproutCore team rejected my virtual DOM approach, which is why I had to fork it. Live and learn. I actually came up with the specific virtual DOM + diff design in spring 2008, but didn't get around to writing the code for it until someone paid me to do it (I had asked Apple and they declined). Eventually, the copyright owner of SproutCore (Strobe, Inc.) got bought by Facebook, though I don't recall when.

[1] https://github.com/erichocean/blossom
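For readers unfamiliar with the invention being discussed, the virtual DOM + diff idea can be sketched in a few lines of plain JavaScript. This is a toy illustration of the general concept only, not Blossom's or React's actual algorithm; the names `h` and `diff` are my own.

```javascript
// Build a lightweight "virtual" node: a plain object describing a DOM element.
function h(type, props, ...children) {
  return { type, props: props || {}, children };
}

// Naively compare two virtual trees and emit a list of patch descriptions.
// A real renderer would then apply these patches to the actual DOM, touching
// only the nodes that changed instead of re-rendering everything.
function diff(oldNode, newNode, path = []) {
  if (oldNode === undefined) return [{ op: 'create', path, node: newNode }];
  if (newNode === undefined) return [{ op: 'remove', path }];
  if (typeof oldNode === 'string' || typeof newNode === 'string') {
    return oldNode === newNode ? [] : [{ op: 'replace', path, node: newNode }];
  }
  if (oldNode.type !== newNode.type) {
    return [{ op: 'replace', path, node: newNode }];
  }
  const patches = [];
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i], path.concat(i)));
  }
  return patches;
}
```

For example, diffing `h('div', null, h('span', null, 'hi'))` against `h('div', null, h('span', null, 'bye'))` yields a single `replace` patch for the text node, leaving the rest of the tree untouched.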

softinio 2 days ago 5 replies      
I really have no idea why React is so popular with such a silly license.

I agree with this move.

There are plenty of OSS projects out there without a patents clause attached to their licenses, so there's no reason to use React.

clarkevans 2 days ago 0 replies      
I'm not a lawyer, but perhaps Facebook's BSD+Patent license is not even open source.

It's tempting to consider the BSD license independent of the additional patent license. However, the OSI has not approved CC0 as open source precisely because it expressly reserves patent rights [0]. In the OSI's justification, permissive MIT and BSD licenses may provide an implicit patent license, so by themselves they are open source. However, like CC0, BSD+Patents expressly excludes this possibility. Indeed, Facebook's licensing FAQ deems the combined work of the BSD+Patents to be the license [1]. Further, a recent court case has shown that these licenses are not simply copyright or patent statements, but can be actual contracts [2].

Hence, we have to consider the BSD text plus the PATENTS file text as the combined license. This license is not symmetric and hence may violate OSI license standards. I've made this comment in the Facebook bug report: https://github.com/facebook/react/issues/10191

[0] https://opensource.org/faq#cc-zero

[1] https://code.facebook.com/pages/850928938376556

[2] https://perens.com/blog/2017/05/28/understanding-the-gpl-is-...

captainmuon 2 days ago 3 replies      
I think this is an overreaction (pun accidental).

There are two things here: The copyright license, and the patent grant. Copyright applies to the concrete implementation. You have to agree to the license to be subject to it, and to legally use the code.

A potential patent applies to any implementation. Even if you write a clean-room clone of React, if it infringes the same patent, Facebook has a patent claim. But that means the patent grant is not specific to the code; it doesn't even require consent. Facebook could allow you conditional patent usage even without your knowledge! A corollary is that you are strictly better off with the patent grant: it imposes no additional constraints on you.

License with no patent grant: Facebook can sue you for infringing patents, even if you are using a clone!

License with patent grant: Facebook cannot sue you for infringing patents, unless you do it first.


Second, I think the philosophy behind the patent grant is that software patents are not legitimate. Enforcing a patent is not seen as a legitimate right, but an annoyance, like pissing on someone's lawn. From that point of view, it doesn't seem too much to ask of somebody to refrain from doing that. (I don't know if that was the idea of the people who drafted that license, but it wouldn't surprise me.)


Another, unrelated observation (and please don't invalidate the first observations if this one is wrong, as internet commentators are wont to do):

I see nowhere in the license [1] that it requires you to take the patent grant. Is that true? It would be silly to refuse it, because you are strictly better off with it, of course.

[1] https://github.com/facebook/react/blob/master/LICENSE

fencepost 2 days ago 2 replies      
I was interested in React based on what I'd read and was figuring it'd be worth looking into, but this provides all the reason I need to avoid it. I don't foresee a situation where I would personally, or as a small company, be suing Facebook, but I could see developing something and then selling (or trying to sell) it to a larger company. If my code comes with a big side of "oh, and if you buy this you won't be able to sue Facebook or its affiliated companies for patent infringement," that could significantly hurt its sales chances.
chx 2 days ago 2 replies      
RocksDB has now fixed this and moved to an Apache/GPL dual license: https://github.com/facebook/rocksdb/commit/3c327ac2d0fd50bbd...
altotrees 1 day ago 0 replies      
I worked for a large company on several web-based apps right around the time React came out. There were some UI issues I thought could be sorted easily using React.

After going to our lead dev, who in turn went to our project manager, we received an email from our legal department a few days later that simply stated we would not be using React due to "certain patent constraints."

Having not done any prior research, I looked into what the problem might be and was pretty floored with what I found. At first I scoffed when they said no, but after reading about the patent situation I totally get it.

jorgemf 2 days ago 2 replies      
Can someone explain how this can affect projects using React, whether as part of a company's product or in personal projects? Thanks.

I found this [1]:

> FB is not interested in pursuing the more draconian possibilities of the BSD+patents license. If that is true, there is actually very little difference between BSD+patents and the Apache license. As such, relicensing should make little if any pragmatic difference to Facebook.

So what happens if Facebook doesn't change the license and in the future changes its mind?

[1] https://github.com/facebook/react/issues/10191

j_s 2 days ago 0 replies      
So push finally comes to shove.

Glad the long-term legal implications will be given serious consideration publicly, rather than the "this is not the droid you're looking for" I've seen nearly everywhere so far!

rdtsc 2 days ago 3 replies      
I was wondering about a similar issue for the zstd compression library. It has a similar BSD+PATENTS-file arrangement.

There is an issue with a related discussion about it that has been going for more than a year:


The last update is interesting. Someone did a search for any patents FB filed and couldn't find any in the last year. So, somehow, based on that they decided things are "OK".

To quote:


> US allows to patent up to a year after the publication, allowing to conclude that ZSTD remains free of patents (?) - suggesting this "The license granted hereunder will terminate, automatically and without notice (...)" from PATENTS file has no legal meaning (?)


Anyone care to validate how true that statement is?

learc83 2 days ago 0 replies      
If Facebook has patents that cover React functionality, they almost certainly cover parts of other JavaScript frameworks too. React is well executed, but it's conceptually simple.

I don't think avoiding React makes you any safer. You don't know how broadly Facebook or the courts will interpret their patents.

TheAceOfHearts 2 days ago 2 replies      
In the discussion they say RocksDB will be relicensed under dual license Apache 2 and GPL 2.

There's already an issue [0] asking them to consider doing something similar for react, and Dan Abramov said he'd route the request internally on the next work day.

I can't imagine they'd keep the existing license without harming their community image. But even if they keep the license, many applications should be able to easily migrate to preact [1] and preact-compat, which provides a react-compatible API.

Hopefully they relicense the project. It seems like it's the first thing that gets brought up each time react gets mentioned.

[0] https://github.com/facebook/react/issues/10191

[1] https://preactjs.com
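The preact-compat migration mentioned above is typically done with a bundler alias, so existing `import React from 'react'` statements resolve to Preact's compatibility layer with no source changes. A minimal webpack sketch (assuming webpack and the preact-compat package; your actual config will differ):

```javascript
// webpack.config.js (fragment)
module.exports = {
  resolve: {
    alias: {
      // Route all react/react-dom imports to the API-compatible Preact shim.
      'react': 'preact-compat',
      'react-dom': 'preact-compat'
    }
  }
};
```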

tomelders 2 days ago 2 replies      
OK. With the BSD license + patent grant:

Do you have a license to use Facebook's patents? Yes.

Do you have a license to use Facebook's patents if Facebook brings a patent case against you? Yes.

Do you have a license to use Facebook's patents if you bring a patent case against them? No.

If you do not have a patent grant, can you still use React? YES!

If you're going to downvote this, please say why. This is how I interpret the license plus patent grant. If I'm wrong, I'd like to know why.

tomelders 2 days ago 2 replies      
IANAL, but this seems strange to me. The Apache license has what I see as the same patent grant, with the same condition that if you make a claim against them, you lose the patent grant.

Apache 2.0

> 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

The important bit being...

> If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

But what people seem to be missing (as far as I can tell) is that you don't lose the licence to use the software. You just lose the patent grants. But with the BSD licence alone, you lose both the patent grant AND the licence. I really don't see how the Apache 2.0 License and Facebook's BSD+Patent Grant are any different.

mixedbit 2 days ago 1 reply      
Does this mean that a startup that uses React becomes un-buyable for any company that sells or plans to sell patent rights to Facebook?
ec109685 2 days ago 0 replies      
I don't understand how code with a BSD license and no patent grant is better for the Apache Foundation than Facebook's BSD+Patents license. With the former, the entity donating the source can sue you for patent infringement at any time.

Clearly the Apache 2 license would be preferable (and is what RocksDB did), but I am puzzled that the foundation accepts BSD code in their products, given their worry about patents.

plastroltech 23 hours ago 1 reply      
There is so much FUD around this PATENTS file.

If Facebook had not included this patent grant and had released React under only the BSD license, then any user who decided to bring a patent action against Facebook would be in exactly the situation everyone is complaining so loudly about: open to being sued by Facebook for violating a patent which they own.

What this grant says is that for one specific circumstance (you haven't brought a patent suit against them) and for one specific limited set of patents (those related to React), Facebook will not sue you. If you like that protection, then don't sue them. If you decide that you do want to bring a patent suit against them, then you're right back where you were to begin with. Your one small limited protection is removed and Facebook can once again sue you if you violate one of their patents - just like they could before you started using React in the first place.

This business about it being asymmetrical is IMO a distraction. What would it mean for it to be symmetrical? That it would only trigger if you sue them for a patent violation related to React? What does that mean? You don't hold React patents, Facebook does. How would you sue them for a violation of a React patent? It makes no sense.

vbernat 2 days ago 2 replies      
How was Facebook able to change the license of RocksDB so easily? The CLA is not a copyright assignment, and therefore all contributors have to agree to the change. Did they contact everyone who signed the CLA?
Xeoncross 2 days ago 1 reply      
> Does the additional patent grant in the Facebook BSD+Patents license terminate if I create a competing product?

> No.

> Does the additional patent grant in the Facebook BSD+Patents license terminate if I sue Facebook for something other than patent infringement?

> No.


Consider re-licensing to AL v2.0: https://github.com/facebook/react/issues/10191

vxNsr 2 days ago 0 replies      
> David Recordon (added yesterday, 21:10): Hi all, wanted to jump in here to let everyone know that the RocksDB team is adjusting the licensing such that it will be dual-licensed under the Apache 2 and GPL 2 (for MySQL compatibility) licenses. This should happen shortly and well ahead of August 31st. I'll leave the history and philosophy around licensing alone since it's generally a complex discussion to have and I'm not sure that it has actually been fully captured in this thread, especially vis-a-vis Facebook's intent. Hopefully this morning's guidance to PMCs can be adjusted since I don't think any of us see a bunch of extra engineering effort as a desirable thing across the ASF projects which are already making use of RocksDB. Thanks, --David

Looks like they're working on amending this issue, could very well be a case of legal getting involved and the regular engineers not realizing the change or simply not paying attention. Alternatively, maybe this is just crisis management and they were just hoping this wouldn't happen.

issa 2 days ago 0 replies      
I rejected using React in a project for just this reason. I'll be perfectly honest: I didn't (and still don't) completely understand the implications, but on its face it sounds like trouble.
geofft 2 days ago 2 replies      
Am I reading this right that Apache's unwillingness to use rocksdb under the custom license pressured Facebook into switching to Apache || GPLv2? That is pretty cool!
rmgraham 15 hours ago 0 replies      
IANAL, but...

People should be aware that Atom (the editor) uses React internally, so it's possible you face similar legal exposure without even shipping anything just because you agreed to the terms by installing an editor.

isaac_is_goat 2 days ago 2 replies      
If you can't use React because of the license, use InfernoJS (MIT). In fact... you should probably use InfernoJS anyway.
Communitivity 1 day ago 0 replies      
Thanks to David Recordon, this has now been fixed and RocksDB is dual-licensed under Apache or GPL 2. The ball is now rolling on the same change for React, which AFAIK is still under the old BSD+Patents license. Please click through the OP's link for the current details.
didibus 2 days ago 1 reply      
So if I understand correctly, by using React you agree that if you sue Facebook, you'll need to stop using React? And that holds no matter what you're suing them for?

So say Facebook was infringing on one of your patents: you could still sue them, but you'd have to give up React if you did. Is that correct?

nsser 2 days ago 1 reply      
Facebook's open source projects are potential trojan horses.
jzelinskie 2 days ago 1 reply      
As the other comments say, RocksDB is going to be dual-licensed under both GPLv2 and Apache. What's the advantage of doing so? If I choose to consume the library via the Apache license, I'd never have to contribute back code; doesn't this invalidate the copyleft of GPLv2?
gedy 2 days ago 1 reply      
This keeps coming up as a concern with back and forth "it's a big deal"/"it's not a big deal" - so if FB has no ill-intent from this, are there any simple, obvious changes they could/should make to the React license?
maxsavin 2 days ago 1 reply      
And now, for a tinfoil hat moment: even though companies like Microsoft and Google are using Facebook's libraries, it is possible that they have some kind of private deal in regards to the patents clause.
snarfy 2 days ago 0 replies      
Is it any patent, or only patents pertaining to the "Software" as defined in the license (react)?

I cannot sue Facebook over patents in React without losing my React license, but I could sue over some other patent I own (e.g., fizzbuzz) that Facebook is violating. Is this correct, or is it any patent?

If it is any patent, I cannot believe that was the intent even if that's how Apache Foundation is interpreting it.

alecco 2 days ago 0 replies      
This is the major drawback of adoption for Zstandard, too.
Steeeve 2 days ago 0 replies      
Interesting note from the discussion:

> As noted earlier, the availability of a GPLv2 license does not preclude inclusion by an ASF project. If patent rights are not conferred by the 3-clause BSD and required by ASLv2 then would not these licenses be incompatible?


> he has discussed the matter with FB's counsel and the word is that the FB license is intentionally incompatible. It is hard to make the argument that it is compatible after hearing that.

ouid 2 days ago 0 replies      
If your landlord retaliates against you for enforcing some other provision of your agreement, then even actions that would normally be within their rights become illegal. I'd bet that this is not the kind of statute they could override in a lease agreement.

I wonder if such a clause is actually enforceable. Are there any actual cases where this clause was invoked?

hellbanner 1 day ago 0 replies      
What will Facebook have to do now? They either have to open-source their infringing software or rewrite it convincingly. (Wasn't there an Oracle vs. Google case about duplicate implementations of a function?)
brooklyntribe 2 days ago 1 reply      
I'm a Vue.js guy myself. Think it's far cooler than React. And it's never going to face this issue, methinks.


0xbear 2 days ago 1 reply      
Edit: Caffe2 is similarly afflicted. Torch and PyTorch are not. Some of Torch's modules are, however.
didibus 2 days ago 0 replies      
What patents does React depend on?
GoodInvestor 1 day ago 0 replies      
Luckily React itself is no longer unique, and projects can use the ideas it popularized via liberally licensed alternatives such as Preact or Inferno.
vbernat 2 days ago 1 reply      
Reading a bit more of the thread, it's quite surprising. The assignee is Chris Mattmann; from his webpage, he is not a legal counsel. The only evidence of a problem they show is that BSD alone implies a patent grant, but coupled with an explicit patent grant this is no longer the case. The other evidence is brought by Roy Fielding, who does not appear to be a legal counsel either, about a discussion (oral?) with Facebook's legal counsel to the effect that the license is incompatible with ASLv2.

The whole decision seems to have been taken by "not-a-lawyer" people with their own interpretations. Doesn't the Apache Foundation have software lawyers they can ask?

hordeallergy 1 day ago 0 replies      
When has Facebook ever demonstrated any integrity? Why anyone chooses to have any association with them is inexplicable to me.
shams93 1 day ago 0 replies      
Do 3rd party implementations like inferno or preact have legal issues from being based off Facebook intellectual property?
Kiro 2 days ago 2 replies      
Why would I care about patents if I'm outside the US? Software patents are not even valid where I live.
flushandforget 2 days ago 1 reply      
Can someone please paraphrase the implications of this? It's hard for me to understand.
erichocean 1 day ago 0 replies      
Well, now I can use CockroachDB, so that's nice. :)
luord 1 day ago 0 replies      
I've never liked react (and not only because of the sketchy licensing, in fact, that's fairly low among my qualms about react) so it's nice to see validation.

I'm sticking with Vue, even if (and that's a big if) it might also infringe facebook patents.

anon335dtzbvc 2 days ago 1 reply      
I've made a pull request to fix that situation https://github.com/facebook/react/pull/10195
guelo 2 days ago 0 replies      
I hate Facebook, but I also hate patents, so I like this license and wish more projects would use it. Lawsuits impede progress and damage the economy, and no matter what the laws are, curious, smart people will always invent.
weego 2 days ago 0 replies      
I'm sorry, but I struggle to take seriously anyone here who thinks that having to stop using one specific JS framework amounts to being legally neutered in a patent litigation case.
BucketSort 2 days ago 0 replies      
Ding ding. Vue.JS wins.
known 2 days ago 0 replies      
Sounds rational
Employees Who Stay in Companies Longer Than Two Years Get Paid 50% Less forbes.com
679 points by askafriend  1 day ago   523 comments top 45
elnygren 1 day ago 20 replies      
Best way to increase your salary at your current workplace? Get offers from other companies and ask for a meeting with your boss (or whoever decides your pay: HR, etc.).

Bring up the offers, discuss what you'd get there, and also go through the potential career you could build at those companies. For example: Consultancy X would pay me $k/mo, and every 6 months I'll move up the ladder if I perform well.

With this kind of discussion you should be able to get a raise that brings your pay even above the competing offers.

Treat your current job as one of the options you can choose that day; make your employer fight for you every 6-12 months. And remember: it's not personal, it's just business. Your employer would let you go and throw you under the bus if it made business sense.

tl;dr: don't get too emotionally attached to your job; that's when they get you.

nilkn 1 day ago 10 replies      
I think it's important to understand that there's a significant amount of selection bias possible here. In general, folks who switch jobs every two years are the folks who are not getting offered big raises by their current companies. The ones who are getting offered big raises may still choose to leave due to other reasons, but they often will choose to stay instead. And they won't be making a big fuss about it online.

There's another phenomenon here. Almost by definition, average developers are not going to get big raises. They'll get average raises. The average developer raise is probably 3-5%, which is not big but it's more than the average raise across all professions, at least in the US.

This leads to an interesting question: why can the average developer get a big raise by switching jobs? I think at least part of the answer is that companies simply have more information about employees who've been around for at least a few years than they do about potential hires still on the market. It's a lot easier for a company to realize an employee is about average once that person has been on the job for 12+ months than it is during the interview phase, where the questions are heavily sandboxed and generally focused on basic problem solving ability rather than the candidate's ability to convert that into business value.

Finally, in general people who are average but think they're above average really do not like to confront the fact that they're average. So average developers with big egos being offered average raises will often very vehemently argue that the problem is all with the companies they've worked at and never with themselves.


Another point worth focusing on here is that raises are really determined by the business value that you're producing, not your raw intelligence or passion for coding. Very smart coders may or may not be any good at converting that talent into lots of value for the company. It may even be that sometimes a less talented coder gets offered a much bigger raise because other skills allowed them to create significant value.

I think this explains why so many folks are average but think they're above average. It's because they might indeed be above average in some metrics, but not the metrics that matter to their employer.

sidlls 1 day ago 10 replies      
There is a plateau as one reaches senior positions.

As I went from no experience to my current position, my job switches (every ~2 years) were always for 10%-25% raises (in one case, 100%, but that was Midwest to Bay Area, so other factors were at play). I never got more than 5% merit increases otherwise.

Most of my friends who have been at the same place for >10 years in engineering (not software) are just now reaching parity with my base salary (CoL-adjusted) and they didn't spend 6 years in academia before transitioning to industry.

Two-year tenures aren't job hopping. They're a reflection of how this industry works. Very few companies offer sufficient breadth and depth of product complexity, career advancement, or other similar things to make staying worthwhile. I'd say the sweet spot is 2-4 years, except at very large companies (e.g. Google) (EDIT:) or companies developing complicated products with physical engineering or regulatory factors complicating development. Anything longer, especially if there is a lot of long-term stasis on a resume (e.g. "tech lead on product X" for more than a year or two), is an indication to me of someone who either isn't capable of stretching himself or doesn't want to. Anything less, especially if more than one project per job exists, indicates an inability to see a project through to maintenance, or someone who is easily bored.

needlessly 1 day ago 4 replies      
I don't get people who say, "Well I would never hire someone who has never worked more than 5 years at a single place!!!"

I would never have increased my salary from $68k to $115k in 5 years. I probably would've been at around $80k right now, at best, if I hadn't switched jobs twice.

If it means some hiring manager is going to voice some snarky opinion, then yes, I'll take my extra money.

manicdee 1 day ago 2 replies      
My contrary opinion is that people who job hop every two years are the ones who come in, make enough progress that management think they are pretty nifty, then leave before they have to do any maintenance on the technical debt they left behind.

Sure, it is good to be highly paid, but the situation just reinforces the idea that people who wear suits are paid far too much.

Though I find myself in the situation of wanting to earn more, so I am seriously considering switching to SAP. Sell my soul, buy a house, live with my conscience for the rest of my life?

fizl 1 day ago 4 replies      
This is a really poor article, and I'm surprised Forbes would publish it. As far as I can tell, there's absolutely no data behind any of the assertions in the article, and the title is just conjecture by the author based on a whole lot of assumptions.
jondubois 1 day ago 2 replies      
Yes, that never made sense to me. Employees who have been around for a couple of years are much more efficient and know their way around. They are worth more, and yet they often get paid much less than 3-month or 6-month contractors, or even new full-time employees in many cases.

Also if they say that they want to leave a company, usually employers let them go relatively easily without making any significant counter-offer. It means that employers don't even see the value there; they actually think that every employee is 100% replaceable and don't account for the massive efficiency loss incurred.

southphillyman 1 day ago 1 reply      
Never understood why companies will refuse to give valuable employees reasonable raises, but will then turn around and pay an unproven new hire even more money to replace them. So you refuse to give me a 10% increase because "never negotiate with terrorists," but will gladly pay the 20% increase that I'm getting at my new job to my replacement (because that's the market). Just seems so short-sighted.
jurassic 1 day ago 1 reply      
Maybe I'm more conservative than others here, but anything less than 2 years tenure at a company seems suspicious to me. The people I've known who bounced after ~18 months or less were often the ones I would've wanted to quit anyway, who weren't cutting the mustard and weren't on track for promotion. For them, talking a big game at interview time every two years probably is income-maximizing because the longer they stick in the same job without promotions the more obvious their stagnancy is on the resume.
dsmithatx 1 day ago 2 replies      
I keep seeing articles like this that fail to point out an important fact. The economy has been strong for a long time and it is a workers' market. Once there is another downturn it will be an employers' market. Job hoppers will be lucky to land interviews, much less command high salaries. That is, unless you are in the top percent at your skill level or have some highly sought-after skill, like being an Oracle DBA.

I worry about younger people shaping their world view and career around the last 10 years. I did that, and in 2002 when I was laid off, it was a painful few years of realization, not finding work.

zelos 1 day ago 1 reply      
Yet large companies still launch investigations into why retention is so bad and have employees fill out surveys to try to figure out why people are leaving. Multiple-choice surveys, of course, with no questions about compensation.
sverhagen 1 day ago 0 replies      
Sad, I am. I think I'm perfectly capable of selling myself to the next job. But I like what I'm doing, I like the goals that are still ahead and being worked towards, great team, all good. And whether you read the article with a bit (or a lot) of skepticism, it seems common knowledge that the biggest steps in salary are made when going to that next job. So... where's the silver lining for loyal dogs? Should I just pick up a book on negotiating and take it up with my VP of People?
bitexploder 1 day ago 0 replies      
This article flies in the face of what I have been reading for years. This article is taking some simple data "employees that change jobs get X% raise" and extrapolating it to a whole lot of wrong. It does vary by organization.

For example if you are at an organization where you are basically the most senior employee already and you can take on a new role at a larger organization with new responsibilities or skill growth a job change can make sense. If you are just bouncing around between say Amazon, Google, and Facebook this can often be a dud strategy in the long term.

Take this with a grain of salt, since I don't have time to dig up citations, but I have read that long-term compensation is actually higher for individuals who don't switch jobs so often. I guess the career employee is a bit of a legend in IT these days, so we could have a lively discussion about the selection effects and differences between a big tech firm and a more traditional F500, but I think in the long run these things will normalize because they are basic human-nature and organizational-structure problems, and nothing is intrinsically special about technology companies. They are the special darlings of the era, so they get to break the rules a bit. Maybe it really will foment a long-term change in "the rules", or always be a bit of a bubble in terms of how the organizations operate.

Steeeve 1 day ago 1 reply      
Job hopping works. But the more you do it, the more you can expect to see skepticism from hiring managers.

Software development is a little bit different because right now there is more demand for workers than there are good developers. It hasn't always been that way and it won't always be that way.

When you're working for the right employer at the right salary, an extra 25-50% isn't going to be enough to lure you away. That and at some places stock options are worth something and vesting schedules actually play into the decision process.

nfriedly 1 day ago 1 reply      
I wonder how strongly this holds after you're into the higher end of the salary range?

My best increase ever was going from my high school/college web dev job to freelance, where I more than tripled my hourly rate from $12.50/hr to $45/hr. (It probably wasn't quite 3x after accounting for taxes and healthcare and such, but it was still a good jump, and brought more flexible hours too.)

Since then, I've gotten 20-25% increases a couple of times, topping out at $120k.

I was switching more like every 3-5 years, so a 5%/year raise actually ends up being in the same ballpark, if not slightly better. Every two years might be better from a pure cash perspective, but I didn't feel like I was "done" with my roles until the 3-5 year mark - I think I learned more, delivered more value, and achieved a better sense of accomplishment doing it my way.

I also live in Ohio, with the exception of one year I spent in SF. I imagine I could double my current salary moving back to SF or over to NYC, but I'm happier here. (And $100k+ goes a bit farther here...)
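A quick back-of-envelope check of the comparison above (the salary figure is invented for illustration, not from the comment): four steady 5%/year raises versus a single 20% jump at a 4-year switch.

```python
# Hypothetical numbers: compound 5%/year raises for 4 years
# vs. one 20% job-hop jump at the 4-year mark.
def compound(salary, annual_raise, years):
    """Apply a fixed percentage raise once per year."""
    for _ in range(years):
        salary *= 1 + annual_raise
    return salary

stayer = compound(100_000, 0.05, 4)   # four 5% raises: ~121,551
hopper = 100_000 * 1.20               # one 20% jump: 120,000

# Same ballpark, as the comment says -- the 5%/year path even edges ahead.
print(round(stayer), round(hopper))
```

The crossover is sensitive to the exact percentages, which is why 3-5 year tenures with decent annual raises can match a hop-every-2-years strategy.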

epynonymous 1 day ago 0 replies      
a few comments: this article seems to want to incite people to jump ship, which is fine, but fwiw, i have been with the same company for about 14 years and i don't think i'm getting paid less than my peers; rather, i think i'm way above most with similar experience and titles. part of it may be that this article generalizes and doesn't segment the different industries. i work in software, and my company has done a lot to make certain that they're competitive and do not lose key people. i met a guy at google, he has been there since 2004, and i doubt he's making less than his peers; there are certainly benefits, say restricted stocks (please look at goog from 5 years ago and you'll understand). this article is probably grouping everyone from those working the counter at mcdonalds to investment bankers. nice work, forbes.

also, i think working for the same company does give you more opportunities to understand different parts of a company that you certainly would miss out on if you jumped ship every two years.

also note that as a hiring manager, i look down on candidates that jump ship too frequently, no matter how strong his/her skill.

johan_larson 1 day ago 9 replies      
It's a bit strange that this sort of job-hopping isn't a red flag to employers. You'd think managers would be reluctant to hire an employee who has jumped ship many times before, just as they were becoming useful.
animex 1 day ago 2 replies      
Ha, this is funny. I rarely ever stay longer than 2 years at a company, and every time I jumped, my salary went up significantly, i.e. >10%. HOWEVER, I've seen guys who stayed with a company earning less for years end up as the VP/President of the company eventually, with their pay going way up.
jryan49 1 day ago 0 replies      
If the article was even based on actual facts and figures, wouldn't survivorship bias be a huge factor here? Maybe the bottom of the barrel are bringing the average down because they are stuck with jobs with no mobility and aren't valuable enough to get the raises.
hughperkins 1 day ago 1 reply      
Correlation != causation.

Seems perfectly plausible to me that the more confident candidates are more likely to be poached by other companies, for example.

nextstep 1 day ago 1 reply      
From 2014, but still relevant. Unfortunately, the best negotiating tactic is to bring a competing offer and essentially threaten to quit. Or just switch firms every ~18 months like the post suggests.
abalashov 1 day ago 0 replies      
I have been self-employed since age 22, but from 18 to 22 I had 6 jobs at 5 companies in 3.5 years. I started out in tech support at a small local ISP, hopped, came back as a sysadmin, then became primary sysadmin, and really cut my early career teeth there (and am ever grateful for the experience!). But it was a small college-town operation that ran on student-type labour; at my peak, I think I topped out at $16/hr. The next job hop was a move to Atlanta, where I commanded a $55k salary (about double my peak hourly earnings at NEGIA) and ultimately reached $70k at 21, not terrible for a 21-year-old in Atlanta in 2007.

It's easy to post big gains when moving up from entry level, especially if you entered into an employment bargain that presumed being paid vastly below market for doing fairly sophisticated and diverse things in exchange for having a diamond-in-the-rough skill set. I learned more at the small company than I have ever learned in my life, and more quickly, and was able to effectively parlay that into big-boy corporate jobs in Atlanta.

In one sense, this validates the thesis. But it's important to remember that it does plateau. When moving up from entry level and early-career pay, you're flying close to the ground and it feels like you're going fast. The ground rushes past you, and it's intoxicating. Minimum-wage checkout clerk to full-time salaried assistant store manager? Zoom!

The momentum doesn't last. Had I continued in W-2 employment, I would have likely hit low to low-mid $100k by now, after ten years, but no more.

syntaxing 1 day ago 1 reply      
I wish these studies would incorporate the value of benefits. Benefits are a huge reason (especially for people with families) why people stay at jobs. Imagine having a job for $60K with four weeks of vacation and good 401K matching compared with $80k with one week vacation and horrible health insurance.
conanbatt 1 day ago 1 reply      
The single most important change that should happen to change things like this is having salaries being public.

The day that happens, salaries statistics will very likely take a huge jump.

hdi 1 day ago 0 replies      
That's been my experience as well.

But it didn't happen because I was on the lookout for a fat paycheck. It happened because the companies I worked for at the time couldn't provide the stimulation, technical capabilities, working environment and personal development I was looking for.

Then I realised very few of them do where I am, so now I do 6-12 month contracts and it's helped with saving a bit of my soul and getting paid a bit more.

richardknop 1 day ago 0 replies      
This is unfortunately true in our industry, from my experience. The only way for top performers to get a substantial raise is to have a new job offer with +20-30%. Otherwise the HR department will give you X reasons why your asking for such a big raise is inappropriate.

This is part of a free market economy: companies are trying to minimize their costs and maximize profits, so it makes sense not to waste money giving big raises. This is why you need to play the free market game and force their hand.

Once you get a much better competing offer, your current employer will probably offer to match it. But at that time why stay anyways if you had to force them to consider a substantial raise by going for interviews and getting a better offer?

Also, there is an old advice which says to never accept a counter offer.

I do think there is a ceiling for this approach. You can probably do it 4-5 times and get to quite high salary (150-200k). After that this tactic does not work as well anymore so staying at one place for a long time and earning an internal promotion becomes better option.

castratikron 21 hours ago 0 replies      
I'm not sure why companies prefer that someone leave and take all of their institutional knowledge with them, then hire someone new at a higher wage, a wage that would have kept the first person at the company. They are not taking into account the value of knowledge. If your domain knowledge increases 20% within a year, then you should be getting a 20% raise.

With the elimination of pensions, the only vesting time that most people have is for their 401k, which is usually only a couple of years. Not sure what employers are expecting to happen or why they're surprised when their hot college grad employees decide to leave for a big pay bump.

EADGBE 1 day ago 0 replies      
When I started in this industry, my father, who also works in this industry, gave me this advice. Something along the lines of "if you want a raise, go somewhere else, that's the only way to move up quickly". He was right, three times in the last 4-5 years.
usmannk 1 day ago 2 replies      
I'm reluctant to apply this mindset to the tech, or at least SV tech, industry. While it's common practice to hop jobs every 2-5 years, it seems to be more for new or different opportunities (Different sized company, a new domain, new technical challenges, etc.) than it is for a more competitive offer. Internal promotion, both in compensation and position, seems to be the relative norm.
k__ 1 day ago 1 reply      
Sure, I mean who gives you more than a 20% raise?

When you change jobs, you can renegotiate everything. More money, more holidays, less hours, etc.

Last time I changed jobs, I worked fewer hours and got 20% more money; my last boss would have laughed at me if I had asked for this.

Pogba666 1 day ago 0 replies      
Went through many comments here, and despite lots of disagreement between people, I believe one thing everyone will agree with is: in this game, individuals are much weaker than companies and much more vulnerable regardless.

A company can never go bankrupt because of losing one super star, but a worker and his/her family can be in a bad situation if the company decides to do something to him/her. So whatever negotiation happens between company and worker can never be a fair game at any point.

methodin 23 hours ago 1 reply      
What makes the most sense to me is making moves early on in your career, then hopefully finding a company/CEO/team you really like and settling in for a while. You aren't going to learn some of the valuable lessons if you swap jobs too often, as you will never really become an expert at what you're doing.

Certainly valid if you are underpaid and don't particularly enjoy your job.

anon4728 1 day ago 0 replies      
There's this management mythology that:

- Worker bees will stay because they're clueless and don't have the initiative to keep moving.

- Performance reviews should be fudged downwards under pressure from on high to "save the company money." (Where my mom worked, a middle manager blanket-reduced all subordinates' reviews on the BS grounds that "people are overly generous.")

If you want the least turnover / most morale / most productivity, pick people who can grow the most and grow them until they no longer can keep up or find something else. Develop people (training, mentoring, promotions, bonuses, raises, perks, etc.); don't just consume them as static widgets to fill a hole.

johnward 22 hours ago 0 replies      
I often refer to this gem on salary negotiation: http://www.kalzumeus.com/2012/01/23/salary-negotiation/
georgeburdell 1 day ago 0 replies      
How much does this apply to senior employees? I have a PhD, started out at the "Senior" level as a new grad, and I just recently got my first promotion to "Staff" Engineer, which represents roughly the 85th-90th percentile for pay in my company.
sergers 1 day ago 0 replies      
Depending on size of company, there may be some distinct departments.

I stayed on the same team for 6 years, with multiple promotions, but only a 20+k difference in salary. Jumped teams twice in the next 3 years: +20k.

Now I have doubled the salary I started with. Been with the company 10 years. The only thing I regret is not jumping teams earlier, a few years max. Great learning experience through different roles in my original department... but it didn't help me financially.

g9yuayon 19 hours ago 0 replies      
As I often told my friends, go to a company like Netflix. Netflix actually tries very hard to make money not a concern for its employees, and they do a hell of a great job.
trentmb 1 day ago 1 reply      
I just hit my two year mark. Guess it's time to start job hunting.

My current employer's 401k contributions vest at 3.5 years; would I be out of line insisting that my 'next' employer make a one-time contribution equal to what I sacrifice?

known 1 day ago 0 replies      
Employees != IT Employees
mcguire 1 day ago 0 replies      
"Its a fact that employees are underpaid."


pm24601 13 hours ago 0 replies      

1. A company can be underpaying for the employee's experience but paying correctly for the skills the company needs.

2. It should not be a bad thing for an employee to leave a company to advance their career - the current company may not offer the needed opportunities.

3. Managers should be proactively asking the question of the employee: "Let's talk about recognizing when you SHOULD move to a new position at XYZ or some place else?"

blazespin 1 day ago 0 replies      
This might be confounded because people who switch jobs are more willing to move geographically.
vacri 1 day ago 2 replies      
> Jessica Derkis started her career earning $8 per hour ($16,640 annual salary) as the YMCA's marketing manager. Over 10 years, she's changed employers five times to ultimately earn $72,000 per year at her most recent marketing position.

Amazing, thanks Forbes. I never knew that if you started out on minimum wage and then moved into a mundane professional role, you'd earn considerably more!

mustafabisic1 1 day ago 1 reply      
Also, a great way to increase your salary is to take the lowest pay possible at first. Here is what I mean: https://medium.com/the-mission/take-the-lowest-pay-possible-... And this is not my article and I don't have any affiliations with it :)
The Limitations of Deep Learning keras.io
720 points by olivercameron  22 hours ago   233 comments top 40
therajiv 21 hours ago 14 replies      
As someone primarily interested in interpretation of deep models, I strongly resonate with this warning against anthropomorphization of neural networks. Deep learning isn't special; deep models tend to be more accurate than other methods, but fundamentally they aren't much closer to working like the human brain than e.g. gradient boosting models.

I think a lot of the issue stems from layman explanations of neural networks. Pretty much every time DL is covered by media, there has to be some contrived comparison to human brains; these descriptions frequently extend to DL tutorials as well. It's important for that idea to be dispelled when people actually start applying deep models. The model's intuition doesn't work like a human's, and that can often lead to unsatisfying conclusions (e.g. the panda --> gibbon example that Francois presents).
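(The panda-to-gibbon failure mentioned above comes from adversarial perturbations, e.g. the fast gradient sign method. A toy sketch on a linear "classifier" shows the mechanism; the model, data, and epsilon here are all invented for illustration, not the original ImageNet setup.)

```python
import numpy as np

# FGSM-style perturbation on a toy logistic model p = sigmoid(w.x).
# For a linear model the input gradient is just w, so nudging every
# coordinate by eps against sign(w) shifts the logit by eps * ||w||_1:
# a large swing from a perturbation that is tiny per coordinate.
rng = np.random.default_rng(0)
w = rng.normal(size=100)
x = rng.normal(size=100)

def prob(x):
    return 1.0 / (1.0 + np.exp(-(w @ x)))  # P(class 1)

eps = 0.05
direction = -1.0 if prob(x) > 0.5 else 1.0
x_adv = x + direction * eps * np.sign(w)

# The prediction moves sharply toward the other class even though
# no coordinate of x changed by more than eps.
print(prob(x), prob(x_adv))
```

The human-vision analogy in the thread holds: the perturbation is invisible at the per-pixel level, but it is precisely aligned with the model's decision boundary.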

Unrelatedly, if people were more cautious about anthropomorphization, we'd probably have to deal a lot less with the irresponsible AI fearmongering that seems to dominate public opinion of the field. (I'm not trying to undermine the danger of AI models here, I just take issue with how most of the populace views the field.)

toisanji 21 hours ago 4 replies      
There is some good information in there and I agree with the limitations he states, but his conclusion is completely made up.

"To lift some of these limitations and start competing with human brains, we need to move away from straightforward input-to-output mappings, and on to reasoning and abstraction."

There are tens of thousands of scientists and researchers studying the brain at every level, and we are making tiny dents in understanding it. We have no idea what the key ingredient is, nor whether it is 1 or many ingredients that will take us to the next level. Look at deep learning: we've had the techniques for it since the '70s, yet it is only now that we can start to exploit it. Some people think the next thing is the connectome, time, forgetting neurons, oscillations, number counting, embodied cognition, emotions, etc. No one really knows, and it is very hard to test; the only "smart beings" we know of are ourselves, and we can't really do experiments on humans for legal and ethical reasons. Computer scientists like many of us here like to theorize on how AI could work, but very little of it is tested out. I wish we had a faster way to test out more competing theories and models.

Houshalter 11 hours ago 1 reply      
This article is a bit misleading. I believe NNs are a lot like the human brain. But just the lowest level of our brain. What psychologists might call "procedural knowledge".

Example: learning to ride a bike. You have no idea how you do it. You can't explain it in words. It requires tons of trial and error. You can give a bike to a physicist that has a perfect deep understanding of the laws of physics. And they won't be any better at riding than a kid.

And after you learn to ride, change the bike. Take one where the handle is inverted, so turning it right turns the wheel left. No matter how good you are at riding a normal bike, no matter how easy it seems it should be, it's very hard. It requires relearning how to ride basically from scratch. And when you are done, you will even have trouble going back to a normal bike. This sounds familiar to the problems of deep reinforcement learning, right?

If you use only the parts of the brain you use to ride a bike, would you be able to do any of the tasks described in the article? E.g. learn to guide spacecraft trajectories with little training, through purely analog controls and muscle memory? Can you even sort a list in your head without the use of pencil and paper?

Similarly, recognizing a toothbrush as a baseball bat isn't as bizarre as you think. Most NNs get one pass over an image. Imagine you were flashed that image for just a millisecond and given no time to process it. No time to even scan it with your eyes! Are you certain you wouldn't make any mistakes?

But we can augment NNs with attention, with feedback to lower layers from higher layers, and other tricks that might make them more like human vision. It's just very expensive.

And that's another limitation. Our largest networks are incredibly tiny compared to the human brain. It's amazing they can do anything at all. It's unrealistic to expect them to be flawless.

siliconc0w 19 hours ago 0 replies      
A neat technique to help 'explain' models is LIME: https://www.oreilly.com/learning/introduction-to-local-inter...

There is a video here https://www.youtube.com/watch?v=hUnRCxnydCc

I think this has some better examples than the Panda vs Gibbon example in the OP if you want to 'see' why a model may classify a tree-frog as a tree-frog vs a billiard (for example). IMO this suggests some level of anthropomorphizing is useful for understanding and building models, as the pixels the model picks up aren't really too dissimilar to what I imagine a naive, simple mind might use (i.e. the tree-frog's goofy face). We like to look at faces for lots of reasons, but one of them probably is that they're usually more distinct, which is the same rough reason why the model likes the face. This is interesting (to me at least) even if it's just matrix multiplication (or uncrumpling high-dimensional manifolds) underneath the hood.
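For the curious, the core of LIME fits in a few lines: perturb the input locally, query the black box, and fit a proximity-weighted linear surrogate whose coefficients are the explanation. This is a pure-NumPy sketch of the idea, not the `lime` library's actual API; the toy model and kernel width are invented.

```python
import numpy as np

# Black-box model in which only features 0 and 2 actually matter.
def black_box(X):
    return 3.0 * X[:, 0] - 2.0 * X[:, 2]

rng = np.random.default_rng(42)
x0 = np.ones(5)                                  # the instance to explain
Z = x0 + rng.normal(scale=0.1, size=(500, 5))    # local perturbations of x0
y = black_box(Z)

# Proximity weights: samples near x0 count more (RBF kernel).
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.5)

# Weighted least squares; the surrogate's coefficients are the explanation.
sw = np.sqrt(weights)
A = np.hstack([Z, np.ones((len(Z), 1))])         # add an intercept column
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

print(np.round(coef[:5], 2))  # recovers [ 3.  0. -2.  0.  0.]
```

The real library does the same thing with superpixel masking for images, which is how it highlights regions like the tree-frog's face.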

CountSessine 20 hours ago 2 replies      
Surely we shouldn't rush to anthropomorphize neural networks, but we'd be ignoring the obvious if we didn't at least note that neural networks do seem to share some structural similarities with our own brains, at least at a very low level, and that they seem to do well with a lot of pattern-recognition problems that we've traditionally considered to be co-incident with brains rather than logical systems.

The article notes, "Machine learning models have no access to such experiences and thus cannot "understand" their inputs in any human-relatable way". But this ignores a lot of the subtlety in psychological models of human consciousness. In particular, I'm thinking of Dual Process Theory as typified by Kahneman's "System 1" and "System 2". System 1 is described as a tireless but largely unconscious and heavily biased pattern recognizer: subject to strange fallacies and working on heuristics and cribs, it reacts to its environment when it believes that it recognizes stimuli, and notifies the more conscious "System 2" when it doesn't.

At the very least it seems like neural networks have a lot in common with Kahneman's "System 1".

cm2187 20 hours ago 2 replies      
I think the requirement for a large amount of data is the biggest objection to the reflex "AI will replace [insert your profession here] soon" that many techies, in particular on HN, have.

There are many professions where there is very little data available to learn from. In some cases (self-driving), companies will invest large amounts of money to build this data, by running lots of test self-driving cars or paying people to create the data, and it is viable given the size of the market behind it. But the typical high-value intellectual profession is often a niche market with a handful of specialists in the world. Think of a trader of financial institutions' bonds, a lawyer specialized in cross-border mining acquisitions, a physician specializing in a rare disease, or a salesperson for aviation parts. What data are you going to train your algorithm with?

The second objection, probably equally important, also applies to "software will replace [insert your boring repetitive mindless profession here]", even after 30 years of broad adoption of computers. If you decide to automate some repetitive mundane tasks, you can save the salary of the guys who did those tasks, but now you need to pay the salary of a full team of AI specialists / software developers. For many tasks (CAD, accounting, mailings, etc.), the market is big enough to justify a software company making this investment. But there is a huge number of professions where you are never going to break even, and where humans are still paid to do stupid tasks that software could easily do today (even in VBA), and will keep doing so until the cost of developing and maintaining software or AI has dropped to zero.

I don't see that happening in my lifetime. In fact, I am not even sure we are training that many more computer science specialists than 10 years ago. Again, it didn't happen with software for very basic things; why would it happen with AI for more complicated things?

meh2frdf 17 hours ago 2 replies      
Correct me if I'm wrong but I don't see that with 'deep learning' we have answered/solved any of the philosophical problems of AI that existed 25 years ago (stopped paying attention about then).

Yes we have engineered better NN implementations and have more compute power, and thus can solve a broader set of engineering problems with this tool, but is that it?

kowdermeister 18 hours ago 3 replies      
> In short, deep learning models do not have any understanding of their input, at least not in any human sense. Our own understanding of images, sounds, and language, is grounded in our sensorimotor experience as humansas embodied earthly creatures.

Well, maybe we should train systems with all our sensory inputs first, like newborns learn about the world. Then make these models available open source, like we release operating systems, so others can build on top of them.

For example we have ImageNet, but we don't have WalkNet, TasteNet, TouchNet, SmellNet, HearNet... or other extremely detailed sensory data recorded for an extended time. And these should be connected to match the experiences. At least I have no idea they are out there :)

debbiedowner 21 hours ago 0 replies      
People doing empirical experiments cannot claim to know the limits of their experimental apparatus.

While the design process of deep networks remains founded in trial and error, and there are no convergence theorems and approximation guarantees, no one can be sure what deep learning can do, and what it could never do.

fnl 12 hours ago 0 replies      
Put a lot simpler: Even DL is still only very complex, statistical pattern matching.

While pattern matching can be applied to model the process of cognition, DL cannot really model abstractive intelligence on its own (unless we phrase it as a pattern learning problem, viz. transfer learning, on a very specific abstraction task), and much less can it model consciousness.

pc2g4d 21 hours ago 1 reply      
Programmers contemplating the automation of programming:

"To lift some of these limitations and start competing with human brains, we need to move away from straightforward input-to-output mappings, and on to reasoning and abstraction. A likely appropriate substrate for abstract modeling of various situations and concepts is that of computer programs. We have said before (Note: in Deep Learning with Python) that machine learning models could be defined as "learnable programs"; currently we can only learn programs that belong to a very narrow and specific subset of all possible programs. But what if we could learn any program, in a modular and reusable way? Let's see in the next post what the road ahead may look like."

ilaksh 15 hours ago 0 replies      
Actually there are quite a few researchers working on applying newer NN research to systems that incorporate sensorimotor input, experience, etc. and more generally, some of them are combining an AGI approach with those new NN techniques. And there has been research coming out with different types of NNs and ways to address problems like overfitting or slow learning/requiring huge datasets, etc. When he says something about abstraction and reasoning, yes that is important but it seems like something NNish may be a necessary part of that because the logical/symbolic approaches to things like reasoning have previously mainly been proven inadequate for real-world complexity and generally the expectations we have for these systems.

Search for things like "Towards Deep Developmental Learning" or "Overcoming catastrophic forgetting in neural networks" or "Feynman Universal Dynamical" or "Wang Emotional NARS". No one seems to have put together everything or totally solved all of the problems but there are lots of exciting developments in the direction of animal/human-like intelligence, with advanced NNs seeming to be an important part (although not necessarily in their most common form, or the only possible approach).

eanzenberg 21 hours ago 2 replies      
This point is very well made: 'local generalization vs. extreme generalization.' Advanced NN's today can locally generalize quite well and there's a lot of research spent to inch their generalization further out. This will probably be done by increasing NN size or increasing the NN building-blocks complexity.
lordnacho 21 hours ago 3 replies      
I'm excited to hear about how we bring about abstraction.

I was wondering how a NN would go about discovering F = ma and the laws of motion. As far as I can tell, it has a lot of similarities to how humans would do it. You'd roll balls down slopes like in high school and get a lot of data. And from that you'd find there's a straight line model in there if you do some simple transformations.

But how would you come to hypothesise about what factors matter, and what factors don't? And what about new models of behaviour that weren't in your original set? How would the experimental setup come about in the first place? It doesn't seem likely that people reason simply by jumbling up some models (it's a line / it's inverse distance squared / only mass matters / it matters what color it is / etc), but that may just be education getting in my way.

A machine could of course test these hypotheses, but they'd have to be generated from somewhere, and I suspect there's at least a hint of something aesthetic about it. For instance you have some friction in your ball/slope experiment. The machine finds the model that contains the friction, so it's right in some sense. But the lesson we were trying to learn was a much simpler behaviour, where deviation was something that could be ignored until further study focussed on it.
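The "easy" half of that process, fitting the model once candidate factors are chosen, is just regression; everything below (data, noise level, feature list) is invented for illustration. The hard half the comment points at, hypothesising which factors to try, is exactly what this code takes as given.

```python
import numpy as np

# Simulated ball experiments: measured acceleration a = F/m plus noise.
rng = np.random.default_rng(1)
F = rng.uniform(1, 10, 200)                       # applied force
m = rng.uniform(0.5, 5, 200)                      # ball mass
a = F / m + rng.normal(scale=0.01, size=200)      # noisy measurements

# Candidate factors someone hypothesised might matter (or not):
X = np.column_stack([F / m, F, m, np.ones_like(F)])
coef, *_ = np.linalg.lstsq(X, a, rcond=None)

# The F/m coefficient comes out ~1 and the spurious candidates ~0,
# "discovering" a = F/m -- but only among the factors we proposed.
print(np.round(coef, 2))
```

A wrong or missing candidate column (say, friction, or color) would never be discovered by this fit, which is the aesthetic/hypothesis-generation gap the comment describes.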

andreyk 20 hours ago 3 replies      
"Here's what you should remember: the only real success of deep learning so far has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data."

This statement has a few problems - there is no real reason to interpret the transforms as geometric (they are fundamentally just processing a bunch of numbers into other numbers, in what sense is this geometric), and the focus on human-annotated data is not quite right (Deep RL and other things such as representation learning have also achieved impressive results in Deep Learning). More importantly, saying " a deep learning model is "just" a chain of simple, continuous geometric transformations " is pretty misleading; things like the Neural Turing Machine have shown that enough composed simple functions can do pretty surprisingly complex stuff. It's good to point out that most of deep learning is just fancy input->output mappings, but I feel like this post somewhat overstates the limitations.

eli_gottlieb 20 hours ago 0 replies      
>But what if we could learn any program, in a modular and reusable way? Let's see in the next post what the road ahead may look like.

I'm really looking forward to this. If it comes out looking like something faster and more usable than Bayesian program induction, RNNs, neural Turing Machines, or Solomonoff Induction, we'll have something really revolutionary on our hands!

19eightyfour 7 hours ago 1 reply      
If this article is correct about limitations, couldn't one simply include a Turing machine model into the process to train algorithms?

Some ideas:

- The vectors are Turing tapes, or

- Each point in a tape is a DNN, or

- The "tape" is actually a "tree": each point in the tape is actually a branch point of a tree with probabilities going each way, and the DNN model can "prune this tree" to refine the set of "spanning trees" / programs.

Or, hehe, maybe I'm leading people off track. I know absolutely nothing about DNN ( except I remember some classes on gradient descent and SVMs from bioinformatics ).

gallerdude 20 hours ago 1 reply      
I'm sorry, but I don't understand why wider & deeper networks won't do the job. If it took "sufficiently large" networks and "sufficiently many" examples, I don't understand why it wouldn't just take another order of magnitude of "sufficiency."

If you look at the example with the blue dots on the bottom, would it not just take many more blue dots to fill in what the neural network doesn't know? I understand that adding more blue dots isn't easy - we'll need a huge amount of training data, and huge amounts of compute to follow; but if increasing the scale is what got these to work in the first place, I don't see why we shouldn't try to scale it up even more.
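The "more blue dots" question can be poked at with a toy experiment: dense samples make interpolation easy, but a flexible model fit on [0, 1] says nothing trustworthy at x = 3, no matter how dense the dots are. A sketch using a polynomial as a stand-in for a flexible learned model:

```python
import numpy as np

# dense "blue dots" covering [0, 1]
x_train = np.linspace(0.0, 1.0, 200)
y_train = np.sin(2 * np.pi * x_train)

# a flexible model fit to the dots (degree-9 polynomial as a stand-in)
coeffs = np.polyfit(x_train, y_train, deg=9)

# interpolation inside the sampled region is accurate...
err_inside = abs(np.polyval(coeffs, 0.5) - np.sin(2 * np.pi * 0.5))

# ...but extrapolation far outside it blows up, regardless of dot density
err_outside = abs(np.polyval(coeffs, 3.0) - np.sin(2 * np.pi * 3.0))
```

More dots in [0, 1] shrink err_inside further but do nothing for err_outside; that is the gap that scaling alone doesn't close.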

cs702 18 hours ago 0 replies      

Here's how I've been explaining this to non-technical people lately:

"We do not have intelligent machines that can reason. They don't exist yet. What we have today is machines that can learn to recognize patterns at higher levels of abstraction. For example, for image recognition, we have machines that can learn to recognize patterns at the level of pixels as well as at the level of textures, shapes, and objects."

If anyone has a better way of explaining deep learning to non-technical people in a few short sentences, I'd love to see it. Post it here!

deepGem 11 hours ago 0 replies      
This is because a deep learning model is "just" a chain of simple, continuous geometric transformations mapping one vector space into another.

Per my understanding, each vector space represents the full state of that layer, which is probably why the transformations work for such vector spaces.

A sorting algorithm unfortunately cannot be modeled as a set of vector spaces each representing the full state. For instance, an intermediary state of a quicksort algorithm does not represent the full state. Even if a human were to look at that intermediary step in isolation, they would have no clue as to what that state represents. On the contrary, if you observe the visualized activations of an intermediate layer in VGG, you can understand that the layer represents some elements of an image.
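The point above can be demonstrated directly: snapshot a quicksort after each partition step, and the intermediate arrays are hard to interpret in isolation. A small sketch that records intermediate states:

```python
def quicksort(a, lo=0, hi=None, snapshots=None):
    # Lomuto-partition quicksort that records the array after each partition
    if hi is None:
        hi = len(a) - 1
    if snapshots is None:
        snapshots = []
    if lo < hi:
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        snapshots.append(list(a))  # an intermediate "state" of the algorithm
        quicksort(a, lo, i - 1, snapshots)
        quicksort(a, i + 1, hi, snapshots)
    return snapshots

data = [5, 3, 8, 1, 9, 2]
states = quicksort(data)
# early snapshots are only partially ordered; without knowing the pivot
# bounds, a viewer can't tell what they represent
```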

danielam 16 hours ago 1 reply      
"This ability [...] to perform abstraction and reasoning, is arguably the defining characteristic of human cognition."

He's on the right track. Of course, the general thrust goes beyond deep learning. The projection of intelligence onto computers is first and foremost wrong because computers are not able, not even in principle, to engage in abstraction, and claims to the contrary make for notoriously bad, reductionistic philosophy. Ultimately, such claims underestimate what it takes to understand and apprehend reality and overestimate what a desiccated, reductionistic account of mind and the broader world could actually accommodate vis-a-vis the apprehension and intelligibility of the world.

Take your apprehension of the concept "horse". The concept is not a concrete thing in the world. We have concrete instances of things in the world that "embody" the concept, but "horse" is not itself concrete. It is abstract and irreducible. Furthermore, because it is a concept, it has meaning. Computers are devoid of semantics. They are, as Searle has said ad nauseam, purely syntactic machines. Indeed, I'd take that further and say that actual, physical computers (as opposed to abstract, formal constructions like Turing machines) aren't even syntactic machines. They do not even truly compute. They simulate computation.

That being said, computers are a magnificent invention. The ability to simulate computation over formalisms -- which themselves are products of human beings who first formed abstract concepts on which those formalisms are based -- is fantastic. But it is pure science fiction to project intelligence onto them. If deep learning and AI broadly prove anything, it is that in the narrow applications where AI performs spectacularly, it is possible to substitute what amounts to a mechanical process for human intelligence.

latently 19 hours ago 1 reply      
The brain is a dynamic system and (some) neural networks are also dynamic systems, and a three layer neural network can learn to approximate any function. Thus, a neural network can approximate brain function arbitrarily well given time and space. Whether that simulation is conscious is another story.
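The approximation claim can be illustrated cheaply with fixed random features: freeze a random hidden layer and fit only the output weights by least squares. This is a sketch of function approximation only; it says nothing about brains or consciousness:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 400)[:, None]
target = np.sin(x).ravel()  # the function to approximate

# hidden layer: 200 random affine maps followed by tanh (fixed, not trained)
W = rng.standard_normal((1, 200))
b = rng.standard_normal(200)
H = np.tanh(x @ W + b)

# "train" only the linear output layer, by least squares
w_out, *_ = np.linalg.lstsq(H, target, rcond=None)
max_err = np.max(np.abs(H @ w_out - target))  # small: sin is well approximated
```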

The Computational Cognitive Neuroscience Lab has been studying this topic for decades and has an online textbook here:


The "emergent" deep learning simulator is focused on using these kinds of models to model the brain:


denfromufa 21 hours ago 2 replies      
If the deep learning network has enough layers, then can't it start incorporating "abstract" ideas common to any learning task? E.g. could we re-use some layers for image/speech recognition & NLP?
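Structurally, that is exactly what transfer learning does: a shared trunk of layers feeds task-specific heads. A wiring sketch with untrained random weights (all names and sizes here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# shared "trunk" of layers; every task reuses these
trunk = [(rng.standard_normal((16, 8)), rng.standard_normal(16)),
         (rng.standard_normal((16, 16)), rng.standard_normal(16))]

def features(x):
    # the reusable representation
    for W, b in trunk:
        x = np.maximum(W @ x + b, 0.0)
    return x

# per-task output heads reading the same shared features
head_vision = rng.standard_normal((10, 16))   # e.g. 10 image classes
head_speech = rng.standard_normal((30, 16))   # e.g. 30 phoneme classes

z = features(np.ones(8))
vision_logits = head_vision @ z
speech_logits = head_speech @ z
```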
LeicaLatte 10 hours ago 0 replies      
This is why Elon Musk is projecting. We are a long way away from AI.
thanatropism 16 hours ago 0 replies      
This is evergreen:


See also, if you can, the film "Being in the world", which features Dreyfus.

zfrenchee 19 hours ago 2 replies      
My qualm with this article is that it's disappointingly poorly backed up. The author makes claims, but does not justify those claims well enough to convince anyone but people who already agree with him. In that sense, this piece is an opinion piece masquerading as science.

> This is because a deep learning model is "just" a chain of simple, continuous geometric transformations mapping one vector space into another. All it can do is map one data manifold X into another manifold Y, assuming the existence of a learnable continuous transform from X to Y, and the availability of a dense sampling of X:Y to use as training data. So even though a deep learning model can be interpreted as a kind of program, inversely most programs cannot be expressed as deep learning models [why?] -- for most tasks, either there exists no corresponding practically-sized deep neural network that solves the task [why?], or even if there exists one, it may not be learnable, i.e. the corresponding geometric transform may be far too complex [???], or there may not be appropriate data available to learn it [like what?].

> Scaling up current deep learning techniques by stacking more layers and using more training data can only superficially palliate some of these issues [why?]. It will not solve the more fundamental problem that deep learning models are very limited in what they can represent, and that most of the programs that one may wish to learn cannot be expressed as a continuous geometric morphing of a data manifold. [really? why?]

I tend to disagree with these opinions, but I think the author's opinions aren't unreasonable; I just wish he would explain them rather than re-iterating them.

msoad 21 hours ago 1 reply      
Then people are assuming Deep Learning can be applied to a Self Driving Car System end-to-end! Can you imagine the outcome?!
LeanderK 17 hours ago 0 replies      
The author raises some valid points, but I don't like the style it is written in. He just makes some elaborate claims about the limitations of Deep Learning, but doesn't convey why they are limitations. I don't disagree that there are limits to Deep Learning and that many may be impossible to overcome without completely new approaches. I would like to see more emphasis on why things that are theoretically possible, like generating code from descriptions, are absolutely impossible and out of reach today, rather than giving the impression that the task itself is impossible (like the halting problem).
ezioamf 20 hours ago 1 reply      
This is why I don't know if it will be possible (given current limitations) for insect-like brains to fully drive our cars. They may never be good enough.
nimish 21 hours ago 1 reply      
This is basically the Chinese Room argument though?
pron 4 hours ago 0 replies      
> Doing this well is a game-changer for essentially every industry, but it is still a very long way from human-level AI.

We're still a long way from even insect level "intelligence" (if it could even be called that), hence the harm in calling it AI in the first place. The fact that machine learning performs some particular tasks better than humans means little. That was true of computers since their inception. The question of how much closer we are to human-level AI than to the starting point of machine learning and neural networks over 70 years ago is very much an open question. That after 70 years of research into neural networks in particular and to machine learning in general, we are still far from insect-level intelligence makes anyone suggesting a timeline for human-level AI sound foolish (although hypothetically, the leap from insect-level intelligence to human-level could be technically simple, but we really have no idea).

graycat 19 hours ago 2 replies      
On the limitations of machine learning as in the OP, the OP is correct.

So, right, current approaches to "machine learning" as in the OP have some serious "limitations". But this point is a small, tiny special case of something else much larger and more important: current approaches to "machine learning" as in the OP are essentially some applied math, and applied math is commonly much more powerful than machine learning as in the OP and has much less severe limitations.

Really, "machine learning" as in the OP is not learning in any significantly meaningful sense at all. Really, apparently, the whole field of "machine learning" is heavily just hype from the deceptive label "machine learning". That hype is deceptive, apparently deliberately so, and unprofessional.

Broadly, machine learning as in the OP is a case of old empirical curve fitting, where there is a long history with a lot of approaches quite different from what is in the OP. Some of the approaches are, under some circumstances, much more powerful than what is in the OP.

The attention to machine learning is omitting a huge body of highly polished knowledge usually much more powerful. In a cooking analogy, you are being sold a state fair corn dog, which can be good, instead of everything in Escoffier,

Prosper Montagné, Larousse Gastronomique: The Encyclopedia of Food, Wine, and Cookery, ISBN 0-517-503336, Crown Publishers, New York, 1961.

Essentially, for machine learning as in the OP, if (A) you have a LOT of training data, (B) a lot of testing data, (C) by gradient descent or whatever you build a model of some kind that fits the training data, and (D) the model also predicts well on the testing data, then (E) you may have found something of value.

But the test in (D) is about the only assurance of any value. And the value in (D) needs an assumption: applications of the model will, in some suitable sense rarely made clear, be close to the training data.
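That (A)-(E) recipe fits in a few lines; here with ordinary least squares standing in for "a model of some kind", and synthetic data invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# (A) a lot of training data and (B) testing data from the same process
x = rng.uniform(0.0, 1.0, 300)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, 300)
x_tr, y_tr = x[:200], y[:200]
x_te, y_te = x[200:], y[200:]

# (C) build a model that fits the training data (least squares here)
slope, intercept = np.polyfit(x_tr, y_tr, deg=1)

# (D) check that the model also predicts well on held-out data --
# about the only assurance of any value
pred = np.polyval([slope, intercept], x_te)
test_rmse = np.sqrt(np.mean((pred - y_te) ** 2))
# (E) if test_rmse is near the noise level, we may have found something
```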

Such fitting goes back at least to

Leo Breiman, Jerome H. Friedman, Richard A. Olshen, Charles J. Stone, Classification and Regression Trees, ISBN 0-534-98054-6, Wadsworth & Brooks/Cole, Pacific Grove, California, 1984.

and is not nearly new. This work is commonly called CART, and there has long been corresponding software.

And CART goes back to versions of regression analysis that go back maybe 100 years.

So, sure, in regression analysis, we are given points on an X-Y coordinate system and want to fit a straight line so that, as a function of points on the X axis, the line does well approximating the points on the X-Y plot. Being more specific could use some mathematical notation awkward for simple typing and, really, likely not needed here.

Well, to generalize, the X axis can have several dimensions, that is, accommodate several variables. The result is multiple linear regression.

For more, there is a lot with a lot of guarantees. Can find those in short and easy form in

Alexander M. Mood, Franklin A. Graybill, and Duane C. Boas, Introduction to the Theory of Statistics, Third Edition, McGraw-Hill, New York, 1974.

with more detail but still easy form in

N. R. Draper and H. Smith, Applied Regression Analysis, John Wiley and Sons, New York, 1968.

and with much more detail and carefully done in

C. Radhakrishna Rao, Linear Statistical Inference and Its Applications: Second Edition, ISBN 0-471-70823-2, John Wiley and Sons, New York, 1967.

Right, this stuff is not nearly new.

So, with some assumptions, we get lots of guarantees on the accuracy of the fitted model.

This is all old stuff.

The work in machine learning has added some details to the old issue of over fitting, but, really, the math in old regression takes that into consideration -- a case of over fitting will usually show up in larger estimates for errors.

There is also spline fitting, fitting from Fourier analysis, autoregressive integrated moving average processes,

David R. Brillinger, Time Series Analysis: Data Analysis and Theory, Expanded Edition, ISBN 0-8162-1150-7, Holden-Day, San Francisco, 1981.

and much more.

But, let's see some examples of applied math that totally knock the socks off model fitting:

(1) Early in civilization, people noticed the stars and the ones that moved in complicated paths, the planets. Well, Ptolemy built some empirical models based on epi-cycles that seemed to fit the data well and have good predictive value.

But much better work was from Kepler, who discovered that, really, if we assume that the sun stays still and the earth moves around the sun, then the paths of the planets are just ellipses.

Next, Newton invented the second law of motion, the law of gravity, and calculus, and used them to explain the ellipses.

So, what Kepler and Newton did was far ahead of what Ptolemy did.

Or, all Ptolemy did was just some empirical fitting, and Kepler and Newton explained what was really going on and, in particular, came up with much better predictive models.

Empirical fitting lost out badly.

Note that once Kepler assumed that the sun stands still and the earth moves around the sun, he actually didn't need much data to determine the ellipses. And Newton needed nearly no data at all except to check his results.

Or, Kepler and Newton had some good ideas, and Ptolemy had only empirical fitting.

(2) The history of physical science is just awash in models derived from scientific principles that are, then, verified by fits to data.

E.g., some first principles derivations show what the acoustic power spectrum of the 3 K background radiation should be, and the fit to the actual data from WMAP, etc. was astoundingly close.

News flash: commonly some real science or even just real engineering principles totally knock the socks off empirical fitting, for much less data.

(3) E.g., here is a fun example I worked up while in a part time job in grad school: I got some useful predictions for an enormously complicated situation out of a little applied math and nearly no data at all.

I was asked to predict what the survivability of the US SSBN fleet would be under a special scenario of global nuclear war limited to sea.

Well, there was a WWII analysis by B. Koopman that showed that in search, say, of a submarine for a surface ship, an airplane for a submarine, etc., the encounter rates were approximately a Poisson process.

So, for all the forces in that war at sea, for the number of forces surviving, with some simplifying assumptions, we have a continuous time, discrete state space Markov process subordinated to a Poisson process. The details of the Markov process come from a little data about detection radii and the probabilities that, at a detection, one dies, the other dies, both die, or neither dies.

That's all there was to the setup of the problem, the model.

Then, to evaluate the model, just use Monte Carlo to run off, say, 500 sample paths, average those, appeal to the strong law of large numbers, and presto, bingo, done. Also can easily put up some confidence intervals.

The customers were happy.

Try to do that analysis with big data and machine learning and you will be in deep, bubbling, smelly, reeking, flaming, black and orange, toxic sticky stuff.

So, a little applied math, some first principles of physical science, or some solid engineering data commonly totally knocks the socks off machine learning as in the OP.
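For flavor only, here is a toy Monte Carlo version of a Markov process subordinated to a Poisson process in the spirit of that model; every number (encounter rate, force sizes, outcome probabilities) is invented for illustration and has nothing to do with the original analysis:

```python
import random

random.seed(0)

def sample_path(blue=10, red=10, rate=0.05, horizon=30.0):
    # encounters arrive as a Poisson process whose rate scales with the
    # number of possible blue/red pairs; each encounter removes one unit
    t = 0.0
    while blue > 0 and red > 0:
        t += random.expovariate(rate * blue * red)  # time to next encounter
        if t > horizon:
            break
        if random.random() < 0.5:
            blue -= 1
        else:
            red -= 1
    return blue

# run off many sample paths, average, and appeal to the law of large numbers
paths = [sample_path() for _ in range(500)]
mean_blue_survivors = sum(paths) / len(paths)
```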

x40r15x 9 hours ago 1 reply      
I am sorry but GMO is actually bad for you.... Monsanto tried to spread gmo corn in France, they tested it on rats for a year and the rats developed multiple tumors the size of an egg.
erikb 18 hours ago 1 reply      
I don't get it. If reasoning is not an option, how does deep learning beat the board game Go?
known 10 hours ago 0 replies      
DL/ML == Wisdom of Crowds
reader5000 21 hours ago 1 reply      
Recurrent models do not simply map from one vector space to another and could very much be interpreted as reasoning about their environment. Of course they are significantly more difficult to train and backprop through time seems a bit of a hack.
beachbum8029 20 hours ago 1 reply      
Pretty interesting that he says reasoning and long term planning are impossible tasks for a neural net, when those tasks are done by billions of neural nets every day. :^)
sarah5 9 hours ago 0 replies      
nice article
deepnotderp 20 hours ago 1 reply      
I'd like to offer a somewhat contrasting viewpoint (although this might not sit well with people): deep nets aren't AGI, but they're pretty damn good. There's mounting evidence that they learn similarly to how we do, at least in vision; https://arxiv.org/abs/1706.08606 and https://www.nature.com/articles/srep27755

There's quite a few others but these were the most readily available papers.

Are deep nets AGI? No, but they're a lot better than Mr. Chollet gives them credit for.

AndrewKemendo 20 hours ago 0 replies      
the only real success of deep learning so far has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data.

Yes, but that's what humans do too, only much, much better from the generalized perspective.

I think that fundamentally this IS the paradigm for AGI, but we are in the pre-infant days of optimization across the board (data, efficiency, tagging etc...).

So I wholeheartedly agree with the post, that we shouldn't cheer yet, but we should also recognize that we are on the right track.

I say all this because prior to getting into DL and more specifically Reinforcement Learning (which is WAY under studied IMO), I was working with Bayesian Expert Systems as a path to AI/AGI. RL totally transformed how I saw the problem and in my mind offers a concrete pathway to AGI.

Seeing AI for iOS microsoft.com
824 points by kmather73  3 days ago   125 comments top 29
nharada 3 days ago 6 replies      
Yes, YES, this is what I'm talking about Microsoft. I'm surprised how muted the reaction is from HN here.

On the technical side, this is a perfect example of how AI can be used effectively, and is a (very obvious in hindsight) application of the cutting edge in scene understanding and HCI. There are quite a few recent and advanced techniques rolled into one product here, and although I haven't tried it out yet it seems fairly polished from the video. A whitepaper of how this works from the technical side would be fascinating, because even though I'm familiar with the relevant papers it's a long jump between the papers and this product.

On the social side, I think this is a commendable effort, and a fairly low hanging fruit to demonstrate the positive power of new ML techniques. On a site where every other AI article is full of comments (somewhat rightfully) despairing about the negative social aspects of AI and the associated large scale data collection, we should be excited about a tool that exists to improve lives most of us don't even realize need improving. This is the kind of thing I hope can inspire more developers to take the reins on beneficial AI applications.

mattchamb 3 days ago 0 replies      
Just as an extra piece of information for people, the person presenting the videos in that page is Saqib Shaikh, who is a developer at Microsoft. Earlier on HN, there was a really interesting video of him giving a talk about how he can use the accessibility features in visual studio to help him code. https://www.youtube.com/watch?v=iWXebEeGwn0
dcw303 3 days ago 3 replies      
US App Store only at this stage it seems. Pity, I'd like to try this.

edit: I'm wrong. it's in other stores as well, but not in the Australian app store, which is the one that I tried.

booleandilemma 3 days ago 1 reply      
It correctly identified my refrigerator and bookshelf. Color me impressed.

Things like this make articles like this one seem silly: https://www.madebymany.com/stories/what-if-ai-is-a-failed-dr...

ve55 3 days ago 4 replies      
If I have a friend that is visually impaired and is using this, I have to consent to their phone recording me and analyzing me and sending all of that data off to who knows where.

And this is just from my perspective - someone who is not visually impaired. For the person who is, every single thing they look at and read is going to be recorded and used.

It's an unfortunate situation for people to be put in, and I'm sure everyone will choose using improvements like this over not using them. As much as I would love to see a focus on privacy for projects like this, I don't imagine it happening any time soon, given how powerful the data involved is.

I imagine a future where AI assistants like this are commonplace, and there is no escaping them.

engulfme 3 days ago 1 reply      
If I remember correctly, this came out of a OneWeek project - Microsoft's company-wide weeklong hackathon. Very cool to see a final published version of this!
ian0 3 days ago 3 replies      
Wow. You can imagine a near future where this, a small wearable camera, and an earphone could really make a big difference to a person's daily life.

Screw Siri, that's a real AI assistant :)

scarface74 2 days ago 1 reply      
The text recognition from documents is amazingly primitive. It doesn't use any type of spell checking to make a best guess at what a word is. It's straight text recognition.

On the other hand, the "short text" feature works amazingly well to read text it sees from the camera. It's fast and accurate when reading text even at some non-optimal angles.

How do you get it to try to recognize items that the camera sees?


Oops. I guess it would help if I swiped right....
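A post-OCR "best guess" pass of the kind described above could be as simple as snapping unrecognized tokens to the closest dictionary word by string similarity. A toy sketch (the vocabulary and the OCR output are made up; real systems would use language models and large lexicons):

```python
from difflib import get_close_matches

# tiny made-up vocabulary standing in for a real dictionary
VOCAB = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]

def correct(token):
    # keep known words; otherwise snap to the most similar dictionary word
    if token.lower() in VOCAB:
        return token
    match = get_close_matches(token.lower(), VOCAB, n=1, cutoff=0.6)
    return match[0] if match else token

ocr_output = "the qu1ck brovvn fox"   # typical character-level OCR errors
fixed = " ".join(correct(t) for t in ocr_output.split())
```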

GeekyBear 3 days ago 0 replies      
One of my friends from college has limited vision, and the feature to read text aloud will be a game-changing convenience.

He has a magnifier in his home, but it isn't portable and is limited to working only with documents and images that can lie flat.

edit: After speaking with my friend, he already uses a popular app called KNFB Reader that works very well on short text and documents, but costs $100. On the plus side, it works on Android or iOS.

EA 3 days ago 0 replies      
I am recommending this for my elderly family members with poor eyesight. This could greatly increase their quality of life.
Finbarr 3 days ago 1 reply      
I'm pretty blown away by this. I took a picture of myself in the mirror with the scene description feature, and it said "probably a man standing in front of a mirror posing for the camera". I took a picture of the room in front of me and it said "probably a living room". Think I'll be experimenting with this for days.
zyanyatech 3 days ago 0 replies      
The use of AI and ML for application purposes is starting to get to a point where it can really be used for problem solving. We made a demo app with a similar use of this technology: https://zyanya.tech/hashtagger-android

I am going to give Seeing AI a try as well, and I totally understand why a research department would like to have a demo available to the public as an application.

rb666 3 days ago 0 replies      
Well, it needs some work, but it's pretty cool nonetheless; I can see where they were going with this :)


mechaman 3 days ago 0 replies      
Does it do hot dog or not?
rasengan0 2 days ago 0 replies      
I love the low vision pitch, as there is a dearth of low vision resources, particularly for those hit with age-related macular degeneration. I wonder if there are any censored items in the backend that may limit functionality -- Seeing AI won't be seeing any sex toys...
hackpert 2 days ago 0 replies      
This is amazing. I don't know if a lot of people here realize this, but it is really hard to pull off this level of integration of different computer vision components (believe me, I've tried). Microsoft has really outdone themselves this time.
vinitagr 3 days ago 0 replies      
This is quite an amazing technology. With new products like HoloLens and this, i think Microsoft is finally coming around.
NicoJuicy 2 days ago 0 replies      
Microsoft's app 'Office Lens' is the only app I use for capturing documents on Android. I see part of that tech is also used in this app.

Love it

leipert 2 days ago 0 replies      
Awesome technology; and today's SMBC [1] seems to be related.

[1]: https://www.smbc-comics.com/comic/the-real-me

coworkerblues 3 days ago 0 replies      
Does this mean RIP OrCam (the other startup from the Mobileye creator, which basically does this as a full hardware/software solution)?


ClassyJacket 3 days ago 2 replies      
Not available in the Australian store. Ah, I forgot my entire country doesn't exist.
parish 3 days ago 0 replies      
I'm impressed. Good job MS
armandososa 2 days ago 0 replies      
Remember this? http://i.dailymail.co.uk/i/pix/2013/05/13/article-2323625-19... when it was a really big deal that Microsoft agreed to help Apple and release some software for their platform?
cilea 2 days ago 0 replies      
I think this is also cool for learning English. An English learner who'd like to express what s/he sees can verify it against the AI's response.
MrJagil 3 days ago 0 replies      
The videos explaining it are really nice https://youtu.be/dqE1EWsEyx4
arized 3 days ago 0 replies      
Looks amazing, I really need to dive into machine learning more this year... Waiting impatiently for UK release to give it a try!
jtbayly 3 days ago 0 replies      
Unimpressed. Took a picture of a ceiling fan and it said, "probably a chair sitting in front of a mirror." Took a pic of a dresser, and it said "a bedroom with a wooden floor." Tried the ceiling fan again and got an equally absurd answer.

Deleted app.

chenster 3 days ago 1 reply      
LeoNatan25 3 days ago 3 replies      
Whenever I take a picture with the camera button on the left, it shows a loading indicator and the app crashes. Not a great first impression. Coming from a company the size of Microsoft, such trivial crashes should have been caught.
Maryam Mirzakhani, first woman and Iranian to win Fields Medal, has died thewire.in
610 points by urahara  3 days ago   105 comments top 25
mrkgnao 3 days ago 1 reply      
I hope to study (non-inter-universal) Teichmueller theory some day, as a side quest on my own journey (which is vaguely directed in some sense toward number theory at present). It's the best way I can think of to honor her memory: to learn to appreciate the ideas that she devoted her life to understanding better.

Here is a picture of her drawing on one of her vast sheets of paper.


Mz 2 days ago 0 replies      
If anyone needs any background info on her work, there are a number of sources gathered here:


It includes this, a quote that is the best laymen's explanation of her work that I could find:

Mirzakhani became fascinated with hyperbolic surfaces -- doughnut-shaped surfaces with two or more holes that have a non-standard geometry which, roughly speaking, gives each point on the surface a saddle shape. Hyperbolic doughnuts can't be constructed in ordinary space; they exist in an abstract sense, in which distances and angles are measured according to a particular set of equations. An imaginary creature living on a surface governed by such equations would experience each point as a saddle point.

It turns out that each many-holed doughnut can be given a hyperbolic structure in infinitely many ways -- with fat doughnut rings, narrow ones, or any combination of the two. In the century and a half since such hyperbolic surfaces were discovered, they have become some of the central objects in geometry, with connections to many branches of mathematics and even physics.
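To make the quote's "particular set of equations" slightly more concrete (standard textbook facts, not from the article): one model of the hyperbolic plane is the upper half-plane with a rescaled metric, and the Gauss-Bonnet theorem then fixes the total area of a closed hyperbolic surface by its genus alone:

```latex
% Upper half-plane model: constant curvature K = -1
ds^2 \;=\; \frac{dx^2 + dy^2}{y^2}, \qquad y > 0

% Gauss--Bonnet: for a closed hyperbolic surface X of genus g \ge 2,
% the area is determined by topology alone
\operatorname{Area}(X) \;=\; -2\pi\,\chi(X) \;=\; 2\pi\,(2g - 2)
```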

MichaelBurge 3 days ago 1 reply      
It looks like her research is here:


And the paper in question that got the Fields Medal might be one of these:



It looks like her work involved counting the number of equivalent paths between two points on surfaces that have been punctured.
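For context on what such counting results look like (a standard statement of one of her celebrated theorems, stated from memory rather than taken from the linked pages): on a closed hyperbolic surface X of genus g, the number s_X(L) of simple closed geodesics of length at most L grows polynomially, in sharp contrast to the exponential growth of the count of all closed geodesics:

```latex
s_X(L) \;\sim\; c_X \, L^{6g-6} \qquad (L \to \infty)
```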

jacobkg 3 days ago 0 replies      
I remember how excited I was when she received the award as a role model for women and immigrants, and a reminder of how great minds can come from places the US considers enemies. My wife (33) spent most of last year being treated for Breast Cancer (now in remission). This news makes me profoundly sad.
oefrha 2 days ago 2 replies      
I took her hyperbolic geometry class last year (my senior year at Stanford) and she looked perfectly healthy. She even gave me some advice on my grad school offers. What a shock...
snake117 3 days ago 1 reply      
This actually shook me awake when I read it. It was just a few months ago that my aunts were sharing articles (through Telegram) with us, from Iran, about Maryam and her accomplishments. She was brilliant and really did serve as an inspiration to many Iranians all over. May she rest in peace.
akalin 2 days ago 0 replies      
Terry Tao wrote a eulogy for her on his blog: https://terrytao.wordpress.com/2017/07/15/maryam-mirzakhani/
lr4444lr 3 days ago 0 replies      
Terrible to lose such a notable person at age 40 due to breast cancer, diagnosed when she was in her mid-30s no less.
hi41 18 hours ago 0 replies      
Deeply saddened to hear this news. I have immense respect for people like Maryam and Vera Rubin. They could have used their incredible genius to earn riches. Instead they devoted their lives to furthering math and science. When it was declared that Maryam won the Fields Medal, I read the summary. I could barely understand even one word of the summary, let alone the actual work. People like Maryam amaze me. I am just an ordinary mortal working in a tech company. If God had asked me, I would have told him to take my life and let Maryam live so we could have progress in math. Life is so unfair. I can't imagine how much math's progress has suffered with her passing. Deepest condolences to her family. RIP.
mncharity 2 days ago 0 replies      
The 2014 article https://www.quantamagazine.org/maryam-mirzakhani-is-first-wo... , with the IMU video https://www.youtube.com/watch?v=qNuh4uta8oQ .

2008 interview http://www.claymath.org/library/annual_report/ar2008/08Inter... .

"I would like to thank all those who have sent me kind emails. I very much appreciate them and I am very sorry if I could not reply."

theCricketer 2 days ago 0 replies      
An interview with Maryam from 2008 when she was a Research Fellow at the Clay Mathematical Institute -> this is inspiring: http://www.claymath.org/library/annual_report/ar2008/08Inter...
urahara 3 days ago 0 replies      
She was absolutely brilliant in so many ways, what a loss.
throwaway5752 2 days ago 0 replies      
What a tragic loss for the mathematics community and her husband and young daughter. Deepest condolences to her family, friends, colleagues, and students.
bhnmmhmd 3 days ago 0 replies      
This was terrible news today. It made me genuinely sad. I hope she rests in peace while all of humanity honors her.
adyavanapalli 3 days ago 0 replies      
Oh no :/ Such a terrible loss...
rurban 2 days ago 1 reply      
I cried a bit. Life is unfair
tomrod 3 days ago 0 replies      
What a sad day. Too soon.
afshinmeh 2 days ago 0 replies      
RIP, very sorry to hear this news.
Jabanga 2 days ago 0 replies      
How tragic.
Lotus123 2 days ago 0 replies      
Sad news
dbranes 3 days ago 2 replies      
It's immensely disappointing that this comment section has an abundance of comments on Iranian socio-political issues, while being completely devoid of any discussion of dynamics on moduli spaces.
brian_herman 3 days ago 2 replies      
Can we get a black bar for this?
thr31238893 3 days ago 0 replies      
This is such sad news. RIP.
0xFFC 3 days ago 6 replies      
Iranian here. One point I really want to mention is the following: despite our theocratic regime, Iran is not Saudi Arabia or anything like that. Girls and boys get almost exactly the same education, and they have great opportunities to become successful scholars in any branch of science they want (yes, as in the USA, your parents' demographic will largely decide which university you end up in. But this is an issue in most parts of the world, and we cannot blame Iran, the USA, or any specific country for it; we should blame the system).

She got her bachelor's degree from one of our top universities (Sharif University) and went to Harvard from there, and there are plenty of science-eager students like her waiting for an opportunity to become the next Maryam Mirzakhani.

Come to Tehran. I know you hear a lot of bad things about Tehran and Iran from right-wing political media outlets. But believe me, you are going to see the Paris of the Middle East and a whole new generation of liberal people who believe in personal freedom and freedom of speech.

It really bothered me when I saw her talking about how people in the USA think of Iran as a desert with all women wearing an abaya. It is not like that; we are a new generation and we are different, and although our regime tries its best to suppress us, we are the future of Iran.

bhnmmhmd 3 days ago 4 replies      
How is it that brilliant, genius people who truly help our world become a better place die young, while people who bring nothing but destruction and war live for decades?
Things I wish someone had told me before I started angel investing rongarret.info
593 points by lisper  1 day ago   201 comments top 35
birken 1 day ago 4 replies      
> But the cool kids don't beg. The cool kids, the ones who really know what they're doing and have the best chances of succeeding, decide who they allow to invest in their companies.

The company I was an early employee of (that ended up being a "unicorn") was not a cool kid, and we certainly were begging people to invest both at the angel stage and (especially) the series A stage. And those people got a really really good return on their money.

This isn't to say there aren't valuable signals perhaps involving "cool kids" status, but there are a lot of diamonds in the rough.

> I figured it would be more fun to be the beggee than the beggor for a change, and I was right about that.

As a much smaller time angel investor myself than the author, I'm still the beggor. You are only the beggee if you are writing 25k+ checks (and more like 50k-100k to really be the beggee). If you are writing 5k or 10k checks, you are going to be begging people to take your money, cool kids or not cool kids. So if you are looking to get into angel investing today without allocating 6-figure amounts to your hobby, I wouldn't advise doing it for ego reasons :)

mindcrime 1 day ago 5 replies      
There is a small cadre of people who actually have what it takes to successfully build an NBT, and experienced investors are pretty good at recognizing them.

I really do question this. The "problem of induction"[1] comes into play when you start talking about pattern matching and learning from "experience". That is, there's no guarantee that the future will look like the past.

Before Zuckerberg was Zuckerberg, I wonder how many people would have said "Hey, I recognize in this kid the innate capacity to be an NBT"? Of course they got funded, but I believe most of it was after they already had demonstrable traction.

On that note, one of the things that makes fund-raising such a drag is that so many angels (at least in this area) want to see "traction" before investing. Even though, typically, you would think that angels are investing at such an early stage that nobody would really have traction yet. Maybe it's just that the angels here on the East Coast are more risk averse.

[1]: https://plato.stanford.edu/entries/induction-problem/

TheBlerch 1 day ago 3 replies      
The author makes good points here. While it's true that YC and other venture investors invest in many companies to increase the chances of large returns on the best of their portfolio companies, there is another significant advantage to YC having a bunch of companies in each batch: the teams that are not doing so well are a source of talent for the teams that are doing well. At some point YC can encourage, and has encouraged, teams it thinks aren't making enough progress to join teams that are. A friend in one batch described his batch as consisting of 1/3 working on great ideas/products that could be big, 1/3 working on mediocre ideas/products, and 1/3 working on bad ideas/products; even those in the bottom two thirds still had good team members who could be sourced as talent for the top 1/3 and for previous YC companies doing well.
jonnathanson 23 hours ago 2 replies      
Every single word in this article burns clear and bright and true. Every word. Every paragraph. Every penny paid for every hard lesson learned.

If you want to get into the angel game in 2017, and you want to do it to make money, then I'd sincerely advise you to go take out $5-10k for a weekend in Vegas, and try to get really good at a game of complete chance, like roulette.

"Good" at roulette, you're thinking? What can that possibly mean?

It means having a large bankroll and knowing your tolerance for burning through it. It means understanding how to pace yourself, so that you're not blowing through your bankroll in the span of a few minutes. It means getting the itch out of your system, if, indeed, this is merely an itch.

Can't afford to fly to Vegas and blow 10 grand in a weekend? Don't get into angel investing. You can't afford it. I say this not as a snobby rich asshole, but rather, as the sort of nouveau-riche asshole who lost quite a bit of money many years back, doing exactly what the author did, and losing money I learned in retrospect I didn't really want to lose.

I still make the occasional investment, but as part of a group. By and large, those investments go to founders we've worked with before, or who come highly regarded. We invest super early, we eat a fuckton of risk, and we expect to lose 99.999% of the time. We're too small-time to play the game any other way at the moment.

Angel investing is about bankroll and access, and if you're wondering whether you've got the right access, you don't. So you're left with bankroll. Have fun, and try to get lucky if you can help it. :)

justinsb 1 day ago 0 replies      
Ron was one of our investors in FathomDB, and that turned out to be a bad financial investment, much to my personal dismay & regret.

However, something that I think the essay modestly overlooks is the non-financial elements. The investors made a huge difference in my life & that of the others that worked for FathomDB. I like to think that we moved the industry forward a little bit in terms of thinking about modular services (vs a monolithic platform-as-a-service) although it turned out that the spoils went mostly to the cloud vendors. Many of the ideas developed live on in open-source today.

Of course, this all serves Ron's point in that it doesn't make for a good investment. But that doesn't mean that no good came of it - and it makes me want to work harder next time so that it is both a good outcome and a good investment.

So: thank you to Ron and all our investors. It is no accident that you are called angels.

seibelj 1 day ago 7 replies      
My 2 cents - As an investor or potential employee when analyzing a startup, pay close attention to how scrappy and capital efficient they are. Do they have excessively nice office space? Are the founders making too much in salary? Does it seem like the executives are working like animals, or do they have the big company mindset where they take it easy? Startups are nothing like established, revenue-generating companies and the mindset should be entirely different.

The #1 thing a startup can do to survive is to be as stingy as possible with their capital.

brianwawok 1 day ago 1 reply      
I was hoping for a fact like

"And this is how I made 42 investments in my first 3 years. All are now bust, and I am out 1.4 million dollars"

Obviously not fun to tell the world how much money you lost, but it would help to add color to the people behind the VCs, whom developers love to see as frenemies (terrible people out to screw you, but man, their money is nice sometimes).

Nelson69 1 day ago 1 reply      
So fundamentally as an angel you're in early. That usually means that you face dilution. I also wouldn't think it would be that unusual for the business to make a pretty dramatic pivot or two and that initial angel investment may have been for something else entirely by the time the company finds its legs. There are basically 3 things you can do in that dilution situation: 1) Do nothing and go from basically owning the business to not. (You still get to watch and be part of the ride) 2) Pony up more money to match the big investors, assuming the terms allow it, or 3) Fight it or any change every step of the way.

A VC once told me that there were "good angels and bad angels". Too many bad ones and he wouldn't invest. A couple of specific bad ones and he wouldn't invest. Beyond that, there are also good angels who will make introductions, spend time coaching, and really help beyond what I'd call a "hobby." It seems like there are good people out there with money and knowledge who really want to help out others in an angelic sort of way, knowing full well they will likely lose their investment.

danieltillett 1 day ago 1 reply      
While the points Ron raises are really good, there is another source of investing error, which is the "generals fighting the last war" effect. As an angel investor you are drawn towards founders and companies that resemble you and your experiences. This is almost certainly going to lead you astray, as conditions will have changed and everyone's experiences are so limited.
polote 1 day ago 5 replies      
Summary: beginners almost never invest well.

And it is the same for stocks: many people think they will make money by investing in a specific company because their logic says it is a good idea.

Investment is a job, and to win you need experience

trevyn 1 day ago 6 replies      
>There are a myriad ways to make a company fail, but only two ways to make one succeed. One of those is to make a product that fills a heretofore unmet market need, and to do it better, faster, and cheaper than the competition. That is incredibly hard to do. (I'll leave figuring out the second one as an exercise.)

Is he implying some sort of unethical behavior as the second way?

rwmj 1 day ago 2 replies      
I wonder how often VCs/angels are conned out of money (and I don't mean by delusional entrepreneurs, but by genuine con-artists). I assume it must happen, and with all the money sloshing around may be common.
PangurBan 1 day ago 0 replies      
Thank you to the author for sharing your experience and insights. Some tips for being a better angel investor and reducing risk:

1) I can't stress this enough - learn the ABC's of private investments. I have seen a ridiculous number of angel investors as well as founders who don't understand the fundamentals of private equity investments and returns - even people who have been working at a startup for several years.
2) Limit yourself to meeting with and investing in startups in an industry in which you've worked, so that you understand their industry, can better evaluate them, and can add value.
3) Limit yourself to meeting with and investing in startups in industries in which you or close colleagues and friends have worked, so that you can consult with them regarding the potential investments.
4) Join an angel group, such as Band of Angels, so that you have a group of fellow angels to learn from and discuss investments with.
5) Meet with successful serial entrepreneurs who make angel investments to ask what they look for.
6) If you have not founded a company or worked at an early stage startup, learn Lean Startup methodologies.
7) Make sure you understand how a startup achieves product-market fit.
8) Put together a list of good people and companies who can help startups you invest in, with everything from operations to tech to growth.

There's much more, but this is a start.
apeace 1 day ago 0 replies      
> There are a myriad ways to make a company fail, but only two ways to make one succeed. One of those is to make a product that fills a heretofore unmet market need, and to do it better, faster, and cheaper than the competition. That is incredibly hard to do. (I'll leave figuring out the second one as an exercise.)

Any guesses on the second way?

The best I could think of was: find dumb investors to pump it full of money and hype it. Then rely on all the hype to get it sold.

I'm hoping for a less cynical answer!

geetfun 1 day ago 0 replies      
Being a good investor takes a certain kind of temperament. Can't really teach this. For most, as Buffett says, stick with the index fund.
caro_douglos 1 day ago 0 replies      
This post reminds me of "The War of Art", where you're encouraged to decide right off the bat whether you're a professional or an amateur.

I'm somewhat biased when it comes to angels because most of the experiences I heard (http://etl.stanford.edu/) were almost always home runs. Sure, there are the down-in-the-gutter claims every investor tries to sob about where they lose money, but let's be honest: wouldn't it be great if someone gave a talk and said how much they lost (and how much they're continuing to lose) by attempting to get rich quick?

So far bootstrapping appears to be the best way of weeding out the shitty angels who haven't been in the game for a minute... it's a pleasure to not deal with someone wanting to give you a check while telling you what their expectations are for YOUR business, not their 10k+ check.

untangle 1 day ago 0 replies      
Cap table economics are an equally-important reason to fear angel investing. Unless the company is very successful, angels tend to get diluted-out of the money by VC rounds. In the baseball vernacular, angel investors must pick triples and homeruns to make money. Singles and outs will result in total loss. Doubles may break even. It's a tough way to get ahead. Impossible without some insider edge (YC, pundits, stars, etc.).
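The cap-table arithmetic behind "doubles may break even" is easy to sketch. A minimal illustration with invented numbers (an angel buying 5% at a $2M post-money valuation, followed by three rounds that each sell 20% of the company; real terms vary widely):

```python
# Hypothetical terms: $100k buys 5% at a $2M post-money valuation.
# Each later round sells 20% of the company, so every prior holder's
# stake is multiplied by 0.8 per round.
invested = 100_000
stake = 0.05
for _ in range(3):  # three subsequent VC rounds
    stake *= 1 - 0.20

print(f"stake after dilution: {stake:.2%}")        # 2.56%
breakeven_exit = invested / stake
print(f"break-even exit: ${breakeven_exit:,.0f}")  # $3,906,250
```

At these made-up terms, the company must exit at nearly twice its original post-money valuation just to return the angel's principal, which is why singles and outs translate into losses.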
dabei 1 day ago 0 replies      
Seems to me acting as a lone angel investor is not very efficient and quite limiting as to the kinds of opportunities that are open to you. It's analogous to the constraints you face as a lone founder of a startup. Maybe it's better to team up and benefit from each other's insight and capital. And I totally agree you have to do this seriously as a job unless you don't care about the money.
david927 1 day ago 2 replies      
> There is a small cadre of people who actually have what it takes to successfully build an NBT, and experienced investors are pretty good at recognizing them. Because of this, they don't have trouble raising money.

That's a pretty specious statement. I don't know how he came up with that; it certainly doesn't match with a lot of reality.

kushankpoddar 1 day ago 0 replies      
I am realising that a weekly exercise on 'inversion' should be a must-do for founders/investors.

Inversion essentially means thinking hard about: "What factors can cause my venture to fail? How do I avoid those factors?"

This sounds simple but it could be a very powerful idea. https://www.farnamstreetblog.com/2013/10/inversion/

Radim 1 day ago 0 replies      
Off-topic, but lisper, congrats on your neat HN karma points!

 It's 222-2222 I gotta answering machine that can talk to you

Theodores 1 day ago 1 reply      
I prefer the term philanthropy to angel investing. As I understand it, philanthropy is using your own hard-earned money for lost causes of your own choosing. This is different from fundraising or giving money to charity. With a modest philanthropy budget you can change lives and support others in achieving their dreams. Everything can be on an individual basis with no formal framework. For instance, what happens if you pay someone's way so they can finish their degree? What is the potential return? Or, more radically, what happens if you find a homeless person a place to live? Do they get a job and return to society? These things can be found out with radical personal philanthropy. I would say there is good value in this if you do want to learn about society and the human condition. I also think that financial and time losses are an investment. This type of work, where you really do invest in individuals, should help anyone angel investing to have the chops to do it well.
stevenj 1 day ago 1 reply      
Interesting read.

I'd love to hear from other angel investors with (perhaps) different experiences and opinions.

kumarvvr 1 day ago 2 replies      
The author has mentioned "random shit that markets do, like completely ignore clearly superior products..."

Can anyone give me such examples? I am curious.

sgroppino 1 day ago 1 reply      
Perhaps the key is to invest in what you know?
sophiamstg 1 day ago 0 replies      
I think I must agree with you on taking it as a full-time job!
max_ 1 day ago 0 replies      
>One of those is to make a product that fills a heretofore unmet market need, and to do it better, faster, and cheaper than the competition. That is incredibly hard to do. (I'll leave figuring out the second one as an exercise.)

Anyone figured this out?

graycat 1 day ago 1 reply      
Good news: I can agree with some of the OP.

Much better news: I do believe that it's fairly obvious that there are good solutions to the most important problem mentioned in the OP.

First a remark on scope: I'm talking about information technology (IT) startups based heavily on Moore's law, the Internet, other related hardware, available infrastructure software, etc., and I'm not talking about bio-medical technology, which I suspect is quite different.

Second, a remark on methodology: When the OP says "almost certainly" and similar statements about probability, sure, (A) in practice he might be quite correct but (B), still, the statement is nearly always just irrelevant.

Why irrelevant? Because what matters is not the probability, say, estimated across all or nearly all the population, or all of business, or all of startups, or even all of IT startups. Instead, what is important, really crucial, really close to sufficient for accurate investment decision making, is the conditional probability given what else we know. When the probability is quite low, still the conditional probability -- of success or failure -- given suitable additional events, can be quite high, thus giving accurate decision making. So, net, what's key is not the probability but what else is known, so that the conditional probability of the event we are trying to evaluate, project success or failure, given what else we know, is quite high.

So, back to the OP. We can start with the statement:

> The absolute minimum to play the game even once is about $5-10k, and if that's all you have then you will almost certainly lose it.

Here for the "almost certainly" to be true needs to depend on what else is known. Sure, if not much more is known, then "almost certainly lose it" is correct. But with enough more known, the first investment can still likely be a big success.

The big, huge point, first investment or 101, is what else is known.

> There is a small cadre of people who actually have what it takes to successfully build an NBT, and experienced investors are pretty good at recognizing them.

I agree with the first but not with the second. From all I can see, there is hardly a single IT investor in the US who knows more than even dip squat about how to evaluate an IT investment. E.g., commonly the investors were history or economics majors and got MBA degrees. Since I've been a prof in an MBA program, I have to conclude that a history or economics major with an MBA has no start at all evaluating IT projects.

Here is a huge point:

We can outline a simple recipe in just three steps for success as an IT startup:

(1) Find a problem where the first good or a much better solution will be enough nearly to guarantee a great business, e.g., the next big thing.

(2) For the first good or much better solution, exploit IT. Also exploit original research in high quality, at least partly original, pure/applied mathematics. Why math? Because the IT solution will be manipulating data; all data manipulations are necessarily mathematically something; for more powerful manipulations for more valuable results, by far the best approach is to proceed mathematically, right, typically with original work based on some advanced pure/applied math prerequisites.

(3) Write the corresponding software, get publicity, go live, get users/customers, get revenue, and grow the revenue to a significant business.

So, right: Step (2) is a bottleneck: The fraction of IT entrepreneurs who can do the math research is tiny. The fraction of startup investors who could do an evaluation of that research, or even competently direct such an evaluation, is so small as to be essentially zero.

So, net, the investors in IT are condemned to miss the power of step (2) and, thus, flounder around in nearly hopeless mud wrestling in a swamp of disasters. And, net, that's much of why angel investors lose money.

So, the main problem in the OP was losing money on IT projects. The main solution, as both an investor and an entrepreneur, is to proceed as in steps (1)-(3).

For IT venture capitalists (VCs), they can't use step (2) either, e.g., can't do such work, can't evaluate such work, and can't even competently direct evaluations of such work, but they have a partial solution: Likely enforced by their LPs, in evaluating projects they concentrate on cases of traction and want it to be significantly high and growing rapidly.

So, with this traction criterion, and some additional judgment and luck, some of the VCs get good return on investment (RoI), but they are condemned to miss out on step (2).

So, what is the power of step (2)? As we will see right away, clearly it's fantastic: Clearly with step (2) we can do world changing projects relatively quickly with relatively low risk.

The easiest examples to see of the power of step (2) are from the US DoD for US national security. Some of the best examples are the Manhattan Project, the SR-71, GPS, the M1A1 tank, and laser guided rockets and bombs, all relatively low risk projects with world changing results. Each of these projects, and many more, was heavily dependent on step (2) and met a military version of steps (1) and (3).

More generally, lots of people and parts of our society are quite good at evaluating work such as in step (2) and proposals for such work, just on paper. We can commonly find such people as professors in our best research universities and editors of leading journals of original research in the more mathematical fields.

I started some risky projects, e.g., an applied math Ph.D. from one of the world's best research universities. From some good history, only about one in 15 entering students successfully completes such a program. The completion rate of applied math Ph.D. programs makes the Navy Seals and the Army Rangers look like fuzzy, bunny play time. With much of my Ph.D. program at risk, I took on a research project. Two weeks later I had a good solution, with some surprising results, quite publishable. Later I did publish in a good journal. I could have used that for my Ph.D. research, but I had another project I'd pursued independently in my first summer -- did the original research then, in six weeks. The rest of that work was routine and my dissertation. While working part time, the Navy wanted an evaluation of the survivability of the US SSBN fleet under a special scenario of global nuclear war limited to sea, all in two weeks. I did the original applied math and computing, passed a severe technical review, and was done in the two weeks. Later I took on a project to improve on some of our work in AI for detection of problems never seen before in server farms and networks. In two days I had the main ideas, and a few weeks later I had prototype software, nice results on both real and simulated data, and a paper that was publishable -- and was published. My work made the AI work look silly; it was. Once in a software house, we were in a competitive bidding situation. I looked at what the engineers wanted and saw some flaws. Mostly on my own, I took out a week, got good on the J. Tukey work in power spectral estimation, wrote some software, and showed the engineers how to measure power spectra and how to generate stochastic process sample paths with that power spectrum. As a result, my company won sole source on the contract. So, before I did these projects, they all were risky, but I completed all of them without difficulty.

Lesson: Under some circumstances, it's possible to complete such risky projects, given the circumstances, with low risk.

But IT VCs can't evaluate the risk before the projects are attacked, or even evaluate the results after the projects are successfully done. So IT VCs fall back on traction.

I confess: It appears that the IT VCs are not missing out on a lot of really successful projects. Well, there aren't many IT startups following steps (1)-(3).

So, for IT success, just borrow from what the US military has done with steps (1)-(3).

The problem and the opportunity is that nearly no IT entrepreneurs and nearly no IT investors are able to work effectively with steps (1)-(3), especially with step (2).

The IT VCs have another problem: They know that for the next big thing -- Microsoft, Apple, Cisco, Google, Facebook -- they are looking for something exceptional. And they know that those examples have very little significant in common. Still the IT VCs look for patterns for hot topics of the present or recent past. That's no way to find the desired exceptional projects. E.g., when the US DoD wanted the Manhattan Project, they didn't go to the best bomb designers of the previous 20 years; doing so would not have resulted in the two atomic bombs that ended WWII. Instead, the US DoD listened to Einstein, Szilard, Wigner, Fermi, Teller, etc., none of whom had any experience in bomb design.

d--b 1 day ago 0 replies      
I love the difference of tone between this article and the usual Silicon Valley pieces.

For me, the most important thing that Ron conveys is that being an entrepreneur is an incredibly foolish thing to do. Silicon Valley created myths of passionate geeks who worked in their mom's garages and went on to make billions. Who doesn't want that?

But the reality of Silicon Valley today is that because of these myths, most people work their twenties away for a chance to buy a lottery ticket...

logicallee 1 day ago 1 reply      
The investor names only "one way" to succeed (though alluding to a second one that this investor does not name):

>To make a product that fills a heretofore unmet market need, and to do it better, faster, and cheaper than the competition.

This is an insane sentence. Let's make it only slightly more insane to throw it into starker relief:

>To make a product that fills a heretofore unmet market need that nobody has expressed or even thought about until the company announces it, and to do it absolutely perfectly, instantly without any development time, and make it free for the consumer, while getting money from a sustainable high-margin source and having a proprietary moat that makes it impossible for any other market players to enter even a similar market. Also I'll add that the company must have such strong network effect that the utility of any competitor's product is negative (people would regret getting it even for free) unless the competitor is able to get at least 98% market share.

That's pretty insane, and if you re-read what I quoted you will see it's the same kind of insanity.

Why do people even write stuff like this.



Downvoters don't understand my objection. I'm not going to edit this comment. If you don't get it, you don't get it. This investor literally named "good, fast, cheap" (except as: better, faster, cheaper) as three of four requirements that must be met. (The fourth named requirement being "heretofore unmet".) You cannot get more insane than this except in magical la-la land where there are no trade-offs of any kind. It's absurd.

banku_brougham 1 day ago 1 reply      
TLDR; you will lose money because you don't know anything.
CalChris 1 day ago 0 replies      
I remember Ron although I was unsure about the name. Fair winds and following seas.

Unless it's in your background and in your DNA, it seems that angel investing will end in tears.

lowercase_ 1 day ago 0 replies      
Interesting perspective, but he assumes that everyone's experience will be his own. First of all, he was in LA. I don't know of many successful angels down there.
flylib 1 day ago 0 replies      
"If you want to make money angel investing, you really have to treat it as a full time job, not because it makes you more likely to pick the winners, but because it makes it more likely that the winners will pick you."

plenty of good entrepreneurs have great angel investing track records doing it part time (Elad Gil, David Sacks, Aaron Levie)

RandyRanderson 1 day ago 0 replies      
It's impossible, with any certainty, for an investor to prove that his or her judgement is better than random chance. This is high school stats.

What I take from this is that this person doesn't have a grasp of high school math or is not being honest.
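The defensible core of the claim is sample size: with the small number of bets a typical angel makes, even a hit rate well above the base rate is statistically indistinguishable from luck. A quick sketch (the 10% base rate and the portfolio size are illustrative assumptions, not figures from the thread):

```python
from math import comb

def p_at_least(n, k, p=0.1):
    """P(X >= k) for X ~ Binomial(n, p): the chance that a no-skill
    investor gets k or more hits in n investments by luck alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 4 hits in 20 investments is double an assumed 10% base rate, yet the
# record is still consistent with pure chance (p-value well above 0.05):
print(round(p_at_least(20, 4), 3))  # -> 0.133
```

At realistic angel portfolio sizes the signal is buried in noise; only across many more investments could skill be demonstrated statistically.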

Also if you listen closely to a lot of investors they'll basically tell you their metric is "can I sell this to a greater fool?". This is why there is so much investment in 'hot' areas when, in reality, those are the areas to stay away from as the unicorn shares are likely already over-priced.

Kindness is Underrated (2014) circleci.com
526 points by hengputhireach  2 days ago   209 comments top 36
BenchRouter 1 day ago 9 replies      
People often conflate "kindness" with "kid gloves" (for lack of a better term). Being kind doesn't have to mean giving "compliment sandwiches" all the time, or avoiding direct feedback. In many contexts, being kind just means being a professional.

See Allen's comment in the linked post, for example. It's direct ("I'm confused"), but polite. It's asking a question of the submitter in a respectful way that's likely to engender a productive conversation as opposed to putting people on the defensive. Allen's leaving the possibility open that his assumptions are wrong (and often our assumptions are).

It quite literally requires less effort - Allen didn't have to expend the extra effort to type out "this is stupid".

I guess I don't see what's so difficult about that particular type of kindness.

zeteo 1 day ago 4 replies      
> Bezos talks about a lesson imparted by his grandfather on one of the cross-country road trips they would take every summer: Jeff, one day you'll understand that it's harder to be kind than clever.

Sure, that's nice rhetoric. And yet the "kind" Bezos has presided over some of the worst working conditions in the developed world [1] while the "blunt" Torvalds has kept together the very scattered Linux team for decades without controlling their income or work conditions. Apparently the more money you have, the more you can get away with a "do as I say, not as I do" standard.

[1] http://www.salon.com/2014/02/23/worse_than_wal_mart_amazons_...

jasode 1 day ago 4 replies      
>, an atmosphere of blunt criticism hurts team cohesiveness and morale; there's time and energy lost to hurt feelings, to damage control, to trust lost between team members - not to mention the fact that people are working in a fundamentally less humane environment. It may seem faster and easier to be direct, but as a strategy it's penny wise and pound foolish.

This is one of those statements that I think we want to be true but we have no evidence that it's true. Many contradictory examples exist in the real world:

You can yell at your team and insult them and be successful. (Famous examples are Steve Jobs and Bill Gates' "that's the stupidest idea I've ever heard!")

You can be soft-spoken and be successful. (Warren Buffet would be an example. He doesn't yell at the people in his Omaha office or his presidents/CEOs at Berkshire subsidiary companies.)

Likewise, you can be blunt & harsh and fail. You can also be diplomatic & nice and fail.

Same in other endeavors. You can yell at the football team and win the Super Bowl (Mike Ditka - Chicago Bears). Or, you can be soft-spoken and win the championship (Tony Dungy - Indy Colts). Likewise, you can do either style and still be the worst team in the league.

Doesn't seem to be much correlation either way.

My conclusion based on life experiences is that companies can have both the blunt and the diplomatic approaches. The blunt communication works well in upper management. (E.g. one VP tells another VP that "it's a stupid idea.") Everybody is a Type A personality and has a thick skin. However, the reality is that many employees (especially lower-level positions) feel demeaned by direct language. (As the endless debates about Linus' style attest.) Therefore, they require indirect language and those VPs have to dynamically adjust the communication to that personality.

Personally, I don't like the style of indirect communication the author uses in examples of Daniel, David, and Allen but I fully understand it's necessary in the real world for certain people.

eksemplar 1 day ago 5 replies      
Being in middle management in a workplace of 7000 it often surprises me how little time people in tech devote to diplomacy.

You can certainly get a point across by being direct, but to make a truly lasting change you need to convince people it's a good idea. I've yet to see this happen without kindness and diplomacy.

So while the IT security officer can certainly get a strict password policy implemented, without also making sure people understand and agree that security is a good idea the end result becomes a lot of written down passwords hiding on postits under keyboards.

scottLobster 1 day ago 2 replies      
Part of working effectively with a group is learning to take blunt, non-personal criticism in stride. In English 110 freshman year we were required to get into groups and review each other's work (essays, papers, assignments for class) for this very purpose. All of the criticism was blunt but non-personal (you have a run-on sentence here, this is phrased weirdly, etc...), and it was obviously the first time receiving such criticism for some of the students. All of our writing improved as a result, though, and because it was non-personal even the most insecure people in the class eventually adapted to it.

I'll submit that personal remarks like "only a fucking idiot would..." and such are bad not because they hurt feelings but because they are worthless and distracting. They make the conversation about a person instead of what people are supposed to be talking about, if only for a fraction of a second, and can disrupt conversation.

If someone is doing something that harms the objective, you tell them what they're doing, why they need to stop and possibly how they can fix/improve things going forward. That's effective blunt criticism, and there's no need for personal insults anywhere in the chain.

marcoperaza 1 day ago 1 reply      
There is a big difference between being NICE and being GOOD.

To paraphrase Charles Murray: "nice" is a moment-to-moment tactic for avoiding conflict, not a guiding principle for living your life. We should default to being nice amicable people, but being good often requires otherwise.

Unfortunately, niceness has been raised to the highest virtue in recent years. This is a mistake with civilizational consequences.

matthewowen 1 day ago 0 replies      
I agree that kindness is important.

I don't think the examples given are examples of kindness.

Concretely, they're insufficiently direct.

If you think someone is doing something that isn't well thought out, and you think you understand the problem well enough to say that they haven't thought it through fully (which is a scenario that arises in workplaces), don't say that you're "confused". It's a variant on false shock. Just say "I don't think this change considers the following scenario:". You can soften that with a disclaimer of "perhaps I'm missing something", but saying "I'm confused" when you think the other person is confused is mildly passive aggressive.

Likewise, if you think someone should do something, don't say "it'd be nice if we could". Make the request directly. You can still add "let me know if there's something I'm not considering that prevents that". It's frustrating otherwise, because it is unclear what is a request or nice-to-have and what is an instruction that approval is contingent upon. In the long term, lacking that clarity becomes annoying, especially for non-native speakers or people from different cultures who expect different levels of directness.

There is a position between aggressive "don't do that, it's stupid" and the indirect formulations in this post, and that's where you should aim. Polite and kind, but still clear and direct.

Honestly, if you just state the problems with the approach clearly and avoid words like "stupid" or "dumb", you're 90% of the way there.

ivanbakel 1 day ago 0 replies      
In a similar vein, one of the articles that has most influenced my interactions has been The Minimally-nice OSS Maintainer [0]. It doesn't produce an instant slipstream where all your collaboration is suddenly super-fluid, but niceness does help reduce those abrasive moments which, in my experience, can slow a community down a lot more than working well speeds it up. It goes hand-in-hand with good community curation - so long as you're trimming out bad actors, you have to be able to acknowledge bad behaviour in yourself.

0. https://news.ycombinator.com/item?id=14051106 https://brson.github.io/2017/04/05/minimally-nice-maintainer

agibsonccc 1 day ago 0 replies      
I struggle with this a ton. One thing I can't really get past: in practice, people often take criticism of their "ideas" as "personal criticism".

As much as I like the ideas this post advocates, I feel like some of this is on a case by case basis.

It should always be a goal to keep criticism professional, not personal.

One other thing that should be kept in mind here is culture.

I live in Japan, where you really can't even say "no", let alone "wrong". There are extremes: Linus on one end, and many Asian cultures on the other.

Like any advice like this, try to look at the intent and the points that work for your situation not "Silicon valley startup only".

qdev 1 day ago 0 replies      
The article ends by discussing trust, and perhaps that is more fundamentally important than kindness -- kindness is one vehicle that allows trust to evolve, but probably not the only one.

An environment of trust (and safety) allows open technical discussions and lets you come to decisions in a way that helps everyone learn and evolve without "losing face" and without breeding an undercurrent of anger and resentment. Knowing that each person is willing to listen to the other respectfully and that each person is prepared to say they are wrong, can improve the discussion rather than making it more wishy-washy.

You need to have this if you're going to be working day after day, maybe for years with the same people. Lose trust and the feeling that it is safe to make potentially "stupid" statements, and people will just blindly follow the loudest most belligerent person because it's not worth the emotional cost of trying to engage in "debate".

So maybe "Trust is Underrated" would be a better title for the original article.

sillysaurus3 1 day ago 4 replies      
It takes a lot more work to get your point across while being kind. Sometimes I'm not sure it's worth it. Especially when it seems like no manager qualifies as "kind." So if you want to advance, what do you do?

It's still annoying that becoming a manager is correlated with advancement, but that's life.

siliconc0w 1 day ago 0 replies      
I get it's possible to qualify statements, de-personalize, and obfuscate blame but I'm not convinced this is the ideal environment. It's diplomatic, but it's slower and less clear. It can work but I've also seen it fail where someone takes a comment as a suggestion when it wasn't. It's basically 'level 0' or the default mode of communication.

A good workplace culture is, essentially, leveling up from this. It's agreeing that, while diplomatic language is more comfortable and is how we might communicate outside work, we will suspend it to better achieve our shared goals. If someone challenges your idea, you need to dispassionately and genuinely consider their objections and either defend your idea or acquiesce to the better idea. Some people just can't do this. Ideas are personal things and arguing about them feels uncomfortable and they don't like to feel uncomfortable. And, maybe getting a little carried away, but I think there is a general societal issue where we think if you're uncomfortable something must be wrong. Good decisions are born out of argument, not trust. Saying "I'm confused" or "Help me understand" when you already understand and just disagree is level 0 language. It kinda works but it's slow and inefficient, and as engineers - this isn't good enough.

jeffdavis 1 day ago 0 replies      
This article makes it sound like kindness is just expending extra time for the same message, and it's magically "nice".

That explanation of kindness doesn't make sense. Some people try to be nice and, by mistake, end up being rude. And business people make deals quickly all of the time, using jargon and cutting out pleasantries while still being kind.

No, kindness is a skill of words and actions that must be developed over time. It's about navigating complex ideas and decisions effectively.

For instance, "no" is generally rude, not because it's too short, but because it doesn't provide good feedback on a complex idea. What is the proposer trying to accomplish? What existing alternatives exist, or what others might be explored?

If you don't have the time to give good reasons, then point them toward others that you trust to give good advice. E.g: "This proposal is unacceptable. Discuss with group XYZ and explore alternatives." Or even: "This proposal is unacceptable -- the proposed use case is not important enough to justify what you are trying to do."

strictfp 1 day ago 1 reply      
I think Linus is extreme, but I can totally understand that he got fed up with being nice and getting ignored. I don't agree with his conclusion that people don't get him reprimanding them, though. I think they mostly get it, but think they can get away with ignoring him. And that is an attitude problem we have in our industry. A lot of people seem to think that they are the shit and are really bad listeners.
depsypher 1 day ago 1 reply      
I think we do need to have empathy in our dealings with people online, and in general it's in our own best interests to do so. Many open source projects' lifeblood is their communities, and other things being equal, you'll get more contributions if you're not a complete jerk.

The flip-side is that high quality maintainable code is the product of top-notch commits, and rejecting commits is sometimes necessary to keep the standard of quality high. A good maintainer shouldn't cave to pressure of accepting a flawed commit just to avoid hurting someone's feelings.

This article in fact had what looks like a prime example of that. The comment mentioning a PR might "break a limit" but "we'll cross that bridge when we get to it" was touted as an example of how to give guidance. I'd argue that code quality slipped right there as a direct result of social pressure to accept a subpar commit.

It's not easy by any measure, but I think it pays to be not only clever and kind, but also consistent and firm when it comes to reviewing people's work.

jancsika 1 day ago 1 reply      
Linus' story is that early on in the history of Linux he was not direct enough in his criticism of a kernel dev's code to make it clear he wouldn't accept it into the kernel. So the kernel dev kept working on the code in the hopes of it being accepted, and then when Linus finally made it clear it wouldn't be accepted the dev became-- according to reports Linus heard-- suicidal.

Consequently Linus says he decided to go in the direction of communicating in the manner that he is now known for. (Which makes me wonder-- if he had a personal encounter early on with his sarcasm causing the same bad outcome, would he have decided as confidently to go in the other direction?)

Regardless, I think jaromil who maintains Devuan is a great counterexample. He's quite nice and non-sarcastic, approachable to newcomers, and he seems to be able to herd cats just as well.

TheAceOfHearts 1 day ago 0 replies      
I disagree that these three things are the same: "that sucks!", "you're doing it wrong!", "only an idiot would...". Sometimes you really are doing things wrong, and I'd regard being told so as a kindness. The situation where I've seen it most commonly is when someone is learning to speak a language. If you don't correct them, they'll continue making mistakes. When someone corrects me I give serious thought to what they're saying.

In my last job I had lots of hour-long arguments with coworkers on different topics, many of which I ended up conceding. I'm incredibly appreciative of them having taken the effort to help me understand their views, and convince me otherwise.

I think there's a lot of stigma on disagreeing with people. But I don't see why that should be the case. If you have an argument with someone and you both end up leaving with a better understanding of the problem, why is that a bad thing? I've had plenty of discussions where I fundamentally disagreed with someone, only to go and later drink a few beers with them. Just because you disagree with someone doesn't mean you hate or dislike them, and there's no reason to take it personally. It's fine for someone to hold different views than your own.

An example of this are hate-speech laws, which I'm thankful that the US doesn't have. Personally, I consider them horrible mistakes, but I respect that others disagree. FWIW, the reason I disagree with hate-speech laws is that I think you should be able to openly speak your mind on any topic, because it means you can have a discussion and learn from it. If you can't have an open discussion about some topic, you might never be presented with the opportunity to rise above whatever might've led you to some terrible belief.

I've certainly said a lot of stupid things online, and every time I've been called out on them I think I've grown and learned a bit. I have no doubt I'll continue saying stupid stuff, because in many cases I won't know any better, and I fully hope that others will call me out on it.

amirouche 1 day ago 0 replies      
overgard 1 day ago 0 replies      
I think directness can be a form of kindness though. For an intelligent professional, being treated with kid gloves and not receiving direct feedback is often detrimental to everyone involved, and the resentment that can form from leaving a situation lingering can be vastly more damaging than having an argument might have been.

Also, while I've been critical of Linus' approach in the past, I think given that his standards are well known and consistent it's probably not that hurtful if he rips you to shreds over a patch, because it's well known that that's just what he's like.

crispinb 1 day ago 0 replies      
We live in societies designed to systematically select for greed and dog-eat-dog individualism, to which kindness is antithetical. Given this, for kindness to survive beyond the private/family sphere requires heroism. Heroism is lovely, but is by definition too much to expect on average. To promote greed as the primary organising principle of mass societies was a reckless experiment. It failed, to which our world's collapsing ecosystems are primary witnesses.
ppod 1 day ago 2 replies      
I think that kindness is a gift just like cleverness. You can work to become more educated, work to be more rational, more evidence-minded in your judgements, but you will still be behind someone who works the same amount but has a natural ability. The same is true of kindness. Of course, we should all work to be kind, but it comes easier to some than to others. I know some people who, in a very natural way, are pretty much incapable of being unkind.
bitL 1 day ago 2 replies      
How does author solve the problem of being kind, other people mistaking it for weakness and taking advantage of it?
kevmo 1 day ago 0 replies      
Aggressive kindness has opened so many doors and smoothed so many paths for me. It's painless and pays enormous dividends while making you feel great about yourself.

I also get tons of free shit by just being nice to service workers.

Aron 1 day ago 0 replies      
Basically, most people walk around with inflamed highly sensitive status buttons that get triggered by any indication of relative power balance out of line with officially designated titles e.g. your interlocutor is pretentiously using large words. Kindness is acting like everyone is equal maximally, regardless of the truth of the matter.
maxxxxx 1 day ago 0 replies      
Kindness and sincerity have to go together. I see way too many people going through rituals that are supposed to make them look kind but they are not sincere.
makecheck 1 day ago 0 replies      
It can be very motivating to see someone get mad at you though. All at once, lots of things become clear: (1) this is important to that person, (2) you need to treat this seriously, and (3) this is really uncomfortable, it would be good to avoid future discomforts (i.e. change behavior more permanently, not just this one time).

Kindness actually triggers the exact opposite of the 3 things above: suddenly everything seems like no big deal and nothing ever changes. Just great: now you're setting yourself up for several more unpleasant interactions in the future, instead of just fixing something from the beginning.

There are a lot of other considerations too...

For one, the person yelling is usually not the only unkind person in the interaction, even if that's the most obvious one. It is unkind, for instance, to be a lazy person who goes into situations utterly unprepared, showing no respect; at that point, YOU aren't being nice, so why do you expect niceness in return?

And sometimes niceness gets in the way of well-understood, efficient processes. On a mailing list, say, you're better off making a direct statement that isn't wrapped in two extra paragraphs of polite tone for everyone to read through. And heck, when you're driving, you can create MAJOR traffic problems by being kind instead of just following the rules (ironically bubbling back and impacting 50 people for a mile because you wanted to be kind to one person; just watch some videos).

rickpmg 1 day ago 0 replies      
I think opponents of being kind tend to think:

1- you can't be kind without appearing weak and

2- being blunt and being kind are two different things

hbarka 1 day ago 1 reply      
Can't this be simply distilled as being a gentleman/woman? There was that generation.
throwme_1980 1 day ago 10 replies      
As a developer, kindness is EARNED. You want people to be kind to you regardless of who you are and your mediocre contribution to the code base, unnecessarily refactoring code when you're meant to be working on an important feature? No sir, I don't think it'll be kindness you get from me or any business manager.

If however you want well deserved respect and kindness, show that you excel at your job, that you are able to deliver in a timely fashion and exceed expectations. You can't handle being criticised? You have no business being in business; go open a charity bookshop. One has to understand, developers, like in any other creative industry, can go off on a tangent by themselves if not given direction explicitly; sometimes that means being very much assertive and firm. If that is perceived as being unkind, then tough luck.

EGreg 1 day ago 2 replies      
Experience has taught me there is a serious difference between being nice and being kind.

Often, we are nice because we are afraid of hurting people's feelings. As a result, though, we sometimes end up stringing people along, which ultimately makes them lose more time and energy than if we had breached their comfort zone early and communicated our expectations when they weren't yet super-invested. And after all is said and done, if we string them along, they end up blaming us more as well.

This was a hard life lesson to learn, but sometimes, to be kind, one must risk not being nice.

My advice would be: before communicating a tough expectation, do your homework (research how it's done) and be diplomatic. Different cultures have different linguistic paradigms that help grease the wheels towards agreement. Use them. And at the end, be firm but offer support for the transition. If they want it, they will take it. In any case it's likely you will be respected and won't burn bridges that way.

loeg 1 day ago 1 reply      
minademian 1 day ago 0 replies      
h/t to CircleCI for doing this kind of work in the tech industry.
unclebucknasty 1 day ago 0 replies      
The missing link and unspoken driver behind much meanness (in development and otherwise) is contempt.

Contempt is one of the worst regards a person can hold for another--perhaps even worse than hatred. It's a fundamental lack of respect for another's worth, either within a domain or more generally.

One can muster the will to express kindness for someone they dislike. But, it is virtually humanly impossible to be kind towards those one holds in contempt.

kronos29296 1 day ago 0 replies      
I came here thinking here is another situation or anecdote, this time about kindness and being screwed over because of it or something. Instead it is about workplace professionalism being called kindness, and a recruitment pitch disguised as clickbait. (Clickbait is increasing on HN.) My $.02
Google is releasing 20M bacteria-infected mosquitoes in Fresno techcrunch.com
543 points by chriskanan  3 days ago   209 comments top 45
jimrandomh 3 days ago 9 replies      
This is called the sterile insect technique, and it is a well-established practice for getting rid of mosquito populations that could threaten humans. It is very safe, both to humans (male mosquitoes don't bite) and ecologically (species other than mosquitoes aren't affected at all).

It sounds like Google is working on improvements to the process. This is important work, because mosquitos are a major cause of disease, especially in Africa, and we haven't been able to fully solve the problem with existing technology.

WaxProlix 3 days ago 8 replies      
I recall hearing when I was younger that mosquitoes were an outlier in the natural world. With most species, the balance of any food web would be pretty thoroughly disrupted by a major culling. As I heard it, this isn't the case for mosquitoes - if you could press a button and kill them all tomorrow, most ecosystems would be largely unimpacted.

Am I just making this up/misremembering it?

Edit: found a few sources.

Keyframe 3 days ago 0 replies      
Unlike other questions, I'm interested in the logistics behind this. How do you produce 20m mosquitos and where do you hold them? How do you transport them and how do you release them? How do you 'store' them, and when releasing are most of them harmed - are they 'sprayed' or do you 'open a box and they go by themselves'? How do you decide where to release them? Is it all at once (1m per week) or is there a pattern, is it related to wind... so many questions!!

I want a documentary: "How It's Made: Mosquitocide". I'm willing to make one if someone can provide access to info and logistics.

polskibus 3 days ago 7 replies      
Google, while you're at it, please find a way to eradicate ticks. They are getting more and more irritating and dangerous in Northern Europe!
sjcsjc 3 days ago 1 reply      
"Verily, the life sciences arm of Google's parent company Alphabet, has hatched a plan to release a ..."

My immediate reaction on reading that sentence was to wonder why they'd written it in some kind of Shakespearean English.

My next reaction was to feel stupid.

sillysaurus3 3 days ago 2 replies      
So what's the plan to get rid of them? Verily's male mosquitos were infected with the Wolbachia bacteria, which is harmless to humans, but when they mate with and infect their female counterparts, it makes their eggs unable to produce offspring.

Thank goodness. We can't eliminate mosquitoes fast enough.

Wildlife will probably find other food sources, so bring on the weapons of mosquito destruction.

teddyg1 3 days ago 2 replies      
Can someone with knowledge of this particular experiment explain how they've overcome the regulations that have stopped Oxitec / Intrexon with their aedes aegypti solution? The key regulatory factors cited against Oxitec, especially in their Florida Keys trials in the past year, were centered around controlling for the release of only males (which do not bite humans), thus avoiding transmission of any kind from the genetically modified varieties, or bacterially modified varieties in this case.

Oxitec has worked for years to filter their mosquitoes so only ~0.2% of the released mosquitoes are female[1]. They then had to demonstrate that and more in many trials before being allowed to release their mosquitoes in the wild in Panama and Florida.

Otherwise, it's great that Google can overstep the other factors that would stop this solution like NIMBYism and working with county / municipal boards. These solutions are great.


yosito 3 days ago 1 reply      
It's interesting that Google is doing this rather than some government organization. What's Google's motivation? Is it purely altruistic, a PR move, an experiment, or does it have some direct benefit to them?
davesque 3 days ago 5 replies      
I'm aware that this is a known technique and thought has been given to whether or not it will impact the food chain, etc. But I do wonder this: has anyone considered what the effect will be of removing this constant source of stimulation for our immune systems?
RobLach 3 days ago 0 replies      
Just want to point out that a megacorp breeding and releasing a sterilization disease is pretty sci-fi. Also a mutation away from a Children of Men style dystopia.
sxates 3 days ago 1 reply      
"You don't understand. I didn't kill just one mosquito, or a hundred, or a thousand... I killed them all... all mosquito... everywhere."
amorphid 3 days ago 0 replies      
Reminds of when UC Riverside released some stingless wasps to prey on a whitefly infestation in Southern California. This was in the early 1990s.

I think this paper is relevant, but I only scanned it:


Tepix 1 day ago 0 replies      
Similar project in Germany:


However they use gamma rays to sterilize the mosquitoes instead of bacteria.

azakai 3 days ago 0 replies      
Why is "Google" in the title? The only connection between Google and this company is that they share a parent company, Alphabet.
Lagged2Death 3 days ago 1 reply      
What kind of planning and permitting process does a project like this require?

Or would it be legal for me to just go and release a cloud of mosquitoes myself?

Raphael 3 days ago 1 reply      
What an unfortunate headline.
dzink 3 days ago 0 replies      
From what is explained so far, this process doesn't kill mosquitoes. It just makes sure that some of the females (that reproduce 5 times in a life of 2 weeks as an adult) get fertilized with unproductive eggs. http://www.denguevirusnet.com/life-cycle-of-aedes-aegypti.ht... The eggs of aedes aegypti can be spread anywhere, and the fertile ones hatch whenever their area gets wet in the next year or so.

Does anyone know what % population reduction impact this process results in? They'd have males likely die after 2 weeks and that just wipes the reproductive chances of the females in that period. Google is treating for 20 weeks in dry weather, which is not exactly the peak reproductive season of this mosquito.
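The random-mating intuition behind that question can be sketched with a toy model. The parameters below are invented for illustration only; they are not taken from the article or from Verily, and the real release is more complicated (repeated releases, male die-off, immigration of new mosquitoes):

```python
def viable_fraction(wild_males: float, sterile_males: float) -> float:
    """Fraction of matings that produce viable eggs, assuming each female
    mates once and picks a mate uniformly at random from all males."""
    total = wild_males + sterile_males
    return wild_males / total if total else 0.0

# Under these (made-up) assumptions, releasing 4 sterile males per
# wild male cuts viable matings to 20%:
print(viable_fraction(1.0, 4.0))  # 0.2
```

This is only a single-generation sketch; sustained suppression depends on keeping the sterile-to-wild ratio high across the 20-week release period mentioned above.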

pcmaffey 2 days ago 0 replies      
Has any research been done on potential benefits of widespread micro blood transfusion as a result of mosquitoes? The diseases are the obvious downside, wondering if resistances, etc may be an unrecognized upside.
franga2000 20 hours ago 0 replies      
I just love how people simply refuse to use the Alphabet name and keep calling it Google.
briandear 3 days ago 0 replies      
First they came for the mosquitos, but I didn't speak up because I wasn't a mosquito. Next they came for the invasive fire ants, and then we all cheered because mosquitos and fire ants were finally gone.
LinuxBender 3 days ago 1 reply      
Does this prevent reproduction of the mosquitos, or of the disease? If mosquitos, will this have a negative impact on bats? My bats eat mosquitos and moths, but there are not many moths any more.
phkahler 3 days ago 3 replies      
I wish the other mosquito killing efforts would go forward.
stanislavb 3 days ago 2 replies      
All good. Yet I thought that was a responsibility of the gov... A big corp spending millions for free seems, you know, questionable
crimsonalucard 2 days ago 0 replies      
Unless this solution virtually slaughters every single mosquito wouldn't this technique only select out unfit mosquitos eventually leading to populations of mosquitos with genetic countermeasures to this method of eradication?
markburns 3 days ago 0 replies      
Does anyone know why the mosquitoes wouldn't evolve to be repulsed by others infected in this way?

Or is this a similar class of problem to antibiotics becoming useless over time?

I.e. it's useful to do now so let's cross that bridge if we come to it?

Or is there something else I don't understand about this?

jondubois 2 days ago 0 replies      
>> Verily's male mosquitoes were infected with the Wolbachia bacteria, which is harmless to humans

What they mean is; harmless in the short term and hopefully also harmless in the long term.

tcbawo 3 days ago 1 reply      
I can't wait until the day we start releasing solar-powered mosquito-hunting drones.
vinitagr 3 days ago 0 replies      
This is some real breakthrough. I don't remember having heard of anything like this before. Any amount of success with this solution will have a lot of consequences for other problems.
Harelin 3 days ago 0 replies      
For those of us who live in Fresno and are curious as to which neighborhoods are being targeted: Harlan Ranch and Fancher Creek. They say "communities outside of these areas will not be affected."
jackyb 2 days ago 0 replies      
I always wondered, how do they count so many mosquitos? Is there a technique to determine that it's 20M?
SubiculumCode 3 days ago 0 replies      
I wish they'd do it in Sacramento where most of the mosquitoes live.
pcollins123 3 days ago 0 replies      
Google is releasing 20M bacteria-infected mosquitoes in Fresno... wearing small cameras and a projector that can display text advertisements
makkesk8 3 days ago 0 replies      
I've never been interested in biology. But this is so cool! :O
WalterBright 3 days ago 1 reply      
I'm curious how mosquitoes will evolve to beat this.
banach 2 days ago 0 replies      
What could possibly go wrong?
mrschwabe 3 days ago 1 reply      
No one should have the right to play god with our biosphere.
walshemj 3 days ago 0 replies      
could we have a less clickbaity title
pcarolan 3 days ago 2 replies      
kuschku 3 days ago 1 reply      
will_pseudonym 3 days ago 0 replies      
What could possibly go wrong?
chris_wot 3 days ago 0 replies      
At least they aren't attempting to go viral.
forgottenacc57 3 days ago 2 replies      
What could possibly go wrong? (Eye roll)
ultim8k 3 days ago 0 replies      
I came up with this idea last year! I didn't know someone was already building it.
unclebucknasty 3 days ago 3 replies      
Wait. Is there no regulation around this? Any company or individual can cook up whatever specimen they want and simply release it into the environment en masse?

Am I missing something?

Used GPUs flood the market as Ethereum's price drops below $150 overclock3d.net
404 points by striking  1 day ago   327 comments top 29
DanBlake 1 day ago 11 replies      
Just took a quick look at this. If you are located in the 'mining valley' of Washington, where power is ~2c/kWh, you are still getting healthy profits.

A computer with 7 GTX 1070 graphics cards should produce ~230 MH/s and draw 1 kW. This would cost approximately $30/month in power, factoring in kW demand + cooling.

The above setup will currently generate $385/month in ETH.

So basically for miners who are in the right spot with the right facility, this is still profitable. The question is of course for how long. You also need to factor in the cost of equipment, datacenter, employees and difficulty/price.

But even if you don't have a facility in Washington and just mine from your apartment, your power cost would probably be $100 a month. So it's still 'profitable', just not nearly as much as it was in the run-up.

CliffsNotes: 'professional' miners don't care. Even with the 'crash' today, they are making more per day than they were before the entire run-up. For instance, the 'worst' time for mining was December 2016, when you would only make $7.50 a day gross in ETH.
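A hedged sketch of the arithmetic above, using the figures quoted in this comment. The 2x overhead factor for demand charges and cooling, and the $0.14/kWh apartment rate, are assumptions chosen to reproduce the quoted ~$30 and ~$100 monthly costs:

```python
def monthly_power_cost(draw_kw, price_per_kwh, overhead=1.0, hours=720):
    """Electricity cost for a rig running 24/7 over a ~30-day month."""
    return draw_kw * hours * price_per_kwh * overhead

# 7x GTX 1070 rig drawing ~1 kW (figures from the comment above).
wa_cost = monthly_power_cost(1.0, 0.02, overhead=2.0)  # ~$28.80, close to the quoted $30
apt_cost = monthly_power_cost(1.0, 0.14)               # ~$100.80 at an assumed $0.14/kWh

eth_revenue = 385.0  # quoted monthly ETH revenue for the rig
print(f"WA margin ${eth_revenue - wa_cost:.2f}/mo, apartment margin ${eth_revenue - apt_cost:.2f}/mo")
```

Neither figure accounts for hardware depreciation or difficulty changes, which the comment flags as the real unknowns.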

abalone 1 day ago 3 replies      
All I can think of is the careless environmental impact of all that dirty electricity consumption. For, let's be honest, a mostly speculative activity.

One cryptocurrency crashes, another gets hyped up, and the computational cycle repeats. When will it end.

zanny 1 day ago 2 replies      
On the bright side, this has been a great test of Ethereum's scalability. Which isn't great, but when this mining craze dies down I won't hesitate to run ethminer when I'm not home for a little extra dough.

What I would really expect is an overreaction to the price crash, which means the difficulty rate might drop a lot. At this point, doing what a lot of people do with bitcoin - mining small amounts for a long period of time and just holding it until it reaches all time highs to cash out - is probably really easy money.

Probably most relevant is how crypto valuations are bound together. Bitcoin is also down about 20% from its ATH, and will certainly drop more as long as ETH pulls it down. The entire market will rise and fall on the hype of just one blockchain. Coins nobody even cared about, like Peercoin, saw 5x returns for miners during this ETH bubble.

crypt1d 1 day ago 1 reply      
It's mostly RX-series cards that are being sold off, and the reason for that is the ever-increasing Ethereum DAG size. I don't know the specifics, but due to the DAG size the ETH hash rate on AMD RX 400/500s is starting to slowly drop, and will be behind the performance of their Nvidia counterparts in a few months' time.

(Source: I run a mining operation.)

geff82 1 day ago 3 replies      
And then they'll buy back at horrendous prices when the price goes up again? Seems like shortsighted people do this. At least they could play some tremendous video games in the meantime ;)
strictnein 1 day ago 2 replies      
I sold my hardware last week. GTX 1060s, a 1080, an AMD R9 290X, some other stuff.

Unless I'm missing something, there's no huge flood of video cards on ebay. There's maybe ~20% more than there was a week ago. All told, for the in demand mining hardware, you're only talking about a couple thousand cards.

Thriptic 1 day ago 2 replies      
A quick search of eBay shows no good deals on gtx 1070s. Used cards are selling for what I bought my cards new for a month ago or more (380).
zo7 1 day ago 2 replies      
Interesting, looking at the price graph, Ethereum's price seems to correlate with Bitcoin's, which recently lost about 20% of its value ($500). In case anyone's wondering, the crash seems mainly driven by anxiety over an upcoming blockchain fork splitting the currency in two next month.


schiffern 1 day ago 3 replies      
Off topic, but the "ETH/USD" label on the price graph bothers me. Shouldn't it be USD/ETH?


150 ETH/USD would mean that you can get 150 coins per 1 USD. On the other hand, 150 USD/ETH correctly captures the mathematical relationship.

corporateslave2 1 day ago 3 replies      
GPU trading? More profitable than buying the underlying currency, since GPUs in relatively good condition always hold a certain value?

High levels of correlation with BTC and ETH, along with other cryptocurrency, but a floor on how low it can go.

Pays off while holding by mining coins. Much like a dividend.

dawnerd 1 day ago 1 reply      
Good, maybe video cards will start to come back in stock at their MSRP.
vortico 1 day ago 3 replies      
Sorry if this is a dumb question, but does a single party control the difficulty level of Ethereum and Bitcoin? If so, it seems like they have massive control over the market. If not, how does it work?
j_s 1 day ago 1 reply      
In case anyone is not aware, there are user-friendly tools (that take their cut) which ensure maximum mining profitability for available hardware.

NiceHash, MinerGate, Awesome Miner and others - many have an affiliate program and fight against botnets (and antivirus often block the actual mining programs they download).

rdl 1 day ago 0 replies      
At least the NV cards, if they become non-viable for crypto mining, are useful for a lot of other GPU computation (or just as graphics cards).
wunderg 1 day ago 1 reply      
I thought Ethereum would move to Proof of Stake from Proof of Work, which will make mining obsolete.


nsxwolf 1 day ago 2 replies      
I see an eBay buy-it-now for four 8GB RX 480s for $360. $90 a piece seems pretty crazy low - does that mean there's a high likelihood of hardware failure after these things were used for mining?
horusthecat 1 day ago 1 reply      
Does anyone know where the money that initially went into the cryptocoins during the run-up this year came from, or where it went?
justforFranz 17 hours ago 0 replies      
Cryptocurrency noob here. Isn't the ability to "mine" a cryptocurrency a design failure of the cryptocurrency?
kushankpoddar 1 day ago 2 replies      
There is a point of view out there that Europe's higher-than-average reliance on renewables has bumped up electricity prices there and contributed to making the place less competitive for industry. You can see that argument in action when European miners lose out to others due to high power costs.
Nursie 1 day ago 2 replies      
Excellent, my partner is looking for a card right now, a (lightly) used 1060, 1070 or 580 might be just the thing!

In the meantime, my machine which is a gaming rig that is mostly idle, may as well do a bit of mining...

stOneskull 1 day ago 0 replies      
this 'flooding the market' claim seems to be made up.
tossandturn 1 day ago 2 replies      
Worst place to ask this, I know, but... If I wanted to upgrade my old machine that currently has an HD 4770 (PCI Express 2.0 x16), where exactly could I find a worthwhile upgrade for less than $20?
Shinchy 1 day ago 0 replies      
Annoying as all hell, I am in the market for a 1080ti and the prices have rocketed up in the past two weeks.
waspear 1 day ago 0 replies      
Ethereum's Casper protocol upgrade (Proof of Stake) might have a long-term effect on the GPU market as well.
Temasik 1 day ago 0 replies      
Mining has no future
foota 1 day ago 0 replies      
Maybe I should have sold the R9 Fury I got for gaming a few months ago...
aussieguy123 1 day ago 1 reply      
Anyone doing deep learning?
the_end 1 day ago 3 replies      
ryanSrich 1 day ago 3 replies      
Maybe this is an incredibly uneducated comment, but won't they have to buy those GPUs back when the price of ETH inevitably goes up again? This is all just FUD from August 1st, ICO instability, and a lack of Ethereum use cases. All of these issues have resolutions planned. So one would be stupid to think Ethereum stays at $150 for even a year.
Machine Learning Crash Course: The Bias-Variance Dilemma berkeley.edu
510 points by Yossi_Frenkel  1 day ago   53 comments top 10
taeric 23 hours ago 4 replies      
This seems to ultimately come down to an idea that folks have a hard time shaking. It is entirely possible that you cannot recover the original signal using machine learning. This is, fundamentally, what separates this field from digital sampling.

And this is not unique to machine learning, per se. https://fivethirtyeight.com/features/trump-noncitizen-voters... has a great widget that shows that as you get more data, you do not necessarily decrease inherent noise. In fact, it stays very constant. (Granted, this is in large because machine learning has most of its roots in statistics.)

More explicitly, with ML, you are building probabilistic models. This is contrasted to most models folks are used to which are analytic models. That is, you run the calculations for an object moving across the field, and you get something within the measurement bounds that you expected. With a probabilistic model, you get something that is within the bounds of being in line with previous data you have collected.

(None of this is to say this is a bad article. Just a bias to keep in mind as you are reading it. Hopefully, it helps you challenge it.)

rdudekul 23 hours ago 0 replies      
Here are parts 1, 2 & 3:

Introduction, Regression/Classification, Cost Functions, and Gradient Descent:


Perceptrons, Logistic Regression, and SVMs:


Neural networks & Backpropagation:


amelius 23 hours ago 5 replies      
The whole problem of overfitting or underfitting exists because you're not trying to understand the underlying model, but you're trying to "cheat" by inventing some formula that happens to work in most cases.
therajiv 23 hours ago 4 replies      
Wow, the discussion on the Fukushima civil engineering decision was pretty interesting. However, I find it surprising that the engineers simply overlooked the linearity of the law and used a nonlinear model. I wonder if there were any economic / other incentives at play, and the model shown was just used to justify the decision?

Regardless, that post was a great read.

eggie5 22 hours ago 1 reply      
I've always liked this visualization of the Bias-Variance tradeoff: http://www.eggie5.com/110-bias-variance-tradeoff
gpawl 13 hours ago 0 replies      
Statistics is the science of making decisions under uncertainty.

It is far too frequently misunderstood as the science of making certainty from uncertainty.

CuriouslyC 22 hours ago 0 replies      
One good way to solve the bias-variance problem is to use Gaussian processes (GPs). With GPs you build a probabilistic model of the covariance structure of your data. Locally complex, high variance models produce poor objective scores, so hyperparameter optimization favors "simpler" models.

Even better, you can put priors on the parameters of your model and give it the full Bayesian treatment via MCMC. This avoids overfitting, and gives you information about how strongly your data specifies the model.
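For readers unfamiliar with GPs, here is a minimal numpy-only sketch of GP regression with an RBF kernel. The data, noise level, and length scale are illustrative assumptions, and it omits the hyperparameter optimization and MCMC the comment describes:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    # Squared-exponential covariance between two 1-D input arrays.
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2, length_scale=1.0):
    """Posterior mean and covariance of a zero-mean GP at x_test."""
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test, length_scale)
    K_ss = rbf_kernel(x_test, x_test, length_scale)
    K_inv_Ks = np.linalg.solve(K, K_s)      # K^-1 K_s, solved without explicit inversion
    mean = K_inv_Ks.T @ y_train
    cov = K_ss - K_s.T @ K_inv_Ks
    return mean, cov

# Noisy-free observations of a smooth function, for illustration.
x = np.linspace(0.0, 6.0, 20)
y = np.sin(x)
mean, cov = gp_posterior(x, y, x)                    # near the data: tight fit
_, far_cov = gp_posterior(x, y, np.array([100.0]))   # far from the data: reverts to prior
```

Near the training data the posterior variance collapses; far away it reverts to the prior variance, which is the "how strongly your data specifies the model" information the comment mentions.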

plg 22 hours ago 0 replies      
like many things in science and engineering, (and life in general) it comes down to this: what is signal, what is noise?

most of the time there is no a priori way of determining this

you come to the problem with your own assumptions (or you inherit them) and that guides you (or misguides you)

known 23 hours ago 0 replies      
Brilliant post; Thank you;
Pogba666 1 day ago 0 replies      
Wow, nice. Then I have things to do on my flight now.
Two days in an underwater cave running out of oxygen bbc.com
463 points by Luc  21 hours ago   273 comments top 13
j9461701 18 hours ago 25 replies      
I might be speaking out of line, but taking on these kinds of risks with young children at home seems kind of selfish. The fact that he went back into the same cave that nearly killed him only a month later...almost as if to say:

"I would rather my kids grow up without a Dad than live without my adrenaline fix"

I am neither a father nor a cave diver though, so I might be missing a piece of the puzzle. Would either group of people care to comment?

benzofuran 19 hours ago 4 replies      
When you're learning to cave dive, one of the first things that you learn is that you may very well die in there.

Most of the training focuses on systems, skills repetition, and understanding and using redundant systems - folks getting into cave diving typically are already extremely experienced divers who if anything need only some minor skill tweaks - most cave instructors will not take on students who don't already have significant open water technical diving experience (multiple tanks, mixed gas, rebreathers, decompression, wreck, etc).

A running joke is that the lost line drill (where you're placed intentionally off of the guide line and have to find it without a mask/light/visibility) is the most punctual cave task you'll ever do - you have the rest of your life to get it right.

Here's a few good books on it (non-affiliate links):

Caverns Measureless to Man by Sheck Exley (the father of cave diving): https://www.amazon.com/Caverns-Measureless-Man-Sheck-Exley/d...

The Darkness Beckons by Martyn Farr: https://www.amazon.com/Darkness-Beckons-History-Development-...

Beyond the Deep by Bill Stone (the Tony Stark of cave diving): https://www.amazon.com/Beyond-Deep-Deadly-Descent-Treacherou...

The Cenotes of the Riviera Maya by Steve Gerrard (patron saint / mapper of Yucatan caves): https://www.amazon.com/Cenotes-Riviera-Maya-2016/dp/16821340... This is more of a map and explanatory notes, but gives great insight into the complexity of it. Currently there are 2 systems that almost all cenotes are part of in the Yucatan, and there's some really interesting work going on trying to link the two. Current work is going on at about 180m depth through a number of rooms at the back of "The Pit", and there are multi-day expeditions going on trying to find the linkage.

ljf 2 hours ago 0 replies      
Does anyone know any more detail about their plan to drill down to him, and if similar rescues have been performed this way? I'd be interested to know if the air he was breathing was 'trapped' and if drilling down would release it, and drown him, or if the air had a slow route in and out of the pocket he was in. Fascinating stuff.
curtis 19 hours ago 3 replies      
Cave diving is one of those things that I am happy to only experience vicariously through the stories of others.
biggc 19 hours ago 4 replies      
> He realised the water at the surface of the lake was drinkable

Can someone explain this phenomenon? How can the water in a sea-cave become potable?

gregorymichael 19 hours ago 1 reply      
Did a sensory deprivation tank for the first time a few weeks ago. An hour was tough. Hard to imagine 60, with the added doubt of "you may never get out of here."
acdanger 18 hours ago 0 replies      
See also this story if you want to make sure you don't really want to go diving in a subterranean cave: http://www.bbc.com/news/magazine-36097300
A_Person 12 hours ago 5 replies      
I'd like to address the false assumption in this thread that cave diving is more dangerous than driving a car!

I cave dive on a regular basis with two other guys. We've dived together as a team for nearly 10 years. I'm late 60s and single, the second guy is 50s and has a partner but no children, and the third is early 40s with a six-year-old, who he has every intention of seeing grow-up into adulthood.

We often dive in a system comprising a complex maze of 8kms of underwater tunnels. Some are large, and would fit several divers across, but some are small, and you can barely squeeze through. The only entry to and exit from this system is a small pond, about 6 feet across and 4 feet deep, just big enough for one person to get in at a time. Then you scrunch yourself up, and drop down through a slot to enter the system.

We'd generally go about 700m into this system, making up to 13 separate navigational decisions (left? right? straight ahead?) which we have to reverse precisely to get back out at the end. This is all completely underwater; there's no air anywhere except for two air pockets hundreds of meters apart. As I like to say, in cave diving there is no UP, there is only OUT!

It all sounds pretty dangerous, right? Wrong.

NAVIGATION. The whole system is set up with fixed lines, each of which has a numbered marker every 50m or so. Before each dive, we consult the map, and plan exactly where we're going to go. I commit that plan to memory, write it down on a wrist slate, and also in a notebook which I take underwater. All three of us do this independently. Underwater, when we come to a junction, each of us checks the direction to go, then marks the exit direction with a personal marker. If anyone makes a mistake, for example, turns in the wrong direction, or forgets to leave a personal marker, the other two pick that up immediately. On the way back, when we get to each junction, each of us checks that it's the junction we expected, and we can see our personal markers. Each individual's markers can be distinguished by feel alone, so we could get the whole way back, separately, in total darkness, if we had to. So the odds of us getting lost in the system are very low.

LIGHT. These caves are absolutely pitch black, so naturally you need a torch. In fact, nine torches! Each of us individually has a multi-thousand-lumen canister battery light, plus 2 backup torches, each of which would last the whole dive. I could also navigate by the light of my dive computer screen, and I'm considering carrying a cyalume chemical lightstick as well. So then I personally would have five different sources of light, and we'd need 11 sources of light to fail before the team would be left in the dark. The risk of this happening is basically zero.

GAS. Each of us has two tanks in a fully redundant setup. If one side fails, we just go to the other and call the dive. In fact, our gas planning allows one diver's entire gas supply to fail, at the point of maximum penetration, and either one of the other two divers could get that guy back, plus himself, without relying on the third diver at all. However, gas is certainly a limited resource underwater, so it's always on our minds, and all three of us will turn the dive as soon as anyone hits their safety limit.

There's lots more equipment involved, but let's leave it there for the moment, and turn our attention to...

DRIVING! Each of us lives >400 km away from that system. So there and back is a five hour drive. During that drive, you could fall asleep and run off the road; have local fauna run out in front of your car; get head-on crashed by drunken drivers, and so on. Several of those are external risks that are not under our control.

So the simple fact of the matter is this. Our cave dives are almost certainly SIGNIFICANTLY SAFER than driving to and from the dive site! The cave dives carry significant potential risks, but most of those are mitigated with proper training and equipment. Whereas there's not much I can do to stop a drunken driver running head-on into me.

Certainly there are risks like tunnels collapsing and blocking the exit. But statistically, I'm sure that those are orders of magnitude less likely than having a heart attack, or falling over and breaking your neck.

Hope that helps :-)

ajarmst 17 hours ago 2 replies      
Reminded me of the very sad story of Peter Verhulsel: http://www.upi.com/Archives/1984/11/12/Scuba-diver-lost-in-c...
fit2rule 18 hours ago 1 reply      
I'm quite surprised at the detail that the rescuers attempted to drill into the cave from above in order to provide supplies .. is anyone familiar with the depth of the cave pocket? This seems like a surprising choice to make given the logistics - but I guess a safer one, in the end .. assuming one has a drill system available and the depth is not too great.
surgeryres 15 hours ago 1 reply      
No one has mentioned the risk imposed on the rescue team that had to come get him. So there's that.
belovedeagle 20 hours ago 4 replies      
I wonder - did they take steps to replenish the cave's oxygen? If not, it's useless for the next person...

I guess this is kind of silly and naive, but it's what I would do.

tysonrdm 19 hours ago 5 replies      
There should be a law against bringing in and leaving nylon ropes in the cave. If this continues, all the caves are going to be filled with nylon ropes left by previous divers. Do we want these caves, too, to eventually become a garbage dumping ground?
Bitcoin Is Having a Civil War as It Enters a Critical Month bloomberg.com
359 points by discombobulate  1 day ago   303 comments top 28
buttershakes 22 hours ago 15 replies      
This is a fight for control of Bitcoin. It is business interests on both sides fighting for a position of authority. SegWit2x is an attempt to remove control from the core dev team, which, while technically strong, is full of zealots with questionable motives and terrible management skills. Bitcoin ABC and Unlimited have their own parts to play as factions. It's getting tense, but it's been years in the making: groups unwilling to compromise on the most basic points. I suspect that SegWit2x will end up taking over the network, but I'd rather see a pure large-block faction like Bitcoin ABC. Either way, the core developers are going to lose control of a 40-billion-dollar network, possibly one of the biggest fails in modern technology. They will be left on a minority chain which will have little relevance going forward. I've said it before, but anyone who put money into Blockstream has to seriously be wondering what the hell they are doing. Their CEO should have been kicked out a long time ago; he has no relevant experience in actually running an organization and has royally messed it up.
shp0ngle 22 hours ago 3 replies      
The basic question is how to scale; off-chain or on-chain. The rest is just theatrics and typical nerdy hyperbole.

One side of the fight (Core / blockstream) wants to scale off-chain, pushing transactions to side-chains and/or lighting networks, and want to profit from off-chain solutions.

The other side of the fight (segwit2x / miners) wants to scale on-chain, making the blocks bigger, and profit from block fees.

Both sides have pros and cons.

Pros of off-chain solutions - more scalable, don't need expensive confirmations for each transaction, more long-term. Cons: the solutions don't exist yet and might be vaporware; segwit etc are just stepping stones.

Pros of on-chain solutions - making the blocks larger can be done now, no need to wait for new software and new networks. Cons - makes the blocks larger, which makes running bitcoin nodes harder. Also cannot scale this way infinitely (you need to keep all the transactions on a disk forever).

The discussion about segwit is in reality just discussion about how to scale, and who profits.

As for me, I don't really care, Bitcoin is inefficient either way

apeace 23 hours ago 2 replies      
The days are counting down to the "Segwit2X" rollout, the idea supported in the "New York Agreement" (NYA)[0].

There is a contingency plan in place should the Core-supported User Activated Soft Fork become activated.[1]

Segwit2X has working code, has been tested in beta, and is now in RC.[2]

Without commenting on the merits of the different approaches, the current situation is thrilling to watch as a spectator. To call it a "Civil War" is not an exaggeration.

[0] https://medium.com/@DCGco/bitcoin-scaling-agreement-at-conse...

[1] https://blog.bitmain.com/en/uahf-contingency-plan-uasf-bip14...

[2] https://lists.linuxfoundation.org/pipermail/bitcoin-segwit2x...

arcaster 22 hours ago 2 replies      
After working in the space for about a year, after being a developer and enthusiast surrounding crypto since the early days of BTC this "bickering among core devs" is nothing new.

Any press or "talks" that say otherwise are either being influenced with serious bias or are simply reporting false information.

I like DLT tech, however, if bitcoin has shown us anything it's that once you solve the double-spend problem you're still left with an even more grotesque problem of governance.

People poke fun at ETH since it has a "single leader", but Vitalik is more of a back-seat conductor than a "grand leader". Also, most arguments that "bitcoin is a truly decentralized platform because our devs are decentralized" can easily be defused by even vaguely looking into how Blockstream operates...

The political shit-storm being paraded by BTC needs to end soon, we really don't need another 2-3 years of douchey BTC core devs arguing on the internet and bad-mouthing any project that isn't BTC.

xutopia 20 hours ago 2 replies      
To me this whole process shows how great cryptocurrencies really are. The process is live, it is public and it is messy.

Compare with how our usual currencies are handled. Behind closed doors with powerful banks or private companies deciding for our governments.

BenoitP 21 hours ago 2 replies      
I'm starting to see a bit more clearly how a fork would pan out:

Miners: Hashing power has little influence. As long as there are miners, and two chains rejecting each other's transactions, transactions will be processed. At first, transaction processing might take a while, but difficulty will adapt. This will create two legitimate currencies. Now everybody in possession of 1 BTC would have 1 BTCa + 1 BTCb.

Exchanges: Little power. They will trade both BTCa and BTCb, and accept commissions.

Traders of goods, and embedded devices: They might have to modify their clients to accept both currencies, but they would have to follow the market rates. Otherwise they would suffer income loss from people using them to arbitrage the markets.

BTC-rich individuals: They now have 1 BTCa + 1 BTCb. But there is transaction replayability. If they spend 1 BTCa, their BTCb can also get spent the same way. And they lose their BTCb. Chains have a strategic advantage in replaying transactions destined for the other chain because: 1) they get to keep the commission, 2) they assert themselves as more economically encompassing (not sure on this one; maybe they want to stay neutral).

Now, BTC-holders can do a wallet-emptying double-spend to 2 different addresses they control on the 2 chains. And, compared to the ones who got their transactions replayed, they will have kept both their BTCa and BTCb.

TL;DR: IMHO, come the technical fork, some BTC-holders will be tumbling until they irrevocably acquire their BTCa + BTCb, and use them to make runs on the markets, effectively materializing the economic fork.


I'd love the opinion of someone who lived through the ETH-ETC split, especially about the transaction replayability part.

jancsika 1 hour ago 0 replies      
Can someone please ELI5 why ASICBOOST would be considered an exploit?

Especially considering Satoshi clearly got caught off guard by the quick rise in GPU mining-- which led to the bootstrapping mechanism putting Bitcoin in fewer hands than it otherwise would have. But I never saw Satoshi call GPU mining an exploit.

rwmj 21 hours ago 3 replies      
If Bitcoin splits, what do you predict would be the effect on holders of bitcoins?

- They have twice as much money (yay!)

- They have twice as much money but the value is split, so it's worth approximately the same.

- One of the branches wins or mostly wins.

- The split does so much damage that some (all?) value of coins is lost.

rihegher 23 hours ago 0 replies      
Already discussed a few days ago on HN https://news.ycombinator.com/item?id=14758587
Animats 20 hours ago 1 reply      
The scary thing is that the developers want to go from initial release of new code to wide deployment in a few days. This on something where any security flaw can be attacked anonymously and profitably. What could possibly go wrong?
nfriedly 20 hours ago 3 replies      
What are Bitcoin and friends good for right now, besides speculating with and trading for other currencies?

A while back there was a BTC marketplace where among other things, I spent 1 BTC on a steam key for the game Portal (a poor trade in hindsight).

But they shut down and the only other place that I can think of that accepts BTC is humblebundle.com - and presumably they convert it to USD right away.


> Bitcoin payments have been disabled for the Humble Capcom Rising Bundle.

So, yea, who accepts BTC right now?

placeybordeaux 21 hours ago 3 replies      
I just moved ~20% of my crypto holdings from BTC to LTC. Of the rest, I'll likely keep close to 40% of my crypto holdings in BTC, but move it onto my own wallet. If a fork actually happens, I'd prefer to be in control of the private keys.

It's kind of odd that there is still so much FUD about segwit, as it has already activated on LTC. It hasn't appeared to open any security holes.

ihuman 23 hours ago 2 replies      
How is this different then what happened with Bitcoin XT and Bitcoin classic?
badloginagain 22 hours ago 5 replies      
Sorry if this is a stupid question, but why not both? It doesn't appear that the two strategies are mutually exclusive. Is it just that SegWit2x is considered too rushed? Is it just that miners have a vested interest in maintaining influence?

Personally it seems like smart contracts and other similar services beget an ecosystem that could swell the market cap by a significant amount, I assume miners would have a long term goal of doing just that.

As a disclaimer, I own Bitcoin, but I'm definitely a layman and I don't really have a horse in the race. What I'm most concerned is what these changes are going to accomplish when looking back 10 years from now. I'm in BTC for the long-term, and this whole thing stinks of petty bias and tribal power plays.

hellbanner 17 hours ago 0 replies      
I see a lot of mention of 51% attack; the selfish miner attack could be done with closer to 33%(!):



JohnJamesRambo 21 hours ago 1 reply      
Can someone show some math on how much more expensive it would be to run a node if block size is allowed to increase from 1 MB? It sounds like a silly made-up excuse.
lamontcg 21 hours ago 0 replies      
From a technical analysis perspective (yeah i also do palm reading and seances), Bitcoin's chart looks like a perfect triangle continuation pattern.
narrator 22 hours ago 2 replies      
This is why I'm bullish on Litecoin. Already has segregated witness, lightning network, low transaction fees, and a low drama community.
bleair 21 hours ago 0 replies      
It appears the emotional power of "money" coincides with zealotry. It must be exciting to have your money tied into something that could split. If the split does happen and both forks keep running, won't the world economy of bitcoins simply double? I assume someone will provide an exchange to move to/from bitcoin-zeal vs. segwit2
deletia 19 hours ago 0 replies      
This just in: centralized authorities, worried about the deflation of our fiat bubble and cryptocurrencies' position as the next generation of digital commodities, capitalize with FUD propaganda two weeks before a software patch (proposed last year) is rolled out exactly the way it was intended.
someSven 23 hours ago 0 replies      
I was a bit surprised by the crash; I would have sold and bought back cheaper, but I thought the information was already in the price. I could imagine someone tried to make it crash hard by selling a lot and creating a panic, and failed.
jgord 12 hours ago 0 replies      
I'm not sure why SegWit is put forward as a "scaling solution". It does make some room by moving signature data out of the main block, which may allow 2.5x as many transactions - but that's a one-time improvement, afaict.

The real problem is simply that the blocksize is way too small. At peak daily loads we are trying to put 20MB of transactions into a single 1MB block. Of course the unprocessed transactions pool up in the 'mempool' waiting for the next block, and are eventually cleared later in the day in off-peak times.

The reason they don't just pool up indefinitely, and crash the server, is due to economics - people pay higher transaction fees to get their important transactions into the next block. Miners earn part of their income from those fees, so they put the best paying transactions into the block first.

Most people who might like to use Bitcoin to pay for actual things will balk at paying $3 to send $500, which means fewer people use the system, or they only use it for important big trades - thus an equilibrium is set up where transaction volume is kept low.

Keep in mind bitcoin blocks occur on average every 10 minutes. A global rate of 3 trans/sec is clearly not a large number for a system used by millions, all across the globe.

Litecoin has the same architecture, but doesn't have this bottleneck problem - they have 3/4 the blocksize, process blocks 4x as frequently, and handle less than 1/10th the volume of transactions. So there is no mempool backlog, fees are low, etc.

The max blocksize is set to 1MB in code [ think #define or static const ], so increasing it means releasing new software - old versions will not be able to process large blocks, so this means a "hard fork".

I would argue that a blocksize increase is urgently needed and justifies a prudent hardfork - because it is currently preventing Bitcoin from growing. Not only do we need a 2MB block yesterday [ some say 8MB ], but we need a clear block size upgrade schedule for the next few years so Bitcoin can handle steady growth, without the need for many future hardforks.

Blocksize increase over the next few years could yield a 20x to 200x increase in throughput using the current architecture ... this releases the stranglehold on transaction flow and user growth, and buys time to build out all the other nice new technologies that can augment, or scale beyond, the linear architecture of the blockchain.

This issue has been delayed and debated for 2.5 years, so now it really is urgent and people on both sides are pretty angry. Sadly, it's metastasized into an ugly political civil war... but I think at heart it is a fairly normal engineering issue that could have been resolved routinely. Maybe having a ton of cash riding on your code makes easy choices hard.
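To make the parent's arithmetic concrete, here is a rough Python sketch. The ~500-byte average transaction size is my illustrative assumption, not a measured figure:

```python
# Rough upper bound on sustained throughput for a given block size.
# Assumptions (illustrative): ~500 bytes/transaction, one block per 600 s.
AVG_TX_BYTES = 500
BLOCK_INTERVAL_S = 600

def max_tx_per_second(block_size_mb: float) -> float:
    """Theoretical best case: every block completely full of transactions."""
    txs_per_block = block_size_mb * 1_000_000 / AVG_TX_BYTES
    return txs_per_block / BLOCK_INTERVAL_S

for mb in (1, 2, 8, 32):
    print(f"{mb:>2} MB -> ~{max_tx_per_second(mb):.1f} tx/s")
```

At 1 MB this lands near the ~3 tx/s the parent mentions; under the same assumptions, the 20x-200x figures correspond to blocks in the 20-200 MB range.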

xiaoma 23 hours ago 0 replies      
Standard. This is what bitcoin leaders do.
faragon 23 hours ago 2 replies      
TL;DR: Bitcoin is popping, and Ethereum is going to be the new bubble.
m777z 22 hours ago 3 replies      
With this much uncertainty, paying ~$2000 for 1 bitcoin seems insane to me. But that's why I don't invest in cryptocurrencies; too much volatility for my taste.
pteredactyl 23 hours ago 6 replies      
Bitcoin continues to evolve. With that comes growing pains. And internal struggles as it is mostly an open source project.

But for Bloomberg to use a 'civil war' hyperbole signals fear from the establishment. Established capital more specifically. And really, that is bitcoin's biggest threat.

Disclosure: I own bitcoin.

gremlinsinc 23 hours ago 1 reply      
If I wanted to grab an altcoin like Nxt/Ardor -- would it be safer to buy bitcoin now, or wait until SegWit, when it could become cheaper? Nxt/Ardor and all cryptos are in a downward spiral now because of the coming bitcoin split, so I feel now's a buyer's paradise... and the Nxt/Ardor chains look very promising from a tech standpoint.
Monolith First (2015) martinfowler.com
424 points by levosmetalo  2 days ago   158 comments top 28
taurath 2 days ago 6 replies      
If you don't have a product yet or the parameters could change quickly with new business insight, you need to be able to change it fast. With microservices you will be spending half your time figuring out orchestration, building data flows that people can understand, and doing ops. Last startup I was in delayed their launch date for >6 months because of their architecture. Way too many people think they need it, but a load balanced monolith can take you from 0 income to able to hire more engineers.
phamilton 2 days ago 5 replies      
It all comes back to Conway's Law (your software will look like your organization).

Microservices allow and require low coupling in the organization. If you want to reduce coupling in your org, you'll be well served by microservices. If you want tight collaboration in your org, you'll be well served by a monolith. As orgs grow into multiple independently executing units, a monolith starts to limit the ability to independently execute.

morphemass 2 days ago 2 replies      
Just about everyone I've interviewed with recently has been breaking their monolith up into microservices for some reason.

When I've done this in the past I had a key goal: reliability. The cost was about 10x the development effort of the monolith in order to add an extra 9 to the reliability. The monolith was wonderful for getting up and running quickly as a business solution but it actually crippled the business because they had failed to identify how essential reliability was. KYC.

Personally I've come to the conclusion that the main benefits of SOA/MSA are not necessarily technical but more organisational/sociological. Having distinct silos of activity/responsibility, separate teams and communications channels; all can make a large project more manageable than the monolith by allowing the lower level problems to be abstracted away (from a management perspective).

jakozaur 1 day ago 0 replies      
Rule of thumb: the number of full-time backend engineers, divided by 5 and rounded up, is the number of microservices you can afford.

E.g. if you have 500 engineers, having 100 microservices is fine. If you have 3 engineers and try to have 20 microservices, you are wasting tons of time; you should do a monolith.
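The rule of thumb above as a purely illustrative one-liner:

```python
import math

def affordable_microservices(backend_engineers: int) -> int:
    # Full-time backend engineers divided by 5, rounded up.
    return math.ceil(backend_engineers / 5)

print(affordable_microservices(500))  # -> 100
print(affordable_microservices(3))    # -> 1
```

Three engineers can afford one service by this rule - a monolith - not twenty.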

yellowapple 18 minutes ago 0 replies      
Even if you're building a monolith, though, you're generally well-served by a monolith that pretends to be a bunch of microservices - i.e. it could be split into microservices easily if the need arises, kind of like how some "hybrid" OS kernels could (in theory) be split into proper microkernels if the internal function calls were replaced with messages (the NT kernel is built this way, IIRC). Each part of this "chunky" monolith should provide a proper internal API, and no other part should have to call into that part's internal functions.

This should be easy to achieve in most "object-oriented" languages (like Ruby; a Rails monolith should have no problem being structured this way, even if quite a few of the ones I've seen in the wild seem to forego this). Erlang (and Elixir by descent) is also well-suited to this, since you can break your application into a collection of processes that - whether individually or in combination with other processes - can act like their own little microservices.
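A minimal sketch of that idea in Python (all names hypothetical): each part of the monolith is reachable only through a small internal API, so swapping a class for a network client later would carve it out into a real service.

```python
# Hypothetical "chunky" monolith: each part exposes a narrow internal API,
# and no other part reaches into its internals.
class BillingService:
    def invoice_total(self, items: list[float]) -> float:
        return round(sum(items), 2)

class NotificationService:
    def notify(self, user: str, message: str) -> str:
        # A real system would send an email/push; here it just formats.
        return f"to={user}: {message}"

class Application:
    """The monolith wires the parts together via their public APIs only."""
    def __init__(self) -> None:
        self.billing = BillingService()
        self.notifications = NotificationService()

    def checkout(self, user: str, items: list[float]) -> str:
        total = self.billing.invoice_total(items)
        return self.notifications.notify(user, f"charged ${total}")

app = Application()
print(app.checkout("alice", [9.99, 5.00]))  # -> to=alice: charged $14.99
```

Because `Application` never touches the other classes' internals, replacing `BillingService` with an HTTP client keeps `checkout` unchanged.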

gluczywo 2 days ago 2 replies      
> even experienced architects working in familiar domains have great difficulty getting boundaries right at the beginning. By building a monolith first, you can figure out what the right boundaries are

This line of thought reaches back two decades and was expressed in the wonderful essay "Big Ball of Mud": http://www.laputan.org/mud/

EDIT: updated with the quote

FRex 2 days ago 0 replies      
The common pattern he mentions reminds me of the concept of 'semantic compression' (one big function and lots of variables first, then break it up into structs, classes, functions, etc.) by Casey Muratori: https://mollyrocket.com/casey/stream_0019.html

It's a very nice and natural way to write code: do it all horribly dirty, and only when a sizeable portion is ready, start cleaning it up and making it look and read good.

Both are basically "good comes from evolving/refining bad".
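A toy illustration of that flow in Python (hypothetical code, not Muratori's): version one is a single inline function; version two "compresses" the repeated shapes into named helpers without changing behavior.

```python
# Step 1: everything inline, "horribly dirty".
def report_v1(sales):
    total = 0
    for s in sales:
        total += s
    avg = total / len(sales) if sales else 0
    return f"total={total} avg={avg:.1f}"

# Step 2: once the shapes have stabilized, compress them into helpers.
def total_of(xs):
    return sum(xs)

def average_of(xs):
    return total_of(xs) / len(xs) if xs else 0

def report_v2(sales):
    return f"total={total_of(sales)} avg={average_of(sales):.1f}"

# The refactor preserves behavior exactly.
assert report_v1([1, 2, 3]) == report_v2([1, 2, 3])
```

The point is the ordering: the helpers are extracted from working code, not designed up front.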

dankohn1 2 days ago 0 replies      
It may be a bit simplistic for HN, but you may enjoy a talk I've given, "Migrating Legacy Monoliths to Cloud Native Microservices Architectures on Kubernetes", and especially the visual metaphor from slide 26 onward of chipping away at a block of ice to create an ice sculpture.


alexandros 1 day ago 1 reply      
We started resin.io with a microservices architecture from day one, and we are still happy with the result. It was very painful to get it up and running, but once that was over, we were good to go. The boundaries we defined early on are still solid, and the result works well. One critical detail however, is that all our persistent state lives in one place, minus specific well-understood exceptions. Arguably, starting with microservices helped us define strong boundaries we weren't tempted to blur over time.

All this said, I do sometimes wish we had started with a monolith, if only because we paid the microservices tax in deployment and infrastructure maintenance way too early, long before we had the scale to warrant it. I feel starting with a monolith would have probably meant more progress in less time, though with a risk of not being able to refactor smoothly when the time came.

Overall a hard call to make, since I'm happy with the result, but wonder about the pain it took to get here, and at the same time counterfactual universes are hard to instantiate...

navalsaini 2 days ago 1 reply      
I agree with monolith first, and have proposed talks to a few JS conferences on this topic. However, I have not worked at a company that uses a microservices architecture at big scale (like Uber, Instagram, etc). I am keen to understand: (1) What does it mean to run a microservices architecture from an org point of view? (2) How are principles like "3 depths deep" enforced? (3) How does a developer decide to create a new microservice vs. reuse one? (4) Who manages the persistence layer and associated devops tasks (backups, failover, repset, etc)? Those are mostly the uncovered bits for me. I came across a very recent talk by Uber on these lines - JS at Scale (https://www.youtube.com/watch?v=P7ek4scVCB8). I think a few talks on the organizational side of microservices would give people a clear idea of whether they really need one. Also, though startups use the term microservices, their architectures do not in reality have as many boxes as in the Uber talks most of us listen to. Startup microservice architectures do have single points of failure; they just break things up to make it easier to scale beyond 100 or so concurrent users. The decomposition is mostly around tasks that are IO bound (serving APIs) versus tasks that are more CPU bound (some primary use case). So startups using microservices may not be that bad actually. They could just mean that they do an RPC using redis for some computationally intensive use case.
bsder 2 days ago 2 replies      
Fowler failed miserably when building a monolith. See: https://en.wikipedia.org/wiki/Chrysler_Comprehensive_Compens...

Why should we believe his statements about microservices?

Personally, my experience in microservices vs monolith has been as follows:

If your system needs fast update with quick rollout of new features, monolith is probably superior. Being able to touch everything quickly and redeploy is generally quicker in a monolith.

If your system needs to be able to survive component latency/failure, microservices are probably superior. You will have hard separation that enables testability from the beginning.

Overall, I find the monolith vs microservices debate insipid. We have LOTS of counterexamples. Practically everybody writing Erlang laughs at people building a monolith.

kishorepr 2 days ago 0 replies      
Like others have pointed out here, it's incredibly hard to know the application boundaries up front, which are required for building microservices.

I think solutions that are a hybrid of monolith and microservices work out well. As another person pointed out, this can be fairly easily achieved by having a monolith with multiple sub-projects to get separation of concerns. The code is all in one place so it's easier to design and refactor. You can also deploy different sub-projects as microservices if you need to later on. So it's basically having a monolith with separately deployable sub-components.

Once boundaries are clearly understood, it can then be easier to physically separate services.

nichochar 2 days ago 0 replies      
I disagree with this, but only because it makes the assumption you're working in a conventional language, like Java, Python, or C++.

I think if you design fault-tolerant microservice-based services with something like the Erlang BEAM VM, things will work out well, since you're being very careful about message passing from the beginning.

olingern 2 days ago 4 replies      
> Almost all the cases where I've heard of a system that was built as a microservice system from scratch, it has ended up in serious trouble.

I wholeheartedly disagree with this point.

I've found that if I build monolith first, it becomes harder to draw the line of how to separate endpoints, services, and code within the system(s).

If I design in a "microservice first," or just a service oriented design -- I find that there is much more clarity in system design. In terms of exposing parts of a system, I find that the microservice first approach makes me consider future optimizations, such as caching policies, whereas, in a monolith, I would proceed in a naive, "I'll figure it out later" approach.

Each school of thought has its downsides. Monoliths move fast and abstracting parts of the system later that arise as bottlenecks is a tried and true pattern; however, there aren't too many product / business folks who want to hear:

"Hey, we just built this great MVP for you. It probably won't handle significant load, so we're going to go off in a corner and make it do that now. Oh yeah, we won't have time to develop new features because we'll be too busy migrating tests and writing the ones we didn't write in the beginning."

The flip side is, microservice first has a lot of overhead, and (as things evolve in one system) refactoring can be extremely painful. This is an okay trade off where I'm at... for others, maybe not so much.

lisa_henderson 2 days ago 0 replies      
Last year I worked at an electronic publishing firm which had wasted $3 million and 5 years on a Ruby On Rails application which was universally hated by the staff, and which we replaced with 6 clean, separate services. The problem with the Ruby On Rails app is that it was trying to be everything to everyone, which is a common problem for monoliths in corporate environments. But the needs of the marketing department were very different from the needs of the publishing department. A request for a new feature would come in from the blogging/content division which would be added to the Ruby On Rails app, even though it slowed the app down for everyone else.

Six separate services allowed multiple benefits:

1.) each service was smaller and faster

2.) each service was focused on the real needs of its users

3.) each service was free to evolve without harming the people who did not use the service

There was some duplication of code, which suggests a process that is the exact opposite of "Monolith First":

Start with separate services for each group of users, then later look to combine redundant code into some shared libraries.

rukuu001 2 days ago 0 replies      
Here's Matt Ranney talking about how Uber's microservices-first approach allowed them to scale their workforce super fast; also how those microservices became a kind of decentralized ball of mud:


Havoc 2 days ago 0 replies      
I'd say the more correct interpretation is "don't introduce the complexity of modularity too early"
garganzol 2 days ago 0 replies      
Everyone who eats the food of a thought leader like Martin Fowler eventually meets a trap. Shiny ideas "that sound interesting" are like a candle flame for a moth.

I created a simple rule a long time ago: <insert name of "thought leader" here> last.

marichards 2 days ago 0 replies      
Modular monoliths can be a simpler medium. Writing modules of functionality that work on their own (in-memory integration tests) can easily be tested, separated into microservices, or assimilated into a monolith. Be wary of runtime functions shared between modules, as they will strictly couple the two and risk side effects on each other, tending towards spaghetti. But for monolith quick wins they can help for sharing management-dependent resources like database transactions.
tomerbd 19 hours ago 0 replies      
There is a really interesting discussion here, but I need to quit my day job to read it all :O
sctb 2 days ago 0 replies      
Discussion from a couple of years ago: https://news.ycombinator.com/item?id=9652893.
rukuu001 2 days ago 0 replies      
You (and I) are almost certainly going to get it wrong first time around. Which approach is most forgiving of errors? I'd say monolith.
y2hhcmxlcw 2 days ago 3 replies      
At what point will corporations that still design massive systems as an unmaintainable monolith figure out they can architect things better and save a ton of developer dollars? At what point do they start taking the good points from articles like this and either break those systems up into microservices or find some other solution?
oDot 2 days ago 1 reply      
There is a middle ground, and it's building a monolith that's anticipated to be broken down.
stuartaxelowen 2 days ago 0 replies      
I quite like the "web server and stream processors first" strategy, since it will take you much farther and retain the same code efficiencies as the monolith, but will also give more operational efficiency at minimal extra cost.
holografix 2 days ago 0 replies      
Monolithic 12-Factor apps, where you can abstract some of the requirements to managed services, like a DB service, an email service, etc. Someone already mentioned it here, but stateless app processes are a must.
jaxn 2 days ago 1 reply      
I think the same argument should be made for NOT writing tests for a prototype.

Build something useful, fast. Then refactor. Write tests when refactoring or fixing a bug, but not when prototyping.

a_imho 2 days ago 5 replies      
Might be OT, but what is the opinion on Martin Fowler in general?
Archiveteam are backing up SoundCloud archiveteam.org
334 points by hunglee2  1 day ago   213 comments top 24
radarsat1 1 day ago 4 replies      
Their website seems to be down. I'm just wondering: are they downloading everything accessible through the player, or just songs marked "Download"?

Even given that they could restrict themselves to songs marked okay to download, how much of that will be DJ mixes containing copyrighted songs?

I'm just wondering because Soundcloud actually has support to specify your copyright terms, which does not default to "everyone can download this", so it's an interesting case..

The website works now, it just says "selective content"

I'm sure SC serves up a lot of content per day, but how do you think they will react to someone suddenly downloading all of their 900 TB or whatever it is in one day? How much will Archiveteam contribute to SC's downfall by suddenly causing them a huge unexpected bill?

As someone who really wants the SC content backed up properly, I nonetheless see how this raises some interesting legal issues.

rnhmjoj 1 day ago 2 replies      
StavrosK 1 day ago 4 replies      
Hmm, I've been playing with IPFS lately, and just had an idea: Since IPFS is perfect for archival, Archiveteam could put their files on IPFS, and users could help out by pinning stuff on their local nodes. For example, I could ask their website to give me a 10 GB list of files to pin (if I wanted to "donate" 10 GB to them), and I'd keep them available.

The only problem is that I don't know whether IPFS has any way to gauge availability, so I'm not sure if the team could tell which files were only hosted by a few people.
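A sketch of the "donate N GB" idea (entirely hypothetical - neither Archiveteam nor IPFS exposes such an API): a tracker would hand each volunteer a subset of content hashes that fits their quota, and the volunteer would then pin each one, e.g. with `ipfs pin add <cid>`.

```python
# Illustrative only: greedily pick files to pin until a donated quota fills.
def files_to_pin(files: dict[str, int], quota_bytes: int) -> list[str]:
    """files maps content hash -> size in bytes; returns hashes to pin."""
    chosen, used = [], 0
    # A real tracker would prioritize the least-replicated files;
    # here we simply take them in catalog order.
    for cid, size in files.items():
        if used + size <= quota_bytes:
            chosen.append(cid)
            used += size
    return chosen

catalog = {"QmAAA": 4_000, "QmBBB": 3_000, "QmCCC": 5_000}
print(files_to_pin(catalog, 8_000))  # -> ['QmAAA', 'QmBBB']
```

Gauging availability is the hard part, as the parent says: the tracker would also need nodes to periodically report what they still hold.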

kevinmannix 18 hours ago 1 reply      
Shameless plug (has collected a bit of dust): https://github.com/krmannix/downcloud

It's a node tool built a few years ago to download the playlists of users through your command line. Might be helpful for a situation where you'd like to back up your own playlists.

You'll need to get an API key - not sure how feasible that is at this moment.

fvdessen 1 day ago 5 replies      
Why doesn't SoundCloud want my money ? There are no ads, and no paid plans for listeners. A lot of songs in my library disappear once the artist gets big and wants some cash from iTunes. I would have no problem paying for access to these songs but it's just not possible. I would also like to buy some band posters / t-shirts, vinyls, cds, show tickets etc, not possible either. It's like they are actively avoiding revenue streams. I don't get it.
cetra3 1 day ago 0 replies      
I've got a tool to grab all of the songs from your feed. I use this to offline sync mixes (not individual songs).


It would be an absolute shame if Soundcloud disappears. There has been so much music I have discovered on this service.

naturalgradient 1 day ago 3 replies      
Archiveteam seems like a really cool project. What I was wondering (and couldn't find in the FAQ) is who is paying for all the storage? Is it donated by big tech companies?
skeletonjelly 1 day ago 1 reply      
> Resource Limit Is Reached

> The website is temporarily unable to service your request as it exceeded resource limit. Please try again later.

I suppose I prefer an archive over the blog being unavailable

probably_wrong 1 day ago 5 replies      
The more I think about this, the more convinced I am that Archiveteam are actually detrimental (in the long run) to the well-being of the Internet.

Don't get me wrong, I appreciate the work they do, and without them lots of content would simply disappear. But solving this problem should be at the core of the protocol itself (Xanadu, anyone?[1]), not depend on the resources and goodwill of a single team.

Just like IPv6, I don't think the problem will be solved as long as there's a patch that somehow works.

[1] https://en.wikipedia.org/wiki/Project_Xanadu

ipsum2 1 day ago 6 replies      
I'm interested in people's opinions on the legality of this. They mention "Archive Team considers the SoundCloud service in danger and, as it hosts a lot of original content, finds it important to prepare to save it selectively (a full grab would be too big and would raise concerns of mass copyright infringement).", but how is downloading any portion of artist's music not copyright infringement?

I've written my own Soundcloud offline audio player, but didn't distribute it because it was against their TOS.

Steeeve 1 day ago 4 replies      
If these kinds of entities actually want to preserve resources, they shouldn't be generating a petabyte of bandwidth charges. Contact SoundCloud and come to a responsible agreement.
simonhfrost 1 day ago 1 reply      
Anyone interested in backing up their own personal (public) SoundCloud files will find this tool useful: https://github.com/mafintosh/soundcloud-to-dat
Sami_Lehtinen 1 day ago 0 replies      
Did anyone backup GrooveShark? I had some unique pieces stored there which seem to be lost forever.
eatbitseveryday 1 day ago 1 reply      
How can tracks be downloaded when many of them are pay-to-access? or stream-only?
random_calc 1 day ago 0 replies      
How would you find all files to download without scraping all Soundcloud pages?
philfrasty 1 day ago 4 replies      
Just from a legal standpoint: isn't "...considers the SoundCloud service in danger..." slander? Especially since the Internet Archive isn't a nobody (with maybe some inside information?).

Just remembered cases like Deutsche Bank vs Leo Kirch which are legal nightmares.

imartin2k 1 day ago 4 replies      
I hadn't heard of Archiveteam before, but the fact that HN brings their site down doesn't make me too confident in their backing up skills :)
chinathrow 1 day ago 2 replies      
What's the current context here? Is SoundCloud going away anytime soon?
_pmf_ 1 day ago 1 reply      
I don't know why SoundCloud doesn't have a huge, Wikipedia-esque donation banner on every page illustrating the severity of their financial situation; it's embarrassing for them, but think of it like this: the artists have a right to know that the platform that manages their life's work needs their support.
omarforgotpwd 1 day ago 1 reply      
That's probably not going to help with their huge bandwidth costs
majortennis 1 day ago 0 replies      
Aww I like soundcloud
CharlesDodgson 1 day ago 2 replies      
Does anyone feel that this is all a bit pointless. Like is there a greater social need to preserve SC?

I found a lot of the content to be ephemeral - things like podcasts or DJ mixes. I dunno, it just seems a bit silly to put resources into it.

streamspeek 1 day ago 2 replies      
I'd recommend transcoding to 96kb/s VBR Ogg Opus during crawling.
chazzeromus 1 day ago 1 reply      
Probably not an amazing and erudite source as others have posted but there's this tweet: https://twitter.com/chancetherapper/status/88592122075935539...

Soundcloud is here to stay

Cybersecurity Humble Book Bundle humblebundle.com
402 points by ranit  20 hours ago   81 comments top 13
dsacco 19 hours ago 9 replies      
So, I've read most of these. Here's a tour of what is definitely useful and what you should probably avoid.


Do Read:

1. The Web Application Hacker's Handbook - It's beginning to show its age, but this is still absolutely the first book I'd point anyone to for learning practical application security.

2. Practical Reverse Engineering - Yep, this is great. As the title implies, it's a good practical guide and will teach many of the "heavy" skills instead of just a platform-specific book targeted to something like iOS. Maybe supplement with a tool-specific book like The IDA Pro Book.

3. Security Engineering - You can probably read either this or The Art of Software Security Assessment. Both of these are old books, but the core principles are timeless. You absolutely should read one of these, because they are like The Art of Computer Programming for security. Everyone says they have read them, they definitely should read them, and it's evident that almost no one has actually read them.

4. Shellcoder's Handbook - If exploit development is your thing, this will be useful. Use it as a follow-on from a good reverse engineering book.

5. Cryptography Engineering - The first and only book you'll really need to understand how cryptography works if you're a developer. If you want to make cryptography a career, you'll need more; this is still the first book basically anyone should pick up to understand a wide breadth of modern crypto.


You Can Skip:

1. Social Engineering: The Art of Human Hacking - It was okay. I am biased against books that don't have a great deal of technical depth. You can learn a lot of what's in this book from online resources and, honestly, from common sense. A lot of this book is infosec porn, i.e. "Wow, I can't believe that happened." It's not a bad book, per se; it's just not particularly helpful for a lot of technical security. If it interests you, read it; if it doesn't, skip it.

2. The Art of Memory Forensics - Instead of reading this, consider reading The Art of Software Security Assessment (a more rigorous coverage) or Practical Malware Analysis.

3. The Art of Deception - See above for Social Engineering.

4. Applied Cryptography - Cryptography Engineering supersedes this and makes it obsolete, full stop.


What's Not Listed That You Should Consider:

1. Gray Hat Python - In which you are taught to write debuggers, a skill which is a rite of passage for reverse engineering and much of blackbox security analysis.

2. The Art of Software Security Assessment - In which you are taught to find CVEs in rigorous depth. Supplement with resources from the 2010s era.

3. The IDA Pro Book - If you do any significant amount of reverse engineering, you will most likely use IDA Pro (although tools like Hopper are maturing fast). This is the book you'll want to pick up after getting your IDA Pro license.

4. Practical Malware Analysis - Probably the best single book on malware analysis outside of dedicated reverse engineering manuals. This one will take you about as far as any book reasonably can; beyond that you'll need to practice and read walkthroughs from e.g. The Project Zero team and HackerOne Internet Bug Bounty reports.

5. The Tangled Web - Written by Michal Zalewski, Director of Security at Google and author of afl-fuzz. This is the book to read alongside The Web Application Hacker's Handbook. Unlike many of the other books listed here it is a practical defensive book, and it's very actionable. Web developers who want to protect their applications without learning enough to become security consultants should start here.

6. The Mobile Application Hacker's Handbook - The book you'll read after The Web Application Hacker's Handbook to learn about the application security nuances of iOS and Android as opposed to web applications.

EnFinlay 20 hours ago 5 replies      
Is there a legal / not crazy expensive way to buy humble bundle books and get them printed on standard 8.5x11, bound in a series of binders / duotangs / twine? I'm going to buy the bundle, but greatly prefer physical pages to reading on a screen.
Tepix 16 hours ago 3 replies      
I use 2FA on Humble Bundle. In order to log in, I have to solve several captchas. I then have to solve more to buy stuff.

All in all I have to solve the captcha 5 times or so, each time involves marking multiple images.

What sense does this make?

Either they trust the captchas (then they only need one), or they don't (then they should remove them). I've complained about this to them in the past but they haven't changed it.

mr_overalls 19 hours ago 2 replies      
Schneier's "Applied Cryptography" by itself justifies the $15 bundle, IMHO. This is a great deal.
dronemallone 13 hours ago 1 reply      
Security Engineering is free on the author's website :) http://www.cl.cam.ac.uk/~rja14/book.html
kirian 18 hours ago 0 replies      
I find this offering ironic - "Bitcoin payments have been disabled for the Humble Book Bundle"
twoquestions 18 hours ago 0 replies      
Great, now there's another collection of books which I'll want to read which I'll feel bad about missing the deal for, then kick myself for never actually reading them in-depth.

I think I've bought 50 books from Humble Bundle (spending about $1/book), but I've only cracked open a few of them.

Also thank you dsacco for the recommendations!

znpy 18 hours ago 0 replies      
Remember to choose a charity for your donation!

ProTip: entities like the FSF, the EFF, Wikimedia and many others can be helped via the humble bundle!!

_coldfire 9 hours ago 0 replies      
To download all books at once: https://gist.github.com/graymouser/a33fbb75f94f08af7e36

Improved *nix version further down the thread

Change "MOBI" to "PDF"/"EPUB" if desired
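Stripped of the DOM scraping, the core of such a script is just selecting the download links for one format. A minimal sketch of that selection step (the `{ title, format, url }` shape is invented for illustration and is not Humble Bundle's actual markup):

```javascript
// Given a list of download entries, keep only the URLs for the chosen format.
// The entry shape here is hypothetical, for illustration only.
function linksForFormat(downloads, format) {
  return downloads
    .filter((d) => d.format.toUpperCase() === format.toUpperCase())
    .map((d) => d.url);
}
```

The gist does the same thing against the live page, then triggers a click per link.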

nonamechicken 19 hours ago 5 replies      
I am interested in learning more about securing web servers (nginx, Node.js). Is there a book in this bundle that could help me? If you know any good books, please recommend one.
komali2 18 hours ago 0 replies      
Fantastic, glad to have more reading to prep for defcon!
SadWebDeveloper 19 hours ago 1 reply      
CEH v9 at the 15 USD bundle level is quite a joke; IMHO it should go in the 1 USD tier. But anyway, as someone said, Applied Cryptography might be the selling point here.

Personally speaking, the only valuable books in this bundle are "Practical Reverse Engineering: x86, x64, ARM, Windows Kernel, Reversing Tools, and Obfuscation" and "Applied Cryptography: Protocols, Algorithms and Source Code in C, 20th Anniversary Edition"; the others are either quite outdated, too oversimplified, or script-kiddie level stuff.

gergles 20 hours ago 1 reply      
"Pay what you want^"

^As long as it's at least $15.

It bothers me that Humble Bundle has so heavily embraced this type of marketing.

Brazil to open 860K acres of protected Amazon rainforest to logging, mining, etc independent.co.uk
279 points by SimplyUseless  2 days ago   64 comments top 13
ufo 2 days ago 0 replies      
The article didn't talk about the political context: the current president is facing corruption charges and a threat of impeachment and is buying support from congress by opening the government coffers (or what remains of them) and ceding to various lobbying groups.
the_absurdist 2 days ago 6 replies      
Take a commercial flight where you have a chance to look down on the rainforest in Brazil or Peru.

You'll be absolutely astonished and sickened at how much is already gone.

noddy1 2 days ago 3 replies      
sadly capitalism seems to be an incredibly effective and efficient way to completely destroy the rainforests.

there does not seem to be any solution.

fithisux 2 days ago 0 replies      
It never made countries richer. Only poorer. In this case, it will make the planet poorer.
spodek 2 days ago 2 replies      
What responsibility does a typical reader of HN have?

What can a reader do?

deusum 2 days ago 2 replies      
While an important issue, the article states that it is still in the proposal phase.
grwthckrmstr 2 days ago 0 replies      
As an average HN reader, I'm trying to understand the environmental consequences of these actions by the Brazilian government to Brazil and the World. Can someone ELI5?
chatman 2 days ago 1 reply      
The forces which threw out Dilma Rousseff are behind this over-commercialization. Dilma would never have let this happen.
jokoon 2 days ago 2 replies      
I predict that in the future, brazil will be invaded to stop logging so that those trees can keep making oxygen.

I wonder, doesn't most oxygen come from those trees? I've read the expression "lung of the planet". Essentially it seems it's a matter of survival.

Although increased CO2 concentration might also accelerate the growth of trees, so I don't really know.

horsawlarway 2 days ago 4 replies      
randyrand 2 days ago 0 replies      
This is great news!
gigatexal 2 days ago 0 replies      
Wow. A failing state pimps its resources. This is terrible.
louithethrid 2 days ago 2 replies      
There will be the usual walk through the park of unusable vectors to solve this.

The only thing that could stop this is direct action - meaning, editing organisms to resist humanity. Making trees fireproof, equipping animals with diseases. The only wolf to hold back man is another man.

Remacs A community-driven port of Emacs to Rust github.com
293 points by sndean  1 day ago   160 comments top 16
mschaef 1 day ago 14 replies      
This looks like a lot of work for what amounts to very little end-user benefit.

Honestly, I'd rather see more progress on the Emacs Lisp/Guile migration than on a putative C/Rust migration. At least with the Guile switch, it's something that will have an obvious (and probably positive) impact on the people that use the editor.

BeetleB 1 day ago 4 replies      
Not intended to be a port, but I occasionally search for an editor that is customizable in Python to the extent Emacs is.

Leo Editor (http://leoeditor.com) fits that bill. It's been around since the 90's (and the web page looks like it's from the 90's). Even though it has many users, the documentation is quite poor. If I had another life to live, I would learn it well and improve the docs - and then port over everything I like in Emacs to it.

It is sorely in need of more tutorials. You'll find some in various people's blog posts, but not enough. I know enough of it to know that it is very powerful, but not enough to use it well day-to-day.

kronos29296 1 day ago 1 reply      
Looks promising. First Neovim and now this. Even if it doesn't replace Emacs (which it won't), if it can enrich the community and maybe bring more users while ridding an archaic code base of hard-to-find errors and UB, then it is still a win. Neovim pushed Vim (Vim suddenly became more active around the time Neovim gained traction, at least according to GitHub graphs and releases). So I hope this project brings new life to the core C code in Emacs and fixes the problems that it never had (according to the core devs). When I tried Emacs I found it to be a bit like Atom at startup. Couldn't get into the keybindings coming from Vim.

If it gets faster, I might give it a try again (in Evil of course, gotta have them keys).

njharman 1 day ago 0 replies      
Makes sense. Rust is a systems language and Emacs is an OS replacement.
flavio81 1 day ago 1 reply      
As one poster here said, "Makes sense. Rust is a systems language and Emacs is an OS replacement."

If anything, it could be great if Rust is used to implement a faster Emacs Lisp compiler/interpreter, thus preserving all of Emacs Lisp source (and saving some implementation time.)

And make it fully multitasking so there are no more problems of key input getting stuck because of a process.

Blackthorn 1 day ago 1 reply      
What a lovely idea!

Question to the devs: are you planning on redoing how the gui subsystem works, to work with (for example) the gtk event loop? I've always wanted to try to put a qt frontend on emacs, but have always been horrified away by that subsystem.

Tomte 1 day ago 2 replies      
I guess every language needs an Emacs port to grow up.

None of those ports ever see even the tiniest beginning of something actually usable (heck, even Climacs faltered), but I'm still all for it!

gkya 1 day ago 3 replies      
How hard would it be for the existing developers of C Emacs to get on board with Rust if they decided to make this the upstream core? I've never studied Rust, but I did follow its evolution, and the language seems complicated to me compared to C.
Myrmornis 1 day ago 1 reply      
The README doesn't say much about whether and how the rewrite would differ from the main Emacs implementation in terms of threading architecture. A major problem (I think this is an uncontroversial statement) with the main Emacs implementation is that it is fairly common, in every day use, to block the main thread responsible for accepting keyboard input and repainting the screen etc. This (again, uncontroversial I think) is embarrassing in such an otherwise great bit of software in 2017.
NoGravitas 1 day ago 0 replies      
This is only topic-adjacent, but here is my number one emacs wish: for the GTK frontend to be pure GTK with no X, so that it can be run directly as a Wayland client.
coldtea 1 day ago 2 replies      
I'd like to see a community-driven port of Sublime Text to Rust, with a basic ST3 like cross-platform UI, and offering a base set of UI primitives (buttons, panels, dropdown, inline styled annotations, etc) available for plugins, and first class support for JS plugins/extensions.
comradesmith 1 day ago 1 reply      
What we need next is Rim to start the next phase of the holy wars.
hammerandtongs 1 day ago 3 replies      
Some speculatively useful things that could arise out of this -

It may have useful security properties over time. People use Emacs for a great deal of arbitrary text reading and transformation, on input coming directly from email and the web. Elisp would largely be at fault for security bugs, but perhaps this would help in some areas.

Perhaps a binding to use Alacritty directly for a port to Wayland? Emacs has a pretty terminal-centric view of its UI output as is.

A tight binding to Servo allowing a serious upgrade to Emacs visual and ui capabilities.

hammerandtongs 1 day ago 1 reply      
Doesn't seem to have a copyright assignment policy in place so this can't be upstreamed as is.


So opened - https://github.com/Wilfred/remacs/issues/238

0x7a69 1 day ago 1 reply      
As an avid emacs user... I would never use this.
0xbear 1 day ago 0 replies      
Prediction: within a year this project will be dead due to the lack of uptake. It doesn't solve any real problems and uses a relatively obscure programming language. There's no way it will ever reach anything close to a critical mass.
A real world guide to WebRTC deepstreamhub.com
368 points by wolframhempel  1 day ago   59 comments top 16
xrjn 1 day ago 1 reply      
We recently started using WebRTC to transmit video from Raspberry Pis where I work. There were a lot of gotchas that weren't obvious to someone who's never worked with VOIP or related technology: STUN and TURN servers[0], 300-second idle timeouts that shouldn't affect the connection but killed the video stream regardless, and dropped calls which forced us to reboot the Pi.

In the end we managed to get something smooth working with UV4L[1] on a RPi costing us a fraction of the previous solution.

[0] http://numb.viagenie.ca/ has a free one

[1] https://www.linux-projects.org/

myfonj 1 day ago 5 replies      
> You will need [...] and drumroll - a server.

Actually, you can make a poor human do the signalling server's work: http://blog.printf.net/articles/2013/05/17/webrtc-without-a-...

nottorp 1 hour ago 0 replies      
They missed the most important piece of information about this:

How the hell do I turn it off in all my browsers?

Tepix 1 day ago 3 replies      
I'd love to see a minimal example of

a) client and server that only use the WebRTC data connection for low latency UDP based communications

b) a p2p example for a group of clients using WebRTC data connection

This website seems to cover b) but not a)

patrickaljord 1 day ago 2 replies      
Nice guide. A little warning: while WebRTC is known as a p2p tech, what few people realize before starting to use it is that while it is technically p2p, you need 3 to 4 centralized servers to make a connection between two peers. Just google TURN, STUN and ICE servers and protocols; you will also need a signaling server (usually your app) and a web server where the app is hosted. That's why most people use WebRTC-as-a-service solutions or all-in-one WebRTC servers that are hard to customize/set up.
dbrgn 1 day ago 0 replies      
Nice guide. In contrast to what the article says about data channel size limitations (1200 bytes), 16 KiB seems to be safe: https://lgrahl.de/articles/demystifying-webrtc-dc-size-limit...

In case you want to do chunked data channel transfers but don't want to implement the chunking yourself, I wrote a library to do just that: https://www.npmjs.com/package/@saltyrtc/chunked-dc It's based on this specification: https://github.com/saltyrtc/saltyrtc-meta/blob/master/Chunki...
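For readers who do want to roll their own chunking, the general idea is small: cut the payload into slices under the safe message size and prefix each slice with a sequence number so the receiver can reassemble in order. This sketch is not the saltyrtc wire format, just an illustration of the technique:

```javascript
// Split a Uint8Array into <=16 KiB chunks, prefixing each with a 4-byte
// big-endian sequence number so the receiver can reassemble them in order.
const CHUNK_SIZE = 16 * 1024; // 16 KiB is widely considered safe per message

function chunkify(bytes, chunkSize = CHUNK_SIZE) {
  const chunks = [];
  for (let i = 0, seq = 0; i < bytes.length; i += chunkSize, seq++) {
    const body = bytes.subarray(i, i + chunkSize);
    const chunk = new Uint8Array(4 + body.length);
    new DataView(chunk.buffer).setUint32(0, seq); // sequence number header
    chunk.set(body, 4);
    chunks.push(chunk);
  }
  return chunks;
}

function reassemble(chunks) {
  // Sort by the sequence header, then concatenate the bodies.
  const sorted = [...chunks].sort(
    (a, b) =>
      new DataView(a.buffer).getUint32(0) - new DataView(b.buffer).getUint32(0)
  );
  const total = sorted.reduce((n, c) => n + c.length - 4, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const c of sorted) {
    out.set(c.subarray(4), offset);
    offset += c.length - 4;
  }
  return out;
}
```

On the sending side you would call `dataChannel.send(chunk)` per chunk; the receiver collects chunks until some end-of-transfer signal (not shown) and then reassembles.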

ShirsenduK 1 day ago 5 replies      
IMO, WebRTC is the technology which will make browsers trump apps. Especially, now as iOS 11 starts shipping with support for it.
aphextron 14 hours ago 0 replies      
WebRTC badly needs interop with the official BitTorrent protocol. It's a neat API for now, but doing this would allow web clients to take advantage of the massive seedbox infrastructure already out there, making it actually useful.
samsonradu 1 day ago 0 replies      
Wowza Streaming Engine now provides WebRTC support (beta), which is great news since it can ingest WebRTC streams and transcode into any other formats for usage with non-compatible devices.

As mentioned in the article already, P2P is quite heavy to scale (one-to-many streaming) so you will likely need centralisation. WebRTC is also fragile in the wild due to NAT. Wowza would split the NAT problem into smaller pieces at least.

Not involved with the product, just really excited about it.


allpratik 1 day ago 0 replies      
Shamelessly plugging a job opening here! But it is hard to find people who love taking on challenges like this. WebRTC-based development is hard, and scaling it is even harder. We're developing a WebRTC-based product to be used at massive scale.

And we have an opening for experienced Node.js backend engineers. We'll give you the opportunity to transition into a WebRTC role as well.

If interested and experienced in the Node.js ecosystem, please forward your CV to mail ( at ) khandagale.com while mentioning HN in the title.

Position Location: Remote/India
Min exp needed: 2+ yrs

n-gauge 21 hours ago 0 replies      
What I would like to see is something like:

var conversation = new dataChannel("Chat") // if 'Chat' is valid then join 'Chat' else create a new connection

if (conversation.init) {
  onConversation.receive(fn); // some event model
  conversation.send("Hi");    // some receive fn...
}

Pigo 1 day ago 3 replies      
So none of this functionality would work for mobile browsers? Or can you at least receive messages somehow?

After some googling I found OpenWebRTC which says "WebRTC standard would transcend the pure browser environment and that native apps, implementing the same protocols and API's, would become an important part of the WebRTC ecosystem". But I thought the beauty of this would be transcending the need for native apps.

tenryuu 1 day ago 1 reply      
The guide dabbles into screensharing on Firefox, but doesn't offer any instructions for it, even though it's properly supported by the browser
bflesch 1 day ago 0 replies      
This is a great guide, really appreciate it.
davidcarrington 18 hours ago 0 replies      
Helpful site to test for WebRTC IP leaks.


z3t4 1 day ago 0 replies      
"Chrome extension not found"
Gpu.js GPU Accelerated JavaScript gpu.rocks
318 points by olegkikin  3 days ago   75 comments top 23
dewhelmed 3 days ago 1 reply      
From their post-hackathon devpost at https://devpost.com/software/gpu-js,

What I learned: Not to make a compiler during a hackathon.

jingwen 3 days ago 2 replies      
I've built an animated raytracer using this library.


The compilable JavaScript subset is still fairly limited, so vector operations are quite painful. Beyond that, it's a great library to start with for parallel computation in the browser if you have no background in WebGL/GLSL.
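For context on what a kernel looks like: gpu.js takes a plain JS function and compiles it to GLSL, running one invocation per output cell. Below is a hypothetical CPU-only sketch of matrix-multiply kernel logic in that style, runnable without the library (the real gpu.js API differs in details such as `this.thread.x`/`this.thread.y` indexing):

```javascript
// CPU-mode sketch of a gpu.js-style matrix-multiply kernel.
// With gpu.js, the inner loop body would become a GLSL shader and each
// (y, x) output cell would be computed in parallel on the GPU; here we loop.
function makeMatMulKernel(size) {
  return function (a, b) {
    const out = [];
    for (let y = 0; y < size; y++) {
      out.push([]);
      for (let x = 0; x < size; x++) {
        // This dot product is the "kernel body" a GPU would run per cell.
        let sum = 0;
        for (let i = 0; i < size; i++) {
          sum += a[y][i] * b[i][x];
        }
        out[y].push(sum);
      }
    }
    return out;
  };
}
```

The GPU win comes from the outer two loops disappearing: every output cell runs concurrently as a shader invocation.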

RubenSandwich 3 days ago 1 reply      
One thing that might be helpful is if you linked to the Github repo on the site: https://github.com/gpujs/gpu.js. Especially for a project like this, high-performance computing, I'm gonna want to look at the code before I trust it.
AndrewKemendo 3 days ago 0 replies      
I saw this a few months back when I was researching GPGPU implementations in the browser. It looked like one of the better projects out there. Do you know how much theoretical GPU allocation you can get from a JS implementation vs say a native wrapper?

Worth noting that Clarifai just released an SDK for offline mobile DL training/evaluation. Not browser based but I'd be curious what the difference in GPU utilization is practically.

andreasklinger 3 days ago 3 replies      
Is JS via GPU (WebGL I assume) already efficient enough to make the concept of "computation time instead of banners" finally viable?
daenz 3 days ago 1 reply      
cool! shameless self-promotion, I wrote a similar thing for a blog post

travelling salesman in js+gpu: https://amoffat.github.io/held-karp-gpu-demo/


snarfy 3 days ago 4 replies      
Things like this make me fear banner ads mining bitcoins (not that they aren't already).
skrowl 3 days ago 2 replies      
v0.0 Alpha

My project manager just heard "So, you're saying this is production-ready? Great!"

codefined 3 days ago 3 replies      
Is anyone else getting consistently slower speeds on their GPU compared with their CPU? I seem to be getting:

CPU: 0.426s 7.6%

GPU: 2.399s 4.7%

Running Chrome, Latest Stable. Windows 7. It seems odd to me that it would take 6x longer when my graphics card (GTX 690) would theoretically be much faster than my CPU (Intel i7-3930k)

netvarun 3 days ago 1 reply      
Next steps: A computational graph based deep learning system with a Tensorflow like API on Javascript/Node lala-land.

https://github.com/tqchen/tinyflow would be a great showcase (and useful) project to port for Gpu.js

bhouston 3 days ago 0 replies      
This is very similar to the idea of Sh/RapidMind (eventually acquired by Intel.)

Some details:


Never really took off. In the end OpenCL and CUDA were the winners in this space and OpenCL and CUDA, while explicit GPU languages, can be simulated on the CPU. I think this pattern will continue.

noobermin 3 days ago 2 replies      
What is NUS, National University of Singapore?
stared 3 days ago 0 replies      
How does it compare to weblas (GPU Powered BLAS for Browsers, https://github.com/waylonflinn/weblas)?

I had in mind matrix operations for neural networks, as in https://github.com/transcranial/keras-js.

ilaksh 3 days ago 1 reply      
How does this compare to Turbo.js?
Nican 3 days ago 1 reply      
I am curious: how does it get an AST from the JS and compile it to compute kernels?
sscarduzio 3 days ago 3 replies      
After running these GPU benchmarks I had to reboot my Macbook because all my open windows went scrambled and unreadable.

Not sure what's the root cause for this.

TazeTSchnitzel 2 days ago 0 replies      
Function.prototype.toString is one of the more interesting JavaScript features. This is a fun use of it.
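A toy illustration of the mechanism: `toString` hands back the function's source text, which a library can then parse and recompile to another target. The regex "parsing" below is a deliberate oversimplification; real transpilers build a full AST:

```javascript
// Function.prototype.toString returns the source text of a function,
// which is what lets a library like gpu.js re-compile JS to another target.
function kernel(a, b) {
  return a * b + 1;
}

const source = kernel.toString();

// Naive "parsing": pull out the parameter list. Real tools use an AST parser.
const params = source
  .slice(source.indexOf('(') + 1, source.indexOf(')'))
  .split(',')
  .map((s) => s.trim());
```

From here a transpiler would walk the body, map supported operations to the target language, and reject anything outside its subset.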
revelation 3 days ago 0 replies      
Well that worked entirely as expected, browser locks up, unresponsive and utterly slow.
nikkwong 2 days ago 0 replies      
This looks cool. Can someone help explain to me a real world use case?
gthinkin 3 days ago 0 replies      
Super excited to experiment around with this.
tshiran 3 days ago 0 replies      
This is cool
Abishek_Muthian 3 days ago 3 replies      
A major caveat is that it's still JavaScript, so the performance varies by browser, among other variables.

Benchmark on iPad Air 2 (iOS 10.3.2) is close for both Chrome and Safari.


CPU: 6.110s 1.9%
GPU: 0.487s 1.0% (12.55 times faster!)

Chrome 59

CPU: 4.454s 51.8%
GPU: 0.483s 0.9% (9.22 times faster!)

Project looks promising though, congratulations to the team.

In Urban China, Cash Is Rapidly Becoming Obsolete nytimes.com
308 points by KKKKkkkk1  1 day ago   320 comments top 43
thablackbull 1 day ago 12 replies      
I think this is a good illustration of the different approach in how China and America approach the consumer economics space. In America, there is a very heavy reliance on "trickle down". We create a "best-in-class", "top-of-the-line" product that only the elite and wealthy can buy into. These people then set the tone of how it will develop in the future. The first mover also tries to do everything in their power to lock you into the ecosystem. Examples of these are iPhones (and the app ecosystem), Tesla, etc.

In China however, they throw away that pride of creating the greatest and shiniest. Instead, they start off with an "inferior" product but use the scale of their population to bring down costs and they try to ensure everyone has access. The top example is smartphones. In the article itself, "Even the buskers were apparently ahead of me. Enterprising musicians playing on the streets of a number of Chinese cities have put up boards with QR codes so that passers-by can simply transfer them tips directly." China starts off with a product that gives every citizen a chance to get in on, not just the wealthy, and they build up their economy from the bottom up, rung by rung.

The way I see it, we have democracy in politics, but in products it's authoritarian, because it's guided by the billionaires and wealthy. In China, they have an authoritarian political system but a democratic economic space, because people can actually vote with their wallets.

hn_throwaway_99 1 day ago 7 replies      
I think this article hit the nail on the head when it pointed out that being ahead of the curve at one time (with Japan and their advanced flip phones in the early 2000s) can make it harder to adopt the next big advance because what you have is already "good enough".

Thus, in the US, I suspect that mobile payments haven't taken off as much because they are only very slightly faster to use than a credit card, as opposed to cash, which is much slower with people counting out amounts and cashiers counting out change. Before Android Pay came out and there was Google Wallet, I tried using Google Wallet but was super disappointed - it failed about 5% of the time, where my credit cards almost never failed. Lately, though, I've started using Android Pay and I've been really impressed. Just one tap with my phone and it's done, and it's very reliable. Still, though, it's really only slightly faster than swiping my credit card, especially for small amounts where a signature isn't required.

HeavenFox 1 day ago 6 replies      
Born in China and moved to U.S. since college. In my observation, there are several reason for the boom in mobile payment in China that makes it hard to replicate.

- Low transaction fee & minimal barrier. For merchants taking WeChat pay, the transaction fee seems to be a flat 0.6%, compared to 2%-3% in the U.S. For the food cart vendors and similar one man shops, they just use the person-to-person payment feature (like Venmo), which has no transaction fee and does not need a merchant account. While Square helped somewhat in the U.S., the transaction fee, for many, is perceived as a rip off.

- Cards are a pain to use. In the U.S. you sign. In EU you use a PIN. In China you do both. Usually, it seems slower than cash! Mobile payment, comparatively, is a breeze. However, it's quite difficult to argue that taking out the phone, unlock it, open the app and show the QR code is easier than using a card in the western world.

There are some deeper historical reasons for these two conditions, which I would not dive into in this comment, but needless to say, it's a much, much more fertile ground for mobile payment to blossom.
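To make the fee gap concrete, here is the arithmetic on a $100 sale, using the 0.6% figure quoted above and taking 2.9% as a representative US card rate (an assumption; actual US rates vary by processor):

```javascript
// Merchant's take-home on a sale at a given percentage fee, in cents.
function netAfterFee(amountCents, feeRate) {
  return Math.round(amountCents * (1 - feeRate));
}

// On a $100 (10000-cent) sale:
const wechatNet = netAfterFee(10000, 0.006); // 0.6%  -> 9940 cents
const usCardNet = netAfterFee(10000, 0.029); // 2.9%  -> 9710 cents
```

A $2.30 difference per $100 is small per transaction, but for thin-margin food-cart vendors it compounds into a real reason to prefer the cheaper rail.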

ksec 1 day ago 1 reply      
Some perspective and experience from while I was in China.

I couldn't set up WeChat Pay in time, so like the author, I had to use cash. And in my experience, between a Tier 2 city and a Tier 1 city like Shanghai, it is actually the Tier 2-3 cities that refuse to accept cash. They think cash is cumbersome. Shanghai may be more welcoming of cash / credit cards for a variety of reasons (likely privacy).

I tried to buy breakfast for ¥2 (roughly 30 cents in USD); I didn't have WeChat Pay, so I paid in cash. The shop owner said she didn't have any change and decided to give me the meal for free.

Over the 15 days, I kept thinking about QR codes as tech. Trust me, it is crap. I don't believe there is that much time difference holding up the queue as noted in the article. You have to open up WeChat, let each other scan, input the payment yourself, and show them you have paid. To me this is very backwards. Octopus in Hong Kong, Suica in Japan, the Oyster card on London transport, offline mobile payments from Mastercard and Visa - all of these are 10x better in UX. The merchant inputs my bill, I beep. That is it.

Then one day it suddenly clicked, and I knew why QR codes succeed. With NFC or any other wireless payment, the seller needs something - electronics, a beacon, whatever - for it to work. With a QR code, they "print" a QR code and laminate it. And you can photocopy as many pieces as you want. The barrier to entry for QR codes is so low that everyone can use them, at the expense of consumer UX.

And then, weeks after I visited China, Apple announced at WWDC that iOS 11 will come with automatic QR code opening in camera mode. That may have just made the friction of using QR codes much more bearable for me. It is likely both techs will exist side by side long into the future. I can't see QR codes as used today ever getting replaced by the more expensive NFC solution, and NFC will likely stay in transport and other areas.

NamTaf 1 day ago 6 replies      
Everyone in Australia just uses PayPass/payWave contactless payment with their credit cards. I rarely carry any significant amount of cash for that reason. It's essentially the same as using your phone, except you use your credit card, and it doesn't require a PIN or anything for <$100. Once I have a transaction over $100 I can tap and enter my PIN, or use chip and PIN, or if everything fails, occasionally I need to swipe and PIN. Most critically, signature is no longer allowed.

Without having used WeChat's approach with QR code scanning, it doesn't seem that different for the end user in practice. Either way, you're scanning an object you carry against some target environment and a merchant is processing the transaction for you. It's even closer once you use eg: Apple Pay with your CCs loaded into the Apple system - you're tapping your phone rather than scanning via the camera.

I'm guessing that the CC infrastructure simply wasn't there and pushing out EFTPOS units was a far higher barrier to entry than simply running an app on your newly acquired cheap chinese smartphone. As such, the CC merchants missed the boat whereas the big phone app companies got in on the ground floor. On the flipside, many Western countries had all the EFTPOS terminals already, so contactless just became the next iteration on that.

I don't necessarily agree that the country building its commerce systems around Tencent, etc. as 'private companies' is a problem, since most of the West hinges on Visa, Mastercard, and to a lesser extent Amex. They're all private companies too. Maybe the government will step in and standardise the QR code system eventually to reduce the risk. I don't really see it playing out much differently from how the West did with CC contactless.

ajiang 1 day ago 3 replies      
I had the most surreal experience in the Shenzhen airport the other day. I was out of cash, with only my credit cards and Android Pay. There was no place in the entire airport where I could pay with Visa or Mastercard - the first time I felt truly helpless in China. Uber didn't work either. I was forced to set up WeChat Pay, which works absolutely everywhere.
hbarka 1 day ago 2 replies      
Take the credit card swipe versus chip example. The chip is an improvement over the really old swipe tech in theory. In the US, using the chip is a terrible experience because it takes a lot longer for the process to complete. Why? Because merchant processors have no interest in putting the chips on a fast network because they can get higher fees on the swipe and conversely the stores are charged higher fees for enabling the chip. There you go. There's the moral question and the profit question all in a real example that didn't improve the experience for the end user.
theylon 1 day ago 1 reply      
Living in China for the past two years - Important thing to remember - in China and in most of South-East Asia the tech boom came at a later stage when smartphones were already prominent. Android + Chinese manufacturing made it possible to produce cheap smart phones. The result is that these countries have completely skipped the laptop phase and went straight to smartphones, most of the population doesn't even know what a laptop is. What the west calls eCommerce (amazon, Ebay etc) is called mCommerce in Asia (Taobao app, JD.com App etc). Asian consumers find it easier and more natural to shop using smartphones rather than laptops like Western consumers.

Oh and yes, When I leave the house I don't even take my wallet, everything is paid using wechat.

itchap 1 day ago 1 reply      
I have been living in Shanghai for a few years now. Alipay and WeChat are really a big thing. It is far from the vision of mobile payments people have in Europe, for example, where it is more of a solution for small payments.

People are using Alipay and WeChat Pay for everything, be it online shopping, restaurants, supermarkets, train tickets, electricity, water, even some tax declarations. So it is not just a replacement for cash and bank cards; it is starting to go beyond that.

I go for months without having any cash or bank cards on me. Unfortunately everything falls apart when my phone runs out of battery.

justjimmy 1 day ago 1 reply      
And for those rare places that only do cash, it's not uncommon for the person in line to turn around and ask a stranger for cash and they'll pay that person in a WeChat transfer. Done in seconds. Happened to me and observed it myself a couple of times.

With digital wallets, there's zero need for credit cards and their predatory interest rates.

China is just absolutely crushing it. The Amazon self-serve store that's still in testing in the USA? Taobao pushed their version and it went live last week.

2muchcoffeeman 1 day ago 2 replies      
Is this faster than PayWave?

Ever since I could tap to pay with my card in AU, I have been going months without using cash. I usually still have money, but never use it. I've gone almost 2 months with literally 0 dollars in my wallet.

EZ-E 1 day ago 3 replies      
Can confirm: in most supermarkets, paying takes literally one second. You pull out your phone and show your Alipay/WeChat QR code, the cashier scans it, and one second later it's done.

People paying in cash is a small minority

Besides speeding up lines, they can also serve more customers with fewer cashiers

In France, most people pay with a bank card, the rest with cash. The process is longer and less user friendly

lucaspiller 1 day ago 1 reply      
It sounds like one of the reasons why this has grown so quickly - compared to mobile payments in the west - is that you can just sign up and have it instantly. Apple Pay and the like need your bank and the retailer to support it which doesn't happen straight away. In this case it sounds very much like the dream of Bitcoin and cryptocurrencies.

The article mentions that not everyone has access to it, though. If you can just sign up, why is that (they don't want it, or something else)?

And what about privacy? Obviously everything you buy is being fed into a big government database somewhere. Do people in China not care about that (has privacy been eroded so much)?

psy-q 1 day ago 1 reply      
It's interesting that they worry about lock-in from Tencent and Alibaba in China while not mentioning the same lock-in and issues for Google, Microsoft and Apple in the rest of the world.

People who don't want to get a Google account already get a degraded product when using Android phones, and I'm not even sure an iPhone works without an Apple ID. That Google Wallet flopped is just a happy coincidence in this regard, but Apple Pay so far hasn't, and then we have the same situation there as with e.g. Alibaba.

Some nations like Switzerland at least have a unified national mobile payment platform (Twint in this case) carried by an alliance of banks. This doesn't put all the power with just one or two privately owned US or Chinese companies, and it brings banking regulations into the game. Maybe that approach should be copied, but it only works in countries that have those regulations and banks willing to cooperate.

williamle8300 1 day ago 5 replies      
No thanks. Cash is king. Money should never be traceable, and you should be able to buy with anonymity.
lostboys67 1 day ago 0 replies      
This is more to do with the PRC wanting to control the populace ;-( Otherwise, why are wealthy mainland Chinese desperately buying up real estate overseas as a way of moving wealth out of the country?
foobarian 1 day ago 0 replies      
This brings me back to my college days... when you could use your student ID to pay for everything instantly. And the ID number was encoded on the magnetic stripe, unencrypted.
Animats 1 day ago 1 reply      
Alipay is trying to expand into the US. Citcon is trying to get US merchants to integrate Alipay. First Data is on board with Alipay. At least 4 million US merchants already accept Alipay.

That's probably more traction than Google Pay or Apple's payment system ever got.

"Alipay, leader in online payments. 400,000,000 Users."

1024core 1 day ago 0 replies      
I don't understand why there's no mention of M-Pesa https://en.wikipedia.org/wiki/M-Pesa

When I heard about it several years ago, it was, effectively, the largest bank in Kenya.

tristanj 1 day ago 2 replies      
If you are curious how smartphone payments work in practice, here's a video of someone using WeChat Pay to order squeezed-to-order orange juice from a vending machine


In practice, scanning and paying is much faster than shown in the video. He is using an old phone; on a newer one, scanning the barcode takes around 0.5 seconds and the verification takes 2-5 seconds (not 15 seconds as shown in the video).

Paying in stores works differently. The WeChat app has a wallet section with your personal payment QR code (which changes every time you open the app). To pay in a store, let the clerk scan it like this https://wx.gtimg.com/pay/img/wechatpay/intro_2_1.png then enter your six digit payment passcode (optionally, you can scan your fingerprint). After entering your passcode, it takes around 5 seconds to process and verify. You get a receipt as a WeChat message (last screen shown here https://www.royalpay.com.au/resources/images/retail-example.... ).
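WeChat's actual rotating-code scheme isn't public, but the behavior described above (a personal payment code that changes every time you open the app) can be sketched as an HMAC-based one-time code over the current time window, TOTP-style. Everything below (function name, window size, code length) is an illustrative assumption, not WeChat's real protocol:

```python
import hashlib
import hmac
import time

def payment_code(user_secret, window_seconds=60, now=None):
    """Derive a short-lived numeric code from a per-user secret.

    The server, which also knows the secret, recomputes the code for the
    current time window to verify a scanned barcode/QR payload.
    """
    now = time.time() if now is None else now
    window = int(now // window_seconds)  # code is stable within one window
    mac = hmac.new(user_secret, str(window).encode(), hashlib.sha256).digest()
    # Truncate the MAC to an 18-digit number, roughly the shape of a
    # one-dimensional barcode payload (the length is an assumption).
    return str(int.from_bytes(mac[:8], "big") % 10**18).zfill(18)

code = payment_code(b"per-user secret issued at enrollment")
print(code)
```

A stolen code expires with its time window, which is one plausible reason the in-app code is regenerated on every open.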

Many stores (usually restaurants) have a nearby QR code you can scan to follow the business's WeChat "Official Account". Follow this account to earn loyalty points, discounts, and freebies whenever you pay with WeChat wallet at this business in the future. The business can send you chat messages about promotions too (you can mute them if you like). This feature ties in really well with WeChat pay.

There are other uses of WeChat Wallet too, most of which are shown in this promo video:


At timestamp 0:09, that sign/sticker signifies this store accepts WeChat Pay

At timestamp 0:24, watch the clerk scan the customer's barcode

At timestamp 0:30, the customer scans the store's payment QR code and types in the amount they want to pay. More at 0:50.

At timestamp 0:38, this store has a dedicated QR code scanner

At timestamp 0:50, it's the same as 0:30 where the customer types in how much they are paying. Paying like this is common at street stalls.

At timestamp 1:00, two friends use WeChat Wallet to transfer money to each other

Opening WeChat wallet on your phone is very easy. On iPhone just force touch the WeChat app and a quick menu for the QR code scanner and WeChat Wallet appears. In my opinion, it's much faster and more convenient than paying with credit card.

nichtich 1 day ago 1 reply      
The success of Alipay and WeChat Pay is mostly the result of a good banking infrastructure. Even before these services popped up, you could already transfer money from any bank to another bank instantly and with no fee (or a small fee, depending on the bank and your account type). The PBoC has been pushing banks to make interbank transfers efficient and cheap for years. The new mobile payment systems just use the already-built infrastructure, replaced the ugly, out-of-date bank webpages with cute-looking apps, and also greatly relaxed security so you don't have to keep the 2FA USB key with you anymore.
mikkelam 1 day ago 0 replies      
In Denmark we have been using MobilePay[1] for a couple of years now. Contactless credit cards are still widely used, as not all stores accept MobilePay; as a result it is mostly used between friends, e.g. when you split the bill at a restaurant


Hasz 1 day ago 2 replies      
Cash will never die.

Fundamentally, a wechat/alipay transaction is traceable. Traceability is great for expense reports, but a non-starter if you're a drug dealer -- especially with China's draconian drug laws.

Even if you were to launder the money through fronts or a series of transactions, a state-level actor (i.e., China) would have no real difficulty tracking down such activity. With cash, this becomes much, much harder in cost, time and manpower. Not impossible for a state, but certainly an effective deterrent to casual or low-level investigation.

This difference in traceability will keep cash and other compact tangible mediums of value (precious gems and metals, ivory, drugs, etc) in use by subsets of the population for as long as people value the relative anonymity and are willing to put up with the costs and risks a physical medium entails.

hunvreus 1 day ago 0 replies      
I haven't had cash on me for more than a year now.

I use Alipay or WeChat for pretty much anything: groceries, bills, plane tickets, hotels, rent...

I actually have to use LastPass to remember the codes of my debit/credit cards.

Even traveling to Hong Kong feels backward; you have to go to an ATM, get money and then struggle with change in your pockets for the rest of the week.

I really hope this trend goes global and helps create a more competitive space for banks; it would be good to see them trying to innovate a bit.

jccalhoun 1 day ago 1 reply      
I had heard of WeChat and AliPay but didn't know how they worked. It is quite similar to what Walmart does with Walmart Pay https://www.walmart.com/cp/walmart-pay/3205993 and what a number of retailers tried to do with CurrentC https://en.wikipedia.org/wiki/Merchant_Customer_Exchange

CurrentC never made it out of test marketing, and I don't think Walmart Pay is taking off.

The difference is that these two are from big retailers and the Chinese versions are from apps or online retailers.

rz2k 1 day ago 0 replies      
I was sure I'd read a lot of comments from people living in China who disagreed, because of this column[1] as well as a few others like it.

However, I realize that column is almost 50 months old. At the time it fed into a broader worry, because it was very difficult for anyone to understand what was going on in the Chinese economy, and therefore which economic policies would be harmful (e.g. stimulus during a bubble, or tightening during stress).

Has cash really become a lot less popular in the past few years?

[1] http://foreignpolicy.com/2013/05/01/on-getting-paid-in-wads-...

allengeorge 1 day ago 1 reply      
Perhaps another factor in this shift is that there's a higher incidence (anecdotally) of counterfeit bills - especially larger notes?
diefunction 20 hours ago 0 replies      
The point is, you don't need to carry a card; you can pay everywhere, and all you need to bring is your phone. Does nobody else think carrying a phone, a wallet and a keychain everywhere is quite painful? I would rather use my phone to open my door.
arkitaip 1 day ago 0 replies      
It's a shame that WeChat outside of China is just a chat app. But I guess that when China is your home market, then that's all you really need.
erikb 1 day ago 0 replies      
Tourist shops and shops that sell milk powder in Europe also accept at least Alipay already.

And while it's hard to connect your Wechat account to your bank account as a foreigner, it's like everything in China: If you have Guanxi it's no big deal. You just give cash or bank transfers to your friends and they send you Wechat money.

inlined 1 day ago 1 reply      
I recently took a trip to Shanghai and Kyoto and was blown away by this. In Shanghai even the taxis displayed their QR to accept payments. Kyoto accepted WePay and AliPay; the latter was most impressive because I was told AliPay requires the Chinese equivalent of a SSN
sjg007 1 day ago 3 replies      
Surprised that QR codes are not more ubiquitous in the USA. Seems like a great platform.
yufeng66 1 day ago 1 reply      
One important factor is that the Chinese government views cash as a necessary evil. They prefer money to be in digital form to avoid issues such as the grey economy and difficulty in collecting tax. As such, they refuse to issue large-denomination bills. The largest RMB bill is only 100, which is about 13 USD. It was first issued in 1988, when nominal GDP was only about 2% of its current size. Obviously this makes large transactions in cash difficult, and that created an opening for Alipay and WeChat.
nabla9 1 day ago 0 replies      
In the Nordics it's contactless credit cards in shops and mobile pay in vending machines and between individuals.

Difference for consumers is minimal. Cards can be faster to use.

alvil 1 day ago 0 replies      
Only an idiot would support the creation of a cashless society.
Zolomon 1 day ago 1 reply      
Sweden has been mostly cash-free for 8-12 years now.
notadoc 1 day ago 1 reply      
In urban USA, cash is pretty rare too. Almost everyone uses credit.
dis-sys 1 day ago 1 reply      
Criminals are now targeting those QR codes. They just walk around and put their own QR code stickers on top of the legitimate ones, hoping that future payments will end up in their accounts.

What really impresses me is that Alipay immediately responded to such reports and agreed to cover the damages. They also started to promote the voice notification feature, so street vendors don't have to stop periodically just to check whether they are actually getting paid.

You will never see China's big four state-owned banks take such good care of their customers/users. It is definitely a good example of why capitalism is the best choice in 2017.

known 1 day ago 0 replies      
Aren't the Chinese compromising their privacy?
chj 1 day ago 0 replies      
Paying with phone means you can go out wallet-less.
miguelrochefort 1 day ago 2 replies      
Why isn't there a WeChat equivalent in the west?
baybal2 1 day ago 1 reply      
Same thing in Russia, but for a different reason.

I think no banker in their right mind would issue credit to the average Russian

frozenport 1 day ago 4 replies      
Fucking nonsense. Americans strive to create products that are different; the Chinese stereotype is to copy them. The copies are often not cost-equivalent and won't perform the same function as the original. Further, it is often done with the full intent of tricking people, for example knock-off Apple stores, Italian clothing, etc.

In absolute terms the average Chinese person can't afford higher quality items.

A 32-year-old state senator is trying to get patent trolls out of Massachusetts techcrunch.com
271 points by jessiemcr  2 days ago   44 comments top 9
Animats 2 days ago 7 replies      
I'm always amused at the Silicon Valley attitude towards "patent trolls". Figuring out some way to make people slave for below minimum wage - fine. Obtaining a monopoly and then raising prices - fine. Using a monopoly in one area, such as an app store, to keep out competitors for your own products - fine. Shipping crap that doesn't work - fine. But enforcing patent rights - that's bad.
mc32 2 days ago 2 replies      
This can only be good for Mass tech and tech companies in general. One can only hope more politicians wise up to the negative externalities of patent trolls. That, and opposition to non-compete clauses, are what politicians should push to attract talent to their regional economies.
asdfologist 2 days ago 1 reply      
How dare he? They're just trying to atone for their sins.
lr4444lr 2 days ago 2 replies      
Pardon my ignorance, but if these firms are buying up patents, aren't they paying fairly and squarely for IP assets? Whether or not they produced the IP, how does that factor into whether or not they are "bad actors" in commercial exchange?
pnw_hazor 2 days ago 6 replies      
" You hear about the three or four kids in a dorm room who are tinkering around with an idea, then suddenly, they get slammed with one of these completely vague cease-and-desist letters from a place theyve never heard of, citing patents they didnt know existed. The threat is: turn over everything youre doing to us, or pay us $30,000."

Has this ever happened? I don't think so.

Also, it would be nice if the interview included a peppercorn of information about the bills.

WA state has a new anti-patent troll law. It is useless.

edit: link to the bill sponsored by Eric Lesser https://malegislature.gov/Bills/190/S128

edit2: after scanning the bill, it looks like every "mainstream" patent troll would be able to continue to operate without making any changes to their practice. Only the most incompetent lawyers have any chance of triggering the "bad faith" element in this law. (Same as WA state anti-patent troll law.)

astrodust 2 days ago 0 replies      
The title seems abbreviated for no apparent reason.
Pulcinella 2 days ago 0 replies      
What can they really do at the state level? I believe patents are a federal issue. I'm not sure what a state law can do about a patent troll trying to enforce "their" federally granted patent rights.
marcoperaza 2 days ago 0 replies      
What a garbage heap of an article. There is not even a single sentence giving a summary of what the bill would do.
EGreg 2 days ago 1 reply      
I guess when you hit 2 to the 5th power you want to make big changes in the world :)
Tokyo street fashion and culture: 1980 2017 google.com
292 points by drops  2 days ago   50 comments top 21
mc32 2 days ago 3 replies      
If you want to discover Japanese culture via photography, I would suggest Daido Moriyama[1], [2], [3]. All NSFW-ish. He has a great look into what lies beyond the surface in Japanese culture. For more night culture, Kohei Yoshiyuki[4]





tenryuu 2 days ago 4 replies      
I know this has nothing to do with the article, but the UI/UX of this site feels so off-putting. It's like a worse version of the Windows 8 start screen. I can't even middle-click the scroll wheel and navigate with gestures.

I wouldn't mind seeing a piece on Shibuya fashion though

dookahku 2 days ago 1 reply      
So, wait, what are we looking at?


Is this like a Medium.com content publishing platform?

From the about page:


sireat 2 days ago 6 replies      
Enjoyable collection, but somewhat bland.

You hear about all the wild stuff you'd see in Shinjuku, and this was rather mild. The most provocative was one Ganguro pic.

My uneducated guess is that these pictures represent the "official", branded street culture.

That is these pics represent something you could actually buy with relative ease and not make yourself.

I didn't see any gothic lolitas, and those are one of the few substyles that westerners would actually know about.

There must be 100s of substyles that are one-offs and produced by individuals.

Anyone know of more "street" pictures?

rdiddly 2 days ago 0 replies      
Well that was surprisingly tiring! I started to imagine keeping up with all those styles as they were happening -- sounds expensive. Fashion is inherently, I think, of the moment... in the "now"... and to lay out 37 years' worth of it in a row like that, has the probably unintended effect of laying bare its impermanence, frivolity (not in a good way) and unimportance. Grandpa comment.
Animats 2 days ago 1 reply      
Aw. Cute.

For a good sense of what's happening now, see this walk through Harajuku at 2160p.[1] At that resolution, all the clothing details are visible.

[1] https://www.youtube.com/watch?v=LiQ4YDH3g80

callumjones 2 days ago 1 reply      
What is Google Arts & Culture? How long has it been around?
lanius 2 days ago 1 reply      
The photos are all black and white until 1984, when they're all suddenly in color. Was there any particular advancement in color photography in 1984?
doctorshady 2 days ago 0 replies      
It seems like a lot of the simpler styles - even as far back as the mid 1980s - would probably work without sticking out a whole lot even now. I suppose there's a universal lesson of some sort in that.
wst_ 2 days ago 0 replies      
Funny thing... Most of these pictures are still valid today.
jansho 2 days ago 0 replies      
Just a heads up to UK residents, the BBC iPlayer currently has a wealth of Japanese culture videos, including street fashion
wyclif 2 days ago 0 replies      
Last time I was in Japan, the "wolf boy" haircut (sorry, I can't remember the Japanese word to describe this) was still quite popular even though it was passé. Then I was at the airport in the Philippines and there were some Japanese flying to Tokyo, and all the boys had the same haircut. Man, that cut will never die.
pilaf 2 days ago 0 replies      
There's a very well put together video about the history of Tokyo street fashion and music from the 70s till today on YouTube. Worth a watch for the visuals alone:


jhanschoo 2 days ago 0 replies      
For those fascinated on the subject of Japanese fashion trends, I'm pleased to recommend http://neojaponisme.com/category/fashion-2/
jpatokal 2 days ago 0 replies      
If you liked the story, you'll like this video of 40 years of Japanese street fashion: https://youtu.be/xmsxWmKz-B8
hkmurakami 2 days ago 0 replies      
Personally, I think the best way to get a snapshot of the fashion of the times (apparel, makeup, hairstyle, shoes) is to watch a few of the popular TV dramas from each year.
franciscop 2 days ago 1 reply      
Ah I always thought that Gyaru and Gal were the same fashion style slightly lost in translation, but now I see that they were actually different styles!
throw7 2 days ago 0 replies      
god damn it, that website feels like a fashion style itself.
nomagicbullet 2 days ago 3 replies      
What a horrible navigation. You can't freely scroll at your own pace, advancing with the keyboard is inconsistent, and what's worse, zoom is busted with keyboard and trackpad. This is a disservice to people with disabilities and people without them.

What's so wrong with regular scrolling? Why do designers feel the need to fight against the browser? When a site decides to re-engineer basic user interactions (zoom, scrolling), the user has to focus on learning new behaviors instead of consuming your content (which is what they should be focused on).

The web has great UI patterns. Use them. Don't fight them.


gallerdude 2 days ago 0 replies      
What's up with the navigation? Horrific on mobile.
known 2 days ago 0 replies      
TIO: Try it online tio.run
347 points by blacksqr  2 days ago   90 comments top 26
jonahx 2 days ago 1 reply      
Some clarifications are in order:

This site is not a competitor to CodePen, JSFiddle, etc. Its main purpose is to let you play with programming languages you'd otherwise have to install locally -- specifically, it's extremely helpful when browsing https://codegolf.stackexchange.com/ -- the author is a prolific and impressive contributor to that site.

It's sad to see a simple, well-designed, free, open-source project embracing the values of the HN crowd ("The TIO web app is free of charge, ad-free, and doesn't use tracking cookies or third-party analytic scripts.") while the top comment, as well as many others, is essentially a nitpick about a bug that can be trivially fixed.

TryItOnline 2 days ago 1 reply      
Looks like getting mentioned on Hacker News is an excellent way to get your servers overloaded. :) I've added a couple of additional servers.

I'll work on the issues that were brought up here asap. TIO only has two developers at this point (and only one of us works on the web app), so "asap" might take a little while.

olalonde 2 days ago 4 replies      
Really cool. It would be great if each language had a "hello world" example that populated the fields.
zepolen 2 days ago 5 replies      

 cat /etc/passwd
 root:x:0:0:root:/root:/bin/bash
 bin:x:1:1:bin:/bin:/sbin/nologin
 daemon:x:2:2:daemon:/sbin:/sbin/nologin
 adm:x:3:4:adm:/var/adm:/sbin/nologin
 lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
 sync:x:5:0:sync:/sbin:/bin/sync
 shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
 halt:x:7:0:halt:/sbin:/sbin/halt
 mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
 operator:x:11:0:operator:/root:/sbin/nologin
 games:x:12:100:games:/usr/games:/sbin/nologin
 ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
 nobody:x:99:99:Nobody:/:/sbin/nologin
 systemd-timesync:x:999:998:systemd Time Synchronization:/:/sbin/nologin
 systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
 systemd-resolve:x:193:193:systemd Resolver:/:/sbin/nologin
 dbus:x:81:81:System message bus:/:/sbin/nologin
 polkitd:x:998:997:User for polkitd:/:/sbin/nologin
 rpc:x:32:32:Rpcbind Daemon:/var/lib/rpcbind:/sbin/nologin
 abrt:x:173:173::/etc/abrt:/sbin/nologin
 sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
 cockpit-ws:x:997:995:User for cockpit-ws:/:/sbin/nologin
 rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
 nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
 tss:x:59:59:Account used by the trousers package to sandbox the tcsd daemon:/dev/null:/sbin/nologin
 chrony:x:996:994::/var/lib/chrony:/sbin/nologin
 tcpdump:x:72:72::/:/sbin/nologin
 systemd-coredump:x:993:993:systemd Core Dumper:/:/sbin/nologin
 apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin
 epmd:x:992:991:Erlang Port Mapper Daemon:/dev/null:/sbin/nologin
 runner:x:1000:1000::/home/runner:/bin/bash
 tio:x:1001:1001::/home/tio:/bin/bash

keganunderwood 2 days ago 4 replies      
Not a comment about TIO, but I didn't realize it isn't possible to write a simple hello world in C# .NET Core.

What a shitty situation.

What can we do to make it better? Provide a default csproj file on load? Any better ideas?

 Microsoft (R) Build Engine version
 (C) Microsoft Corporation. All rights reserved.

 code.cs(1,22): error CS1022: Type or namespace definition, or end-of-file expected [/home/runner/project/project.csproj]
 code.cs(1,22): error CS1026: ) expected [/home/runner/project/project.csproj]

 code.cs(1,22): error CS1022: Type or namespace definition, or end-of-file expected [/home/runner/project/project.csproj]
 code.cs(1,22): error CS1026: ) expected [/home/runner/project/project.csproj]
     0 Warning(s)
     2 Error(s)

 Time Elapsed 00:00:04.92
 No executable found matching command "dotnet-project/bin/Debug/netcoreapp*/project.dll"

 Real time: 6.033 s
 User time: 2.973 s
 Sys. time: 0.442 s
 CPU share: 56.61 %
 Exit code: 1

sbierwagen 2 days ago 4 replies      
Looks like http://repl.it/ but with more languages.
wink 2 days ago 1 reply      
A description would be nice. I tried Clojure and it successfully printed when my code was (+ 1 3), but I didn't grasp the input/arguments/etc...
jamestimmins 1 day ago 0 replies      
This is extremely cool. I'm impressed that they were able to get it up and running with so many different languages.
_kst_ 2 days ago 2 replies      
I was pleasantly surprised to see my own joke language 99 in the list.


I would have been even more pleasantly surprised if it worked.

jhgjklj 2 days ago 1 reply      
The site is usable but could be made a lot better simply by changing the priorities.

Hide all those compiler flags etc. behind the first page. If anyone needs such advanced settings, the source code will surely be multi-page; even then that special use case covers maybe 1% of users.

Include some template code that is readily compilable. This will be useful.

The shareable link should be integrated with Reddit, Google+, etc. to make sharing friction-free.

arikrak 2 days ago 1 reply      
Nice, looks pretty useful!

I think the default execution time should probably be less than 60 seconds. Most simple programs that people run on this kind of site should have an execution time of less than a second, so ~10 seconds should certainly be enough by default. This would free up some resources on your end and would inform people more quickly that they may have an infinite loop in their code.
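Server-side, a hard cap like the ~10 seconds suggested above is straightforward to enforce; a sketch assuming the runner launches each submission as a subprocess (the function name and messages are illustrative, not TIO's actual code):

```python
import subprocess
import sys

def run_submission(cmd, timeout_seconds=10):
    """Run user code with a wall-clock cap and report timeouts distinctly."""
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True,
                              timeout=timeout_seconds)
        return proc.returncode, proc.stdout
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child process before raising
        return None, f"killed after {timeout_seconds}s (infinite loop?)"

rc, out = run_submission([sys.executable, "-c", "print('hi')"])
print(rc, out.strip())  # 0 hi
```

Reporting the timeout as its own outcome is what tells the user "you probably have an infinite loop" instead of leaving them waiting.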

winstonsmith 2 days ago 0 replies      
I really look forward to using this. It would be a great help to mobile users if unneeded options, e.g. 'header', could be made to disappear to save screen real estate.
cecicasa 2 days ago 0 replies      
limeblack 2 days ago 1 reply      
The play button turning into a gear that spins is pretty cool. First time I have seen that.
pepijndevos 2 days ago 1 reply      
Would be nice to support Piet.
ramgorur 8 hours ago 1 reply      
no c++/g++ ??
pmarreck 2 days ago 2 replies      
Needs Elixir support.

For the record, hello world in elixir is

IO.puts "Hello World"

danieldrehmer 2 days ago 0 replies      
This is pretty great!

It would be nice to add Swift and Elixir to the list.

prodikl 2 days ago 1 reply      
Am I not getting this? I typed echo "hello"; for PHP and the output says "echo 'hello';"

It just repeated back what I typed instead of processing the code. Hmm.

a-b 2 days ago 0 replies      
Code samples for each language would be appreciated
eridius 2 days ago 2 replies      
Looks neat, but the code editor really needs syntax highlighting and intelligent indentation.
netcyrax 2 days ago 0 replies      
Site is down.
meggar 2 days ago 1 reply      
no swift?
lai 2 days ago 5 replies      
This looks cool, but can you guys fix your pushState code? I can't hit back to come back to HN.
razorunreal 2 days ago 2 replies      
Breaks the back button.
behnood85 2 days ago 2 replies      
I don't like it. I still prefer Codepen and jsfiddle.
End to End Machine Learning Pipeline Tutorial spandan-madan.github.io
364 points by tancik  2 days ago   23 comments top 13
tekkk 1 day ago 1 reply      
Reading articles like this, written by people who want to share their fabulous domain knowledge free of charge, really is the reason why I read Hacker News. Thank you. I hope I will have the time to read through it all with thought and later hopefully utilize it in my own projects.
fabatka 1 day ago 0 replies      
Hi! This is a really great page, I love reading it. Just a few tips:

The for loops in your code can be made more concise: instead of

 for i in range(len(movies_with_overviews)):
     movie = movies_with_overviews[i]
you can write

 for movie in movies_with_overviews:
Also, at around In[82], you don't declare Y, but still reference it at the train-test split. Another way to do the train-test split is with scikit-learn's train_test_split: http://scikit-learn.org/stable/modules/generated/sklearn.mod...
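For readers who want to see the split fabatka points to, here is a minimal stdlib version of the idea behind scikit-learn's train_test_split; the data variables are hypothetical stand-ins for the tutorial's `movies_with_overviews` and `Y`:

```python
import random

def train_test_split(X, Y, test_size=0.2, seed=42):
    """Shuffle indices once, then slice off the last test_size fraction,
    keeping each X row paired with its label."""
    assert len(X) == len(Y)
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    cut = int(len(X) * (1 - test_size))
    train, test = idx[:cut], idx[cut:]
    return ([X[i] for i in train], [X[i] for i in test],
            [Y[i] for i in train], [Y[i] for i in test])

# Hypothetical stand-ins for the tutorial's data:
movies_with_overviews = [f"movie_{i}" for i in range(10)]
Y = [i % 2 for i in range(10)]

X_train, X_test, Y_train, Y_test = train_test_split(movies_with_overviews, Y)
print(len(X_train), len(X_test))  # 8 2
```

In practice the scikit-learn call has the same shape: `train_test_split(X, Y, test_size=0.2, random_state=42)`.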

deepGem 1 day ago 1 reply      
These are the tutorials that depict the reality of a machine learning career. Everyone broadly understands that data preparation is the key, but few realize what that involves. Half of this tutorial is just about getting and prepping data for training. Kudos!
sekasi 1 day ago 0 replies      
While much of this goes over my head, detailed write-ups like this, by people who have no direct way of gaining financially from all their hard work, are the cornerstone of why the internet is fantastic.

Amazing work!

Omnipresent 1 day ago 1 reply      
This is so helpful. It would take me months to gather resources to learn this stuff, and I wouldn't even know what I was looking for. To the author: please share more content if your valuable time permits
AndrewKemendo 1 day ago 1 reply      
Great write-up. Especially the fact that half of it was about finding, cleaning and structuring data! You can tell someone isn't applying ML if they aren't spending most of their time getting their data organized. It's the "sharpening the axe" part of the six hours Lincoln describes.

For example, they never introduce you to how you can run the same algorithm on your own dataset

I actually think the TensorFlow tutorial on CNNs runs through training and classification on your own set with Inception pretty well.

You mention you're a CV student. Any particular area of focus?

jonheller 1 day ago 0 replies      
This is wonderful. I just became interested in this subject but had difficulty finding resources that weren't simple copy/paste examples, as you mentioned, or semester-long courses. Thank you!
praveer13 1 day ago 1 reply      
Are there more great resources like this for learning to find, clean and structure data? Would greatly appreciate it if someone could point me in a direction.
ireadfaces 1 day ago 0 replies      
I saw this tutorial by you somewhere, Spandan, and found it here on HN. I have yet to explore it, but I have already bookmarked your Git repo. Thanks for the hard work.
allpratik 1 day ago 0 replies      
Spandan, this is a fantastic and detailed write-up. Kudos! And thanks for investing your time to do this!
mcintyre1994 1 day ago 0 replies      
This looks amazing, thank you for sharing! :)
code4tee 1 day ago 0 replies      
Very nice work. Thanks for sharing.
craptocurrency 1 day ago 0 replies      
Amazing piece of work
I mean, why not tell everyone our password hashes? theobsidiantower.com
294 points by jorkro  3 days ago   151 comments top 21
jjguy 3 days ago 1 reply      
PSA to everyone responding to the title: please RTFA, it's sarcasm.
lucasgonze 3 days ago 9 replies      
That inspired this idea: make all password databases public, in an encrypted form. Just post them in a standard location. This is to get rid of the fiction that these are ever private and to eliminate an incentive to break in.
developer2 3 days ago 3 replies      
One reason: you'd be surprised how many companies allow entering the hash as an alternative password to log in to customers' accounts in production. It's a lazy method for customer support teams who don't have support tools to access customer information. It's also frequently done to allow developers to debug problems on a customer's account when a bug cannot be reproduced elsewhere.

If such a company's database of hashed passwords is leaked, then an attacker doesn't even have to crack the hashes - the hash itself is a valid version of the password. Yet I've seen this behavior at multiple companies; only one of them pushed back against my request to remove that "feature", and I didn't stay with them much longer after that.
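The safe pattern is to treat the stored hash purely as verification data: recompute from the submitted plaintext and compare in constant time, and never accept the stored hash itself as a credential. A hedged sketch using stdlib PBKDF2 (the parameter choices are illustrative):

```python
import hashlib
import hmac
import os

def make_record(password):
    """Create the (salt, hash) pair stored for a user."""
    salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def verify(password, salt, stored):
    """Recompute from the plaintext; constant-time compare avoids timing leaks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = make_record("s3cret")
print(verify("s3cret", salt, stored))      # True
print(verify(stored.hex(), salt, stored))  # False: the hash is not a password
```

Under this scheme a leaked hash database still has to be cracked before any account can be entered, which is the property pass-the-hash "features" throw away.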

igonvalue 3 days ago 1 reply      
What exactly are these passwords used for? The post mentions "controlling this object in the RIPE database" but I'm missing some context necessary to understand that.
zpallin 3 days ago 0 replies      
Shots fired.

That ending was an incredibly well delivered stab at Deutsche Telekom. This is why I love vigilante security.

w8rbt 3 days ago 2 replies      
It depends on the hash type. Cryptographic hashes (MD4, SHA1, SHA256, etc.) are made to be efficient and fast to compute while password hashes (bcrypt, scrypt, etc.) are much more difficult to compute. The difference is staggering.

 john --test --format=nt
 Benchmarking: NT [MD4 128/128 X2 SSE2-16]... DONE
 Raw: 29037K c/s real, 29037K c/s virtual

 john --test --format=bcrypt
 Will run 16 OpenMP threads
 Benchmarking: bcrypt ("$2a$05", 32 iterations) [Blowfish 32/64 X3]... (16xOMP) DONE
 Raw: 5472 c/s real, 490 c/s virtual
Edit: NT hashes are one round of MD4. These are Microsoft Active Directory hashes. OpenBSD uses Blowfish hashes by default.
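The same gap can be reproduced without john, using only Python's standard library; PBKDF2 stands in here for the bcrypt/scrypt class of deliberately slow hashes (the iteration count is an illustrative choice):

```python
import hashlib
import os
import time

password = b"hunter2"
salt = os.urandom(16)

# Fast cryptographic hash: a single MD5 round.
t0 = time.perf_counter()
fast = hashlib.md5(password).digest()
fast_time = time.perf_counter() - t0

# Slow password hash: PBKDF2-HMAC-SHA256 with 200,000 iterations.
t0 = time.perf_counter()
slow = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
slow_time = time.perf_counter() - t0

print(f"MD5 once:        {fast_time * 1e6:9.1f} us")
print(f"PBKDF2 200k it.: {slow_time * 1e3:9.1f} ms")
```

Every candidate an attacker guesses pays the same slowdown factor, which is the whole point of a slow password hash.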

delegate 3 days ago 0 replies      
It's also nice enough to mention the hashing algorithm used - MD5 - so you don't have to waste time guessing.
pishpash 3 days ago 0 replies      
Passwords are broken for precisely this reason. You are operating under the fiction that permanently handing over entropy from a limited source to an untrusted party, even through a (for a time) one-way function, is ever a good idea. Please do make all password hashes public. It will finally force the move away from passwords.
ThePhysicist 3 days ago 0 replies      
Nice, they even have a REST API and web form to update the information:


joshfraser 3 days ago 1 reply      
Stupid question, but what does this particular password hash unlock?
royce 1 day ago 0 replies      
Threads discussing rainbow tables are not applicable.

These hashes are not unsalted MD5. They are md5crypt ($1$[salt]$[hash]), as found in many Unix-likes and some Cisco IOS.
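For anyone unfamiliar with the format: an md5crypt string is self-describing, and splitting one apart is trivial. A sketch in Python (the hash value below is made up, not from the leak):

```python
# A modular-crypt-format string splits into scheme, salt, and digest.
crypted = "$1$4zucMGma$LSZacq6j0fKTlTd0DcD0Z/"

_, scheme, salt, digest = crypted.split("$")
assert scheme == "1"    # "$1$" identifies md5crypt
assert len(salt) <= 8   # md5crypt salts are up to 8 characters
print(salt, digest)
```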

jackjeff 3 days ago 0 replies      
It's hard to imagine worse, except maybe putting the passwords in the clear...

Single unsalted broken MD5 is a far cry from scrypt... and even scrypt is probably a bad idea with all this cryptocurrency hashing hardware out there, unless you have a seriously strong password.

Just don't publish hashes.

rdl 3 days ago 0 replies      
The PGP option has been the preferred option for as long as I can remember (circa 2000).
ewzimm 3 days ago 1 reply      
As you might have guessed, my password hash is password.
pmarreck 3 days ago 2 replies      
A simple unsalted hash wouldn't work due to rainbow-tabling, and even a salted hash would be vulnerable to someone gaining unauthorized access to the salt and regenerating a rainbow table with it (although if one used bcrypt, that might be practically impossible).
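A sketch of the salted approach being discussed, using PBKDF2 from Python's stdlib (the iteration count is illustrative); a per-user random salt is what forces an attacker to rebuild any precomputed table from scratch:

```python
import hashlib, os

def hash_password(password, salt=None):
    # Per-user random salt: a precomputed (rainbow) table built for one
    # salt is useless against every other user's hash.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

s1, d1 = hash_password("hunter2")
s2, d2 = hash_password("hunter2")
assert d1 != d2                               # same password, different salts
assert hash_password("hunter2", s1)[1] == d1  # verification still works
```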
cerved 3 days ago 1 reply      
komali2 3 days ago 1 reply      
>whois -h whois.ripe.net DTAG-NIC

Wait, was that just a straight bash command? Is this installed on my computer?

>$ whois
>usage: whois [-aAbdgiIlmQrR6] [-c country-code | -h hostname] [-p port] name ...

Holy shit lol, that's neat.

eveningcoffee 3 days ago 1 reply      
Because your password is part of your identity and is actually used to cross check during identity matching.
justin_vanw 3 days ago 0 replies      
Nah, why even give an inch? Yes, if you properly deal with passwords, the artifact you store gives virtually no information to an attacker, but on the other hand, why give them even that?
basseq 3 days ago 0 replies      
Sure, in an ideal world: post the hashes, the salts, the hash algorithm, everything. If it's done "right" (e.g., the hash function has enough complexity), then brute force cracking, rainbow tables, etc. would take so long that it wouldn't be feasible to crack them with any volume.

Of course, you could still crack some (problem), so keeping multiple secrets hidden through obscurity (the hashes, the salts, etc.) is another layer of security.

This doesn't guarantee security, but it's certainly more secure. But it is additive: there's no reason to just use MD5 (or plaintext) because "my hashes are secret".

schoen 3 days ago 4 replies      
I was kind of disturbed that GitHub publishes every user's public key.


This is a different situation and public keys are not directly analogous to password hashes: there isn't a reliable way of cracking public keys in the same sense that there's a semi-reliable way of cracking hashes. But it was still strange and uncomfortable to me that they would reveal this "target" (and if there were specific key generation bugs, like RNG seeding errors, people might actually be able to crack a few of them and know that they had succeeded).

Relatedly, I was thinking about the magic crypto-cracking device in the movie Sneakers. Once they had it, they could immediately use it to log on to random network-connected services, defeating the authentication. So, how is that supposed to work? How do they automatically know what credentials would be accepted for a particular service? Are there common network authentication protocols based on public-key cryptography that have the property that the verifier tells the prover the public keys that it trusts?

MPC-HC v1.7.13 is released and farewell mpc-hc.org
340 points by rakshithbekal  1 day ago   103 comments top 25
NuDinNou 1 day ago 8 replies      
Farewell old friend. For those looking for an alternative: you should try MPV. It's a video player for geeks/hackers: https://mpv.io/manual/stable/
satysin 1 day ago 3 replies      
MPC-HC is a superb media player but it is not surprising to see this happen. Interest in maintaining open source Windows applications written in C/Win32/C++/MFC is going to keep dropping as there are not as many people with the skills or motivation to do it. Especially for something as complex as a media player.

Even on the Linux side I have seen a drop in the number of full blown media players being developed, they are mostly front ends to things like mpv and mplayer.

ksec 1 day ago 2 replies      
I have been using MPC-BE for many years, and it is actively developed, with developer feedback and responses on Doom9.


greyskull 1 day ago 2 replies      
MPC-HC was always my go-to. Starts up instantly and the performance was always superb, much better than VLC in seeking. I don't know what it is, but more often than not, VLC pauses for a moment when seeking to a random part of the file, while MPC-HC has always been instant.

I suppose I'll try out MPV.

sotojuan 1 day ago 4 replies      
Damn. I remember all the debates on /a/ (4chan anime board) on what are the best options and presets... for watching anime. Looking back it was kind of silly but we were able to do that because the player was so well built.

On Linux/macOS I use mpv - I recommend it!

tibiapejagala 1 day ago 1 reply      
If you are reading this, thank you for your hard work all these years
TheKIngofBelAir 1 day ago 0 replies      
> K-Lite contains a custom build of MPC-HC that contains additional fixes and improvements compared to the official builds. This will continue in the future. The internal codecs that MPC-HC uses are also still actively maintained.


snvzz 1 day ago 1 reply      
Maybe it's time for mpc-qt: https://github.com/cmdrkotori/mpc-qt

It reimplements mpc-hc UX using qt for the UI and libmpv for the heavy lifting. The issue with this one is that it doesn't have public builds yet, but it has been in active development for years.

GunlogAlm 1 day ago 0 replies      
Hopefully some people step forward, I'd hate to see MPC-HC come to an end. :(
castell 1 day ago 1 reply      
Very unfortunate. MPC-HC was so simple to use, had a slick UI inspired by Windows Media Player, and was the very best for quickly reviewing/seeking videos (much faster than VLC at that task).
rakshithbekal 1 day ago 0 replies      
I really want to use MPDN because it uses WMF, but I haven't gotten results with it like I have using MPC-HC and madVR. Anybody know any other program that's as advanced as MPC, compatible with madVR, and uses Windows Media Foundation over DirectShow?
AsyncAwait 1 day ago 1 reply      
As a potential replacement, there's https://github.com/cmdrkotori/mpc-qt which uses mpv backend, but has the MPC look and feel.
xvilo 1 day ago 1 reply      
How can this come to an end :( - I am not a C/C++ dev. But I would like to support this project money wise if it would help?
Strom 1 day ago 1 reply      
An excellent player with a ton of great features. However I'm not sure if it ever caught up to VLC in terms of performance. Specifically it was around 10x slower at seeking in H.264 video compared to VLC. When used on a low-performance machine (Pentium 4 @ 3 GHz + 7200 rpm HDD), this resulted in sub-second seek times in VLC compared to over 5 seconds in MPC-HC when viewing 10 Mbit bitrate video. Especially annoying when I wanted to rewatch a single moment over and over again.
GuB-42 1 day ago 2 replies      
Is it dead or done?

MPC-HC is just a DirectShow frontend, or at least, that's how I used it. Filters do most of the work. And no one seems to care about DirectShow anymore, but that's mostly because everything works fine.

It will die eventually, because Microsoft is trying hard to kill DirectShow (to replace it with something inferior...) and the opensource guys mostly go to mplayer, but for now, updates are not really necessary.

drngdds 1 day ago 5 replies      
RIP. Is VLC the best alternative at the moment?
jaimehrubiks 1 day ago 0 replies      
Long live the best player ever mpc!
sergiotapia 1 day ago 1 reply      
Those looking for a great alternative try using MPV.


  $ brew install mpv
  $ mpv ~/Media/my-movie.mp4
And you're off!

krautsourced 1 day ago 1 reply      
I've used MPC for quite a while in the past, but have since moved to https://potplayer.daum.net/

It's super feature-rich, with (in my opinion) a much nicer interface than e.g. VLC.

NamTaf 1 day ago 0 replies      
Well, nuts. I really liked MPC-HC as part of the CCCP. I'll check out some of the alternatives offered in this thread. Hopefully one follows the slim design and flexibility of MPC.
kozak 1 day ago 0 replies      
I like how MPC-HC uses proper hardware decoding for H.264 and doesn't try to do it in software on its own like some other players do.
r3demon 1 day ago 0 replies      
sad, but things change, eventually
vasili111 1 day ago 0 replies      
Used MPC for a long time. Afterwards I switched to Daum Pot Player.
fithisux 1 day ago 0 replies      
The fate of C++ projects.
pussypusspuss 1 day ago 1 reply      
A more verbose title would've been useful. I shouldn't have to click through to the linked site to find out what the acronym MPC-HC means.
Ask HN: What tasks do you automate?
371 points by flaque  1 day ago   314 comments top 94
naturalgradient 1 day ago 6 replies      
I take enormous pleasure in automating every part of my research pipelines (comp sci).

As in, I like to get my experiment setup (usually distributed and many different components interacting with each other) to a point where one command resets all components, starts them in screen processes on all of the machines with the appropriate timing and setup commands, runs the experiment(s), moves the results between machines, generates intermediate results and exports publication ready plots to the right folder.

Upside: once it's ready, iterating on the research part of the experiment is great. No need to focus on anything else any more, just the actual research problem, not a single unnecessary click to start something (even 2 clicks become irritating when you do them hundreds of times). Need another ablation study/explore another parameter/idea? Just change a flag/line/function, kick off once, and have the plots the next day. No fiddling around.

Downside: full orchestration takes very long initially, but a bit into my research career I now have tons of utilities for all of this. It also has made me much better at command line and general setup nonsense.
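The launch side of a setup like this can be sketched with the stdlib alone; the hostnames, session names, and commands below are placeholders:

```python
import subprocess

# Placeholder hosts and per-host commands.
HOSTS = ["node1", "node2"]
RESET = "cd ~/experiment && ./reset.sh"
RUN = "python run_experiment.py --config exp.yaml"

def screen_cmd(host, name, command):
    """Build the ssh argv that starts `command` in a detached screen on `host`."""
    return ["ssh", host, f"screen -dmS {name} bash -c '{command}'"]

def launch_all():
    # Reset every component first, then kick off the runs with the right timing.
    for host in HOSTS:
        subprocess.run(screen_cmd(host, "reset", RESET), check=True)
    for host in HOSTS:
        subprocess.run(screen_cmd(host, "exp", RUN), check=True)
    # Results would then be pulled back (e.g. rsync) and fed to the plot scripts.
```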

EnderMB 1 day ago 1 reply      
My most proud "automation" was writing a bot that would play Farmville for me.

I was at university, and Farmville was all the rage on Facebook. My girlfriend wanted me to play because it'd mean she'd be able to trade stuff with me or something (I forget why exactly), and I eventually caved in.

After ten minutes of playing it, I was bored. I couldn't really judge people that would click plants hundreds of times, several times a day, though, because I played World of Warcraft. It was just a more interesting type of grinding...

I figured out that in order to grind through the game most efficiently, I'd need to plant Tomatoes every two hours, so I wrote a bot that would:

1. Spin up a VM.

2. Open the browser to Farmville.

3. Open up an automated clicking application I had written that worked on Flash.

4. Find the outermost vegetable patch.

5. Click in a 20x20 grid (or however big the whole area was).

6. Replant, and close.

I didn't tell my girlfriend about the bot, and I'd turn it off when I went to visit her, so she was shocked when she went on my farm to see that I was a higher level than her. I'd jokingly feign ignorance, saying that I was just playing it like her, until one day when I had left the script running and she saw my farm picking itself while I was studying.
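The grid-clicking part of a bot like this is mostly arithmetic. A sketch of generating the click targets; the origin and cell size are made-up values, and the actual clicks would go through whatever automation tool drives the mouse:

```python
def grid_targets(origin, cell, rows=20, cols=20):
    """Yield (x, y) screen coordinates for each plot in a rows x cols grid."""
    ox, oy = origin
    cw, ch = cell
    for r in range(rows):
        for c in range(cols):
            yield (ox + c * cw, oy + r * ch)

targets = list(grid_targets(origin=(120, 240), cell=(32, 32)))
assert len(targets) == 400        # one click per vegetable patch
assert targets[0] == (120, 240)   # top-left plot
```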

zbjornson 1 day ago 2 replies      
All of my thesis project in immunology was automated, which involved several hours of blood processing repeated several thousand times (with some parallelization) by a team of a dozen robots. There are pics, schematics and vids here: http://www.zachbjornson.com/projects/robotics/.

I also like to say that the final analysis was automated. It was done entirely in Mathematica notebooks that talk to a data-processing API, and can be re-run whenever. The notebooks are getting released along with the journal article for the sake of transparency and reproducibility.

(Also, I automated my SSL cert renewal ;))

mohsinr 25 minutes ago 0 replies      
I have a small bash script which keeps checking for Internet; if my machine does not have live working internet, it sends a notification with an alert (text + sound), "You are offline, you may read some books :)", and then it launches iBooks so I can do some reading when offline.

PS. Also, when the Internet is back, it alerts again so I can resume online work if I have to.
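A Python equivalent of such a connectivity check might look like this; the notification and app-launching commands in the comments are assumptions about a macOS setup:

```python
import socket

def online(host="8.8.8.8", port=53, timeout=2):
    """Return True if a TCP connection to a well-known host succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Poll and act on state changes (macOS-specific assumptions):
# import subprocess, time
# while True:
#     if not online():
#         subprocess.run(["osascript", "-e",
#             'display notification "You are offline, you may read some books :)"'])
#         subprocess.run(["open", "-a", "Books"])  # Books, formerly iBooks
#     time.sleep(30)
```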

ajarmst 1 day ago 8 replies      
I'm the kind of nerd who greatly prefers writing automation code to doing anything remotely repetitive. (I'm afraid to work out the actual timings because I'm pretty sure that I often spend more time coming up with the automation than just doing the task would take).

I've got a script that automatically rips, converts and stitches together audiobooks from the library so that I can play them on my phone. It just beeps periodically to tell me to put the next CD in.

I also had a batch job that downloaded Doonesbury cartoons (including some delay logic so I wasn't hammering the server) and built a linked series of html pages by year and month. I've ported it to a couple of other webcomics so that I can binge read.

I also write a lot of LaTeX macros, doing things like automatically importing and formatting code from a GitHub gist into lecture notes (something like \includegist{C,<path/to/gist>}), or autogenerating pretty PDF'd marks summaries for students from my home-rolled marks database.

Another thing I like is building little toys to demonstrate things for students, like a Mathematica page that calculated the convergence rate and error for the trapezoidal rule (numerical integration) with some pretty diagrams.

I once wrote a bunch of lisp code to help with crypto puzzles (the ones that use a substitution code, and you try to figure out the original text). The code did things like identifying letter, digraph and trigraph frequencies, allowed you to test substitutions, etc.
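The frequency-analysis part of such a tool is compact in any language. A Python sketch of the counting side:

```python
from collections import Counter

def ngram_freq(text, n=1):
    """Count n-grams over the letters of `text`, ignoring non-alphabetics."""
    letters = [c for c in text.lower() if c.isalpha()]
    return Counter("".join(letters[i:i + n]) for i in range(len(letters) - n + 1))

sample = "the theory of the thing"
assert ngram_freq(sample)["t"] == 4                       # single-letter counts
assert ngram_freq(sample, 2).most_common(1)[0][0] == "th" # top digraph
```

For a substitution cipher, comparing these counts against known English frequencies is what suggests the first few letter mappings to try.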

As developers, we tend to focus on these big integrated projects. But one of the biggest advantages that people who can code have is the ability to quickly get a general purpose computer to assist with individual tasks. I write an awful lot of code that only gets run a handful of times, yet some of those projects were the most pleasure I've ever had writing code.

dmorin 34 minutes ago 0 replies      
Sometimes I see a lengthy text article that I tell myself I'll bookmark and read later, but I know I'm never going to read it. I much prefer audiobooks and podcasts. So I automated scraping the text from the article, piping it through text-to-speech, turning it into an MP3, and moving it to my phone so it shows up in my audiobook library. Next step is to make it an RSS feed so I can treat it like a podcast.
kvz 1 day ago 5 replies      
Since I have a toddler, I'm longing for a house with a garden, which starts at 800k EUR in pleasant neighborhoods in Amsterdam now, which is above my paygrade. So I wrote a script that compares surrounding towns on a number of metrics (4+ rated restaurants per citizen, for instance) and lets me know when there are houses for sale with a garden facing south (or north, but only if it's sufficiently long (10m+) that we are likely to enjoy some sun), etc.

So far this has not resulted in us buying a house and the hours that went into the project would have probably long paid for a good real estate agent :)

shade23 1 day ago 5 replies      
- Downloading a song off YouTube, adding metadata via beets, and moving it to my music lib

- Adding tasks to my todo-list client from every app I use (including my bookmarking service, when I bookmark with specific tags)

- Changing terminal colours based on time of the day (lower brightness in the evenings and hence dark colours; too much sunlight in the mornings and hence solarized themes)

- Automatically messaging people who message me, based on priority (parents immediately, girlfriend a longer buffer)

- Filters on said messages in case a few require my intervention

- Phone alerts on specific emails

- Waiting for a server you were working with to recover from a 503 (happens often in dev environments) when you are tired of checking every 5 seconds: ping scripts which message my phone while I go play in the rec area.

- Disabling my phone's charging when it nears 95% (I'm an Android dev and hate that my phone is always charging)

- Scraping websites for specific information and making my laptop ping when the scenario succeeds (I don't like continuously refreshing a page)

I don't think several of these count as automation as opposed to just some script work. But I prefer reducing keystrokes as much as possible for things which are fixed.
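The time-of-day colour switch in that list boils down to a small dispatch. A sketch, where the theme names and hour boundaries are made up and actually applying the theme would shell out to the terminal's configuration:

```python
from datetime import datetime

def pick_theme(hour=None):
    """Choose a terminal colour scheme from the hour of day."""
    hour = datetime.now().hour if hour is None else hour
    if 6 <= hour < 12:
        return "solarized-light"    # mornings: fight the sunlight
    if 12 <= hour < 18:
        return "default"
    return "dark-low-brightness"    # evenings: lower brightness

assert pick_theme(8) == "solarized-light"
assert pick_theme(22) == "dark-low-brightness"
```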

Relevant to this discussion: an excerpt from the GitHub page:

>OK, so, our build engineer has left for another company. The dude was literally living inside the terminal. You know, that type of a guy who loves Vim, creates diagrams in Dot and writes wiki-posts in Markdown... If something - anything - requires more than 90 seconds of his time, he writes a script to automate that.


egypturnash 1 day ago 1 reply      
I am not a programmer, but I've automated a few things in my life.

I self publish graphic novels. I have a script that runs on a directory full of page files and outputs a CSV in the format InDesign expects. I wrote it after manually editing a CSV and leaving a page out, and not noticing that until I had an advance copy in my hands and 400 more waiting to be shipped from the printer. That was an expensive learning experience.

I like to rotate my monitor portrait mode sometimes, but hate trying to rotate the Wacom tablet's settings as well. So I have a script that does this all in one go. It used to try to keep track of separate desktop backgrounds for landscape and portrait mode, but this stopped working right, so I took that part out.

I have a bunch of LIFX bulbs in my apartment. The one near the foyer changes color based on the rain forecast and the current temperature, to give me an idea of how to dress when going out, thanks to a little Python script I keep running on my computer. Someday I'll move it to the Raspberry Pi sitting in a drawer.

I recently built a Twitter bot that tweets a random card from the Tarot deck I drew. I've been trying to extend it to talk to Mastodon as well but have been getting "request too large" errors from the API when trying to send the images. Someday I'll spin up a private Mastodon instance and figure out what's going on. Maybe. Until then it sits on a free Heroku account, tweeting a card and an image of its text about once a day.

And does building a custom Wordpress theme that lets me post individual pages of my comics, and show them a whole chapter at a time, count as "automation"? It sure has saved me a lot of hassle.

ekzy 45 minutes ago 0 replies      
Last year I automated a bit of my dating by sending Tinder messages via their API. It worked, and this is how I met the woman I now live with :D http://jazzytomato.com/hacking-tinder/
saimiam 1 day ago 8 replies      
My day to day decisions are mostly automated - what to eat for breakfast? what clothes to wear any given day of the week? when to walk my dog and for how long? When to leave work and which back roads route to take to get back home? Lunch options? When to call the folks? Exercise schedule? All automated.

It gets a little repetitive and boring at times but I'm able to save so much time and energy this way to focus on what's important to me.

MichaelMoser123 1 day ago 3 replies      
In 2003 I had a Perl script to query the job boards for keywords, scrape the results, and send out an application email with my CV attached (I took care to send only one application to each email address). I think this was a legitimate form of spamming - at that moment the local job market was very bad.
Toast_ 1 day ago 1 reply      
I'm aggregating flash sales and sending POST requests to Azure ML using Huginn. It's a work in progress, but Huginn seems to be working well. Also considering giving NiFi a go, but the setup seems a bit over my head.



dhpe 1 day ago 1 reply      
I need to upload invoices every month from all ~20 SaaS products I subscribe to into an accounting system. Most of the invoices can just be redirected from email to another SaaS that lets me download a zip file containing all invoices from a date range. Other products require me to log in, navigate to a page, and download a PDF or print an HTML page. I have browser-automated these laborious ones as well, so everything ends up in that zip file. Saves me 30 min monthly and especially saves me from the boring work.
dannysu 1 day ago 5 replies      
A bot for reserving hotel rooms.

I wrote a bot to reserve hotel rooms a year in advance for a national park in the US.

It was so difficult to book. After a couple of days of failed attempts to reserve my desired dates, and after staying up late into the night one day, I went ahead and wrote a bot to automate the task of checking for availability and then completing the checkout process once available.

And... it worked.

xcubic 1 day ago 1 reply      
In Lausanne, Switzerland, it's very difficult to find an apartment because there are too few apartments for too many people, and it mostly follows "first-come, first-served".

So I created scrapers for 3 websites + 1 Facebook group. They simply look for apartments with my specifications and notify me when a new one comes up.

I can say, I successfully found an apartment. The whole process usually takes at least 3 months, I did it in 1.

rcarmo 1 day ago 3 replies      
- Data pipelines (as seen elsewhere here)

- Anything related to infra (I do Azure, so I write Azure templates to deploy everything, even PaaS/FaaS stuff)

- Linux provisioning (cloud-init, Ansible, and a Makefile to tailor/deploy my dotfiles on new systems)

- Mail filing (I have the usual sets of rules, plus a few extra to bundle together related e-mails on a topic and re-file as needed)

- Posting links to my blog (with screenshots) using Workflow on iOS

- Sending SMS from my Watch to the local public transport info number to get up-to-the minute bus schedules for some pre-defined locations (also using Workflow)

- Deploying my apps on Linux (I wrote a mini Heroku-like PaaS for that - https://github.com/rcarmo/piku)

- Searching for papers/PDFs on specific topics (built a Python wrapper for arxiv/Google/others that goes and fetches the top 5 matches across them and files them on Dropbox)

- Converting conference videos to podcasts (typically youtube-dl and a Python loop with ffmpeg, plus a private RSS feed for Overcast)

Every day/week I add something new.

(edit: line breaks)

jf___ 1 day ago 4 replies      
carving up marble with industrial robots


CAD -> robot code compiler is built on top of pythonocc

nfriedly 1 day ago 2 replies      
Paying all of my bills. All of them. My bank (Fidelity) can connect to most bigger companies to have the bills automatically sent to them and then they automatically pay it (with an optional upper limit on each biller).

For other bills, I got all but one to put me on "budget billing" (same amount each month, so Fidelity just sends them a check for that amount without seeing the bill). For Windstream, which varies by a dollar or two each month, I just send them an amount on the upper end and then let a credit accrue. Both of these require an update maybe once a year or so.

Windstream is a bit funny - I don't know why they can't pick a number and stick to it. Also, they apparently raised my "guaranteed price for life" a couple of times and didn't notify me until ~8 months later when they were threatening to disconnect my service for being more than a month behind. (They had turned off paper billing on my account but didn't actually enable e-billing - service still worked so I didn't even think about it. We eventually got it straightened out, but Windstream is ... special.)

Beyond that, I made a bot that automatically withdrew Elance earnings to my bank account (that got me banned for a week or so when I posted it to their forum).

I made another bot that bought and sold bitcoins and litecoins and such. It was moderately profitable until my exchange (Cryptsy) got hacked and lost all of my float (worth ~$60 USD at the time).

I connected an Arduino IR blaster to my TV to make it automatically turn on my sound bar (the TV would turn it off, but not on?!) - http://www.nfriedly.com/techblog/2015/01/samsung-tv-turn-on-...

Oh, and of course, code tests and deployment. Nearly every git commit I make gets a ton of tests, and for most projects, each tag gets an automated deployment to npm or Bluemix or wherever.

nurettin 1 day ago 2 replies      
In my city, there are many stadiums which cause traffic congestion during rush hours. I made a scraping bot which tells me if there's going to be traffic on my designated routes the next day. Going to try making it an app and see if it's of any use to others.
The_Notorious 1 day ago 1 reply      
Find yourself a configuration management server such as Puppet, Chef, CFEngine etc, and learn to automate system deployment and management with it. I use Puppet CE as my main automation tool.

Use Python/shell for tasks that are not well suited to a configuration management server. Usually, this is when procedural code makes more sense than the declarative style of Puppet manifests. Interactive "wizards" (e.g. adding domain user accounts to a Samba server and creating home directories for them) and database/file backups are my usual uses for these types of scripts.

Fabric is a useful tool to use with python. It allows you to send SSH commands that you put into functions to groups of servers in bulk.

I also use python for troubleshooting network issues. It has libraries to interact with all manner of network services/protocols, as well as crafting packets and creating raw sockets.

Look into PowerShell if you work in a Windows environment. Everything from Microsoft is hooked into PowerShell in their newer versions.

abatilo 1 day ago 0 replies      
A little different than what other people are doing, but I have tried to automate my savings. I use Mint to figure out what my budgets for things should be, then I use Qapital to automatically save the money I didn't spend but was budgeted.
profpandit 1 day ago 1 reply      
This is a great question. The PC has been around for a long time now. For the most part, users/developers have been sitting around, twiddling their thumbs and waiting for the tool and app gods to rain down their blessings. This question highlights the need to be proactively involved in the process of designing how you use your PC.
fenesiistvan 1 day ago 0 replies      
Support tickets integrated with service monitoring.

Around 3 years ago, we started to get a lot of customers for our VoIP tunneling solution, mostly from the UAE. Most of these were unfriendly customers abusing our support, so I started to implement a CRM to track "support points". I spent half a year developing this solution (with lots of other functionality, such as service monitoring), and when I finished, there was no longer any demand for the VoIP tunneling solution :)

This is how I wasted half a year instead of focusing on solutions relevant to our business.

Thank goodness, we started to get new customers again last year, and my CRM/support-point tracking software is actually very useful now, but I still don't think it was worth a 6-month time investment.

Conclusion: focus on your main business and don't spend too much time on automation and other helper software (or hire somebody to do it if your business is big enough)

dqv 1 day ago 0 replies      
A PBX that only lets you record a voicemail greeting by dialing in and listening to the whole greeting before it can be saved. So... recording a greeting would take a good 15 minutes if they mess up and have to start over.

I wrote a simple lua script for freeswitch that dials the line, follows the prompts, and plays the person's greeting to the PBX. Of course, one day, the damn PBX will be replaced by freeswitch.

ecesena 1 day ago 0 replies      
Tweeting. I suck at it. I started with a txt, which became a spreadsheet, which is becoming distrosheet.com.

Sooo slooowly that the homepage still has stock cats&dogs images. The most upsetting thing is that I've got more than one person telling me "I like the homepage". My mental reaction was "wtf!?". </rant>

Anyway, I still don't tweet much, but I'm getting there.

ASipos 1 day ago 0 replies      
Downloading fan fiction from fanfiction.net

I have written a Python script that builds an HTML file out of all chapters of a given fan fiction and then calls Calibre to convert it to MOBI for my Kindle.
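The stitching step of such a pipeline fits in a few lines; `ebook-convert` is Calibre's real command-line tool, but everything else here (paths, chapter text) is illustrative:

```python
import html
from pathlib import Path

def build_book(chapters, title, out="book.html"):
    """Concatenate chapter texts into one HTML file Calibre can convert."""
    body = "".join(
        f"<h1>Chapter {i}</h1><p>{html.escape(text)}</p>"
        for i, text in enumerate(chapters, 1)
    )
    Path(out).write_text(f"<html><head><title>{html.escape(title)}</title>"
                         f"</head><body>{body}</body></html>")
    return out

path = build_book(["It was a dark & stormy night.", "The end."], "My Fic")
assert "Chapter 2" in Path(path).read_text()
# Hand off to Calibre (not run here):
# subprocess.run(["ebook-convert", path, "book.mobi"])
```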

Unfortunately, my life doesn't have too many automatable aspects... (I am a math researcher.)

whiskers08xmt 6 hours ago 0 replies      
Every robotic task tangentially related to auditing. I work with robotic task automation at one of the Big 4, and it's really amazing how much trivial work is being done by humans.
patd 1 day ago 2 replies      
Most of my side projects have been about automating the little things that end up taking me a lot of time.

At my first job, part of my work (next to junior dev) was to deploy EARs on Websphere. I automated it so that people just had to drop it on a shared folder and I'd just take a look if it failed to install automatically.

I wrote a command-line tool to search and download subtitles https://github.com/patrickdessalle/periscope

I made a browser plugin to compare the price of the European Amazon and a few other websites (it grew to more countries and websites) http://www.shoptimate.com

And now I'm working on a tool that regularly checks if some of my content is getting adblocked because it's something I periodically do by hand http://www.blockedby.com

In the end, automating things can take more time than actually doing it. But if it's used by others and saves them time as well, it's gratifying.

leipert 1 day ago 1 reply      
Sorting my mail with imapfilter. I have a YAML file where I write down which mails go into which folder depending on the sender, recipient, or another header field. Runs on a Raspberry Pi every ten minutes between 8 and 8.
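A rule table like that needs only a tiny matcher. A Python sketch, with the rules shown inline as they might look after loading the YAML file (the field names and folders are assumptions):

```python
# Rules as they might look after loading the YAML file.
rules = [
    {"field": "from",    "contains": "newsletter@",  "folder": "Newsletters"},
    {"field": "to",      "contains": "lists.debian", "folder": "Lists/Debian"},
    {"field": "subject", "contains": "[JIRA]",       "folder": "Work/Tickets"},
]

def route(headers, rules, default="INBOX"):
    """Return the destination folder for a message's headers."""
    for rule in rules:
        if rule["contains"] in headers.get(rule["field"], ""):
            return rule["folder"]
    return default

msg = {"from": "newsletter@example.com", "to": "me@example.com", "subject": "hi"}
assert route(msg, rules) == "Newsletters"
assert route({"subject": "lunch?"}, rules) == "INBOX"  # falls through to default
```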
imroot 1 day ago 1 reply      
My expense reports and timesheets.

The three shittiest parts of my job every week are:

- Approving timesheets

- Entering in my timesheets

- Entering in my expense reports

I've written a script that goes in using a phantom.js script, and automates the submission of my timesheet on Friday afternoon at 3:00 +/- 15minutes. It now takes into account travel time, Holidays, and approving time if I have time approvals due.

Same holds true for submitting expense reports in Oracle. I upload the receipt to Expensify, and as long as it's tagged properly in Expensify, it'll automatically generate the correct expense report in Oracle for the proper project based on the receipts in Expensify. This saves me, on average, about 6 hours a month.

prawns 1 day ago 0 replies      
Downloading porn and culling the old stuff. Currently automated management of over 100TB and growing!
wslh 1 day ago 1 reply      
Designing and developing UIs. I want to develop web UIs like you develop UIs with Visual Studio or Xcode. I cannot believe how much effort we need to build and modify web experiences.
neya 1 day ago 6 replies      
I had tons of startup ideas that I'd always wanted to try. After a point, it became frustrating to test them out one by one, either by writing custom applications in Rails or using Wordpress. Both cost me a significant amount of time.

For example, I had this idea for a travel startup for a very, very long time and I decided to build it on Wordpress. The monetization model was selling some E-Commerce items, so I naturally tried out some of the plugins and was shocked at how long it took for me to get a simple task done. I had such a terrible experience that I'd never recommend it to anyone. Wordpress by itself is fine, but when you try to extend it, you face so many hiccups.

That's when I realized there's no use blaming the tool. It's because of the differences in philosophies between me and the core Wordpress team. So, I naturally spent another 4 months writing a Rails app for this travel startup and still wasn't satisfied with my time to market. Clearly, there had to be a better, faster way?

In essence, I realized every online startup requires these components:

1. Authentication / Authorization

2. CMS - To manage content on the site, including home page, landing pages, blog, etc.

3. Analytics - To help track pageviews, campaigns, etc

4. CRM - To manage a sales pipeline and sell to customers. Also to know very well who your customers really are.

So, I went ahead and wrote this mammoth of an application in Phoenix (using DDD architectural patterns) that has all the modules above. Now, every time I have an idea, I just log in to my interface, set up the content and the theme/design and launch a campaign... bam! My idea is now live and I can test it out there on the market.

You can think of it like a complete combination of all the startups out there:

1. Mailchimp - I can send unlimited emails, track opens, analyse them. Handled by my marketing module. I can customize the emails too, of course.

2. Unbounce - I can design my own landing pages. Handled by my CMS.

3. Buffer - I can schedule shares from within my interface based on best times by engagement. Handled by my marketing module.

4. Hubspot - My system has a full, hubspot/zoho clone of CRM.

Here are some of the key highlights:

1. All my data is collected on BigQuery and I own it instead of sending to third parties.

2. There is no forced limitation on my marketing - For example, if you used Mailchimp, you know you're limited to just 2000 recipients. If anything more, it quickly gets expensive. But my system is my own, no limitations whatsoever.

3. I can spend less time developing my idea and more time executing it.

4. I have my own custom business dashboard for each of my ideas, which tells me how well or badly it's performing, so that I can turn it off when needed.

Probably not the kind of automation you were expecting, but yeah.

EDIT: Added more details.

reddavis 1 day ago 0 replies      
I automated my dehumidifier.

I wrote about it here: https://red.to/blog/2016/9/15/automatically-controlling-a-de...

and open-sourced the Rails app: https://github.com/reddavis/Nest-Dehumidifier

foxylad 1 day ago 0 replies      
Easy - anything boring. "Boring" usually means repetitive and not mentally challenging, which to my mind is exactly what computers are for.

Even if the task happens infrequently and the script takes longer than the task, automating it is worth the investment:

- It prevents having to remember or re-discover how to handle the task next time.

- It ensures the task is handled consistently.

- It prevents potential manual errors.

For example, on the financial side, my company runs bank accounts in five countries, each with different GST/VAT taxes. Over time, I've developed scripts that grab the mid-month exchange rates that our Internal Revenue service requires to be used; crunch downloaded bank transaction data into categories (including tax inclusion or not); and export it all into a huge Google spreadsheet. This provides global and per-country balance sheets and profit and loss, and when tax reporting time comes for each country, a tab on the spreadsheet provides all the figures, so filing returns is a five-minute process. Occasionally the scripts will flag an unrecognised transaction, and rather than manually correcting this in the spreadsheet, I'll add a rule to the script so it is recognised next time.
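The rule-driven categorisation step could look something like this sketch (the rule table, category names, and tax flags are invented for illustration; the real version would also vary per country):

```python
# Hypothetical rule table: substring of the bank description
# -> (category, tax_inclusive). Unmatched transactions return None
# so they can be flagged for review and turned into a new rule.
RULES = [
    ("AMAZON WEB SERVICES", ("hosting", True)),
    ("INLAND REVENUE", ("tax", False)),
]

def categorise(description, rules=RULES):
    """Map a raw bank-statement description to a category, or None."""
    text = description.upper()
    for needle, result in rules:
        if needle in text:
            return result
    return None  # unrecognised: flag it, then add a rule next time
```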

Cumulatively this probably took several tens of hours to code, but it means we don't need to employ an accounts clerk. It takes about fifteen minutes a month to download the bank data (manually - oh how I wish banks had APIs) and run the scripts. Our accountant loves this - the spreadsheet is shared with him, he can check our formulae or add other metrics, and he prepares our annual report an order of magnitude faster than any of his other clients.

natch 1 day ago 0 replies      
Many things. A trivial one: I recently wrote a script to electronically sign six documents from my divorce and related tax paperwork using ImageMagick. Just to avoid having to do it with Gimp or Preview or some other GUI tool, and then re-do it when there are revisions. Yes there are online tools but I'm working with people who don't use those, nor do I want to upload these documents anywhere I don't have to.
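A sketch of how such an ImageMagick call might be assembled from Python (file names and coordinates are placeholders; the `composite` binary must be installed, and the geometry depends on where the signature line sits on each page):

```python
import subprocess

def composite_cmd(page_png, signature_png, out_png, x, y):
    """Build the ImageMagick `composite` command that overlays the
    signature image at pixel offset (x, y) on a rendered page."""
    return ["composite", "-geometry", f"+{x}+{y}",
            signature_png, page_png, out_png]

def sign_page(page_png, signature_png, out_png, x=400, y=700):
    # Re-runnable when a revised document comes back: just render the
    # new page to PNG and run again.
    subprocess.run(composite_cmd(page_png, signature_png, out_png, x, y),
                   check=True)
```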

Often I'll spend as much time writing an automated solution as it would take to do the task manually, even if I'm only going to run the automated solution once. The work is way more fulfilling, I can fix mistakes more easily, and I can learn and develop new techniques.

leoharsha2 7 hours ago 0 replies      
I made a bot which tells me what I should wear today depending on the weather and the clothes that I have. It messages me every morning.
raleigh_user 1 day ago 1 reply      
I automated pretty much all groceries & goods I use through a combination of Shipt and Amazon Subscribe and Save. Took a few hours one Saturday to compile a list of everything I use and estimates of when I'll need more, but I genuinely enjoy not having to think about whether I need toothpaste or have food for dinner.
kensoh 1 day ago 0 replies      
I automate as much as possible the tasks involved in coding web automation scripts - https://github.com/tebelorg/TagUI
ghaff 1 day ago 0 replies      
I wrote a little script [1] to automate a lot of the steps associated with publishing a podcast. There's still manual work but this takes care of a lot of the fiddly repetitive detail work that's both time-consuming and error-prone. Especially if I do a batch of podcasts at an event, this is a lifesaver.

[1] https://opensource.com/article/17/4/automate-podcast-publish...

noahdesu 1 day ago 2 replies      
I frequently wipe and reinstall my Linux desktop and laptops from scratch. I've been spending more time recently working on setup scripts that automate as much of this as possible. Things like installing packages, setting up the firewall, checking out code projects and installing dependencies. Currently this is mostly a bash script plus my dot-files, but I'm always looking for ways to improve this process.
simula67 1 day ago 1 reply      
Wishing my friends Happy Birthday on Facebook, with Birthday Buddy : https://chrome.google.com/webstore/detail/birthday-buddy/cil...
blockchan 1 day ago 0 replies      
Transferring lead data to Salesforce from Intercom and Slack by sending simple messages like "SQL" or "email@example.com to sf"

Receiving and sending documents to proofreading

I described them in detail here: https://www.netguru.co/blog/automating-myself-out-of-the-job...

l0b0 1 day ago 0 replies      
Some of my own projects that I've ended up using frequently - you can see what they do from the command structure:

 mkgithub ~/dev/new-project
 fgit pull -- ~/*/.git/.. ~/dev/*/.git/..
 ~/dev/tilde/.screenlayout/right-tack.sh
And some less frequently used tools:

 mount-image ./*.iso
 vcard ~/contacts/*.vcf
 ~/dev/vcard/sort-lines.sh ~/dev/vcard/sorts/Gmail.re ~/contacts/*.vcf
 img2scad < example.png > example.scad
 indentect < "$(which indentect)"
 qr2scad < ~/dev/qr2scad/tests/example.png > example.scad
 schemaspy2svg ~/db
So yeah, automate all the things.

ibotheperfect 1 day ago 0 replies      
I was downloading Beatport songs by finding them on YouTube. Then I decided to automate this. I wrote code that finds them on YouTube and downloads them automatically. Finally I decided to make it a website so that everyone can use it. www.beatportube.com
anotherevan 1 day ago 3 replies      
Wrote a program that tracks Australian movie release dates for movies I'm interested in. Sends a daily email if a release date moves, or there are new movies for me to flag my interest in.

Interfaces with themoviedb.org for plot summary, cast and crew info and such. Interfaces with Google Calendar for writing entries for each movie I'm tracking.

agopaul 1 day ago 0 replies      
I set up crawlers to make specific queries on various websites. I used them in the past with:

- used car dealer websites

- job posting boards (found a job a few years ago with that)

- craigslist-like websites

- coupon websites (looking for sushi restaurant deals)

- etc.

Also, not sure if that counts, but I have monit+scripts monitoring backups timestamps and DB replication

anotherevan 1 day ago 0 replies      
I read a lot of articles by saving them to Pocket and reading via my ereader. I wrote a little PHP browser based application that interfaces with the Pocket and hn.algolia.com APIs that helps me to follow up on articles in related forums such as Hacker News and track my reading habits.

Naturally I called it Pocket Lint.

w3news 1 day ago 0 replies      
I wrote a browser extension so I don't have to click or type a lot on some websites.

Firefox: https://addons.mozilla.org/en-US/firefox/addon/clickr/

Chrome: https://chrome.google.com/webstore/detail/clickr/kbegiheknic...

Also very useful as a web developer to test some JavaScript on a website.

sprt 1 day ago 1 reply      
Buying crypto weekly using Kraken's API.
pisomojado_g 1 day ago 1 reply      
Library book renewals. I have an AWS Lambda function that runs daily, scrapes html from my public library (they have no API), and if a book is due within the next day, renews it. If I've reached max renewals, it sends me a notification.
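The decision logic of such a renewer is small enough to sketch (the scraping and the renewal request are site-specific and omitted here; the tuple layout is an assumption):

```python
from datetime import date, timedelta

def books_to_renew(loans, today, horizon_days=1):
    """loans: (title, due_date, renewals_left) tuples scraped from the
    library page. Returns (renew, notify): titles to renew now, and
    titles already at max renewals that need a human."""
    renew, notify = [], []
    for title, due, renewals_left in loans:
        if due - today <= timedelta(days=horizon_days):
            (renew if renewals_left > 0 else notify).append(title)
    return renew, notify
```

On Lambda, the daily trigger would be a CloudWatch scheduled event, with the notification going out via SNS or similar.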
paultopia 1 day ago 0 replies      
Scraping and compilation of various annoying web content formats, with varying levels of efficacy -- e.g. https://github.com/paultopia/scrape-ebook for open source PDF chapters and https://github.com/paultopia/spideyscrape for readthedocs-esque formats.

iCloud documents edited on iOS -> versioning and shoving in a private github repo -- https://paultopia.github.io/posts-output/backup-to-git/

CV updates via template to HTML, latex, and docx

ajarmst 1 day ago 2 replies      
I consult the relevant XKCD to decide: https://xkcd.com/1205/
vgchh 1 day ago 0 replies      
1. Code formatting

- gofmt for Go, Google Java Format for Java

2. Code Style Enforcement

- golint, govet for Go, CheckStyle with Google Style for Java

hellbanner 1 day ago 0 replies      
It's really simple; I automate creating builds for the game www.QuantumPilot.me

 rm -rf ./QuantumPilot*
 rm -rf ./QuantumPilot*
 electron-packager ~/ele/electron-quick-start/ QuantumPilot --platform=all --icon=/Users/quantum/Desktop/QuantumPilot.icns
 open .

for some reason, OSX has trouble deleting the Linux folder the first time. I've heard Itch.io has a CLI for this but I haven't tried it yet. https://github.com/itchio/butler

greggman 1 day ago 1 reply      
In the past I've always automated exporting from Maya, 3DSMax and Photoshop, meaning I don't require artists to export from any of them. The artist saves the source file in the project, and tools build from that to the final format for the app/game.

The more typical workflow is that artists export .JPGs or .PNGs manually from Photoshop and save their .PSD files somewhere else. Similarly with 3DSMax or Maya, they'd manually export using some plugin. That seems wasteful and error-prone to me. Source files get lost. Artists have to maintain multiple versions and do the export manually, which seems like a huge waste of time. So, I automate it.

fantispug 1 day ago 0 replies      
I automated my wedding seating cards and plan.

I managed invitations as a CSV (who had been invited, who responded yes and no, addresses and dietary requirements).

I designed the placecards and seating plan as SVG in inkscape with special text I used as {templating parameters}.

I could then produce all my place cards and the seating plan from a simple script. This was handy when guests changed their RSVP a week out from the wedding, when I had little free time, and I could make a change instantly. (Although admittedly I spent more time getting the layout right for the seating chart than if I had done it by hand.)

xs 1 day ago 1 reply      
I just figured out how to use ansible and python to script out changing the passwords for all the network gear in the office. It uses a random password generator api https://passwordwolf.com to fetch a new password, changes it on everything, then sends me the new passwords. I'm changing passwords monthly now but it works so well that I might set it to weekly.
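An alternative to fetching passwords from a third-party API is generating them locally with Python's `secrets` module — a sketch (the length and alphabet are arbitrary choices, not anything from the setup described above):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#%^&*"

def random_password(length=20, alphabet=ALPHABET):
    """Generate a password locally, so no external service is in the
    loop for material related to your network credentials."""
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

That sidesteps the question of whether to trust an outside password generator for infrastructure credentials at all.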
ehudla 1 day ago 1 reply      
Preparing purchase forms for the university library, and letting me know when books I order become available.


sergiotapia 1 day ago 1 reply      
Download media. I have Sonarr+Radarr+Plex. I don't spend much time looking for media.

Code reviews. Using something like CodeClimate to automatically check code quality before anyone actually reads the code.

david90 1 day ago 0 replies      
I automate stats for our products from Google Analytics using a Google spreadsheet. Using Apps Script, I extract all key metrics such as activation rate / retention rate from the raw data.

Then when I need to report all stats for multiple products, there is another automated script for me to aggregate them.

Saved me hours of context switching and copying and pasting.

mxxx 1 day ago 1 reply      
I get a weekly newsletter with a bunch of music recommendations in it, which I had been manually adding to a Spotify playlist.

So I recently wrote a CLI in Node that takes a URL and a CSS-style query selector (i.e. '.album-title'), then scrapes the page, searches for each found instance and adds them all to a Spotify playlist.
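The selector-scraping step can even be done with a standard library alone — a rough Python sketch that only handles class selectors like '.album-title' (a Node CLI would presumably use a proper CSS engine instead; the markup in the example is made up):

```python
from html.parser import HTMLParser

class ClassTextExtractor(HTMLParser):
    """Collect the text content of every element carrying a given class."""
    def __init__(self, cls):
        super().__init__()
        self.cls, self.depth, self.hits = cls, 0, []

    def handle_starttag(self, tag, attrs):
        if self.depth:                                  # nested inside a match
            self.depth += 1
        elif self.cls in dict(attrs).get("class", "").split():
            self.depth = 1
            self.hits.append("")

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.hits[-1] += data
```

Each extracted title would then be fed to the Spotify search API and the results added to the playlist.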


kogus 1 day ago 0 replies      
I do contract work for a few clients. I always automate the boring tasks of vpn'ning, firing up remote desktop, connecting to database servers, their email system, etc.

Automating that is fiddly and tedious, but it's worth it because I can just click a button and get a menu of clients. I choose one, and in about 10 seconds my machine is ready to go on their work.

Axsuul 1 day ago 0 replies      
I automate filtering my RSS feeds, or creating a weekly digest of emails that are not priority (bank statement emails, receipts, etc), crawling certain pages that I need to monitor and creating new RSS feed items on updates, weekly digests of top Reddit posts for specific subreddits, monitoring flight deals that originate from my airport.

I find that converting a lot of unimportant emails into RSS feed items has been a huge win for me.

ldp01 1 day ago 0 replies      
Clicking! I wrote a PowerShell script for Windows which mimics the autoclick functionality Ubuntu has in its accessibility options. I also added double/triple clicking by twitching the mouse a bit.

It takes some getting used to but I feel it helps avoid forearm soreness.

koala_man 1 day ago 3 replies      
olalonde 1 day ago 0 replies      
I recently had to frequently create private git repos for job candidates (containing a coding challenge). I built a simple web app that does it all in one click (as a bonus, my non-technical co-founder can also use it). https://i.imgur.com/HhQP4lX.png
arikr 1 day ago 0 replies      
Great thread, thanks OP.
philip1209 1 day ago 0 replies      
I liked writing an internal command line utility for our Go codebase. It automates common dev commands like deployments (including installing dependencies, migrations, etc), sending test emails (eg to check formatting), and running smoke tests. Pretty minor, but it makes my life a lot easier. I plan on expanding it more for accessing prod and dev APIs.
vira28 1 day ago 0 replies      
I use slack a lot for the communication.

I have automated it so that whenever significant events happen in our app, I get notified. It's simple to implement: configure the webhook.

Also, I did things like getting notified whenever there is a commit, pull request or push in your source control.

sawmurai 1 day ago 1 reply      
Commit hook that aborts commits if the project's code style is violated by one of the changed/added files
gottlos 1 day ago 1 reply      
Shopping list via Oscar, barcode scanner, open food facts

Aircon via temp sensors and node-samsung-airconditioner

still working on Owntracks/mqtt for useful automations on arrival home

lights plus motion sensor, light color by time of day (red at late night to save vision)

bakli 1 day ago 0 replies      
I've written a script which helps me copy files from their folders in the Material Design image library to my Android project. This saves me at least 4 copy-paste and renaming operations.
spinlock 1 day ago 0 replies      
I've automated deployment of my side project. When I merge a PR to master in GitHub, it will pull the new build and restart any process that's changed.
patrick_haply 1 day ago 1 reply      
Time logging. I use one piece of software to track my time, then fan those time logs out into the various pieces of software that need to know about them.
fest 1 day ago 1 reply      
Tracking packages so I could batch my trips to post office.

Simple web interface where I have a list of packages I've ordered, with the last status update from the postal service's web tracking.

welder 1 day ago 0 replies      
I automate my time tracking using https://wakatime.com
utanapishtim 1 day ago 0 replies      
If I have to update a file programmatically when I make certain modifications to a codebase I'll write a script that automates the update.
SirLJ 1 day ago 2 replies      
Stock market trading systems, so I don't have to watch screens; also backups, and constantly improving monitoring for smooth operations
jessedhillon 1 day ago 2 replies      
I have a script that downloads bank and credit card transaction data, then applies rules to create a journal in GNU Ledger format.
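The output side of such a pipeline is just string formatting — a sketch of one journal entry in Ledger syntax (the account names, and the rules that would choose them, are assumptions):

```python
def to_ledger(date, payee, account, amount, source="Assets:Checking"):
    """Render one transaction in GNU Ledger journal syntax: a date/payee
    line, the categorised posting, and the balancing source account."""
    return (f"{date} {payee}\n"
            f"    {account:<34}${amount:>10.2f}\n"
            f"    {source}\n")
```

A rules table (description substring -> expense account) would map each downloaded transaction to its posting before rendering.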
borntyping 1 day ago 0 replies      
Anything I have to do more than once. If I have to do it a second time, I'll probably have to do it a third..
surfingdino 1 day ago 0 replies      
Saying "no" to meetings and interruptions. I have a box with a big "NO" written on top of it. Whenever someone comes by to ask me "how are you doing?" I tap the box.
based2 1 day ago 1 reply      
A colleague is doing JIRA exports to Excel / MS Project.
hacker_9 1 day ago 0 replies      
My build process.
webscalist 1 day ago 1 reply      
restart all things every night.
edwilson 1 day ago 0 replies      
I wrote a little sync script for my server. It saves my MySQL backups to Google Drive.
noiv 1 day ago 0 replies      
In the long run? All.
swayvil 1 day ago 0 replies      
All conversations.

In the case of f2f (face to face) I just let my phone run me like a peripheral.

canadian_voter 1 day ago 0 replies      
I wrote a bot that automatically comments on HN when certain topics appear.


This post has been automatically generated and may not reflect the opinion of the poster.

probinso 1 day ago 1 reply      
I automate things that a computer can do
bearton 1 day ago 0 replies      
I automate legal documents using Advobot (advobot.co), a messenger-based chatbot that walks you through drafting legal documents. It makes drafting legal documents easy and conversational and is much faster than traditional methods. I can also use it from my phone, which makes drafting legal documents on the go much easier.


Huhty 1 day ago 0 replies      
My team and I run a reddit/HN-like community platform called Snapzu and we automate most (90%) of our social media channels.

We have 15 main categories, each with their own Twitter, Medium, WP, Blogger, etc. Here's an example of our science Twitter account: http://twitter.com/@Snapzu_Science

amingilani 1 day ago 0 replies      
Oh boy, sigh, I wish I could share something I just automated, it's insane. Like, everyone that sees it tells me it's pure genius.

Problem is that it isn't ready for the public. I'll do a Show HN next week, but by GOD it is a brilliant piece of automation and scaling :P

Soon (this is more for me than anyone else, I'm literally bursting with pride right now)

Turkish GSM networks currently play a message of the President on any phone call
422 points by mrtksn  2 days ago   172 comments top 21
kbody 2 days ago 2 replies      
"As president, I send congratulations on the July 15 National Day of Democracy and Unity and wish the martyrs mercy and the heroes (of the defeat of the coup) health and wellbeing,"

Source: https://au.news.yahoo.com/world/a/36394050/mr-president-erdo...

throwaway76493 2 days ago 1 reply      
There is something equally insane happening on the Turkish internets right now.

At least two major mobile operators / ISPs are injecting JS into web traffic to display pop-up ads / YouTube videos in the lower right corner of every web page. The videos "commemorate" last year's events on July 15 in a language that is, to put it mildly, thoroughly in line with Erdogan's ideology, and make a point of offering free data and phone credits throughout the 3-day commemorations being held.

rdtsc 2 days ago 2 replies      
For an additional level of scary: allow people to opt out, but record who they are and compile a list. Use the list to deny them services or imprison them when the next overthrow is attempted. "You've been protesting and we noticed you blocked messages from our glorious leader... clearly a candidate for the labor camp"
buremba 2 days ago 2 replies      
Even if you stop watching TV, reading newspapers and following the political people on social media and avoid discussing political news with people, you can't escape from him and his followers.

They will force you to believe what they believe, and if you don't, they will flag you and also make you listen to their leader no matter what you do to avoid their propaganda.

Even though I believe that the leaders of the Gulenist group did the coup attempt and are terrorists, Erdogan gave this power to them and yet acts like he's not responsible for all this shit.

xepbam57 2 days ago 4 replies      
Has anybody thought about why you hear the sound (beeeep-beeeeep-...) when you make a call, and where it comes from? Yes, the telco can put anything there. Even more, I wonder why we do not hear commercial ads every time we call. This would be in the spirit of current times...
mmerlin 2 days ago 1 reply      
So so sad when a country devolves into quasi-dictatorship
toroslar 2 days ago 2 replies      
It's a lie like so much other stuff in the press. I'm currently in Turkey/Antalya, I have a cell phone with a Turkish Vodafone SIM card - I had several phone calls today - no president in my phone.
fouadmatin 2 days ago 3 replies      
The number in the video is 112, which is Turkish-equivalent of 911 in the U.S.
exabrial 2 days ago 6 replies      
Why are they a NATO country again?
Fnoord 2 days ago 2 replies      
What exactly is he saying? Can someone translate?
noncoml 2 days ago 4 replies      
I wonder how things would have been if Turkey had been accepted into the EU 10 years ago. Would it have helped?
Lagged2Death 2 days ago 2 replies      
Dexter Palmer's 2016 novel Version Control had imagined something rather like this in a near-future United States, where phone calls and video screens would occasionally be interrupted by a message from the president.

I had thought it was inventive and evocative, but sort of unrealistic.

I was wrong. Yikes.

NicoJuicy 2 days ago 1 reply      
It's funny to see that Erdogan wants to battle every European country and at the same time he asks us to visit Turkey.

His power comes from the wealth and investments of Western companies, so the people had it good in the past. But this is currently changing. It's 'just' a waiting game.

zagfai 1 day ago 0 replies      
Use a VPN to stop this from happening again. Such as Yoga VPN, Bestline VPN, Super VPN...
homero 2 days ago 0 replies      
When people voted, they were tricked into thinking they were somehow voting against the West instead of installing a dictator for themselves
AdamJacobMuller 2 days ago 0 replies      
What does this say in English?
marcxm 2 days ago 1 reply      
OzzyB 2 days ago 3 replies      
appendixsuffix 2 days ago 1 reply      
powertower 2 days ago 1 reply      
Talbotson 2 days ago 1 reply      
This is 100% normal for these types of situations.
Broadcom BCM43xx Wi-Fi chips allow remote attackers to execute arbitrary code nist.gov
219 points by rnhmjoj  3 days ago   42 comments top 12
cmurf 2 days ago 0 replies      
I've got a Macbook Pro from 2011 with BCM4331. So there must be a metric f ton of hardware affected.

The firmware is embedded in the closed source drivers, it's not available separately. It's uncertain how far back Broadcom will update their driver packages.

On Linux, if you want to use the open source driver, either the distro or the user must extract the firmware from the closed source driver using b43-fwcutter. I don't know what license restrictions apply to the distros that appear to include it in their base installs, and therefore probably get it updated. But distros like Fedora don't include it due to license restrictions. Therefore it won't get updated automatically.

I think the kernel driver should warn, if not refuse to use without a force option, on firmware versions below a certain value.

userbinator 3 days ago 0 replies      
Note that Cypress acquired these from Broadcom and have opened up a lot of the documentation. There's still quite a lot of info not present, and they might still be going through and renaming them, but maybe if you ask... either way, a surprisingly pleasing result in contrast to the notoriously closed attitude before.
0x0 3 days ago 1 reply      
It feels like we just had another massive wifi firmware exploit not too long ago. iOS 10.3.1 / CVE-2017-6975 / CVE-2017-6956 / https://bugs.chromium.org/p/project-zero/issues/detail?id=10...
qume 3 days ago 3 replies      
Love my MacBook Pro (running Linux), but the Broadcom chips are such a pain. Almost enough to make me drop Apple hardware.

I think this has pushed me over the edge.

43224gg252 3 days ago 2 replies      
So since this is in the firmware, is it safe to assume that this affects all devices with the Broadcom chip, regardless of what OS they're running?
TekMol 3 days ago 3 replies      
Does this affect laptops running Linux? How do you know if you are vulnerable?
jwildeboer 3 days ago 2 replies      
Seems to be fixed in Android July security update. Not sure wrt iPhone, iPad.
nthcolumn 3 days ago 1 reply      
iokevins 2 days ago 0 replies      
One device specification resource:


andridk 3 days ago 1 reply      
Are there any tools out yet to patch, and to check if you're vulnerable?
anfractuosity 3 days ago 1 reply      
I haven't looked into how the exploit works in detail, but could this affect the Raspberry Pi 3?
Australian PM Calls for End-To-End Encryption Ban eff.org
222 points by theunamedguy  3 days ago   149 comments top 19
corndoge 2 days ago 4 replies      
You know, every time I see something like this, I really hope the legislation passes. Then as the enforcement attempts to handle their impossible task we get to sit back and watch as Tor and I2P and EFF get funded, citizens and politicians gain a basic understanding of encryption, and governments learn that one does not simply ban crypto.

It would be fun. So more power to them! Let's see Britain create their own crypto-free internet! Let's watch the public uproar as Google and WhatsApp become unavailable in Australia! These things are just big opportunities to get people familiar with what's at stake.

chrischen 3 days ago 8 replies      
I wish the gun nuts in the US would realize that the constitutional protection of gun rights was drafted in a time when guns weren't niche and were necessary as a check on government power.

Today, encryption is a check on government overreach, and guns are effectively a vestigial hobby (unless you're in a gang or the illicit drug industry).

gingernaut 3 days ago 6 replies      
Pushed on how encrypted messages could be read when service providers don't hold the keys necessary for decryption, Turnbull had this to say:

Well, the laws of Australia prevail in Australia, I can assure you of that. The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia.


jacques_chester 3 days ago 2 replies      
I see that we Australians have resumed our international role as laughing-stock of the technology industry. As much as I disagree with the Greens generally, losing Scott Ludlam at this moment is a serious loss.

For those of you able to donate, the equivalent of the EFF in Australia is the EFA: https://www.efa.org.au/

openfuture 3 days ago 2 replies      
"Terrorists are now only allowed to use waterpistols, this should dramatically lessen casualties and help with identifying them"
arkem 2 days ago 0 replies      
I think calling this a ban on end-to-end encryption is a mischaracterization.

According to the press conference where this comes from[1] it seems that they're talking about legislation that would expand the Telecommunications Act's provisions to require communications providers to assist law enforcement[2] to cover internet messaging platforms.

This doesn't necessarily mean that Facebook would be obligated to backdoor their encryption, or store keys for all communications, or change the architecture of their platform. They would be obligated to comply with interception warrants to the best of their ability.

Forms that this assistance could take (from the act):

 (7) A reference in this section to giving help includes a reference to giving help by way of:
     (a) the provision of interception services, including services in executing an interception warrant under the Telecommunications (Interception and Access) Act 1979; or
     (b) giving effect to a stored communications warrant under that Act; or
     (c) providing relevant information about:
         (i) any communication that is lawfully intercepted under such an interception warrant; or
         (ii) any communication that is lawfully accessed under such a stored communications warrant; or
     (ca) complying with a domestic preservation notice or a foreign preservation notice that is in force under Part 3-1A of that Act; or
     (d) giving effect to authorisations under Division 3 or 4 of Part 4-1 of that Act; or
     (e) disclosing information or a document in accordance with section 280 of this Act.
I prefer to see law enforcement have broad authorizations but limited special powers (i.e. they're allowed to do a lot of things in pursuit of an investigation but they don't have many ways to compel assistance) but I think this story is overblown (largely because Turnbull's quote out of context is pretty funny).

[1] http://www.pm.gov.au/media/2017-07-14/press-conference-attor...

[2] http://www.austlii.edu.au/au/legis/cth/consol_act/ta1997214/...

chris_wot 3 days ago 1 reply      
He didn't say that when he was using end-to-end encryption.
wisty 3 days ago 2 replies      
As I understand it, it means banning end to end communication between clients. So you can chat over https, but only if the server in the middle (which can be served warrants) is doing the encryption.
borplk 3 days ago 0 replies      
Last week at G20 he saw Theresa May and said "I'll have what she's having". And here we are.
Zigurd 2 days ago 0 replies      
"The laws of mathematics are very commendable but the only law that applies in Australia is the law of Australia" Once they pass a law barring gravity from affecting police investigations, it will be a crime to use gravity to prevent flying cops from investigating.
daxfohl 3 days ago 1 reply      
May as well ban talking. Wait, and thinking too! Except for dickheads that run for office because ... (somebody smarter and better connected than me, please end this sentence in a way that makes sense).
stevew20 2 days ago 0 replies      
I usually go the high route and write something insightful or informative on HN. However, the only thing that comes to mind after reading this is phrases like "What a fucking nut", or "Did no one review his speech for him, because they were too busy pulling the stuck fruit loops from his nose?"

Australia, shame on you for letting this moron take up this position of power. Oh wait.... I live in the USA.... Shame on me too.

forgottenacc57 3 days ago 1 reply      
It's stupid because it doesn't solve the problem it's intended to.
lngnmn 3 days ago 0 replies      
Instead of better education, social security and job creation...
javiramos 3 days ago 0 replies      
Classic. It is always the politicians who screw things up.
blubb-fish 3 days ago 1 reply      
And I thought he was a good and smart guy because he made fun of Trump.
type0 3 days ago 0 replies      
You just can't turn the bull, all pun intended.
       cached 18 July 2017 15:11:01 GMT