Hacker News with inline top comments - Best, 18 Jul 2017
1
Net Neutrality Day of Action: Help Preserve the Open Internet blog.google
1663 points by ghosh  5 days ago   430 comments top 52
1
mabbo 5 days ago 32 replies      
If Google were actually serious about Net Neutrality, they would use their insane market power to protect it.

How? Well, a simple statement saying "any ISP who abuses net neutrality will have their customers cut off from Google products". No Google search, no YouTube, no Gmail. Have those requests instead redirect to a website telling the customer what their ISP is doing, why Google won't work with them, and how to call to complain to the ISP. Make the site list competitors in the user's area that don't play stupid games.

Is this an insane idea? Yep. Would Google come under scrutiny because of their now-obvious market power? Oh definitely. And Google would probably lose money over it. But it would certainly work.

People don't get internet, and then decide to use Google. They want Google and then get internet for that purpose.

edit: an hour later, fixing an autocorrect word

2
AndrewKemendo 5 days ago 5 replies      
> Thanks in part to net neutrality, the open internet has grown to become an unrivaled source of choice, competition, innovation, free expression, and opportunity.

Unless my history is wrong, and please correct me if that is the case, until the Title II decision in 2015 there were no regulations preventing an ISP from discriminating among kinds of network traffic. So to say that net neutrality has been key to an open internet from 1980-2015 seems without merit.

I think the argument here is the same for any argument of nationalization: To turn a private good into a public one.

Businesses and local and federal governments have all contributed to the infrastructure that is the internet. So a private company can't say, "well, it was all our investment", and equally the government can't say, "this is a public good."

3
ambicapter 5 days ago 11 replies      
This has been the weakest day of action I could imagine. I thought sites were going to be throttled. Turns out it's just some color changes and, oh, reddit has a fancy "slow-loading" gif for their website name. A real wake-up call!
4
bobcallme 5 days ago 6 replies      
"Net Neutrality" in its final form did not solve or fix any problems with the Internet. The definition of "Net Neutrality" is poorly defined, too vague and does not have any proposed legislation attached to "fix" things. Even when new rules were implemented, ISPs still throttled torrents and manipulated traffic. The only way to fix the Internet is to do so from a technical perspective, not by adding more regulations that ISPs won't obey (they work that into their business model). The "Internet" has never been free and has always been controlled by a handful of entities. The only fix for the Internet is if everyone actively participates in the Internet's infrastructure and we work to create technologies that thwart active threats from ISPs or that gives ISPs competition.

TL;DR: I don't support Net Neutrality.

5
cyphar 5 days ago 1 reply      
I know this is "old news" now, but it's very fascinating that Google is suddenly so concerned about "the open internet" 4 days after EME was ratified (a proposal that they authored and forced other browsers into supporting thanks to their enormous browser share).

It feels like Google (and other companies for that matter) are only concerned about "the open internet" when it benefits their bottom line. In fact, I'm not convinced that Google _does_ care. For SOPA and PIPA they actually did a (lukewarm) blackout of their site for the day of action. Wikipedia shut down on that day. Where has all of the enthusiasm gone?

6
EdSharkey 5 days ago 4 replies      
I don't understand the logic of ISPs throttling certain sites based on the traffic to those sites.

As a consumer on an ISP's last-mile lines, I make a series of TCP requests and I expect responses. Fill my pipes with those responses as best you can and charge me for the privilege. If you're not making enough money on that, charge me more for the bandwidth.

Market-wise, why would an ISP do anything other than fill my pipe with what I'm asking for?

An ISP should make all the money it needs to make off my service subscription. It's not too far of a leap for me to imagine U.S. laws being changed that restrict ISPs to charging end-users for their subscriptions, with heavily regulated flat fees for peering arrangements and co-location services placed near the consumer.

The obvious shenanigans that are ramping up here will eventually lead to a massive consumer backlash and a regulatory hammer coming down. People are not going to forget what the open internet looked like.

7
rtx 5 days ago 6 replies      
FCC Chairman Ajit Pai: Why He's Rejecting Net Neutrality
https://www.youtube.com/watch?v=s1IzN9tst28
8
peterashford 5 days ago 4 replies      
As a New Zealander, I find it extraordinarily inappropriate that global infrastructure like the Internet is being shaped by the whims of US politics and corporate culture. The Internet is a global network of global concern, and it should be above the manoeuvring of Republicans and American Internet providers.
9
lerpa 5 days ago 2 replies      
Net neutrality just helps the status quo, and forces the "evil greedy ISPs" to take your money. Yeah, let's show them by giving them money and no competition to their business... wait.

Vote for less regulation, not just getting rid of NN but getting rid of the monopolies that exist at the local level.

10
JoshTriplett 5 days ago 2 replies      
Now if only this were linked from the bottom of google.com.
11
zackbloom 5 days ago 1 reply      
If you use Cloudflare you can install the Battle for the Net widget: https://www.cloudflare.com/apps/net-neutrality
12
natch 5 days ago 0 replies      
Am I going blind, or is Google not amongst the companies listed as participants behind battleforthenet.com?

https://www.battleforthenet.com/july12/#participants

Why, Google?

Yes I see they sponsored https://netneutrality.internetassociation.org/action/ but why not get behind both sites?

13
rf15 5 days ago 1 reply      
Can I contribute without being a US citizen? It seems to be a US-internal issue, but considering that most of the net belongs to the US, this might actually be a far more global question than is legally coverable/definable by US law.
14
gremlinsinc 4 days ago 0 replies      
So glad I live in Utah, where we have XMission (Pete Ashdown, its founder, is a huge supporter of the EFF and net neutrality, and anti-NSA) and Google Fiber (Google's a big supporter as well). Loved XMission, but my new landlord only has Google Fiber installed, so I'm using that; both had 1 Gbps connections.

Two great ISPs who WON'T be pulling shenanigans like Comcast/AT&T when net neutrality is destroyed.

Too bad more people in America don't have good choices... I do think, though, that the biggest thing the big sites could do for 'action' would be to block all Comcast/AT&T users from Google, Facebook, Twitter, YouTube, and Reddit every Monday in protest... till the ISPs cry and beg and plead with the FCC to reinstate net neutrality.

If it's legal to prioritize some websites over others... then it's legal for those same websites to prioritize certain ISPs over others...

15
FRex 5 days ago 1 reply      
I can't even enter the USA without a visa that is expensive, hard to get, and doesn't guarantee entry, but I'm getting all these net neutrality PSAs today telling me to send letters to the FCC and Congress... I'm supportive of the idea itself, but the America-centrism is a bit funny and stupid.
16
throwanem 5 days ago 0 replies      
In the Notice of Proposed Rulemaking (Docket No. 17-108), much is made of the rapid growth of the Internet under the former "light-touch" regulatory regime. The notice overlooks that this was also an environment in which competition among many Internet service providers could and did flourish.

Since then, the provision of connectivity has consolidated among only a few very large companies, which among them have strong oligopolistic power to enforce whatever conditions they please upon their customers, both residential and commercial.

In the late-1990s, early-2000s environment of healthy competition among Internet service providers, utility-style regulation of ISPs, such as that whose repeal is here under consideration, was not a necessary measure.

However, in the current strongly oligopolistic environment, only the regulatory power of the United States Government can continue to provide and enforce sufficient oversight to maintain a semblance of free market behavior.

Internet-powered entrepreneurship greatly benefits the US economy. The small, and occasionally large, businesses thus created have an outsized economic impact in terms of taxes paid and jobs created. Absent a true free market, or even the regulatory semblance of one, for Internet connectivity, these businesses may well find themselves severely hampered in their ability to earn revenue, with concomitant negative effect on their ability to contribute to our economy.

As such, I must strongly urge the new regulatory regime proposed in this filing not be adopted.

I thank you very kindly for your time and your consideration, and trust that you will decide in that fashion which you regard to best serve the interests of your constituents and of the nation which you serve.

(Also, the "Battle for the Net" folks would have done well to hire a UX designer - or perhaps to hire a different one. The lack of any clear confirmation that one's message has been sent fails to inspire confidence. Perhaps there's an email confirmation that has yet to arrive, but...)

17
heydonovan 5 days ago 1 reply      
The marketing for net neutrality is very poor. I just asked a few non-technical friends about it. A few responded with "Do you believe everything you read on the Internet?". Now, if all their favorite websites were shut down for a day, that would get everyone's attention.
18
openloop 5 days ago 1 reply      
I am starting a small business. One of the decisions I must account for is network performance versus price. Perhaps I choose to partner with a company that my network deprioritizes. I am already at a disadvantage because I cannot afford to run my own lines or peer like large corporations.

These same corporations can invest in or purchase smaller new businesses and enhance their portfolios. Some already support network neutrality, as they understand this.

I know my business depends upon my own effort. But I am sure many other small business owners face the same difficulty.

I know it is hard to be fair and objective in allowing access to the entire electromagnetic spectrum. Thanks for the article.

19
crucini 4 days ago 0 replies      
While I don't have a good grasp on the larger issue, I hope we can protect small players from being squeezed. In my limited understanding, there are really two separate things here: Comcast vs. YouTube and Comcast vs. a startup. As I understand it, Comcast gets mad that they have to invest in infrastructure so people can watch YouTube. They think YouTube is free-riding on their infrastructure. Comcast is envious of YouTube's profits and eyeballs. So Comcast wants to squeeze money out of YouTube. A battle between giants.

The other issue is that small sites including startups could get throttled almost incidentally in this war. They don't use much bandwidth, being small, but if Comcast enacts some "bizdev" process where it takes six months of negotiations to get into the fast lane, any deal below $1M is probably not worth their time.

This is how cell phone software worked before the iPhone - get permission before you can develop (IIRC). If we end up with fast-lane preferential pricing, it should really be available to the smallest players. Ideally it should be free, but the Apple app store model would work - $99/year for fast lane access until your bandwidth is really significant. But would the individual have to pay $99 to every major ISP out there?

20
shmerl 5 days ago 1 reply      
I didn't see any Net Neutrality related banner at: https://google.com

So Google didn't do what they could here.

21
thidr0 4 days ago 2 replies      
One thing I don't understand about net neutrality. Say I'm a toll road. I built the road when cars were relatively small and light. Now, some cars are getting really heavy and big (think semi trucks) and are the majority of my traffic. Because of this, they beat up the road and cause more congestion. So I want to repair the road and/or add more lanes by increasing the toll on these trucks. But all the trucking companies are complaining and preventing me from doing it, thus ultimately hurting the small personal cars that want to zip through.

Obviously this is an analogy to net neutrality, so why is this reasonable situation fundamentally different? In a free market, shouldn't I be able to increase the tolls on my private infrastructure for those that put the most stress on it?

(Now I will say, the fact that there's only one toll road option for many people is anti-competitive and against the free market, but that's not this topic)

22
chroem- 5 days ago 1 reply      
It's disingenuous for big business to try to frame this as a grassroots movement for freedom on the internet when they were completely silent about illegal NSA spying. The only difference between NSA spying and losing net neutrality is that without net neutrality their profits might be threatened.
23
joeyspn 5 days ago 1 reply      
24
executive 5 days ago 0 replies      
Help Preserve the Open Internet: Repeal and Replace Google AMP
25
openloop 5 days ago 0 replies      
I am starting a small business. One of the decisions I must account for is network performance versus price. Perhaps I choose to partner with a company that my network deprioritizes. I am already at a disadvantage because I cannot afford to run my own lines cross state like large corporations.

These same corporations can invest in or purchase smaller new businesses and enhance their portfolios. Some already support network neutrality, as they understand this.

I know my business depends upon my own effort. But I am sure many other small business owners face the same difficulty.

I know it is hard to be fair and objective in allowing access to the entire electromagnetic spectrum.

26
mychael 4 days ago 0 replies      
Follow the money. Do you really think the biggest corporations in America support Net Neutrality because of some altruistic need for things to be "fair"?
27
forgotmysn 5 days ago 0 replies      
If anyone would like to ask more direct questions about Net Neutrality, the ACLU is having an AMA on Reddit right now: https://www.reddit.com/r/IAmA/comments/6mvhn3/we_are_the_acl...
28
Anarchonaut 5 days ago 0 replies      
Net neutrality (government's involvement in the Internet) sucks

https://www.google.de/amp/s/techcrunch.com/2017/05/19/these-...

29
daveheq 5 days ago 0 replies      
When everybody relies on the internet, even more so than phones, it's a public utility that needs protection from the greed-feeders.
30
thinkingemote 5 days ago 1 reply      
Forgive me, as a European, but are there companies that oppose net neutrality? As in, are there HN readers who work for them? If so, who are they and what are their reasons? Is the issue like same-sex marriage, where the only opposition is so laughably out of date, or are there nuances?
31
aaronbrethorst 4 days ago 0 replies      
Consider this your friendly reminder that Clinton would've preserved the NN rules set up under Obama, and we wouldn't even be having this discussion had she been elected.

Especially consider this the next time a friend says every politician is the same, or whatever.

32
yarg 5 days ago 1 reply      
The only real way to ensure net neutrality is to ignore the bullshit and implement a distributed secure internet.

Net neutrality could be forced into place, regardless of the laws passed by Congress or the malfeasance of the ISPs.

I see no reason why Google would ever support such a thing.

33
tmaly 5 days ago 0 replies      
Another channel to consider, but much more of a long-tail play, is to put some effort into state-level political races. Many politicians, with the exception of wealthier business people, get started at the state level.
34
geff82 5 days ago 0 replies      
Greetings from Europe, where we have net neutrality. Good luck to my American friends with voting for a sane government in 3 years. Maybe there are some remnants left of the country you could have been.
35
protomyth 5 days ago 0 replies      
Does anyone have actual legislation written up that I can point my Congresspeople to? Is there a bill that can be introduced that will accomplish the objective of "Net Neutrality"?
36
wenbert 5 days ago 0 replies      
If this turns out to be big, then, amongst other things, some "big" news will come up in the next few days to cover it up.

At least that's how they would do it in the Philippines.

37
rnhmjoj 5 days ago 0 replies      
Google trying to preserve the Open Internet... yeah right.
38
pducks32 5 days ago 0 replies      
Off topic: this is a very nice site. It's clean, easy to read (iPhone and iPad), and I think it makes good use of Google's design language.
39
nickysielicki 5 days ago 3 replies      
(This comment is a little bit disorganized, so I apologize for that.)

Far too many people don't seem to understand the arguments against net neutrality as it has been proposed... There's been much made about the "astroturfing" and automated comments on the FCC website that go against net neutrality-- but what about the reverse? John Oliver doesn't know what the hell he's talking about. Reddit and HN provide warped perspectives on the issue.

Don't you guys realize that no matter what policy is chosen, someone is getting screwed and someone is going to profit? Don't get me wrong, the ISPs are not exactly benevolent organizations. But I don't think they're evil either. Plain and simple, if you think this is a cut-and-dried, good-versus-evil, conglomerates-versus-little-guy issue, I think you're not hearing both sides of the issue. This issue is between content providers that serve far more bits than they take in, and ISPs, and there are billions of dollars on both sides.

In other words, don't think for a second that this is about protecting small internet websites from having to pay ransom. That's not what is going to happen. The only people who are going to be squeezed are the giants like Google, Netflix, etc., and it's no surprise that these are the people who are making such a fuss about it today.

The particular event that made me reconsider net neutrality was digging into the details of the Comcast/Netflix/Level3 fiasco a couple years ago. Everything I had heard about that situation made it sound to me like Comcast was simply demanding ransom. The reality of the situation is that L3 and Netflix acted extremely recklessly in how they made their deals, and IMO deserved everything that came to them. Much is made about "eyeball ISPs" and the power it gives them. In reality, I think Netflix has more power in swaying consumers, and I think they used that power to bail themselves out of a sticky situation by badmouthing Comcast.

I don't see how compensatory peering agreements would work out well in a net-neutral world. Specifically, the FCC proposal for Title II classification (paraphrasing here) said that the FCC would step in when it believed one party was acting unfairly. That is far too open-ended, doesn't list any criteria for what "unfairly" means, and it's not the FCC's job anyway; the FTC should be doing that.

But in general I don't think net neutrality is a good idea. I think that people are out of touch with internet access in rural parts of the US, and I don't think NN is beneficial for that situation at all. My grandmother pays $30/mo for internet access that she barely uses, and I don't think it's right to enshrine into law that Comcast can't offer her a plan where she pays $5/mo instead for limited access to the few sites she uses.

As a bandwidth-hogging internet user, a lack of net neutrality will probably mean that I will pay more. But maybe that's how it is supposed to be. The internet didn't turn out to be what the academics once hoped it would be. And that's okay. The internet should serve everyone, however they want to use it, and the market should be built around that principle-- not around decades-old cypherpunk ideals.

I think it's incredible that behemoths like Google have the nerve to paint this as if they care about an open internet. It's obvious that their dominance is what makes an open internet irrelevant.

40
unityByFreedom 5 days ago 0 replies      
I'm just bummed Google didn't change their banner like the SOPA days. Big miss there.
41
aryehof 5 days ago 0 replies      
Is this just an issue in the USA?
42
blue_leader 4 days ago 0 replies      
All this going on, and DARPA wants to put Ethernet jacks into our brains.
43
valuearb 5 days ago 3 replies      
I have never understood the need for net neutrality. That doesn't mean we don't need it, it means that no one has ever explained the need to me in a way that makes sense. Give me real world examples. What has any ISP done that would violate Net Neutrality that I would object to?
44
hzhou321 5 days ago 0 replies      
Google, Amazon, Netflix vs. ATT, Verizon, Comcast.

Monopolies vs monopolies.

Where's the freedom for us?

45
tyng 4 days ago 0 replies      
Funny thing is, I can't even visit blog.google from China
46
mnm1 5 days ago 6 replies      
Sorry Google (and FB, Amazon, etc.) this doesn't actually count as taking action. Not even a single link on their home page. An obscure post on a blog won't do shit. Let's stop pretending that you want net neutrality, Google, et al. Day of action my fucking ass.
47
dzonga 5 days ago 0 replies      
A simple way to understand net neutrality: look at the way AT&T prioritizes DirecTV content on mobile. It should be illegal, but, well...
48
tyrrvk 5 days ago 5 replies      
I see a lot of shills posting their anti-network-neutrality stuff here, so I wanted to chime in reminding folks of a few things: Telcos were forced at one point to share phone lines. Remember all those DSL startups? Remember Speakeasy? This was called local loop unbundling. What did the telcos do? Everything possible to break or interfere with these startup service providers. The telcos felt that it was "their lines". Customers were angry, and eventually local loop unbundling was dismantled.

Ironically, France, South Korea, and other nations copied this idea for their high-speed network providers, and it actually worked! You can get high-speed internet in these countries from a variety of providers. Competition! If the FTC/FCC weren't completely under regulatory capture, and telcos like AT&T were punished for this behaviour and competitors were allowed to provide services over last-mile connections, then yes, we might not need something like network neutrality.

Instead we have entrenched ISP monopolies and no competition. So we need consumer protections like Title II and network neutrality. We also need community-owned fiber networks springing up everywhere, which over time could lessen the need for regulation as market forces prevail. However, entrenched monopolies like Comcast and AT&T have to be shackled. It's the only way.
49
throwawaycuz 5 days ago 10 replies      
Serious question, could someone please educate me.

1) How is Net Neutrality different from a slippery slope to communism?

2) During the President Obama years, my ISP in the U.S. offered 3 different tiers of service at 3 different prices. How is that pure "net neutrality"? (this was similar to the situation where in the U.S., rich lefty-liberals don't send their kids to public schools... but want poor conservatives to send their kids to public schools, rich lefty-liberals don't want public housing built in their neighborhoods... etc. etc... but still want to virtue signal that they're in favor of public education and public housing)

50
dmamills 5 days ago 1 reply      
This day is a joke.
51
idyllei 5 days ago 0 replies      
Net neutrality has been a buzzword for a while now. Large news companies like to harp on it just for views, and they don't really explain to viewers what losing it will mean. FOX News's motto "We report. You Decide" makes it evident that large networks don't care about the validity of information, just that it generates the largest amount of revenue for them. Companies (and individuals) with money won't care about net neutrality--they can pay their way around it. But the casual user can't afford that, and they aren't being educated as to what this means for them. We need to get large news networks to accurately report the situation and how consumers can help.
52
pheldagryph 5 days ago 1 reply      
I understand why tech companies and VCs want net neutrality. But this protest is what is wrong with Silicon Valley "culture". It's incredibly out of touch with reality.

Are we really being asked to take this hill? Why? By whom?

History will record the hundreds of thousands of children who will die in the current famine affecting East Africa and the Arabian Peninsula. That famine will only exacerbate the ongoing, historic, and costly human migration to Europe.

This is a matter of life and death for millions. Though, unfortunately, the cost can only be measured in human lives:
https://www.oxfam.org/en/emergencies/famine-and-hunger-crisi...

2
Battle for the Internet battleforthenet.com
1274 points by anigbrowl  5 days ago   484 comments top 52
1
Clanan 5 days ago 11 replies      
Can someone please respond to the actual pro-repeal arguments (in a non-John-Oliver-smug way)? Everyone is focusing on "woe is the unfree internet!", which seems like a spoonfed, naive response with no content. And just having Google et al. on one side isn't enough of a reason, given their motivations. The given reasons for the current FCC's actions appear to be:

1. The Title II Regs were designed during the Great Depression to address Ma Bell and don't match the internet now.

2. The FCC isn't the right vehicle for addressing anti-competitive behavior in this case; the FTC would be better.

3. The internet didn't need fixing in 2010 when the regs were passed.

2
drucik 5 days ago 7 replies      
I don't get why I see arguments like 'Oh, why would it matter, it's not neutral anyway' or 'it won't change anything', while no one tries to explain why allowing an end to net neutrality would be bad. I would say the reason why net neutrality is important is the following:

'On paper' the end of net neutrality will mean that big companies like Google or Facebook (which, according to the website, do not support net neutrality [why would they, right?]) will pay the ISPs for priority connection to their services, and ISPs will be able to create 2 payment plans for their customers: a throttled network, and a high-speed, super-unthrottled network for some premium money. And some people are fine with that - 'it's their service' or 'I only use email so I don't care' or other things like that.

But we are living in a capitalist world and things aren't that nice. If it is not illegal to slow down connections 'just because', I bet in some (probably short) time companies will start abusing that to protect their markets and their profits. I'd expect under-the-table payments, so that company F or B will be favored by a given ISP, and you can forget about startups trying to shake up the giants.

3
d3sandoval 5 days ago 2 replies      
If your internet browser were a hearing aid, the information coming in would be sound - whether that's your husband or wife asking you to do the dishes, a ring at your doorbell, or even an advertisement on the radio.

Now imagine if that hearing aid wasn't neutral in how it handled sound. Imagine if, when the advertisement played on the radio, it was louder than all other sounds around. At that moment you might miss an important call, maybe your wife just said "I love you", or perhaps there's a fire in the other room that you are now not aware of, because Clorox wipes demanded your full attention.

Without net neutrality, we lose the ability to choose our own inputs. Our provider, our hearing aid, gets to choose for us. This could mean slower video downloads for some, if they're using a competitor's streaming service for instance, but it could also mean the loss of vital information that the provider is not even aware exists.

By rejecting the Title II recommendations, the FCC will introduce a whole new set of prioritization problems, where consumers no longer have the ability to decide which information is most important to them. And if the provider goes so far as to block access to some information entirely, which it very well could without Title II protections, consumers would be at risk of missing vital information - like a fire in the house or their husband saying "I love you".

4
pedrocr 5 days ago 5 replies      
I fully support the net neutrality argument, it seems like a no brainer to me. However I find it interesting that companies like Netflix and Amazon who heavily differentiate in which devices you can have which video quality[1] will then argue that ISPs shouldn't be able to differentiate which services should have which transport quality.

The situation seems completely analogous to me. I'm paying my ISP for a connection and it thinks it should be able to restrict which services I use on top of it. I'm paying a content provider for some shows/movies and it thinks it should be able to restrict which device I use to view them.

The argument for regulation also seems the same. ISPs don't have effective competition because physical infrastructure is a natural monopoly. Content providers also don't have effective competition because content access is also a natural monopoly because of network effects (right now there are 2-3 relevant players worldwide).

[1] Both of them heavily restrict which devices can access 4K content. Both of them make it very hard to have HD from non-standard devices. Netflix even makes it hard to get 1080p on anything that isn't the absolute mainstream (impossible on Linux for example).

5
marcoperaza 5 days ago 2 replies      
John Oliver, College Humor, and some comedian are featured heavily. You're going to need to do more than give liberal millennials something to feel smug about, if you actually want to win this political battle.

I don't know where I stand on net neutrality, but this is certainly not going to convince me.

6
eriknstr 5 days ago 4 replies      
Very recently I bought an iPhone and a subscription that includes 4G service. With this subscription I have 6 GB of traffic per month anywhere in EU, BUT any traffic to Spotify is unmetered, and I don't know quite how to feel about this. On one side it's great having unlimited access to all the music in Spotify at any time and any place within the whole of EU, but on the other side I worry that I am helping damage net neutrality.

Now Spotify, like Netflix and YouTube and a lot of other big streaming services, almost certainly has edge servers placed topologically near to the cell towers. I think this is probably ok. In order to provide streaming services to a lot of people you are going to need lots of servers and bandwidth no matter what, and when you do you might as well work with the ISPs to reduce the cost of bandwidth as much as possible by placing out servers at the edges. So IMO Spotify is in a different market entirely from anyone who hasn't got millions or billions of dollars to spend, and if you have that money it should be no more difficult for you to place edge servers at the ISPs than it was for them.

But the unmetered bandwidth deal might be harmful to net neutrality, maybe?

7
_nedR 5 days ago 1 reply      
Where were the protests, blackouts, outrage and calls for action from these companies (Google, Amazon, Netflix) when the Internet privacy bill was being repealed? I'll tell you where they were - In line outside Comcast and Verizon, eagerly waiting to buy our browsing histories.

We had their back the last time the net neutrality issue came around (let's be honest, their business depends on a neutral net). But they didn't do the same for us. Screw them.

8
franciscop 5 days ago 4 replies      
As a foreigner who deeply cares about the web, what can I do to help? For good or for bad, USA decisions on the Internet spread widely around the world. "Benign" example: the mess before UTF8, malign example: DRM and copyright fight.

Note: besides spreading the word, that is; I do not know many Americans.

9
superasn 5 days ago 1 reply      
This is great. I think the letter textarea should start out empty, though.

Instead, there could be a small wizard with questions like "why is net neutrality important to you?", etc., with a guideline on what to write.

This way each letter would be a differently expressed opinion instead of every person sending the same thing, and may create more impact.

10
agentgt 5 days ago 2 replies      
I have often thought the government should provide an alternative option for critical services, just like it does with mail and now health insurance (ignoring current politics).

That is, I think the net neutrality issue could be mitigated, or made a non-issue, if there were, say, a US ISP that operates anywhere there are telephone poles and public towers, analogous to the United States Postal Service (USPS).

Just like the roads (postal service), the government pseudo-owns the telephone poles and airwaves (FCC), so it should be able to force its way in.

I realize this is not as free-market as people would like, but I would like to see the USPS experiment attempted some more, particularly in highly leverageable industries.

11
melq 5 days ago 1 reply      
The form on this page requires you to submit your personal information for use by third parties. I refreshed the page 3 times and saw 3 different notices:

"Fight for the Future willcontact you about future campaigns.""Demand Progress willcontact you about future campaigns.""FreePress willcontact you about future campaigns."

No opt out, no thank you.

12
webXL 5 days ago 2 replies      
It saddens me to see HN jump on the bandwagon of an anti-free-market campaign such as this. Words such as "battle" and "fight" have no place when coming up with solutions to problems in a market economy. I know government and its history of cronyism are a big part of the problem, but to think that more regulation will make everything better is woefully misguided. How did it come to pass that there's so little trust and understanding in the system of voluntary, peaceful, free trade that has produced virtually all of the wealth we see around us? Sure there are tons of problems, but I'm sure you'll agree that they pale in comparison to those of state-run economies.

The mistrust of large corporations is definitely warranted. McDonald's doesn't give a rat's ass about your health as long as you're healthy and happy enough to keep coming back. And the reason why people come back is that McDonald's knows they have options; enough so that we all have it pretty good dietary-wise. Consumers and suppliers don't need to organize protests and boycotts of fast-food chains. Likewise, I don't think the major ISPs give a rat's ass about our choice/speed of content, so long as we're happy enough not to jump to another provider. As with food vendors, more choice, not more regulation, is the answer. The market should determine what it wants; not bureaucrats under the influence of large corporations.

13
exabrial 5 days ago 2 replies      
Hey guys,

The Trump administration expressed interest in having the FTC regulate ISPs. Does it really matter who enforces net neutrality as long as we have it?

It's no secret that ISPs have local monopolies, and that's an area of expertise the FTC has successfully regulated in the past (look at how the gas station economy works).

It's really time to move past the 2016 election and put petty political arguments aside. We're failing because we're divided. I beg everyone to please stop being smug, and push collaboration with the powers that be rather than confrontation.

14
kuon 5 days ago 2 replies      
I'm fully in favor of net neutrality, but I am not against premium plans for some content.

For example, let's say I have 50/20 Mb internet. I should be able to browse the entire internet at that speed. But if I want to pay extra to have, say, 100 Mb with QoS only from Netflix, I am not against that kind of service.

15
bluesign 5 days ago 1 reply      
Why not lower the barrier to entry for other/new ISPs by forcing incumbents to share infrastructure for a fee, and then allow them to tier/price as much as they want?
16
redm 5 days ago 0 replies      
I see everyone framing this conversation around Comcast charging customers to access websites. I feel that's just a talking point, not the real meat of the issue.

Regarding Backbones:

If I recall correctly, this originally came about over a peering dispute between Level 3's network and Netflix. The internet backbones work on settlement-free or paid peering. When there is an imbalance, the party with the imbalance pays. When there is balance, no one pays. This system has worked well for a very long time.

Regarding Personal Internet Access:

Consumer Internet connections are overbooked: you may have a 100 Mb link to the ISP, but the ISP doesn't have the capacity for all users to use 100 Mb at the same time. In short, these networks aren't designed for all users to be drawing high capacity simultaneously; they expect users to consume bursts of capacity. This is why tech like BitTorrent has been an issue too.

There is a fundamental shift occurring where users are consuming far more network capacity per user because of technology like Netflix. I know I'm streaming 4k Netflix :D
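
To make the overbooking arithmetic concrete, here is a back-of-the-envelope sketch; every number in it is invented purely for illustration and is not from the comment above:

    // Hypothetical numbers, for illustration only.
    const subscribers = 500;           // households sharing one aggregation link
    const soldRateMbps = 100;          // advertised per-subscriber speed
    const uplinkCapacityMbps = 10000;  // shared 10 Gbps uplink

    // If every subscriber maxed out their line at the same time:
    const peakDemandMbps = subscribers * soldRateMbps;            // 50,000 Mbps
    const contentionRatio = peakDemandMbps / uplinkCapacityMbps;  // 5

    console.log(`Contention ratio ${contentionRatio}:1 - the link only holds up` +
      ' because most users burst rather than stream continuously.');

Sustained streaming is exactly what pushes actual usage toward that theoretical peak, which is the shift the comment describes.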

17
elbrodeur 5 days ago 0 replies      
Hey everyone! My name is Aaron and I'm on the team that helped put together some of the digital tools that are making this day of action possible. If you find any issues please let us know here or here: https://github.com/fightforthefuture/battleforthenet
18
Pigo 5 days ago 0 replies      
It's very disheartening that this is a battle that doesn't seem to end. They are just going to keep bringing proposals in hopes that one time there won't be enough noise to scare politicians, or worse the politicians are already in pocket just waiting for the opposition level to be at a minimum. The inevitability vibe is growing.
19
sexydefinesher 5 days ago 0 replies      
*the American internet

Meanwhile the EU already has laws for Net Neutrality (though zero-rating is still allowed).

20
polskibus 5 days ago 0 replies      
That's a great illustration of what happens when you let the market be owned by only a few entities. Long ago there were more; with time, centralization happened, and now you have to bow to the survivors.

A similar situation, at an earlier stage, can be observed on the cloud horizon - see Google, AMZN, MS, and maybe FB. They own so much traffic, mindshare, and sales power that, while in theory they are not monopolies, together their policies and the trends they generate shape the internet world.

I'm not saying this current situation with Verizon et al is OK, just saying that if you intend to fix it, consider addressing the next centralization that is still happening.

21
mnm1 5 days ago 1 reply      
Are Google, FB, Amazon, and others actually supporting this and if so, how? I don't see anything on their sites about this. As far as I'm concerned, they're not doing anything to support this. And of course, why would they?
22
_eht 5 days ago 4 replies      
All I can find are arguments for neutrality, it seems like a very vocal crowd full of businesses who currently make a lot of money from people on the internet (reddit, Facebook, et al).

Anyone want to share resources, or their pro-priority-internet stance?

23
pycal 5 days ago 1 reply      
There's truth in Ajit's comment that Americans' internet infrastructure just isn't as good as other countries'. Is that because of the regulatory climate? The ISPs receive a lease on the public spectrum; are they expected to meet a minimum level of service quality?

According to this source, the US rates low in many categories of internet access, e.g. the percentage of people with over 4 Mbit/s, and average bandwidth:

https://www.fastmetrics.com/internet-connection-speed-by-cou...

24
callinyouin 5 days ago 0 replies      
I hope I'm not alone in saying that if we lose net neutrality I'll be looking to help organize and set up a locally owned/operated ISP in my area.
25
sergiotapia 5 days ago 1 reply      
"Net neutrality" sounds good but it's just more and more laws to regulate and censor the internet via the FCC.
26
leesalminen 5 days ago 1 reply      
I'm currently unable to submit the form on https://www.battleforthenet.com/.

https://queue.fightforthefuture.org/action is returning HTTP 500.

27
AndyMcConachie 5 days ago 4 replies      
Just to be clear, this has nothing to do with the Internet, and everything to do with the USA. Most Internet users can't be affected by stupid actions of the FCC.

I guess I'm just a little annoyed that Americans think their Internet experience somehow represents 'the' Internet experience.

28
mbonzo 5 days ago 0 replies      
Ah, seems like this battle is just a part of the bigger war that is the ugly side of capitalism. The top companies that Millennials are raving about are threatening old companies, and as a result those old companies are making a pact to bring their rivals down.

Examples include Airbnb; the business is now being banned by cities like New York city, New Orleans, Santa Monica and countless others. Another is Uber; it's banned in Texas, Alaska, Oregon (except Portland), and more. Now it's our beloved, favorite websites that are being targeted by Internet Providers.

Who do you think will win this war?

29
steve_taylor 5 days ago 0 replies      
This website gives me the impression that this is the latest cause that the left has repurposed as something they can beat us over the head with. The video thumbnails look like a gallery of the who's who of the left. This is disappointing, because this is an issue for all of us. People are sick and tired of the left beating them over the head with various causes and tend to rebel against them regardless of their merit. We shouldn't lose our internet freedoms over a petty culture war that has nothing to do with this.
30
scott_s 5 days ago 0 replies      
I feel like this site is missing context - what recent events have caused all of these organizations to protest? I found this NY Times article gave me a better idea of this context: "F.C.C. Chairman Pushes Sweeping Changes to Net Neutrality Rules", https://www.nytimes.com/2017/04/26/technology/net-neutrality...
31
untangle 5 days ago 0 replies      
I wouldn't care so much about net neutrality if there were open access to the last-mile conduit for broadband to my house. But there isn't. Comcast owns the coax, and there is no fiber here (even though I live in the heart of Silicon Valley).

Comcast is conflicted on topics from content to voice service. So neutering net neutrality is tantamount to deregulating a monopoly. That doesn't sound smart to me.

32
forgottenpass 5 days ago 0 replies      
Let me see if I have this right. The complaint against ISPs goes like this:

They established themselves as dominant middlemen and want to leverage that position to enable new revenue streams by putting their nose where it doesn't belong.

I'd have much more sympathy for belly-aching tech companies if they weren't all doing (or trying to do) the same goddamn thing.

33
joekrill 5 days ago 1 reply      
Is this form broken for anyone else? I'm getting CORS errors when it tries to submit to https://queue.fightforthefuture.org/action. That seems like a pretty big blunder, so I'm guessing maybe the site is just under heavy load?
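
For anyone curious what that failure looks like from the page's side, here is a rough sketch, assuming the form does a JSON POST (the endpoint is the one named in the comment; the real payload and headers may differ):

    // Sketch of how a CORS failure surfaces in the browser.
    fetch('https://queue.fightforthefuture.org/action', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ comment: '...' }),  // hypothetical payload
    })
      .then((res) => res.json())
      .catch((err) => {
        // A missing Access-Control-Allow-Origin header, or a 500 on the
        // preflight request, is reported here as an opaque TypeError; the
        // browser hides the details from scripts for security reasons.
        console.error('Request blocked or failed:', err);
      });

Either explanation (misconfigured CORS headers or a server buckling under load) produces the same opaque error, which is why the two are hard to tell apart from the client.
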
34
web007 5 days ago 0 replies      
I can't help but feel like the site would be more effective if they removed https://www.battleforthenet.com/how-we-won/ - "we" didn't win, we just got a short reprieve from losing.
35
coryfklein 5 days ago 0 replies      
Is anybody else fatigued by this "battle"? I have historically spent time and effort supporting net neutrality, but it seems to rear its head again every 6 months.

It seems inevitable now that these big-budget companies with great incentive will get their way.

36
ShirsenduK 5 days ago 1 reply      
In my hometown, Darjeeling (India), the Internet has been blocked by the government since June 17, 2017, to censor the citizens of the area. The media doesn't cover us well, as it's a small town. How do we battle for the Internet? How can we drum up support?
37
gricardo99 5 days ago 0 replies      
Perhaps the battle is already being lost.

Anyone else getting this error trying to send a letter from the site?

"! There was an error submitting the form, please try again"

http://imgur.com/a/V60gh

38
Flemlord 5 days ago 1 reply      
reddit (US Alexa rank = 4) is showing a popup to all users that sends them to www.battleforthenet.com. It is about time a major player started leveraging their platform. Why aren't Google, HN, etc. doing this too?
39
mvanveen 5 days ago 0 replies      
Please also check out and share the video recording flow we've built at https://video.battleforthenet.com
40
acjohnson55 5 days ago 0 replies      
I assume this is why the title bar has changed? Curious, since there doesn't seem to be a definitive statement about that.
41
stephenitis 5 days ago 0 replies      
Not surprising... Yahoo.com, which was bought by Verizon, has no mention of net neutrality.

HEADLINE: Kim Kardashian Goes Braless in Tank Top With Gym Shorts and Heels: See the Unusual Look!

Trending Now: 1. Chris Hemsworth 2. Wiz Khalifa 3. John McCain 4. Joanna Krupa 5. Sean Hannity 6. Universal Studios 7. McGregor suit 8. Maps 9. Loan Depot 10. Justin Bourne

Imagine if this were the fastest homepage for millions of Verizon customers. *head explodes*

edit: They are at least highlighting the FBI Director hearing on the homepage. *shrug*

42
aabbcc1241 3 days ago 0 replies      
So much unhappy talk about the internet recently; I wonder why there isn't a startup doing network service on top of NDN and IPFS for a better network.
43
elorant 5 days ago 0 replies      
Is there anything we who don't live in the US can do for you guys? I mean, beyond spreading the word.
44
wjdp 3 days ago 0 replies      
Is there anything those outside the US can do?
45
dec0dedab0de 5 days ago 0 replies      
I wish ARIN, and IANA would just blacklist any companies that actively work against net neutrality.
46
tboyd47 5 days ago 0 replies      
Tried submitting the form without a phone number, got an error.
47
sharemywin 5 days ago 0 replies      
I'm all about the net neutrality.

But while we're at it how about some hardware neutrality.

And some data portability and control over who sees my information.

And maybe an API neutrality.

And how about letting the municipalities offer free wifi.

48
OJFord 5 days ago 0 replies      
Slightly tangentially, it seems that today the only way to get to the front page, other than the 'back' button if applicable, is to edit the URL?
49
dep_b 5 days ago 0 replies      
Interestingly all this kind of stuff seems to happen in the 1984th year since Jesus died.
50
shortnamed 5 days ago 11 replies      
Love the blatant Americentrism of the site:

"This is a battle for the Internet's future." - just American internet's future

"Team Cable want to control & tax the Internet" - they will be able to control the global system in which the US is just a part of?

"If we lose net neutrality, the Internet will never be the same" - American internet, others will be fine

51
dis-sys 5 days ago 5 replies      
Last time I checked, there are 730 million Chinese users who mostly don't use _any_ US Internet services; their switches and servers are made/operated in China, mostly by Huawei and ZTE. It is also laughable to believe that US domestic policies are going to affect Chinese decision-making.

Policy leader? Not after we Chinese declared independence from the monopoly of your US "lead" on Internet.

52
throwaway2048 5 days ago 1 reply      
Strange that most of the top-level comments are arguing against either net neutrality or this campaign.

On a site that is otherwise extremely strongly for net neutrality.

Nothing suspicious about that...

3
Apache Foundation disallows use of the Facebook BSD+Patent license apache.org
1235 points by thelarkinn  2 days ago   343 comments top 52
1
numair 1 day ago 20 replies      
Finally, people are beginning to realize the insanity of this entire PATENTS file situation!

When I first brought up how misguided people were for embracing React and projects with this license, I was downvoted to hell on HN. But really, everyone, THINK ABOUT IT. This is a company that glorifies and celebrates IP theft from others, and lionizes employees who successfully clone others' projects. They've built their entire business on the back of open source software that wasn't ever encumbered with the sort of nonsense they've attached to their own projects. And this industry is just going to let them have it, because the stuff they are putting out is shiny and convenient and free?

Having known so many people involved with Facebook for so long, I have come up with a phrase to describe the cultural phenomenon I've witnessed among them: "ladder kicking". Basically, people who get a leg up from others, and then do everything in their power to ensure nobody else manages to get there. No, it's not "human nature" or "how it works". Silicon Valley and the tech industry at large weren't built by these sorts of people, and we need to be more active in preventing this mind-virus from spreading.

By the way, the fact that Facebook is using this on their mostly-derivative nonsense isn't what should concern you. It's that Google has decided, as a defensive measure, to copy Facebook's move. Take a look at the code repo for Fuchsia and you'll see what I mean. Imagine if using Kubernetes meant you could never sue Google?

2
erichocean 1 day ago 0 replies      
To anyone concerned about React's virtual DOM and diff'ing stuff, and a potential Facebook patent thereof: in early 2012 I wrote and published (under a GPLv3 license) a virtual DOM implementation with efficient diff'ing when I forked SproutCore[0] to become Blossom.[1]

So even if Facebook tries to patent that particular invention/innovation, it may not stand up to legal scrutiny depending on the filing date. AFAIK, Facebook didn't do a provisional patent for virtual DOM stuff before July 2012 (long after I released Blossom), because that patent filing would have become public AT THE LATEST on January 1st, 2016 and nothing has come to light that I'm aware of.

So you should be safe (IANAL).

[0] Ironically, given the subsequent popularity of React, the SproutCore team rejected my virtual DOM approach, which is why I had to fork it. Live and learn. I actually came up with the specific virtual DOM + diff design in spring 2008, but didn't get around to writing the code for it until someone paid me to do it (I had asked Apple and they declined). Eventually, the copyright owner of SproutCore (Strobe, Inc.) got bought by Facebook, though I don't recall when.

[1] https://github.com/erichocean/blossom
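
For readers who want to see what "virtual DOM + diffing" means in practice, here is a minimal sketch of the general technique. It is hypothetical illustration code, not Blossom's or React's actual implementation, and it ignores props/attributes and keyed reordering:

    // A virtual node: a plain object describing the desired DOM subtree.
    const h = (tag, children = []) => ({ tag, children });

    // Build a real DOM node from a virtual one.
    function render(vnode) {
      if (typeof vnode === 'string') return document.createTextNode(vnode);
      const el = document.createElement(vnode.tag);
      vnode.children.forEach((c) => el.appendChild(render(c)));
      return el;
    }

    // True when a node must be replaced outright rather than patched in place.
    const changed = (a, b) =>
      typeof a !== typeof b ||
      (typeof a === 'string' && a !== b) ||
      a.tag !== b.tag;

    // Walk the old and new trees together, touching the real DOM only where
    // they differ -- this diff step is the heart of the technique.
    function patch(parent, oldV, newV, index = 0) {
      const el = parent.childNodes[index];
      if (changed(oldV, newV)) {
        parent.replaceChild(render(newV), el);
      } else if (typeof newV !== 'string') {
        const shared = Math.min(oldV.children.length, newV.children.length);
        for (let i = 0; i < shared; i++) {
          patch(el, oldV.children[i], newV.children[i], i);
        }
        // Append children only the new tree has, in order...
        for (let i = shared; i < newV.children.length; i++) {
          el.appendChild(render(newV.children[i]));
        }
        // ...and drop surplus old children from the end so indices stay valid.
        for (let i = oldV.children.length - 1; i >= shared; i--) {
          el.removeChild(el.childNodes[i]);
        }
      }
    }

    // Usage: re-render by diffing the previous tree against the next one.
    const v1 = h('ul', [h('li', ['one']), h('li', ['two'])]);
    const v2 = h('ul', [h('li', ['one']), h('li', ['2']), h('li', ['three'])]);
    const root = document.getElementById('root'); // assumes an empty #root element
    root.appendChild(render(v1));
    patch(root, v1, v2); // only the changed text node and the new <li> are touched

The patent worry is that any library using this update strategy could be covered regardless of who wrote the code, which is why prior art like Blossom matters.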

3
softinio 2 days ago 5 replies      
I really have no idea why React is so popular with such a silly license.

I agree with this move.

There are plenty of OSS projects out there without this patent thing attached to the license, so there is no reason to use React.

4
clarkevans 1 day ago 0 replies      
I'm not a lawyer, but perhaps Facebook's BSD+Patent license is not even open source.

It's tempting to consider the BSD license independently of the additional patent license. However, the OSI has not approved CC0 as being open source precisely because it expressly reserves patent rights [0]. In the OSI's justification, permissive MIT and BSD licenses may provide an implicit patent license, so by themselves they are open source. However, like CC0, BSD+Patents expressly excludes this possibility. Indeed, Facebook's licensing FAQ deems the combined work of the BSD+Patents files to be the license [1]. Further, a recent court case has shown that these licenses are not simply copyright or patent statements, but can be actual contracts [2].

Hence, we have to consider the BSD text + the PATENTS file text as the combined license. This license is not symmetric and hence may violate OSI license standards. I've made this comment in the Facebook bug report: https://github.com/facebook/react/issues/10191

[0] https://opensource.org/faq#cc-zero
[1] https://code.facebook.com/pages/850928938376556
[2] https://perens.com/blog/2017/05/28/understanding-the-gpl-is-...

5
captainmuon 1 day ago 3 replies      
I think this is an overreaction (pun accidental).

There are two things here: The copyright license, and the patent grant. Copyright applies to the concrete implementation. You have to agree to the license to be subject to it, and to legally use the code.

A potential patent applies to any implementation. Even if you write a clean-room clone of React, if it uses the same patent, Facebook has a patent claim. But that means the patent grant is not specific to the code. It doesn't even require consent: Facebook could allow you conditional patent usage even without your knowledge! A corollary is that you are strictly better off with the patent grant; it imposes no additional constraints on you.

License with no patent grant: Facebook can sue you for infringing patents, even if you are using a clone!

License with patent grant: Facebook cannot sue you for infringing patents, unless you do it first.

----

Second, I think the philosophy behind the patent grant is that software patents are not legitimate. Enforcing a patent is not seen as a legitimate right, but as an annoyance, like pissing on someone's lawn. From that point of view, it seems not too much to ask of somebody to refrain from doing that. (I don't know if that was the idea of the people who drafted that license, but it wouldn't surprise me.)

----

Another, unrelated observation (and please don't invalidate the first observations if this one is wrong as internet commentators are wont to do):

I see nowhere in the license [1] that it requires you to take the patent grant. Is that true? It would be silly to refuse it, because you are strictly better off with it, of course.

[1] https://github.com/facebook/react/blob/master/LICENSE

6
fencepost 1 day ago 2 replies      
I was interested in React based on what I'd read and figured it'd be worth looking into, but this provides all the reason I need to avoid it. I don't foresee a situation where I would personally, or as a small company, be suing Facebook, but I could see developing something and then selling/trying to sell it to a larger company. If my code comes with a big side of "oh, and if you buy this you won't be able to sue Facebook or its affiliated companies for patent infringement", that could significantly hurt sales chances.
7
chx 2 days ago 2 replies      
RocksDB has fixed this https://github.com/facebook/rocksdb/commit/3c327ac2d0fd50bbd... now and moved to Apache / GPL dual license.
8
altotrees 1 day ago 0 replies      
I worked for a large company on several web-based apps right around the time React came out. There were some UI issues I thought could be sorted out easily using React.

After going to our lead dev, who in turn went to our project manager, we received an email from our legal department a few days later that simply stated we would not be using React due to "certain patent constraints."

Having not done any prior research, I looked into what the problem might be and was pretty floored with what I found. At first I scoffed when they said no, but after reading about the patent situation I totally get it.

9
jorgemf 2 days ago 2 replies      
Can someone explain how this can affect projects using React, whether as part of a company's product or in personal projects? Thanks.

I found this [1]:

> FB is not interested in pursuing the more draconian possibilities of the BSD+patents license. If that is true, there is actually very little difference between BSD+patents and the Apache license. As such, relicensing should make little if any pragmatic difference to Facebook.

So what happens if Facebook doesn't change the license and in the future changes its mind?

[1] https://github.com/facebook/react/issues/10191

10
j_s 1 day ago 0 replies      
So push finally comes to shove.

Glad the long-term legal implications will be given serious consideration publicly, rather than the "this is not the droid you're looking for" I've seen nearly everywhere so far!

11
rmgraham 2 hours ago 0 replies      
IANAL, but...

People should be aware that Atom (the editor) uses React internally, so it's possible you face similar legal exposure without even shipping anything just because you agreed to the terms by installing an editor.

12
rdtsc 1 day ago 3 replies      
I was wondering about a similar issue for the zstd compression library. It has a similar BSD+Patents file arrangement.

There is an issue with a related discussion about it going for more than a year:

https://github.com/facebook/zstd/issues/335

The last update is interesting. Someone searched for any patents FB had filed and couldn't find any in the last year. Based on that, they somehow decided things are "OK".

To quote:

---

US allows to patent up to a year after the publication, allowing to conclude that ZSTD remains free of patents (?) - suggesting this "The license granted hereunder will terminate, automatically and without notice (...)" from PATENTS file has no legal meaning (?)

---

Anyone care to validate how true that statement is?

13
learc83 1 day ago 0 replies      
If Facebook has patents that cover React functionality, they almost certainly cover parts of other JavaScript frameworks as well. React is well executed, but it's conceptually simple.

I don't think avoiding React makes you any safer. You don't know how broadly Facebook or the courts will interpret their patents.

14
plastroltech 10 hours ago 1 reply      
There is so much FUD around this PATENTS file.

If Facebook had not included this patent grant and had released React under only the BSD license, then any user would already be in exactly the situation everyone is complaining so loudly about being in IF they decide to bring a patent action against Facebook. Specifically, you would be open to being sued by Facebook for violating a patent which they own.

What this grant says is: for one specific circumstance (you haven't brought a patent suit against them) and for one specific, limited set of patents (those related to React), Facebook will not sue you. If you like that protection, then don't sue them. If you decide that you do want to bring a patent suit against them, then you're right back where you were to begin with: your one small, limited protection is removed, and Facebook can once again sue you if you violate one of their patents - just like they could before you started using React in the first place.

This business about it being asymmetrical is IMO a distraction. What would it mean for it to be symmetrical? That it would only trigger if you sue them for a patent violation related to React? What does that mean? You don't hold React patents, Facebook does. How would you sue them for a violation of a React patent? It makes no sense.

15
TheAceOfHearts 2 days ago 2 replies      
In the discussion they say RocksDB will be relicensed under dual license Apache 2 and GPL 2.

There's already an issue [0] asking them to consider doing something similar for react, and Dan Abramov said he'd route the request internally on the next work day.

I can't imagine they'd keep the existing license without harming their community image. But even if they keep the license, many applications should be able to easily migrate to preact [1] and preact-compat, which provides a react-compatible API.

Hopefully they relicense the project. It seems like it's the first thing that gets brought up each time react gets mentioned.

[0] https://github.com/facebook/react/issues/10191

[1] https://preactjs.com

16
tomelders 1 day ago 2 replies      
Ok. With the BSD + patent grant:

Do you have a license to use Facebook's patents? Yes.

Do you have a license to use Facebook's patents if Facebook brings a patent case against you? Yes.

Do you have a license to use Facebook's patents if you bring a patent case against them? No.

If you do not have a patent grant, can you still use React? YES!

If you're going to downvote this, please say why. This is how I interpret the license plus patent grant. If I'm wrong, I'd like to know why.

17
tomelders 1 day ago 2 replies      
IANAL - but this seems strange to me. The Apache license has what I see as the same patent grant, with the same condition that if you make a claim against them, you lose the patent grant.

Apache 2.0

> 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

The important bit being...

> If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

But what people seem to be missing (as far as I can tell) is that you don't lose the licence to use the software; you just lose the patent grants. But with the BSD licence alone, you lose both the patent grant AND the licence. I really don't see how the Apache 2.0 License and Facebook's BSD+Patent Grant are any different.

18
mixedbit 1 day ago 1 reply      
Does this mean that a startup that uses React becomes un-buyable for any company that sells or plans to sell patent rights to Facebook?
19
ec109685 1 day ago 0 replies      
I don't understand how code with a BSD license and no patent grant is better for the Apache foundation than Facebook's BSD + Patents license. With the former, the entity donating the source can sue you for patent infringement at any time.

Clearly the Apache 2 license would be preferable (and is what RocksDB did), but I am puzzled that the foundation accepts BSD code in their products, given their worry about patents.

20
vbernat 1 day ago 2 replies      
How was Facebook able to change the license of RocksDB so easily? The CLA is not a copyright assignment, and therefore all contributors have to agree to the change. Did they contact everyone who had signed the CLA?
21
vxNsr 1 day ago 0 replies      
>> David Recordon - Added Yesterday 21:10
>
> Hi all, wanted to jump in here to let everyone know that the RocksDB team is adjusting the licensing such that it will be dual-licensed under the Apache 2 and GPL 2 (for MySQL compatibility) licenses. This should happen shortly and well ahead of August 31st. I'll leave the history and philosophy around licensing alone since it's generally a complex discussion to have and I'm not sure that it has actually been fully captured in this thread, especially vis-a-vis Facebook's intent. Hopefully this morning's guidance to PMCs can be adjusted since I don't think any of us see a bunch of extra engineering effort as a desirable thing across the ASF projects which are already making use of RocksDB.
>
> Thanks,
> --David

Looks like they're working on remedying this issue. It could very well be a case of legal getting involved and the regular engineers not realizing the change, or simply not paying attention. Alternatively, maybe this is just crisis management and they were hoping this wouldn't blow up.

22
Xeoncross 2 days ago 1 reply      
> Does the additional patent grant in the Facebook BSD+Patents license terminate if I create a competing product?

> No.

> Does the additional patent grant in the Facebook BSD+Patents license terminate if I sue Facebook for something other than patent infringement?

> No.

https://code.facebook.com/pages/850928938376556

Consider re-licensing to AL v2.0: https://github.com/facebook/react/issues/10191

23
issa 1 day ago 0 replies      
I rejected using React in a project for just this reason. I'll be perfectly honest: I didn't (and still don't) completely understand the implications, but on its face it sounds like trouble.
24
Communitivity 11 hours ago 0 replies      
Thanks to David Recordon this has now been fixed, and RocksDB is dual-licensed under Apache or GPL 2. The ball is now rolling to get the same change made for React, which AFAIK is still under the old BSD+Patents license. Please click through the OP's link for the current details.
25
geofft 2 days ago 2 replies      
Am I reading this right that Apache's unwillingness to use rocksdb under the custom license pressured Facebook into switching to Apache || GPLv2? That is pretty cool!
26
isaac_is_goat 1 day ago 2 replies      
If you can't use React because of the license, use InfernoJS (MIT). In fact... you should probably use InfernoJS anyway.
27
didibus 1 day ago 1 reply      
So if I understand correctly, by using React you agree that if you sue Facebook, you'll need to stop using React? And that holds no matter what you're suing them for?

So say Facebook was infringing on one of your patents: you could still sue them, but you'd have to give up React if you did. Is that correct?

28
nsser 1 day ago 1 reply      
Facebook's open source projects are potential trojan horses.
29
jzelinskie 1 day ago 1 reply      
As the other comments say, RocksDB is going to be dual-licensed under both GPLv2 and Apache. What's the advantage of doing so? If I chose to consume the library via the Apache license, I'd never have to contribute back code; doesn't this invalidate the copyleft of GPLv2?
30
gedy 1 day ago 1 reply      
This keeps coming up as a concern, with back-and-forth "it's a big deal"/"it's not a big deal" - so if FB has no ill intent here, are there any simple, obvious changes they could/should make to the React license?
31
maxsavin 1 day ago 1 reply      
And now, for a tinfoil hat moment: even though companies like Microsoft and Google are using Facebook's libraries, it is possible that they have some kind of private deal in regards to the patents clause.
32
snarfy 1 day ago 0 replies      
Is it any patent, or only patents pertaining to the "Software" as defined in the license (React)?

I cannot sue Facebook over patents in React without losing my React license, but could I sue over some other patent I own, e.g. fizzbuzz, that Facebook is violating? Or is it any patent?

If it is any patent, I cannot believe that was the intent, even if that's how the Apache Foundation is interpreting it.

33
hellbanner 1 day ago 0 replies      
What will Facebook have to do now - either open-source their infringing software or rewrite it convincingly? (Wasn't there an Oracle vs. Google case about duplicate implementations of a function?)
34
alecco 1 day ago 0 replies      
This is the major drawback of adoption for Zstandard, too.
35
Steeeve 1 day ago 0 replies      
Interesting note from the discussion:

> As noted earlier, the availability of a GPLv2 license does not preclude inclusion by an ASF project. If patent rights are not conferred by the 3-clause BSD and required by ASLv2 then would not these licenses be incompatible?

---

> he has discussed the matter with FB's counsel and the word is that the FB license is intentionally incompatible. It is hard to make the argument that it is compatible after hearing that.

36
ouid 1 day ago 0 replies      
If your landlord does something that they would normally be within their rights to do, but does it in retaliation for you enforcing some other provision of your agreement, then it is illegal. I'd bet that this is not the kind of statute they could override in a lease agreement.

I wonder if such a clause is actually enforceable. Are there any actual cases where this clause was invoked?

37
brooklyntribe 1 day ago 1 reply      
I'm a vue.js guy myself. I think it's far cooler than React, and it's never going to face this issue, methinks.

:-)

38
0xbear 1 day ago 1 reply      
Edit: Caffe2 is similarly afflicted. Torch and PyTorch are not. Some Torch modules are, however.
39
GoodInvestor 1 day ago 0 replies      
Luckily, React itself is no longer unique, and projects can use the ideas popularized by React via liberally licensed alternatives such as Preact or Inferno.
40
didibus 1 day ago 0 replies      
What patents does React depend on?
41
vbernat 1 day ago 1 reply      
Reading a bit more of the thread, it's quite surprising. The assignee is Chris Mattmann; from his webpage, he is not a legal counsel. The only evidence of a problem they present is that BSD alone implies a patent grant, but coupled with an explicit patent grant this is no longer the case. The other evidence is brought by Roy Fielding, who does not appear to be a legal counsel either: a discussion (oral?) with Facebook's legal counsel in which the license was said to be intentionally incompatible with ASLv2.

The whole decision seems to have been taken by "not-a-lawyer" people with their own interpretations. Doesn't the Apache Foundation have software lawyers they can ask?

42
hordeallergy 1 day ago 0 replies      
When has Facebook ever demonstrated any integrity? Why anyone chooses to have any association with them is inexplicable to me.
43
shams93 1 day ago 0 replies      
Do third-party implementations like Inferno or Preact have legal issues from being based on Facebook intellectual property?
44
erichocean 1 day ago 0 replies      
Well, now I can use CockroachDB, so that's nice. :)
45
Kiro 1 day ago 2 replies      
Why would I care about patents if I'm outside the US? Software patents are not even valid where I live.
46
flushandforget 1 day ago 1 reply      
Can someone please summarize the implications of this? It's hard for me to understand.
47
luord 1 day ago 0 replies      
I've never liked React (and not only because of the sketchy licensing; in fact, that's fairly low among my qualms about React), so it's nice to see validation.

I'm sticking with Vue, even if (and that's a big if) it might also infringe Facebook patents.

48
anon335dtzbvc 1 day ago 1 reply      
I've made a pull request to fix that situation https://github.com/facebook/react/pull/10195
49
guelo 1 day ago 0 replies      
I hate Facebook, but I also hate patents, so I like this license and wish more projects would use it. Lawsuits impede progress and damage the economy, and no matter what the laws are, curious, smart people will always invent.
50
weego 1 day ago 0 replies      
I'm sorry, but I struggle to take seriously anyone here who thinks that having to stop using a specific JS framework amounts to being legally neutered in a patent litigation case.
51
BucketSort 1 day ago 0 replies      
Ding ding. Vue.JS wins.
52
known 1 day ago 0 replies      
Sounds rational
4
How Discord Scaled Elixir to 5M Concurrent Users discordapp.com
798 points by b1naryth1ef  6 days ago   251 comments top 40
1
iagooar 6 days ago 7 replies      
This writeup makes me even more convinced that Elixir will become one of the big players when it comes to hugely scalable applications.

If there is one thing I truly love about Elixir, it is the ease of getting started while standing on the shoulders of the giant that is the Erlang VM. You can start by building a simple, not very demanding application with it, yet once you hit a large scale, there are plenty of battle-proven tools to save you massive headaches and costly rewrites.

Still, I feel that using Elixir is, today, a large bet. You need to convince your colleagues as much as your bosses / customers to take the risk. But you can rest assured it will not fail you when you need to push it to the next level.

Nothing comes for free, and at the right scale even the Erlang VM is not a silver bullet; it will require your engineering team to invest their talent, time, and effort to fine-tune it. Yet once you dig deep enough into it, you'll find plenty of ways to solve your problem at a lower cost compared to other solutions.

I see a bright future for Elixir, and a breath of fresh air for Erlang. It's such a great time to be alive!

2
jakebasile 6 days ago 4 replies      
I'm continually impressed with Discord, and their technical blogs contribute to my respect for them. I use it in both my personal life (I run a small server for online friends, plus large game-centric servers) and my professional life (instead of Slack). It's a delight to use: the voice chat is extremely high quality, text chat is fast and searchable, and notifications actually work. Discord has become the de facto place for many gaming communities to organize, which is a big deal considering how discriminating and exacting PC gamers can be.

My only concern is their long-term viability, and I don't just mean money-wise. I'm concerned they'll have to sacrifice the user experience to either achieve sustainability or consent to a buyout by a larger company that only wants the users and brand. I hope I'm wrong, and I bought a year of Nitro to do my part.

3
Cieplak 6 days ago 8 replies      
I know that the JVM is a modern marvel of software engineering, so I'm always surprised when my Erlang apps consume less than 10MB of RAM, start up nearly instantaneously, respond to HTTP requests in less than 10ms, and run forever, while my Java apps take 2 minutes to start up, have several-hundred-millisecond HTTP response latency, and hoard memory. Granted, it's more an issue with Spring than with Java, and Parallel Universe's Quasar is basically OTP for Java, so I know logically that Java is basically a superset of Erlang at this point, but perhaps there's an element of "less is more" going on here.

Also, we're looking for Erlang folks with payments experience.

cGF0cmljaytobkBmaW5peHBheW1lbnRzLmNvbQ==

4
rdtsc 6 days ago 3 replies      
Good stuff. Erlang VM FTW!

> mochiglobal, a module that exploits a feature of the VM: if Erlang sees a function that always returns the same constant data, it puts that data into a read-only shared heap that processes can access without copying the data

There is a nice new OTP 20.0 optimization - now the value doesn't get copied even on message sends on the local node.

Jesper L. Andersen (jlouis) talked about it in his blog: https://medium.com/@jlouis666/an-erlang-otp-20-0-optimizatio...

> After some research we stumbled upon :ets.update_counter/4

Might not help in this case, but 20.0 adds select_replace, so you can do a full-on CAS (compare-and-swap) pattern: http://erlang.org/doc/man/ets.html#select_replace-2 . So something like acquiring a lock would be much easier to do.

> We found that the wall clock time of a single send/2 call could range from 30µs to 70µs due to Erlang de-scheduling the calling process.

There are few tricks the VM uses there and it's pretty configurable.

For example, sending to a process with a long message queue will apply a bit of backpressure to the sender and de-schedule it.

There are tons of configuration settings for the scheduler. There is an option to bind schedulers to physical cores to reduce the chance of scheduler threads jumping around between cores: http://erlang.org/doc/man/erl.html#+sbt Sometimes it helps, sometimes it doesn't.

Another general trick is to build the VM with the lcnt feature. This adds performance counters for locks / semaphores in the VM, so you can then check for hotspots and know where to optimize:

http://erlang.org/doc/man/lcnt.html

5
mbesto 6 days ago 1 reply      
This is one of those few instances where getting the technology choice right actually has an impact on the cost of operations, service reliability, and the overall experience of a product. In like 80% of all other cases, it doesn't matter what you use as long as your devs are comfortable with it.
6
jlouis 6 days ago 1 reply      
A fun idea is to do away with the "guild" servers in the architecture and simply run message passes from the websocket process over the Manifold system. A little bit of ETS work should make this doable, and then an eager sending process pays for the work itself, slowing it down. This is exactly the behavior you want. If you are a bit more sinister, you also format most of the message in the sending process and make it into a binary. This ensures data is passed by reference and not copied in the system. It ought to bring message sends down to about funcall overhead if done right.

It is probably not a solution for current Discord as they rely on linearizability, but I toyed with building an IRCd in Erlang years ago, and there we managed to avoid having a process per channel in the system via the above trick.

As for the "hoops you have to jump through", this is usually true in any language. When a system experiences pressure, how easy it is to deal with that pressure is usually what matters. Other languages are "phase shifts": while certain things become simpler in that language, other things become much harder to pull off.

7
danso 6 days ago 1 reply      
According to Wikipedia, Discord's initial release was March 2015. Elixir hit 1.0 in September 2014 [0]. That's impressively early for adoption of a language for prototyping and for production.

[0] https://github.com/elixir-lang/elixir/releases/tag/v1.0.0

8
didibus 6 days ago 5 replies      
So, at this point, every language has been scaled to very high concurrent loads. What does that tell us? Sounds to me like languages don't matter for scale. In fact, that makes sense: scale is all about parallel processes, and horizontally distributing work can be achieved in any language. Scale is not like performance, where if you need it, you are restricted to only a few languages.

That's why I'd like to hear more about productivity and ease now. Is it faster and more fun to scale things in certain languages than others? BEAM is modeled on actors and offers no alternatives. Java offers all sorts of models, including actors, but if actors are currently the most fun and productive way to scale, that doesn't matter.

Anyway, learning how the team scaled is interesting, but it's clear to me now that languages aren't the limiting factor for scale.

9
jmcgough 6 days ago 0 replies      
Great to see more posts like this promoting Elixir. I've been really enjoying the language and how much power it gets from BEAM.

Hopefully more companies see success stories like this and take the plunge - I'm working on an Elixir project right now at my startup and am loving it.

10
ShaneWilton 6 days ago 1 reply      
Thanks for putting this writeup together! I use Elixir and Erlang every day at work, and the Discord blog has been incredibly useful in terms of pointing me towards the right tooling when I run into a weird performance bottleneck.

FastGlobal in particular looks like it nicely solves a problem I've manually had to work around in the past. I'll probably be pulling that into our codebase soon.

11
joonoro 6 days ago 1 reply      
Elixir was one of the reasons I started using Discord in the first place. I figured if they were smart enough to use Elixir for a program like this then they would probably have a bright future ahead of them.

In practice, Discord hasn't been completely reliable for my group. Lately messages have been dropping out or being sent multiple times. Voice gets messed up (robot voice) at least a couple times per week and we have to switch servers to make it work again. A few times a person's voice connection has stopped working completely for several minutes and there's nothing we can do about it.

I don't know if these problems have anything to do with the Elixir backend or the server.

EDIT: Grammar

12
majidazimi 5 days ago 3 replies      
It seems awkward to me. What if the Erlang/OTP team cannot guarantee message serialization compatibility across a major release? How are you going to upgrade a cluster one node at a time? What if you want to communicate with other platforms? How are you going to modify the distribution protocol on a running cluster without downtime?

As soon as you introduce a standard message format, all the nice features such as built-in distribution, automatic reconnect, ... become almost useless. You have to do all of this manually. Maybe I'm missing something; correct me if I'm wrong.

For fast time to market it seems a quite nice approach. But for a long-running, maintainable back-end it's not enough.

13
ConanRus 6 days ago 1 reply      
I don't see anything Elixir-specific there; it's all basically Erlang / Erlang VM / OTP stuff. When you use Erlang, you think in terms of actors/processes and message passing, and this is (IMHO) a natural way of thinking about distributed systems. So this article is a perfect example of how simple solutions can solve scalability issues if you're using the right platform.
14
oldpond 1 day ago 0 replies      
When have you ever read, "How Acme scaled J2EE to 5M Concurrent Users"? I became an IT architect at IBM in 1998, the year Sun released J2EE and IBM released WebSphere. I have experienced 20 years of enterprise Java and object-oriented computing, and I was thrilled when Elixir came out. I was a mainframe programmer before OO became all the rage, so I never really felt at home doing objects. Functional programming feels completely natural to me, though.

What I like about this article is that they shared everything they learned with the community. Thank you for that excellent experience report.

15
_ar7 6 days ago 0 replies      
Really liked the blog post. Elixir and the capabilities of the BEAM VM seem really awesome, but I can't find an excuse to use them anywhere in my day-to-day.
16
StreamBright 5 days ago 0 replies      
WhatsApp's story is somewhat similar. Relevant reading on this subject:

http://www.erlang-factory.com/upload/presentations/558/efsf2...

17
brian_herman 6 days ago 0 replies      
I love discord's posts they are very informative and easy to read.
18
OOPMan 5 days ago 1 reply      
5 million concurrent users is great and all, but it would be nice if Discord could work out how to use WebSockets without duplicating sent messages.

This seems to happen a lot when you are switching between wireless networks (e.g. my home router has 2GHz and 5GHz wireless networks) or when you're on mobile (it seems to happen regularly, even if you're not moving around).

It's terribly annoying, though, and makes using the app via the mobile client very tedious.

19
renaudg 5 days ago 1 reply      
It looks like they have built an interesting, robust, and scalable system which is perfectly tailored to their needs.

If one didn't want to build all of that in-house, though, is there anything they've described here that an off-the-shelf system like https://socketcluster.io doesn't provide?

20
etblg 5 days ago 0 replies      
Reading posts like this about widely distributed applications always gets me interested in it as a career path. Currently I'm working as a front-end dev with moderate non-distributed back-end experience. How would someone in my situation, with no distributed back-end experience, break into a position working on something like Discord?
21
sriram_malhar 5 days ago 1 reply      
I really like Elixir the language, but I find myself strangely hamstrung by the _mix_ tool. There is only an introduction, not a reference to all of its bells and whistles. I'm not looking for extra bells and whistles, just simple stuff like pulling in a module from GitHub and incorporating it. Is there such documentation? How do you crack Mix?
22
omeid2 5 days ago 0 replies      
I think that while this is great, it is good to remember that your current tech stack may be just fine! After all, Discord started with MongoDB [0].

[0] https://blog.discordapp.com/how-discord-stores-billions-of-m...

23
alberth 6 days ago 2 replies      
Is there any update on BEAMJIT?

It was super promising 3 or so years ago. But I haven't seen an update.

Erlang is amazing in numerous ways but raw performance is not one of them. BEAMJIT is a project to address exactly that.

https://www.sics.se/projects/beamjit

24
ramchip 6 days ago 1 reply      
Very interesting article! One thing I'm curious about is how you ensure a given guild's process only runs on one node at a time, and that the ring is consistent between nodes.

Do you use an external system like zookeeper? Or do you have very reliable networking and consider netsplits a tolerable risk?

25
andy_ppp 5 days ago 1 reply      
Just as an aside: how would people build something like this if they were to use, say, Python, and try to scale to these sorts of user levels? Has anyone succeeded? I'd say it would be quite a struggle without some seriously clever work!
26
myth_drannon 6 days ago 1 reply      
It's interesting how on StackOverflow Jobs Elixir knowledge is required more often than Erlang.

http://www.reallyhyped.com/?keywords=erlang%2Celixir

27
neya 5 days ago 0 replies      
Hi community, let me share my experience with you. I'm a hardcore Rails guy and I've been advocating and teaching Rails to the community for years.

My workflow for trying out a new language involves using the language for a small side project and gradually would try to scale it up. So, here's my summary, my experience of all the languages so far:

Scala - It's a vast, academic language (the official book is ~700 pages) with multiple ways of doing things, and its attractiveness for me was the JVM: proven, robust, and highly scalable. However, the language was not easy to understand, and the frameworks I tried (Play 2, Lift) weren't easy to transition to for a Rails developer like me.

Nevertheless, I did build a simple calendar application, but it took me 2 months to learn the language and build it.

GoLang - This was my next bet, although I didn't give up on Scala completely (I know it has its uses); I wanted something simple. I used Go and had the same experience as when I used C++. It's a fine language but, for a simple language, I had to fight a lot with configuration to get it working for me. (For example, it has this crazy concept of GOPATH, where your project should reside; if your project isn't there, it'll keep complaining.) Nevertheless, I built my own (simple) Rails clone in Go and realized this wasn't what I was looking for. It took me about a month to conquer the language and build my (simple) side project.

Elixir - Finally, I heard of Elixir on multiple HN Rails release threads and decided to give it a go. I started off with Phoenix. The transition from Rails was definitely way smoother, especially considering that a founding member of this language was a Rails dev himself (the author of the "devise" gem). At first some concepts seemed different (like piping - see the sketch below), but once I got used to them, for me there was no looking back.

All was fine until they released Phoenix 1.3, where they introduced the concept of contexts and (re)introduced umbrella applications. Basically, they encourage you to break your application into smaller applications by business function (similar to microservices), except that you can do this however you like (unopinionated). For example, I broke down my application by business units (Finance, Marketing, etc.). This forced me to rethink my application in a way I never would have thought of, and by this time I had finished reading all 3 popular books on this topic (Domain-Driven Design). I love how Elixir's design choices are really well suited to DDD. If you're new to DDD, I suggest you give it a shot; it really can force you to rethink the way you develop software.

Within two weeks of being introduced to Elixir, I had picked up the language. In a month and a half, I built a complete Salesforce clone working just on the weekends. And this includes even the UI. And I love how my application is always blazing fast, picks up errors even before it compiles, and warns me if I'm not using a variable I defined somewhere.

P.S. There IS a small learning curve involved if you're starting out fresh:

1) If you're used to the Rails asset pipeline, you'll need to learn some new tools like Brunch / Webpack / etc.

2) Understand contexts & DDD (optional) if you want to better architect your application.

3) There is no return statement in Elixir!

As a Ruby developer, here are my thoughts:

1. Will I be developing with Rails again? Probably yes, for simpler applications / API servers.

2. Is Ruby dying? No. In fact, I can't wait for Ruby 3.

Some drawbacks of Elixir:

1. It's relatively new, so sometimes you'll be on your own, and that's okay.

2. Fewer libraries compared to the Ruby ecosystem. But you can easily write your own.

3. Fewer developers, though it should be fairly easy to onboard Ruby developers.

Cheers.

28
agentgt 5 days ago 0 replies      
I realize this is off topic but how does Discord make money? I can't figure out their biz model (I'm not a gamer so I didn't even know about them).
29
jaequery 6 days ago 6 replies      
Does anyone know if Phoenix/Elixir has something similar to Ruby's better_errors gem? I see Phoenix has a built-in error stack-trace page which looks like a clone of better_errors, but it doesn't have the real-time console inside it.

Also, I wish they had an ORM like Sequel. These two are really what's holding me back from going all in on Elixir. Would anyone care to comment on this?

30
concatime 4 days ago 0 replies      
Sad to see some people using raw, insignificant benchmarks to evaluate a language [0].

[0] https://news.ycombinator.com/item?id=14479757

31
zitterbewegung 6 days ago 1 reply      
Compared to Slack, Discord is a much better service for large groups. Facebook uses it for React.
32
grantwu 5 days ago 0 replies      
"Discord clients depend on linearizability of events"

Could this possibly be the cause of the message reordering and dropping that I experience when I'm on a spotty connection?

33
framp 6 days ago 0 replies      
Really lovely post!

I wonder how Cloud Haskell would fare in such a scenario

34
dandare 5 days ago 1 reply      
What is the business model behind Discord? They boast about being free multiple times; how do they make money? Or how do they plan to make money?
35
brightball 6 days ago 1 reply      
I so appreciate write-ups that get into the details of microsecond-scale performance gains at that scale. It's a huge help for the community.
36
KrishnaHarish 5 days ago 0 replies      
Scale!
37
KrishnaHarish 5 days ago 0 replies      
What is Discord and Elixir?
38
marlokk 6 days ago 0 replies      
"How Discord Scaled Elixir to 5M Concurrent Users"

click link

[Error 504 Gateway time-out]

only on Hacker News

39
orliesaurus 6 days ago 1 reply      
Unlike Discord's design team, who seem to just copy all of Slack's designs and assets, the engineering team seems to have their shit together; it is delightful to read your Elixir blog posts. Good job!
40
khanan 6 days ago 1 reply      
Problem is that Discord sucks since it does not have a dedicated server. Sorry, move along.
5
Employees Who Stay in Companies Longer Than Two Years Get Paid 50% Less forbes.com
661 points by askafriend  1 day ago   501 comments top 45
1
elnygren 18 hours ago 18 replies      
The best way to increase your salary at your current workplace? Get offers from other companies and ask for a meeting with your boss (or whoever decides your pay - HR, etc.).

Bring up the offers, discuss what you'd get there, and also go through the potential career you could build at those companies. For example: Consultancy X would pay me $k/mo, and every 6 months I'd move up the ladder if I perform well.

With this kind of discussion you should be able to get a raise that brings your pay even above the competing offers.

Treat your current job as one of the options you could choose that day; make your employer fight for you every 6-12 months. And remember, it's not personal, it's just business. Your employer would let you go and throw you under the bus if it made business sense.

tl;dr: don't get too emotionally attached to a job; that's when they get you.

2
nilkn 20 hours ago 10 replies      
I think it's important to understand that there's a significant amount of selection bias possible here. In general, folks who switch jobs every two years are the folks who are not getting offered big raises by their current companies. The ones who are getting offered big raises may still choose to leave due to other reasons, but they often will choose to stay instead. And they won't be making a big fuss about it online.

There's another phenomenon here. Almost by definition, average developers are not going to get big raises. They'll get average raises. The average developer raise is probably 3-5%, which is not big but it's more than the average raise across all professions, at least in the US.

This leads to an interesting question: why can the average developer get a big raise by switching jobs? I think at least part of the answer is that companies simply have more information about employees who've been around for at least a few years than they do about potential hires still on the market. It's a lot easier for a company to realize an employee is about average once that person has been on the job for 12+ months than it is during the interview phase, where the questions are heavily sandboxed and generally focused on basic problem solving ability rather than the candidate's ability to convert that into business value.

Finally, in general people who are average but think they're above average really do not like to confront the fact that they're average. So average developers with big egos being offered average raises will often very vehemently argue that the problem is all with the companies they've worked at and never with themselves.

---

Another point worth focusing on here is that raises are really determined by the business value that you're producing, not your raw intelligence or passion for coding. Very smart coders may or may not be any good at converting that talent into lots of value for the company. It may even be that sometimes a less talented coder gets offered a much bigger raise because other skills allowed them to create significant value.

I think this explains why so many folks are average but think they're above average. It's because they might indeed be above average in some metrics, but not the metrics that matter to their employer.

3
sidlls 23 hours ago 10 replies      
There is a plateau as one reaches senior positions.

As I went from "no" experience to my current position, my job switches (every ~2 years) were always for 10%-25% (in one case, 100%, but that was Midwest to Bay Area, so other factors were at play). I never got more than 5% merit increases otherwise.

Most of my friends who have been at the same place for >10 years in engineering (not software) are just now reaching parity with my base salary (CoL-adjusted) and they didn't spend 6 years in academia before transitioning to industry.

Two-year tenures aren't job hopping; they're a reflection of how this industry works. Very few companies offer sufficient breadth and depth of product complexity, career advancement, or other similar things to make staying longer worthwhile. I'd say the sweet spot is 2-4 years, except at very large companies (e.g. Google) (EDIT:) or companies developing complicated products where physical engineering or regulatory factors complicate development. Anything longer, especially if there is a lot of long-term stasis on a resume (e.g. "tech lead on product X" for more than a year or two), is an indication to me of someone who either isn't capable of stretching himself or doesn't want to. Anything less, especially if more than one project per job exists, indicates an inability to see a project through to maintenance, or someone who is easily bored.

4
manicdee 22 hours ago 2 replies      
My contrary opinion is that people who job-hop every two years are the ones who come in, make enough progress that management thinks they are pretty nifty, then leave before they have to do any maintenance on the technical debt they left behind.

Sure, it is good to be highly paid, but the situation just reinforces the idea that people who wear suits are paid far too much.

Though I find myself in the situation of wanting to earn more, so I am seriously considering switching to SAP. Sell my soul, buy a house, live with my conscience for the rest of my life?

5
needlessly 23 hours ago 4 replies      
I don't get people who say, "Well I would never hire someone who has never worked more than 5 years at a single place!!!"

I would never have increased my salary from $68k to $115k in 5 years otherwise. I'd probably be somewhere around $80k right now, at best, if I hadn't switched jobs twice.

If it means some hiring manager is going to voice a snarky opinion, then yes, I'll take my extra money.

6
fizl 1 day ago 4 replies      
This is a really poor article, and I'm surprised Forbes would publish it. As far as I can tell, there's absolutely no data behind any of the assertions in the article, and the title is just conjecture by the author based on a whole lot of assumptions.
7
southphillyman 11 hours ago 1 reply      
I've never understood why companies will refuse to give valuable employees reasonable raises but then turn around and pay an unproven new hire even more money to replace them. So you refuse to give me a 10% increase because "never negotiate with terrorists," but will gladly pay the 20% increase I'm getting at my new job to my replacement (because that's the market). It just seems so short-sighted.
8
jondubois 17 hours ago 2 replies      
Yes, that never made sense to me. Employees who have been around for a couple of years are much more efficient and know their way around - they are worth more - and yet they often get paid much less than 3-month or 6-month contractors, or even new full-time employees in many cases.

Also if they say that they want to leave a company, usually employers let them go relatively easily without making any significant counter-offer. It means that employers don't even see the value there; they actually think that every employee is 100% replaceable and don't account for the massive efficiency loss incurred.

9
dsmithatx 12 hours ago 2 replies      
I keep seeing articles like this that fail to point out an important fact: the economy has been strong for a long time, and it is a workers' market. Once there is another downturn, it will be an employers' market. Job hoppers will be lucky to land interviews, much less command high salaries - unless you are in the top percentile of your skill level or have a highly sought-after skill like Oracle DBA.

I worry about younger people shaping their world view and career around the last 10 years. I did that, and when I was laid off in 2002 it was a painful few years of realization, not finding work.

10
jurassic 21 hours ago 1 reply      
Maybe I'm more conservative than others here, but anything less than a 2-year tenure at a company seems suspicious to me. The people I've known who bounced after ~18 months or less were often the ones I would've wanted to quit anyway - the ones who weren't cutting the mustard and weren't on track for promotion. For them, talking a big game at interview time every two years probably is income-maximizing, because the longer they stick in the same job without promotions, the more obvious their stagnancy is on the resume.
11
zelos 17 hours ago 1 reply      
Yet large companies still launch investigations into why retention is so bad and have employees fill out surveys to try to figure out why people are leaving. Multiple-choice surveys, of course, with no questions about compensation.
12
bitexploder 11 hours ago 0 replies      
This article flies in the face of what I have been reading for years. It takes some simple data ("employees who change jobs get an X% raise") and extrapolates it into a whole lot of wrong. It does vary by organization.

For example, if you are at an organization where you are basically the most senior employee already, and you can take on a new role at a larger organization with new responsibilities or skill growth, a job change can make sense. If you are just bouncing around between, say, Amazon, Google, and Facebook, this can often be a dud strategy in the long term.

Take this with a grain of salt, since I don't have time to dig up citations, but I have read that long-term compensation is actually higher for individuals who don't switch jobs so often. I guess the career employee is a bit of a legend in IT these days, so we could have a lively discussion about selection effects and the difference between a big tech firm and a more traditional F500. But I think in the long run these things will normalize, because they are basic human-nature and organizational-structure problems, and nothing is intrinsically special about technology companies. They are the special darlings of the era, so they get to break the rules a bit. Maybe that really will foment a long-term change in "the rules", or maybe it will always be a bit of a bubble in terms of how the organizations operate.

13
Steeeve 13 hours ago 1 reply      
Job hopping works. But the more you do it, the more you can expect to see skepticism from hiring managers.

Software development is a little bit different because right now there is more demand for workers than there are good developers. It hasn't always been that way and it won't always be that way.

When you're working for the right employer at the right salary, an extra 25-50% isn't going to be enough to lure you away. That and at some places stock options are worth something and vesting schedules actually play into the decision process.

14
nfriedly 11 hours ago 1 reply      
I wonder how strongly this holds after you're into the higher end of the salary range?

My best increase ever was going from my high-school/college web dev job to freelance, where I more than tripled my hourly rate, from $12.50/hr to $45/hr. (It probably wasn't quite 3x after accounting for taxes and healthcare and such, but it was still a good jump, and it brought more flexible hours too.)

Since then, I've gotten 20-25% increases a couple of times, topping out at $120k.

I was switching more like every 3-5 years, so a 5%/year raise actually ends up in the same ballpark, if not slightly better. Every two years might be better from a pure cash perspective, but I didn't feel like I was "done" with my roles until the 3-5 year mark - I think I learned more, delivered more value, and achieved a better sense of accomplishment doing it my way.

I also live in Ohio, with the exception of one year I spent in SF. I imagine I could double my current salary moving back to SF or over to NYC, but I'm happier here. (And $100k+ goes a bit farther here...)

15
sverhagen 21 hours ago 0 replies      
Sad, I am. I think I'm perfectly capable to sell myself to the next job. But I like what I'm doing, I like the goals that are still ahead and being worked towards, great team, all good. And whether you read the article with a bit (or a lot) of skepticism, it seems common knowledge that the biggest steps in salary are made when going to that next job. So... where's the silver lining for loyal dogs? Should I just pick up a book on negotiating, and take it up with my VP of People?
16
pm24601 7 minutes ago 0 replies      
Thoughts:

1. A company can be underpaying for the employee's experience but paying correctly for the skills the company needs.

2. It should not be a bad thing for an employee to leave a company to advance their career - the current company may not offer the needed opportunities.

3. Managers should proactively raise the question with the employee: "Let's talk about recognizing when you SHOULD move to a new position at XYZ or someplace else."

17
epynonymous 15 hours ago 0 replies      
A few comments: this article seems to want to incite people to jump ship, which is fine, but FWIW I have been with the same company for about 14 years and I don't think I'm getting paid less than my peers; rather, I think I'm well above most people with similar experience and titles. Part of it may be that this article generalizes and doesn't segment the different industries. I work in software, and my company has done a lot to make certain it stays competitive and does not lose key people. I met a guy at Google who has been there since 2004; I doubt he's making less than his peers. There are certainly benefits, say restricted stock (look at GOOG from 5 years ago and you'll understand). This article is probably grouping everyone from people who work the counter at McDonald's to investment bankers. Nice work, Forbes.

Also, I think working for the same company gives you more opportunities to understand different parts of the company that you would certainly miss out on if you jumped ship every two years.

Also note that, as a hiring manager, I look down on candidates who jump ship too frequently, no matter how strong their skills.

18
johan_larson 1 day ago 9 replies      
It's a bit strange that this sort of job-hopping isn't a red flag to employers. You'd think managers would be reluctant to hire an employee who has jumped ship many times before, just as they were becoming useful.
19
animex 18 hours ago 2 replies      
Ha, this is funny. I rarely stay longer than 2 years at a company, and every time I jumped, my salary went up significantly (i.e. >10%). HOWEVER, I've seen guys who stayed with a company earning less for years end up as the VP/President of the company eventually, with their pay going way up.
20
jryan49 22 hours ago 0 replies      
If the article were even based on actual facts and figures, wouldn't survivorship bias be a huge factor here? Maybe the bottom of the barrel are bringing the average down because they are stuck in jobs with no mobility and aren't valuable enough to get raises.
21
hughperkins 23 hours ago 1 reply      
Correlation != causation.

Seems perfectly plausible to me that the more confident candidates are more likely to be poached by other companies, for example.

22
abalashov 17 hours ago 0 replies      
I have been self-employed since age 22, but from 18 to 22 I had 6 jobs at 5 companies in 3.5 years. I started out in tech support at a small local ISP, hopped, came back as a sysadmin, then became the primary sysadmin, and really cut my early-career teeth there (and am ever grateful for the experience!). But it was a small college-town operation that ran on student-type labour; at my peak, I think I topped out at $16/hr. The next hop was a move to Atlanta, where I commanded a $55k salary (about double my peak hourly earnings at NEGIA) and ultimately reached $70k at 21 - not terrible for a 21-year-old in Atlanta in 2007.

It's easy to place big gains when moving up from entry level, especially if you entered into an employment bargain that presumed being paid vastly below market for doing fairly sophisticated and diverse things in exchange for having a diamond-in-the-rough skill set. I learned more at the small company than I have ever learned in my life, and more quickly, and was able to effectively parlay that into big-boy corporate jobs in Atlanta.

In one sense, this validates the thesis. But it's important to remember that it does plateau. When moving up from entry level and early-career pay, you're flying close to the ground and it feels like you're going fast. The ground rushes past you, and it's intoxicating. Minimum-wage checkout clerk to full-time salaried assistant store manager? Zoom!

The momentum doesn't last. Had I continued in W-2 employment, I would have likely hit low to low-mid $100k by now, after ten years, but no more.

23
nextstep 1 day ago 1 reply      
From 2014, but still relevant. Unfortunately, the best negotiating tactic is to bring a competing offer and essentially threaten to quit. Or just switch firms every ~18 months like the post suggests.
24
syntaxing 13 hours ago 1 reply      
I wish these studies would incorporate the value of benefits. Benefits are a huge reason (especially for people with families) why people stay at jobs. Imagine having a job at $60k with four weeks of vacation and good 401k matching, compared with $80k with one week of vacation and horrible health insurance.
25
conanbatt 13 hours ago 1 reply      
The single most important change that would fix things like this is making salaries public.

The day that happens, salary statistics will very likely take a huge jump.

26
castratikron 8 hours ago 0 replies      
I'm not sure why companies prefer that someone leave and take all of their institutional knowledge with them, then hire someone new at a higher wage than would have kept the first person at the company. They are not taking into account the value of knowledge. If your domain knowledge increases 20% within a year, then you should be getting a 20% raise.

With the elimination of pensions, the only vesting schedule most people have is for their 401k, which is usually only a couple of years. I'm not sure what employers are expecting to happen, or why they're surprised when their hot college-grad employees decide to leave for a big pay bump.

27
EADGBE 12 hours ago 0 replies      
When I started in this industry, my father, who also works in this industry gave me this advice. Something along the lines of "if you want a raise, go somewhere else, that's the only way to move up quickly". He was right, three times in the last 4-5 years.
28
richardknop 16 hours ago 0 replies      
This is unfortunately true in our industry, in my experience. The only way for top performers to get a substantial raise is to bring a new job offer at +20-30%. Otherwise the HR department will give you X reasons why asking for such a big raise is inappropriate.

This is part of a free-market economy: companies try to minimize their costs and maximize profits, so it makes sense not to waste money on big raises. This is why you need to play the free-market game and force their hand.

Once you get a much better competing offer, your current employer will probably offer to match it. But at that point, why stay anyway, if you had to force them to consider a substantial raise by going to interviews and getting a better offer?

Also, there is an old advice which says to never accept a counter offer.

I do think there is a ceiling to this approach. You can probably do it 4-5 times and get to quite a high salary (150-200k). After that, this tactic doesn't work as well anymore, so staying in one place for a long time and earning an internal promotion becomes the better option.

29
hdi 17 hours ago 0 replies      
That's been my experience as well.

But it didn't happen because I was on the lookout for a fat paycheck. It happened because the companies I worked at couldn't provide the stimulation, technical capabilities, working environment, and personal development I was looking for.

Then I realised very few of them do where I am, so now I do 6-12 month contracts; it's helped with saving a bit of my soul and getting paid a bit more.

30
methodin 10 hours ago 1 reply      
What makes the most sense to me is making moves early in your career, then hopefully finding a company/CEO/team you really like and settling in for a while. You aren't going to learn some of the valuable lessons if you swap jobs too often, as you will never really become an expert at what you're doing.

It's certainly valid to move if you are underpaid and don't particularly enjoy your job.

31
k__ 15 hours ago 1 reply      
Sure, I mean who gives you more than a 20% raise?

When you change jobs, you can renegotiate everything. More money, more holidays, less hours, etc.

Last time I changed jobs, I worked fewer hours and got 20% more money; my last boss would have laughed at me if I had asked for this.

32
g9yuayon 6 hours ago 0 replies      
As I often tell my friends: go to a company like Netflix. Netflix actually tries very hard to make money a non-concern for its employees, and they do a hell of a great job.
33
usmannk 1 day ago 2 replies      
I'm reluctant to apply this mindset to the tech, or at least SV tech, industry. While it's common practice to hop jobs every 2-5 years, it seems to be more for new or different opportunities (Different sized company, a new domain, new technical challenges, etc.) than it is for a more competitive offer. Internal promotion, both in compensation and position, seems to be the relative norm.
34
johnward 9 hours ago 0 replies      
I often refer to this gem on salary negotiation: http://www.kalzumeus.com/2012/01/23/salary-negotiation/
35
georgeburdell 11 hours ago 0 replies      
How much does this apply to senior employees? I have a PhD, started out at the "Senior" level as a new grad, and I just recently got my first promotion to "Staff" Engineer, which represents roughly the 85th-90th percentile for pay in my company.
36
anon4728 1 day ago 0 replies      
There's this management mythology that:

- Worker bees will stay because they're clueless and don't have the initiative to keep moving.

- There's pressure from on high to fudge performance reviews downwards to "save the company money." (Where my mom worked, a middle manager reduced all subordinate reviews across the board on the BS rationale that "people are overly generous.")

If you want the least turnover / most morale / most productivity, pick people who can grow the most and grow them until they can no longer keep up or they find something else. Develop people (training, mentoring, promotions, bonuses, raises, perks, etc.); don't just consume them as static widgets to fill a hole.

37
Pogba666 11 hours ago 0 replies      
I went through many comments here, and despite lots of disagreement between people, I believe one thing everyone will agree with is: in this game, individuals are much weaker than companies and much more vulnerable regardless.

A company will never go bankrupt from losing one superstar, but a worker and his/her family can be in a bad situation if the company decides to do something to him/her. So negotiation between company and worker can never be a fair game at any point.

38
sergers 21 hours ago 0 replies      
Depending on the size of the company, there may be some distinct departments.

I stayed on the same team for 6 years, with multiple promotions, but only a 20+k difference in salary. I jumped teams twice in the next 3 years, +20k.

Now I have doubled the salary I started with, and I've been with the company 10 years. The only thing I regret is not jumping teams earlier, after a few years max. The different roles in my original department were a great learning experience... but they didn't help me financially.

39
trentmb 20 hours ago 1 reply      
I just hit my two year mark. Guess it's time to start job hunting.

My current employer's 401k contributions vest at 3.5 years. Would I be out of line insisting that my 'next' employer make a one-time contribution equal to what I sacrifice?
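To make that ask concrete, a back-of-the-envelope sketch in Go (the language of the other examples in this digest); the match amount and cliff figures are invented for illustration, not taken from the comment:

  package main

  import "fmt"

  func main() {
      // Assumed figures: a $4,000/year employer match on a 3.5-year
      // cliff vest, walked away from after 2 years of service.
      matchPerYear := 4000.0
      yearsWorked := 2.0
      vested := 0.0 // cliff vesting: nothing vests before the cliff
      forfeited := matchPerYear*yearsWorked - vested
      fmt.Printf("one-time contribution to ask for: $%.0f\n", forfeited)
  }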

40
known 21 hours ago 0 replies      
Employees != IT Employees
41
mcguire 11 hours ago 0 replies      
"Its a fact that employees are underpaid."

Forbes?

42
pc86 13 hours ago 0 replies      
(2014)
43
blazespin 18 hours ago 0 replies      
This might be confounded by the fact that people who switch jobs are also willing to move geographically.
44
vacri 1 day ago 2 replies      
> Jessica Derkis started her career earning $8 per hour ($16,640 annual salary) as the YMCA's marketing manager. Over 10 years, she's changed employers five times to ultimately earn $72,000 per year at her most recent marketing position.

Amazing, thanks Forbes. I never knew that if you started out on minimum wage and then moved into a mundane professional role, you'd earn considerably more!

45
mustafabisic1 12 hours ago 1 reply      
Also, a great way to increase your salary is to take the lowest pay possible at first. Here is what I mean: https://medium.com/the-mission/take-the-lowest-pay-possible-... (This is not my article and I have no affiliation with it :))
6
Toward Go 2 golang.org
742 points by dmit  4 days ago   635 comments top 39
1
munificent 4 days ago 3 replies      

> I can't answer a design question like whether to support generic methods, which is to say methods that are parameterized separately from the receiver.
I work on the Dart language. Dart was initially designed with generic classes but not generic methods. Even at the time, some people on the team felt Dart should have had both.

We proceeded that way for several years. It was annoying, but tolerable because of Dart's optional type system -- you can sneak around the type checker really easily anyway, so in most cases you can just use "dynamic" instead of a generic method and get your code to run. Of course, it won't be type safe, but it will at least mostly do what you want.

When we later moved to a sound static type system, generic methods were a key part of that. Even though end users don't define their own generic methods very often, they use them all the time. Critical common core library methods like Iterable.map() are generic methods and need to be in order to be safely, precisely typed.

This is partially because functional-styled code is fairly idiomatic on Dart. You see lots of higher-order methods for things like manipulating sequences. Go has lambdas, but stylistically tends to be more imperative, so I'm not sure if they'll feel the same pressure.

I do think if you add generic types without generic methods, you will run into their lack. Methods are how you abstract over and reuse behavior. If you have generic classes without generic methods, you lose the ability to abstract over operations that happen to use those generic classes.

A simple example is a constructor function. If you define a generic class that needs some kind of initialization (discouraged in Go, but it still happens), you really need that constructor to be generic too.
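To make the trade-off concrete in Go terms (the language under discussion here), a sketch of what reusable mapping looks like without parameterized functions; the helper names are invented:

  package main

  import (
      "fmt"
      "strconv"
  )

  // Without generics, each element type needs its own copy of "map"...
  func mapIntToString(xs []int, f func(int) string) []string {
      out := make([]string, 0, len(xs))
      for _, x := range xs {
          out = append(out, f(x))
      }
      return out
  }

  // ...or one type-erased version that gives up static safety,
  // the interface{} escape hatch discussed elsewhere in this thread.
  func mapAny(xs []interface{}, f func(interface{}) interface{}) []interface{} {
      out := make([]interface{}, 0, len(xs))
      for _, x := range xs {
          out = append(out, f(x))
      }
      return out
  }

  func main() {
      fmt.Println(mapIntToString([]int{1, 2, 3}, strconv.Itoa))
  }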

2
dgacmu 4 days ago 5 replies      
I should send this to rsc, but it's fairly easy to find examples where the lack of generics caused an opportunity cost.

(1) I started porting our high-performance, concurrent cuckoo hashing code to Go about 4 years ago. I quit. You can probably guess why from the comments at the top of the file about boxing things with interface{}. It just got slow and gross, to the point where libcuckoo-go was slower and more bloated than the integrated map type, just because of all the boxing: https://github.com/efficient/go-cuckoo/blob/master/cuckoo.go

(my research group created libcuckoo.)

Go 1.9 offers a native concurrent map type, four years after we looked at getting libcuckoo on go -- because fundamental containers like this really benefit from being type-safe and fast.

(2) I chose to very tightly restrict the initial set of operations we accepted into the TensorFlow Go API because there was no non-gross way that I could see to manipulate Tensor types without adding the syntactic equivalent of the bigint library, where everything was Tensor.This(a, b) and Tensor.That(z, q). https://github.com/tensorflow/tensorflow/pull/1237 and https://github.com/tensorflow/tensorflow/pull/1771

I love go, but the lack of generics simply causes me to look elsewhere for certain large classes of development and research. We need them.
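For readers who haven't hit the boxing problem, a minimal sketch of what interface{} containers force on you (illustrative only, not libcuckoo's actual code):

  package main

  import "fmt"

  // Stack boxes every element as interface{}: each Push may heap-allocate,
  // and each Pop needs a runtime type assertion that can panic.
  type Stack struct {
      items []interface{}
  }

  func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

  func (s *Stack) Pop() interface{} {
      v := s.items[len(s.items)-1]
      s.items = s.items[:len(s.items)-1]
      return v
  }

  func main() {
      var s Stack
      s.Push(42)
      n := s.Pop().(int) // caller must re-assert the type on every read
      fmt.Println(n + 1)
  }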

3
fusiongyro 4 days ago 19 replies      
The paragraph I was looking for is this:

> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve. As a result, I can't answer a design question like whether to support generic methods, which is to say methods that are parameterized separately from the receiver. If we had a large set of real-world use cases, we could begin to answer a question like this by examining the significant ones.

This is a much more nuanced position than the Go team has expressed in the past, which amounted to "fuck generics," but it puts the onus on the community to come up with a set of scenarios where generics could solve significant issues. I wonder if Go's historical antipathy towards this feature has driven away most of the people who would want it, or if there is still enough latent desire for generics that serious Go users will be able to produce the necessary mountain of real-world use cases to get something going here.

4
bad_user 4 days ago 4 replies      
Java's generics have had issues due to use-site variance, plus the language isn't expressive enough, leading its users into a corner where they start wishing for reified generics (although arguably that's a case of missing the forest for the trees).

But even so, even with all the shortcomings, once Java 5 was released people migrated to usage of generics, even if generics in Java are totally optional by design.

My guess as to why that happens is that the extra type safety and expressivity are definitely worth it in a language, and without generics the type system ends up getting in your way. I personally can tolerate many things, but not a language without generics.

You might as well use a dynamic language. Not Python of course, but something like Erlang would definitely fit the bill for Google's notion of "systems programming".

The Go designers are right not to want to introduce generics though, because if you don't plan for generics from the get-go, you inevitably end up with a broken implementation due to backwards-compatibility concerns, just like Java before it.

But just like Java before it, Go will have half-assed generics. It's inevitable.

Personally I'm sad because Google had an opportunity to introduce a better language, given their marketing muscle. New mainstream languages are in fact a rare event. They had an opportunity here to really improve the status quo. And we got Go, yay!

5
didibus 4 days ago 6 replies      
I get that everyone would love to have a functional language that's eager by default with optional lazy constructs, great polymorphism, static typing with inference, generics, a great concurrency story, and an efficient GC, that compiles quickly to self-contained binaries with simple and effective tooling which takes only seconds to set up, while giving you performance that equals Java and can rival C, with a low memory footprint.

But, I don't know of one, and maybe that's because the Go team is right, some tradeoffs need to be made, and they did, and so Go is what it is. You can't add all the other great features you want and eat the Go cake too.

Disclaimer: I'm no language design expert. I'm just inferring this from the fact that I've yet to hear of such a language.

6
EddieRingle 4 days ago 3 replies      

> To minimize disruption, each change will require careful thought, planning, and tooling, which in turn limits the number of changes we can make. Maybe we can do two or three, certainly not more than five. ... I'm focusing today on possible major changes, such as additional support for error handling, or introducing immutable or read-only values, or adding some form of generics, or other important topics not yet suggested. We can do only a few of those major changes. We will have to choose carefully.
This makes very little sense to me. If you _finally_ have the opportunity to break backwards-compatibility, just do it. Especially if, as he mentions earlier, they want to build tools to ease the transition from 1 to 2.

> Once all the backwards-compatible work is done, say in Go 1.20, then we can make the backwards-incompatible changes in Go 2.0. If there turn out to be no backwards-incompatible changes, maybe we just declare that Go 1.20 is Go 2.0. Either way, at that point we will transition from working on the Go 1.X release sequence to working on the Go 2.X sequence, perhaps with an extended support window for the final Go 1.X release.
If there aren't any backwards-incompatible changes, why call it Go 2? Why confuse anyone?

---

Additionally, I'm of the opinion that more projects should adopt faster release cycles. The Linux kernel has a new release roughly every ~7-8 weeks. GitLab releases monthly. This allows a tight, quick iterate-and-feedback loop.

Set a timetable, and cut a release with whatever is ready at the time. If there are concerns about stability, you could do separate LTS releases. Two releases per year is far too slow, I feel. Besides, isn't the whole idea of Go to go fast?

7
jimjimjim 4 days ago 8 replies      
Here be Opinions:

I hate generics. also, I hate exceptions.

Too many people are wanting "magic" in their software. All some people want is to write the "Happy Path" through their code to get some Glory.

If it's your pet project to control your toilet with tweets then that's fine. But if it's for a program that will run 24/7 without human intervention then the code had better be plain, filled with the Unhappy Paths and boring.

Better one hour writing "if err" than two hours looking at logs at ohshit.30am.

8
zackmorris 4 days ago 3 replies      
Go doesn't have const structs, maps or other objects:

https://stackoverflow.com/questions/43368604/constant-struct...

https://stackoverflow.com/questions/18342195/how-to-declare-...

This is a remarkable oversight which makes it impossible to write purely-functional code with Go. We also see this same problem in most other imperative languages, with organizations going to great lengths to emulate const data:

https://facebook.github.io/immutable-js/

Const-ness in the spirit of languages like Clojure would seem to be a relatively straightforward feature to add, so I don't really understand the philosophy of leaving it out. Hopefully someone here knows and can enlighten us!
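A small sketch of the limitation being described: Go's const only covers basic compile-time values, so composite data has to live in mutable vars (the config map is a made-up example):

  package main

  import "fmt"

  const answer = 42 // fine: basic, compile-time constant

  // const config = map[string]string{"env": "prod"} // won't compile:
  // composite values cannot be constants in Go.

  var config = map[string]string{"env": "prod"} // mutable, by necessity

  func main() {
      config["env"] = "dev" // nothing stops any caller from mutating it
      fmt.Println(answer, config)
  }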

9
Analemma_ 4 days ago 6 replies      
> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve.

This is sampling bias at work. The people who need generics have long since given up on Go and no longer even bother participating in Go-related discussions, because they believe it will never happen. Meanwhile, if you're still using Go, you must have use cases where the lack of generics is not a problem and the existing language features are good enough. Sampling Go users to try and find compelling use cases for adding generics is not going to yield any useful data, almost by definition.

10
loup-vaillant 4 days ago 1 reply      
> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve. [...] If we had a large set of real-world use cases, we could begin to answer a question like this by examining the significant ones.

Not implementing generics, then suggesting that it would be nice to have examples of generics being used in the wild... You had it coming, obviously.

Now what's the next step, refusing to implement generics because nobody uses it?

> Every major potential change to Go should be motivated by one or more experience reports documenting how people use Go today and why that's not working well enough.

My goodness, it looks like that is the next step. Go users have put up with the absence of generics, so they're not likely to complain too loudly at this point (besides, I hear the empty interface escape hatch, while not very safe, does work). More exacting developers have probably dismissed Go from the outset, so they won't be able to provide those experience reports.

11
AnimalMuppet 4 days ago 0 replies      
There are many comments griping about generics. There are many comments griping about the Go team daring to even ask what problems the lack of generics cause.

But take a look at this article about the design goals of Go: https://talks.golang.org/2012/splash.article Look especially at section 4, "Pain Points". That is what Go is trying to solve. So what the Go team is asking for, I suspect, is concrete ways that the lack of generics hinders Go from solving those problems.

You say those aren't your problems? That's fine. You're free to use Go for your problems, but you aren't their target audience. Feel free to use another language that is more to your liking.

Note well: I'm not on the Go team, and I don't speak for them. This is my impression of what's going on - that there's a disconnect in what they're asking for and what the comments here are supplying.

(And by the way, for those here who say - or imply - that the Go team is ignorant of other languages and techniques, note in section 7 the casual way they say "oh, yeah, this technique has been used since the 1970s, Modula 2 and Ada used it, so don't think we're so brilliant to have come up with this one". These people know their stuff, they know their history, they know more languages than you think they do. They probably know more languages than you do - even pjmlp. Stop assuming they're ignorant of how generics are done in other languages. Seriously. Just stop it.)

12
lemoncucumber 4 days ago 3 replies      
As much as I want them to fix the big things like lack of generics, I hope they fix some of the little things that the compiler doesn't catch but could/should. One that comes to mind is how easy it is to accidentally write:

 for foo := range(bar)
Instead of:

 for _, foo := range(bar)
When you just want to iterate over the contents of a slice and don't care about the indices. Failing to unpack both the index and the value should be a compile error.
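A runnable illustration of the trap: with a single loop variable, range yields indices, and both loops below compile without complaint:

  package main

  import "fmt"

  func main() {
      bar := []string{"a", "b", "c"}

      for foo := range bar {
          fmt.Println(foo) // prints 0 1 2: indices, not the strings
      }

      for _, foo := range bar {
          fmt.Println(foo) // prints a b c, as intended
      }
  }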

13
tschellenbach 4 days ago 4 replies      
The beauty of Go is that you get developer productivity pretty close to Ruby/Python levels with performance that is similar to Java/C++.

Improvements to package management is probably the highest item on my wishlist for Go 2.

14
Xeoncross 4 days ago 1 reply      
Generics have never stopped me from building in Go... But without them I often do my prototyping in python, javascript, or php.

Working with batch processing I'm often changing my maps to lists or hashes multiple times during discovery. Go makes me rewrite all my code each time I change the variable type.

15
vbezhenar 3 days ago 0 replies      
I like Go's concept: a very simple and minimalistic language, yet usable enough for many projects, even at the cost of some repetition. Generics are not a concern for me. But error handling is the thing I don't like at all. I think that exceptions are the best construct for error handling: they are not invasive, and if you don't handle an error, it won't die silently; you have to be explicit about that. In my programs there's very little error handling, usually some generic handling at layer boundaries (an unhandled exception leads to a transaction rollback; an unhandled exception returns as HTTP 500, etc.) and very few cases where I want to handle it differently. And this produces a correct and reliable program with very little effort. Now with Go I must handle every error. If I'm lazy, I handle it with `if err != nil { return err }`, but this style doesn't preserve the stack trace and it might be hard to understand what's going on. If I want to wrap the original error, the standard library doesn't even have this pattern; I have to roll my own wrapper or use a third-party library for such a core concept.

What I'd like is some kind of automatic error propagation, so any unhandled error returns from the function wrapped in some special class carrying enough information to find out what happened.
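A minimal sketch of the hand-rolled wrapper the comment alludes to, annotating each error with a message and its call site; the names wrap/wrapped are invented here, and libraries such as github.com/pkg/errors do this more completely:

  package main

  import (
      "errors"
      "fmt"
      "runtime"
  )

  // wrapped carries a message, the wrap site, and the underlying error.
  type wrapped struct {
      msg  string
      file string
      line int
      err  error
  }

  func (w *wrapped) Error() string {
      return fmt.Sprintf("%s (%s:%d): %v", w.msg, w.file, w.line, w.err)
  }

  // wrap annotates err with msg and the caller's location, so plain
  // "return err" chains still say where things went wrong.
  func wrap(err error, msg string) error {
      if err == nil {
          return nil
      }
      _, file, line, _ := runtime.Caller(1)
      return &wrapped{msg: msg, file: file, line: line, err: err}
  }

  func readConfig() error {
      return wrap(errors.New("permission denied"), "reading config")
  }

  func main() {
      if err := readConfig(); err != nil {
          fmt.Println(err)
      }
  }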

16
oelmekki 4 days ago 0 replies      
As a Ruby and Go dev, I'm a bit sad to see backward compatibility going. Knowing I could write code with minimal dependencies that would just work as-is years later was really refreshing compared to the high level of maintenance needed in a Ruby app.

But well, I trust the core team to make the best choices.

17
alexandernst 4 days ago 6 replies      
How about fixing all the GOPATH crap?
18
insulanian 3 days ago 0 replies      
> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve.

Collections?

19
silverlake 2 days ago 0 replies      
This is a frustratingly common way to design mainstream PLs: "show me the use case". I've seen it firsthand for a number of big PL projects. People are trapped in their little bubble. My approach is to be wildly polyglot, actively searching for good ideas in other languages. Also, I try to write complex code outside the intended domain space and understand why it's harder than it should be.

For example, in Go it's difficult to implement state machines in a clean way. Another example: handling events and timeouts is easier with reactive programming. Distributed programming is easier with Erlang or Akka. Don't wait for problem reports in the Go community. Look at the problems in other PLs and proactively improve Go.

20
nebabyte 4 days ago 0 replies      
But I always heard never to use Go 2! :P
21
cdnsteve 4 days ago 1 reply      
"We estimate that there are at least half a million Go developers worldwide, which means there are millions of Go source files and at least a billion of lines of Go code"
22
elliotmr 4 days ago 7 replies      
I must say that whenever there is a discussion about the merits of the Go programming language, it really feels hostile in the discussion thread. It seems that people are seriously angry that others even consider using the language. It is sort of painful reading through the responses which implicitly declare that anybody who enjoys programming with Go is clueless.

It also really makes me wonder if I am living in some sort of alternate reality. I am a professional programmer working at a large company and I am pretty sure that 95% of my colleagues (myself included, as difficult as it is for me to admit) have no idea what a reified generic is. I have run into some problems where being able to define custom generic containers would be nice, but I don't feel like that has seriously hindered my ability to deliver safe, functional, and maintainable software.

What I appreciate most about Go is that I am sure that I can look at 99% of the Go code written in the world and understand it immediately. When maintaining large code bases with many developers of differing skill levels, this advantage can't be overstated. That is the reason there are so many successful new programs popping up in Go with large open-source communities. It is because Go is accessible and friendly to people of varying skill levels, unlike most of the opinions expressed in this thread.

23
drfuchs 4 days ago 2 replies      
"Go 2 considered harmful" - Edsger Dijkstra, 1968
24
thibran 4 days ago 1 reply      
I would love to see uniform-function-call-syntax.

Turning: func (f Foo) name() string

Into: func name(f Foo) string

Callable like this: f.name() or name(f)

Extending foreign structs from another package should be possible too, just without access to private fields.

Other than that, if-as-expression would be nice to have, too.

25
concatime 4 days ago 0 replies      
The leap second problem reminds me of this post[0].

[0] https://news.ycombinator.com/item?id=14121780

26
rmrfrmrf 3 days ago 0 replies      
Sorry, but if "a major cloud platform suffers a production outage at midnight" is the bar for effecting change in Go, then I want no part of it.
27
issaria 3 days ago 0 replies      
Regarding the lack-of-generics problem, is there a way to get around it? There are always plenty of tools for that. If the IDE can patch the syntax and support a certain kind of pragma to generate the template code, then the problem is almost solved. Not sure if it'll cover all the cases Java does, though.
28
jasonwatkinspdx 4 days ago 6 replies      
Disclaimer: I mean this with love

This post really frustrates me, because the lengthy discussion about identifying problems and implementing solutions is pure BS. Go read the years' worth of tickets asking for monotonic time, and see how notable names on the core team responded. Pick any particular issue people commonly have with golang, and you'll likely find a ticket with the same pattern: overt dismissal, with a heavy moralizing tone implying you should feel bad for even asking about the issue. It's infuriating that the same people making those comments are now taking credit for the solution, when they had to be dragged into even admitting the issue was legitimate.

29
beliu 4 days ago 1 reply      
This was announced at GopherCon today. FYI, if folks are interested in following along with the other conference proceedings, there is no livestream, but there is an official liveblog: https://sourcegraph.com/gophercon
30
bsaul 3 days ago 0 replies      
About generics: I've never had a deep look at it, but I've always wondered if most of the problem couldn't be solved by having base types (int, string, date, float, ...) implement fundamental interfaces (sortable, hashable, etc). I suppose that if the solution were that simple, people would've already thought about it.

In particular, I think it could help with method dispatch, but probably not with memory allocation (although Go already uses interfaces pretty extensively).
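A sketch of why that is clunkier than it sounds in current Go: built-in types don't implicitly satisfy user-declared interfaces, so you still end up with wrapper types and runtime assertions (Ordered and Int here are invented for illustration):

  package main

  import "fmt"

  // Ordered is the kind of fundamental interface the comment imagines
  // base types implementing.
  type Ordered interface {
      Less(other interface{}) bool
  }

  // int can't satisfy Ordered on its own; it needs a named wrapper.
  type Int int

  func (a Int) Less(b interface{}) bool { return a < b.(Int) }

  func max(xs []Ordered) Ordered {
      m := xs[0]
      for _, x := range xs[1:] {
          if m.Less(x) {
              m = x
          }
      }
      return m
  }

  func main() {
      fmt.Println(max([]Ordered{Int(3), Int(7), Int(2)})) // 7
  }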

31
martyvis 3 days ago 0 replies      
"Play it cool" https://youtu.be/BHIo6qwJarI Go! by Public Service Broadcasting.

(Sorry just discovered this song a few days ago)

32
verroq 3 days ago 0 replies      
Forget generics. What I miss most are algebraic data types.
33
egonschiele 4 days ago 0 replies      
Most of the discussion here seems to be around generics, and it sounds like they still don't see the benefit of generics.

I like Go, but the maintainers have a maddeningly stubborn attitude towards generics and package managers and won't ease up even with many voices asking for these features.

34
notjack 4 days ago 2 replies      
> We did what we always do when there's a problem without a clear solution: we waited. Waiting gives us more time to add experience and understanding of the problem and also more time to find a good solution. In this case, waiting added to our understanding of the significance of the problem, in the form of a thankfully minor outage at Cloudflare. Their Go code timed DNS requests during the end-of-2016 leap second as taking around negative 990 milliseconds, which caused simultaneous panics across their servers, breaking 0.2% of DNS queries at peak.

This is absurd. Waiting to fix a known language design issue until a production outage of a major customer is a failure of process, not an achievement. The fact that the post presents this as a positive aspect of Go's development process is beyond comprehension to me.
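For context, a hedged sketch of the failure mode: elapsed time computed from wall-clock readings can go negative when the clock steps backwards. Go 1.9's fix embeds a monotonic reading in time.Time so time.Since stays non-negative:

  package main

  import (
      "fmt"
      "time"
  )

  func main() {
      start := time.Now()
      // ... time a DNS request here ...
      elapsed := time.Since(start)
      // Before Go 1.9, a leap second or NTP step between the two readings
      // could make elapsed negative; that was Cloudflare's panic trigger.
      if elapsed < 0 {
          elapsed = 0 // defensive clamp on pre-monotonic-clock runtimes
      }
      fmt.Println(elapsed)
  }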

35
jgrahamc 4 days ago 1 reply      
I didn't expect to get namechecked in that.

Shows the value of constantly being transparent and supporting open source projects.

36
nickbauman 4 days ago 2 replies      
Will there be gotos in Go 2? Asking for a friend.
37
EngineerBetter 4 days ago 1 reply      
I suspect the authors of Golang got drunk, and challenged each other to see how many times they could get people to type 'if err != nil' in the next decade.
38
johansch 4 days ago 5 replies      
My main wish:

Please don't refuse to compile just because there are unused imports.

Please do warn, and loudly say it's NON-CONFORMANT or whatever will embarrass me enough to keep me from sharing my piece of Go code with someone else.. but.. can I please just run my code, in private, when experimenting?

39
JepZ 4 days ago 2 replies      
I know there is a lot of discussion about generics, but I am not sure if that is missing the point. I mean, 'generics' sounds like a complex concept from the Java world, and I am uncertain if that's really what we need in Go.

From my experience, I think we should talk about container types, because they make up 80% of what we would like to have generics for. Actually, Go feels as if it has only two container data structures: slices and maps. And both feel as if they are pretty hard-coded into the language.

Yes, I am sure there are more container types and it is possible to build your own, but I think it is not easy enough.

7
ECMAScript 2017 Language Specification ecma-international.org
600 points by samerbuna  6 days ago   241 comments top 27
1
thomasfoster96 6 days ago 4 replies      
Proposals [0] that made it into ES8 (what's new):

* Object.values/Object.entries - https://github.com/tc39/proposal-object-values-entries

* String padding - https://github.com/tc39/proposal-string-pad-start-end

* Object.getOwnPropertyDescriptors - https://github.com/ljharb/proposal-object-getownpropertydesc...

* Trailing commas - https://github.com/tc39/proposal-trailing-function-commas

* Async functions - https://github.com/tc39/ecmascript-asyncawait

* Shared memory and atomics - https://github.com/tc39/ecmascript_sharedmem

The first five have been available via Babel and/or polyfills for ~18 months or so, so they've been used for a while now.

[0] https://github.com/tc39/proposals/blob/master/finished-propo...

2
callumlocke 6 days ago 3 replies      
This is mostly symbolic. The annual ECMAScript 'editions' aren't very significant now except as a talking point.

What matters is the ongoing standardisation process. New JS features are proposed, then graduate through four stages. Once at stage four, they are "done" and guaranteed to be in the next annual ES edition write-up. Engines can confidently implement features as soon as they hit stage 4, which can happen at any time of year.

For example, async functions just missed the ES2016 boat. They reached stage 4 last July [1]. So they're officially part of ES2017 but they've been "done" for almost a year, and landed in Chrome and Node stable quite a while ago.

[1] https://ecmascript-daily.github.io/2016/07/29/move-async-fun...

3
HugoDaniel 6 days ago 5 replies      
I would really love to see an object map function. I know it is easy to implement, but since features seem to be gaining ground through syntax sugar, why not just have obj.map((prop, value) => ...)? :)
4
ihsw2 6 days ago 2 replies      
Notably, with shared memory and atomics, pthreads support is on the horizon.

https://kripken.github.io/emscripten-site/docs/porting/pthre...

Granted it may be limited to consumption via Emscripten, it is nevertheless now within the realm of possibility.

For those who cannot grok the gravity of this -- proper concurrent/parallel execution just got a lot closer for those targeting the browser.

5
flavio81 6 days ago 2 replies      
What I wish ECMAScript had was true support for number types other than the default 64-bit float. I can use 32 and 64 bit integers using "asm.js", but this introduces other complications of its own -- basically, having to program in a much lower-level language.

It would be nice if ECMAScript could give us a middle ground: the ability to use 32/64-bit integers without having to go all the way down to asm.js or wasm.

6
pier25 5 days ago 2 replies      
In the last couple of years we've seen a small number of significant improvements like async/await but mostly small tepid improvements like string padding, array.map(), etc. It's like TC39 are simply polishing JS.

I'd like to see TC39 tackling the big problems of JS like the lack of static type checking. I'm tired of looking at a method and having to figure out if it is expecting a string, or an object.

We had ECMAScript 4 about 10 years ago with plenty of great features, but TC39 killed it. And yeah, it probably made sense, since the browser-vendor landscape was very different back then. Today it would be possible to implement significant changes to the language, much like the WebAssembly initiative.

7
pi-rat 6 days ago 5 replies      
Really hate the naming for JS standards.. ES2017, ES8, ECMA-262. Way to confuse people :/
8
43224gg252 6 days ago 7 replies      
Can anyone recommend a good book or guide for someone who knows pre-ES6 javascript but wants to learn all the latest ES6+ features in depth?
9
baron816 6 days ago 0 replies      
Regardless of what gets included in the spec, I hope people think critically about what to use and what not to use before they jump in. Just because something is shiny and new in JS, it doesn't mean you have to use it or that it's some sort of "best practice."
10
pgl 6 days ago 2 replies      
Here's what's in it: https://github.com/tc39/proposals/blob/master/finished-propo...

And some interesting tweets by Kent C. Dodds: https://twitter.com/kentcdodds/status/880121426824630273

Edit: fixed KCD's name. Edit #2: No, really.

11
drinchev 6 days ago 1 reply      
For anyone wondering about NodeJS's support of ES8:

Everything is supported, except "Shared memory and atomics"

[1] http://node.green

12
speg 6 days ago 1 reply      
Is there a "What's new" section?
13
correctsir 5 days ago 0 replies      
I've been looking at the stage 2 and 3 proposals. I have a difficult time finding use for any of them except for Object spread/rest. The stage 4 template string proposal allowing invalid \u and \x sequences seems like a really bad idea to me that would inadvertently introduce programmer errors. I do hope the ECMAScript standardization folks will raise the barrier to entry for many of these questionable new features that create a maintenance burden for browsers and ES tooling and a cognitive burden on programmers. It was possible to understand 100% of ES5. I can't say the same thing for its successors. I think there should be a freeze on new features until all the browser vendors fully implement ES6 import and export.
14
rpedela 6 days ago 2 replies      
Has there been any progress on supporting 64-bit integers?
15
jadbox 6 days ago 1 reply      
I wish this-binding sugar would get promoted into stage 1.
16
gregjw 6 days ago 1 reply      
I should really learn ES6
17
ascom 6 days ago 1 reply      
Looks like ECMA's site is overloaded. Here's a Wayback Machine link for the lazy: https://web.archive.org/web/20170711055957/https://www.ecma-...
18
emehrkay 6 days ago 2 replies      
I'd like to be able to capture object modifications like Python's magic __getattr__ __setattr__ __delattr__ and calling methods that do not exist on objects. In the meantime I am writing a get, set, delete method on my object and using those instead
19
wilgertvelinga 6 days ago 2 replies      
Really interesting how bad the only JavaScript code used on their own site is: https://www.ecma-international.org/js/loadImg.js
20
espadrine 6 days ago 0 replies      
I made a short sum-up of changes in this specification here: http://espadrine.github.io/New-In-A-Spec/es2017/
21
lukasm 6 days ago 1 reply      
What is up with decorators?
22
komali2 6 days ago 0 replies      
>AWB: Alternatively we could add this to a standard Dict module.

>BT: Assuming we get standard modules?

>AWB: We'll get them.

lol

23
j0e1 6 days ago 1 reply      
> Kindly note that the normative copy is the HTML version;

Am I the only one who finds this ironic..

24
idibidiart 6 days ago 0 replies      
Wait, so async generators and web streams are 2018 or 2016?
25
Swizec 6 days ago 3 replies      
Time to update https://es6cheatsheet.com

What's the feature you're most excited about?

26
bitL 6 days ago 2 replies      
Heh, maybe JS will finally become usable just before WebAssembly takes off, rendering it obsolete :-D
27
cies 6 days ago 2 replies      
Nice 90s style website ECMA!
8
The Facebook Algorithm Mom Problem boffosocko.com
736 points by pmlnr  5 days ago   301 comments top 41
1
ryanbrunner 5 days ago 20 replies      
I find a lot of sites feel like they're overtuning their recommendation engines, to the detriment of using the site. YouTube is particularly bad for this - given the years of history and somewhat regular viewing of the site, I feel like it should have a relatively good idea of what I'm interested in. Instead, the YouTube homepage seems myopically focused on the last 5-10 videos I watched.
2
danso 5 days ago 1 reply      
tl;dr, as I understand it: when family members Like your Facebook content in relatively quick succession, FB apparently interprets it as a signal that it is family-specific content. I didn't see any metrics, but this seems plausible.

I think I'm more of a fan of FB than the average web geek, probably because I used it at its phase of peak innocence (college years) and have since weaned myself off to the point of checking it on a less-than-weekly basis. I also almost never post professional work there, nor "friend" current colleagues. Moreover, I've actively avoided declaring familial relationships (though I have listed a few fake relationships just to screw with the algorithm). But wasn't the feature of making yourself a "brand page" and/or having "subscribers" (which don't count toward the 5,000 friend limit) supposed to mitigate this a bit? I guess I'm so used to keeping Facebook solely for personal content (and using Twitter for public-facing content) that I'm out of touch with the sharing mechanics. That, and anecdotal experience of how baby/wedding pics seems to be the most Liked/Shared content in my friend network.

4
mrleinad 5 days ago 8 replies      
I'd like Facebook to have an option to see all posts, without filtering, just as they're posted. It's not hard, it's a simple UX, but it's just not there.
5
siliconc0w 5 days ago 1 reply      
I feel like these are just shitty models. A good recommendation model would get features like "is_mom" and learn that "is_mom" is a shitty predictor of relevance.

Similarly with Amazon: products should have some sort of 'elasticity' score, where the model learns that recommending inelastic products is a waste of screen real estate. I mean, I doubt the model is giving a high % to most of those recommendations - it's likely more of a business/UX issue, in that they've decided it's worth showing you low-probability recommendations instead of a cleaner page (or something more useful).

YouTube, on the other hand, seems to be precision-tuned to get you to watch easy-to-digest crap. You consume the crap voraciously but are generally left unfulfilled. This is a more difficult problem, where you're rewarding naive views rather than a harder-to-discern 'intrinsic value' metric. As a long-term business play, the model should probably weight more intellectually challenging content, just like fast food restaurants should probably figure out how to sell healthier food, because by peddling crap you're only meeting consumers' immediate needs, not their long-term ones.

6
pmlnr 5 days ago 1 reply      
Yesterday I sent one of my friends a link to an old entry he wrote as a Facebook note - 4.5 years old, from December 2013. There were 70+ likes, 30+ commenters, and 110 comments on it.

He added a new comment yesterday - I only saw it, because I randomly decided to read through the comments.

Those who commented on it should have received a notification - well, in the end, 2 people got something.

This is how you effectively kill conversation - which baffles me, because keeping conversations running = engagement, which is supposed to be one of the end goals.

I get the "need" of a filter bubble, even though I'd simple let people choke on the amount of crap they'd get if follow would actually mean get everything - they may learn not to like things without thinking.

But not sending notifications at all? Why? Why is that good?

7
kromem 5 days ago 5 replies      
Facebook also has a serious problem in that its news feed is a content recommendation engine with only positive reinforcement but no negative reinforcement. So you end up with a ton of false positives even when actively interacting with the content, and their system doesn't even know how wrong it is.

And should you really not like some content, the solution is unfriending the poster, rather than simply matching against that type of content (political, religious, etc).

The fact there isn't a private dislike button (one where no one sees you clicked it other than Facebook) is remarkable at this point. It's either woefully obtuse, or intentional, so that a feed of false positives better hides moderately targeted ads.

8
paulcnichols 5 days ago 2 replies      
Honestly, if FB was just me and my mom I'd probably get more value out of the site. Smaller radius is better IMO.
9
ianhawes 5 days ago 3 replies      
There is a very simple solution for this issue. Create a Facebook Page for yourself as a brand, post links to your articles on that page, then share it from your personal Facebook page.
10
chjohasbrouck 4 days ago 1 reply      
Some proof or data to back up the article's claim would be great. I'm not really buying it.

If moms auto-like every post, then how is that a relevant signal? Everyone has a mom. That would mean every post is getting penalized in the same way (which effectively means no posts are getting penalized).

And if circumventing this was as simple as excluding his mom, wouldn't the effect be even greater if he excluded all non-technical friends and family?

Which pretty much just means you're posting this for the greater public, which presumably a lot of users of Facebook's API already do. Since his intention is for his content to be seen by the greater public, then... go ahead and tell the API that?

It's a great angle for an article, and it's very shareable, but he provides no data (even though he seems like someone who would have all of the data).

11
aeturnum 5 days ago 3 replies      
I'm pretty sure this description is wrong. My impression is Facebook shows your content to a subset of friends and then classifies it based on the likes received. If your mom likes 9/10 posts and your other friends like 3/10 posts, then 60% of your posts /are/ family content. Even if they're about mathematical theories.
12
compiler-guy 5 days ago 1 reply      
For every social media site XX:

Question: "Why does XX do things this way?"

Answer: "Because it increases engagement."

Why does the Facebook algorithm do this? Because it increases clicks. Why does YouTube use autoplay? Because it increases watch time.

For every single social media site.

13
mullingitover 5 days ago 0 replies      
I generally hold facebook in contempt for the forced filtering that they subject me to. Making the 'sort posts chronologically' flag come unstuck is a dirty hack that they should be ashamed of.
14
anotheryou 5 days ago 0 replies      
Any proof of this happening? And does it utilize the user-provided relationship data or just group people?
15
type0 4 days ago 0 replies      
> I'd love to hear from others who've seen a similar effect and love their mothers (or other close loved ones) enough to not cut them out of their Facebook lives.

I think the bigger issue is family members, friends, and relatives who do cut out their non-FB-using loved ones by ignoring all other methods of telecommunication. "Oh, you didn't know we planned a wedding? Too bad you're not on FB!"

16
robbles 4 days ago 0 replies      
I'm not a machine learning expert, but isn't this an easily solved problem?

Similar to TF/IDF, where you mitigate common words by dividing by their overall frequency, you should be able to divide the weight of any particular "like" by the frequency of likes between the two people. That way a genuine expression of interest by an acquaintance is weighted far higher than a relative or close friend that reflexively likes everything you post.
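A toy version of that discounting (field names and numbers are invented; Go only because it's the language of the other sketches in this digest):

  package main

  import "fmt"

  // weight scales a like by how indiscriminately the liker likes this
  // poster, per the comment's TF/IDF analogy: a reflexive fan's like
  // carries little information, an acquaintance's carries a lot.
  func weight(likesGiven, postsSeen int) float64 {
      if postsSeen == 0 {
          return 1.0
      }
      rate := float64(likesGiven) / float64(postsSeen)
      return 1.0 - rate // likes-everything friends contribute ~0
  }

  func main() {
      fmt.Println(weight(95, 100)) // mom: ~0.05
      fmt.Println(weight(3, 100))  // acquaintance: ~0.97
  }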

17
firasd 5 days ago 0 replies      
Even aside from complicated questions like the Newsfeed algorithm, when a friend started hitting Like on nearly every post of mine I appreciated their caring but mentally discounted the meaning of their Like in terms of being a reaction to the content of my post. It's like "Like Inflation". So the algo should probably do the same about indiscriminate likes...
18
harry8 4 days ago 1 reply      
Dump Facebook; it sucks for basically everything. Ring your mom, tell her she's awesome and you love her. Wish I could ring mine...

Post a blog post to a real blog under your control. If you want a colleague to see it, email the link and ask for feedback.

There, solved. It's a good algo too.

19
iplaw 4 days ago 0 replies      
No joke. My mom is the first person to LIKE anything that I post -- and then make an inappropriate comment on it.

I have added all of my family to a family group. I'll see if I can post to friends/publicly and exclude an entire group of contacts.

20
SadWebDeveloper 3 days ago 0 replies      
Offtopic: It always baffles me when people "suggest" better ways to customize their "internet feed", because they don't realize how much information the "system" needs to know about you (or those close to you) in order to make it useful, and when confronted/informed about it, they explicitly deny such permission because it undermines their privacy.
21
_greim_ 5 days ago 1 reply      
> Facebook, despite the fact that they know shes my mom, doesnt take this fact into account in their algorithm.

Wouldn't it also be possible to analyze the content of the post to determine if it's family-related? It seems like with a math or technical post, that should be easy for FB to do.

22
warcher 5 days ago 1 reply      
Unrelated but worse problem: top of feed livelock. If you're below the fold, cause, I dunno, you got shit to do, you are only going to get viewed by heavy scrollers, which overly favors (IMHO, as somebody with shit to do) folks who get on the thread first. Even a one-dimensional "rank by number of uplikes" filter still doesn't calculate your likes/views ratio, which is what you'd actually care about.
23
rainhacker 5 days ago 0 replies      
I feel it's not limited to family. Empirically, I've noticed that the liking of a post factors in how much someone likes the poster more than just the contents of the post.
24
minademian 3 days ago 0 replies      
Had an aha moment reading this post. It makes sense now. This sentence mirrors my own recent experience: "These kinds of things are ones which I would relay to them via phone or in person and not post about publicly."
25
arikrak 4 days ago 0 replies      
I haven't found Facebook to be very good at recommending things. They often don't seem to be able to tell what people are interested in, and they don't really let users control who they're posting to. For example, they should make it easier to just post to people who live in the same city...
26
pdevine 4 days ago 0 replies      
I'm pretty sure they're using a machine learning algorithm, and it's determining the way to handle your post. Can someone who understands the ML algorithms better than I explain how this would interact with the feature weights? I'd be curious as to how we think that would play out.
27
smrtinsert 5 days ago 0 replies      
Beyond a 'mom' problem, this seems like a highly plausible cause for the incredibly silo-ed content on Facebook.
28
erikb 5 days ago 0 replies      
I also have a problem with social media in general, especially with following people instead of institutions/groups. Usually what 99.9% of people like is totally not what I like. So if you base the content I should consume on the assumption that I like what my connections liked, you are going in nearly the opposite direction of what I want.

PS: Maybe some of you have the experience of having an active following. I notice that many social networks like Twitter, FB, and YouTube allow comments. But the content creator/sharer almost never actually reacts to comments. Some may use comments in future content, but some don't react at all. Are these people not even reading the comments? Why are people commenting when it's so obvious that it's just going down a black hole? For instance, on Twitter a share with additional text is nearly the same amount of work as a comment. And it's obvious that you reach more people by share-commenting rather than simply writing your comment underneath the content. So why do people do that? And why does Twitter even have the option to comment?

29
jondubois 4 days ago 0 replies      
It's a more general problem than that. Maybe Facebook should penalise posts written by popular authors, celebrities or recognizable brands to offset the popularity (rich-get-richer) factor and select only for quality.
30
viraptor 4 days ago 0 replies      
I find this interesting: So much talk about the technical solutions both in the post and in the comments here. Yet, it doesn't seem like he asked his mom not to like his tech posts. If that's the goal - why not start there?
31
piker 5 days ago 0 replies      
Interesting post. I clicked thinking someone had coined a clever new "NASCAR Dad" moniker about parents who read and parrot their Facebook echo chamber at Thanksgiving, but was pleasantly surprised.
32
Beltiras 5 days ago 0 replies      
I like the idea of "embargo from group for n days".
33
jrochkind1 5 days ago 0 replies      
Kind of fascinating.
34
husamia 3 days ago 0 replies      
Our interests are different, based on our daily interactions outside of Facebook. How would an algorithm capture that?
35
carapace 5 days ago 1 reply      
I've said it before and I'll say it again, FB users are a kind of digital peasant or serf.

To me it feels like we're seeing the genesis of the divide between Morlocks and Eloi.

36
EGreg 5 days ago 1 reply      
Simple solution: hide post from family :)
37
45h34jh53k4j 5 days ago 1 reply      
Facebook is disgusting.

* Delete your account
* Convince your mother to delete her account

38
AznHisoka 5 days ago 3 replies      
This is not a problem, and I certainly hope Facebook does not fix it. Why? Because it forced the OP to narrow down his audience and show the post only to those who would enjoy it.

That's a much better experience than everyone trying to push everything they publish to you.

39
rickpmg 5 days ago 1 reply      
>.. shame on Facebook for torturing them for the exposure when I was originally targeting maybe 10 other colleagues to begin with.

Seems like the facebook algorithm is actually working for the users by in effect blocking insipid idiots from posting their crap trying to game the system.

You don't 'target' colleagues.. colleagues are people you work with and respect.. not try to spam.

40
mamon 5 days ago 1 reply      
You are obviously giving Facebook too much information to act upon. Some suggestions:

1. Don't use Facebook :)

2. If you do use it, don't tag your family members as such, or your close friends as FB "close friends"

3. Never tag anyone in photos

4. Never set "relationship status"

5. Never add info about where you live, work, etc.

6. Having separate public profile for your company/work related stuff is probably a good idea.

7. Never post anything you wouldn't want to see in CNN news :)

41
andreasgonewild 5 days ago 2 replies      
It's simple really, just stop participating in that evil experiment. From the outside, you look like morons; talking to opposite sides of an algorithm while interpreting what comes out as reality. It's been proven over and over again that consuming that crap makes everyone feel bad and hate each other. There are plenty of alternatives, but this one is mine: https://github.com/andreas-gone-wild/snackis
9
Cloudflare's fight with a patent troll could alter the game techcrunch.com
721 points by Stanleyc23  6 days ago   274 comments top 32
1
jgrahamc 6 days ago 3 replies      
More detail on what we are doing from three blog posts:

Standing Up to a Dangerous New Breed of Patent Troll
https://blog.cloudflare.com/standing-up-to-a-dangerous-new-b...

Project Jengo
https://blog.cloudflare.com/project-jengo/

Patent Troll Battle Update: Doubling Down on Project Jengo
https://blog.cloudflare.com/patent-troll-battle-update-doubl...

2
JumpCrisscross 6 days ago 4 replies      
I've used Latham & Watkins. Just made a call to let a partner there know what I think about his firm's alumna and how it colors my opinion of him and his firm.

Encourage everyone to check with your firm's General Counsel about this. If you use Latham, or Kirkland or Weil, encourage your GC to reach out and make your views heard. It's despicable that these lawyers are harassing their firms' former and potential clients.

3
notyourday 6 days ago 3 replies      
It is all about finding a correct pressure point.

A long time ago, certain Philadelphia-area law firms decided to represent vegan protesters who had created a major mess in a couple of high-end restaurants.

A certain flamboyant owner of one the restaurants targeted decided to have a good time applying his version of asymmetric warfare. The next partners from those law firm showed up to wine and dine their clients in the establishment, the establishment(s) politely refused the service to the utter horror of the lawyers.

Needless to say, the foie gras won...

[Edit: spelling]

4
tracker1 6 days ago 3 replies      
I think that this is absolutely brilliant. I've been against the patenting of generalistic ideas and basic processes for a very long time. Nothing in software should really be patentable: unless there is a concrete implementation of an invention, it's not an invention, it's a set of instructions.

Let software work under trade secrets, but not patents. Anyone can implement something they think through. It's usually a clear example of a need. That said, I think the types of patent trolling law firms such as this deserve every bit of backlash against them that they get.

5
avodonosov 6 days ago 6 replies      
It was late summer night when I noticed that article on HN. I immediately noticed it's organized like a novel - this popular lame style which often annoys me lately:

 Matthew Prince knew what was coming. The CEO of Cloudflare, an internet security company and content delivery network in San Francisco, was behind his desk when the emails began to trickle in ...
Was he really behind his desk?

Hesitated a little before posting - am I trying to self-assert by deriding others? But this "novel" article style is some new fashion / cliche which might be interesting to discuss. Let's see what others think.

6
siliconc0w 6 days ago 2 replies      
I'm not a fan of the argument that if Blackbird weren't an NPE it'd be okay, because Cloudflare could then aim its 150-strong patent portfolio cannon back at them. It's basically saying incumbents like Cloudflare don't really want to fix the system; they want to keep the untenable 'cold war' status quo which protects them but burdens new entrants.
7
oskarth 6 days ago 5 replies      
> So-called non-practicing entities or holders of a patent for a process or product that they dont plan to develop often use them to sue companies that would sooner settle rather than pay what can add up to $1 million by the time a case reaches a courtroom.

Why on earth aren't non-practicing entity patent lawsuits outlawed? Seems like a no-brainer, and I can't imagine these firms being big enough to have any serious lobbying power.

8
mabbo 6 days ago 2 replies      
> "[Is Blackbird] doing anything that is illegal or unethical?" continues Cheng. "For the most part, it's unethical. But it's probably not illegal."

If it's not illegal, more work needs to be done to make it illegal. Inventors always have avenues, more so today than ever before.

9
FussyZeus 6 days ago 3 replies      
I've never heard a good argument against this, so I'll say it here: require that the plaintiff in these cases show demonstrable, actual, and quantifiable loss from the activity of the defendant. It seems like such a no-brainer that a business suing for damage to its business prospects after someone stole its idea would have to actually show how it was damaged. Even allowing very flimsy evidence would do a lot to dissuade most trolls, because as every article points out, they don't make anything. And if they don't make or sell a product, then patent or not, they haven't lost anything or been damaged in any way.
10
mgleason_3 6 days ago 2 replies      
We need to get rid of software patents. Patents were created to encourage innovation. Software patents simply reward the first person to patent what is almost always an obvious next step. That's not innovation.
11
corobo 5 days ago 0 replies      
I'm hoping their fight actually leads to a defeat rather than a submission. I have faith that Cloudflare will see this through but I also had faith that Carolla would too.

https://www.eff.org/deeplinks/2014/08/good-bad-and-ugly-adam...

12
ovi256 6 days ago 4 replies      
I've noticed a TechCrunch comment that makes this fight about software patents and states that forbidding them would be a good solution. I think that's a very wrong view to take. The software patent fight is worth fighting, but do not conflate the two issues. Abuse by patent trolls or non-practicing entities can happen even without software patents.

The law patch that shuts down patent trolls will have no effect on software patents, and vice-versa.

13
tragomaskhalos 5 days ago 0 replies      
This reminds me of an altercation in the street that my neighbour reported overhearing some years ago:

Aggressive Woman: You need to watch your step, my husband is a criminal lawyer

Woman she was trying to intimidate: (deadpans) Aren't they all ?

14
shmerl 6 days ago 2 replies      
Someone should figure out a way to put these extortionists in prison for running a protection racket.
15
anonjuly12 5 days ago 0 replies      
> It's for this reason that Prince sees Cloudflare's primary mission as figuring out how to increase Blackbird's costs. Explains Prince, "We thought, if it's asymmetric, because it's so much cheaper for Blackbird to sue than for a company to defend itself, how can we make it more symmetric? And every minute that they spend having to defend themselves somewhere else is a minute they aren't suing us or someone else."

They should take it a step further and apply the Thiel strategy of finding people with grievances against the founders of the patent troll and support individual lawsuits against them.

16
drtillberg 5 days ago 0 replies      
This is a dysfunction in the patent and legal processes that cannot be fixed by even more dysfunctional tactics deployed against the NPE. The rules against champerty (buying a cause of action) have been relaxed considerably, to the extent of being a dead letter in many jurisdictions, and the litigation-financing industry seems to have the better sound bite.

At least half of the problem is the "American Rule" of rarely shifting legal fees, which, if you dig a bit, you will find is of recent vintage. Historically, for example in Massachusetts, there actually is a law for shifting legal fees as costs as a matter of course; the catch is that the fee is very low (even at the time it was enacted), about $2.50 per case, which partly reflects inflation and partly antagonism toward legal fees.

I wonder whether a compromise solution would be to require a deposit for costs as a percentage of the demanded recovery, say 2.5% of $34mm (about $850k), which post-suit you could figure out how to divvy up. That would make the demand more meaningful, and provide a tangible incentive for the plaintiff to think a little harder about pricing low-probability, lottery-ticket-type litigation.

17
kelukelugames 6 days ago 1 reply      
I'm in tech but not in the valley. How accurate is HBO's representation of patent trolls?
18
unityByFreedom 6 days ago 0 replies      
> Blackbird is a new, especially dangerous breed of patent troll... Blackbird combines both a law firm and intellectual property rights holder into a single entity. In doing so, they remove legal fees from their cost structure and can bring lawsuits of potentially dubious merit without having to bear any meaningful cost

That's not new. It's exactly what Intellectual Ventures was (or is?) doing.

19
avodonosov 6 days ago 0 replies      
I've read the patent. But what part of Cloudflare's services does it claim to cover?

Also, the patent applies the same way to almost any proxy server (ICAP and similar https://en.wikipedia.org/wiki/Internet_Content_Adaptation_Pr...)

20
bluejekyll 5 days ago 0 replies      
Something needs to give on this stuff. It's probably going to be hard to get a significant change done, such as getting rid of software patents (following from the fact that math itself cannot be patented).

I've wondered if one way to chip away at them would be to make patents non-transferable. This would preserve the intent, to protect the inventor's R&D costs, but not allow the patents to be exploited by trolls. This would have the effect of devaluing patents themselves, but it's not clear that patents were ever intended to carry direct value; rather, they exist to grant temporary monopolies for the inventor to earn back the investment.

21
fhrow4484 6 days ago 1 reply      
What is the state of "anti-patent-troll" laws in different states? I know, for instance, that Washington state has had a law like this in effect since July 2015 [1][2]. What is it like in other states, specifically California?

[1] http://www.atg.wa.gov/news/news-releases/attorney-general-s-...

[2] http://app.leg.wa.gov/RCW/default.aspx?cite=19.350&full=true

22
redm 6 days ago 0 replies      
It would be great if the "game" were really altered, but I've heard that statement and that hope many times over the last 10 years. While there has been some progress, patent trolling continues. Here's hoping...
23
bluesign 5 days ago 0 replies      
Tbh I don't think there is a practical solution for patent trolls.

Patents are basically assets, and they are transferable.

Making them non-transferable is not a solution at all; law firms can simply represent patent owners.

The system needs differing validity for patents, set after an evaluation and challengeable in the courts.

Putting all patents in the same basket is plain stupid.

24
SaturateDK 6 days ago 0 replies      
This is great, I guess I'm going "Prior art searching" right away.
25
arikrak 6 days ago 0 replies      
Businesses usually settle rather than fight patent trolls, but I wonder if fighting is worth it if it can deter others from suing them in the future? I guess it depends somewhat on the outcome of the case...
26
draw_down 6 days ago 0 replies      
Unfortunately, I think this is written in a way that makes it hard to understand what exactly Cloudflare is doing against the troll. They're crowdsourcing prior art and petitioning the USPTO?
27
avodonosov 6 days ago 0 replies      
Can the Decorator design pattern be considered prior art?
28
y0ssar1an 5 days ago 0 replies      
Go Cloudflare Go!
29
danschumann 6 days ago 0 replies      
Can I create 5 more HN accounts just to +1 this some more?
30
dsfyu404ed 6 days ago 1 reply      
31
subhrm 6 days ago 1 reply      
Long live patents !
32
ivanbakel 6 days ago 3 replies      
I don't see anything game-changing about their approach. Fighting instead of settling should definitely be praised, but the only differences between this legal challenge and any of the previous ones are the result of recent changes in the law or the judiciary, which are beyond Cloudflare's control. Nothing suggests that patent-trolling itself as a "game" is going to shift or go away after this, and until that is made to happen, it's going to be as lucrative as ever.
10
Seeing AI for iOS microsoft.com
821 points by kmather73  3 days ago   125 comments top 29
1
nharada 2 days ago 6 replies      
Yes, YES, this is what I'm talking about, Microsoft. I'm surprised how muted the reaction from HN is here.

On the technical side, this is a perfect example of how AI can be used effectively, and is a (very obvious in hindsight) application of the cutting edge in scene understanding and HCI. There are quite a few recent and advanced techniques rolled into one product here, and although I haven't tried it out yet it seems fairly polished from the video. A whitepaper of how this works from the technical side would be fascinating, because even though I'm familiar with the relevant papers it's a long jump between the papers and this product.

On the social side, I think this is a commendable effort, and a fairly low hanging fruit to demonstrate the positive power of new ML techniques. On a site where every other AI article is full of comments (somewhat rightfully) despairing about the negative social aspects of AI and the associated large scale data collection, we should be excited about a tool that exists to improve lives most of us don't even realize need improving. This is the kind of thing I hope can inspire more developers to take the reins on beneficial AI applications.

2
mattchamb 2 days ago 0 replies      
Just as an extra piece of information: the person presenting the videos on that page is Saqib Shaikh, who is a developer at Microsoft. Earlier on HN, there was a really interesting video of him giving a talk about how he uses the accessibility features in Visual Studio to help him code. https://www.youtube.com/watch?v=iWXebEeGwn0
3
dcw303 3 days ago 3 replies      
US App Store only at this stage, it seems. Pity, I'd like to try this.

edit: I'm wrong. It's in other stores as well, but not in the Australian App Store, which is the one that I tried.

4
booleandilemma 3 days ago 1 reply      
It correctly identified my refrigerator and bookshelf. Color me impressed.

Things like this make articles like this one seem silly: https://www.madebymany.com/stories/what-if-ai-is-a-failed-dr...

5
ve55 2 days ago 4 replies      
If I have a friend that is visually impaired and is using this, I have to consent to their phone recording me and analyzing me and sending all of that data off to who knows where.

And this is just from my perspective - someone who is not visually impaired. For the person who is, every single thing they look at and read is going to be recorded and used.

It's an unfortunate situation for people to be put in, and I'm sure everyone will choose using improvements like this over not using them. As much as I would love to see a focus on privacy in projects like this, I don't imagine it happening any time soon, given how powerful the data involved is.

I imagine a future where AI assistants like this are commonplace, and there is no escaping them.

6
engulfme 2 days ago 1 reply      
If I remember correctly, this came out of a OneWeek project, Microsoft's company-wide weeklong hackathon. Very cool to see a final published version of this!
7
ian0 2 days ago 3 replies      
Wow. You can imagine a near future where this, a small wearable camera, and an earphone could really make a big difference in a person's daily life.

Screw Siri, that's a real AI assistant :)

8
scarface74 2 days ago 1 reply      
The text recognition from documents is amazingly primitive. It doesn't use any type of spell checking to make a best guess at what a word is. It's straight text recognition.
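
For what it's worth, the best-guess layer being described can be fairly simple. Here's a sketch in Python of snapping OCR tokens to a known vocabulary; the vocabulary and the OCR output are invented for illustration, and the app's real pipeline is unknown to me:

    import difflib

    # Tiny made-up vocabulary; a real corrector would use a frequency-weighted
    # dictionary or a language model.
    VOCAB = ["recognition", "document", "primitive", "coffee", "receipt"]

    def best_guess(token, cutoff=0.6):
        # Snap the raw OCR token to the closest known word above the cutoff,
        # or keep it unchanged if nothing is close enough.
        matches = difflib.get_close_matches(token.lower(), VOCAB, n=1, cutoff=cutoff)
        return matches[0] if matches else token

    raw_ocr = ["docunent", "rec0gnition", "coffae"]
    print([best_guess(t) for t in raw_ocr])  # ['document', 'recognition', 'coffee']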

On the other hand, the "short text" feature works amazingly well at reading text it sees from the camera. It's fast and accurate when reading text even at some non-optimal angles.

How do you get it to try to recognize items that the camera sees?

Edit:

Oops. I guess it would help if I swiped right....

9
GeekyBear 2 days ago 0 replies      
One of my friends from college has limited vision and the feature to read text aloud will be a game changing convenience.

He has a magnifier in his home, but it isn't portable and is limited to working only with documents and images that can lie flat.

edit: After speaking with my friend, he already uses a popular app called KNFB Reader that works very well on short text and documents, but costs $100. On the plus side, it works on Android or iOS.

10
EA 3 days ago 0 replies      
I am recommending this for my elderly family members with poor eyesight. This could greatly increase their quality of life.
11
Finbarr 2 days ago 1 reply      
I'm pretty blown away by this. I took a picture of myself in the mirror with the scene description feature, and it said "probably a man standing in front of a mirror posing for the camera". I took a picture of the room in front of me and it said "probably a living room". Think I'll be experimenting with this for days.
12
zyanyatech 2 days ago 0 replies      
The use of AI and ML for application purposes is starting to get to a point where it can really be used for problem solving. We built a demo app making similar use of this technology: https://zyanya.tech/hashtagger-android

I am going to give Seeing AI a try as well, and I totally understand why a research department would want to have a demo available to the public as an application.

13
rb666 2 days ago 0 replies      
Well, it needs some work, but it's pretty cool nonetheless; I can see where it was going with this :)

http://imgur.com/jxsWrEq

14
mechaman 2 days ago 0 replies      
Does it do hot dog or not?
15
rasengan0 2 days ago 0 replies      
I love the low-vision pitch, as there is a dearth of low-vision resources, particularly for those hit with age-related macular degeneration. I wonder if there are any censored items in the backend that may limit functionality -- Seeing AI won't be seeing any sex toys...
16
hackpert 2 days ago 0 replies      
This is amazing. I don't know if a lot of people here realize this, but it is really hard to pull off this level of integration of different computer vision components (believe me, I've tried). Microsoft has really outdone themselves this time.
17
NicoJuicy 1 day ago 0 replies      
Microsoft's app 'Office Lens' is the only app I use for capturing documents on Android. I see part of that tech is also used in this app for taking pictures.

Love it

18
vinitagr 2 days ago 0 replies      
This is quite an amazing technology. With new products like HoloLens and this, I think Microsoft is finally coming around.
19
leipert 2 days ago 0 replies      
Awesome technology; and today's SMBC [1] seems to be related.

[1]: https://www.smbc-comics.com/comic/the-real-me

20
coworkerblues 2 days ago 0 replies      
Does this mean RIP OrCam (the other startup, from the Mobileye creator, which basically does this as a full hardware/software solution)?

http://www.orcam.com/

21
armandososa 1 day ago 0 replies      
Remember this? http://i.dailymail.co.uk/i/pix/2013/05/13/article-2323625-19... when it was a really big deal that Microsoft agreed to help Apple and release some software for their platform?
22
ClassyJacket 3 days ago 2 replies      
Not available in the Australian store. Ah, I forgot my entire country doesn't exist.
23
parish 2 days ago 0 replies      
I'm impressed. Good job MS
24
cilea 2 days ago 0 replies      
I think this is also cool for learning English. An English learner who'd like to express what s/he sees can check their attempt against the AI's response.
25
MrJagil 2 days ago 0 replies      
The videos explaining it are really nice https://youtu.be/dqE1EWsEyx4
26
arized 2 days ago 0 replies      
Looks amazing, I really need to dive into machine learning more this year... Waiting impatiently for UK release to give it a try!
27
jtbayly 2 days ago 0 replies      
Unimpressed. Took a picture of a ceiling fan and it said, "probably a chair sitting in front of a mirror." Took a pic of a dresser, and it said "a bedroom with a wooden floor." Tried the ceiling fan again and got an equally absurd answer.

Deleted app.

28
chenster 3 days ago 1 reply      
29
LeoNatan25 3 days ago 3 replies      
Whenever I take a picture with the camera button on the left, it shows a loading indicator and the app crashes. Not a great first impression. Coming from a company the size of Microsoft, such trivial crashes should have been caught.
11
Math education: Its not about numbers, its about learning how to think nwaonline.com
565 points by CarolineW  6 days ago   330 comments top 53
1
d3ckard 6 days ago 18 replies      
Maybe I'm wrong, but I have always believed that if you want people to be good at math, it's their first years of education which are important, not the last ones. In other words, the push for STEM should be present in kindergartens and elementary schools. By the time people get to high school it is too late.

I never had any problems with math until I went to university, so I was merely a passive observer of the everyday struggle of some people. I honestly believe that foundations are the key. Either you're taught to think critically, see patterns, and focus on the train of thought, or you focus on numbers and memorization.

The latter obviously fails at some point, in many cases sufficiently late to make it really hard to go back and relearn everything.

Math is extremely hierarchical and I believe schools do not do enough to make sure students are on the same page. If we want to fix teaching math, I would start there, instead of working on motivation and general attitude. Those are consequences, not the reasons.

2
gusmd 6 days ago 4 replies      
I studied Mechanical Engineering, and it was my experience that several professors are only interested in having the students learn how to solve problems (which in the end boil down to math and applying equations), instead of actually learning the interesting and important concepts behind them.

My wife went to school for Architecture, where she learned "basic" structural mechanics and some Calculus, but still cannot explain to me in simple words what an integral or a derivative is. Not her fault at all: her Calculus professor had them calculate polynomial derivatives for 3 months without ever making them understand the concept of "rate of change", or what "infinitesimal" means.

For me that's a big failure of our current "science" education system: too much focus on stupid application of equations and formulas, and too little focus on actually comprehending the abstract concepts behind them.

3
Koshkin 6 days ago 9 replies      
Learning "how to think" is just one part of it. The other part - the one that makes it much more difficult for many, if not most, people to learn math - especially the more abstract branches of it - is learning to think about math specifically. The reason is that mathematics creates its own universe of concepts and ideas, and this universe, all these notions are so different from what we have to deal with every day that learning them takes a lot of training, years of intensive experience dealing with mathematical structures of one kind or another, so it should come as no surprise that people have difficulty learning math.
4
spodek 6 days ago 1 reply      
> it's about learning how to think

It's about learning a set of thinking skills, not how to think. Many people who know no math can think and function very well in their domains and many people who know lots of math function and think poorly outside of math.

5
J_Sherz 6 days ago 2 replies      
My problem with Math education was always that speed was an enormous factor in testing. You can methodically go through each question aiming for 100% accuracy and not finish the test paper, while other students comfortably breeze through all the questions at 80% accuracy but ultimately score higher on the test (finishing 60% of the paper perfectly scores 60, while finishing all of it at 80% accuracy scores 80). This kind of penalizing a lack of speed can lead younger kids who are maximizing for grades to move away from Math for the wrong reasons.

Source: I'm slow but good at Math and ended up dropping it as soon as I could because it would not get me the grades I needed to enter a top tier university.

6
BrandiATMuhkuh 6 days ago 0 replies      
Disclaimer: I'm CTO of https://www.amy.ac (an online math tutor).

From our experience, most people struggle with math because they forgot or missed a certain math skill they should have learned a year or two before, but most teaching methods only tell the students to practice more of the same. Watching good tutors, we could see that a tutor observes a student and then teaches them the missing skill before actually going to the problem the student wanted help with. That seems to be a useful, working approach.

7
Nihilartikel 6 days ago 0 replies      
This is something I've been pondering quite a bit recently. It is my firm belief that mathematical skill and general numeracy are actually a small subset of abstract thought. Am I wrong in thinking that school math is the closest to deliberate training in abstract reasoning that one would find in public education?

Abstract reasoning, intuition, and creativity, to me, represent the underpinnings of software engineering and, really, most engineering and science, but they are taught more by osmosis alongside the unintuitive, often boring mechanics of subjects. The difference between a good engineer of any sort and one that 'just knows the formulas' is the ability to fluently manipulate and reason with symbols and effects that don't necessarily have any relation or simple metaphor in the tangible world. And taking it further, creativity and intuition beyond dull calculation are the crucial art behind choosing the right hypothesis to investigate. Essentially, learning to 'see' in this non-spatial space of relations.

When I'm doing system engineering work, I don't think in terms of X Gb/s throughput and Y FLOPS... (until later at least) but in my mind I have a model of the information and data structures clicking and buzzing, like watching the gears of a clock, and I sort of visualize working with this, playing with changes. It wouldn't surprise me if most knowledge workers have similar mental models of their own. But what I have observed is that people who have trouble with mathematics or coding aren't primed at all to 'see' abstractions in their mind's eye. This skill takes years to cultivate, but it seems that its cultivation is left entirely to chance by orthodox STEM education.

I was just thinking that this sort of thing could be approached a lot more deliberately and could yield very broad positive results in STEM teaching.

8
jeffdavis 6 days ago 2 replies      
My theory is that math anxiety is really anxiety about a cold assessment.

In other subjects you can rationalize to yourself in various ways: the teacher doesn't like me, or I got unlucky and they only asked the history questions I didn't know.

But with math, no rationalization is possible. There's no hope the teacher will go easy on you, or be happy that you got the gist of the solution.

Failure in math is often (but not always) a sign that education has failed in general. Teachers can be lazy or too nice and give good grades in art or history or reading to any student. But when the standardized math test comes around, there's no hiding from it (teacher or student).

9
monic_binomial 6 days ago 1 reply      
I was a math teacher for 10 years. I had to give it up when I came to realize that "how to think" is about 90% biological and strongly correlated to what we measure with IQ tests.

This may be grave heresy in the Temple of Tabula Rasa where most education policy is concocted, but nonetheless every teacher I ever knew was ultimately forced to choose between teaching a real math class with a ~30% pass rate or a watered-down math Kabuki show with a pass rate just high enough to keep their admins' complaints to a low grumble.

In the end we teachers would all go about loudly professing to each other that "It's not about numbers, it's about learning how to think" in a desperate bid to quash our private suspicions that there's actually precious little that can be done to teach "how to think."

10
mindcrime 6 days ago 1 reply      
This part really resonates with me as well:

"You read all the time, right? We constantly have to read. If you're not someone who picks up a book, you have to read menus, you've got to read traffic signs, you've got to read instructions, you've got to read subtitles -- all sorts of things. But how often do you have to do any sort of complicated problem-solving with mathematics? The average person, not too often."

From this, two deductions:

Having trouble remembering the quadratic equation formula doesn't mean you're not a "numbers-person."

To remember your math skills, use them more often.

What I remember from high-school and college was this: I'd take a given math class (say, Algebra I) and learn it reasonably well. Then, summer vacation hits. Next term, taking Algebra II, all the Algebra I stuff is forgotten because, well, who uses Algebra I over their summer vacation? Now, Algebra II is harder than it should be because it builds on the previous stuff. Lather, rinse, repeat.

This is one reason I love Khan Academy so much. You can just pop over there anytime and spend a few minutes going back over stuff at any level, from basic freaking fractions, up through Calculus and Linear Algebra.

11
ouid 6 days ago 0 replies      
When people talk about the failure of mathematics education, we often talk about it in terms of the students inability to "think mathematically".

It's impossible to tell if students are capable of thinking mathematically, however, because I have not met a single (non-mathlete) student who could give me the mathematical definition of... anything. How can we evaluate student's mathematical reasoning ability if they have zero mathematical objects about which to reason?

12
jtreagan 6 days ago 0 replies      
You say "it's not about numbers, it's about learning how to think," but the truth is it's about both. Without the number skills and the memorization of all those number facts and formulas, a person is handicapped both in learning other subjects and skills and in succeeding and progressing in their work and daily life. The two concepts -- number skills and thinking skills -- go hand in hand. Thinking skills can't grow if the number skills aren't there as a foundation. That's what's wrong with the Common Core and all the other fads that are driving math education these days. They push thinking skills and shove a calculator at you for the number skills -- and you stall, crash and burn.

The article brings out a good point about math anxiety. I have had to deal with it a lot in my years of teaching math. Sometimes my classroom has seemed so full of math anxiety that you could cut it with a butter knife. I read one comment that advocated starting our children out even earlier on learning these skills, but the truth is the root of math anxiety in most people lies in being forced to try to learn it at too early an age. Most children's brains are not cognitively developed enough in the early grades to learn the concepts we are pushing at them, so when a child finds failure at being asked to do something he/she is not capable of doing, anxiety results and eventually becomes habit, a part of their basic self-concept and personality. What we should instead do is delay starting school until age 8 or even 9. Some people don't develop cognitively until 12. Sweden recently raised their mandatory school age to 7 because of what the research has been telling us about this.

13
g9yuayon 6 days ago 2 replies      
Is this a US thing? Why would people still think that math is about numbers? Math is about patterns, which got drilled into us by our teachers in primary school. I really don't understand how the US education system can fuck up so badly on a fundamental subject like math.
14
quantum_state 6 days ago 0 replies      
Wow ... this blows me away ... in a few short hours, so many people chimed in sharing thoughts ... It is great ... Would like to share mine as well.

Fundamentally, math to me is like a language. It's meant to help us describe things a bit more quantitatively and to reason a bit more abstractly and consistently ... if it can be made mechanical and reduce the burden on one's brain, it would be ideal. Since it's like a language, as long as one knows the basics, such as some basic things of set theory, functions, etc., one should be ready to explore the world with it. Math is often perceived as a set of concepts, theorems, rules, etc. But if one gets behind the scenes to learn some of the original stories behind these things, it becomes very natural. At some point, one would have one's mind liberated and start to use math or create math like we usually do with day-to-day languages such as English.
15
yellowapple 6 days ago 0 replies      
I wish school curricula would embrace that "learning how to think" bit.

With the sole exception of Geometry, every single math class I took in middle and high school was an absolutely miserable time of rote memorization and soul-crushing "do this same problem 100 times" busy work. Geometry, meanwhile, taught me about proofs and theorems v. postulates and actually using logical reasoning. Unsurprisingly, Geometry was the one and only math class I ever actually enjoyed.

16
taneq 6 days ago 6 replies      
As my old boss once said, "never confuse mathematics with mere arithmetic."
17
brendan_a_b 6 days ago 1 reply      
My mind was blown when I came across this Github repo that demonstrates mathematical notation by showing comparisons with JavaScript code https://github.com/Jam3/math-as-code

I think I often struggled or was intimidated by the syntax of math. I started web development after years of thinking I just wasn't a math person. When looking at this repo, I was surprised at how much more easily and naturally I was able to grasp concepts in code compared to being introduced to them in math classes.
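
The repo's point translates to any language. As a made-up illustration (in Python rather than the repo's JavaScript), capital-sigma notation, which looks forbidding on paper, is just a loop:

    # The sigma expression "sum of i^2 for i = 1..n" written as a loop.
    def sum_of_squares(n):
        total = 0
        for i in range(1, n + 1):  # i runs from 1 to n, as under the sigma
            total += i * i
        return total

    assert sum_of_squares(4) == 1 + 4 + 9 + 16 == 30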

18
gxs 6 days ago 0 replies      
Late to the party but wanted to share my experience.

I was an Applied Math major at Berkeley. Why?

When I was in 7th grade, I had an old school Russian math teacher. She was tough, not one for niceties, but extremely fair.

One day, being the typical smart ass that I was, I said, why the hell do I need to do this, I have 0 interest in Geometry.

Her answer completely changed my outlook and eventually was the reason why I took extensive math in HS and majored in math in college.

Instead of dismissing me, instead of just telling me to shut up and sit down, she explained things to me very calmly.

She said that doing math, beyond improving your math skills, improves your reasoning ability. It's a workout for your brain and helps develop your logical thinking. Studying it now, at a young age, will help it become part of your intuition, so that in the future you can reason about complex topics that require more than a moment's thought.

She really reached me on that day, took me a while to realize it. Wish I could have said thank you.

Wherever you are Ms. Zavesova, thank you.

Other benefits: doing hard math really builds up your tolerance for working through hard problems. Reasoning through long problems, trying and failing, really requires a certain kind of stamina. My major definitely gave me this. I am a product manager now, and while I don't code, I have an extremely easy time working with engineers to get stuff done.

19
alistproducer2 6 days ago 1 reply      
I can't agree more. Math is about intuition for what the symbols are doing; in the case of functions, intuition about how the symbols are transforming the input. I've always thought I was "bad at math." It wasn't until my late 20's, when I took it upon myself to get better at calculus and used "Calculus Success in 20 Minutes a Day" [0], that I finally realized why I was "bad" at it: I never understood what I was doing.

That series of books really put intuition at the forefront. I began to realize that the crazy symbols and formulas were stand-ins for living, breathing dynamic systems: number transformers. Each formula and symbol represented an action. Once I understood math as a way to encode useful number transformations, it all clicked. Those rules and functions were encoded after a person came up with something they wanted to do. The formula or function is merely a compact way of describing this dynamic system to other people.

The irony was that I always thought math was boring. In retrospect it was because it was taught as if it had no purpose other than to provide useless mental exercise. Once I started realizing that derivatives are used all around me to do cool shit, I was inspired to learn how they worked, because I wanted to use them to do cool shit too. I went through several years of math courses and none of them even attempted to tell me that math was just a way to represent cool real-world things. It took a $10 used book from Amazon to do that. Ain't life grand?

[0]:https://www.amazon.com/Calculus-Success-20-Minutes-Day/dp/15...

20
dbcurtis 6 days ago 0 replies      
Permit me to make a tangentially related comment of interest to parents reading this thread: this camp for 11-14 y/o kids, http://www.mathpath.org/ , is absolutely excellent. My kid loved it so much they attended three years. Great faculty... John Conway, Francis Su, many others. If you have a math-loving kid of middle-school age, I encourage you to check it out.
21
simias 6 days ago 1 reply      
I completely agree. I think we start all wrong, too; the first memories I have of math at school are of learning how to compute an addition, a subtraction, and later a multiplication and division. Then we had to memorize the multiplication tables by heart.

That can be useful of course (especially back then when we didn't carry computers in our pockets at all times) but I think it sends some pupils on a bad path with regards to mathematics.

Maths shouldn't be mainly about memorizing tables and "dumbly" applying algorithms without understanding what they mean. That's how you end up with kids who can answer "what's 36 divided by 4" but not "you have 36 candies that you want to split equally with 3 other people, how many candies do you end up with?"

And that goes beyond pure maths too. In physics if you pay attention to the relationship between the various units you probably won't have to memorize many equations, it'll just make sense. You'll also be much more likely to spot errors. "Wait, I want to compute a speed and I'm multiplying amperes and moles, does that really make sense?".
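
That kind of unit sanity check can even be mechanized. A toy sketch in Python (my own illustration): track each quantity's dimensions as exponents and make arithmetic combine them, so that amperes times moles can never masquerade as a speed:

    class Quantity:
        def __init__(self, value, **dims):
            self.value = value
            # Keep only nonzero exponents so dimension comparisons are canonical.
            self.dims = {u: e for u, e in dims.items() if e != 0}

        def __mul__(self, other):
            merged = dict(self.dims)
            for u, e in other.dims.items():
                merged[u] = merged.get(u, 0) + e
            return Quantity(self.value * other.value, **merged)

        def __truediv__(self, other):
            inverse = {u: -e for u, e in other.dims.items()}
            return self * Quantity(1.0 / other.value, **inverse)

    speed = Quantity(10.0, m=1) / Quantity(2.0, s=1)
    assert speed.dims == {"m": 1, "s": -1}   # meters per second: a plausible speed

    weird = Quantity(3.0, A=1) * Quantity(2.0, mol=1)
    assert weird.dims == {"A": 1, "mol": 1}  # amperes*moles: clearly not a speed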

22
kchr 15 hours ago 0 replies      
Dammit, I glanced at the domain and expected a statement from the gangsta rap group... School is cool, kids!
23
Tommyixi 5 days ago 0 replies      
For me, math has always been a source of unplugging. I'd sit at my kitchen table, put in some headphones, and just get lost in endless math problems.

Interestingly, now as a master's student in a statistics graduate program, I've learned that I don't like "doing" math but get enjoyment from teaching it. I really like it when students challenge me when I'm at the chalkboard, and I'll do anything for those "ah-ha!" moments. The best is hearing students say at the end of the semester, "I thought this class was going to suck, but I worked hard and am proud of the work I did." I'm hoping that on some small scale I'm shaping their views on math, or at least giving them the confidence to say, "I don't get this, but I'm not afraid to learn it."

24
jrells 6 days ago 0 replies      
I often worry that mathematics education is strongly supported on the grounds that it is about "learning how to think", yet the way it is executed rarely prioritizes this goal. What would it look like if math curriculum were redesigned to be super focused on "learning how to think"? Different, for sure.
25
lordnacho 6 days ago 4 replies      
I think a major issue with math problems in school is that they're obvious.

By that I don't mean it's easy. But when you're grappling with some problem, whatever it is, eg find some angle or integrate some function, if you don't find the answer, someone will show you, and you'll think "OMG why didn't I think of that?"

And you won't have any excuses for why you didn't think of it. Because math is a bunch of little logical steps. If you'd followed them, you'd have gotten everything right.

Which is a good reason to feel stupid.

But don't worry. There are things that mathematicians, real ones with PhDs, will discover in the future. By taking a number of little logical steps that haven't been taken yet. They could have gone that way towards the next big theorem, but they haven't done it yet for whatever reason (eg there's a LOT of connections to be made).

26
alexandercrohde 6 days ago 0 replies      
Enough "I" statements already. It's ironic how many people seem to think their personal experience is somehow relevant on a post about "critical thinking."

The ONLY sane way to answer these questions:

- Does math increase critical thinking?
- Does critical thinking lead to more career earnings/happiness/etc.?
- When does math education increase critical thinking most?
- What kind of math education increases critical thinking?

...is with a large-scale research study that defines an objective way to measure critical thinking and controls for relevant variables.

Meaning you don't get an anecdotal opinion on the matter from your study-of-1, no-control-group, no-objective-measure personal experience.

27
dahart 6 days ago 4 replies      
I wonder if a large part of our math problem is our legacy fixation on Greek letters. Would math be more approachable to English speakers if we just used English?

I like to think about math as language, rather than thought or logic or formulas or numbers. The Greek letters are part of that language, and part of why learning math is learning a completely foreign language, even though so many people who say they can't do math practice mathematical concepts without Greek letters. All of the math we do on computers, symbolic and numeric, analytic and approximations, can be done using a Turing machine that starts with only symbols and no built-in concept of a number.

28
tnone 5 days ago 0 replies      
Is there any other subject that is given as much leeway for its abysmal pedagogical failures?

"Economics, it's not about learning how money and markets work, it's about learning how to think."

"Art, it's not about learning about aesthetics, style, or technique, it's about learning how to think."

"French, it's not about learning how to speak another language, it's..."

Math has a problem, and it's because the math curriculum is a pile of dull, abstract cart-before-the-horse idiocy posing as discipline.

29
WheelsAtLarge 6 days ago 0 replies      
True, math is ultimately about how to think, but students need to memorize and grasp the basics, in addition to making sure that new material is truly understood. That's where things fall apart: we are bombarded with new concepts before we really know how to use what we learned. How many people use imaginary numbers in their daily life? Need I say more?

We don't communicate in math jargon every day, so it's ultimately a losing battle. We learn new concepts but we lose them, since we don't use them. Additionally, a large number of students get lost and frustrated and finally give up. All of which makes math a poor method for teaching thinking, since only a few students attain the ultimate benefits.

Yes, math is important and needs to be taught, but if we want to use it as a way to learn how to think, there are better methods. Programming is a great one: students can learn it in one semester, use it for life, and expand on what they already know.

Also, exploring literature and discussing what the author tries to convey is a great way to learn how to think. All those hours in English class trying to interpret what the author meant were more about exploring your mind and your peers' thoughts than about what the author actually meant. The author lost his sphere of influence once the book was published. It's up to the readers of every generation to interpret the work. So literature is a very strong way to teach students how to think.

30
listentojohan 6 days ago 0 replies      
The true eye-opener for me was reading Number - The Language of Science by Tobias Dantzig. The philosophical angle, treating math as an abstraction layer over what is observed or deduced, was a nice touch.
31
yequalsx 6 days ago 3 replies      
I teach math at a community college. I've tried many times to teach my courses in such a way that understanding the concepts and thinking were the goals. Perhaps I'm jaded by the failures I encountered but students do not want to think. They want to see a set of problem types that need to be mimicked.

In our lowest-level course we teach beginning algebra. Almost everyone has an intuition that 2x + 3x should be 5x. It's very difficult to get them to understand that there is a rule for this that makes sense, and that it is the application of this rule that allows you to conclude that 2x + 3x is 5x. Furthermore, and here is the difficulty, that same rule is why 3x + ax is (3+a)x.
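
For readers outside the field, the rule being alluded to is distributivity; spelled out (my own annotation):

    % Distributivity: ax + bx = (a + b)x for any coefficients a and b.
    2x + 3x = (2 + 3)x = 5x
    % The same rule, applied with a symbolic coefficient:
    3x + ax = (3 + a)x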

I believe that for most people mathematics is just brainwashing via familiarity. Most people end up understanding math by collecting knowledge about problem types, tricks, and becoming situationally aware. Very few people actually discover a problem type on their own. Very few people are willing, or have been trained to be willing, to really contemplate a new problem type or situation.

Math education in its practice has nothing to do with learning how to think. At least in my experience and as I understand what it means to learn how to think.

32
katdev 5 days ago 0 replies      
You know what helps kids (and adults) learn math? The abacus/soroban. Yes, automaticity with math facts/basic math is important but what's really important is being able to represent the base-10 system mentally.

The abacus is an amazing tool that's been successful in creating math savants - here's the world champion adding 10 four-digit numbers in 1.7 seconds using mental math https://www.theguardian.com/science/alexs-adventures-in-numb...

Students are actually taught how to think of numbers in groups of tens, fives, ones in Common Core math -- however, most are not given the abacus as a tool/manipulative.

33
0xFFC 6 days ago 0 replies      
Exactly. As an ordinary hacker I was always afraid of math. But after taking Mathematical Analysis I realized how wonderful math is. These days I am in love with pure mathematics. It literally corrected my brain's pipeline in so many ways, and it continues to do so further and further.

I have thought about changing my major to pure mathematics too.

34
lucidguppy 6 days ago 0 replies      
Why aren't people taught how to think explicitly? The Greeks and the Romans thought it was a good idea.
35
jmml97 6 days ago 1 reply      
I'm studying math right now and I have that problem. We just have theorems and propositions vomited at us in class instead of being made to think. There's not a single subject dedicated to learning the process of thinking in maths. So I think we're learning the wrong (the hard) way.
36
Mz 6 days ago 0 replies      
Well, I actually liked math and took kind of a lot of it in K-12. I was in my 30s before I knew there were actual applications for some of the things I memorized my way through without really understanding.

When I homeschooled my sons, I knew this approach would not work. My oldest has trouble with numbers, but he got a solid education in the concepts. He has a better grasp of things like GIGO than most folks. We also pursued a stats track (at their choice) rather than an algebra-geometry-trig track.

Stats is much more relevant to life for most people most of the time and there are very user-friendly books on the topic, like "How to lie with statistics." If you are struggling with this stuff, I highly recommend pursuing something like that.

37
keymone 6 days ago 1 reply      
I always found munging numbers and memorizing formulas discouraging. I think physics classes teach kids more math than math classes do, and in more interesting ways (or at least they have the potential to).
38
JoshTriplett 6 days ago 0 replies      
One of the most critical skills I see differentiating people around me (co-workers and otherwise) who succeed and those who don't is an analytical, pattern-recognizing and pattern-applying mindset. Math itself is quite useful, but I really like the way this particular article highlights the mental blocks and misconceptions that seem to particularly crop up around mathematics; those same blocks and misconceptions tend to get applied to other topics as well, just less overtly.
39
andyjohnson0 5 days ago 0 replies      
A couple of years ago I did the Introduction to Mathematical Thinking course on Coursera [1]. Even though I found it hard, I enjoyed it and learned a lot, and I feel I got some insight into mathematical thought processes. Recommended.

[1] https://www.coursera.org/learn/mathematical-thinking

40
djohnston 5 days ago 0 replies      
Anecdotally, I was a pretty average math student growing up and a pretty good math student in university. One of the reasons I studied math in college was to improve what was objectively my weakest area intellectually, but I found that once we were working with much more abstract models and theories, I was more competent.
41
cosinetau 6 days ago 0 replies      
As someone with a degree in applied mathematics, I feel the problem with learning mathematics is more often than not a problem or fault of the instructor.

Many instructors approach the subject with a very broad understanding of it, and it's very difficult (more difficult than the math itself) to shake that understanding and break it down into understandable chunks of knowledge or reasoning.

42
archeantus 6 days ago 0 replies      
If we want to teach people how to think, I propose that math isn't the best way to do it. I can't tell you how many times I complained about how senseless math was. The real-world application is very limited, for the most part.

Contrast that to if I had learned programming instead. Programming definitely teaches you how to think, but it also has immense value and definite real-world application.

43
k__ 6 days ago 0 replies      
I always had the feeling I failed to grasp math because I never got good at mid level things.

It took me reeeally long to grasp things like linear algebra and calculus and I never was any good at it.

It was a struggle to get my CS degree.

Funny thing is, I'm really good at the low level elementary school stuff so most people think I'm good at math...

44
EGreg 6 days ago 0 replies      
There just needs to be faster feedback than once per test.

https://opinionator.blogs.nytimes.com/2011/04/21/teaching-ma...

45
bojo 5 days ago 0 replies      
When I first saw it, I thought the sign in the mentioned tweet might have been there because the deli was next to a mathematics department and the professors/students would stand around and hold up the line while discussing math.

Overactive imagination I guess.

46
GarvielLoken 5 days ago 0 replies      
tl;dr: A couple of numbers nerds are sad and offended that math is not as recognized as reading and literature, where there are great works that speak of the human condition and illustrate life.

They also include the mandatory "everything is really math!": "LeGrand notes that dancing and music are mathematics in motion. So ... dance, play an instrument."

Just because I can describe history through the perspective of capitalism or Marx's theories does not make history the same thing as either of those.

47
CoolNickname 6 days ago 0 replies      
School is not about learning but about learning how to think. The way it is now, it's more about showing off than about anything actually useful. They don't reward effort, they reward talent.
48
humbleMouse 6 days ago 0 replies      
On a somewhat related tangent, I think about programming the same way.

I always tell people programming and syntax are easy - it's learning to think in a systems and design mindset that is the hard part.

49
crb002 6 days ago 2 replies      
Programming needs to be taught alongside Algebra I, especially in a language like Haskell or Scheme, where algebraic refactoring of type signatures looks like normal algebra notation.
50
calebm 6 days ago 0 replies      
I agree, but have a small caveat: math does typically strongly involve numbers, so in a way, it is about numbers, though it's definitely not about just memorizing things or blindly applying formulas.

It just bugs me sometimes when people make hyperbolic statements like that. I remember coworkers saying things like "software consulting isn't about programming." Yes it is! The primary skill involved is programming, even if programming is not the ONLY required skill.

51
dorianm 4 days ago 0 replies      
Maths problems are cool too, like counting apples and oranges :) (Or gold and rubies)
52
pklausler 6 days ago 0 replies      
How do you "learn to think" without numbers?

Depressing.

53
bitwize 6 days ago 0 replies      
Only really a problem in the USA. In civilized countries, there's no particular aversion to math or to disciplined thinking in general.
12
Maryam Mirzakhani, first woman and Iranian to win Fields Medal, has died thewire.in
608 points by urahara  2 days ago   105 comments top 25
1
mrkgnao 2 days ago 1 reply      
I hope to study (non-inter-universal) Teichmueller theory some day, as a side quest on my own journey (which is vaguely directed in some sense toward number theory at present). It's the best way I can think of to honor her memory: to learn to appreciate the ideas that she devoted her life to understanding better.

Here is a picture of her drawing on one of her vast sheets of paper.

https://news.artnet.com/app/news-upload/2014/08/Maryam-Mirza...

2
Mz 2 days ago 0 replies      
If anyone needs any background info on her work, there are a number of sources gathered here:

http://micheleincalifornia.blogspot.com/2014/08/links-on-mar...

It includes this, a quote that is the best laymen's explanation of her work that I could find:

Mirzakhani became fascinated with hyperbolic surfaces: doughnut-shaped surfaces with two or more holes that have a non-standard geometry which, roughly speaking, gives each point on the surface a saddle shape. Hyperbolic doughnuts can't be constructed in ordinary space; they exist in an abstract sense, in which distances and angles are measured according to a particular set of equations. An imaginary creature living on a surface governed by such equations would experience each point as a saddle point.

It turns out that each many-holed doughnut can be given a hyperbolic structure in infinitely many ways: with fat doughnut rings, narrow ones, or any combination of the two. In the century and a half since such hyperbolic surfaces were discovered, they have become some of the central objects in geometry, with connections to many branches of mathematics and even physics.

3
MichaelBurge 2 days ago 1 reply      
It looks like her research is here:

http://math.stanford.edu/~mmirzakh/Research.html

And the paper in question that got the Fields Medal might be one of these:

http://math.stanford.edu/~mmirzakh/Papers/VA.pdf

https://arxiv.org/pdf/1206.5574.pdf

It looks like her work involved counting the number of equivalent paths between two points on surfaces that have been punctured.

4
jacobkg 2 days ago 0 replies      
I remember how excited I was when she received the award as a role model for women and immigrants, and a reminder of how great minds can come from places the US considers enemies. My wife (33) spent most of last year being treated for Breast Cancer (now in remission). This news makes me profoundly sad.
5
oefrha 2 days ago 2 replies      
I took her hyperbolic geometry class last year (my senior year at Stanford) and she looked perfectly healthy. She even gave me some advice on my grad school offers. What a shock...
6
snake117 2 days ago 1 reply      
This actually shook me awake when I read it. It was just a few months ago that my aunts were sharing articles (through Telegram) with us from Iran about Maryam and her accomplishments. She was brilliant and really did serve as an inspiration to many Iranians all over. May she rest in peace.
7
hi41 5 hours ago 0 replies      
Deeply saddened to hear this news. I have immense respect for people like Maryam and Vera Rubin. They could have used their incredible genius to earn riches; instead they devoted their lives to furthering math and science. When it was declared that Maryam won the Fields Medal, I read the summary. I could barely understand even one word of the summary, let alone the actual work. People like Maryam amaze me. I am just an ordinary mortal working in a tech company. If God had asked me, I would have told him to take my life and let Maryam live so we could have progress in math. Life is so unfair. I can't imagine how much math's progress has suffered with her passing. Deepest condolences to her family. RIP.
8
akalin 2 days ago 0 replies      
Terry Tao wrote a eulogy for her on his blog: https://terrytao.wordpress.com/2017/07/15/maryam-mirzakhani/
9
lr4444lr 2 days ago 0 replies      
Terrible to lose such a notable person at age 40 due to breast cancer, diagnosed when she was in her mid-30s no less.
10
mncharity 2 days ago 0 replies      
The 2014 article https://www.quantamagazine.org/maryam-mirzakhani-is-first-wo... , with the IMU video https://www.youtube.com/watch?v=qNuh4uta8oQ .

2008 interview http://www.claymath.org/library/annual_report/ar2008/08Inter... .

"I would like to thank all those who have sent me kind emails. I very much appreciate them and I am very sorry if I could not reply."

11
theCricketer 2 days ago 0 replies      
An interview with Maryam from 2008 when she was a Research Fellow at the Clay Mathematical Institute -> this is inspiring: http://www.claymath.org/library/annual_report/ar2008/08Inter...
12
urahara 2 days ago 0 replies      
She was absolutely brilliant in so many ways, what a loss.
13
throwaway5752 2 days ago 0 replies      
What a tragic loss for the mathematics community and her husband and young daughter. Deepest condolences to her family, friends, colleagues, and students.
14
bhnmmhmd 2 days ago 0 replies      
This was terrible news today. It made me genuinely sad. I hope she rests in peace, with all of humanity honoring her.
15
adyavanapalli 2 days ago 0 replies      
Oh no :/ Such a terrible loss...
16
rurban 2 days ago 1 reply      
I cried a bit. Life is unfair
17
tomrod 2 days ago 0 replies      
What a sad day. Too soon.
18
afshinmeh 2 days ago 0 replies      
RIP, very sorry to hear this news.
19
Jabanga 2 days ago 0 replies      
How tragic.
20
Lotus123 2 days ago 0 replies      
Sad news
21
dbranes 2 days ago 2 replies      
It's immensely disappointing that this comment section has an abundance of comments on Iranian socio-political issues while being completely devoid of any discussion of dynamics on moduli spaces.
22
brian_herman 2 days ago 2 replies      
Can we get a black bar for this?
23
thr31238893 2 days ago 0 replies      
This is such sad news. RIP.
24
0xFFC 2 days ago 6 replies      
Iranian here. One point I really want to mention is the following: despite our theocratic regime, Iran is not Saudi Arabia or anything like that. Girls and boys get almost exactly the same education, and they have great opportunities to become successful scholars in any branch of science they want (yes, as in the USA, your parents' demographic will largely decide which university you end up in. But this is an issue in most parts of the world, and we cannot blame Iran, the USA, or any specific country for it; we should blame the system).

She got her bachelor's degree from one of our top universities (Sharif University) and went to Harvard from there, and there are plenty of science-eager students like her waiting for an opportunity to become the next Maryam Mirzakhani.

Come to Tehran. I know you hear a lot of bad things about Tehran and Iran from right-wing political media outlets. But believe me, you are going to see the Paris of the Middle East and a whole new generation of liberal people who believe in personal freedom and freedom of speech.

It really bothered me when I saw her talking about how people in the USA think of Iran as a desert where all women wear an abaya. We are not like that; we are a new generation, and although our regime tries its best to suppress us, we are the future of Iran.

25
bhnmmhmd 2 days ago 4 replies      
How is it that brilliant, genius people who truly help our world become a better place die young, while people who bring nothing but destruction and war live for decades?
13
Redis 4.0 groups.google.com
582 points by fs111  3 days ago   159 comments top 14
1
StevePerkins 3 days ago 5 replies      
"9) Active memory defragmentation. Redis is able to defragment the memory while online..."

I'm so amazed that this is a thing.

2
Dowwie 3 days ago 1 reply      
@antirez: Congrats! Are you going to modularize disque now that v4 is ready?
3
dvirsky 3 days ago 0 replies      
If you want to try out some of the modules already available: https://redis.io/modules
4
frou_dh 3 days ago 1 reply      
Maybe with the new modules support there will emerge some explicit way to do robust worker/job queues, so you don't have to remember your BRPOPLPUSH/LREM dances (or whatever it is) just so.
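
For reference, the dance in question is the "reliable queue" pattern. A sketch of a worker in Python with redis-py (method names per redis-py 3.x; re-queueing items from a crashed worker's processing list is left out):

    import redis

    r = redis.Redis()

    def work(job):
        print("processing", job)

    while True:
        # Atomically pop a job from the queue and stash it in a backup list,
        # blocking until one arrives; if the worker dies mid-job, the item
        # survives in "processing" for a reaper to re-queue.
        job = r.brpoplpush("jobs", "processing", timeout=0)
        work(job)
        # Acknowledge: remove exactly one matching entry from the backup list.
        r.lrem("processing", 1, job)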
5
brango 3 days ago 1 reply      
Redis Cluster connecting to nodes via DNS instead of IP would vastly simplify deployment on K8s.
6
infocollector 3 days ago 2 replies      
Is there a Redis PPA for Ubuntu 16.04 that is supported by the redis team?
7
Hates_ 3 days ago 2 replies      
Anyone know when we might see this on AWS (Elasticache)?
8
jokoon 3 days ago 2 replies      
It seems it even supports nearest search for lon/lat points by default... Quite nice, since most RDBMSes don't even support it by default.

Although I'm curious to know what algorithm it uses for nearest search; it isn't discussed in the docs.

I don't really understand what Redis should not be used for; I guess it's not for complex queries? Conventional RDBMSes really seem to belong to the hard disk drive age. So the difficulty resides in having well-designed data schemas.
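
On the algorithm question: the GEO commands are implemented on top of an ordinary sorted set whose scores are 52-bit interleaved geohashes of the coordinates, and radius queries scan the geohash cells covering the query area, then filter by exact distance. A quick sketch with redis-py (geoadd/georadius as in redis-py 3.x; signatures vary across client versions):

    import redis

    r = redis.Redis()

    # Members land in a sorted set keyed by geohash-derived scores.
    r.geoadd("cafes", 13.361389, 38.115556, "palermo")
    r.geoadd("cafes", 15.087269, 37.502669, "catania")

    # Everything within 200 km of the query point, nearest first.
    print(r.georadius("cafes", 15.0, 37.0, 200, unit="km", sort="ASC"))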

9
indeyets 3 days ago 1 reply      
LFU policy sounds really interesting!
10
anirudhgarg 3 days ago 1 reply      
Any news on when this will be on Azure Redis Cache ?

https://azure.microsoft.com/en-us/services/cache/

11
RUG3Y 3 days ago 2 replies      
12
numbsafari 3 days ago 15 replies      
13
fapjacks 3 days ago 0 replies      
I've said it before and I'll say it again: I am so smitten with antirez (and redis)! One of my favorite projects for sure.
14
HankB99 2 days ago 3 replies      
Interesting coincidence (or maybe not...): Redis was discussed on the FLOSS podcast I listened to earlier today, and now I have an inkling of what Redis is. My first exposure to Redis was pondering where it came from after I tried to run GitLab on my puny file server (J1900, 4GB RAM, and spinning rust). It was spectacularly non-performant, with most page loads timing out. I suppose it was because Redis had insufficient RAM to operate. Redis may be scalable toward large, busy systems but seems less so in the other direction. I thought it would be cool to have a real Git server, but this one was not it.

During the podcast the Redis guy mentioned that 4.0 was on the verge of being released.

14
Reverse-engineering the Starbucks ordering API tendigi.com
655 points by nickplee  5 days ago   152 comments top 27
1
dsacco 5 days ago 10 replies      
Solid writeup. From someone who does/did a lot of this professionally:

1. Android typically is easier for this kind of work (you don't even need a rooted/jailbroken device, and it's all Java/smali),

2. That said, instead of installing an entire framework like Xposed that hooks the process to bypass certificate pinning, you can usually just decompile the APK and nop out all the function calls in the smali related to checking if the certificate is correct, then recompile/resign it for your device (again, easier on Android than iOS),

3. Request signing is increasingly implemented on APIs with any sort of business value, but you can almost always bypass it within an hour by searching through the application for functions related to things like "HMAC", figuring out exactly which request inputs are put into the algorithm in which order, and seeing where/how the secret key is stored (or loaded, as it were),

4. There is no true way to protect an API used by a mobile app; you can only make it more or less difficult to abuse. The best you can do is a frequently rotated secret key stored in shared libraries, with weird parameters attached to the signing algorithm. To make up for this, savvy companies typically reduce the cover time required (i.e. change the secret key very frequently by updating the app weekly or biweekly) or use a secret key with several parts generated from components in .so files, which are significantly more tedious to reverse. (A generic sketch of the signing pattern from point 3 follows below.)
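To make point 3 concrete, request signing generally reduces to something like the following sketch (standard-library Python; the header names, key, and input ordering here are hypothetical, not any particular app's actual scheme):

  import hashlib
  import hmac
  import time

  SECRET_KEY = b'recovered-from-the-app'  # hypothetical; found by reversing

  def sign_request(method, path, body):
      # Which inputs are signed, and in what order, is exactly what
      # you reverse out of the app, per point 3 above.
      ts = str(int(time.time()))
      message = '\n'.join([method, path, ts, body]).encode()
      sig = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
      return {'X-Timestamp': ts, 'X-Signature': sig}  # hypothetical headers

  print(sign_request('POST', '/v1/orders', '{"drink":"latte"}'))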

2
joombaga 4 days ago 7 replies      
I did this with the Papa John's webapp a while back (which was waaaay simpler btw). They limited duplicate toppings to (I think) 3 of the same, but "duplicate_item" was just a numerical property on the (e.g.) "bacon" object. Turns out you could just add multiple "bacon" members to the toppings array to exceed the limit, and they didn't charge for duplicates, so I ordered a pizza with like 50 bacons.

It definitely didn't have 50x worth of bacon, but it did have more than 3x, maybe 5x-6x. The receipt was hilariously long though.
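In other words, the client enforced the per-topping counter but the server never deduplicated the array itself. A payload along these lines (field names invented for illustration) would sail through:

  import json

  # Hypothetical order shape: the validated knob was a per-topping
  # count, but repeated members of the array were never checked.
  order = {
      'pizza': 'large',
      'toppings': [{'name': 'bacon'}] * 50,  # 50 bacon entries
  }
  print(json.dumps(order)[:80] + '...')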

3
rkunnamp 4 days ago 2 replies      
The Domino's ordering app in India had a terrible flow a while back. Once products were added to the cart and you proceeded to checkout, the flow was as follows:

1. First, a payment collection flow is initiated from the browser (asking for credit card details, PIN, etc.)

2. The payment confirmation comes back to the browser

3. The browser then places the order (the pizza information) to another API endpoint, marking the payment part as 'Paid'

The thing is, one could add as many pizzas to the cart in a different tab while the original tab proceeded to payment. The end result: you pay only for the pizzas that were initially in the cart, but you can get any number of pizzas. For literally Rs 100, one could order thousands of rupees' worth of pizza.

I discovered it accidentally and did report it to them. They neither acknowledged it nor sent me a free pizza :(

They later fixed this by not allowing the cart to load in a different tab. But there is a high chance there is another hack even now. Since I had vowed not to eat junk food anymore, there was not much incentive for me to spend any more time on it.
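A sketch of the race as described, with entirely hypothetical endpoints and field names (the real flow surely differed):

  import requests  # third-party HTTP client

  s = requests.Session()

  # "Tab 1": build a small cart and pay for it.
  s.post('https://pizza.example/cart', json={'pizzas': 1})
  pay = s.post('https://pizza.example/pay', json={'amount': 100})

  # "Tab 2": inflate the same session's cart while payment completes.
  s.post('https://pizza.example/cart', json={'pizzas': 30})

  # The order call trusts the browser's claim that payment succeeded
  # and reads the *current* cart, not the cart that was paid for.
  s.post('https://pizza.example/order',
         json={'payment_ref': pay.json()['ref'], 'status': 'Paid'})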

4
rconti 4 days ago 6 replies      
Excellent writeup.

I have to take issue with the "Starbucks app is great" line, though. I think I've had more problems with it (on iOS) than any other app. It's the only app that (for a period of many months if not a year) was regularly unable to find my location. Even if I opened up Maps or some other location enabled app and found my location before launching Starbucks, it would just bomb out.

Overall the app seems to have tons of 'issues'. It's been better the past few months though. And it beats the hell out of standing in a 10 minute line. I honestly wouldn't stop at Starbucks anymore if it wasn't for the app.

5
zitterbewegung 5 days ago 3 replies      
If an open API existed, yes, there would be more integrations. Of course you would have to hire engineers to perform upkeep. Eventually, if the ordering API isn't profitable, you get a bunch of sunk costs and have to reassign people. It's not just "make this open" and POOF. Also, your access could be revoked for unofficially using the API, and/or they could just change it at any time.
6
heywire 4 days ago 0 replies      
I've always wondered about the legality of doing things like this. Have there been cases where someone was taken to court (in the US) for reversing an API from a mobile app or website? Assuming no malicious action or intent, of course. Could the CFAA be used in this case, even if the intent was just to understand the API for personal use?

With so many IoT devices out there relying on 3rd party web services that may or may not be around a year from now, I expect that the right to inspect and understand these APIs will become more and more important. Not to mention wanting the ability to build interactions between devices where the manufacturer may not have interest (IFTTT, etc).

7
spike021 4 days ago 1 reply      
Maybe a bit off-topic, but APIs always make me wonder a bit when they can be reverse-engineered or... for lack of a better word, misused.

I know of one website (site A) that sells items for sports and uses an API of a sports website (site B) to provide current statistics and other information.

Thing is, that sports website's API is now deprecated for public use, there's no way to request a token, and from what I can tell it's only to be used for commercial purposes/paid for by companies.

But, I can easily find the API token being used on site A, dig through the private/deprecated docs for the API of site B, and use any of their endpoints and data for pretty much whatever I want.

At least, this was the case roughly 4-6 months ago and I haven't looked into it since; perhaps they've changed it since then.

But I wonder how this works. Wouldn't it be a misuse of their API and something they wouldn't want allowed? Usually sports statistics APIs are fairly expensive, and the fact that some random person like me can get access easily for free seems unfair to site B, especially when they don't want normal people using their API anymore.

8
pilif 4 days ago 2 replies      
I wonder why they went to such great lengths to prevent unauthorized clients (which is fundamentally impossible anyway; all you can do is make it harder for attackers). What would be so bad about third-party apps ordering coffee?

Generally, it's a good idea to be protective, but between cert pinning, the encrypted device fingerprint, and the time-based signature, this adds a lot of points of possible failure to an app you want to be as available as possible to as many people as possible.

What information does this API have access to that is so precious as to warrant all of this?

9
bpicolo 5 days ago 4 replies      
Opening up an API like this is ripe for abuse, so taking care makes sense. Bad actors translate directly to lost money.

A real method of securing APIs would be a godsend, but in current tech it's just not possible. This is the one place where mediocre security-by-obscurity is your only choice =(

10
coworkerblues 4 days ago 3 replies      
I want to write a similar write-up about a company that basically does everything over HTTP, with its own half-baked AES key hardcoded in the app for sending credit card info, and whose SMS confirmation check is weak and can be bypassed.

The problem is that their site TOS forbids reverse engineering, and I am afraid their lawyers will go after me instead of fixing the security issues (even if I just contact them). Any tips for me?

11
leesalminen 5 days ago 2 replies      
If there were an IoT button in my kitchen that could place my usual morning order, well, I'm not sure. I may or may not shout out in joy.
12
supergeek133 4 days ago 0 replies      
The way to solve this is just for companies to make their API public. You'll almost never be able to secure it from scrapers.

We experienced this at Honeywell. When I first started here, we were blocking users who scraped the app instead of giving them public access and teaching them how to use it correctly.

13
nailer 4 days ago 1 reply      
From: https://github.com/tendigi/starbucks

> Once you've obtained your client_id, client_secret

I'd like to use this module. Question for author: what's the fastest way to get a CLIENT_ID and CLIENT_SECRET?

14
chx 5 days ago 0 replies      
Xposed is the reason I am never getting anything but an unlockable Android phone.
15
shortstuffsushi 4 days ago 0 replies      
This is super interesting, especially the part about trying to reverse engineer their "security" measure. I did something like this a few months back for Paylocity, a time logging system that has a terrible web interface. After trying to talk with their sales people about them potentially offering an API, I was told "no API, just this mobile app." Turns out the mobile app is just an Ionic app with all of the resources (all, including tests and even test environment logins and stuff) baked in. Super easy to grab their API out of that (literally called /mobileapi/), but then the trouble was figuring out how they generated their security token header, which was also a little dance of timestamps and faked device info.

The best part was when I contacted them afterwards and warned them about the extra pieces of info they had baked in: their response was basically "yeah, we're aware that there's more in there than there should be, but it's not a priority." Oh well, they just have all of my personal information.

16
masthead 4 days ago 0 replies      
This was already done before, in 2016

https://www.ryanpickren.com/starbucks-button

17
IshKebab 4 days ago 0 replies      
Probably would have been easier to decompile the Android app than the iOS one. Even if they use ProGuard, the code is much easier to read.
18
King-Aaron 4 days ago 0 replies      
Good article, though one main take-home for me is some killer new Spotify playlists to run at work :D
19
Lxr 4 days ago 1 reply      
What's the motivation for Starbucks to make it this difficult to reverse?
20
catshirt 4 days ago 1 reply      
eyyyyy tendigi. hi Jeff! looks like you're doing some cool stuff.

- your old downstairs guy

21
jorblumesea 4 days ago 0 replies      
Interesting. What's with the hardcoded New Relic ID? The writeup didn't mention it; I assume that's analytics/monitoring related? Does it need to be set?
22
turdnagel 5 days ago 4 replies      
Is there a good guide out there on reverse engineering mobile apps for iOS & Android?
23
amelius 4 days ago 0 replies      
Can somebody explain what the use/fun of this is, if Starbucks can push an update that invalidates the approach anytime they want?
24
wpovell 4 days ago 0 replies      
Is there a good place to learn how to do this sort of reverse engineering?
25
kyle-rb 4 days ago 0 replies      
HTCPCP implementation when?
26
dabockster 4 days ago 0 replies      
>issue tracking turned off

Well, now how am I supposed to tell him that Tully's is better?

27
githubcomyottu 4 days ago 0 replies      
Starbucks is nice for dessert, I wish they sold decent coffee though.
15
Google is releasing 20M bacteria-infected mosquitoes in Fresno techcrunch.com
541 points by chriskanan  3 days ago   209 comments top 45
1
jimrandomh 3 days ago 9 replies      
This is called the sterile insect technique, and it is a well-established practice for getting rid of mosquito populations that could threaten humans. It is very safe, both to humans (male mosquitoes don't bite) and ecologically (species other than mosquitoes aren't affected at all).

It sounds like Google is working on improvements to the process. This is important work, because mosquitos are a major cause of disease, especially in Africa, and we haven't been able to fully solve the problem with existing technology.

2
WaxProlix 3 days ago 8 replies      
I recall hearing when I was younger that mosquitoes were an outlier in the natural world. With most species, the balance of any food web would be pretty thoroughly disrupted by a major culling. As I heard it, this isn't the case for mosquitoes - if you could press a button and kill them all tomorrow, most ecosystems would be largely unimpacted.

Am I just making this up/misremembering it?

Edit: found a few sources.

Pro-mosquitocide:

http://www.nature.com/news/2010/100721/full/466432a.html

http://news.nationalgeographic.com/2016/02/160207-mosquitoes...

Anti-mosquitocide:

http://io9.gizmodo.com/what-if-every-mosquito-on-earth-went-...

3
Keyframe 2 days ago 0 replies      
Unlike the other questions here, I'm interested in the logistics behind this. How do you produce 20M mosquitos, and where do you hold them? How do you transport them, and how do you release them? How do you 'store' them, and when releasing, are most harmed? Are they 'sprayed', or do you 'open a box and they go by themselves'? How do you decide where to release them? Is it all at once (1M per week), or is there a pattern? Is it related to wind... so many questions!!

I want a documentary: "How It's Made: Mosquitocide". I'm willing to make one if someone can provide access to the info and logistics.

4
polskibus 3 days ago 7 replies      
Google, while you're at it, please find a way to eradicate ticks. They are getting more and more irritating and dangerous in Northern Europe!
5
sjcsjc 2 days ago 1 reply      
"Verily, the life sciences arm of Googles parent company Alphabet, has hatched a plan to release a ..."

My immediate reaction on reading that sentence was to wonder why they'd written it in some kind of Shakespearean English.

My next reaction was to feel stupid.

6
sillysaurus3 3 days ago 2 replies      
So what's the plan to get rid of them? Verily's male mosquitos were infected with the Wolbachia bacteria, which is harmless to humans but when they mate with and infect their female counterparts, it makes their eggs unable to produce offspring.

Thank goodness. We can't eliminate mosquitoes fast enough.

Wildlife will probably find other food sources, so bring on the weapons of mosquito destruction.

7
teddyg1 3 days ago 2 replies      
Can someone with knowledge of this particular experiment explain how they've overcome the regulations that have stopped Oxitec / Intrexon with their Aedes aegypti solution? The key regulatory factors cited against Oxitec, especially in their Florida Keys trials in the past year, were centered around controlling for the release of only males (which do not bite humans), thus avoiding transmission of any kind from the genetically modified varieties, or bacterially modified varieties in this case.

Oxitec has worked for years to filter their mosquitoes so only ~0.2% of the released mosquitoes are female[1]. They then had to demonstrate that and more in many trials before being allowed to release their mosquitoes in the wild in Panama and Florida.

Otherwise, it's great that Google can overstep the other factors that would stop this solution like NIMBYism and working with county / municipal boards. These solutions are great.

[1]:http://www.sciencemag.org/news/2016/11/florida-voters-weigh-...

8
yosito 3 days ago 1 reply      
It's interesting that Google is doing this rather than some government organization. What's Google's motivation? Is it purely altruistic, a PR move, an experiment, or does it have some direct benefit to them?
9
davesque 3 days ago 5 replies      
I'm aware that this is a known technique and thought has been given to whether or not it will impact the food chain, etc. But I do wonder this: has anyone considered what the effect will be of removing this constant source of stimulation for our immune systems?
10
RobLach 2 days ago 0 replies      
Just want to point out that a megacorp breeding and releasing a sterilization disease is pretty sci-fi. Also a mutation away from a Children of Men style dystopia.
11
Tepix 11 hours ago 0 replies      
Similar project in Germany:

http://www.spiegel.de/wissenschaft/natur/tigermuecken-in-deu...

However they use gamma rays to sterilize the mosquitoes instead of bacteria.

12
sxates 3 days ago 1 reply      
"You don't understand. I didn't kill just one mosquito, or a hundred, or a thousand... I killed them all... all mosquito... everywhere."
13
amorphid 3 days ago 0 replies      
Reminds me of when UC Riverside released some stingless wasps to prey on a whitefly infestation in Southern California. This was in the early 1990s.

I think this paper is relevant, but I only scanned it:

http://ucce.ucdavis.edu/files/repositoryfiles/ca5106p5-67707...

14
franga2000 7 hours ago 0 replies      
I just love how people simply refuse to use the Alphabet name and keep calling it Google.
15
azakai 2 days ago 0 replies      
Why is "Google" in the title? The only connection between Google and this company is that they share a parent company, Alphabet.
16
Lagged2Death 3 days ago 1 reply      
What kind of planning and permitting process does a project like this require?

Or would it be legal for me to just go and release a cloud of mosquitoes myself?

17
Raphael 3 days ago 1 reply      
What an unfortunate headline.
18
dzink 3 days ago 0 replies      
From what is explained so far, this process doesn't kill mosquitoes. It just makes sure that some of the females (which reproduce 5 times in a life of 2 weeks as an adult) get fertilized with unproductive eggs. http://www.denguevirusnet.com/life-cycle-of-aedes-aegypti.ht... The eggs of Aedes aegypti can be spread anywhere, and the fertile ones hatch whenever their area gets wet in the next year or so.

Does anyone know what % population reduction this process results in? The males likely die after 2 weeks, and that just wipes out the reproductive chances of the females in that period. Google is treating for 20 weeks in dry weather, which is not exactly the peak reproductive season of this mosquito.

19
pcmaffey 2 days ago 0 replies      
Has any research been done on potential benefits of the widespread micro blood transfusions that result from mosquitoes? The diseases are the obvious downside; I'm wondering if resistances, etc., may be an unrecognized upside.
20
briandear 3 days ago 0 replies      
First they came for the mosquitos, but I didn't speak up because I wasn't a mosquito. Next they came for the invasive fire ants and then we all cheered because mosquitos and fire ants were finally gone.
21
LinuxBender 3 days ago 1 reply      
Does this prevent reproduction of the mosquitos, or of the disease? If mosquitos, will this have a negative impact on bats? My bats eat mosquitos and moths, but there are not many moths any more.
22
phkahler 3 days ago 3 replies      
I wish the other mosquito killing efforts would go forward.
23
crimsonalucard 2 days ago 0 replies      
Unless this solution slaughters virtually every single mosquito, wouldn't this technique just select for resistant mosquitos, eventually leading to populations of mosquitos with genetic countermeasures to this method of eradication?
24
stanislavb 3 days ago 2 replies      
All good. Yet I thought that was a responsibility of the gov... A big corp spending millions for free seems, you know, questionable
25
markburns 2 days ago 0 replies      
Does anyone know why the mosquitoes wouldn't evolve to be repulsed by others infected in this way?

Or is this a similar class of problem to antibiotics becoming useless over time?

I.e. it's useful to do now so let's cross that bridge if we come to it?

Or is there something else I don't understand about this?

26
jondubois 2 days ago 0 replies      
>> Verily's male mosquitoes were infected with the Wolbachia bacteria, which is harmless to humans

What they mean is; harmless in the short term and hopefully also harmless in the long term.

27
tcbawo 3 days ago 1 reply      
I can't wait until the day we start releasing solar-powered mosquito-hunting drones.
28
vinitagr 2 days ago 0 replies      
This is a real breakthrough. I don't remember hearing of anything like this before. Any amount of success with this solution will have consequences for many other problems.
29
jackyb 1 day ago 0 replies      
I always wondered, how do they count so many mosquitos? Is there a technique to determine that it's 20M?
30
Harelin 3 days ago 0 replies      
For those of us who live in Fresno and are curious as to which neighborhoods are being targeted: Harlan Ranch and Fancher Creek. They say "communities outside of these areas will not be affected."
31
SubiculumCode 3 days ago 0 replies      
I wish they'd do it in Sacramento where most of the mosquitoes live.
32
pcollins123 3 days ago 0 replies      
Google is releasing 20M bacteria-infected mosquitoes in Fresno... wearing small cameras and a projector that can display text advertisements
33
makkesk8 2 days ago 0 replies      
I've never been interested in biology. But this is so cool! :O
34
WalterBright 2 days ago 1 reply      
I'm curious how mosquitoes will evolve to beat this.
35
banach 1 day ago 0 replies      
What could possibly go wrong?
36
mrschwabe 2 days ago 1 reply      
No one should have the right to play god with our biosphere.
37
walshemj 2 days ago 0 replies      
could we have a less clickbaity title
38
pcarolan 3 days ago 2 replies      
39
kuschku 3 days ago 1 reply      
40
will_pseudonym 3 days ago 0 replies      
What could possibly go wrong?
41
chris_wot 3 days ago 0 replies      
At least they aren't attempting to go viral.
42
forgottenacc57 3 days ago 2 replies      
What could possibly go wrong? (Eye roll)
43
ultim8k 3 days ago 0 replies      
I came up with this idea last year! I didn't know someone was already building it.
44
unclebucknasty 3 days ago 3 replies      
Wait. Is there no regulation around this? Any company or individual can cook up whatever specimen they want and simply release it into the environment en masse?

Am I missing something?

16
Things I wish someone had told me before I started angel investing rongarret.info
583 points by lisper  1 day ago   183 comments top 35
1
birken 1 day ago 4 replies      
> But the cool kids don't beg. The cool kids the ones who really know what they're doing and have the best chances of succeeding decide who they allow to invest in their companies.

The company I was an early employee of (that ended up being a "unicorn") was not a cool kid, and we certainly were begging people to invest both at the angel stage and (especially) the series A stage. And those people got a really really good return on their money.

This isn't to say there aren't valuable signals perhaps involving "cool kids" status, but there are a lot of diamonds in the rough.

> I figured it would be more fun to be the beggee than the beggor for a change, and I was right about that.

As a much smaller time angel investor myself than the author, I'm still the beggor. You are only the beggee if you are writing 25k+ checks (and more like 50k-100k to really be the beggee). If you are writing 5k or 10k checks, you are going to be begging people to take your money, cool kids or not cool kids. So if you are looking to get into angel investing today without allocating 6-figure amounts to your hobby, I wouldn't advise doing it for ego reasons :)

2
jonnathanson 10 hours ago 2 replies      
Every single word in this article burns clear and bright and true. Every word. Every paragraph. Every penny paid for every hard lesson learned.

If you want to get into the angel game in 2017, and you want to do it to make money, then I'd sincerely advise you to go take out $5-10k for a weekend in Vegas, and try to get really good at a game of complete chance, like roulette.

"Good" at roulette, you're thinking? What can that possibly mean?

It means having a large bankroll and knowing your tolerance for burning through it. It means understanding how to pace yourself, so that you're not blowing through your bankroll in the span of a few minutes. It means getting the itch out of your system, if, indeed, this is merely an itch.

Can't afford to fly to Vegas and blow 10 grand in a weekend? Don't get into angel investing. You can't afford it. I say this not as a snobby rich asshole, but rather, as the sort of nouveau-riche asshole who lost quite a bit of money many years back, doing exactly what the author did, and losing money I learned in retrospect I didn't really want to lose.

I still make the occasional investment, but as part of a group. By and large, those investments go to founders we've worked with before, or who come highly regarded. We invest super early, we eat a fuckton of risk, and we expect to lose 99.999% of the time. We're too small-time to play the game any other way at the moment.

Angel investing is about bankroll and access, and if you're wondering whether you've got the right access, you don't. So you're left with bankroll. Have fun, and try to get lucky if you can help it. :)

3
mindcrime 1 day ago 5 replies      
There is a small cadre of people who actually have what it takes to successfully build an NBT, and experienced investors are pretty good at recognizing them.

I really do question this. The "problem of induction"[1] comes into play when you start talking about pattern matching and learning from "experience". That is, there's no guarantee that the future will look like the past.

Before Zuckerberg was Zuckerberg, I wonder how many people would have said "Hey, I recognize in this kid the innate capacity to be an NBT"? Of course they got funded, but I believe most of it was after they already had demonstrable traction.

On that note, one of the things that makes fund-raising such a drag is that so many angels (at least in this area) want to see "traction" before investing. Even though, typically, you would think that angels are investing at such an early stage that nobody would really have traction yet. Maybe it's just that the angels here on the East Coast are more risk averse.

[1]: https://plato.stanford.edu/entries/induction-problem/

4
TheBlerch 1 day ago 3 replies      
The author makes good points here. While it's true that YC and other venture investors invest in many companies to increase the chances of large returns on the best of their portfolio companies, there is another significant advantage to YC having a bunch of companies in each batch: the teams that are not doing so well are a source of talent for the teams that are doing well. At some point YC can, and has, encouraged teams they think aren't making enough progress to join teams that are. A friend in one batch described his batch as consisting of 1/3 working on great ideas/products that could be big, 1/3 working on mediocre ideas/products, and 1/3 working on bad ideas/products; those in the bottom two-thirds still had good team members who could be sourced as talent for the best 1/3 and for previous YC companies doing well.
5
justinsb 1 day ago 0 replies      
Ron was one of our investors in FathomDB, and that turned out to be a bad financial investment, much to my personal dismay & regret.

However, something that I think the essay modestly overlooks is the non-financial elements. The investors made a huge difference in my life & that of the others that worked for FathomDB. I like to think that we moved the industry forward a little bit in terms of thinking about modular services (vs a monolithic platform-as-a-service) although it turned out that the spoils went mostly to the cloud vendors. Many of the ideas developed live on in open-source today.

Of course, this all serves Ron's point in that it doesn't make for a good investment. But that doesn't mean that no good came of it - and it makes me want to work harder next time so that it is both a good outcome and a good investment.

So: thank you to Ron and all our investors. It is no accident that you are called angels.

6
seibelj 1 day ago 7 replies      
My 2 cents - As an investor or potential employee when analyzing a startup, pay close attention to how scrappy and capital efficient they are. Do they have excessively nice office space? Are the founders making too much in salary? Does it seem like the executives are working like animals, or do they have the big company mindset where they take it easy? Startups are nothing like established, revenue-generating companies and the mindset should be entirely different.

The #1 thing a startup can do to survive is to be as stingy as possible with their capital.

7
brianwawok 1 day ago 1 reply      
I was hoping for a fact like

"And this is how I made 42 investments in my first 3 years. All are now bust, and I am out 1.4 million dollars"

Obviously it's not fun to tell the world how much money you lost, but it would help to add color to the people behind the VCs, whom developers love to see as frenemies (terrible people out to screw you, but man, their money is nice sometimes).

8
Nelson69 1 day ago 1 reply      
So fundamentally as an angel you're in early. That usually means that you face dilution. I also wouldn't think it would be that unusual for the business to make a pretty dramatic pivot or two and that initial angel investment may have been for something else entirely by the time the company finds its legs. There are basically 3 things you can do in that dilution situation: 1) Do nothing and go from basically owning the business to not. (You still get to watch and be part of the ride) 2) Pony up more money to match the big investors, assuming the terms allow it, or 3) Fight it or any change every step of the way.

A VC once told me that there were "good angels and bad angels." Too many bad ones and he wouldn't invest. A couple of specific bad ones and he wouldn't invest. Beyond that, there are also good angels who will make introductions, spend time coaching, and really help beyond what I'd call a "hobby." It seems like there are good people out there with money and knowledge who really want to help out others in an angelic sort of way, knowing full well they will likely lose their investment.

9
danieltillett 1 day ago 1 reply      
While the points Ron raises are really good, there is another source of investing error which is "generals fighting the last war" effect. As an Angel investor you are drawn towards founders and companies that resemble you and your experiences. This is almost certainly going to lead you astray as conditions will have changed and everyone's experiences are so limited.
10
polote 1 day ago 5 replies      
Summary: beginners almost never invest well.

And it is the same for stocks: many people think they will make money by investing in a specific company because their logic says it is a good idea.

Investment is a job, and to win you need experience

11
trevyn 1 day ago 6 replies      
>There are a myriad ways to make a company fail, but only two ways to make one succeed. One of those is to make a product that fills a heretofore unmet market need, and to do it better, faster, and cheaper than the competition. That is incredibly hard to do. (I'll leave figuring out the second one as an exercise.)

Is he implying some sort of unethical behavior as the second way?

12
PangurBan 13 hours ago 0 replies      
Thank you to the author for sharing your experience and insights. Some tips for being a better angel investor and reducing risk:

1) I can't stress this enough: learn the ABC's of private investments. I have seen a ridiculous number of angel investors as well as founders who don't understand the fundamentals of private equity investments and returns, even people who have been working at a startup for several years.

2) Limit yourself to meeting with and investing in startups in an industry in which you've worked, so that you understand their industry, can better evaluate them, and can add value.

3) Limit yourself to meeting with and investing in startups in industries in which you or close colleagues and friends have worked, so that you can consult with them regarding potential investments.

4) Join an angel group, such as Band of Angels, so that you have a group of fellow angels to learn from and discuss investments with.

5) Meet with successful serial entrepreneurs who make angel investments to ask what they look for.

6) If you have not founded a company or worked at an early-stage startup, learn Lean Startup methodologies.

7) Make sure you understand how a startup achieves product-market fit.

8) Put together a list of good people and companies who can help the startups you invest in, with everything from operations to tech to growth.

There's much more, but this is a start.
13
rwmj 1 day ago 2 replies      
I wonder how often VCs/angels are conned out of money (and I don't mean by delusional entrepreneurs, but by genuine con-artists). I assume it must happen, and with all the money sloshing around may be common.
14
apeace 21 hours ago 0 replies      
> There are a myriad ways to make a company fail, but only two ways to make one succeed. One of those is to make a product that fills a heretofore unmet market need, and to do it better, faster, and cheaper than the competition. That is incredibly hard to do. (I'll leave figuring out the second one as an exercise.)

Any guesses on the second way?

The best I could think of was: find dumb investors to pump it full of money and hype it. Then rely on all the hype to get it sold.

I'm hoping for a less cynical answer!

15
geetfun 1 day ago 0 replies      
Being a good investor takes a certain kind of temperament. You can't really teach this. For most people, as Buffett says, stick with the index fund.
16
caro_douglos 20 hours ago 0 replies      
This post reminds me of "The War of Art", where you're encouraged to decide right off the bat whether you're a professional or an amateur.

I'm somewhat biased when it comes to angels because most of the experiences I heard (http://etl.stanford.edu/) were almost always home runs. Sure, there are the down-in-the-gutter stories every investor sobs about where they lose money, but let's be honest: wouldn't it be great if someone gave a talk and said how much they lost (and how much they're continuing to lose) by attempting to get rich quick?

So far, bootstrapping appears to be the best way of weeding out the shitty angels who haven't been in the game for a minute... it's a pleasure to not deal with someone wanting to give you a check while telling you what their expectations are for YOUR business, not their 10k+ check.

17
kushankpoddar 13 hours ago 0 replies      
I am realising that a weekly exercise in 'inversion' should be a must-do for founders/investors.

Inversion essentially means thinking hard about: "What factors could cause my venture to fail, and how do I avoid them?"

This sounds simple but it could be a very powerful idea. https://www.farnamstreetblog.com/2013/10/inversion/

18
untangle 1 day ago 0 replies      
Cap table economics are an equally important reason to fear angel investing. Unless the company is very successful, angels tend to get diluted out of the money by VC rounds. In the baseball vernacular, angel investors must pick triples and home runs to make money. Singles and outs will result in total loss. Doubles may break even. It's a tough way to get ahead. Impossible without some insider edge (YC, pundits, stars, etc.).
19
dabei 1 day ago 0 replies      
Seems to me that acting as a lone angel investor is not very efficient and quite limiting to the kinds of opportunities that are open to you. It's analogous to the constraints you face as a lone founder of a startup. Maybe it's better to team up and benefit from each other's insight and capital. And I totally agree that you have to do this seriously, as a job, unless you don't care about the money.
20
david927 1 day ago 2 replies      
> There is a small cadre of people who actually have what it takes to successfully build an NBT, and experienced investors are pretty good at recognizing them. Because of this, they don't have trouble raising money.

That's a pretty specious statement. I don't know how he came up with that; it certainly doesn't match with a lot of reality.

21
Radim 23 hours ago 0 replies      
Off-topic, but lisper, congrats on your neat HN karma points!

 It's 222-2222 I gotta answering machine that can talk to you

22
Theodores 1 day ago 0 replies      
I prefer the phrase philanthropy to angel investing. As I understand it, philanthropy is using your own hard-earned money for lost causes of your own choosing. This is different to fundraising or giving money to charity. With a modest philanthropy budget you can change lives and support others in achieving their dreams. Everything can be on an individual basis with no formal framework. For instance, what happens if you pay someone's way so they can finish their degree? What is the potential return? Or, more radically, what happens if you find a homeless person a place to live? Do they get a job and return to society? These things can be found out with radical personal philanthropy. I would say there is good value in this if you want to learn about society and the human condition. I also think the financial and time losses are an investment. This type of work, where you really do invest in individuals, should give anyone angel investing the chops to do it well.
23
stevenj 1 day ago 1 reply      
Interesting read.

I'd love to hear from other angel investors with (perhaps) different experiences and opinions.

24
kumarvvr 21 hours ago 2 replies      
The author has mentioned "random shit that markets do, like completely ignore clearly superior products..."

Can anyone give me such examples?? I am curious.

25
sgroppino 1 day ago 1 reply      
Perhaps the key is to invest in what you know?
26
sophiamstg 1 day ago 0 replies      
I think I must agree with you on taking it as a full-time job!
27
max_ 21 hours ago 0 replies      
>One of those is to make a product that fills a heretofore unmet market need, and to do it better, faster, and cheaper than the competition. That is incredibly hard to do. (I'll leave figuring out the second one as an exercise.)

Anyone figured this out?

28
graycat 1 day ago 1 reply      
Good news: I can agree with some of the OP.

Much better news: I do believe that it's fairly obvious that there are good solutions to the most important problem mentioned in the OP.

First, a remark on scope: I'm talking about information technology (IT) startups based heavily on Moore's law, the Internet, other related hardware, available infrastructure software, etc., and I'm not talking about bio-medical technology, which I suspect is quite different.

Second, a remark on methodology: When the OP says "almost certainly" and similar statements about probability, sure, (A) in practice he might be quite correct, but (B) still the statement is nearly always just irrelevant.

Why irrelevant? Because what matters is not the probability, say, estimated across all or nearly all the population, or all of business, or all of startups, or even all of IT startups. Instead, what is important, really crucial, really close to sufficient for accurate investment decision making, is the conditional probability given what else we know. When the probability is quite low, still the conditional probability -- of success or failure -- given suitable additional events, can be quite high, thus giving accurate decision making. So, net, what's key is not the probability but what else is known, so that the conditional probability of the event we are trying to evaluate, project success or failure, given what else we know, is quite high.
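To make that concrete, a quick Bayes'-rule computation, with purely illustrative numbers:

  # Illustrative numbers only: a 1% base rate of success can coexist
  # with a ~34% conditional probability given strong evidence E.
  p_success = 0.01          # unconditional: 1 in 100 projects succeed
  p_e_given_success = 0.50  # successes usually exhibit evidence E
  p_e_given_failure = 0.01  # failures rarely do

  p_e = (p_e_given_success * p_success
         + p_e_given_failure * (1 - p_success))
  print(p_success * p_e_given_success / p_e)  # about 0.34, up from 0.01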

So, back to the OP. We can start with the statement:

> The absolute minimum to play the game even once is about $5-10k, and if that's all you have then you will almost certainly lose it.

Here, whether the "almost certainly" is true depends on what else is known. Sure, if not much more is known, then "almost certainly lose it" is correct. But with enough more known, the first investment can still likely be a big success.

The big, huge point, first investment or 101st, is what else is known.

> There is a small cadre of people who actually have what it takes to successfully build an NBT, and experienced investors are pretty good at recognizing them.

I agree with the first part but not with the second. From all I can see, there is hardly a single IT investor in the US who knows more than even dip squat about how to evaluate an IT investment. E.g., commonly the investors were history or economics majors and got MBA degrees. Since I've been a prof in an MBA program, I have to conclude that a history or economics major with an MBA has no start at all at evaluating IT projects.

Here is the huge point:

We can outline a simple recipe in just three steps for success as an IT startup:

(1) Find a problem where the first good or a much better solution will be enough nearly to guarantee a great business, e.g., the next big thing.

(2) For the first good or much better solution, exploit IT. Also exploit original research in high quality, at least partly original, pure/applied mathematics. Why math? Because the IT solution will be manipulating data; all data manipulations are necessarily mathematically something; for more powerful manipulations and more valuable results, by far the best approach is to proceed mathematically, typically with original work based on some advanced pure/applied math prerequisites.

(3) Write the corresponding software, get publicity, go live, get users/customers, get revenue, and grow the revenue into a significant business.

So, right: Step (2) is a bottleneck: The fraction of IT entrepreneurs who can do the math research is tiny. The fraction of startup investors who could do an evaluation of that research, or even competently direct such an evaluation, is so small as to be essentially zero.

So, net, the investors in IT are condemned to miss the power of step (2) and, thus, flounder around in nearly hopeless mud wrestling in a swamp of disasters. And, net, that's much of why angel investors lose money.

So, the main problem in the OP was losing money on IT projects. The main solution, as both an investor and an entrepreneur, is to proceed as in steps (1)-(3).

IT venture capitalists (VCs) can't use step (2) either, e.g., can't do such work, can't evaluate such work, and can't even competently direct evaluations of such work, but they have a partial solution: Likely enforced by their LPs, in evaluating projects they concentrate on cases of traction and want it to be significantly high and growing rapidly.

So, with this traction criterion, and some additional judgment and luck, some of the VCs get good return on investment (RoI), but they are condemned to miss out on step (2).

So, what is the power of step (2)? As we will see right away, it's fantastic: with step (2) we can do world-changing projects relatively quickly with relatively low risk.

The easiest examples of the power of step (2) are from the US DoD for US national security. Some of the best examples are the Manhattan Project, the SR-71, GPS, the M1A1 tank, and laser-guided rockets and bombs, all relatively low-risk projects with world-changing results. Each of these projects, and many more, was heavily dependent on step (2) and met a military version of steps (1) and (3).

More generally, lots of people and parts of our society are quite good at evaluating work such as in step (2), and proposals for such work, just on paper. We can commonly find such people as professors in our best research universities and editors of leading journals of original research in the more mathematical fields.

I started some risky projects, e.g., an applied math Ph.D. from one of the world's best research universities. From some good history, only about one in 15 entering students successfully completes such a program. The completion rate of applied math Ph.D. programs makes the Navy SEALs and the Army Rangers look like fuzzy bunny play time. With much of my Ph.D. program at risk, I took on a research project. Two weeks later I had a good solution, with some surprising results, quite publishable. Later I did publish it in a good journal. I could have used that for my Ph.D. research, but I had another project I'd pursued independently in my first summer -- did the original research then, in six weeks. The rest of that work was routine and became my dissertation. While I was working part time, the Navy wanted an evaluation of the survivability of the US SSBN fleet under a special scenario of global nuclear war limited to sea, all in two weeks. I did the original applied math and computing, passed a severe technical review, and was done in the two weeks. Later I took on a project to improve on some of our work in AI for detection of problems never seen before in server farms and networks. In two days I had the main ideas, and a few weeks later I had prototype software, nice results on both real and simulated data, and a paper that was publishable -- and was published. My work made the AI work look silly; it was. Once, in a software house, we were in a competitive bidding situation. I looked at what the engineers wanted and saw some flaws. Mostly on my own, I took out a week, got good on the J. Tukey work in power spectral estimation, wrote some software, and showed the engineers how to measure power spectra and how to generate stochastic process sample paths with that power spectrum. As a result, my company won sole source on the contract. So, before I did these projects, they all were risky, but I completed all of them without difficulty.

Lesson: Under some circumstances, it's possible to complete such risky projects, given the circumstances, with low risk.

But IT VCs can't evaluate the risk before the projects are attacked, or even evaluate the results after the projects are successfully done. So IT VCs fall back on traction.

I confess: It appears that the IT VCs are not missing out on a lot of really successful projects. Well, there aren't many IT startups following steps (1)-(3).

So, for IT success, just borrow from what the US military has done with steps (1)-(3).

The problem, and the opportunity, is that nearly no IT entrepreneurs and nearly no IT investors are able to work effectively with steps (1)-(3), especially step (2).

The IT VCs have another problem: They know that for the next big thing -- Microsoft, Apple, Cisco, Google, Facebook -- they are looking for something exceptional. And they know that those examples have very little significant in common. Still, the IT VCs look for patterns from hot topics of the present or recent past. That's no way to find the desired exceptional projects. E.g., when the US DoD wanted the Manhattan Project, they didn't go to the best bomb designers of the previous 20 years; doing so would not have resulted in the two atomic bombs that ended WWII. Instead, the US DoD listened to Einstein, Szilard, Wigner, Fermi, Teller, etc., none of whom had any experience in bomb design.

29
d--b 19 hours ago 0 replies      
I love the difference of tone between this article and the usual Silicon Valley pieces.

For me, the most important thing that Ron conveys is that being an entrepreneur is an incredibly foolish thing to do. Silicon Valley created myths of passionate geeks who worked in their mom's garages and went on to make billions. Who doesn't want that?

But the reality of Silicon Valley today is that because of these myths, most people work their twenties away for a chance to buy a lottery ticket...

30
logicallee 1 day ago 1 reply      
The investor names only "one way" to succeed (though alluding to a second one that this investor does not name):

>To make a product that fills a heretofore unmet market need, and to do it better, faster, and cheaper than the competition.

This is an insane sentence. Let's make it only slightly more insane to throw it into starker relief:

>To make a product that fills a heretofore unmet market need that nobody has expressed or even thought about until the company announces it, and to do it absolutely perfectly, instantly without any development time, and make it free for the consumer, while getting money from a sustainable high-margin source and having a proprietary moat that makes it impossible for any other market players to enter even a similar market. Also I'll add that the company must have such strong network effect that the utility of any competitor's product is negative (people would regret getting it even for free) unless the competitor is able to get at least 98% market share.

That's pretty insane, and if you re-read what I quoted you will see it's the same kind of insanity.

Why do people even write stuff like this.

-

EDIT:

Downvoters don't understand my objection. I'm not going to edit this comment. If you don't get it, you don't get it. This investor literally named "good, fast, cheap" (except as: better, faster, cheaper) as three of four requirements that must be met. (The fourth named requirement being "heretofore unmet".) You cannot get more insane than this except in magical la-la land where there are no trade-offs of any kind. It's absurd.

31
banku_brougham 1 day ago 1 reply      
TLDR; you will lose money because you don't know anything.
32
CalChris 1 day ago 0 replies      
I remember Ron although I was unsure about the name. Fair winds and following seas.

Unless it's in your background and in your DNA, it seems that angel investing will end in tears.

33
lowercase_ 1 day ago 0 replies      
Interesting perspective, but he assumes that everyone's experience will be his own. First of all, he was in LA; I don't know of many successful angels down there.
34
flylib 1 day ago 0 replies      
"If you want to make money angel investing, you really have to treat it as a full time job, not because it makes you more likely to pick the winners, but because it makes it more likely that the winners will pick you."

plenty of good entrepreneurs have great angel investing track records doing it part time (Elad Gil, David Sacks, Aaron Levie)

35
RandyRanderson 1 day ago 0 replies      
It's impossible, with any certainty, for an investor to prove that his or her judgement is better than random chance. This is high school stats.

What I take from this is that this person doesn't have a grasp of high school math or is not being honest.

Also, if you listen closely to a lot of investors, they'll basically tell you their metric is "can I sell this to a greater fool?". This is why there is so much investment in 'hot' areas when, in reality, those are the areas to stay away from, as the unicorn shares are likely already over-priced.

17
Kindness is Underrated (2014) circleci.com
519 points by hengputhireach  1 day ago   208 comments top 36
1
BenchRouter 1 day ago 9 replies      
People often conflate "kindness" with "kid gloves" (for lack of a better term). Being kind doesn't have to mean giving "compliment sandwiches" all the time, or avoiding direct feedback. In many contexts, being kind just means being a professional.

See Allen's comment in the linked post, for example. It's direct ("I'm confused"), but polite. It's asking a question of the submitter in a respectful way that's likely to engender a productive conversation as opposed to putting people on the defensive. Allen's leaving the possibility open that his assumptions are wrong (and often our assumptions are).

It quite literally requires less effort - Allen didn't have to expend the extra effort to type out "this is stupid".

I guess I don't see what's so difficult about that particular type of kindness.

2
zeteo 1 day ago 4 replies      
> Bezos talks about a lesson imparted by his grandfather on one of the cross-country road trips they would take every summer: Jeff, one day you'll understand that it's harder to be kind than clever.

Sure, that's nice rhetoric. And yet the "kind" Bezos has presided over some of the worst working conditions in the developed world [1] while the "blunt" Torvalds has kept together the very scattered Linux team for decades without controlling their income or work conditions. Apparently the more money you have, the more you can get away with a "do as I say, not as I do" standard.

[1] http://www.salon.com/2014/02/23/worse_than_wal_mart_amazons_...

3
jasode 1 day ago 4 replies      
>, an atmosphere of blunt criticism hurts team cohesiveness and morale; there's time and energy lost to hurt feelings, to damage control, to trust lost between team members - not to mention the fact that people are working in a fundamentally less humane environment. It may seem faster and easier to be direct, but as a strategy it's penny wise and pound foolish.

This is one of those statements that I think we want to be true but we have no evidence that it's true. Many contradictory examples exist in the real world:

You can yell at your team and insult them and be successful. (Famous examples are Steve Jobs and Bill Gates' "that's the stupidest idea I've ever heard!")

You can be soft-spoken and be successful. (Warren Buffett would be an example. He doesn't yell at the people in his Omaha office or his presidents/CEOs at Berkshire subsidiary companies.)

Likewise, you can be blunt & harsh and fail. You can also be diplomatic & nice and fail.

Same in other endeavors. You can yell at the football team and win the Super Bowl (Mike Ditka - Chicago Bears). Or, you can be soft-spoken and win the championship (Tony Dungy - Indy Colts). Likewise, you can do either style and still be the worst team in the league.

Doesn't seem to be much correlation either way.

My conclusion based on life experiences is that companies can have both the blunt and the diplomatic approaches. The blunt communication works well in upper management. (E.g. one VP tells another VP that "it's a stupid idea.") Everybody is a Type A personality and has a thick skin. However, the reality is that many employees (especially lower-level positions) feel demeaned by direct language. (As the endless debates about Linus' style attests.) Therefore, they require indirect language and those VPs have to dynamically adjust the communication to that personality.

Personally, I don't like the style of indirect communication the author uses in examples of Daniel, David, and Allen but I fully understand it's necessary in the real world for certain people.

4
eksemplar 1 day ago 5 replies      
Being in middle management in a workplace of 7000 it often surprises me how little time people in tech devote to diplomacy.

You can certainly get a point across by being direct, but to make a truly lasting change you need to convince people it's a good idea. I've yet to see this happen without kindness and diplomacy.

So while the IT security officer can certainly get a strict password policy implemented, without also making sure people understand and agree that security is a good idea the end result becomes a lot of written down passwords hiding on postits under keyboards.

6
scottLobster 1 day ago 2 replies      
Part of working effectively with a group is learning to take blunt non-personal critcism in stride. In English 110 freshman year we were required to get into groups and review each other's work (essays, papers, assignments for class) for this very purpose. All of the criticism was blunt if non-personal (you have a run-on sentence here, this is phrased weirdly, etc...), and it was obviously the first time receiving such criticism for some of the students. All of our writing improved as a result, though, and because it was non-personal even the most insecure people in the class eventually adapted to it.

I'll submit that personal remarks like "only a fucking idiot would..." and such are bad not because they hurt feelings but because they are worthless and distracting. They make the conversation about a person instead of what people are supposed to be talking about, if only for a fraction of a second, and can disrupt conversation.

If someone is doing something that harms the objective, you tell them what they're doing, why they need to stop and possibly how they can fix/improve things going forward. That's effective blunt criticism, and there's no need for personal insults anywhere in the chain.

7
marcoperaza 1 day ago 1 reply      
There is a big difference between being NICE and being GOOD.

To paraphrase Charles Murray: "nice" is a moment-to-moment tactic for avoiding conflict, not a guiding principle for living your life. We should default to being nice amicable people, but being good often requires otherwise.

Unfortunately, niceness has been raised to the highest virtue in recent years. This is a mistake with civilizational consequences.

8
matthewowen 1 day ago 0 replies      
I agree that kindness is important.

I don't think the examples given are examples of kindness.

Concretely, they're insufficiently direct.

If you think someone is doing something that isn't well thought out, and you think you understand the problem well enough to say that they haven't thought through it fully (which is a scenario that arises in workplaces), don't say that you're "confused". It's a variant on false shock. Just say "I don't think this change considers the following scenario:". You can soften that with a disclaimer of "perhaps I'm missing something", but saying "I'm confused" when you really think the other person is the one who's confused is mildly passive aggressive.

Likewise, if you think someone should do something, don't say "it'd be nice if we could". Make the request directly. You can still add "let me know if there's something I'm not considering that prevents that". It's frustrating otherwise, because it is unclear what is a request or nice-to-have and what is an instruction that approval is contingent upon. In the long term, lacking that clarity becomes annoying, especially for non-native speakers or people from different cultures who expect different levels of directness.

There is a position between aggressive "don't do that, it's stupid" and the indirect formulations in this post, and that's where you should aim. Polite and kind, but still clear and direct.

Honestly, if you just state the problems with the approach clearly and avoid words like "stupid" or "dumb", you're 90% of the way there.

9
ivanbakel 1 day ago 0 replies      
In a similar vein, one of the articles that has more influenced my interactions has been The Minimally-nice OSS Maintainer [0]. It doesn't produce an instant slipstream where all your collaboration is suddenly super-fluid, but niceness does help reduce those abrasive moments which, in my experience, can slow a community down a lot more than working well speeds it up. It goes hand-in-hand with good community curation - so long as you're trimming out bad actors, you have to be able to acknowledge bad behaviour in yourself.

0. https://news.ycombinator.com/item?id=14051106
https://brson.github.io/2017/04/05/minimally-nice-maintainer

10
agibsonccc 1 day ago 0 replies      
I struggle with this a ton. One thing I can't really get past is that, in practice, people often take criticism of "ideas" as "personal criticism".

As much as I like the ideas this post advocates, I feel like some of this is on a case by case basis.

It should always be a goal to keep criticism professional, not personal.

One other thing that should be kept in mind here is culture.

I live in Japan, where you really can't even say "no", let alone "wrong". There are extremes on both ends: Linus on one side, and many Asian cultures on the other.

Like any advice of this kind, try to take the intent and the points that work for your situation, rather than reading it as "Silicon Valley startups only".

11
qdev 1 day ago 0 replies      
The article ends by discussing trust, and perhaps that is more fundamentally important than kindness -- kindness is one vehicle that allows trust to evolve, but probably not the only one.

An environment of trust (and safety) allows open technical discussions and lets you come to decisions in a way that helps everyone learn and evolve without "losing face" and without breeding an undercurrent of anger and resentment. Knowing that each person is willing to listen to the other respectfully, and that each person is prepared to say they are wrong, can improve the discussion rather than making it more wishy-washy.

You need to have this if you're going to be working day after day, maybe for years with the same people. Lose trust and the feeling that it is safe to make potentially "stupid" statements, and people will just blindly follow the loudest most belligerent person because it's not worth the emotional cost of trying to engage in "debate".

So maybe "Trust is Underrated" would be a better title for the original article.

12
siliconc0w 1 day ago 0 replies      
I get it's possible to qualify statements, de-personalize, and obfuscate blame but I'm not convinced this is the ideal environment. It's diplomatic, but it's slower and less clear. It can work but I've also seen it fail where someone takes a comment as a suggestion when it wasn't. It's basically 'level 0' or the default mode of communication.

A good workplace culture is, essentially, leveling up from this. It's agreeing that while diplomatic language is more comfortable, and is how we might communicate outside work, we'll suspend it to better achieve our shared goals. If someone challenges your idea, you need to dispassionately and genuinely consider their objections and either defend your idea or acquiesce to the better one. Some people just can't do this. Ideas are personal things, arguing about them feels uncomfortable, and they don't like to feel uncomfortable. And, maybe getting a little carried away, but I think there is a general societal issue where we assume that if you're uncomfortable, something must be wrong. Good decisions are born out of argument, not trust. Saying "I'm confused" or "Help me understand" when you already understand and just disagree is level 0 language. It kinda works, but it's slow and inefficient, and as engineers - this isn't good enough.

13
sillysaurus3 1 day ago 4 replies      
It takes a lot more work to get your point across while being kind. Sometimes I'm not sure it's worth it. Especially when it seems like no manager qualifies as "kind." So if you want to advance, what do you do?

It's still annoying that becoming a manager is correlated with advancement, but that's life.

14
jeffdavis 1 day ago 0 replies      
This article makes it sound like kindness is just expending extra time for the same message, and it's magically "nice".

That explanation of kindness doesn't make sense. Some people try to be nice and, by mistake, end up being rude. And business people make deals quickly all of the time, using jargon and cutting out pleasantries while still being kind.

No, kindness is a skill of words and actions that must be developed over time. It's about navigating complex ideas and decisions effectively.

For instance, "no" is generally rude, not because it's too short, but because it doesn't provide good feedback on a complex idea. What is the proposer trying to accomplish? What existing alternatives exist, or what others might be explored?

If you don't have the time to give good reasons, then point them toward others that you trust to give good advice. E.g: "This proposal is unacceptable. Discuss with group XYZ and explore alternatives." Or even: "This proposal is unacceptable -- the proposed use case is not important enough to justify what you are trying to do."

15
strictfp 1 day ago 1 reply      
I think Linus is extreme, but I can totally understand that he got fed up with being nice and getting ignored. I don't agree with his conclusion that people don't understand when he reprimands them, though. I think they mostly get it, but think they can get away with ignoring him. And that is an attitude problem we have in our industry. A lot of people seem to think that they are the shit and are really bad listeners.
16
depsypher 1 day ago 1 reply      
I think we do need to have empathy in our dealings with people online, and in general it's in our own best interests to do so. Many open source projects' lifeblood is their communities, and other things being equal, you'll get more contributions if you're not a complete jerk.

The flip-side is that high quality maintainable code is the product of top-notch commits, and rejecting commits is sometimes necessary to keep the standard of quality high. A good maintainer shouldn't cave to pressure of accepting a flawed commit just to avoid hurting someone's feelings.

This article in fact had what looks like a prime example of that. The comment mentioning a PR might "break a limit" but "we'll cross that bridge when we get to it" was touted as an example of how to give guidance. I'd argue that code quality slipped right there as a direct result of social pressure to accept a subpar commit.

It's not easy by any measure, but I think it pays to be not only clever and kind, but also consistent and firm when it comes to reviewing people's work.

17
jancsika 1 day ago 1 reply      
Linus' story is that early on in the history of Linux he was not direct enough in his criticism of a kernel dev's code to make it clear he wouldn't accept it into the kernel. So the kernel dev kept working on the code in the hopes of it being accepted, and then when Linus finally made it clear it wouldn't be accepted the dev became-- according to reports Linus heard-- suicidal.

Consequently Linus says he decided to go in the direction of communicating in the manner that he is now known for. (Which makes me wonder-- if he had a personal encounter early on with his sarcasm causing the same bad outcome, would he have decided as confidently to go in the other direction?)

Regardless, I think jaromil who maintains Devuan is a great counterexample. He's quite nice and non-sarcastic, approachable to newcomers, and he seems to be able to herd cats just as well.

18
TheAceOfHearts 1 day ago 0 replies      
I disagree that these three things are the same: "that sucks!", "you're doing it wrong!", "only an idiot would...". Sometimes you really are doing things wrong, and I'd regard being told so as a kindness. The situation where I've seen it most commonly is when someone is learning to speak a language. If you don't correct them, they'll continue making mistakes. When someone corrects me I give serious thought to what they're saying.

In my last job I had lots of hour-long arguments with coworkers on different topics, and on many of them I ended up conceding the point. I'm incredibly appreciative of them having taken the effort to help me understand their views, and to convince me otherwise.

I think there's a lot of stigma on disagreeing with people. But I don't see why that should be the case. If you have an argument with someone and you both end up leaving with a better understanding of the problem, why is that a bad thing? I've had plenty of discussions where I fundamentally disagreed with someone, only to go and later drink a few beers with them. Just because you disagree with someone doesn't mean you hate or dislike them, and there's no reason to take it personally. It's fine for someone to hold different views than your own.

An example of this is hate-speech laws, which I'm thankful that the US doesn't have. Personally, I consider them horrible mistakes, but I respect that others disagree. FWIW, the reason I disagree with hate-speech laws is that I think you should be able to openly speak your mind on any topic, because it means you can have a discussion and learn from it. If you can't have an open discussion about some topic, you might never be presented with the opportunity to rise above whatever might've led you to some terrible belief.

I've certainly said a lot of stupid things online, and every time I've been called out on them I think I've grown and learned a bit. I have no doubt I'll continue saying stupid stuff, because in many cases I won't know any better, and I fully hope that others will call me out on it.

19
amirouche 1 day ago 0 replies      
20
overgard 1 day ago 0 replies      
I think directness can be a form of kindness though. For an intelligent professional, being treated with kid gloves and not receiving direct feedback is often detrimental to everyone involved, and the resentment that can form from leaving a situation lingering can be vastly more damaging than having an argument might have been.

Also, while I've been critical of Linus' approach in the past, I think given that his standards are well known and consistent, it's probably not that hurtful if he rips you to shreds over a patch, because it's well known that that's just what he's like.

21
crispinb 1 day ago 0 replies      
We live in societies designed to systematically select for greed and dog-eat-dog individualism, to which kindness is antithetical. Given this, for kindness to survive beyond the private/family sphere requires heroism. Heroism is lovely, but is by definition too much to expect on average. To promote greed as the primary organising principle of mass societies was a reckless experiment. It failed, to which our world's collapsing ecosystems are primary witnesses.
22
ppod 1 day ago 2 replies      
I think that kindness is a gift just like cleverness. You can work to become more educated, work to be more rational, more evidence-minded in your judgements, but you will still be behind someone who works the same amount but has a natural ability. The same is true of kindness. Of course, we should all work to be kind, but it comes easier to some than to others. I know some people who, in a very natural way, are pretty much incapable of being unkind.
23
bitL 1 day ago 2 replies      
How does the author solve the problem of being kind when other people mistake it for weakness and take advantage of it?
24
Aron 18 hours ago 0 replies      
Basically, most people walk around with inflamed, highly sensitive status buttons that get triggered by any indication of a relative power balance out of line with officially designated titles, e.g. your interlocutor pretentiously using large words. Kindness is acting as if everyone is maximally equal, regardless of the truth of the matter.
26
kevmo 1 day ago 0 replies      
Aggressive kindness has opened so many doors and smoothed so many paths for me. It's painless and pays enormous dividends while making you feel great about yourself.

I also get tons of free shit by just being nice to service workers.

27
maxxxxx 1 day ago 0 replies      
Kindness and sincerity have to go together. I see way too many people going through rituals that are supposed to make them look kind but they are not sincere.
28
makecheck 1 day ago 0 replies      
It can be very motivating to see someone get mad at you though. All at once, lots of things become clear: (1) this is important to that person, (2) you need to treat this seriously, and (3) this is really uncomfortable, it would be good to avoid future discomforts (i.e. change behavior more permanently, not just this one time).

Kindness actually triggers the exact opposite of the 3 things above: suddenly everything seems like no big deal and nothing ever changes. Just great: now you're setting yourself up for several more unpleasant interactions in the future, instead of just fixing something from the beginning.

There are a lot of other considerations too...

For one, the person yelling is usually not the only unkind person in the interaction, even if that's the most obvious one. It is unkind, for instance, to be a lazy person who goes into situations utterly unprepared, showing no respect; at that point, YOU aren't being nice, so why do you expect niceness in return?

And sometimes niceness gets in the way of well-understood, efficient processes. On a mailing list, say, you're better off making a direct statement that isn't wrapped in two extra paragraphs of polite tone for everyone to read through. And heck, when you're driving, you can create MAJOR traffic problems by being kind instead of just following the rules (ironically bubbling back and impacting 50 people for a mile because you wanted to be kind to one person; just watch some videos).

29
rickpmg 1 day ago 0 replies      
I think opponents of being kind tend to think:

1- you can't be kind without appearing weak and

2- being blunt and being kind are two different things

30
hbarka 1 day ago 1 reply      
Can't this be simply distilled as being a gentleman/woman? There was that generation.
31
throwme_1980 1 day ago 10 replies      
As a developer, kindness is EARNED. You want people to be kind to you despite who you are and your mediocre contribution to the code base, unnecessarily refactoring code when you're meant to be working on an important feature? No sir, I don't think it'll be kindness you will get from me or any business manager.

If however you want well-deserved respect and kindness, show that you excel at your job and that you are able to deliver for me in a timely fashion, exceeding expectations. You can't handle being criticised? You have no business being in business; go open a charity bookshop. One has to understand that developers, like those in any other creative industry, can go off on a tangent by themselves if not given direction explicitly, and sometimes that means being very much assertive and firm. If that is perceived as being unkind, then tough luck.

32
EGreg 1 day ago 2 replies      
Experience has taught me there is a serious difference between being nice and being kind.

Often, we are nice because we are afraid of hurting people's feelings. As a result, though, we sometimes end up stringing people along and ultimately make them lose more time and energy than if we had breached their comfort zone early and communicated our expectations when they weren't yet super-invested. And after all is said and done, if we string them along, they end up blaming us more as well.

This was a hard life lesson to learn, but sometimes, to be kind, one must risk not being nice.

My advice would be: before communicating a tough expectation, do your homework (research how it's done) and be diplomatic. Different cultures have different linguistic paradigms that help grease the wheels towards agreement. Use them. And at the end, be firm but offer support for the transition. If they want it, they will take it. In any case it's likely you will be respected and won't burn bridges that way.

33
loeg 1 day ago 1 reply      
(2014)
34
minademian 17 hours ago 0 replies      
h/t to CircleCI for doing this kind of work in the tech industry.
35
unclebucknasty 1 day ago 0 replies      
The missing link and unspoken driver behind much meanness (in development and otherwise) is contempt.

Contempt is one of the worst regards a person can hold for another--perhaps even worse than hatred. It's a fundamental lack of respect for another's worth, either within a domain or more generally.

One can muster the will to express kindness for someone they dislike. But, it is virtually humanly impossible to be kind towards those one holds in contempt.

36
kronos29296 1 day ago 0 replies      
I came here thinking this was another situation or anecdote, this time about kindness and being screwed over because of it or something. Instead it is about workplace professionalism being called kindness, and a recruitment pitch disguised as clickbait. (Clickbait is increasing on HN.) My $0.02.
18
Using Deep Learning to Create Professional-Level Photographs googleblog.com
541 points by wsxiaoys  4 days ago   120 comments top 31
1
wsxiaoys 4 days ago 12 replies      
For those who think it's just another lame DL based instagram filter...

The method proposed in the paper (https://arxiv.org/abs/1707.03491) is mimicking a photographer's work: from taking the picture (image composition) to post-processing (traditional filters like HDR and saturation, but also GAN-powered local brightness editing). In the end it also picks the best photos (aesthetic ranking).

Selected comments from professional photographers at the end of the paper are very informative. There's also a showcase of model-created photos at http://google.github.io/creatism

[Disclaimer: I'm the second author of the paper]

2
Lagged2Death 3 days ago 2 replies      
When a topic like self-driving vehicles comes up, the Hacker News crowd is mainly in favor: Creative destruction! Disruption! Go go gadget robots! Not surprising. How many Hacker News readers drive trucks or taxis for a living? How many regard commuting as an enjoyable hobby?

Photography, on the other hand, is a very common hobby in the tech community. And the comments here seem to reflect that this effort strikes a little close to home: Those pictures are lousy, if you find them appealing you have no taste! Just because they're 'professional' doesn't mean they're good! Machines can't replace human judgment, they have no soul! I bet that machine had a lot of human help!

Tech people may tell you great stories about meritocracy and reason, but in the end we are just emotional monkeys. Like the rest of humanity.

Those of us who can accept this may at least aspire to be wise monkeys.

3
andreyk 4 days ago 5 replies      
Talking as a semi-pro (I've put in some money into cameras and lenses and spent a good bit of time on photo editing), this is a bit underwhelming. For landscapes (which this seemed to focus on), I've found that opening up the Windows photo editing programs and clicking 'enhance', or Gimp and clicking some equivalent, already gets you most of the way there in terms of editing for aesthetic effect. The most tricky bit is deciding on the artistic merit of a particular crop or shot, and as indicated by the difference between the model's and the photographer's opinion at the end of the paper, the model is not that great at it. Still, pretty cool that they did that analysis.
4
jff 4 days ago 1 reply      
Automatically selecting what portion to crop is impressive, but just slamming the saturation level to maximum and applying an HDR filter is the sign of "professional" photography rather than good photography.
5
brudgers 4 days ago 1 reply      
It is an interesting project and shows significant accomplishment. I'm not sold on the idea of "professional level" except in so far as people getting paid to make images. I am not sold because the little details of the images don't really hold up to close scrutiny (and I don't mean pixel peeping).

1. The diagonal lines in the clouds and the bright tree trunk at the extreme right of the first image are distractions that don't support the general aesthetic.

2. The bright linear object impinging on the right edge of the cow image and the bright patch of the partial face of the mountain on the extreme left. Probably the gravel at the left too since it does not really support the central theme.

3. The big black lump that obscures the 'corner' where the midground mountain meets the ground plane in the house image.

4. The minimal snow on the peaks in the snow capped mountain image is more documenting a crime scene than creating interest. I mean technically, yes there is snow and the claim that there was snow would probably stand up in a court of law, but it's not very interesting snow.

For me, it's the attention to detail that separates better than average snapshots from professional art. Or to put it another way, these are not the grade of images that a professional photographer would put in their portfolio. Even if they would get lots of likes on Facebook.

Again, it's an interesting project and a significant accomplishment. I just don't think the criteria by which images are being judged professional are adequate.

6
d-sc 4 days ago 4 replies      
As someone who lives in a relatively rural area with similar geography to much of the mountains and forests in these pictures, I have noticed previously how professional pictures of these areas have a similar feeling of oversaturated emotion.

It's interesting to see algorithms catching up to being able to replicate this. However when you mention these kind of abilities to photographers, they get defensive, almost like you are threatening their identity by saying a computer can do it.

7
matthewvincent 4 days ago 1 reply      
I don't know why but the "professional" label on this really irritates me. I'm curious to know how the images that got graded on their "professional" scale were selected for inclusion in the sample. Surely by a human who judged them to be the best of many? I'd love to see the duds.
8
fudged71 4 days ago 3 replies      
Very impressed by the results.

I hope that one day our driverless cars will alert us when there is a pretty view (or a rainbow) so we take a moment to look up from our phones. Every route can be a scenic route if you have an artistic eye.

9
wonderous 4 days ago 1 reply      
Interesting how hi-res a small section of a Google Street View car photo can be, compared to what users see online; here's an example from the linked article:

https://2.bp.blogspot.com/-6bVWUgA8NEI/WWe1uoW8ayI/AAAAAAAAB...

10
jtraffic 4 days ago 1 reply      
When a photographer takes or edits a picture, she doesn't need to predict or simulate her own reaction. There is no model or training necessary, because the real outcome is so easily accessible. However, she is only one person, and perhaps can't proxy well for a larger group.

The model has the reverse situation, of course: it cannot perfectly guess the emotional response for any one person, but it has access to a larger assortment of data.

In addition, in different contexts it may be easier/cheaper to place a machine vs. a human in a certain locale to get a picture.

If my theorizing makes any sense, it suggests that this technology would be useful in contexts where the locale is hard to reach and the topic is likely to evoke a wide variety of emotional responses.

11
bitL 4 days ago 0 replies      
Retouching is another field to play with - I am experimenting with CNN/GANs to clone the styles of retouchers I like. If you are a photographer, you know that most studio photos look very bland, and retouching is what makes them pop; for that, everyone has a different bag of tricks. If you use plugins like Portraiture or do basic manual frequency separation followed by curves and dodge/burn adjustments, you leave some imprint of your taste. This can be cloned using CNN/GANs pretty well; the main issue is to prevent spills from the retouched area into areas you want to stay unaffected.
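
For readers unfamiliar with the trick: below is a minimal sketch of classical frequency separation in Python (numpy/scipy) - just the manual technique the comment mentions, not the CNN/GAN style cloning. The sigma values are illustrative assumptions; retouchers tune them per image.

  # Split an image into a low-frequency (tone/color) layer and a
  # high-frequency (texture) residual, edit them independently, recombine.
  import numpy as np
  from scipy.ndimage import gaussian_filter

  def frequency_separation(img, sigma=8.0):
      # img: float array in [0, 1], shape (H, W, 3)
      low = gaussian_filter(img, sigma=(sigma, sigma, 0))  # blur spatial axes only
      high = img - low                                     # texture residual
      return low, high

  img = np.random.rand(64, 64, 3)                  # stand-in for a real photo
  low, high = frequency_separation(img)
  low = gaussian_filter(low, sigma=(2.0, 2.0, 0))  # e.g. smooth tones a bit more
  result = np.clip(low + high, 0.0, 1.0)           # recombine; texture intact
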
12
seasonalgrit 3 days ago 1 reply      
"Someday this technique might even help you to take better photos in the real world."

So what? Maybe I missed it, but what are some potentially meaningful applications of this technology? What motivated this to begin with? Or are these questions that we even bother asking anymore?

I remember the first time someone showed me the Snapchat app -- it would make them look like a cartoon dog, or all these other real-time overlays. I thought, 'jesus, so glad we're all getting advanced computer science degrees so we can work on utterly useless shit like this...'

13
Kevorkian 4 days ago 0 replies      
Lately, there has been lots of talk of deep learning applied to create tools which can generate requirements, design software, write code, create builds, and test builds, as well as help with deploying builds to various environments. I'm excited for the future developments possible with ML.
14
mozzarella 4 days ago 0 replies      
this is amazing, but 'professional photographers' aren't really the best arbiters of what a 'good' photograph is. Also, training on national parks binds the results to a naturally bland subject, no pun intended. While an amazing achievement, nothing shown here demonstrates ability beyond a photographer's assistant/digital tech adjusting settings to a client's tastes in Capture One Pro. Jon Rafman's 9 Eyes project comes to mind as something that produced interesting photographs, as does the idea to find a more rigorous panel of 'experts' (e.g. MoMA), or training the model on streets/different locations than national parks.
15
mozumder 4 days ago 0 replies      
If they're doing dodging/burning, then they could really use the processing on raw files instead of jpegs. The dynamic range is obviously limited when dodging/burning jpegs, as you can see from the flat clouds and blown highlights on the cows.
16
agotterer 3 days ago 0 replies      
Related: Arsenal (https://www.kickstarter.com/projects/2092430307/arsenal-the-...) is trying to build a hardware camera attachment that uses ML to find the perfect levels for your photo in realtime.
17
zemotion 3 days ago 0 replies      
I think some of these results are really lovely, the one at Interlaken is a perfect travel photo. Would be interesting to see more types of work this could apply to.

Saw a few people talking about retouching and studio work - I do a lot of studio shoots and retouching on my own, and would be happy to help or participate in projects. Feel free to reach out.

18
parshimers 4 days ago 1 reply      
This is cool but I really don't get why one could call this actually creating "Professional-Level" photographs. It's more like a very good auto-retouch. There's still the matter of someone actually being there, realizing it is a beautiful place, and dragging a large camera with them and waiting for the right light.
19
campbelltown 3 days ago 0 replies      
The first thought after going through all these photos was: incredibly stilted. It's amazingly impressive, but the human photographer will always be able to capture the subtleties that AI will miss. But very cool nonetheless.
20
descala 3 days ago 0 replies      
Instead of augmented reality I would call this "distorted reality". People will prefer to visit places via Street View rather than being there. Real reality is uglier.
21
mtgx 4 days ago 1 reply      
Great, now all we need is specialized machine learning inference accelerators in our mobile phones. I wonder if Google has even considered making a mobile TPU for its future Pixel phones.
22
tuvistavie 4 days ago 0 replies      
Up to what point can the output be controlled? Can complex conditions be created? E.g. a lake with a mountain background during the evening.
23
k__ 4 days ago 0 replies      
Is deep learning comparable to perceptual exposure?
24
wingerlang 3 days ago 0 replies      
In the future maybe we can just hook up a drone to this and have it fly around taking nice pictures.
25
BasDirks 3 days ago 0 replies      
I find the colors in the results images consistently worse than in the original images.
26
known 3 days ago 0 replies      
ML = Wisdom of Crowds
27
seany 4 days ago 0 replies      
Would be interesting to see how well you could train this kind of thing off of a large catalog of Lightroom edit data, to then mimic a specific editor's style.
28
anigbrowl 4 days ago 0 replies      
For example, whether a photograph is beautiful is measured by its aesthetic value, which is a highly subjective concept.

Oh really.

29
olegkikin 4 days ago 2 replies      
[deleted]
30
cooervo 3 days ago 0 replies      
wow automation isn't leaving any fields untouched
31
jonbarker 4 days ago 1 reply      
From the article, the caption of the first picture was interesting: "A professional(?) photograph of Jasper National Park, Canada." Is that the opening scene from The Shining? If so, I wonder why the question mark; is Stanley Kubrick not a professional photographer?
19
Used GPUs flood the market as Ethereum's price drops below $150 overclock3d.net
402 points by striking  1 day ago   320 comments top 29
1
DanBlake 1 day ago 11 replies      
Just took a quick look at this. If you are located in the 'mining valley' of Washington, where power is ~2c/kWh, you are still getting healthy profits.

A computer with 7 GTX 1070 graphics cards should produce ~230 MH/s and draw 1 kW. This would cost approximately $30/month in power, factoring in kW demand + cooling.

The above setup will currently generate $385/month in ETH.

So basically for miners who are in the right spot with the right facility, this is still profitable. The question is of course for how long. You also need to factor in the cost of equipment, datacenter, employees and difficulty/price.

But even if you dont have a facility in washington and just mine from your apartment, your power cost would probably be $100 a month. So its still 'profitable', just not nearly as much as it was in the run up.

Cliffnotes: 'professional' miners don't care. Even with the 'crash' today, they are making more per day than they were before the entire run-up. For instance, the 'worst' time for mining was December 2016, when you would only make $7.50 a day gross in ETH.
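
To make the arithmetic above reproducible, here is a quick Python sketch. The hash rate, draw, and revenue figures come from the comment itself (the revenue is a market snapshot, not derivable from the formula), and the ~12c/kWh apartment rate is an assumed typical residential tariff.

  # Back-of-the-envelope mining margin from the numbers in this comment.
  def monthly_power_cost(draw_kw, usd_per_kwh, hours=24 * 30):
      return draw_kw * hours * usd_per_kwh  # kWh per month times price

  RIG_DRAW_KW = 1.0          # 7x GTX 1070 rig, per the comment
  ETH_REVENUE_MONTH = 385.0  # USD/month snapshot, per the comment

  for label, rate in [("Washington (2c/kWh)", 0.02), ("apartment (12c/kWh)", 0.12)]:
      cost = monthly_power_cost(RIG_DRAW_KW, rate)
      print(f"{label}: power ~${cost:.0f}/mo, margin ~${ETH_REVENUE_MONTH - cost:.0f}/mo")

  # Washington: ~$14/mo raw power (the comment's $30 adds demand charges and
  # cooling); apartment: ~$86/mo, in line with the ~$100 estimate above.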

2
abalone 1 day ago 3 replies      
All I can think of is the careless environmental impact of all that dirty electricity consumption. For, let's be honest, a mostly speculative activity.

One cryptocurrency crashes, another gets hyped up, and the computational cycle repeats. When will it end.

3
zanny 1 day ago 2 replies      
On the bright side, this has been a great test of Ethereum's scalability. Which isn't great, but when this mining craze dies down I won't hesitate to run ethminer when I'm not home for a little extra dough.

What I would really expect is an overreaction to the price crash, which means the difficulty rate might drop a lot. At this point, doing what a lot of people do with bitcoin - mining small amounts for a long period of time and just holding it until it reaches all time highs to cash out - is probably really easy money.

Probably most relevantly is how crypto valuations are bound together. Bitcoin is also down about 20% from its ATH, and will certainly drop more as long as eth pulls it down. The entire market will rise and fall on the hype of just one blockchain. Coins nobody even cared about like peercoin saw 5x returns on miners during this eth bubble.

4
crypt1d 1 day ago 1 reply      
It's mostly the RX series that is being sold off, and the reason for that is the ever-increasing Ethereum DAG size. I don't know the specifics, but due to the DAG size the ETH hash rate on AMD RX400/500s is starting to slowly drop, and will be behind the performance of their Nvidia counterparts in a few months' time.

(Source: I run a mining operation.)

5
geff82 1 day ago 3 replies      
And then they'll buy back at horrendous prices when the price goes up again? Seems like shortsighted people do this. At least they could play some tremendous video games in the meantime ;)
6
strictnein 23 hours ago 2 replies      
I sold my hardware last week. GTX 1060s, a 1080, an AMD R9 290X, some other stuff.

Unless I'm missing something, there's no huge flood of video cards on eBay. There's maybe ~20% more than there was a week ago. All told, for the in-demand mining hardware, you're only talking about a couple thousand cards.

7
Thriptic 1 day ago 2 replies      
A quick search of eBay shows no good deals on GTX 1070s. Used cards are selling for what I bought my cards new for a month ago, or more ($380).
8
zo7 22 hours ago 2 replies      
Interesting: looking at the price graph, Ethereum's price seems to correlate with Bitcoin's, which lost about 20% of its value ($500) recently. In case anyone's wondering, the crash seems mainly driven by anxiety over an upcoming blockchain fork splitting the currency in two next month.

http://www.zerohedge.com/news/2017-07-15/bitcoin-battered-be...

9
schiffern 1 day ago 3 replies      
Off topic, but the "ETH/USD" label on the price graph bothers me. Shouldn't it be USD/ETH?

https://www.overclock3d.net/gfx/articles/2017/07/16083553143...

150 ETH/USD would mean that you can get 150 coins per 1 USD. On the other hand, 150 USD/ETH correctly captures the mathematical relationship.

10
corporateslave2 1 day ago 3 replies      
GPU trading? More profitable than buying the underlying currency, since GPUs in relatively good condition always hold a certain value?

High levels of correlation with BTC and ETH, along with other cryptocurrency, but a floor on how low it can go.

Pays off while holding by mining coins. Much like a dividend.

11
dawnerd 1 day ago 1 reply      
Good, maybe video cards will start to come back in stock at their MSRP.
12
vortico 1 day ago 3 replies      
Sorry if this is a dumb question, but does a single party control the difficulty level of Ethereum and Bitcoin? If so, it seems like they have massive control over the market. If not, how does it work?
13
j_s 1 day ago 1 reply      
In case anyone is not aware, there are user-friendly tools (that take their cut) which ensure maximum mining profitability for available hardware.

NiceHash, MinerGate, Awesome Miner and others - many have an affiliate program and fight against botnets (and antivirus often block the actual mining programs they download).

14
rdl 1 day ago 0 replies      
At least the NV cards, if they become non-viable for crypto mining, are useful for a lot of other GPU computation (or just as graphics cards).
15
justforFranz 4 hours ago 0 replies      
Cryptocurrency noob here. Isn't the ability to "mine" a cryptocurrency a design failure of the cryptocurrency?
16
horusthecat 13 hours ago 1 reply      
Does anyone know where the money that initially went into the cryptocoins during the run-up this year came from, or where it went?
17
wunderg 1 day ago 1 reply      
I thought Ethereum would move to Proof of Stake from Proof of Work, which would make mining obsolete.

https://www.ethnews.com/proof-of-work-vs-proof-of-stake-expl...

18
nsxwolf 21 hours ago 2 replies      
I see an eBay buy-it-now for four 8GB RX 480s for $360. $90 a piece seems pretty crazy low - does that mean there's a high likelihood of hardware failure after these things were used for mining?
19
kushankpoddar 20 hours ago 2 replies      
There is a point of view out there that Europe's higher-than-average reliance on renewables has bumped up electricity prices there and contributed to making the place less competitive for industries. You can see that argument in action when European miners are losing out to others due to high power costs.
20
Nursie 16 hours ago 2 replies      
Excellent, my partner is looking for a card right now; a (lightly) used 1060, 1070 or 580 might be just the thing!

In the meantime, my machine which is a gaming rig that is mostly idle, may as well do a bit of mining...

21
stOneskull 20 hours ago 0 replies      
this 'flooding the market' claim seems to be made up.
22
tossandturn 1 day ago 2 replies      
Worst place to ask this, I know, but... If I wanted to upgrade my old machine that currently has an HD 4770 (PCI Express 2.0 x16), where exactly could I find a worthwhile upgrade for less than $20?
23
Shinchy 16 hours ago 0 replies      
Annoying as all hell, I am in the market for a 1080ti and the prices have rocketed up in the past two weeks.
24
Temasik 12 hours ago 0 replies      
Mining has no future
25
waspear 1 day ago 0 replies      
Ethereum's Casper protocol upgrade (Proof of Stake) might have a long-term effect on the GPU market as well.
26
foota 1 day ago 0 replies      
Maybe I should have sold the R9 Fury I got for gaming a few months ago...
27
aussieguy123 20 hours ago 1 reply      
Anyone doing deep learning?
28
the_end 1 day ago 3 replies      
29
ryanSrich 1 day ago 3 replies      
Maybe this is an incredibly uneducated comment, but won't they have to buy those GPUs back when the price of ETH inevitably goes up again? This is all just FUD from August 1st, ICO instability, and lack of Ethereum use cases. All of these issues have resolutions planned. So one would be stupid to think Ethereum stays at $150 for even a year.
20
Alibaba Cloud alibabacloud.com
518 points by paulmach  4 days ago   353 comments top 41
1
david90 3 days ago 6 replies      
> https://www.alibabacloud.com/customers/strikingly

> As an international website building platform, obtaining an ICP license for China is very important to our users. The actual process of obtaining an ICP license though is quite complex. With Alibaba Clouds built-in and easy-to-follow ICP application process, it has helped with our user experience a lot.

Seems like its killer feature is making the China ICP license easy.

2
JohnTHaller 3 days ago 6 replies      
Don't serve any JavaScript from within China to users outside of China. Remember when the Chinese government used the Great Firewall of China to modify Baidu analytics JavaScript passing through it to set up an international DDoS against GitHub? Hosting your stuff in mainland China for consumption outside makes you a party to that happening again in the future.
3
kevinsd 3 days ago 21 replies      
There's a feature of Alibaba Cloud that I've been missing on AWS, with no easy replacement: their Object Storage Service (OSS) provides an endpoint for transforming images (resizing/thumbnailing, compressing, etc). Put behind a CDN (which is also integrated into the feature), this solves virtually all the image-processing requirements of a common web or mobile application. https://www.alibabacloud.com/help/doc-detail/44687.htm?spm=a...
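
For the curious, here is a minimal sketch of how such an endpoint is typically invoked, assuming the x-oss-process query-parameter style from the linked docs; the bucket, region, and object names are hypothetical placeholders.

  # Fetch a resized, recompressed derivative directly from OSS.
  import requests

  url = ("https://my-bucket.oss-cn-hangzhou.aliyuncs.com/photos/cat.jpg"
         "?x-oss-process=image/resize,w_400/quality,q_80")  # thumbnail + compress

  resp = requests.get(url, timeout=10)
  resp.raise_for_status()
  with open("cat_400w.jpg", "wb") as f:
      f.write(resp.content)  # 400px-wide, quality-80 JPEG
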
4
EZ-E 3 days ago 8 replies      
A concern is that if ever your product on this platform gets big, friction with the (often unpredictable) Chinese gov and policies will become a liability.

Example: your product displays news. Some of it might be considered unacceptable by the Chinese government and cause you to get shut down or blocked.

5
gentro 3 days ago 3 replies      
Don't forget there's also:

Tencent Cloud: https://www.qcloud.com/?lang=en
Baidu Cloud (Chinese only): https://cloud.baidu.com/
Netease/163 Cloud (Chinese only): https://www.163yun.com/

I use Tencent Cloud for a small China-oriented SaaS. The SDK APIs are kind of a mess/lacking, but the service is otherwise pretty reliable and easy to use.

6
nodesocket 3 days ago 5 replies      
Forgive me, but why not just use Google Compute Engine in the Taiwan region? Can US citizens even signup and use Alibaba Cloud? I'm very skeptical about using a Chinese based cloud provider given the current world situation.

Also, back of the napkin math, but GCE is even cheaper.

  Alibaba Cloud ($79.00/mo)
    2 Core CPU
    8GB Memory
    80GB SSD

  Google Compute Engine - Taiwan Region ($69.81/mo)
    n1-standard-2 (2 vCPUs / 7.5 GB Memory)
    80 GB SSD disk

7
tristanj 3 days ago 4 replies      
What's new about this? Alibaba Cloud has been around for 8 years; it's called Aliyun in China (literally Ali-cloud). They didn't build datacenters in 7 countries overnight.

Could anyone explain the sudden excitement about their service?

8
iliketosleep 3 days ago 3 replies      
It looks like a great offering, but it also means that in all likelihood, you'd be sharing your data with the Chinese government - which may or may not be a problem depending on your business.
9
mitchellh 3 days ago 1 reply      
If you're interested in trying this out in a more advanced capability beyond the UI, Alibaba maintains official support for Terraform: https://github.com/alibaba/terraform-provider

(Note: I work on Terraform)

10
dis-sys 3 days ago 1 reply      
It is a pretty cool offering: for $30/year, you get to experience the GFW while sitting comfortably in your fancy Bay Area house.

So far, you can't really claim that you've designed a global platform, because your stuff clearly doesn't work in mainland China. Think about it - 95% of all US services you can think of do _not_ work there: google.com/GCE/most AWS/golang.org/docker etc. For $30/year you get a chance to battle the GFW and the ability to build something that truly works in all major markets.

11
MikeDoesCode 3 days ago 0 replies      
When I was working with AliCloud I ran into an issue: during peak hours, we'd want to scale up, and they'd be "out of stock" of virtual instances... Which is fine if you have the budget to keep a load of instances running, but if your spike goes over what you expected, there's no resource left for you to scale up into. Not sure if that's still the case, but scalability is perhaps the biggest draw of the cloud for me, and it seemed AliCloud didn't really get that right.
12
analyst74 3 days ago 4 replies      
Wow, the offering seems fairly comprehensive; they even have 2 data centers in the US. Has anybody used them? How do they compare to AWS or GCP?
13
gondo 3 days ago 1 reply      
"Great Firewall as a service" :)
14
zbruhnke 3 days ago 1 reply      
It seems interesting that no one notes just how much their wording is almost identical to AWS's - they even call their "container service" ECS instances. That feels like something that will sit poorly with Amazon.
15
allan_s 3 days ago 1 reply      
After reading this comparison between the AWS S3 APIs and the Aliyun OSS APIs: https://www.alibabacloud.com/forum/read-148

I've been wondering for a while

  * Does it mean that if I use boto3 (the Python library for AWS), but with a different endpoint (which I know can be overridden, as we do this for our CI tests) and only do basic operations (put content/get content), I do not have to switch to another library?
  * The comparison does not mention things like presigned URLs (in order to share private content for a limited amount of time); what is the situation on them for OSS?
  * Do Aliyun engineers work on closing the gap?
As S3 is a very popular (if not the most popular) AWS-specific service (compared to things like RDS, which are transparent in your application code), at least for me, not having to change libraries in my code would be a big cost saver.
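
For anyone who wants to test the first question empirically, a sketch of the endpoint-override idea with boto3 follows. The endpoint URL, bucket, and credentials are placeholders, and whether OSS accepts these exact S3-protocol calls is precisely the open question above - treat this as an experiment, not a confirmed recipe.

  # Point boto3 at a non-AWS, S3-compatible endpoint; same calling code as S3.
  import boto3

  s3 = boto3.client(
      "s3",
      endpoint_url="https://oss-cn-hangzhou.aliyuncs.com",  # hypothetical override
      aws_access_key_id="YOUR_KEY_ID",
      aws_secret_access_key="YOUR_SECRET",
  )

  # Basic operations: put content / get content.
  s3.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"hello")
  body = s3.get_object(Bucket="my-bucket", Key="hello.txt")["Body"].read()

  # Presigned URL to share private content for a limited time.
  url = s3.generate_presigned_url(
      "get_object",
      Params={"Bucket": "my-bucket", "Key": "hello.txt"},
      ExpiresIn=3600,  # seconds
  )
  print(url)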

16
djsumdog 3 days ago 0 replies      
I always thought it'd be funny to set up Tor exit nodes within China and tunnel traffic to them. People get on Tor to get past censorship, and if they connect to an exit node within China, suddenly they can't get to anything. It'd be the ultimate asshole move/trolling.
17
always_good 3 days ago 1 reply      
Not a great first impression: Never got the confirmation email after two attempts to two different email services.
18
jakozaur 3 days ago 0 replies      
Whenever a new cloud appears, its pricing mimics AWS, and Alibaba Cloud is no different. Compute and storage are cheap; egress to the internet is expensive.

If you want to create a new cloud, I would rather shoot for cheaper egress, as this may give you an edge in many data-transfer-intensive applications.

19
popopobobobo 1 day ago 0 replies      
Folks, time to ramp up the racism. It is extremely politically correct to fling all of our opinions onto you know who.
20
uptownhr 3 days ago 1 reply      
21
wanghq 3 days ago 2 replies      
Seeing a few comments about how Alibaba Cloud is doing: it's ranked in the 4th position on Gartner's latest magic quadrant.

http://www.zdnet.com/article/gartner-puts-aws-microsoft-azur...

Disclosure: I work for Alibaba Cloud. Drop me an email (in my profile) if you're interested in the opportunities. Yes, we have an office in Seattle (Bellevue).

22
davidgerard 3 days ago 0 replies      
We use AWS a lot and we're using this for our China-based stuff.

tl;dr it's pretty good, if you know AWS this'll be OK, their support is competent.

23
atemerev 3 days ago 0 replies      
In the fine days of Chinese Bitcoin trading domination, I used them to host my algo trading servers (as OKCoin's servers were also hosted there).

But now, there is no need.

24
crispytx 3 days ago 0 replies      
How unoriginal can you get? We're sort of like the Amazon of China, why don't we get into cloud computing too?
25
michael-go 3 days ago 0 replies      
The OLAP "Analytics DB" looks interesting https://www.alibabacloud.com/product/analytic-db

Wonder what OLAP features it provides beyond the managed & massively-parallel SQL in BigQuery.

26
sangd 3 days ago 4 replies      
Clicking Buy for web hosting leads me to a Chinese website: https://ews.console.aliyun.com/buy.htm?spm=a3c0i.149865.7761...

This doesn't look like a serious contender with AWS.

27
wickedlogic 3 days ago 0 replies      
Something that struck me is that the wording is surprisingly unwordy for a cloud provider...

- "based on the instance rental fee"- "Tell us what you think about this page and win $10 credit! "- "Instance Fee, Storage fee and Public Traffic fee"

28
Cub3 3 days ago 0 replies      
So I tried to sign up for the $300 credit.

Max password length 20 characters.... um ok.

Fill in details, hit sign up button.

"Network busy, please try again later" - for your sign-up form, really?

Not what I want to see when onboarding a hosting provider

29
strin 3 days ago 0 replies      
Interesting. That means your data gets extra Great Firewall protection :)
30
liuxiaobo 3 days ago 0 replies      
All the information in Alibaba is controlled and monitored by the Chinese government. Never trust a company controlled by an autocracy.
31
jdubs 3 days ago 2 replies      
The only European region is in Germany which makes regulatory requirements a bit more difficult. I wonder why they went there rather than Ireland.
32
bArray 3 days ago 0 replies      
I must be missing something - are these prices considered competitive? At least for a straight VM I think I can do better?
33
gobengo 3 days ago 1 reply      
Do you think this is powered by OpenStack? I know Alibaba uses/used OpenStack internally.
34
Punisher 3 days ago 0 replies      
The offering seems fairly comprehensive, they even have 2 data centers in US. Has anybody used them?
35
xiconfjs 3 days ago 1 reply      
"The peak bandwidth for ECS instances from the ECS Package is 50Mbps. This cannot be modified by the user."

Sounds strange to me.

36
du_bing 2 days ago 0 replies      
Hi there, I am a web developer in China. If you want to build websites for Chinese users and pass the ICP examination, you can contact me and I will help you with that.
37
nkkollaw 3 days ago 0 replies      
I tried signing up and it says "Network busy, please try again later"..?
38
dyu- 3 days ago 2 replies      
Note that they need your credit card info even with their free 1 year plan.
39
banach 3 days ago 4 replies      
Can I host a blog that criticizes Putin on one of these servers?
40
lucaspottersky 3 days ago 0 replies      
well, I guess China isn't that cheap anymore...
41
5_minutes 3 days ago 3 replies      
This is pretty neat, $30/year. I find it admirable from them to do this (mimicking AWS).
21
Students Are Better Off Without a Laptop in the Classroom scientificamerican.com
425 points by thearn4  6 days ago   243 comments top 59
1
zeta0134 6 days ago 5 replies      
Oh, okay, I thought the study was going to be on the benefits of attempting to use the laptop itself for classroom purposes, not for social media distractions. This would be more accurately titled, "Students Are Better Off Without Distractions in the Classroom." Though I suppose, it wouldn't make a very catchy headline.

I found my laptop to be very beneficial in my classroom learning during college, but only when I made it so. My secret was to avoid even connecting to the internet. I opened up a word processor, focused my eyes on the professor's slides or visual aids, and typed everything I saw, adding notes and annotations based on the professor's lecture.

This had the opposite effect of what this article describes: by focusing my distracted efforts on formatting the document and making my notes more coherent, I kept myself focused, and could much more easily engage with the class. Something about the menial task of taking the notes (which I found I rarely needed to review) prevented me from losing focus and wandering off to perform some unrelated activity.

I realize my experience is anecdotal, but then again, isn't everyone's? I think each student should evaluate their own style of learning, and decide how to best use the tools available to them. If the laptop is a distraction? Remove it! Goodness though, you're paying several hundred (/thousand) dollars per credit hour, best try to do everything you can to make that investment pay off.

2
makecheck 6 days ago 9 replies      
If students aren't engaged, they aren't going to become star pupils once you take away their distractions. Perhaps kids attend more lectures than before, knowing that they can always listen in while futzing with other things (and otherwise, they may skip some of the classes entirely).

The lecture format is what needs changing. You need a reason to go to class, and there was nothing worse than a professor showing slides from the pages of his own book (say) or droning through anything that could be Googled and read in less time. If there isn't some live demonstration, lecture-only material, regular quizzes or another hook, you can't expect students to fully engage.

3
ourmandave 6 days ago 5 replies      
This reminds me of the running gag in some college movie where the first day all the students show up.

The next cut some students come to class, put a recorder on their desk and leave, then pick it up later.

Eventually there's a scene of the professor lecturing to a bunch of empty desks with just recorders.

And the final scene there's the professor's tape player playing to the student's recorders.

4
stevemk14ebr 6 days ago 2 replies      
I think this is a highly personal topic. As a student myself, I find a laptop in class very nice: I can type my notes faster and organize them better. Most of my professors' lectures are scatterbrained, and I frequently have to go back to a previous section and annotate or insert new sections. With a computer I just go back and type; with pen and paper I have to scribble, or write in the margins. Of course computers can be distractions, but that is the student's responsibility. Let natural selection take its course and stop hindering my ability to learn how I do best (I am a CS major, so computers are >= paper to me). If you cannot do your work with a computer, then don't bring one yourself; don't ban them for everyone.
5
imgabe 6 days ago 4 replies      
I went to college just as laptops were starting to become ubiquitous, but I never saw the point of them in class. I still think they're pretty useless for math, engineering, and science classes where you need to draw symbols and diagrams that you can't easily type. Even for topics where you can write prose notes, I always found it more helpful to be able to arrange them spatially in a way that made sense rather than the limited order of a text editor or word processor.
6
njarboe 6 days ago 1 reply      
This is a summary of an article titled "Logged In and Zoned Out: How Laptop Internet Use Relates to Classroom Learning" published in Psychological Science in 2017; The DOI is 10.1177/0956797616677314 if you want to check out the details.

Abstract: Laptop computers are widely prevalent in university classrooms. Although laptops are a valuable tool, they offer access to a distracting temptation: the Internet. In the study reported here, we assessed the relationship between classroom performance and actual Internet usage for academic and nonacademic purposes. Students who were enrolled in an introductory psychology course logged into a proxy server that monitored their online activity during class. Past research relied on self-report, but the current methodology objectively measured time, frequency, and browsing history of participants' Internet usage. In addition, we assessed whether intelligence, motivation, and interest in course material could account for the relationship between Internet use and performance. Our results showed that nonacademic Internet use was common among students who brought laptops to class and was inversely related to class performance. This relationship was upheld after we accounted for motivation, interest, and intelligence. Class-related Internet use was not associated with a benefit to classroom performance.

7
shahbaby 6 days ago 1 reply      
"Thus, there seems to be little upside to laptop use in class, while there is clearly a downside."

Thanks to BS articles like this that try to overgeneralize their results, I was unsure if I "needed" a laptop when returning to school.

Got a Surface Book, and here's what I've experienced over the last 2 semesters.

- Going paperless, I'm more organized than ever. I just need to make sure I bring my Surface with me wherever I go and I'm good.

- Record lectures, tutorials, office hours, etc. Although I still take notes to keep myself focused, I can go back and review things with 100% accuracy thanks to this.

- Being at 2 places at once. ie: Make last minute changes before submitting an assignment for class A or attend review lecture to prepare for next week's quiz in class B? I can leave the surface in class B to record the lecture while I finish up the assignment for class A.

If you can't control yourself from browsing the internet during a lecture then the problem is not with your laptop...

8
baron816 6 days ago 0 replies      
Why are lectures still being conducted in the classroom? Students shouldn't just be sitting there copying what the teacher writes on the board anyway. They should be having discussions, working together or independently on practice problems, teaching each other the material, or just doing anything that's actually engaging. Lecturing should be done at home via YouTube.
9
zengid 6 days ago 0 replies      
Please excuse me for relating an experience, but it's relevant. To get into my IT grad program I had to take a few undergrad courses (my degree is in music, and I didn't have all of the pre-reqs). One course was Intro to Computer Science, which unfortunately had to be taught in the computer lab used for the programming courses. It was sad to see how undisciplined the students were. Barely anyone paid attention to the lectures as they googled the most random shit (one kid spent a whole lecture searching through images of vegetables). The final exam was open-book. I feel a little guilty, but I enjoyed seeing most of the students nervously flip through the chapters the whole time, while it took me 25 minutes to finish (the questions were nearly identical to those from previous exams).
10
Kenji 6 days ago 0 replies      
If you keep your laptop open during class, you're not just distracting yourself, you're distracting everyone behind you (that's how human attention works - if you see a bright display with moving things, your attention is drawn towards it), and that's not right. That's why at my uni there was an unspoken (de-facto) policy that if you keep your laptop open during lectures, you sit in the back rows, especially if you play games or do stuff like that. It worked great - I was always in the front row with pen & paper.

However, a laptop is very useful to get work done during breaks or labs when you're actually supposed to use it.

11
rdtsc 6 days ago 2 replies      
I had a laptop and left it home most of the time. And just stuck with taking notes with a pen and sitting upfront.

I took lots of notes. Some people claim it's pointless and distracts from learning, but for me the act of taking notes is what helped solidify the concepts better. Heck, due to my horrible handwriting I couldn't even read some of the notes later. But it was still worth it. Typing them out just wasn't the same.

12
alkonaut 6 days ago 0 replies      
This is the same as laptops not being allowed in meetings. A company where it's common for meeting participants to "take notes" on a laptop is dysfunctional. Laptops need to be banned in meetings (and smartphones in meetings and lectures).

Also re: other comments: A video lecture is to a physical lecture what a conference call is to a proper meeting. A professor rambling for 3h is still miles better than watching the same thing on YouTube. The same holds for TV versus watching a film on a movie screen.

Zero distractions and complete immersion. Maybe VR will allow it some day.

13
brightball 6 days ago 1 reply      
Shocker. I remember being part of Clemson's laptop pilot program in 1998. If you were ever presenting you basically had to ask everyone to close their laptops or their eyes would never even look up.
14
calvano915 2 days ago 0 replies      
Just putting in my 2c that as a senior in a dual BS and a minor, technology has allowed me to be more successful than paper ever could. I can get more notes written, can draw (I've used a tablet from day one, upgraded to an SP4 recently), import everything to keep it all organized, etc. These arguments about tech vs. paper are the same as the ones about learning styles. People are different, and what works for them will be different. Professors should allow any tools that are effective for some to be used, as long as a student is not abusing the privilege by distracting from or harming others' learning. I've let my mind wander (by means of browsing Facebook or some other distraction) in maybe 10 lectures max over the last 3 years of school. The rest of the time, I had OneNote open and was fully engaged. When students aren't engaged, they will find distraction, be it on paper or tech. STOP TELLING ME THAT A COMPUTER WILL LOWER MY GRADES. I have a 3.84 and rising, and I refuse to change what makes me successful.
15
tsumnia 6 days ago 1 reply      
I think it's a double-edged sword; not just paper > laptop or laptop > paper. As many people have already stated, it's about engagement. Since coming back for my PhD, I've subscribed to the pencil/paper approach as a simple show of respect to the instructor. Despite what we think, professors are human and flawed, and having been in their shoes, it can be disheartening to not be able to feed off your audience.

That being said, you can't control them; however, I like to look at different performance styles. What makes someone binge-watch Netflix episodes but want to nod off during a lecture? Sure, one has less cognitive load, but replace the Netflix binge with anything. People are willing to engage, as long as the medium is engaging (this doesn't mean easy or funny, simply engaging).

[Purely anecdotal, opinion-based discussion] This is one of the reasons I think flipping the classroom does work; they can't tune out. But if it's purely them doing work, what's your purpose there? To babysit? There needs to be a happy medium between work and lecture.

I like to look at the class time in an episodic structure. Pick a show and you'll notice there's a pattern to how the shows work. By maintaining a consistency in the classroom, the students know what to expect.

To tie it back to the article, the laptop is a great tool to use when you need them to do something on the computer. However, they should be looking at you, and you should be drawing their attention. Otherwise, you're just reading your PowerPoint slides.

16
wccrawford 6 days ago 3 replies      
I'd be more impressed if they also did the same study with notepads and doodles and daydreams, and compared the numbers.

I have a feeling that people who aren't paying attention weren't going to anyhow.

However, I'd also guess that at least some people use the computer to look up additional information instead of stopping the class and asking, which helps everyone involved.

17
emptybits 6 days ago 0 replies      
It makes sense that during a lecture, simple transcription (associated with typing) yields worse results than cognition (associated with writing). So pardon my ignorance (long out of the formal student loop):

Are students taught how to take notes effectively (with laptops) early in their academic lives? Before we throw laptops out of classrooms, could we be improving the situation by putting students through a "How To Take Notes" course, with emphasis on effective laptopping?

It's akin to "how to listen to music" and "how to read a book" courses -- much to be gained IMO.

18
LaikaF 6 days ago 0 replies      
My high school did the one-laptop loan-out thing (later got sued for it) and I can tell you it was useless as a learning tool. At least in the way intended. I learned quite a bit, mainly about navigating around the blocks and rules they put in place. In high school my friends and I ran our own image board, learned about reverse proxying via meebo repeater, hosted our own domains to dodge filtering, and much, much more. As far as what I used them for in class... if I needed to take notes I was there with notebook and pen. If I didn't, I used the laptop to do homework for other classes while in class. I had a reputation among my teachers for handing in assignments the day they were assigned.

In college I slid into the pattern they saw here. I started spending more time on social media, paying less attention in class, slacking on my assignments. As my burnout increased, the actual class times became less a thing I learned from and more just something I was required to sit in. One of my college classes literally just required me to show up. It was one of the few electives in the college for a large university. The students were frustrated they had to be there, and the teacher was tired of teaching students who just didn't care.

Overall I left college burnt out and pissed at the whole experience. I went in wanting to learn it just didn't work out.

19
Fomite 6 days ago 1 reply      
Just personally, for me it was often a choice between "Laptop-based Distractions" or "Fall Asleep in Morning Lecture".

The former was definitely the superior of the two options.

20
free_everybody 6 days ago 0 replies      
I find that having my laptop out is great for my learning, even during lectures. If something's not clear or I want more context, I can quickly look up some information without interrupting the teacher. Also, paper notes don't travel well. If everything is on my laptop and backed up online, I know that if I have my laptop, I can study anything I want. Even if I don't have my laptop, I could use another computer to access my notes and documents. This is a HUGE benefit.
21
kyle-rb 6 days ago 0 replies      
>students spent less than 5 minutes on average using the internet for class-related purposes (e.g., accessing the syllabus, reviewing course-related slides or supplemental materials, searching for content related to the lecture)

I wonder if that could be skewed, because it only takes one request to pull up a course syllabus, but if I have Facebook Messenger open in another tab, it could be receiving updates periodically, leading to more time recorded in this experiment.

22
BigChiefSmokem 6 days ago 0 replies      
I'll give you no laptops in the class if you give me no standardized testing and only four 15-20 minute lectures per day and let the kids work on projects the rest of the time as a way to prove their learning and experiences in a more tangible way.

Trying to fix the problem by applying only patches, as us technically inclined would say, always leads to horribly unreliable and broken systems.

23
jon889 5 days ago 0 replies      
I have had lectures where I have had a laptop/iPad/phone and ones where I've not had any. I did get distracted, but I found that if I didn't have, say, Twitter, I'd get distracted for longer. With Twitter I'd catch up on my news feed and then a few minutes later be back to concentrating. Without it I'd end up daydreaming and losing focus for 10-20 minutes.

The biggest problem isn't distractions, or computers and social media. It's that hour-long lectures are an awful method of transferring information. In my first year we had small groups of ~8 people and a student from 3rd/4th year, and we'd go through problems from the maths and programming lectures. I learnt much more in these.

Honestly, learning would be much improved if lectures were condensed into half-hour YouTube videos you can pause, speed up, and rewind. Then have smaller groups in which you can interact with the lecturers/assistants.

24
TazeTSchnitzel 6 days ago 0 replies      
> In contrast with their heavy nonacademic internet use, students spent less than 5 minutes on average using the internet for class-related purposes

This is a potential methodological flaw. It takes me 5 minutes to log onto my university's VLE and download the course materials. I then read them offline. Likewise, taking notes in class happens offline.

Internet use does not reflect computer use.

25
fatso784 6 days ago 0 replies      
There's another study showing that students around you with laptops harm your ability to concentrate, even if you're not on a laptop yourself. This is in my opinion a stronger argument against laptops, because it harms those not privileged enough to have a laptop. (not enough time to find study but you can find it if you search!)
26
dalbasal 5 days ago 0 replies      
I think there is a mentality shift that may come with digitizing learning which might help here.

The discussion on a topic like this can go two ways. (1) Talk about how a laptop can help if students use it to do xyz and avoid cba. It's up to the student; you can bring a horse to water... (2) Compare outcomes, statistically or quasi-statistically. I.e., if laptops are banned we predict an N% increase in Z, where Z is (hopefully) a good proxy for learning or enjoyment or something else we want. I.e., think about improving a college course the same way we think about optimizing a dating site.

On a MOOC, the second mentality will tend to dominate. Both have downsides, especially when applied blindly (which tends to happen). In any case, new thinking tends to help.

27
homie 6 days ago 0 replies      
Instructors are also better off without computers in the classroom. Lecture has been reduced to staring at a projector while each and every student's eyes roll to the back of their skull.
28
brodock 5 days ago 1 reply      
Any research that treats students as a homogeneous group is flawed. People fall (more or less) into one of about 7 different learning styles: https://www.learning-styles-online.com/overview/.

So making claims like "doing X works better than Y" is meaningless without pointing to a specific learning style.

That's why you hear some people defending writing on paper, while others prefer just hearing the lectures, and others perform better while discussing with peers (and some hate all of the other interactions and perform better by isolating and studying on their own... that last group is probably the one that will benefit the most from having a laptop available).

29
vblord 6 days ago 0 replies      
During indoor recess at my kids' school, kids don't eat their lunch and just throw it away because of the Chromebooks. There are only a few computers and they are first come, first served. Kids would rather go without lunch to be able to play on the internet for 20 minutes.
30
Zpalmtree 5 days ago 1 reply      
I like having a laptop at uni just because I can program when the lectures are boring. I find the material too easy in UK universities, in CS at least (dunno about other courses or countries); the amount of effort you need to get good marks, relative to the amount you're paying, is a bit silly, and mostly you'll learn more by yourself...

That said, if you're in a programming class, having a laptop to follow along and try out the concepts is really handy. When we were in a C++/ASM class, seeing the different ASM that GCC/G++ and Microsoft's C++ compiler spat out was quite interesting.

31
nerpderp83 6 days ago 1 reply      
Paying attention requires work; we need to be purposeful when using tools that are also distractions.
32
zokier 6 days ago 1 reply      
I love how any education-related topic brings the armchair pedagogues out of the woodwork. Of course, a big aspect there is that everyone has encountered some amount of education, and in particular both courses they enjoyed and courses they disliked. And there is of course the "think of the children" aspect.

To avoid making a purely meta comment: in my opinion the ship has already sailed; we are going to have computers in classrooms for better or worse. So the big question is how we can make the best use of that situation.

33
erikb 6 days ago 0 replies      
I'd argue that students are better off without a classroom as long as they have a laptop (and internet, but that is often also better at home/cafe than in the classroom).
34
thisrod 6 days ago 0 replies      
> First, participants spent almost 40 minutes out of every 100-minute class period using the internet for nonacademic purposes

I think that I'd be one of them; in the absence of a laptop, I'd spend that time daydreaming. How many people can really concentrate through a 100 minute nonstop lecture about differential geometry or the decline of the Majapahit empire?

35
zitterbewegung 6 days ago 0 replies      
When I was in college I would take notes with pen and paper in a notebook. I audited some classes taking notes on my laptop in LaTeX, but most of the time I used a notebook. Also, sometimes I would just go to class without a notebook and absorb the information that way. It also helped that I didn't have a smartphone with cellular data for half of the time I was in school.
36
kgilpin 6 days ago 0 replies      
It sounds like what students need are better teachers. I haven't been to school in a while but I had plenty of classes that were more interesting than surfing YouTube; and some that weren't.

The same is true for meetings at work. In a good session, people are using their laptops to look up contributing information. In a bad one... well... you know.

37
polote 6 days ago 0 replies      
Well, it depends on what you do in the classroom. When class is mandatory but you are not able to learn this way (by listening to a teacher), having a laptop can let you do other things and use your time efficiently, like doing some administrative work, sending email, coding...

Some students are, of course, better off with a laptop in the classroom.

38
jessepage1989 6 days ago 0 replies      
I find taking paper notes and then reorganizing on the computer works best. The repetition helps memorization.
39
_e 5 days ago 0 replies      
Politicians are also better off without a laptop during legislative sessions [0].

[0] http://www.snopes.com/photos/politics/solitaire.asp

40
marlokk 6 days ago 0 replies      
Students are better off with instructors who don't bore students into bringing out their laptops.
41
wh313 6 days ago 0 replies      
Could the intermittent requests to servers by running apps, say Facebook Messenger or WhatsApp, be tracked as social media use? Because they all use HTTPS, I don't see how the researchers distinguished between idle traffic and sending a message.
42
mark_l_watson 5 days ago 0 replies      
In what universe would it be a good idea for students to use laptops in class?

Use of digital devices should be limited because the very use of digital devices separates us from what is going on around us. Students should listen and take notes (in a notebook) as necessary.

43
qguv 5 days ago 0 replies      
Internet access, especially to Wikipedia, did wonders for me whenever the lecture turned to something I was already familiar with. That alone kept me from getting distracted and frustrated as I would in classes whose professors prohibited laptop use.
44
aurelianito 5 days ago 0 replies      
Even better, just remove the classroom surrounding the laptop. Now we can learn anything anywhere. Having to go take a class where a professor recites something is ridiculous.
45
Radle 5 days ago 0 replies      
If students think the class is boring enough, they'll watch YouTube; whether on the laptop or on their mobile is not really important.
46
catnaroek 5 days ago 0 replies      
This is why I like to program in front of a whiteboard rather than in front of my computer: to be more productive.
47
Glyptodon 6 days ago 2 replies      
I feel like the conclusion is a bit off base: that students lack the self-control to restrict the use of laptops to class-related activities is somehow a sign that the problem is the laptop and not the students? I think it's very possible that younger generations have big issues with self-control and instant gratification. But I think it's wrong to conclude that laptops are the faulty party.
48
Shinchy 5 days ago 0 replies      
I've always found the idea of taking a laptop to a lecture pretty rude. I'm there to give the person teaching my full attention, not stare at a laptop screen. So personally I never use them in any kind of lecturing/teaching environment, simply as a mark of respect.
49
dorianm 4 days ago 0 replies      
Pen and paper are the best. Also, Chromebooks are pretty cool.
50
jonbarker 5 days ago 0 replies      
Students need a GUI-less computer, like a minimalist Linux distro.
51
alistproducer2 6 days ago 0 replies      
"Duh" - anyone who's ever been in a class with a laptop.
52
exabrial 6 days ago 0 replies      
Students are best off with the least amount of distractions.
53
rokhayakebe 6 days ago 1 reply      
We really need to begin ditching most studies. We now have the ability to collect vast amounts of data and to draw conclusions from millions of endpoints, not just 10, 100, or 1000 pieces of information.
54
partycoder 6 days ago 1 reply      
I think VR will be the future of education.
55
ChiliDogSwirl 6 days ago 1 reply      
Maybe it would be helpful if our operating systems were optimised for working and learning rather than for selling us crap and mining our data.
56
Bearwithme 6 days ago 0 replies      
They should try this study again, but with laptops heavily locked down: disable just about everything that isn't productive and add a strict web filter. I am willing to bet the results would be much better for the kids with laptops. Of course, if you let them have free rein they are going to be more interested in entertainment than productivity.
57
microcolonel 6 days ago 1 reply      
58
bitJericho 6 days ago 2 replies      
The schools are so messed up in the US. Best to just educate children yourself as best you can. As for college kids, best to travel abroad.
59
FussyZeus 6 days ago 0 replies      
Disengaged and uninterested students will find a distraction; yes, perhaps a laptop makes it easier, but my education in distraction-seeking during middle school, well before laptops were even close to schools, shows that the lack of a computer in front of me was no obstacle to locating something more interesting to put my attention to.

The real solution is to engage students so they don't feel the urge to get distracted in the first place. Then you could give them completely unfiltered Internet and they would still be learning (perhaps even faster, using additional resources). You can't substitute for an urge to learn; even if you strap students to their chairs and pin their eyeballs open, with their individual fingers strapped down, it won't do anything. It just makes school less interesting, less fun, and less appealing, which by extension makes learning less fun, less appealing, and less interesting.

22
The Limitations of Deep Learning keras.io
501 points by olivercameron  9 hours ago   174 comments top 31
1
therajiv 8 hours ago 13 replies      
As someone primarily interested in interpretation of deep models, I strongly resonate with this warning against anthropomorphization of neural networks. Deep learning isn't special; deep models tend to be more accurate than other methods, but fundamentally they aren't much closer to working like the human brain than e.g. gradient boosting models.

I think a lot of the issue stems from layman explanations of neural networks. Pretty much every time DL is covered by media, there has to be some contrived comparison to human brains; these descriptions frequently extend to DL tutorials as well. It's important for that idea to be dispelled when people actually start applying deep models. The model's intuition doesn't work like a human's, and that can often lead to unsatisfying conclusions (e.g. the panda --> gibbon example that Francois presents).

Unrelatedly, if people were more cautious about anthropomorphization, we'd probably have to deal a lot less with the irresponsible AI fearmongering that seems to dominate public opinion of the field. (I'm not trying to undermine the danger of AI models here, I just take issue with how most of the populace views the field.)

2
toisanji 8 hours ago 4 replies      
There is some good information in there and I agree with the limitations he states, but his conclusion is completely made up.

"To lift some of these limitations and start competing with human brains, we need to move away from straightforward input-to-output mappings, and on to reasoning and abstraction."

There are tens of thousands of scientists and researchers studying the brain at every level, and we are making tiny dents in understanding it. We have no idea what the key ingredient is, nor whether it is 1 or many ingredients that will take us to the next level. Look at deep learning: we have had the techniques for it since the '70s, yet it is only now that we can start to exploit them. Some people think the next thing is the connectome, time, forgetting neurons, oscillations, number counting, embodied cognition, emotions, etc. No one really knows, and it is very hard to test; the only "smart beings" we know of are ourselves, and we can't really do experiments on humans because of laws and ethical reasons. Computer scientists like many of us here like to theorize on how AI could work, but very little of it is tested out. I wish we had a faster way to test out more competing theories and models.

3
CountSessine 7 hours ago 2 replies      
Surely we shouldn't rush to anthropomorphize neural networks, but we'd be ignoring the obvious if we didn't at least note that neural networks do seem to share some structural similarities with our own brains, at least at a very low level, and that they seem to do well with a lot of pattern-recognition problems that we've traditionally considered to be coincident with brains rather than logical systems.

The article notes, "Machine learning models have no access to such experiences and thus cannot "understand" their inputs in any human-relatable way". But this ignores a lot of the subtlety in psychological models of human consciousness. In particular, I'm thinking of Dual Process Theory as typified by Kahneman's "System 1" and "System 2". System 1 is described as a tireless but largely unconscious and heavily biased pattern recognizer - subject to strange fallacies and working on heuristics and cribs, it reacts to its environment when it believes that it recognizes stimuli, and notifies the more conscious "System 2" when it doesn't.

At the very least it seems like neural networks have a lot in common with Kahneman's "System 1".

4
siliconc0w 6 hours ago 0 replies      
A neat technique to help 'explain' models is LIME: https://www.oreilly.com/learning/introduction-to-local-inter...

There is a video here https://www.youtube.com/watch?v=hUnRCxnydCc

I think this has some better examples than the Panda vs Gibbon example in the OP if you want to 'see' why a model may classify a tree-frog as a tree-frog vs a billiard (for example). IMO this suggests some level of anthropomorphizing is useful for understanding and building models, as the pixels the model picks up aren't really too dissimilar to what I imagine a naive, simple mind might use (i.e. the tree-frog's goofy face). We like to look at faces for lots of reasons, but one of them probably is that they're usually more distinct, which is the same rough reason why the model likes the face. This is interesting (to me at least) even if it's just matrix multiplication (or uncrumpling high-dimensional manifolds) underneath the hood.
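
(For the curious, here is a minimal sketch of what running LIME on an image classifier looks like, using the open-source lime package. The classifier below is a stand-in invented for this example, and the argument names follow lime's docs at the time of writing; treat it as a sketch, not gospel.)

  import numpy as np
  from lime import lime_image  # pip install lime (pulls in scikit-image)

  # Stand-in classifier for the sketch: the "frog" probability simply
  # rises with the mean green channel. Real use would wrap your model's
  # predict(), mapping a batch (N, H, W, 3) -> (N, n_classes).
  def classifier_fn(images):
      images = np.asarray(images, dtype=float)
      green = images[..., 1].mean(axis=(1, 2)) / 255.0
      return np.stack([1.0 - green, green], axis=1)

  image = np.random.default_rng(0).integers(0, 256, (64, 64, 3)).astype(np.uint8)

  explainer = lime_image.LimeImageExplainer()
  # LIME perturbs superpixels, watches how the predictions move, and fits
  # a small local linear model to rank regions by influence on the label.
  explanation = explainer.explain_instance(
      image, classifier_fn, top_labels=1, num_samples=200)
  img, mask = explanation.get_image_and_mask(
      explanation.top_labels[0], positive_only=True,
      num_features=5, hide_rest=True)
  # img keeps only the superpixels that pushed the model toward the
  # label (the tree-frog's face, in the linked example).

The highlighted superpixels play exactly the role described above: the regions a simple local surrogate model says drove the prediction.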

5
meh2frdf 4 hours ago 2 replies      
Correct me if I'm wrong but I don't see that with 'deep learning' we have answered/solved any of the philosophical problems of AI that existed 25 years ago (stopped paying attention about then).

Yes we have engineered better NN implementations and have more compute power, and thus can solve a broader set of engineering problems with this tool, but is that it?

6
cm2187 7 hours ago 2 replies      
I think the requirement for a large amount of data is the biggest objection to the reflex "AI will replace [insert your profession here] soon" that many techies, in particular on HN, have.

There are many professions where there is very little data available to learn from. In some cases (self-driving), companies will invest large amounts of money to build this data, by running lots of test self-driving cars or paying people to create the data, and it is viable given the size of the market behind it. But the typical high-value intellectual profession is often a niche market with a handful of specialists in the world. Think of a trader of financial institution bonds, a lawyer specialized in cross-border mining acquisitions, a physician specializing in a rare disease, or a salesperson for aviation parts. What data are you going to train your algorithm with?

The second objection, probably equally important, also applies to "software will replace [insert your boring repetitive mindless profession here]", even after 30 years of broad adoption of computers. If you decide to automate some repetitive mundane tasks, you can spare the salary of the guys who did these tasks, but now you need to pay the salary of a full team of AI specialists / software developers. Now for many tasks (CAD, accounting, mailings, etc), the market is big enough to justify a software company making this investment. But there is a huge number of professions where you are never going to break even, and where humans are still paid to do stupid tasks that a software could easily do today (even in VBA), and will keep doing so until the cost of developing and maintaining software or AI has dropped to zero.

I don't see that happening in my lifetime. In fact, I am not even sure we are training that many more computer science specialists than 10 years ago. Again, it didn't happen with software for very basic things; why would it happen with AI for more complicated things?

7
ilaksh 2 hours ago 0 replies      
Actually there are quite a few researchers working on applying newer NN research to systems that incorporate sensorimotor input, experience, etc. and more generally, some of them are combining an AGI approach with those new NN techniques. And there has been research coming out with different types of NNs and ways to address problems like overfitting or slow learning/requiring huge datasets, etc. When he says something about abstraction and reasoning, yes that is important but it seems like something NNish may be a necessary part of that because the logical/symbolic approaches to things like reasoning have previously mainly been proven inadequate for real-world complexity and generally the expectations we have for these systems.

Search for things like "Towards Deep Developmental Learning" or "Overcoming catastrophic forgetting in neural networks" or "Feynman Universal Dynamical" or "Wang Emotional NARS". No one seems to have put together everything or totally solved all of the problems but there are lots of exciting developments in the direction of animal/human-like intelligence, with advanced NNs seeming to be an important part (although not necessarily in their most common form, or the only possible approach).

8
kowdermeister 5 hours ago 3 replies      
> In short, deep learning models do not have any understanding of their input, at least not in any human sense. Our own understanding of images, sounds, and language, is grounded in our sensorimotor experience as humans, as embodied earthly creatures.

Well, maybe we should train systems with all our sensory inputs first, the way newborns learn about the world. Then make these models available open source, like we release operating systems, so others can build on top of them.

For example we have ImageNet, but we don't have WalkNet, TasteNet, TouchNet, SmellNet, HearNet... or other extremely detailed sensory data recorded for an extended time. And these should be connected to match the experiences. At least I have no idea they are out there :)

9
debbiedowner 8 hours ago 0 replies      
People doing empirical experiments cannot claim to know the limits of their experimental apparatus.

While the design process of deep networks remains founded in trial and error, and there are no convergence theorems and approximation guarantees, no one can be sure what deep learning can do, and what it could never do.

10
pc2g4d 8 hours ago 1 reply      
Programmers contemplating the automation of programming:

"To lift some of these limitations and start competing with human brains, we need to move away from straightforward input-to-output mappings, and on to reasoning and abstraction. A likely appropriate substrate for abstract modeling of various situations and concepts is that of computer programs. We have said before (Note: in Deep Learning with Python) that machine learning models could be defined as "learnable programs"; currently we can only learn programs that belong to a very narrow and specific subset of all possible programs. But what if we could learn any program, in a modular and reusable way? Let's see in the next post what the road ahead may look like."

11
danielam 3 hours ago 1 reply      
"This ability [...] to perform abstraction and reasoning, is arguably the defining characteristic of human cognition."

He's on the right track. Of course, the general thrust goes beyond deep learning. The projection of intelligence onto computers is first and foremost wrong because computers are not able, not even in principle, to engage in abstraction, and claims to the contrary make for notoriously bad, reductionistic philosophy. Ultimately, such claims underestimate what it takes to understand and apprehend reality and overestimate what a desiccated, reductionistic account of mind and the broader world could actually accommodate vis-a-vis the apprehension and intelligibility of the world.

Take your apprehension of the concept "horse". The concept is not a concrete thing in the world. We have concrete instances of things in the world that "embody" the concept, but "horse" is not itself concrete. It is abstract and irreducible. Furthermore, because it is a concept, it has meaning. Computers are devoid of semantics. They are, as Searle has said ad nauseam, purely syntactic machines. Indeed, I'd take that further and say that actual, physical computers (as opposed to abstract, formal constructions like Turing machines) aren't even syntactic machines. They do not even truly compute. They simulate computation.

That being said, computers are a magnificent invention. The ability to simulate computation over formalisms -- which themselves are products of human beings who first formed abstract concepts on which those formalisms are based -- is fantastic. But it is pure science fiction to project intelligence onto them. If deep learning and AI broadly prove anything, it is that in the narrow applications where AI performs spectacularly, it is possible to substitute what amounts to a mechanical process for human intelligence.

12
eanzenberg 8 hours ago 2 replies      
This point is very well made: 'local generalization vs. extreme generalization.' Advanced NN's today can locally generalize quite well and there's a lot of research spent to inch their generalization further out. This will probably be done by increasing NN size or increasing the NN building-blocks complexity.
13
lordnacho 8 hours ago 2 replies      
I'm excited to hear about how we bring about abstraction.

I was wondering how a NN would go about discovering F = ma and the laws of motion. As far as I can tell, it has a lot of similarities to how humans would do it. You'd roll balls down slopes like in high school and get a lot of data. And from that you'd find there's a straight line model in there if you do some simple transformations.
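
(As a toy version of that ball-down-a-slope exercise, with simulated data and a made-up effective acceleration, the "simple transformation" is just regressing distance on time squared.)

  import numpy as np

  rng = np.random.default_rng(2)

  # Simulated "high school" data: ball released from rest on a slope,
  # distance measured at several times, with a bit of measurement noise.
  g_eff = 3.5                      # made-up slope acceleration, m/s^2
  t = np.linspace(0.5, 3.0, 20)
  d = 0.5 * g_eff * t**2 + rng.normal(scale=0.05, size=t.size)

  # The "simple transformation": plot d against t^2 and a straight line
  # appears; its slope recovers half the acceleration.
  slope, intercept = np.polyfit(t**2, d, 1)
  print(f"estimated acceleration: {2 * slope:.2f} m/s^2 (true {g_eff})")

The hard part, which the next paragraph points at, is deciding to try t^2 in the first place; the fit itself is trivial.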

But how would you come to hypothesise about what factors matter, and what factors don't? And what about new models of behaviour that weren't in your original set? How would the experimental setup come about in the first place? It doesn't seem likely that people reason simply by jumbling up some models (it's a line / it's inverse distance squared / only mass matters / it matters what color it is / etc), but that may just be education getting in my way.

A machine could of course test these hypotheses, but they'd have to be generated from somewhere, and I suspect there's at least a hint of something aesthetic about it. For instance you have some friction in your ball/slope experiment. The machine finds the model that contains the friction, so it's right in some sense. But the lesson we were trying to learn was a much simpler behaviour, where deviation was something that could be ignored until further study focussed on it.

14
gallerdude 7 hours ago 1 reply      
I'm sorry, but I don't understand why wider & deeper networks won't do the job. If it took "sufficiently large" networks and "sufficiently many" examples, I don't understand why it wouldn't just take another order of magnitude of "sufficiency."

If you look at the example with the blue dots on the bottom, would it not just take many more blue dots to fill in what the neural network doesn't know? I understand that adding more blue dots isn't easy - we'll need a huge amount of training data, and huge amounts of compute to follow; but if increasing the scale is what got these to work in the first place, I don't see why we shouldn't try to scale it up even more.

15
andreyk 7 hours ago 2 replies      
"Here's what you should remember: the only real success of deep learning so far has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data."

This statement has a few problems - there is no real reason to interpret the transforms as geometric (they are fundamentally just processing a bunch of numbers into other numbers, in what sense is this geometric), and the focus on human-annotated data is not quite right (Deep RL and other things such as representation learning have also achieved impressive results in Deep Learning). More importantly, saying " a deep learning model is "just" a chain of simple, continuous geometric transformations " is pretty misleading; things like the Neural Turing Machine have shown that enough composed simple functions can do pretty surprisingly complex stuff. It's good to point out that most of deep learning is just fancy input->output mappings, but I feel like this post somewhat overstates the limitations.
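
(For concreteness, here is a minimal sketch, mine rather than the article's code, of the kind of "chain of simple, continuous transformations" being debated here, with toy weights standing in for trained parameters. A small MLP is literally affine maps composed with pointwise nonlinearities.)

  import numpy as np

  rng = np.random.default_rng(0)

  # Toy weights standing in for trained parameters.
  W1, b1 = rng.normal(size=(64, 784)), np.zeros(64)
  W2, b2 = rng.normal(size=(10, 64)), np.zeros(10)

  def relu(x):
      return np.maximum(x, 0.0)

  def softmax(z):
      e = np.exp(z - z.max())
      return e / e.sum()

  def forward(x):
      # Each layer stretches, rotates, and translates the representation
      # (the affine map), then folds it with a pointwise nonlinearity.
      h = relu(W1 @ x + b1)
      return softmax(W2 @ h + b2)

  print(forward(rng.normal(size=784)).round(3))

Whether "geometric" is the right word for this composition is exactly the dispute above; the sketch just shows how little machinery the description refers to.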

16
eli_gottlieb 7 hours ago 0 replies      
>But what if we could learn any program, in a modular and reusable way? Let's see in the next post what the road ahead may look like.

I'm really looking forward to this. If it comes out looking like something faster and more usable than Bayesian program induction, RNNs, neural Turing Machines, or Solomonoff Induction, we'll have something really revolutionary on our hands!

17
latently 6 hours ago 1 reply      
The brain is a dynamic system and (some) neural networks are also dynamic systems, and a three layer neural network can learn to approximate any function. Thus, a neural network can approximate brain function arbitrarily well given time and space. Whether that simulation is conscious is another story.
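
(A quick empirical illustration of that approximation claim, on made-up 1-D data: fix random tanh hidden units and fit only the output weights by least squares. The maximum error typically shrinks as the hidden layer widens.)

  import numpy as np

  rng = np.random.default_rng(1)
  x = np.linspace(-np.pi, np.pi, 200)[:, None]
  y = np.sin(3 * x)

  def fit_predict(width):
      # One hidden layer of fixed random tanh features; solve for the
      # output weights alone with least squares.
      W = rng.normal(size=(1, width))
      b = rng.normal(size=width)
      H = np.tanh(x @ W + b)
      w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
      return H @ w_out

  for width in (2, 10, 100):
      err = np.max(np.abs(fit_predict(width) - y))
      print(f"width {width:4d}: max |error| = {err:.3f}")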

The Computational Cognitive Neuroscience Lab has been studying this topic for decades and has an online textbook here:

http://grey.colorado.edu/CompCogNeuro

The "emergent" deep learning simulator is focused on using these kinds of models to model the brain:

http://grey.colorado.edu/emergent

18
cs702 5 hours ago 0 replies      
Yes.

Here's how I've been explaining this to non-technical people lately:

"We do not have intelligent machines that can reason. They don't exist yet. What we have today is machines that can learn to recognize patterns at higher levels of abstraction. For example, for imagine recognition, we have machines that can learn to recognize patterns at the level of pixels as well as at the level of textures, shapes, and objects."

If anyone has a better way of explaining deep learning to non-technical people in a few short sentences, I'd love to see it. Post it here!

19
thanatropism 3 hours ago 0 replies      
This is evergreen:

https://en.m.wikipedia.org/wiki/Hubert_Dreyfus%27s_views_on_...

See also, if you can, the film "Being in the world", which features Dreyfus.

20
denfromufa 8 hours ago 2 replies      
If the deep learning network has enough layers, then can't it start incorporating "abstract" ideas common to any learning task? E.g. could we re-use some layers for image/speech recognition & NLP?
21
LeanderK 4 hours ago 0 replies      
The author raises some valid points, but I don't like the style it is written in. He makes some elaborate claims about the limitations of Deep Learning but doesn't convey why they are limitations. I don't disagree that there are limits to Deep Learning and that many may be impossible to overcome without completely new approaches. I would like to see more emphasis on why things that are theoretically possible, like generating code from descriptions, are absolutely out of reach today, without giving the impression that the task itself is impossible (like the halting problem).
22
zfrenchee 6 hours ago 2 replies      
My qualm with this article: it is disappointingly poorly backed up. The author makes claims, but does not justify those claims well enough to convince anyone but people who already agree with him. In that sense, this piece is an opinion piece masquerading as science.

> This is because a deep learning model is "just" a chain of simple, continuous geometric transformations mapping one vector space into another. All it can do is map one data manifold X into another manifold Y, assuming the existence of a learnable continuous transform from X to Y, and the availability of a dense sampling of X:Y to use as training data. So even though a deep learning model can be interpreted as a kind of program, inversely most programs cannot be expressed as deep learning models [why?]; for most tasks, either there exists no corresponding practically-sized deep neural network that solves the task [why?], or even if there exists one, it may not be learnable, i.e. the corresponding geometric transform may be far too complex [???], or there may not be appropriate data available to learn it [like what?].

> Scaling up current deep learning techniques by stacking more layers and using more training data can only superficially palliate some of these issues [why?]. It will not solve the more fundamental problem that deep learning models are very limited in what they can represent, and that most of the programs that one may wish to learn cannot be expressed as a continuous geometric morphing of a data manifold. [really? why?]

I tend to disagree with these opinions, but I think the authors opinions aren't unreasonable, I just wish he would explain them rather than re-iterating them.

23
msoad 8 hours ago 1 reply      
And then people are assuming Deep Learning can be applied to a self-driving car system end-to-end! Can you imagine the outcome?!
24
ezioamf 7 hours ago 1 reply      
This is why I don't know if it will be possible (given current limitations) to let insect-like brains fully drive our cars. It may never be good enough.
25
nimish 8 hours ago 1 reply      
This is basically the Chinese Room argument though?
26
erikb 5 hours ago 1 reply      
I don't get it. If reasoning is not an option, how does deep learning beat the board game Go?
27
graycat 6 hours ago 1 reply      
On the limitations of machine learning as in the OP, the OP is correct.

So, right, current approaches to "machine learning" as in the OP have some serious "limitations". But this point is a small, tiny special case of something else much larger and more important: Current approaches to "machine learning" as in the OP are essentially some applied math, and applied math is commonly much more powerful than machine learning as in the OP and has much less severe limitations.

Really, "machine learning" as in the OP is not learning in any significantly meaningful sense at all. Really, apparently, the whole field of "machine learning" is heavily just hype from the deceptive label "machine learning". That hype is deceptive, apparently deliberately so, and unprofessional.

Broadly machine learning as in the OP is a case of old empirical curve fitting where there is a long history with a lot of approaches quite different from what is in the OP. Some of the approaches are under some circumstances much more powerful than what is in the OP.

The attention to machine learning is omitting a huge body of highly polished knowledge usually much more powerful. In a cooking analogy, you are being sold a state fair corn dog, which can be good, instead of everything in Escoffier,

Prosper Montagné, Larousse Gastronomique: The Encyclopedia of Food, Wine, and Cookery, ISBN 0-517-503336, Crown Publishers, New York, 1961.

Essentially, for machine learning as in the OP, if (A) have a LOT of training data, (B) a lot of testing data, (C) by gradient descent or whatever build a model of some kind that fits the training data, and (D) the model also predicts well on the testing data, then (E) may have found something of value.

But the test in (D) is about the only assurance of any value. And the value in (D) needs an assumption: Applications of the model will in some suitable sense, rarely made clear, be close to the training data.

Such fitting goes back at least to

Leo Breiman, Jerome H. Friedman, Richard A. Olshen, Charles J. Stone, Classification and Regression Trees, ISBN 0-534-98054-6, Wadsworth & Brooks/Cole, Pacific Grove, California, 1984.

not nearly new. This work is commonly called CART, and there has long been corresponding software.

And CART goes back to versions of regression analysis that go back maybe 100 years.

So, sure, in regression analysis, we are given points on an X-Y coordinate system and want to fit a straight line so that as a function of points on the X axis the line does well approximating the points on the X-Y plot. Being more specific could use some mathematical notation awkward for simple typing and, really, likely not needed here.

Well, to generalize, the X axis can have several dimensions, that is, accommodate several variables. The result is multiple linear regression.

For more, there is a lot with a lot of guarantees. Can find those in short and easy form in

Alexander M. Mood, Franklin A. Graybill, and Duane C. Boes, Introduction to the Theory of Statistics, Third Edition, McGraw-Hill, New York, 1974.

with more detail but still easy form in

N. R. Draper and H. Smith, Applied Regression Analysis, John Wiley and Sons, New York, 1968.

with much more detail and carefully done in

C. Radhakrishna Rao, Linear Statistical Inference and Its Applications: Second Edition, ISBN 0-471-70823-2, John Wiley and Sons, New York, 1967.

Right, this stuff is not nearly new.

So, with some assumptions, get lots of guarantees on the accuracy of the fitted model.

This is all old stuff.

The work in machine learning has added some details to the old issue of overfitting, but, really, the math in old regression takes that into consideration -- a case of overfitting will usually show up in larger estimates for errors.

There is also spline fitting, fitting from Fourier analysis, autoregressive integrated moving average processes,

David R. Brillinger, Time Series Analysis: Data Analysis and Theory, Expanded Edition, ISBN 0-8162-1150-7, Holden-Day, San Francisco, 1981.

and much more.

But, let's see some examples of applied math that totally knocks the socks off model fitting:

(1) Early in civilization, people noticed the stars and the ones that moved in complicated paths, the planets. Well Ptolemy built some empirical models based on epi-cycles that seemed to fit the data well and have good predictive value.

But much better work was from Kepler who discovered that, really, if assume that the sun stays still and the earth moves around the sun, then the paths of planets are just ellipses.

Next Newton invented the second law of motion, the law of gravity, and calculus and used them to explain the ellipses.

So, what Kepler and Newton did was far ahead of what Ptolemy did.

Or, all Ptolemy did was just some empirical fitting, and Kepler and Newton explained what was really going on and, in particular, came up with much better predictive models.

Empirical fitting lost out badly.

Note that once Kepler assumed that the sun stands still and the earth moves around the sun, actually he didn't need much data to determine the ellipses. And Newton needed nearly no data at all except to check his results.

Or, Kepler and Newton had some good ideas, and Ptolemy had only empirical fitting.

(2) The history of physical science is just awash in models derived from scientific principles that are, then, verified by fits to data.

E.g., some first principles derivations show what the acoustic power spectrum of the 3 K background radiation should be, and the fit to the actual data from WMAP, etc. was astoundingly close.

News Flash: Commonly some real science or even just real engineering principles totally knocks the socks off empirical fitting, for much less data.

(3) E.g., here is a fun example I worked up while in a part time job in grad school: I got some useful predictions for an enormously complicated situation out of a little applied math and nearly no data at all.

I was asked to predict what the survivability of the US SSBN fleet would be under a special scenario of global nuclear war limited to sea.

Well, there was a WWII analysis by B. Koopman that showed that in search, say, of a submarine for a surface ship, an airplane for a submarine, etc., the encounter rates were approximately a Poisson process.

So, for all the forces in that war at sea, for the number of forces surviving, with some simplifying assumptions, we have a continuous time, discrete state space Markov process subordinated to a Poisson process. The details of the Markov process are from a little data about detection radii and the probabilities at a detection: one dies, the other dies, both die, or neither die.

That's all there was to the set up of the problem, the model.

Then to evaluate the model, just use Monte Carlo to run off, say, 500 sample paths, average those, appeal to the strong law of large numbers, and presto, bingo, done. Also can easily put up some confidence intervals.
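
(A self-contained sketch of that kind of Monte Carlo evaluation. Every number here, the encounter rate, the outcome probabilities, the force sizes, and the horizon, is invented for illustration; the comment's actual inputs are not given.)

  import random
  import statistics

  # Invented numbers, for illustration only.
  RATE_PER_PAIR = 0.01                   # Poisson encounter rate per red-blue pair, per day
  P_RED, P_BLUE, P_BOTH = 0.3, 0.3, 0.1  # outcome probabilities at an encounter; else neither dies
  HORIZON = 30.0                         # days of war at sea

  def sample_path(red=10, blue=10):
      """One sample path of the continuous-time Markov chain."""
      t = 0.0
      while red > 0 and blue > 0:
          t += random.expovariate(RATE_PER_PAIR * red * blue)  # next encounter
          if t > HORIZON:
              break
          u = random.random()
          if u < P_RED:
              red -= 1
          elif u < P_RED + P_BLUE:
              blue -= 1
          elif u < P_RED + P_BLUE + P_BOTH:
              red -= 1
              blue -= 1
      return red

  paths = [sample_path() for _ in range(500)]
  mean = statistics.mean(paths)
  half = 1.96 * statistics.stdev(paths) / len(paths) ** 0.5  # ~95% CI half-width
  print(f"expected red survivors: {mean:.2f} +/- {half:.2f}")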

The customers were happy.

Try to do that analysis with big data and machine learning and will be in deep, bubbling, smelly, reeking, flaming, black and orange, toxic sticky stuff.

So, a little applied math, some first principles of physical science, or some solid engineering data commonly totally knocks the socks off machine learning as in the OP.

28
reader5000 8 hours ago 1 reply      
Recurrent models do not simply map from one vector space to another and could very much be interpreted as reasoning about their environment. Of course they are significantly more difficult to train and backprop through time seems a bit of a hack.
29
beachbum8029 7 hours ago 1 reply      
Pretty interesting that he says reasoning and long term planning are impossible tasks for a neural net, when those tasks are done by billions of neural nets every day. :^)
30
deepnotderp 7 hours ago 1 reply      
I'd like to offer a somewhat contrasting viewpoint (although this might not sit well with people): deep nets aren't AGI, but they're pretty damn good. There's mounting evidence that they learn similarly to how we do, at least in vision: https://arxiv.org/abs/1706.08606 and https://www.nature.com/articles/srep27755

There's quite a few others but these were the most readily available papers.

Are deep nets AGI? No, but they're a lot better than Mr. Chollet gives them credit for.

31
AndrewKemendo 7 hours ago 0 replies      
the only real success of deep learning so far has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data.

Yes, but that's what humans do too, only much, much better from the generalized perspective.

I think that fundamentally this IS the paradigm for AGI, but we are in the pre-infant days of optimization across the board (data, efficiency, tagging etc...).

So I wholeheartedly agree with the post, that we shouldn't cheer yet, but we should also recognize that we are on the right track.

I say all this because prior to getting into DL and more specifically Reinforcement Learning (which is WAY understudied, IMO), I was working with Bayesian Expert Systems as a path to AI/AGI. RL totally transformed how I saw the problem and in my mind offers a concrete pathway to AGI.

23
SFO near miss might have triggered aviation disaster mercurynews.com
480 points by milesf  6 days ago   414 comments top 37
1
ddeck 6 days ago 4 replies      
Attempts to take off from or land on taxiways are alarmingly common, including those by Harrison Ford:

  Harrison Ford won't face disciplinary action for landing on a taxiway at John Wayne Airport [1]
  Serious incident: Finnair A340 attempts takeoff from Hong Kong taxiway [2]
  HK Airlines 737 tries to take off from taxiway [3]
  Passenger plane lands on the TAXIWAY instead of runway in fourth incident of its kind at Seattle airport [4]
[1] http://www.latimes.com/local/lanow/la-me-ln-ford-taxiway-agr...

[2] https://news.aviation-safety.net/2010/12/03/serious-incident...

[3] https://www.flightglobal.com/news/articles/hk-airlines-tries...

[4] http://www.dailymail.co.uk/travel/travel_news/article-337864...

2
charlietran 6 days ago 3 replies      
There's an mp3 of the radio chatter here:

https://forums.liveatc.net/atcaviation-audio-clips/7-july-ks...

> Audio from the air traffic control communication, archived by a user on LiveATC.net and reviewed by this newspaper organization, showed how the confused Air Canada pilot asks if he's clear to land on 28R because he sees lights on the runway.

> "There's no one on 28R but you," the air controller responds.

> An unidentified voice, presumably another pilot, then chimes in: "Where's this guy going? He's on the taxiway."

> The air controller quickly tells the Air Canada pilot to go around, telling the pilot "it looks like you were lined up for Charlie (Taxiway C) there."

> A United Airlines pilot radios in: "United One, Air Canada flew directly over us."

> "Yeah, I saw that guys," the control tower responds.

3
Animats 6 days ago 1 reply      
Here's a night approach on 28R at SFO.[1] Same approach during the day.[2] The taxiway is on the right. It's a straight-in approach over the bay. The runway, like all runways at major airports worldwide, has the standardized lighting that makes it very distinctive at night, including the long line of lights out into the bay. This was in clear conditions. WTF? Looking forward to reading the investigation results.

The planes on the taxiway are facing incoming aircraft as they wait for the turn onto the runway and takeoff. So they saw the Air Canada plane coming right at them. That must have been scary.

[1] https://www.youtube.com/watch?v=rNMtMYUGjnQ

[2] https://www.youtube.com/watch?v=mv7_lzFKCSM

4
watson 6 days ago 5 replies      
English is not my native language, but shouldn't the headline have read "SFO near miss would have triggered aviation disaster"? "Might" seems to indicate that something else happened afterwards as a possible result of the near miss
5
tmsh 6 days ago 2 replies      
The moral of this story for me is: be that "another pilot." To be clear, "another pilot" of another aircraft. Not as clear as it could be, just like the title of this article is ambiguous.

The moral of this story for me is: call out immediately if you see something off. He's the real hero. Even if the ATC controller immediately saw the plane being misaligned at the same time - that feedback confirming another set of eyes on something that is off couldn't have hurt. All 1000 people on the ground needed that feedback. Always speak up in situations like this.

6
WalterBright 6 days ago 4 replies      
In the early 1960s, a pilot mistook a WW2 airfield for Heathrow, and landed his 707 on it, barely stopping before the end of the runway.

The runway being too short to lift a 707, mechanics stripped everything out of it they could to reduce the weight - seats, interiors, etc. They put barely enough gas in it to hop over to Heathrow, and managed to get it there safely.

The pilot who landed there was cashiered.

7
mate_soos 6 days ago 3 replies      
Before crying pilot error, we must all read Sidney Dekker's A Field Guide to Understanding "Human Error" (and fully appreciate why he uses those quotes). Don't immediately assign blame to the sharp end. Take a look at the blunt one first. Most likely not a pilot error. Assigning blame is a very human need, but assigning it to the most visible and accessible part is almost always wrong.
8
cperciva 6 days ago 1 reply      
Can we have "might have triggered" changed to "could have triggered" in the title?
9
phkahler 6 days ago 0 replies      
A different kind of error... I was returning from Las Vegas in the middle of the day and the tower cleared us for departure on 9 and another plane on 27. We had taxied out and then the pilot pulled over, turned around, and waited for the other plane to depart. He told us what had happened - there was a bit of frustration in his voice. Imagine pulling up and seeing another plane sitting at the opposite end of the runway ready to go. (It may not have been 9 and 27; I don't know which pair it was.) Earlier, waiting in the terminal, I had seen a different plane go around, but didn't know why. Apparently there was a noob in the tower that day. This is why you look out the window and communicate.
10
lisper 6 days ago 0 replies      
Possible explanation for why this happened: it was night, and the parallel runway 28L was closed and therefore unlit. The pilot may have mistaken 28R for 28L and hence the taxiway for 28R. This comes nowhere near excusing this mistake (there is no excuse for a screwup of this magnitude) but it makes it a little more understandable.
11
mikeash 6 days ago 0 replies      
I wonder just how likely this was to end in disaster. It feels overstated. The pilot in question seemed to think something was wrong, he just hadn't figured it out yet. I imagine he would have seen the aircraft on the taxiway in time to go around on his own if he hadn't been warned off.

I'm having trouble figuring out the timeline. The recording in the article makes it sound like this all happened in a matter of seconds, but it's edited down to the highlights, so that's misleading. LiveATC has an archived recording of the event (http://archive-server.liveatc.net/ksfo/KSFO-Twr2-Jul-08-2017..., relevant part starts at about 14:45) but even that appears to have silent parts edited out. (That recording covers a 30-minute period but is only about 18 minutes long.) In the archived recording, about 40 seconds elapse between the plane being told to go around and the "flew directly over us" call, but I don't know how much silence was edited out in between.

Certainly this shouldn't have happened, but I wonder just how bad it actually was.

12
blhack 6 days ago 3 replies      
People "could" run their cars off of bridges every day, but they don't because they can see, and because roads have signs warning them of curves.

This sounds like a story of how well the aviation system works more than anything. The pilot is in constant communication with the tower. The system worked as intended here and he went around.

It seems like a non-story.

13
vermontdevil 6 days ago 0 replies      
Found a cockpit video of a landing approach to 28R to give you an idea (daylight, good weather etc)

https://www.youtube.com/watch?v=I0Y6GTI9pg4

14
cmurf 6 days ago 0 replies      
Near as I can tell, HIRL could not have been on, they were not following another aircraft to land, and the runway and taxiway lighting must've been sufficiently low that the taxi lights (a low-intensity version of a landing light) on the queued-up airplanes on the taxiway made it look like the taxiway was the runway. Pilot fatigue and experience at this airport are also questions.

http://flightaware.com/resources/airport/SFO/IAP/ILS+RWY+28R...

All runways have high intensity runway lighting (HIRL) and 28R has touchdown zone and centerline lighting (TDZ/CL). Runway lights are white, taxiway lights are blue. If you see these elements, there's no way to get confused. So my assumption is the pilots, neither of them, saw this distinction.

HIRL is typically off for visual landings even at night. That's questionable, because night conditions are reduced-visibility situations, and in many other countries night flying is treated as operating under instrument rules, but not in the U.S., where you do not need an instrument-rated aircraft or pilot certification. For a long time I've thought low-intensity HIRL should be enabled briefly for visual night landings where an aircraft is not following behind another, at the time the "runway in sight" verbal verification happens between ATC and pilot.

15
mannykannot 6 days ago 0 replies      
AFAIK (not that I follow the issue closely) the problem of radio interference that ended the last-chance attempt to prevent the Tenerife crash has not been addressed [1]. If so, then it may be very fortunate that only one person called out that the landing airplane had lined up its approach on the taxiway, and not, for example, the crews of every airplane on the taxiway, simultaneously.

[1] http://www.salon.com/2002/03/28/heterodyne/

TL;DR: At Tenerife, both the Pan-Am crew and the tower realized that the KLM aircraft had started its take-off roll, and both tried to warn its crew at the same time, but the resulting radio interference made the messages unintelligible. The author states that a technical solution is feasible and relatively easily implementable.

16
rdtsc 6 days ago 4 replies      
Without knowing the cause, if I had to guess, this looks like pilot error. At least statistically that's the leading cause of crashes.

I am surprised pilots still manually land planes. Is the auto-landing feature not implemented well enough? But then it's relied upon in low visibility. So it has to work; then why isn't it used more often?

17
ryenus 6 days ago 1 reply      
This reminds me of the runway incursion incident at Shanghai, in Oct 2016:

http://www.jacdec.de/2016/10/11/2016-10-11-china-eastern-a32...

18
radialbrain 6 days ago 0 replies      
The avherald article has a slightly more factual account of the event (with links to the ATC recording): https://avherald.com/h?article=4ab79f58
19
URSpider94 5 days ago 0 replies      
Incidentally, I heard a story on KQED (SF Bay Area public radio) today that mentioned a potential clue. There are two parallel runways on this heading -- however -- the left runway is closed for repairs and therefore is currently unlit. If the pilot didn't remember this (it would have been included in his briefings and approach charts for the flight, but he may not have internalized it), he would likely have been looking for two parallel runways and would have lined up on the right one, which in this case would have been the taxiway...
20
dba7dba 6 days ago 0 replies      
I'd like to suggest that if you are still interested in learning more about what happened, you should look for a video from "VASAviation" on youtube. I'm sure his subscribers have asked him already for analysis and he's working on the video.

The channel focuses on aviation comms.

I find it informative because the youtube channel provides detailed voice/video/photo/analysis of incidents (actual/close-calls) involving planes/passengers taxiing/landing/taking-off in/around airports.

21
briandear 6 days ago 0 replies      
I wonder why on 35R they wouldn't have the taxiway to the left of the runway. Then the right is always the runway. Same for the left. Basically have parallel taxiways on the opposite side of the R/L designation of the runway. So at SFO, the parallel taxiways would be inside the two runways.

However, approach lighting is pretty clear, but at dusk, I agree with another comment that it can be rather hard to distinguish depending on angles. I think that approach would be landing into the setting sun, so that could have some bearing.

22
4ad 6 days ago 0 replies      
It's not a near miss, it's a near hit.

https://www.youtube.com/watch?v=zDKdvTecYAM

23
exabrial 6 days ago 1 reply      
Wouldn't the word be "near hit" instead of "near miss"? If you were close to missing, you'd hit something...
24
milesf 6 days ago 3 replies      
How is this even possible? Is it gross negligence on the part of the pilot, a systems problem, or something else? (IANAP)
25
BusinessInsider 6 days ago 0 replies      
Theoretically - if the plane had landed, how many planes would it have taken out? It obviously wouldn't have been pretty, but I doubt the Air Canada would have reached the fourth plane, or maybe even the third.
26
heeen2 6 days ago 0 replies      
Aren't there lights that have to line up if you're on the right course for the runway, like with nautical harbors? Or warning lights that are visible when you're not aligned correctly?
27
TheSpecialist 6 days ago 0 replies      
I always wondered what about SFO makes it so much more dangerous than the other airports in the area? It seems like they have a potential disaster every couple years.
28
perseusprime11 6 days ago 0 replies      
How will an autonomous system handle this issue? Will it figure out the light colors of runways vs. taxiways, or will it rely on close geolocation capabilities?
29
jjallen 6 days ago 0 replies      
Does anyone know just how close of a call this was? Was the landing aircraft 100, 200 meters above ground?

How many more seconds until they would have been too slow to pull up?

30
TrickyRick 6 days ago 2 replies      
> Off-Topic: Most stories about politics, or crime, or sports, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic. [1]

Is it just me or is this blatantly off-topic? Or is anything major happening in the bay area automatically on-topic for Hacker News?

[1] https://news.ycombinator.com/newsguidelines.html

31
martijn_himself 6 days ago 1 reply      
I get that this was a manual (non-ILS) landing, but why is there no audio warning to indicate the aircraft is not lined up with the runway?
32
FiloSottile 6 days ago 11 replies      
I am just a passenger, but this looks very overblown. A pilot aligned with the taxiway; that's bad. But no pilot would ever land on a runway (or taxiway) with 3 planes on it. Just search the Aviation Herald for "runway incursion". And indeed, he spotted them, communicated, went around.

Aviation safety margins are so wide that this does not qualify as a near-miss.

33
kwhitefoot 6 days ago 0 replies      
Why is instrument landing not routinely done? Is it because it is not good enough?
34
EGreg 6 days ago 0 replies      
35
leoharsha2 6 days ago 0 replies      
Reporting on disasters that didn't happen.
36
stygiansonic 6 days ago 0 replies      
Wow, I landed the next day on the same flight (AC 759)
37
petre 6 days ago 1 reply      
Paint the runway and the taxiway in different colors and also use different colors for the light signals that illuminate them at night. Blue/white is rather confusing. Use clearly distinguishable colors such as red/blue or orange/blue or magenta/yellow.
24
Jefferies gives IBM Watson a Wall Street reality check techcrunch.com
431 points by code4tee  4 days ago   276 comments top 35
1
throwaway9980 4 days ago 16 replies      
IBM vastly over promises with their marketing. It is so frustrating to have to answer questions from the CEO about why we don't solve all our problems with magic beans from IBM's Watson.

I understand that this is what they want. They want to drive executives' interest in the product, but I believe they do so at the expense of their goodwill with the tech community.

Am I the only one who cringes when these ads air?

Edit: "magic beans" is harsh and it isn't that I don't think their tools are good. My point is that they put you in a position where it seems very unlikely to meet expectations.

2
xienze 3 days ago 3 replies      
> Jefferies pulls from an audit of a partnership between IBM Watson and MD Anderson as a case study for IBM's broader problems scaling Watson. MD Anderson cut its ties with IBM after wasting $60 million on a Watson project that was ultimately deemed "not ready for human investigational or clinical use."

Well, can't say I'm surprised. I used to work on that project a few years ago, basically the idea was that Watson would look at a patient's medical record, figure out what medications they're on, what symptoms they had, etc. and cross-reference all that with the medical knowledge it had ingested from vast amounts of medical literature. In theory, Watson could figure out what medications the patient should or should not be using, a proper course of treatment, etc.

There were two major problems:

First, it turns out your medical record is mostly written in narrative form, i.e., "John Smith is a 45 year old male...", "Patient is taking X mg of Y twice daily", "Patient was administered X ml of Y on 3/1/2016", etc. In other words, there's basically no structured data, so just figuring out the patient's stats, vitals, medications, and treatment dosages was an adventure in NLP. All that stuff was written in sentence form, and of course how things were written depended on who wrote it in the first place. It was really, really hard to make sure Watson actually had correct information about the patient in the first place.

Second, all that medical literature that was being ingested? Regular old, don't-know-anything-about-medicine programmers were the ones writing the rules manipulating the data extracted via NLP. Well, guess what: if you're not a domain expert, you're bound to get things wrong.

Put those two things together and we would frequently get recommendations that were wildly incorrect, but that's to be expected when you get garbage input being fed into algorithms written by people who aren't domain experts.
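
To make the first problem concrete, here's roughly what naive extraction from narrative notes looks like. This is a made-up toy sketch in Python, not Watson's actual pipeline; the pattern and sentences are invented:

    import re

    # A naive extraction rule: works only on the exact phrasing it was written for.
    DOSE = re.compile(r"taking (\d+) ?mg of (\w+) (once|twice) daily")

    notes = [
        "Patient is taking 20 mg of Lisinopril twice daily",
        "Pt reports lisinopril 20mg BID",  # same fact, different phrasing
        "Patient was taking 20 mg of Lisinopril twice daily until 3/1/2016",
    ]

    for note in notes:
        m = DOSE.search(note)
        print(m.groups() if m else None)
    # Only the first note matches. The second is missed entirely, and the
    # third "matches" but silently drops the fact that the drug was stopped.

Multiply rules like that by thousands of clinicians' writing styles and you get the garbage-in problem described above.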

3
throwaway111991 4 days ago 7 replies      
I was at the CogX artificial intelligence summit in London a couple of weeks ago, and IBM were there in full force.

I made several rounds around all of the stalls, and sat at the bar for a couple of hours with friends, and the whole time I could see the IBM stall, with 4-5 people there, WATSON plastered everywhere and nobody talking to them.

So I went over and got talking to one of their technical people there.

I am highly experienced in Deep Learning, so I started talking about Neural Nets, and he went blank and admitted he didn't know much about that. I inquired about WATSON's technology and he couldn't answer, telling me he didn't know.

I asked about the main use cases, and what makes WATSON's offering better than Deep Learning; he couldn't answer, or even compare on basic levels.

I asked him "What are the coolest uses of WATSON you've seen" and he immediately went into a canned response about WATSON diagnosing cancer (a project I had seen and was familiar with). We spoke a few minutes on that, and I asked what other cool projects WATSON had been used on ... he had nothing, and I mean literally nothing.

Very disappointing.

4
pgodzin 3 days ago 3 replies      
The marketing has made it very hard to have a real conversation about IBM Watson. There is no such singular thing as "Watson". IBM offers a ML solution for health, for NLP, chatbots, etc. They all have very different capabilities and require different levels of machine learning. The marketing is BS, but most of the tech is real - if you give IBM your data, let them train a model on it, and communicate what you want, you will get an end-to-end custom solution. It's just not the magic IBM sells in its marketing videos.

Disclaimer: SWE at Watson Health

5
save_ferris 3 days ago 4 replies      
I don't think this is limited to IBM; my partner's PE firm recently hired a small consulting group touting a "revolutionary, AI-driven" real-estate analysis product that has zero AI whatsoever. It's basically a custom spreadsheet tool that they're claiming to be building AI on top of as they consume company data, but for a few hundred grand per year, they have a basic CRUD app on Azure with a reporting tool using D3 visualizations. But they think it's AI.

It's almost like 2016-17 were gold-mine years for marketing buzzwords and some companies are closing deals with no real execution plan for what they're selling.

6
RcouF1uZ4gsC 4 days ago 3 replies      
Watson is one of the biggest empty marketing slogans ever. The marketing makes it almost seem like General AI, able to easily solve your pressing problems if you pay IBM money.

As a non-expert, it seems like the top end researchers are working for Google(Hinton, Bengio, etc), Facebook(LeCun), Baidu, Uber (ex CMU faculty). I don't really see a lot of machine learning research coming out of IBM comparable to the others.

IBM seems to be running on the fumes of its previous greatness while burning the ship to generate stock market returns.

7
batmansmk 3 days ago 0 replies      
IBM offered a day of Watson training in San Francisco about a year and a half ago. As engineers working with classifications, we were interested to compare the results of Watson to our algorithms, but also look at the API, the communication, the community etc.

It was a classroom nightmare. WIFI not working, Bluemix required for all workshops not working at that time, teachers very new on the topic themselves (one confessed he only knew Watson for a couple of weeks before the training), no announcement, no nice moment to socialize or build up a community, no coupon given to try on our own after, ...

And... the algorithms didn't work at all. The sentiment analysis classified the sentence "I wasn't happy at all by the service" as really positive, due to 'happy' and 'all' being present in the sentence.
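
That failure mode is trivially easy to reproduce with any bag-of-words scorer that ignores negation. A made-up toy in Python, not Watson's actual model:

    # Toy bag-of-words sentiment: sums per-word scores, ignores negation.
    SCORES = {"happy": 2.0, "all": 0.5, "not": -0.5, "wasn't": -0.5}

    def naive_sentiment(text):
        return sum(SCORES.get(w.strip(".,!?").lower(), 0.0)
                   for w in text.split())

    print(naive_sentiment("I wasn't happy at all by the service"))
    # 2.0 -> "really positive": "happy" and "all" add 2.5, "wasn't" only
    # subtracts 0.5, and nothing ever flips the polarity of "happy".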

8
verdverm 3 days ago 1 reply      
I quit IBM Watson 5 weeks ago. Here is why IBM is suffering.

https://verdverm.com/2017/07/a-new-chapter/

PM me for more ;]

9
bradneuberg 3 days ago 0 replies      
I've had two encounters with IBM Watson that left me unimpressed. The first was using the IBM Watson Speech Transcription service (give it an audio file and get text back); the results were pretty bad vs. Google's, for example. The second was in their recent integration into Star Trek: Bridge Crew (which is an amazing game BTW!); the speech recognition results were pretty bad.
10
bischofs 3 days ago 2 replies      
What does IBM even do anymore? Is it some bizarre set of buildings where they just play with computers and print money? What product do they sell? Whom do they sell it to?

I am genuinely curious...

I have a comp sci degree and worked in different industries relating to software and have never even seen or touched any IBM tech except for those old cash registers.

11
code4tee 3 days ago 0 replies      
Within the data science community Watson has long been viewed as snake oil. Glad to see less technical and business folks are finally smelling BS too.
12
hbarka 3 days ago 1 reply      
Lotus Notes. Why this garbage of an email system is still perpetuated by IBM explains IBM. It worked back in 1999, but pity you if you're in a company still using it, with the CIO still applying upgrade patches to it.
13
BucketSort 3 days ago 0 replies      
As soon as I saw "now with Watson" on H&R block's windows, I knew it was over.
14
zaphod_ibm 3 days ago 0 replies      
Disclosure: I work for IBM.

Watson is not a consumer gadget but the AI platform for real business. Watson solutions are being built, used, and deployed in more than 45 countries and across 20 different industries. Take health care alone -- Watson is in clinical use in the US and 5 other countries, and it has been trained on 8 types of cancers, with plans to add 6 more this year. Watson has now been trained and released to help support physicians in their treatment of breast, lung, colorectal, cervical, ovarian, gastric and prostate cancers. By the end of the year, the technology will be available to support at least 12 cancer types, representing 80 percent of the global incidence of cancer. Beyond oncology, Watson is in use by nearly half of the top 25 life sciences companies, major manufacturers for IoT applications, retail and financial services firms, and partners like GM, H&R Block and SalesForce.com.

We have invested billions of dollars in the Watson business unit since its inception in 2014, with more than 15,000 professionals, and more than a third of IBM's research division is devoted to leading-edge AI research. When you consider the vast scope of IBM's work in AI, from Watson Health to Watson Financial Services to the emerging Internet of Things opportunity, it is clear that no other company is doing AI at the scale of IBM.

By the end of this year, Watson will touch one billion people in some way. Watson can see: it is able to describe the contents of an image. For example, Watson can identify melanoma from skin lesion images with 95 percent accuracy, according to research with Memorial Sloan Kettering. Watson can hear, understanding speech including Japanese, Mandarin, Spanish and Portuguese, among others. Watson can read 9 languages. Watson can feel impulses from sensors in elevators, buildings, autos and even ball bearings. At IBM, there are more than 1,000 researchers focused solely on artificial intelligence.

15
daxfohl 3 days ago 1 reply      
> and let's be real, things would look much worse if Google, Microsoft and Facebook were added to this table

Umm, so add them? And Nvidia, Intel, Baidu, Uber, Tesla? Anybody else? That single chart would actually be more interesting than the entirety of this article.

16
mark_l_watson 3 days ago 1 reply      
In the last 6 weeks, I have been called by two reporters (Wall Street Journal and Reuters) for background on AI. I talked with the Journal reporter for about an hour, covering 'everything.' However, the Reuters reporter only wanted to talk about IBM Watson - we just had a short talk.

I have seen a lot of negative press on Watson, but really, it can be evaluated like any other API to see if it meets your needs.

17
zitterbewegung 3 days ago 0 replies      
While I was getting my undergraduate degree, IBM said they were going to give an overview of the Watson system they used to win Jeopardy. I skipped it, but there were some professors that went to it. The professors walked out saying that IBM was using Watson as some kind of marketing term. They gave no technical details either. That's how I found out that Watson was a marketing gimmick.
18
smegel 3 days ago 0 replies      
> Unfortunately, IBM is struggling to bridge the gap between client needs and its own technological capability.

AI in a nutshell.

19
sumoboy 3 days ago 0 replies      
IBM should ask Watson how to fix the company first; then it would have some credibility. They don't treat their employees very well either, but neither does Oracle, so why would anybody waste time working for losers.

+1 "dog shit wrapped in cat shit" .. that is awesome.

20
zhanwei 3 days ago 0 replies      
So IBM Watson can do all the smart and complex stuff, but we still need humans to do the dumb stuff, like importing Excel files, where the cost outweighs the benefit of getting Watson to do the smart stuff.
21
throwaway91111 3 days ago 0 replies      
This is only the beginning of selling AI as a panacea. People, if it does something useful, there is a term for it. The only reason not to use that term is to AVOID direct comparisons.
22
atsaloli 3 days ago 0 replies      
This reminds me of a Linux Journal piece I did 5 years ago on system administration of the Watson supercomputer (after they got their 15 minutes of fame on Jeopardy):

http://www.linuxjournal.com/content/system-administration-ib...

They brought in a sysadmin after they got up to 800 OS instances. Before that, it was just 3 part-time researchers handling the system administration duties.

23
dangero 3 days ago 1 reply      
Anything that IBM puts out, remember they are a consulting company, so they want to generate a huge brand name. That allows them to charge the consulting prices they need to charge to make this business work for them. IBM Watson is a collection of sort-of-working AI-related APIs, but it gets A LOT of press. If they can create an AI brain, then people will believe they can do anything for them in the tech arena, and that's the goal.
24
rv816 3 days ago 0 replies      
Finally someone publishes what everybody in the industry has long since known, especially in healthcare.
26
JunkDNA 3 days ago 0 replies      
Still waiting for the peer reviewed publication in a prestigious medical journal that demonstrates doctors using Watson get better outcomes for their patients.
27
shard972 3 days ago 0 replies      
Only took like 4 years for someone to catch on. I remember seeing right away, when they put Watson on Jeopardy, that it was going to be a giant PR stunt.
28
bitmapbrother 3 days ago 0 replies      
This is simply the media and analysts catching up with what everyone familiar with Watson already knew - that it was nothing more than marketing bullshit designed to project IBM as a leader in A.I. Watson is a lot like IBM's cloud initiative - a service so bad that they don't even use it internally, but have no problems conning their customers on its value.
29
dmritard96 3 days ago 1 reply      
Somewhat off topic, but I find their use of 'Watson' to be rather outrageous, as he was a big part of IBM's Jew-tracking systems installed in concentration camps during World War Two. I suppose I already looked at IBM as an org that really does not own this as they should, but it's particularly bothersome that they would use his name as the flagship of their marketing efforts.
30
snissn 3 days ago 0 replies      
I spent months as a fully qualified lead trying to buy a Watson product and simply couldn't. Had calls rescheduled and canceled, and got on the phone for a Kafkaesque experience with a salesperson. We gave up and just built out what we wanted to buy.
31
hacksonx 3 days ago 1 reply      
Fundamental software problems here. Probably the reason why software is being marketed as a service more and more. IBM might be moving a little too fast, especially from a sales perspective, but their systems offer features that will define the future.
32
polm23 3 days ago 0 replies      
Best story I heard was from a guy in a bar who claimed to have worked at IBM: he went to meet a client and they asked, in all seriousness, where the talking hologram from the commercial was.
33
komali2 3 days ago 0 replies      
I think it's one of the last great investments - Watson will make IBM an astonishing amount of money, right up until it and the technology it's spearheading make money irrelevant.
34
fredsanford 3 days ago 0 replies      
IBM's solution? More offshore and H1B, and force everyone into the office.

Circling the drain.

35
dpkonofa 3 days ago 1 reply      
This and the comments below are really depressing to me. Watson seemed like such an exciting piece of tech and something that had the potential to change the world and now I feel like the shareholder's virus has stagnated it to the point of it being worthless. I've heard multiple stories where the staff that's assigned to demo and talk about Watson have no idea what they're talking about and that the marketing, management, and finance people don't have any inkling as to what is special about Watson. They only care that it's not currently making them boatloads of money, despite the fact that it absolutely could. I guess I'll have to move my excitement to Google and Apple's machine learning attempts.
25
CSS and JS code coverage in Chrome DevTools developers.google.com
428 points by HearMeRoar  5 days ago   120 comments top 24
1
umaar 5 days ago 10 replies      
If you're interested in staying up to date with Chrome DevTools, I run this project called Dev Tips: https://umaar.com/dev-tips/

It contains around 150 tips which I display as short, animated gifs, so you don't have to read much text to learn how a particular feature works.

2
cjCamel 5 days ago 4 replies      
From the same link, being able to take a full page screenshot (as in, below the fold) is also very excellent. I notice from the YouTube page description there is a further shortcut:

 1. Open the Command Menu with Command+Shift+P (Mac) or Control+Shift+P (Windows, Linux, Chrome OS).
 2. Start typing "Screenshots" and select "Capture full size screenshots".
I needed this literally yesterday, when I used MS Paint to cut and paste a screen together like a total mug.

3
TekMol 5 days ago 4 replies      
So I recorded my site for a while. Then sorted by unused bytes. What was on top?

Google's own analytics.js

4
err4nt 5 days ago 1 reply      
Interesting tool, but even more interesting results. I just tried it on a simple, one-page website I built recently and there is not a single line of _code_ that's unused, yet it's still showing me 182 lines unused.

Things it seems to consider unused: `style` tags, and, if your CSS rule spans more than one line, the lines containing the selector and the closing brace.

There should be 0 unused lines since there are 0 unused rules, and the opening and closing `style` tags are DEFINITELY being used, so until these false results get weeded out it will be noisy to try to use this to track down real unused lines.

5
orliesaurus 5 days ago 4 replies      
Chrome Dev tools, the first reason why I started using Chrome. I wonder if HN has any better alternatives to suggest? I'm curious to see what I could be missing on!
6
laurencei 5 days ago 3 replies      
My vague recollection of the Google event where this was first announced (was it late 2016 or early 2017?) was that it was going to "record" your site usage for as long as you were "recording", and give the report at the end.

But this now sounds like a coverage tool for a single page?

Does anyone know if it can record over multiple pages and/or application usage (such as an SPA)?

7
KevanM 5 days ago 4 replies      
A single-page solution for a site-wide issue.
8
wiradikusuma 5 days ago 1 reply      
How do I exclude "chrome-extension://" and "extensions::" from the list? I can't do anything with them anyway, so it's just clutter.
9
TekMol 5 days ago 1 reply      
In the CSS file view, isn't it impractical that it marks whitespace as unused? That makes it much harder to find rules that are unused.
10
genieyclo 5 days ago 1 reply      
Is there an easy way to filter out extensions from the Coverage tab, besides opening it in incognito mode?
11
hacksonx 5 days ago 2 replies      
"{ Version 57.0.2987.98 (64-bit)

 Updates are disabled by your administrator
"}

Guess I will only be able to comment on these when I get home. The full screen screenshot feature is going to be a welcome addition. I will especially have to teach it to the BAs, since they always want to take screenshots to show to business when design is finished but test is still acting up.

12
indescions_2017 5 days ago 1 reply      
I like this, and it's addictive ;) Any way to automatically generate output that consists of the 100% essential code subset?

As suspected: a typical medium.com page contains approx 75% extra code. Most egregious offenders seem to be content loader scripts like embedly, fonts, unity, youtube, etc.

On the other hand, besides net load performance, I'm not really worrying about the "coverage" metric. Compiling Unreal Engine via Emscripten to build tappy dodo may result in 80%+ unused code, but near-native runtime is a healthy tradeoff.

Try, for example: http://webassembly.org/demo/

13
i_live_ther3 5 days ago 2 replies      
What happened to shipping everything in a single file and letting cache magic happen?
14
mrskitch 5 days ago 0 replies      
I wrote a tool to automate this (right now it's just JavaScript coverage) here: https://github.com/joelgriffith/navalia. Here's a walk through on the doing so: https://codeburst.io/capturing-unused-application-code-2b759...
15
arthurwinter 5 days ago 2 replies      
It'd be awesome if there was a button to download a file with the code that's used, and the code that's unused, instead of just having a diff. Hint hint :)
16
rypskar 5 days ago 1 reply      
Excellent timing. I had given up finding a good tool for coverage on JS and CSS, and was right now using audits in Chrome to try to find unused CSS and searching through the code to find unused JS on our landing page. Even if it is hard for a tool to find everything that is unused on a page, it will show what is used, so we know what we don't have to check in the code.
17
foodie_ 5 days ago 1 reply      
Hurray! Now they just need to make it part of an analytics program so we can let the users tell us what code is never running!
18
dethos 5 days ago 0 replies      
Is there anything similar for Firefox? On the developer tools or as an external extension?
19
TekMol 5 days ago 1 reply      
Would be super useful if it recorded over multiple pageviews. To find unused CSS+JS and to measure the coverage of tests.

But it seems to silently forget what happened on the first page as soon as you go to the second page.

20
geniium 5 days ago 0 replies      
Very nice! The Coverage feature is something I have been waiting for a long time!
21
mgalka 5 days ago 0 replies      
This is great, such a useful function. Thanks for posting.
22
wmkthpn 5 days ago 2 replies      
Can this be useful when someone uses webpack?
23
_pmf_ 5 days ago 0 replies      
When your developer experience depends on how much free time Chrome developers have ...
24
ajohnclark 5 days ago 0 replies      
Yesss!!
26
'Living Drug' That Fights Cancer by Harnessing Immune System Clears Key Hurdle npr.org
423 points by daegloe  4 days ago   140 comments top 11
1
jfarlow 4 days ago 3 replies      
Congratulations! The Chimeric Antigen Receptor (CAR) deployed here is very much unlike the standard 'small molecule' drug that 'disrupts a bad thing', and much more like a rationally engineered tool using the body's very own technologies to overcome a particular limitation. In this case, it gives the patient's own immune system a notion of what the cancer looks like.

If you want to build your own 'living drugs', we've built a digital infrastructure to let you do so. Though we just made public our generic protein design software (thanks ShowHN! [1]), we're employing the same underlying digital infrastructure to build, evaluate, and manage CAR designs in high throughput [2]. The drug approved here was painstakingly designed by hand, while we think the technology now exists to permit many more such advances to be created at a much more rapid pace.

[1] https://news.ycombinator.com/item?id=14446679

[2] https://serotiny.bio/notes/applications/car

Design your own 'living' protein drugs here right now: https://serotiny.bio/pinecone/ (and let us know what you think, and how we can make it better!)

2
stillfinite 4 days ago 4 replies      
The significant thing about CAR-T cell therapy is that it's not very specific to the type of cancer - all cancer cells have damaged DNA that leads to the production of antigens. Leukemia is the low-hanging fruit because it's easy to inject the T-cells back into the body right where the cancer cells are. It's hard to tell whether you could get enough T-cells to diffuse out of the bloodstream to have an effect on something like prostate cancer. It would be a real breakthrough if you could overcome that hurdle, because then you would have a treatment that works on many different cancers without much modification.
3
Young_God 4 days ago 0 replies      
A friend of mine is alive today because he was part of one of the early trials. He had been told by his doctor, just before he was accepted into the trial, that he should start putting his affairs in order.
4
eatbitseveryday 4 days ago 3 replies      
NYTimes also covers the story (https://www.nytimes.com/2017/07/12/health/fda-novartis-leuke...) with more discussion about individual patients.

From the NYT article:

> The panel recommended approving the treatment for B-cell acute lymphoblastic leukemia that has resisted treatment, or relapsed, in children and young adults aged 3 to 25.

Why so young?

5
JoeAltmaier 4 days ago 8 replies      
From the article:

 "Scientists use a virus to make the genetic changes in the T cells, raising fears about possible long-term side effects"
Is this a real risk? Is 'using a virus' in this way still risky at all? Or is it just the word 'virus' that makes writers put this line in every article about gene therapy?

{edit: real risk}

6
aaronbrethorst 4 days ago 2 replies      
"While Novartis will not estimate the price it will ultimately put on the treatment, some industry analysts project it will cost $500,000 per infusion."

Meanwhile, the latest version of the US Senate's healthcare bill includes the so-called Cruz Amendment[1], which would allow insurance companies to offer health insurance plans without essential health benefits, which would allow lifetime caps on insurance[2], which could mean that your six year old with recurring leukemia gets pulled off their treatment when they're halfway through. Not because you did anything wrong, per se, but because maybe your employer refuses to spring for health care plans with more than an $x dollar cap. Or you never anticipated something so horrific and catastrophic happening to your family.

[1] https://www.nytimes.com/2017/07/13/us/politics/senate-republ...

[2] https://www.brookings.edu/2017/05/02/allowing-states-to-defi...

7
ceejayoz 4 days ago 3 replies      
> Another big concern is the cost. While Novartis will not estimate the price it will ultimately put on the treatment, some industry analysts project it will cost $500,000 per infusion.

Welp, guess my insurance premiums aren't stabilizing anytime soon.

8
sjbase 4 days ago 1 reply      
Does anyone know: what are the failure rates like for the gene editing technology being used for this? Thinking like a software engineer, are there transposition errors (GATC --> GTAC), atomicity issues (GATC --> GA)? Mutations afterward?
9
judah 4 days ago 1 reply      
Is this the same CAR-T treatment that Juno Therapeutics tried and scrapped[0] after 5 trial patients died after receiving the treatment?

[0]: http://www.xconomy.com/seattle/2017/03/01/after-trial-deaths...

10
known 3 days ago 1 reply      

 it will cost $500,000 per infusion

11
known 3 days ago 1 reply      
Isn't this how vaccines work?
27
Bitcoin Potential Network Disruption on July 31st bitcoin.org
463 points by amdixon  4 days ago   362 comments top 33
1
jpatokal 4 days ago 17 replies      
Well, that's a remarkably uninformative announcement. Here's an attempt at a neutral tl;dr from a Bitcoin amateur.

Bitcoin is currently suffering from significant scaling problems, which lead to high transaction fees. Numerous proposals to fix the scaling issue have been put forward, the two main camps being "increase the block size" and "muddle through by discarding less useful data" (aka Segregated Witness/SegWit). However, any changes require consensus from the miners who create Bitcoins and process transactions, and because it's not in their best interest to do anything to reduce those transaction fees, no change has received majority consensus.

In an attempt to break this deadlock, there is a "Bitcoin Improvement Proposal #148" (BIP148) that proposes a User-Activated Soft Fork (UASF) taking effect on August 1, 2017. Basically, everybody who agrees to this proposal wants SegWit to happen and (here's the key part) commits to discarding all confirmations that do not flag support for SegWit from this date onward. If successful, this will fork Bitcoin, because whether a transaction succeeded or not is going to depend on which side of the network you believe.
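
(In code terms, the BIP148 commitment amounts to one extra validity rule layered on top of normal block validation. A simplified Python sketch of the idea, not the actual Bitcoin Core patch:)

    # Sketch: after the flag time, orphan otherwise-valid blocks that do not
    # signal SegWit readiness (version bit 1, per the BIP9/BIP141 signalling).
    FLAG_TIME = 1501545600  # 2017-08-01 00:00 UTC
    SEGWIT_BIT = 1 << 1

    def bip148_accepts(block_time, block_version, valid_under_old_rules):
        if not valid_under_old_rules:
            return False
        if block_time >= FLAG_TIME and not (block_version & SEGWIT_BIT):
            return False  # a valid block, but BIP148 nodes reject it anyway
        return True

Note the asymmetry: every block a BIP148 node accepts is also valid to everyone else, but not vice versa, which is exactly what makes a chain split possible.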

However, BIP148's odds of success look low, as many of the largest miners out there led by Bitmain have stated that they will trigger a User-Activated Hard Fork (UAHF) if needed to stop it. Specifically, if UASF appears successful, instead of complying with SegWit, they'll start mining BTC with large blocks instead: https://blog.bitmain.com/en/uahf-contingency-plan-uasf-bip14...

Anyway, it all boils down to significant uncertainty, and unless you've got a dog in the race you'll probably want to refrain from making BTC transactions around the deadline or purchasing new BTC until the dust settles down.

And an important disclaimer: this is an extremely contentious issue in the Bitcoin community and it's really difficult to find info that's not polarized one way or the other. Most notably, Reddit's /r/bitcoin is rabidly pro-BIP148 and /r/btc is equally rabidly against it. Here's one reasonably neutral primer: https://bitcoinmagazine.com/articles/bitcoin-beginners-guide...

2
buttershakes 4 days ago 4 replies      
This will go down as a massive failure in governance. The Bitcoin core guys have completely created this situation by taking a hard-line stance based on a non-issue. Committing to a 2 megabyte hard fork 2+ years ago would have averted this situation and kept control within the core dev team. Now we see miners taking a stance because SegWit doesn't necessarily benefit them. Further, payment channels and other off-chain scaling haven't really been tested or materialized, and the SegWit code itself is a series of changes to the fundamentals of Bitcoin made without requiring a hard fork. In other words, it is over-engineered to avoid having to have real consensus.

Further the almost rabid attacks against a 2mb increase are bordering on complete insanity. No serious software engineer would say that an additional 1 megabyte of traffic every 10 minutes is a problem in any way. Instead we are stuck with a proportional increase in bandwidth and processing to support segwit and a minor increase in block size, which is through some convoluted logic preventing centralization. This whole thing is a power grab, plain and simple.

Now the alternative implementations are racing to complete something the miners will agree with, the sole purpose being to wrest control away from the "Bitcoin core" development group, which has made a complete mess of governance. Anyone who invested in Blockstream has to seriously be scratching their heads and wondering why they are killing the golden goose over some ideological bs instead of making what is really a trivial change. I think at this point they have screamed so loudly for so long that backtracking would reveal them to be hypocritical in the extreme. To couch this whole debate as a rallying cry against centralized interests instead of a corporate power grab is completely absurdist.

3
sktrdie 4 days ago 7 replies      
All this "unconsensus" is weird to me given that PoW was created to fix just that. I don't understand how any other group of people can decide what should happen other than the miners. After all, anybody can be a miner. Anything other than that just doesn't make it decentralized anymore.

If you trust the developers, exchanges or even users to make decisions, then why not just make a BitcoinSQL where the servers are controlled by these groups?

Mining specifically allows for this not to happen: one-CPU-one-vote, as per Satoshi's paper. No matter the rules of the protocol, the chain with the most work is the one that most people agreed upon. This seems to me the only truly democratic solution, and I don't understand how anything else is possible.

With regards to fees being too high and miners actually liking that, that's bullshit, because miners (who are also users!) care about the health of the entire system. If something like SegWit will bring many more users, that's a win for them.

Let's not forget that anybody can be a miner! Miners aren't just these Chinese groups of people. It's the only truly democratic way of reaching consensus; anything else is really not a way to reach trustless consensus, in my opinion.

4
BenoitP 4 days ago 2 replies      
To me, hashing power is not the process by which the outcome will be decided.

IMHO, the percentage of technical signalling will not even matter that much.

Two chains will get created quite quickly. And some BTC holders will try to take advantage of the situation.

Since transactions can get replayed on the other chain (and copying them from one chain to the other brings a stability advantage), the technical way things are going to occur is double-spending to different addresses.

... Which means services supporting different chains will be pitted against each other.

Users will empty out one wallet at the same time to one exchange on a chain they don't support, and to another address they control on the chain they support. In cashing out on the exchange, they will crash the market value of that chain.

... Which brings me to: exchanges should start signalling support and come to a consensus pretty quickly, in their own interest. They don't want to be the exchange everybody cashes out on.

Questions abound:

* Have they started signalling it?

* What software are they running?

* If you hold some BTCs: are you planning to double spend?

* How are you going to proceed?

* Which chain do you support, and how many BTC do you possess?

TL;DR: There will be a run. Exchanges will determine the outcome.
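
To make the replay mechanics concrete, here's a toy Python model (all names hypothetical):

    # At the fork, both chains inherit identical coin ownership, so a spend
    # signed once is valid on both and can be "replayed" by anyone.
    fork_state = {"coin42": "alice"}
    chain_a, chain_b = dict(fork_state), dict(fork_state)

    tx = ("coin42", "alice", "exchange")  # alice signs this exactly once
    for chain in (chain_a, chain_b):      # but anyone can relay it to both
        coin, sender, receiver = tx
        if chain.get(coin) == sender:
            chain[coin] = receiver        # the spend lands on *both* chains

    # To split her coins, alice needs two *conflicting* spends to confirm,
    # one per chain (coin42 -> exchange on chain A, coin42 -> herself on
    # chain B), before either transaction gets replayed onto the other chain.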

5
Twisell 4 days ago 3 replies      
Ok just another proof that bitcoin can definitely not be compared to gold. Or maybe it could?

"After the event you might end up with gold or lead it all depends if your banker believe in transmutation or not (and if transmutation is actually achievable which will be determined by the best alchemists of the kingdom that need to agree together).

So, all in all, the guild of merchants recommends that you don't accept gold as a payment on the last day before the new moon (and for a few days after that), because you will not be able to tell if you are getting real gold or lead during that period.

Well, to be honest, it won't technically be lead you get but forked gold: a gold that could be gold but isn't until the alchemists say so. But you shall still be able to use it in a limited way with people who believe in the same dissident alchemists.

It's totally normal if it sounds complicated, it's magic after all!"

6
unabridged 4 days ago 2 replies      
This is late FUD, a last-minute whine by the owners of "bitcoin.org", aka core. The discussion over scaling has been happening for many months and consensus has actually just been reached in the last couple weeks. 85% of the mining power is signalling for segwit2x, and if this continues it will lock in before Aug 1st, completely avoiding the scary situation talked about in the post.
7
matt_wulfeck 4 days ago 5 replies      
> This means that any bitcoins you receive after that time may later disappear from your wallet or be a type of bitcoin that other people will not accept as payment.

Can you imagine the uproar if Visa said the same thing? It would be totally unthinkable.

Bitcoin can get away with this type of "disruption" because it's not really being used for anything other than a speculative vehicle.

8
1ba9115454 4 days ago 0 replies      
If you hold Bitcoin then you need to think about getting hold of your private keys or using a non custodial wallet like https://strongcoin.com

If the network splits there will be 1 or 2 new types of Bitcoin.

If the exchanges decide to support the new types of Bitcoin you will be able to sell your holding on the new chains whilst still keeping coins on the main Bitcoin chain.

But to do this you need to manage your own private keys.

9
taspeotis 4 days ago 0 replies      
https://github.com/bitcoin-dot-org/bitcoin.org/commit/9ddfa8...

 Alerts: BIP148/92: change title over objection Note: I object to this change, which I think makes the alert less clear, less forceful, and degrades alert usability.

10
dmitriid 4 days ago 0 replies      
Ladies and gentlemen, we give you the most amazing stable scalable global tech of the future
11
Fej 4 days ago 2 replies      
Do any significant number of people genuinely take Bitcoin to be the future of currency at this point?
12
kanzure 6 hours ago 0 replies      
follow-up thread was posted over here, https://news.ycombinator.com/item?id=14788663
13
atemerev 4 days ago 2 replies      
For me, the lightning network is the obviously right solution, bringing more decentralization and totally removing the need for global consensus. Why it is not that popular, I don't know.
14
isubkhankulov 4 days ago 4 replies      
This post feels like propaganda. Prior bitcoin upgrades have gone much smoother, and when they did go wrong the community banded together to spread the right information, albeit the userbase was likely a lot smaller back then.
15
gopz 4 days ago 3 replies      
As someone with a basic Comp Sci understanding of crypto currencies could someone explain to me why there is a scalability problem? I thought one of the primary benefits of Bitcoin was that higher transaction fees will attract more miners and ergo the transactions can be processed at a higher rate. Why won't this problem be resolved naturally? Tinkering with the block size makes sense to me as a way to crank through more transactions per mined block, but again, why is it even a problem? The mining power is just not there?
16
frenchie4111 4 days ago 3 replies      
Can someone give me some context on what is causing this?
17
cableshaft 4 days ago 1 reply      
> "Be wary of storing your bitcoins on an exchange or any service that doesnt allow you to make a local backup copy of your private keys."

I know a couple people who have some bitcoin on Coinbase and aren't too comfortable moving it to a local wallet (Coinbase is just easier for them, they don't have to worry about the security of their personal computer).

Does Coinbase allow making a local backup of private keys? I'm thinking they might not, but maybe they do.

18
roadbeats 4 days ago 2 replies      
What strategy is best for small investors? Moving the money into altcoins or just pulling out completely? Should we expect altcoins (e.g. litecoin, ripple, antshares a.k.a. neo) to soar?
19
kristianp 3 days ago 0 replies      
Here's part of the warning message on bitcointalk [1]:

1. Ensure that you have no BTC deposited with a Bitcoin bank or other trusted third-party before Aug 1. If there's no technical way for you to export the private keys for your BTC, then that BTC is at risk. Some Bitcoin banks may assure you that they'll definitely keep your BTC safe, but I absolutely wouldn't trust them.

2. Do not send transactions or trust received transactions starting 12 hours before Aug 1 at midnight UTC, and continue this until you hear the "all clear" from several trustworthy sources. For example, I will post a forum news item if everything is OK, or if everything is not OK and action is required.

[1] https://bitcointalk.org/index.php?topic=2017191.0

20
Taek 4 days ago 0 replies      
I wish there was a concise way to explain what is going on, but there really isn't. I'm going to do my best though.

Bitcoin is a consensus system. This means that the goal is to have everyone believe the exact same thing at all times. Bitcoin achieves this by having everyone run identical software which is able to compile a list of transactions, and from there decide what money belongs to which person.

As you can imagine, it's a problem if you have $10, and Alice believes she owns that $10, Bob believes he owns that same $10, and Charlie believes that the money was never sent to either of them. These three people can't interact with each other, because they can't agree on who owns the money. Spending money has no meaning here.

In Bitcoin, there are very precise rules that define how money is allowed to move around. These rules are identical on all machines, and because they are identical for everyone on the network, nobody is ever confused about whether or not they own money.

Unfortunately, there are now 3 versions of the software floating around (well... there are more. But there are only 3 that seem to have any real traction right now, though even that is hard to be certain about). Currently, all versions of the software have the exact same set of rules, but on August 1st, one of those versions of the software will be running a different set of rules. So, depending, people may not be able to agree on the ownership of money. If you are running one version, and your friend is running another, your friend may receive that money, or they may not. This is of course a bad situation for both of you, and its even worse if you are working with automated systems, because an automated system likely has no idea that this is happening, and it may have no way to fix any costly mistakes.

It gets worse. The version of the software that is splitting off actually has the power to destroy the other two versions of the software. I don't know how to put this in simple terms either.

In Bitcoin, it is possible to have multiple simultaneous histories. As long as all of the histories are mathematically correct (that is, they follow all of the formal rules of Bitcoin), you know which history is the real history based on how much work is behind it. The history with the most work wins. If the history is illegal, you ignore it no matter how much work is behind it.

So, this troublemaker version of the software (the UASF version) has a compatible set of rules with the other 2 versions. Basically, everything that it does, the other versions see as valid. So if its history is the longest, the other versions will treat that history as the one true history. The thing is, this troublemaker version of the software is stubborn, and so even if the histories of the other two versions have more work, it'll ignore them and focus only on its own version of history.

So, the dramatic / problematic situation happens if the UASF software initially has less work in its history. What'll happen is a split, and two different versions of Bitcoin will exist at the exact same time. But then, if the UASF software ends up with more work after some period of time (days, weeks, etc.), the other versions of the software will prefer its version of history over their own.

Basically, what happens there is that entire days, or weeks, etc. of history get completely obliterated. The UASF history becomes canonical, and the histories built by the other versions all get destroyed. Miners lose all of their money, people who accepted payments lose those payments, people who made payments get those payments back. Basically a lot of chaos where people end up losing probably millions and millions of dollars.
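
(The fork-choice rule described above, sketched in Python with hypothetical structures, not actual node code:)

    from collections import namedtuple

    Block = namedtuple("Block", "work")

    def best_chain(chains, is_legal):
        # Most cumulative work wins, among histories this node deems legal.
        legal = [c for c in chains if is_legal(c)]
        return max(legal, key=lambda c: sum(b.work for b in c))

    shared = [Block(10), Block(10)]          # history before the split
    uasf = shared + [Block(10), Block(12)]   # UASF branch, more total work
    other = shared + [Block(10), Block(11)]  # non-UASF branch

    # Non-UASF nodes consider *both* branches legal, so the moment the UASF
    # branch pulls ahead on cumulative work, they reorg onto it and every
    # block mined on the other branch since the split gets discarded:
    print(best_chain([uasf, other], is_legal=lambda c: True) is uasf)  # True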

----

I hope that helps. This whole situation is screwed up, and really the best thing to do is to put your coins in a cold wallet (one that you control, not an exchange), and then just not send or receive any coins for a few weeks. Let the dust settle, and then resume using Bitcoin once its clear that the turmoil is over.

----

The most likely situation here is that nothing interesting happens at all. My personal opinion is that the vast majority of people who matter in Bitcoin aren't even paying attention to the drama, and something dramatic is really only possible if the majority of Bitcoin users opt-in to doing something. I don't think that's the case at all, which means essentially nothing interesting is going to happen.

But, I could be wrong. There's a non-zero chance that something very unfortunate happens, and there's a pretty easy way to isolate yourself: don't send or receive any Bitcoins starting July 31st, and don't resume until it's clear that the storm has passed. It'll likely take less than a week to come to a well-defined conclusion.

21
wittgenstein 4 days ago 0 replies      
Does anyone know how Coinbase is going to handle this?
22
modeless 4 days ago 2 replies      
The problems described in this post are unlikely to happen. There is an attempt to split ("fork") the network scheduled for August 1. The people forking will force activation of a new feature, Segwit, while the non-forkers won't. However, the non-forkers are currently planning to activate Segwit as part of a compromise plan before the deadline. If this compromise happens as planned, there will be no need to force-activate Segwit with a fork, and so no fork will happen on August 1.

Frankly, even if the compromise solution fails and the fork does happen on August 1, it will be a complete non-event. Bitcoin.org is biased as they are affiliated with people who support the August 1 fork, and so they're attempting to publicize it. However, the fork has practically zero support from Bitcoin miners or exchanges. On Aug 1 the vast majority of miners and exchanges will stay with the current network. Without significant miner support the forked network will run extremely slowly, and it will be vulnerable to several kinds of attacks. Without exchange support the forked network will not have economic value, and will quickly become irrelevant.

Although August 1 will likely not be a problem either way, there is another date that will. Around the end of October, another proposal to fork the network is scheduled, and this one is supported by miners and exchanges. What will happen then is much more murky. It will become clearer as the date approaches.

23
jstanley 4 days ago 1 reply      
I wrote this in case anyone wants more information: https://smsprivacy.org/bip148-uasf

Should be a bit more informative than TFA.

24
jancsika 4 days ago 0 replies      
I haven't kept up with Bitcoin tech for awhile. Hopefully the following questions are relevant here:

1. What percentage of Bitcoin's PoW belongs to Bitmain?

2. Are the drivers for Bitmain's hardware free-as-in-freedom?

3. Is mining hardware in the same class as Bitmain's manufactured anywhere in the world other than China?

Edit: Bonus question: If all cutting edge hardware tends to be developed and manufactured in one particular spot in one particular nation state, and if Bitcoin mining efficiency now depends mainly upon the manufacture of newer, more powerful hardware, does that change any of the implicit assumptions made in the Bitcoin whitepaper? (Esp. considering that same nation state has put a hard speed limit on all data moving in/out its borders.)

25
nthcolumn 4 days ago 1 reply      
I obviously don't understand this at all. I thought it was distributed and that that was the whole point.
26
nemoniac 4 days ago 2 replies      
What time zone is "GMT+0100 (IST)" supposed to be?

India Standard Time is something like GMT+5.

27
pyroinferno 4 days ago 0 replies      
Good, good. Bitcoin will fall, and eth will become king.
28
davidbeep 4 days ago 0 replies      
Terribly uninformative. I'm actually surprised the coin is trading as high as it is considering all the uncertainty. I expected a greater freak-out from technologically inept investors/speculators.
29
ented 4 days ago 1 reply      
What is the max tx/s speed? Still no consensus???
30
jageen 4 days ago 1 reply      
It will surely affect ransomware collectors.
31
cgb223 4 days ago 1 reply      
What is the disruption?

Why is this happening?

32
ragelink 4 days ago 2 replies      
Anyone know why, out of all time zones, they picked Central America Time?
33
wyager 4 days ago 0 replies      
This looks like FUD. As I recall, the owners of Bitcoin.org are mad that their exact proposed scaling solution didn't go through.

85+% of miners are signaling support for segwit2x, so it's extremely unlikely that there will be any disruption. https://coin.dance/blocks

28
AMD Ryzen Threadripper 1920X and 1950X CPUs Announced anandtech.com
376 points by zdw  4 days ago   316 comments top 21
1
ChuckMcM 4 days ago 9 replies      
I really hope the ECC carries through. It irritates me to have to buy a "server" CPU if I want ECC on my desktop (which I do), and it isn't that many gates! It's not like folks are tight on transistors or anything. And on my 48GB desktop (currently using a Xeon CPU) I'll see anywhere from 1 to 4 corrected single-bit errors a month.

For things like large CAD drawings, which are essentially one giant data structure, silently flipping a bit somewhere in the middle can leave the file unable to be opened. So I certainly prefer not to have those bits flip.
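A toy illustration of that failure mode (Python; the JSON blob is just a stand-in for a real CAD file format):

    # One silently flipped bit in a serialized structure can make the
    # whole file refuse to open.
    import json

    blob = bytearray(json.dumps({"part": "bracket", "x": 10.5}).encode())
    blob[0] ^= 0x20  # flip a single bit: '{' (0x7B) becomes '[' (0x5B)
    try:
        json.loads(blob.decode())
    except ValueError as e:
        print("file no longer opens:", e)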

2
walkingolof 4 days ago 3 replies      
Best thing about this is that competition is back (in the high-end x86 market) and the winner is the consumer; the CPU market has been stale for a while.
3
SCdF 4 days ago 14 replies      
How do people with many CPU cores find it helps their day to day, excluding people who run VMs or do highly parallelisable things as their 80% core job loop (i.e. you run some form of data.parallelMap(awesomeness) all day)?

Does it help with general responsiveness? Do many apps/processes parallelise nicely? Or is it more "Everything is 99% idle until I need to run that Photoshop filter, and then it does it really fast"?
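For reference, the pattern alluded to above, sketched in Python with the standard library (the function name is just a placeholder):

    # "data.parallelMap(awesomeness)": map a function over a dataset with
    # one worker process per CPU core.
    from multiprocessing import Pool

    def awesomeness(x):
        return x * x  # placeholder for real per-item work

    if __name__ == "__main__":
        with Pool() as pool:  # defaults to os.cpu_count() workers
            print(pool.map(awesomeness, range(8)))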

4
jokoon 4 days ago 3 replies      
A CPU that large reminds me of the famous remark by Grace Hopper about how light moves about 30cm in one nanosecond, theoretically implying that a CPU could have some kind of maximum size.

Of course, since current CPUs contain multiple cores, it doesn't directly apply.
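The back-of-the-envelope version of that remark (Python; the clock speeds are chosen purely for illustration):

    # How far light travels in one clock cycle - the physical intuition
    # behind Hopper's 30cm nanosecond.
    c = 299_792_458  # speed of light, m/s
    for ghz in (1, 4):
        seconds_per_cycle = 1.0 / (ghz * 1e9)
        cm = c * seconds_per_cycle * 100
        print(f"{ghz} GHz: light travels ~{cm:.1f} cm per cycle")
    # 1 GHz -> ~30 cm; 4 GHz -> ~7.5 cm. A signal can't cross a die much
    # wider than that within a single cycle, which is one reason large
    # chips are split into cores and separate clock domains.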

5
shmerl 4 days ago 3 replies      
I'm still waiting for this bug to be fixed: https://community.amd.com/message/2796982

Note: this isn't a bug in gcc, but looks like a hardware bug related to hyperthreading.

6
InTheArena 4 days ago 3 replies      
What I am going to be interested in is this versus EPYC parts. I think the higher clocks are mainly to achieve some of the more insane (and useless) FPS counts for games. If you are willing to ramp down the FPS to a number that your monitor can actually display, it may be better to find a general-purpose EPYC MB and chipset, and use that. Especially if homelab / big data / compiling Linux / occasional gaming is your cup of tea.
7
johnbellone 4 days ago 4 replies      
It's been a while since I've built a computer with my own two hands, but either that man's hands are really small or, hot damn, AMD Ryzen CPUs are huge.
8
arcaster 4 days ago 3 replies      
I'm still waiting for a more diverse set of synthetic and real-world benchmarks. It'll be interesting to see how IPC performance holds up with Threadripper; however, I think the most interesting debate will be whether the 1920X or the lowest-end Epyc CPU is the better buy.

Unfortunately, even as an enthusiast, $799 is more than I'm willing to spend on a CPU. I'm also still hard-pressed to build a Ryzen 1700 system, since I can purchase an i7 7700 from MicroCenter for about $10 less than the Ryzen part (and have equal or better general performance with notably better IPC).

9
strong-minded 4 days ago 0 replies      
A simple formula: The 1920X beats the 7920X by a few hundred in Cinebench and a couple of hundred in the pocket.

I wonder if the 'Number Copy War' (started with the X299 vs. X399 Chipset) will continue throughout the year.

10
thoughtexprmnt 4 days ago 10 replies      
Since the article does refer to these as desktop CPUs, I'm curious what kind of desktop workloads people are running that could benefit from / justify them?
11
drewg123 4 days ago 3 replies      
It is great that they're announced for an August release, but when can I actually BUY one?

Given that Naples (aka Epyc) was "released" in June, I went looking to actually buy one, and I could not find a single place selling them. Not Newegg, nothing local, nothing in Google shopping, etc.

12
dis-sys 4 days ago 3 replies      
$999 list price translates to $1100-$1150 retail in countries with a GST-style tax; then you factor in an expensive motherboard, heat sink, and 64GB of RAM, and the upgrade is like $2k.

The problem is, with this confirmed return of competition between Intel and AMD, I am no longer sure whether it is a good idea to upgrade now, as this is basically the first iteration between the two. Are they going to release something even better in 6-12 months' time?

13
crb002 4 days ago 4 replies      
AMD needs to come out with a few AVX-1024 instructions for vector ops. Essentially make one core into a GPU that doesn't suck at branching.
14
eemax 4 days ago 1 reply      
The comparisons in this article are mostly against the high-end Intel core line, but these CPUs support server / enterprise type features like ECC memory, lots of PCI-E lanes, and virtualization features (I think?).

Shouldn't Threadripper be compared to Xeons?

EDIT: Or rather, what I'm really wondering is what these CPUs lack that AMD's server line (EPYC) have.

15
sergiotapia 4 days ago 1 reply      
I'm waiting for these to launch so I can build a great multi-threaded computer. My Elixir apps are waiting for all these threads! :)

Does anyone know if Plex is going to see much benefit transcoding video files on the fly?

16
gigatexal 4 days ago 0 replies      
As soon as I have some funds I will be getting one, but only if ECC is supported -- what would be even better is if one could do a mild OC on the part but also have ECC.
17
balls187 4 days ago 1 reply      
Quad channel, so you have to install RAM in 4 matched sticks at a time?
18
api 4 days ago 0 replies      
I did a lot of work with artificial life and evolutionary computation in the early 2000s. Wish we had these chips back then.
19
DonHopkins 4 days ago 0 replies      
How long does it take to drip a threa?
20
jhoutromundo 4 days ago 0 replies      
Opteron feels \o//
21
mrilhan 4 days ago 1 reply      
I recently tried to go the AMD/Ryzen route. I like an underdog comeback story as much as the next guy.

But be warned: motherboards that "support" Ryzen do not in fact support Ryzen out of the box. You have to update the BIOS to support Ryzen. How do you POST without a CPU, you ask? Who knows? Magic, possibly.

I still don't understand how AMD expects their customers to have more than one CPU (and possibly DDR4-2133 sticks) to be able to POST and update the BIOS.

I returned everything AMD and went back to safe, good ole Intel. Worked on first try. I'm never getting sucked into AMD hype again.

Also, when I went back to return the AMD components to Fry's, the manager said they were aware/used to getting Ryzen returns because of this.

29
Hacker's guide to Neural Networks (2012) karpathy.github.io
420 points by catherinezng  3 days ago   38 comments top 11
1
frenchie4111 3 days ago 6 replies      
I've read so many of these, and none of them include the information I need.

If someone wrote a "Hackers guide to Tuning Hyperparameters" or "Hackers guide to building models for production", I would read/share the shit out of those.

2
NegatioN 3 days ago 2 replies      
This has been submitted quite a few times in the past: https://hn.algolia.com/?query=karpathy.github.io%2Fneuralnet...
3
postit 3 days ago 0 replies      
A solid grounding in probability theory and multivariate calculus is the first thing you should spend your time on if you want to understand NNs, ML, and most of AI once and for all.

These hacker guides only scratch the surface of the subject, which in part contributes to the aura of black magic that haunts the field. I'm not saying they are a bad thing, but they need to be complementary material, not the way to go.

4
stared 3 days ago 0 replies      
When it comes to backpropagation, PyTorch introduction contains some valuable parts: http://pytorch.org/tutorials/beginner/deep_learning_60min_bl...
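The core of that tutorial fits in a few lines — a minimal sketch, assuming PyTorch is installed and using an arbitrary function:

    # Minimal autograd example in the spirit of the linked intro:
    # compute dy/dx by backpropagation.
    import torch

    x = torch.tensor(2.0, requires_grad=True)
    y = x * x + 3 * x   # y = x^2 + 3x
    y.backward()        # backpropagate through the graph
    print(x.grad)       # dy/dx = 2x + 3 = 7 at x = 2 -> tensor(7.)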
5
GoldDust 3 days ago 0 replies      
As someone who is quite new to this field and also a software developer, I really look forward to seeing this progress. I write and look at code all day, so for me this is much easier to read than the dry math!
6
debacle 3 days ago 0 replies      
Static neural networks on Rosetta Code for basic things like Hello World, etc, would do a lot to aid in people's understanding of neural networks. It would be interesting to visualize different trained solutions.
7
nategri 3 days ago 0 replies      
Knew this wasn't for me when he had to introduce what a derivative was with a weird metaphor. I like this approach to teaching things (it's Feynman-y) but half the time I end up hung up on trying to understand a particular author's hand-waving for a concept I already grok.
8
adamkochanowicz 3 days ago 0 replies      
Thank you for posting this! I hadn't seen it and have been looking for a simple guide like this one.
9
finchisko 3 days ago 0 replies      
Thanks for sharing; apparently I missed the past submissions.
10
amelius 3 days ago 5 replies      
Hmm, I've just scanned through this, but it seems this gets the concept of stochastic gradient descent (SGD) completely wrong.

The nice part of SGD is that you can backpropagate even functions that are not differentiable.

This is totally missed here.

11
du_bing 3 days ago 0 replies      
Wonderful guide, thanks for sharing!
30
Improving air conditioner efficiency could reduce worldwide temps nytimes.com
274 points by aaronbrethorst  4 days ago   330 comments top 34
1
djsumdog 4 days ago 14 replies      
So we curb emissions by building a bunch of new A/C units? Sorry, that's silly.

CO2 is just one of many, many forms of pollution. Think you're doing your part by purchasing a hybrid or electric vehicle? There are barrels of oil that go into those tires, the plastics, not to mention all the pollution that goes into battery production. If your car is fuel efficient, the best thing you can do for the environment is drive it until the wheels fall off. When you do need to purchase a replacement, get a used hybrid or electric.

Climate change/CO2 is not the problem. It's the symptom of rampant consumerism. We can't buy and purchase our way out of destroying the planet. We have to consume less, build cell phones that are upgradable and last a decade instead of 2 ~ 3 years. Companies need to be praised for smaller factories and lower sales for products that cost more and last longer.

That is a very very huge shift in the way we think. I'm not sure if it's even remotely feasible or what it would take to convince people, industry, the world to simply consume less.

2
cannonpr 4 days ago 3 replies      
I hate to say it, but AC always felt like thoughtless engineering and consumerism, especially the electric varieties. It's ironic that in a sunny, energy-rich environment, you spend extra energy on a heat pump. In a lot of environments, better architecture will take care of the problem via passive methods; additionally, evaporative methods work pretty well in dry environments and pollute considerably less. Failing that, hell, at least use some solar energy to run the heat pumps locally; at least stop burning stuff to power them.

Failing all of the above, stoicism isn't that bad, honest. Neither are lifestyle changes that shift high-activity periods to later in the day; they are widely practised in Mediterranean countries.

3
clumsysmurf 4 days ago 4 replies      
Trump's 2018 budget zeroes out funding for Energy Star, which, among other things, helps consumers pick the most efficient devices and save money in the long run.

What other ways can consumers compare the efficiency of A/C units? I would think some standardized testing and labels would be required.

4
bradlys 4 days ago 8 replies      
> The Lawrence Berkeley study argues that even a 30 percent improvement in efficiency could avoid the peak load equivalent of about 1,500 power plants by 2030.

Okay, but where is this 30% jump in efficiency going to come from? That seems like a pretty big leap!

5
davidw 4 days ago 3 replies      
People in the US consume way too much air conditioning. It's pretty common where I work for people to have sweaters to put on inside due to the AC. Outside it's in the 80s, with something like 10% humidity in the summer - absolutely perfect unless you're doing hard labor in direct sunlight.
6
SilasX 4 days ago 0 replies      
... only if this doesn't temporarily bid down energy prices and lead others to use the same energy somewhere else.

https://en.wikipedia.org/wiki/Jevons_paradox

Note: the more potential uses a resource has, the more vulnerable it is to Jevons effects, where people use a resource more in response to being able to use it more efficiently.

The real benefit of energy efficiency is not that it reduces energy use by itself, but that it reduces the utility loss from implementing the caps and taxes necessary to actually reduce total usage.

7
rb808 4 days ago 5 replies      
Half the article wasn't about A/C efficiency; it was about HFCs being a more potent greenhouse gas than CO2, and the agreement to phase them out.

Is there really an HFC replacement - what is it? I wasn't aware.

8
SmellTheGlove 4 days ago 3 replies      
I have a Fujitsu mini split that has been awesome in terms of bringing my electric bill down versus window units (we live in Maine; central A/C is less common here, and few homes were built with it until the 2000s). It does, like most splits, use R-410A, but I'd be happy to use something else if it didn't kill the efficiency.

In parallel with refrigerants and efficiency, though, I wonder if the article misses by not mentioning geothermal cooling. Those systems are expensive, but if you can bring down the install cost and power them with cleaner energy, you solve some other problems. In developing nations, maybe you try to build larger systems designed to cool multiple residential units - and start to require it for mid/high-rise residential construction?

9
johngalt 3 days ago 0 replies      
A large number of comments here are acting like A/C is some wasteful extravagance, or like people who live in warm climates should just move or 'get used to the heat'.

I don't mean to spoil the moralizing fun here, but cooling uses less energy than heating. So perhaps you should put on a sweater when it drops below freezing where you live. You'll get used to it. Or you could move.

10
Dangeranger 4 days ago 4 replies      
Could higher efficiency cooling be done by using more evaporative cooling systems (Swamp Coolers)[0] rather than traditional AC units?

There are climates where evaporative cooling is not effective, but perhaps they would be useful in the majority of climate regions.

[0] https://en.wikipedia.org/wiki/Evaporative_cooler

11
Element_ 4 days ago 1 reply      
Toronto has a deep lake water cooling system that pumps cold water from the bottom of Lake Ontario and circulates it around the downtown core. It is capable of cooling 100 high-rise buildings. I believe when it was constructed it was the largest system in North America.

https://en.wikipedia.org/wiki/Enwave

12
adgqet 4 days ago 1 reply      
Misleading headline. Research found that the temperature increase could be lowered by one degree centigrade.
13
bcatanzaro 4 days ago 0 replies      
The planet would be better off if people moved out of the cold North and instead used more air conditioning. That's because heating is incredibly carbon intensive. Think about the temperature gradients in New York in the winter time. Going from 20 or 30 degrees F to 70 degrees is more carbon intensive than going from 90 degrees to 70 degrees, and the number of days it's cold in the winter is often greater than the number of days it's hot in the summer. The overall carbon burden of heating is greater than that of cooling.

This means that the carbon angst directed at AC is primarily a puritanical impulse. It's a new thing! It feels nice! So it must be a sin!

However, refrigerants are bad for climate because they have huge greenhouse gas potential multipliers.

So the solution isn't really to improve air conditioner efficiency, it's rather to find refrigerants with less warming potential.

And move everyone out of New York and Boston - their climate conditioning is very carbon intensive.

14
dmritard96 4 days ago 2 replies      
One thing missing from this article is demand response:

"It matters, researchers say, because cooling has a direct relationship with the building of coal-fired power plants to meet peak demand. If more air-conditioners are humming in more homes and offices, then more capacity will be required to meet the demand. So 1.6 billion new air-conditioners by 2050 means thousands of new power plants will have to come on line to support them."

We (https://flair.co) offer demand-response tech for minisplit control that can help avoid having to build all the 'peaker plants'. This gets extra interesting when you add intermittent supply (solar/wind) and grid-tied storage (Tesla has been making big pushes here, among others). Hopefully we are able to scale these up in parallel to prevent a bunch of coal-fired plants from being built for the 1-3% of the year with the hottest days.

15
pierrebeaucamp 4 days ago 2 replies      
I'm pretty disappointed in the numbers they chose for a vegetarian diet. It feels to me as if they actively went ahead and picked the lowest values they could find in their source. (Btw, the source itself is a good read imo: http://www.drawdown.org/solutions/food/plant-rich-diet)

You could argue that people are not willing to go vegetarian or even vegan - but at least level the numbers when comparing it with other solutions: if everyone went vegetarian, their source states 132 gigatons of CO2 reductions.

I also liked this quote from the report:

> As Zen master Thich Nhat Hanh has said, making the transition to a plant-based diet may be the most effective way an individual can stop climate change.

16
quadrangle 4 days ago 0 replies      
We already have solutions for dramatically more effective conditioning of indoor spaces. Simply do other effective things to cool the indoors: modern insulated whole-house fans like Airscape, exterior shades, etc. See http://www.treehugger.com/sustainable-product-design/10-over...

The efficiency focus is itself misguided in several ways. http://freakonomics.com/podcast/how-efficient-is-energy-effi...

17
EGreg 4 days ago 3 replies      
Not for nothing, but ain't greenhouse gases only the short-term problem?

The Earth radiates a fixed amount of energy into space every year. But when we produce electricity etc., no matter how we do it, more than half of the energy escapes as heat - a byproduct of boiling the water or whatever!

This isn't sustainable in the long run either! We are basically raising the temperature of the atmosphere even without greenhouse gases.

Tell me where I'm going wrong:

https://dothemath.ucsd.edu/2012/04/economist-meets-physicist...

https://dothemath.ucsd.edu/2011/10/why-not-space/
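For scale, here is a rough version of the comparison those links walk through (all numbers are approximate assumptions, not measured data):

    # Direct waste heat from all human energy use vs. greenhouse forcing,
    # using rough round numbers.
    world_power = 18e12      # W, approx. total primary energy consumption
    earth_surface = 5.1e14   # m^2
    waste_heat = world_power / earth_surface
    ghg_forcing = 2.3        # W/m^2, approx. anthropogenic greenhouse forcing
    print(f"waste heat: {waste_heat:.3f} W/m^2")           # ~0.035 W/m^2
    print(f"greenhouse forcing is ~{ghg_forcing / waste_heat:.0f}x larger")
    # Greenhouse gases dominate today; waste heat only becomes a limit if
    # energy use keeps growing exponentially for centuries - the long-run
    # point the linked posts make.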

18
pdelbarba 4 days ago 0 replies      
I'm a little confused why solar isn't mentioned. Peak temperature and peak solar flux are highly correlated so this isn't some weird grid storage problem. Tighten standards for new systems and construction to be a little more efficient and let economics go to work.
19
zackmorris 4 days ago 0 replies      
One of the most wasteful components is the condenser. Salt water air conditioners can accomplish the same thing much more easily (50-75% savings):

http://www.happonomy.org/get-inspired/salt-water-air-conditi...

https://www.cnet.com/news/salt-driven-air-conditioner-looks-...

This is very old technology, so people probably chose aesthetics over cost. Although when I think tacky, I think window air conditioning units.

20
PhantomGremlin 4 days ago 0 replies      
We need a corresponding article telling us how many power plants we can avoid building by not mining Bitcoin. I love the general idea of cyber currency / bitcoin / block chain, but I hate that the implementation requires so much energy.
21
grogenaut 4 days ago 0 replies      
If we bumped efficiency 30%, how many more people would run the AC 30% more?
22
clenfest 4 days ago 0 replies      
In this house we obey the laws of thermodynamics!
23
thomk 4 days ago 0 replies      
Slightly off-topic, but I just had a new HVAC system put in my house, and one of the things the tech pointed out to me is that effective AC has a lot to do with effective dehumidifying.

I don't know why it never crossed my mind before but now when I transition from indoors to outdoors (and back) I notice the humidity delta as much as the temperature delta.

24
uses 4 days ago 0 replies      
It's funny how, almost without fail on HN, I can go to the comments and one of the top 1-5 comments quickly dismisses the main premise of the linked article. It's ridiculous how common this is. I've been reading HN for over a decade and I don't remember if it was always like this.
25
afinlayson 4 days ago 0 replies      
Air conditioners are really inefficient, and people run them to excess. And because there's no carbon tax, running them is too cheap to curb usage. Sure, it won't solve the whole problem, but solving this issue would be very valuable to the planet.
26
kylehotchkiss 4 days ago 3 replies      
Wouldn't switching to DC motors for both the fan and the compressor save a lot of power?
27
Mz 4 days ago 1 reply      
Passive solar and vernacular architecture makes vastly more sense. I get so tired of these schemes to make our broken lifestyles "more efficient." Just adopt a better method entirely and quit quibbling about tiny efficiency gains.
28
axelfontaine 4 days ago 12 replies      
American air conditioners running at full power, chilling the interior and dripping on the sidewalk below on a hot day, always deeply trouble me. Maybe it's my European view on things, but for contrast, here in Munich we aren't just building out a city-wide heat network, we also have a cold network! Cold river water flows through the pipe network that traverses the city, and large office buildings can get connected to it. This way they can save massively on electricity for air conditioning by having the water do the cooling instead. And then once the water has traversed all pipes, it simply gets released back into its stream on the other end of town, just as clean as when it entered, and only slightly warmer.
29
petre 4 days ago 0 replies      
Using a white roof and employing other passive cooling techniques could improve AC efficiency, or even make it redundant.
30
maxxxxx 4 days ago 2 replies      
Just insulate the houses in the US. I am always shocked at how badly built US houses that cost $600k are.
31
return0 4 days ago 0 replies      
Let's just build a giant A/C and put the external unit on the moon.
32
jwilk 4 days ago 1 reply      
Wrong symbol in the title:

º = ordinal indicator

° = degree

33
Zarath 4 days ago 0 replies      
Open a damn window. I'm sure in some places AC is necessary, but way too often I hear/see people running it when there is absolutely no reason other than that they are mildly uncomfortable.

Seriously, this problem isn't going to be fixed until people actually pay the true cost of what they are doing: Electricity + Global Warming externalities.

34
33W 4 days ago 3 replies      
[SPOILER ALERT]

Can we change the post title to match the article?

"If You Fix This, You Fix a Big Piece of the Climate Puzzle"

       cached 18 July 2017 02:11:01 GMT