Hacker News with inline top comments - 14 Jul 2017 (Best)
1
Net Neutrality Day of Action: Help Preserve the Open Internet blog.google
1631 points by ghosh  1 day ago   423 comments top 52
1
mabbo 1 day ago 32 replies      
If Google were actually serious about Net Neutrality, they would use their insane market power to protect it.

How? Well, a simple statement saying "any ISP who abuses net neutrality will have their customers cut off from Google products". No Google search, no YouTube, no Gmail. Have those requests instead redirect to a website telling the customer what their ISP is doing, why Google won't work with them, and how to call to complain to the ISP. Make the site list competitors in the user's area that don't play stupid games.

Is this an insane idea? Yep. Would Google come under scrutiny because of their now-obvious market power? Oh definitely. And Google would probably lose money over it. But it would certainly work.

People don't get internet, and then decide to use Google. They want Google and then get internet for that purpose.

edit: an hour later, fixing an autocorrect word

2
AndrewKemendo 1 day ago 5 replies      
> Thanks in part to net neutrality, the open internet has grown to become an unrivaled source of choice, competition, innovation, free expression, and opportunity.

Unless my history is wrong, and please correct me if that is the case, until the Title II decision in 2015 there were no regulations preventing an ISP from discriminating among network traffic. So to say that Net Neutrality has been key to an open internet from 1980-2015 seems without merit.

I think the argument here is the same for any argument of nationalization: To turn a private good into a public one.

Businesses, local and federal governments, have all contributed to the infrastructure that is the internet. So the private company can't say, "well it was all our investment" and equally the Government can't say "This is a public good."

3
ambicapter 1 day ago 11 replies      
This has been the weakest day of action I could imagine. I thought sites were going to be throttled. Turns out it's just some color changes and, oh, reddit has a fancy "slow-loading" gif for their website name. A real wake-up call!
4
bobcallme 1 day ago 5 replies      
"Net Neutrality" in its final form did not solve or fix any problems with the Internet. The definition of "Net Neutrality" is poorly defined, too vague and does not have any proposed legislation attached to "fix" things. Even when new rules were implemented, ISPs still throttled torrents and manipulated traffic. The only way to fix the Internet is to do so from a technical perspective, not by adding more regulations that ISPs won't obey (they work that into their business model). The "Internet" has never been free and has always been controlled by a handful of entities. The only fix for the Internet is if everyone actively participates in the Internet's infrastructure and we work to create technologies that thwart active threats from ISPs or that gives ISPs competition.

TL;DR: I don't support Net Neutrality.

5
cyphar 1 day ago 1 reply      
I know this is "old news" now, but it's very fascinating that Google is suddenly so concerned about "the open internet" 4 days after EME was ratified (a proposal that they authored and forced other browsers into supporting thanks to their enormous browser share).

It feels like Google (and other companies for that matter) are only concerned about "the open internet" when it benefits their bottom line. In fact, I'm not convinced that Google _does_ care. For SOPA and PIPA they actually did a (lukewarm) blackout of their site for the day of action. Wikipedia shut down on that day. Where has all of the enthusiasm gone?

6
EdSharkey 1 day ago 4 replies      
I don't understand the logic of ISPs throttling certain sites based on the traffic to those sites.

As a consumer on an ISP's last-mile lines, I make a series of TCP requests and I expect responses. Fill my pipes with those responses as best you can and charge me for the privilege. If you're not making enough money on that, charge me more for the bandwidth.

Market-wise, why would an ISP do anything other than fill my pipe with what I'm asking for?

An ISP should make all the money it needs to make off my service subscription. It's not too far of a leap for me to imagine U.S. laws being changed that restrict ISPs to only being able to charge the end-user for their subscriptions, with heavily regulated flat fees for peering arrangements and co-location services placed near the consumer.

The obvious shenanigans that are ramping up here will eventually lead to a massive consumer backlash and a regulatory hammer coming down. People are not going to forget what the open internet looked like.

7
rtx 1 day ago 6 replies      
FCC Chairman Ajit Pai: Why He's Rejecting Net Neutrality
https://www.youtube.com/watch?v=s1IzN9tst28
8
peterashford 1 day ago 4 replies      
As a New Zealander, I find it extraordinarily inappropriate that global infrastructure like the Internet is being shaped by the whims of US politics and corporate culture. The Internet is a global network of global concern and it should be above the manoeuvring of Republicans and American Internet providers.
9
gremlinsinc 9 hours ago 0 replies      
So glad I live in Utah, where we have XMission (Pete Ashdown is a huge supporter of the EFF, net neutrality, and anti-NSA efforts) and Google Fiber (Google is a big supporter as well). Loved XMission, but my new landlord only has Google Fiber installed, so I'm using that; both had 1 Gbps connections.

Two great ISPs who WON'T be pulling shenanigans like Comcast/AT&T when net neutrality is destroyed.

Too bad more people in America don't have good choices... I do think the biggest thing they could do for 'action' would be to block all Comcast/AT&T users from Google, Facebook, Twitter, YouTube, and Reddit every Monday in protest, until the ISPs cry and beg and plead with the FCC to reinstate net neutrality.

If it's legal to prioritize some websites over others, then it's legal for those same websites to prioritize certain ISPs over others...

10
lerpa 1 day ago 2 replies      
Net neutrality just helps the status quo, and forces the "evil greedy ISPs" to take your money. Yeah let's show them by giving them money and no competition to their business... wait.

Vote for less regulation, not just getting rid of NN but getting rid of the monopolies that exist at the local level.

11
crucini 11 hours ago 0 replies      
While I don't have a good grasp on the larger issue, I hope we can protect small players from being squeezed. In my limited understanding, there are really two separate things here: Comcast vs. YouTube and Comcast vs. startups. As I understand it, Comcast gets mad that they have to invest in infrastructure so people can watch YouTube. They think YouTube is free-riding on their infrastructure. Comcast is envious of YouTube's profits and eyeballs. So Comcast wants to squeeze money out of YouTube. A battle between giants.

The other issue is that small sites including startups could get throttled almost incidentally in this war. They don't use much bandwidth, being small, but if Comcast enacts some "bizdev" process where it takes six months of negotiations to get into the fast lane, any deal below $1M is probably not worth their time.

This is how cell phone software worked before the iPhone - get permission before you can develop (IIRC). If we end up with fast-lane preferential pricing, it should really be available to the smallest players. Ideally it should be free, but the Apple app store model would work - $99/year for fast lane access until your bandwidth is really significant. But would the individual have to pay $99 to every major ISP out there?

12
JoshTriplett 1 day ago 2 replies      
Now if only this were linked from the bottom of google.com.
13
zackbloom 1 day ago 1 reply      
If you use Cloudflare you can install the Battle for the Net widget: https://www.cloudflare.com/apps/net-neutrality
14
natch 1 day ago 0 replies      
Am I going blind, or is Google not listed amongst the companies participating behind battleforthenet.com?

https://www.battleforthenet.com/july12/#participants

Why, Google?

Yes I see they sponsored https://netneutrality.internetassociation.org/action/ but why not get behind both sites?

15
thidr0 12 hours ago 2 replies      
One thing I don't understand about net neutrality. Say I'm a toll road. I built the road when cars were relatively small and light. Now, some cars are getting really heavy and big (think semi trucks) and are the majority of my traffic. Because of this, they beat up the road and cause more congestion. So I want to repair the road and/or add more lanes by increasing the toll on these trucks. But all the trucking companies are complaining and preventing me from doing it, thus ultimately hurting the small personal cars that want to zip through.

Obviously this is an analogy to net neutrality, so why is this reasonable situation fundamentally different? In a free market, shouldn't I be able to increase the tolls on my private infrastructure for those that put the most stress on it?

(Now I will say, the fact that there's only one toll road option for many people is anti-competitive and against the free market, but that's not this topic)

16
throwanem 1 day ago 0 replies      
In the Notice of Proposed Rulemaking (Docket No. 17-108), much is made of the rapid growth of the Internet under the former "light-touch" regulatory regime. The notice overlooks that this was also an environment in which competition among many Internet service providers could and did flourish.

Since then, the provision of connectivity has consolidated among only a few very large companies, which among them have strongly oligopolistic power to enforce whatever conditions they please upon their customers, both residential and commercial.

In the late-1990s, early-2000s environment of healthy competition among Internet service providers, utility-style regulation of ISPs, such as that here under consideration of repeal, was not a necessary measure.

However, in the current strongly oligopolistic environment, only the regulatory power of the United States Government can continue to provide and enforce sufficient oversight to maintain a semblance of free market behavior.

Internet-powered entrepreneurship greatly benefits the US economy. The small, and occasionally large, businesses thus created have an outsized economic impact in terms of taxes paid and jobs created. Absent a true free market, or even the regulatory semblance of one, for Internet connectivity, these businesses may well find themselves severely hampered in their ability to earn revenue, with concomitant negative effect on their ability to contribute to our economy.

As such, I must strongly urge that the new regulatory regime proposed in this filing not be adopted.

I thank you very kindly for your time and your consideration, and trust that you will decide in that fashion which you regard to best serve the interests of your constituents and of the nation which you serve.

(Also, the "Battle for the Net" folks would have done well to hire a UX designer - or perhaps to hire a different one. The lack of any clear confirmation that one's message has been sent fails to inspire confidence. Perhaps there's an email confirmation that has yet to arrive, but...)

17
heydonovan 1 day ago 1 reply      
The marketing for Net Neutrality is very poor. Just asked a few non-technical friends about it. A few responded with "Do you believe everything you read on the Internet?". Now if all their favorite websites were shut down for a day, that would get everyone's attention.
18
openloop 1 day ago 1 reply      
I am starting a small business. One of the decisions I must account for is network performance versus price. Perhaps I choose to partner with a company that my network deprioritizes. I am already at a disadvantage because I cannot afford to run my own lines or peer like large corporations.

These same corporations can invest in or purchase smaller new businesses and enhance their portfolio. Some already support network neutrality as they understand this.

I know my business depends upon my own effort. But I am sure many other small business owners face the same difficulty.

I know it is hard to be fair and objective in allowing access to the entire electromagnetic spectrum. Thanks for the article.

19
rf15 1 day ago 1 reply      
Can I contribute without being a US citizen? It seems to be a US-internal issue, but considering that most of the net belongs to the US, this might actually be a far more global question than is legally coverable/definable by US law.
20
shmerl 1 day ago 1 reply      
I didn't see any Net Neutrality related banner at: https://google.com

So Google didn't do what they could here.

21
joeyspn 1 day ago 1 reply      
22
FRex 1 day ago 1 reply      
I can't even enter the USA without a visa that is expensive, hard to get, and doesn't guarantee entry, but I'm getting all these net neutrality PSAs today telling me to send letters to the FCC and Congress... I'm supportive of the idea itself, but the America-centrism is a bit funny and stupid.
23
chroem- 1 day ago 1 reply      
It's disingenuous for big business to try to frame this as a grassroots movement for freedom on the internet when they were completely silent about illegal NSA spying. The only difference between NSA spying and losing net neutrality is that without net neutrality their profits might be threatened.
24
mychael 23 hours ago 0 replies      
Follow the money. Do you really think the biggest corporations in America support Net Neutrality because of some altruistic need for things to be "fair"?
25
openloop 1 day ago 0 replies      
I am starting a small business. One of the decisions I must account for is network performance versus price. Perhaps I choose to partner with a company that my network deprioritizes. I am already at a disadvantage because I cannot afford to run my own lines across states like large corporations.

These same corporations can invest in or purchase smaller new businesses and enhance their portfolio. Some already support network neutrality as they understand this.

I know my business depends upon my own effort. But I am sure many other small business owners face the same difficulty.

I know it is hard to be fair and objective in allowing access to the entire electromagnetic spectrum.

26
executive 1 day ago 0 replies      
Help Preserve the Open Internet: Repeal and Replace Google AMP
27
forgotmysn 1 day ago 0 replies      
If anyone would like to ask more direct questions about Net Neutrality, the ACLU is having an AMA on reddit right now: https://www.reddit.com/r/IAmA/comments/6mvhn3/we_are_the_acl...
28
daveheq 1 day ago 0 replies      
When everybody relies on the internet, even more so than phones, it's a public utility that needs protection from the greed-feeders.
29
Anarchonaut 1 day ago 0 replies      
Net neutrality (government's involvement in the Internet) sucks

https://www.google.de/amp/s/techcrunch.com/2017/05/19/these-...

30
yarg 1 day ago 1 reply      
The only real way to ensure net neutrality is to ignore the bullshit and implement a distributed secure internet.

Net neutrality could be forced into place, regardless of the laws passed by Congress or the malfeasance of the ISPs.

I see no reason why Google would ever support such a thing.

31
thinkingemote 1 day ago 1 reply      
Forgive me as a European but are there companies who oppose net neutrality? As in are there HN readers who work for them? If so, who are they and what are their reasons? Is the issue like same sex marriage where the only opposition is so laughably out of date or are there nuances?
32
blue_leader 12 hours ago 0 replies      
All this going on, and DARPA wants to put Ethernet jacks into our brains.
33
geff82 1 day ago 0 replies      
Greetings from Europe, where we have net neutrality. Good luck to my American friends with voting for a sane government in 3 years. Maybe there will be some remnants left of the country you could have been.
34
tmaly 1 day ago 0 replies      
Another channel to consider, but much more of a long-tail play, is to put some effort into the state-level political races. Many politicians, with the exception of wealthier business people, get started at the state level.
35
protomyth 1 day ago 0 replies      
Does anyone have actual legislation written up that I can point my Congresspeople to? Is there a bill that can be introduced that will accomplish the objective of "Net Neutrality"?
36
wenbert 1 day ago 0 replies      
If this turns out to be big amongst other things, then some "big" news will come up in the next few days to cover it up.

At least that's how they would do it in the Philippines.

37
pducks32 1 day ago 0 replies      
Off topic: this is a very nice site. It's clean, easy to read (iPhone and iPad), and I think it makes good use of Google's design language.
38
rnhmjoj 1 day ago 0 replies      
Google trying to preserve the Open Internet... yeah right.
39
nickysielicki 1 day ago 3 replies      
(This comment is a little bit disorganized, so I apologize for that.)

Far too many people don't seem to understand the arguments against net neutrality as it has been proposed... There's been much made about the "astroturfing" and automated comments on the FCC website that go against net neutrality-- but what about the reverse? John Oliver doesn't know what the hell he's talking about. Reddit and HN provide warped perspectives on the issue.

Don't you guys realize that no matter what policy is chosen, someone is getting screwed and someone is going to profit? Don't get me wrong, the ISPs are not exactly benevolent organizations. But I don't think they're evil either. Plain and simple, if you think this is a cut-and-dried, good-versus-evil, conglomerates-versus-little-guy issue, I think you're not hearing both sides of the issue. This issue is between content providers that serve far more bits than they take in, and ISPs, and there are billions of dollars on both sides.

In other words, don't think for a second that this is about protecting small internet websites from having to pay ransom. That's not what is going to happen. The only people who are going to be squeezed are the giants like Google, Netflix, etc., and it's no surprise that these are the people who are making such a fuss about it today.

The particular event that made me reconsider net neutrality was digging into the details of the Comcast/Netflix/Level3 fiasco a couple years ago. Everything I had heard about that situation made it sound to me like Comcast was simply demanding ransom. The reality of the situation is that L3 and Netflix acted extremely recklessly in how they made their deals, and IMO deserved everything that came to them. Much is made about "eyeball ISPs" and the power it gives them. In reality, I think Netflix has more power in swaying consumers, and I think they used that power to bail themselves out of a sticky situation by badmouthing Comcast.

I don't see how compensatory peering agreements would work out well in a net neutral world. Specifically, the FCC proposal for Title II classification (paraphrasing here) said that the FCC would step in when it believed one party was acting unfairly. That is far too open-ended, doesn't list any criteria for what that means, and it's not the FCC's job anyway; the FTC should be doing that.

But in general I don't think net neutrality is a good idea. I think that people are out of touch with internet access in rural parts of the US, and I don't think NN is beneficial for that situation at all. My grandmother pays $30/mo for internet access that she barely uses, and I don't think it's right to enshrine into law that Comcast can't offer her a plan where she pays $5/mo instead for limited access to the few sites she uses.

As a bandwidth-hogging internet user, a lack of net neutrality will probably mean that I will pay more. But maybe that's how it is supposed to be. The internet didn't turn out to be what the academics once hoped it would be. And that's okay. The internet should serve everyone, however they want to use it, and the market should be built around that principle-- not around decades-old cypherpunk ideals.

I think it's incredible that behemoths like Google have the nerve to paint this as if they care about an open internet. It's obvious that their dominance is what makes an open internet irrelevant.

40
tyng 19 hours ago 0 replies      
Funny thing is, I can't even visit blog.google from China
41
valuearb 1 day ago 3 replies      
I have never understood the need for net neutrality. That doesn't mean we don't need it, it means that no one has ever explained the need to me in a way that makes sense. Give me real world examples. What has any ISP done that would violate Net Neutrality that I would object to?
42
aryehof 1 day ago 0 replies      
Is this just an issue in the USA?
43
hzhou321 1 day ago 0 replies      
Google, Amazon, Netflix vs. ATT, Verizon, Comcast.

Monopolies vs monopolies.

Where's the freedom for us?

44
mnm1 1 day ago 6 replies      
Sorry Google (and FB, Amazon, etc.) this doesn't actually count as taking action. Not even a single link on their home page. An obscure post on a blog won't do shit. Let's stop pretending that you want net neutrality, Google, et al. Day of action my fucking ass.
45
dzonga 1 day ago 0 replies      
Simple way to understand Net Neutrality, look at the way AT&T prioritizes DirecTV Content on Mobile. It should be illegal, but well
46
aaronbrethorst 20 hours ago 0 replies      
Consider this your friendly reminder that Clinton would've preserved the NN rules set up under Obama, and we wouldn't even be having this discussion had she been elected.

Especially consider this the next time a friend says every politician is the same, or whatever.

47
unityByFreedom 1 day ago 0 replies      
I'm just bummed Google didn't change their banner like the SOPA days. Big miss there.
48
tyrrvk 1 day ago 5 replies      
I see a lot of shills posting their anti-network-neutrality stuff here, so I wanted to chime in to remind folks of a few things: telcos were at one point forced to share phone lines. Remember all those DSL startups? Remember Speakeasy? This was called local loop unbundling. What did the telcos do? Everything possible to break or interfere with these startup service providers. The telcos felt that it was "their lines". Customers were angry, and eventually local loop unbundling was dismantled. Ironically, France, South Korea and other nations copied this idea for their high speed network providers, and it actually worked! You can get high speed internet in these countries from a variety of providers. Competition! If the FTC/FCC weren't completely under regulatory capture, and telcos like AT&T were punished for this behaviour and competitors were allowed to provide services over last mile connections, then yes, we might not need something like Network Neutrality.

Instead we have entrenched ISP monopolies and no competition. So we need consumer protections like Title II and Network Neutrality. We also need community owned fiber networks springing up everywhere, which over time could lessen the need for regulation as market forces would prevail. However, entrenched monopolies like Comcast and AT&T have to be shackled. It's the only way.
49
throwawaycuz 1 day ago 10 replies      
Serious question, could someone please educate me.

1) How is Net Neutrality different from a slippery slope to communism?

2) During the President Obama years, my ISP in the U.S. offered 3 different tiers of service at 3 different prices. How is that pure "net neutrality"? (this was similar to the situation where in the U.S., rich lefty-liberals don't send their kids to public schools... but want poor conservatives to send their kids to public schools, rich lefty-liberals don't want public housing built in their neighborhoods... etc. etc... but still want to virtue signal that they're in favor of public education and public housing)

50
dmamills 1 day ago 1 reply      
This day is a joke.
51
idyllei 1 day ago 0 replies      
Net neutrality has been a buzzword for a while now. Large news companies like to harp on it just for views, and they don't really explain to viewers just what losing it will mean. FOX News's motto "We report. You Decide" makes it evident that large networks don't care about the validity of information, just that it generates the largest amount of revenue for them. Companies (and individuals) with money won't care about net neutrality; they can pay their way around it. But the casual user can't afford that, and they aren't being educated as to what this means for them. We need to get large news networks to accurately report the situation and how consumers can help.
52
pheldagryph 1 day ago 1 reply      
I understand why tech companies and VCs want net neutrality. But this protest is what is wrong with Silicon Valley "culture". It's incredibly out of touch with reality.

Are we really being asked to take this hill? Why? By whom?

History will record the hundreds of thousands of children who will die in the current famine affecting East Africa and the Arabian Peninsula. It will only exacerbate the current, historic, and costly human migration to Europe.

This is a matter of life and death for millions. Though, unfortunately, the cost can only be measured in human lives: https://www.oxfam.org/en/emergencies/famine-and-hunger-crisi...

2
Battle for the Internet battleforthenet.com
1238 points by anigbrowl  1 day ago   472 comments top 50
1
Clanan 1 day ago 11 replies      
Can someone please respond to the actual pro-repeal arguments (in a non-John-Oliver-smug way)? Everyone is focusing on "woe is the unfree internet!", which seems like a spoonfed, naive response with no content. And just having Google et al. on one side isn't enough of a reason, given their motivations. The given reasons for the current FCC's actions appear to be:

1. The Title II Regs were designed during the Great Depression to address Ma Bell and don't match the internet now.

2. The FCC isn't the right vehicle for addressing anti-competitive behavior in this case; the FTC would be better.

3. The internet didn't need fixing in 2010 when the regs were passed.

2
drucik 1 day ago 7 replies      
I don't get why I see arguments like 'Oh, why would it matter, it's not neutral anyway' or 'it won't change anything', and no one tries to explain why allowing an end of net neutrality would be bad. I would say the reason why net neutrality is important is the following:

'On paper', the end of net neutrality will mean that big companies like Google or Facebook (which, according to the website, do not support net neutrality [why would they, right?]) will pay the ISPs for priority connection to their service, and ISPs will be able to create two payment plans for their customers: a throttled network, and a high-speed, super unthrottled network for some premium money. And some people are fine with that: 'it's their service' or 'I only use email so I don't care' or other things like that.

But we are living in a capitalist world and things aren't that nice. If it is not illegal to slow down connections 'just because', I bet in some (probably short) time companies will start abusing it to protect their markets and their profits. I'd expect under-the-table payments, so that company F or B will be favored by a given ISP, and you can forget about startups trying to shake up the giants.

3
d3sandoval 1 day ago 2 replies      
If your internet browser were a hearing aid, the information coming in would be sound - whether that's your husband or wife asking you to do the dishes, a ring at your doorbell, or even an advertisement on the radio.

Now imagine if that hearing aid wasn't neutral in how it handled sound. Imagine if, when the advertisement played on the radio, it was louder than all other sounds around. At that time, you might miss an important call, maybe your wife just said "I love you", or perhaps there's a fire in the other room that you are now not aware of, because Clorox wipes demanded your full attention.

Without net neutrality, we lose the ability to choose our own inputs. Our provider, our hearing aid, gets to choose for us. This could mean slower video downloads for some, if they're using a competitor's streaming service for instance, but it could also mean the loss of vital information that the provider is not even aware exists.

By rejecting Title II recommendations, the FCC will introduce a whole new set of prioritization problems, where consumers no longer have the ability to decide which information is most important to them. And if the provider goes so far as to block access to some information entirely, which it very well could without Title II protections, consumers would be at risk of missing vital information, like a fire in the house or their husband saying "I love you".

4
pedrocr 1 day ago 5 replies      
I fully support the net neutrality argument, it seems like a no brainer to me. However I find it interesting that companies like Netflix and Amazon who heavily differentiate in which devices you can have which video quality[1] will then argue that ISPs shouldn't be able to differentiate which services should have which transport quality.

The situation seems completely analogous to me. I'm paying my ISP for a connection and it thinks it should be able to restrict which services I use on top of it. I'm paying a content provider for some shows/movies and it thinks it should be able to restrict which device I use to view them.

The argument for regulation also seems the same. ISPs don't have effective competition because physical infrastructure is a natural monopoly. Content providers also don't have effective competition because content access is also a natural monopoly because of network effects (right now there are 2-3 relevant players worldwide).

[1] Both of them heavily restrict which devices can access 4K content. Both of them make it very hard to have HD from non-standard devices. Netflix even makes it hard to get 1080p on anything that isn't the absolute mainstream (impossible on Linux for example).

5
marcoperaza 1 day ago 2 replies      
John Oliver, College Humor, and some comedian are featured heavily. You're going to need to do more than give liberal millennials something to feel smug about, if you actually want to win this political battle.

I don't know where I stand on net neutrality, but this is certainly not going to convince me.

6
eriknstr 1 day ago 4 replies      
Very recently I bought an iPhone and a subscription that includes 4G service. With this subscription I have 6 GB of traffic per month anywhere in EU, BUT any traffic to Spotify is unmetered, and I don't know quite how to feel about this. On one side it's great having unlimited access to all the music in Spotify at any time and any place within the whole of EU, but on the other side I worry that I am helping damage net neutrality.

Now Spotify, like Netflix and YouTube and a lot of other big streaming services, almost certainly has edge servers placed topologically near to the cell towers. I think this is probably ok. In order to provide streaming services to a lot of people you are going to need lots of servers and bandwidth no matter what, and when you do you might as well work with the ISPs to reduce the cost of bandwidth as much as possible by placing out servers at the edges. So IMO Spotify is in a different market entirely from anyone who hasn't got millions or billions of dollars to spend, and if you have that money it should be no more difficult for you to place edge servers at the ISPs than it was for them.

But the unmetered bandwidth deal might be harmful to net neutrality, maybe?

7
_nedR 1 day ago 1 reply      
Where were the protests, blackouts, outrage and calls for action from these companies (Google, Amazon, Netflix) when the Internet privacy bill was being repealed? I'll tell you where they were - In line outside Comcast and Verizon, eagerly waiting to buy our browsing histories.

We had their back the last time the net neutrality issue came around (let's be honest, their business depends on a neutral net). But they didn't do the same for us. Screw them.

8
franciscop 1 day ago 4 replies      
As a foreigner who deeply cares about the web, what can I do to help? For good or for bad, USA decisions on the Internet spread widely around the world. "Benign" example: the mess before UTF8, malign example: DRM and copyright fight.

Note: Besides spreading the word; I do not know so many Americans

9
melq 1 day ago 1 reply      
The form on this page requires you to submit your personal information for use by third parties. I refreshed the page 3 times and saw 3 different notices:

"Fight for the Future willcontact you about future campaigns.""Demand Progress willcontact you about future campaigns.""FreePress willcontact you about future campaigns."

No opt out, no thank you.

10
superasn 1 day ago 1 reply      
This is great. I think the letter textarea should also be empty.

Instead there can be a small wizard with questions like "why is net neutrality important to you", etc with a guideline on what to write.

This way each letter will be a differently expressed opinion instead of every person sending the same thing and may create more impact.

11
webXL 1 day ago 2 replies      
It saddens me to see HN jump on the bandwagon of an anti-free-market campaign such as this. Words such as "battle" and "fight" have no place when coming up with solutions to problems in a market economy. I know government and its history of cronyism are a big part of the problem, but to think that more regulation will make everything better is woefully misguided. How did it come to pass that there's so little trust and understanding in the system of voluntary, peaceful, free trade that has produced virtually all of the wealth we see around us? Sure there are tons of problems, but I'm sure you'll agree that they pale in comparison to those of state-run economies.

The mistrust of large corporations is definitely warranted. McDonald's doesn't give a rat's ass about your health as long as you're healthy and happy enough keep coming back. And the reason why people come back is because McDonald's knows they have options; enough so that we all have it pretty good dietary-wise. Consumers and suppliers don't need to organize protests and boycotts of fast-food chains. Likewise, I don't think the major ISPs give a rat's ass about our choice/speed of content, so long as we're happy enough to not jump to another provider. As with food vendors, more choice, not more regulation, is the answer. The market should determine what it wants; not bureaucrats under the influence of large corporations.

12
agentgt 1 day ago 2 replies      
I have often thought the government should provide an alternative option for critical services, just like they do with the mail and now health insurance (ignoring current politics).

That is, I think the net neutrality issue could be mitigated or become a non-issue if there were, say, a US ISP that operates anywhere there are telephone poles and public towers, analogous to the United States Postal Service (USPS).

Just like the roads (postal service), the government pseudo-owns the telephone poles and airwaves (FTC), so they should be able to force their way in.

I realize this is not as free market as people would like but I would like to see the USPS experiment attempted some more particularly in highly leverage-able industries.

13
exabrial 1 day ago 2 replies      
Hey guys,

The Trump administration expressed interest in having the FTC regulate ISPs. Does it really matter who enforces net neutrality as long as we have it?

It's no secret that ISPs have local monopolies, and that's an area of expertise the FTC has successfully regulated in the past (look at how the gas station economy works).

It's really time to move past the 2016 election and put petty political arguments aside. We're failing because we're divided. I beg everyone to please stop being smug, and push collaboration with the powers that be rather than confrontation.

14
kuon 1 day ago 2 replies      
I'm fully in favor of net neutrality, but I am not against premium plans for some content.

For example, let's say I have a 50/20Mb internet connection. I should be able to browse the entire internet at that speed. But if I want to pay extra to have, say, 100Mb with QoS only from Netflix, I am not against that kind of service.

15
elbrodeur 1 day ago 0 replies      
Hey everyone! My name is Aaron and I'm on the team that helped put together some of the digital tools that are making this day of action possible. If you find any issues please let us know here or here: https://github.com/fightforthefuture/battleforthenet
16
redm 1 day ago 0 replies      
I see everyone framing this conversation around Comcast charging customers to access websites. I feel that's just a talking point, not the real meat of the issue.

Regarding Backbones:

If I recall correctly, this originally came about over a peering dispute between Level 3's network and Netflix. The internet backbones work on settlement-free or paid peering. When there is an imbalance, the party with the imbalance pays. When there is balance, no one pays. This system has worked well for a very long time.

Regarding Personal Internet Access:

Consumer Internet connections are overbooked, meaning you may have a 100Mb link to the ISP, but the ISP doesn't have the capacity for all users to use 100Mb at the same time. In short, they aren't designed for all users to be using high capacity at the same time. These networks expect users to use bursts of capacity. This is why tech like BitTorrent has been an issue too.

There is a fundamental shift occurring where users are consuming far more network capacity per user because of technology like Netflix. I know I'm streaming 4k Netflix :D

17
bluesign 1 day ago 1 reply      
Why not lower the barrier to entry for other/new ISPs by forcing incumbents to share infrastructure for a fee, and then allow them to tier/price as much as they want?
18
Pigo 1 day ago 0 replies      
It's very disheartening that this is a battle that doesn't seem to end. They are just going to keep bringing proposals in hopes that one time there won't be enough noise to scare politicians, or worse, the politicians are already in someone's pocket, just waiting for the opposition level to be at a minimum. The inevitability vibe is growing.
19
mnm1 1 day ago 1 reply      
Are Google, FB, Amazon, and others actually supporting this and if so, how? I don't see anything on their sites about this. As far as I'm concerned, they're not doing anything to support this. And of course, why would they?
20
polskibus 1 day ago 0 replies      
That's a great illustration of what happens when you let the market be owned by only several entities. A long time ago there were more; with time, centralization happened, and now you have to bow to the survivors.

A similar situation, but at an earlier stage, can be observed on the cloud horizon - see Google, AMZN, MS, and maybe FB. They own so much traffic, mindshare and sales power that, while in theory they are not monopolies, together their policies and the trends they generate shape the internet world.

I'm not saying this current situation with Verizon et al is OK, just saying that if you intend to fix it, consider addressing the next centralization that is still happening.

21
pycal 1 day ago 1 reply      
There's truth in Ajit's comment that Americans' internet infrastructure just isn't as good as other countries'. Is that because of the regulatory climate? The ISPs receive a lease on the public spectrum; are they expected to meet a minimum level of service quality?

According to this source, the US rates low in many categories of internet access, e.g. % of people over 4Mbit, and average bandwidth:

https://www.fastmetrics.com/internet-connection-speed-by-cou...

22
sexydefinesher 1 day ago 0 replies      
*the American internet

Meanwhile the EU already has laws for Net Neutrality (though zero-rating is still allowed).

23
_eht 1 day ago 4 replies      
All I can find are arguments for neutrality, it seems like a very vocal crowd full of businesses who currently make a lot of money from people on the internet (reddit, Facebook, et al).

Anyone want to share resources or their pro priority internet stance?

24
callinyouin 1 day ago 0 replies      
I hope I'm not alone in saying that if we lose net neutrality I'll be looking to help organize and set up a locally owned/operated ISP in my area.
25
sergiotapia 1 day ago 1 reply      
"Net neutrality" sounds good but it's just more and more laws to regulate and censor the internet via the FCC.
26
leesalminen 1 day ago 1 reply      
I'm currently unable to submit the form on https://www.battleforthenet.com/.

https://queue.fightforthefuture.org/action is returning HTTP 500.

27
steve_taylor 1 day ago 0 replies      
This website gives me the impression that this is the latest cause that the left has repurposed as something they can beat us over the head with. The video thumbnails look like a gallery of the who's who of the left. This is disappointing, because this is an issue for all of us. People are sick and tired of the left beating them over the head with various causes and tend to rebel against them regardless of their merit. We shouldn't lose our internet freedoms over a petty culture war that has nothing to do with this.
28
AndyMcConachie 1 day ago 4 replies      
Just to be clear, this has nothing to do with the Internet, and everything to do with the USA. Most Internet users can't be affected by stupid actions of the FCC.

I guess I'm just a little annoyed that Americans think their Internet experience somehow represents 'the' Internet experience.

29
mbonzo 1 day ago 0 replies      
Ah, seems like this battle is just a part of the bigger war that is the ugly side of capitalism. The top companies that Millennials are raving about are threatening old companies, and as a result those old companies are making a pact to bring their rivals down.

Examples include Airbnb; the business is now being banned by cities like New York City, New Orleans, Santa Monica, and countless others. Another is Uber; it's banned in Texas, Alaska, Oregon (except Portland), and more. Now it's our beloved, favorite websites that are being targeted by Internet providers.

Who do you think will win this war?

30
untangle 1 day ago 0 replies      
I wouldn't care so much about net neutrality if there was open access to the last-mile conduit for broadband to my house. But there isn't. Comcast owns the coax and there is no fiber here (even though I live in the heart of Silicon Valley).

Comcast is conflicted on topics from content to voice service. So neutering net neutrality is tantamount to deregulating a monopoly. That doesn't sound smart to me.

31
scott_s 1 day ago 0 replies      
I feel like this site is missing context - what recent events have caused all of these organizations to protest? I found this NY Times article gave me a better idea of this context: "F.C.C. Chairman Pushes Sweeping Changes to Net Neutrality Rules", https://www.nytimes.com/2017/04/26/technology/net-neutrality...
32
web007 1 day ago 0 replies      
I can't help but feel like the site would be more effective if they removed https://www.battleforthenet.com/how-we-won/ - "we" didn't win, we just got a short reprieve from losing.
33
coryfklein 1 day ago 0 replies      
Is anybody else fatigued by this "battle"? I have historically spent time and effort supporting net neutrality, but it seems to rear its head again every 6 months.

It only seems inevitable now that these big-budget companies with great incentive will get their way.

34
joekrill 1 day ago 1 reply      
Is this form broken for anyone else? I'm getting CORS errors when it tries to submit to https://queue.fightforthefuture.org/action. That seems like a pretty big blunder, so I'm guessing maybe the site is just under heavy load?
35
ShirsenduK 1 day ago 1 reply      
In my hometown, Darjeeling (India), the Internet has been blocked since June 17, 2017 by the government to censor the citizens of the area. Media doesn't cover us well as it's a small town. How do we battle for the Internet? How can we drum up support?
36
gricardo99 1 day ago 0 replies      
Perhaps the battle is already being lost.

Anyone else getting this error trying to send a letter from the site?

"! There was an error submitting the form, please try again"

http://imgur.com/a/V60gh

37
mvanveen 1 day ago 0 replies      
Please also check out and share the video recording flow we've built at https://video.battleforthenet.com
38
Flemlord 1 day ago 1 reply      
reddit (US Alexa rank = 4) is showing a popup to all users that sends them to www.battleforthenet.com. It is about time a major player started leveraging their platform. Why aren't Google, HN, etc. doing this too?
39
acjohnson55 1 day ago 0 replies      
I assume this is why the title bar has changed? Curious, since there doesn't seem to be a definitive statement about that.
40
stephenitis 1 day ago 0 replies      
Not surprising... Yahoo.com which was bought by Verizon has no mention of net neutrality.

HEADLINE: Kim Kardashian Goes Braless in Tank Top With Gym Shorts and Heels: See the Unusual Look!

Trending Now: 1. Chris Hemsworth 2. Wiz Khalifa 3. John McCain 4. Joanna Krupa 5. Sean Hannity 6. Universal Studios 7. McGregor suit 8. Maps 9. Loan Depot 10. Justin Bourne

Imagine if this was the fastest homepage for millions of Verizon customers. head explodes

edit: They are at least highlighting the FBI Director hearing on the homepage. shrug.

41
elorant 1 day ago 0 replies      
Is there anything we who don't live in the US can do for you guys? I mean beyond spreading the word.
42
dec0dedab0de 1 day ago 0 replies      
I wish ARIN, and IANA would just blacklist any companies that actively work against net neutrality.
43
tboyd47 1 day ago 0 replies      
Tried submitting the form without a phone number, got an error.
44
OJFord 1 day ago 0 replies      
Slightly tangentially, it seems that today the only way to get to the front page, other than the 'back' button if applicable, is to edit the URL?
45
sharemywin 1 day ago 0 replies      
I'm all about the net neutrality.

But while we're at it how about some hardware neutrality.

And some data portability and control over who sees my information.

And maybe an API neutrality.

And how about letting the municipalities offer free wifi.

46
dep_b 1 day ago 0 replies      
Interestingly all this kind of stuff seems to happen in the 1984th year since Jesus died.
47
forgottenpass 1 day ago 0 replies      
Let me see if I have this right. The complaint against ISPs goes like this:

They established themselves as dominant middlemen and want to leverage that position to enable new revenue streams by putting their nose where it doesn't belong.

I'd have much more sympathy for belly-aching tech companies if they weren't all doing (or trying to do) the same goddamn thing.

48
shortnamed 1 day ago 11 replies      
love the blatant Americentrism of the site:

"This is a battle for the Internet's future." - just American internet's future

"Team Cable want to control & tax the Internet" - they will be able to control the global system in which the US is just a part of?

"If we lose net neutrality, the Internet will never be the same" - American internet, others will be fine

49
dis-sys 1 day ago 5 replies      
Last time I checked, there are 730 million Chinese users who mostly don't use _any_ US Internet services; their switches and servers are made/operated in China, mostly by Huawei and ZTE. It is also laughable to believe that US domestic policies are going to affect Chinese decision making.

Policy leader? Not after we Chinese declared independence from the monopoly of your US "lead" on the Internet.

50
throwaway2048 1 day ago 1 reply      
strange most of the top level comments are arguing against either net neutrality, or this campaign.

On a site that is otherwise extremely strongly for net neutrality.

Nothing suspicious about that...

3
How Discord Scaled Elixir to 5M Concurrent Users discordapp.com
782 points by b1naryth1ef  2 days ago   245 comments top 39
1
iagooar 2 days ago 7 replies      
This writeup make me even more convinced of Elixir becoming one of the large players when it comes to hugely scaling applications.

If there is one thing I truly love about Elixir, it is the easiness of getting started, while standing on the shoulders of a giant that is the Erlang VM. You can start by building a simple, not very demanding application with it, yet once you hit a large scale, there is plenty of battle-proven tools to save you massive headaches and costly rewrites.

Still, I feel, that using Elixir is, today, still a large bet. You need to convince your colleagues as much as your bosses / customers to take the risk. But you can rest assured it will not fail you as you need to push it to the next level.

Nothing comes for free, and at the right scale, even the Erlang VM is not a silver bullet and will require your engineering team to invest their talent, time and effort to fine tune it. Yet, once you dig deep enough into it, you'll find plenty of ways to solve your problem at a lower cost as compared to other solutions.

I see a bright future for Elixir, and a breath of fresh air for Erlang. It's such a great time to be alive!

2
jakebasile 2 days ago 4 replies      
I'm continually impressed with Discord and their technical blogs contribute to my respect for them. I use it in both my personal life (I run a small server for online friends, plus large game centric servers) and my professional life (instead of Slack). It's a delight to use, the voice chat is extremely high quality, text chat is fast and searchable, and notifications actually work. Discord has become the de facto place for many gaming communities to organize which is a big deal considering how discriminating and exacting PC gamers can be.

My only concern is their long term viability and I don't just mean money wise. I'm concerned they'll have to sacrifice the user experience to either achieve sustainability or consent to a buyout by a larger company that only wants the users and brand. I hope I'm wrong, and I bought a year of Nitro to do my part.

3
Cieplak 2 days ago 8 replies      
I know that the JVM is a modern marvel of software engineering, so I'm always surprised when my Erlang apps consume less than 10MB of RAM, start up nearly instantaneously, respond to HTTP requests in less than 10ms and run forever, while my Java apps take 2 minutes to start up, have several-hundred-millisecond HTTP response latency and hoard memory. Granted, it's more an issue with Spring than with Java, and Parallel Universe's Quasar is basically OTP for Java, so I know logically that Java is basically a superset of Erlang at this point, but perhaps there's an element of "less is more" going on here.

Also, we're looking for Erlang folks with payments experience.

cGF0cmljaytobkBmaW5peHBheW1lbnRzLmNvbQ==

4
rdtsc 2 days ago 3 replies      
Good stuff. Erlang VM FTW!

> mochiglobal, a module that exploits a feature of the VM: if Erlang sees a function that always returns the same constant data, it puts that data into a read-only shared heap that processes can access without copying the data

There is a nice new OTP 20.0 optimization - now the value doesn't get copied even on message sends on the local node.

Jesper L. Andersen (jlouis) talked about it in his blog: https://medium.com/@jlouis666/an-erlang-otp-20-0-optimizatio...
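For readers who haven't seen the trick, here is a minimal sketch of the mochiglobal/FastGlobal idea in Elixir. The ConstCache module and its put/get names are my own illustration, not Discord's or mochiweb's actual API; the point is just that a term returned as a literal from a compiled function lives in the BEAM's constant pool and is read without per-process copying:

    defmodule ConstCache do
      # Store `value` by compiling a throwaway module that returns it as a literal.
      # Writes are expensive (a compile); reads are copy-free function calls.
      def put(key, value) do
        module = Module.concat(__MODULE__, key)
        ast = quote do
          def get, do: unquote(Macro.escape(value))
        end
        # Drop any previously compiled version before redefining the module.
        :code.purge(module)
        :code.delete(module)
        Module.create(module, ast, Macro.Env.location(__ENV__))
        :ok
      end

      # Read the constant; this is just a call into the generated module.
      def get(key, default \\ nil) do
        module = Module.concat(__MODULE__, key)
        if Code.ensure_loaded?(module) and function_exported?(module, :get, 0) do
          module.get()
        else
          default
        end
      end
    end

    # Usage sketch:
    #   ConstCache.put(:ring, %{nodes: [:a, :b, :c]})
    #   ConstCache.get(:ring)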

> After some research we stumbled upon :ets.update_counter/4

Might not help in this case, but 20.0 adds select_replace so you can do a full-on CAS (compare-and-exchange) pattern (http://erlang.org/doc/man/ets.html#select_replace-2). So something like acquiring a lock would be much easier to do.
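To make those two ETS calls concrete, here is a small sketch (mine, not Discord's code); the :counters table and the :sends/:lock keys are made-up examples:

    table = :ets.new(:counters, [:set, :public, {:write_concurrency, true}])

    # update_counter/4 atomically bumps position 2 of the {key, count} tuple,
    # inserting {:sends, 0} first if the key does not exist yet.
    new_count = :ets.update_counter(table, :sends, {2, 1}, {:sends, 0})

    # select_replace/2 (OTP 20+) swaps an object only when it matches the given
    # pattern, which behaves like a compare-and-exchange. Here we "take a lock"
    # by replacing {:lock, :free} with {:lock, :held}; the call returns the
    # number of objects replaced, so 1 means acquired and 0 means someone else holds it.
    :ets.insert_new(table, {:lock, :free})
    acquired? = :ets.select_replace(table, [{{:lock, :free}, [], [{{:lock, :held}}]}]) == 1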

> We found that the wall clock time of a single send/2 call could range from 30µs to 70µs due to Erlang de-scheduling the calling process.

There are a few tricks the VM uses there and it's pretty configurable.

For example, sending to a process with a long message queue will add a bit of backpressure to the sender and un-schedule it.

There are tons of configuration settings for the scheduler. There is an option to bind schedulers to physical cores to reduce the chance of scheduler threads jumping around between cores: http://erlang.org/doc/man/erl.html#+sbt Sometimes it helps, sometimes it doesn't.
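As a hedged illustration (the values are examples, not recommendations from the post), the binding and scheduler-count flags go on the emulator command line or in a release's vm.args:

    ## Bind scheduler threads to hardware threads (default-bind spread):
    +sbt db

    ## Make the scheduler count explicit (total:online):
    +S 8:8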

Another general trick is to build the VM with the lcnt feature. This will add performance counters for locks / semaphores in the VM. So then can check for the hotspots and know where to optimize:

http://erlang.org/doc/man/lcnt.html

5
mbesto 2 days ago 1 reply      
This is one of those few instances where getting the technology choice right actually has an impact on cost of operations, service reliability, and overall experience of a product. For like 80% of all the other cases, it doesn't matter what you use as long as your devs are comfortable with it.
6
jlouis 2 days ago 1 reply      
A fun idea is to do away with the "guild" servers in the architecture and simply run message passes from the websocket processes over the Manifold system. A little bit of ETS work should make this doable, and now an eager sending process is paying for the work itself, slowing it down. This is exactly the behavior you want. If you are a bit more sinister, you also format most of the message in the sending process and make it into a binary. This ensures data is passed by reference and not copied in the system. It ought to bring message sends down to about funcall overhead if done right.
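A rough sketch of that "format once, send a binary" idea (my illustration, not code from Discord or the IRCd mentioned); binaries larger than 64 bytes are reference-counted and shared off-heap, so the fan-out passes a reference instead of copying the payload into every mailbox:

    defmodule FanOut do
      def broadcast(pids, message) do
        # Pay the formatting cost once, in the sending process.
        payload = :erlang.term_to_binary(message)

        # Each send now carries a refc binary, not a copy of the whole term.
        Enum.each(pids, fn pid -> send(pid, {:broadcast, payload}) end)
      end
    end

    # A receiver decodes only when it actually needs the term:
    #   receive do
    #     {:broadcast, payload} -> :erlang.binary_to_term(payload)
    #   end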

It is probably not a solution for current Discord as they rely on linearizability, but I toyed with building an IRCd in Erlang years ago, and there we managed to avoid having a process per channel in the system via the above trick.

As for the "hoops you have to jump through", it is usually true in any language. When a system experiences pressure, how easy it is to deal with that pressure is usually what matters. Other languages are "phase shifts" and while certain things become simpler in that language, other things become much harder to pull off.

7
danso 2 days ago 1 reply      
According to Wikipedia, Discord's initial release was March 2015. Elixir hit 1.0 in September 2014 [0]. That's impressively early for adoption of a language for prototyping and for production.

[0] https://github.com/elixir-lang/elixir/releases/tag/v1.0.0

8
didibus 2 days ago 5 replies      
So, at this point, every language has been scaled to very high concurrent loads. What does that tell us? Sounds to me like languages don't matter for scale. In fact, that makes sense: scale is all about parallel processes, and horizontally distributing work can be achieved in any language. Scale is not like performance, where if you need it, you are restricted to a few languages only.

That's why I'd like to hear more about productivity and ease now. Is it faster and more fun to scale things in certain languages than others? BEAM is modeled on actors, and offers no alternatives. Java offers all sorts of models, including actors, but if actors are currently the most fun and productive way to scale, that doesn't matter.

Anyways, learning how teams scaled is interesting, but it's clear to me now that languages aren't limiting factors to scale.

9
jmcgough 2 days ago 0 replies      
Great to see more posts like this promoting Elixir. I've been really enjoying the language and how much power it gets from BEAM.

Hopefully more companies see success stories like this and take the plunge - I'm working on an Elixir project right now at my startup and am loving it.

10
ShaneWilton 2 days ago 1 reply      
Thanks for putting this writeup together! I use Elixir and Erlang every day at work, and the Discord blog has been incredibly useful in terms of pointing me towards the right tooling when I run into a weird performance bottleneck.

FastGlobal in particular looks like it nicely solves a problem I've manually had to work around in the past. I'll probably be pulling that into our codebase soon.

11
joonoro 2 days ago 1 reply      
Elixir was one of the reasons I started using Discord in the first place. I figured if they were smart enough to use Elixir for a program like this then they would probably have a bright future ahead of them.

In practice, Discord hasn't been completely reliable for my group. Lately messages have been dropping out or being sent multiple times. Voice gets messed up (robot voice) at least a couple times per week and we have to switch servers to make it work again. A few times a person's voice connection has stopped working completely for several minutes and there's nothing we can do about it.

I don't know if these problems have anything to do with the Elixir backend or the server.

EDIT: Grammar

12
majidazimi 1 day ago 3 replies      
It seems awkward to me. What if the Erlang/OTP team cannot guarantee message serialization compatibility across a major release? How are you going to upgrade a cluster one node at a time? What if you want to communicate with other platforms? How are you going to modify the distribution protocol on a running cluster without downtime?

As soon as you introduce a standard message format, all the nice features such as built-in distribution, automatic reconnect, ... are almost useless. You have to do all of these manually. Maybe I'm missing something. Correct me if I'm wrong.

For fast time to market it seems like quite a nice approach. But for a long-running, maintainable back-end it's not enough.

13
_ar7 2 days ago 0 replies      
Really liked the blog post. Elixir and the capabilities of the BEAM VM seem really awesome, but I can't find an excuse to use them in my day-to-day anywhere.
14
StreamBright 1 day ago 0 replies      
WhatsApp's story is somewhat similar. A relevant read on this subject.

http://www.erlang-factory.com/upload/presentations/558/efsf2...

15
ConanRus 2 days ago 1 reply      
I do not see anything Elixir-specific there; it is all basically Erlang/Erlang VM/OTP stuff. When you use Erlang, you think in terms of actors/processes and message passing, and this is (IMHO) a natural way of thinking about distributed systems. So this article is a perfect example of how simple solutions can solve scalability issues if you're using the right platform.
16
concatime 8 hours ago 0 replies      
Sad to see some people taking raw and insignificant benchmarks to evaluate a language[0].

[0] https://news.ycombinator.com/item?id=14479757

17
brian_herman 2 days ago 0 replies      
I love Discord's posts; they are very informative and easy to read.
18
etblg 1 day ago 0 replies      
Reading posts like this about widely distributed applications always gets me interested in it as a career path. Currently I'm working as a front-end dev with moderate non-distributed back-end experience. How would someone in my situation, with no distributed back-end experience, break into a position working on something like Discord?
19
OOPMan 1 day ago 1 reply      
5 million concurrent users is great and all, but it would be nice if Discord could work out how to use WebSockets without duplicating sent messages.

This seems to happen a lot when you are switching between wireless networks (e.g. my home router has 2GHz and 5GHz wireless networks) or when you're on mobile (it seems to happen regularly, even if you're not moving around).

It's terribly annoying, though, and makes using the app via the mobile client very tedious.

20
renaudg 1 day ago 1 reply      
It looks like they have built an interesting, robust and scalable system which is perfectly tailored to their needs.

If one didn't want to build all of that in house though, is there anything they've described here that an off-the-shelf system like https://socketcluster.io doesn't provide?

21
sriram_malhar 1 day ago 1 reply      
I really like Elixir the language, but find myself strangely hamstrung by the _mix_ tool. There is only an introduction to the tool, not a reference to all its bells and whistles. I'm not looking for extra bells and whistles, just simple stuff like pulling in a module from GitHub and incorporating it. Is there such documentation? How do you crack Mix?
22
omeid2 1 day ago 0 replies      
I think while this is great, it is good to remember that your current tech stack may be just fine! After all, Discord started with MongoDB[0].

[0] https://blog.discordapp.com/how-discord-stores-billions-of-m...

23
alberth 2 days ago 2 replies      
Is there any update on BEAMJIT?

It was super promising 3 or so years ago. But I haven't seen an update.

Erlang is amazing in numerous ways but raw performance is not one of them. BEAMJIT is a project to address exactly that.

https://www.sics.se/projects/beamjit

24
ramchip 2 days ago 1 reply      
Very interesting article! One thing I'm curious about is how to ensure a given guild's process only runs on one node at a time, and the ring is consistent between nodes.

Do you use an external system like zookeeper? Or do you have very reliable networking and consider netsplits a tolerable risk?

25
andy_ppp 1 day ago 1 reply      
Just as an aside, how would people build something like this if they were to use, say, Python and try to scale to these sorts of user levels? Has anyone succeeded? I'd say it would be quite a struggle without some seriously clever work!
26
neya 1 day ago 0 replies      
Hi community, let me share my experience with you. I'm a hardcore Rails guy and I've been advocating and teaching Rails to the community for years.

My workflow for trying out a new language involves using the language for a small side project and gradually trying to scale it up. So here's my summary, my experience of all the languages so far:

Scala - It's a vast, academic language (the official book is ~700 pages) with multiple ways of doing things, and its attractiveness for me was the JVM. It's proven, robust and highly scalable. However, the language was not quite easy to understand, and the frameworks that I've tried (Play 2, Lift) weren't as easy to transition to for a Rails developer like me.

Nevertheless, I did build a simple calendar application, but it took me 2 months to learn the language and build it.

GoLang - This was my next bet. Although I didn't give up on Scala completely (I know it has its uses), I wanted something simple. I used Go and had the same experience as I had when I used C++. It's a fine language, but, for a simple language, I had to fight a lot with configuration to get it working for me (for example, it has this crazy concept of GOPATH where your project should reside, and if your project isn't there it'll keep complaining). Nevertheless, I built my own (simple) Rails clone in Go and realized this isn't what I was looking for. It took me about a month to conquer the language and build my (simple) side project.

Elixir - Finally, I heard of Elixir on multiple HN Rails release threads and decided to give it a go. I started off with Phoenix. The transition was definitely way smoother coming from Rails, especially considering a founding member of this language was a Rails dev himself (the author of the "devise" gem). At first some concepts seemed different (like piping), but once I got used to it, for me there was no looking back.

All was fine until they released Phoenix 1.3, where they introduced the concept of contexts and (re)introduced umbrella applications. Basically they encourage you to break your application into smaller applications by business function (similar to microservices), except that you can do this however you like (unopinionated). For example, I broke down my application by business units (Finance, Marketing, etc.). This forced me to re-think my application in a way I never would have thought of, and by this time I had finished reading all 3 popular books on this topic (Domain Driven Design). I loved the fact that Elixir's design choices are really well suited to DDD. If you're new to DDD I suggest you give it a shot; it really can force you to re-think the way you develop software.

By the end of two weeks after being introduced to Elixir, I had picked up the language. In a month and a half, I built a complete Salesforce clone just working on the weekends. And this includes even the UI. And I love how my application is always blazing fast, picks up errors even before it compiles, and warns me if I'm not using a variable I defined somewhere.

P.S there IS a small learning curve involved if you're starting out fresh:

1) If you're used to the Rails asset pipeline, you'll need to learn some new tools like Brunch / Webpack / etc.
2) Understand contexts & DDD (optional) if you want to better architect your application.
3) There is no return statement in Elixir!

As a Ruby developer, here are my thoughts:

1. So, will I be developing with Rails again? Probably yes, for simpler applications / API servers.
2. Is Ruby dying? No. In fact, I can't wait for Ruby 3.

Some drawbacks of Elixir:
1. Relatively new, so sometimes you'll be on your own, and that's okay.
2. Fewer libraries compared to the Ruby ecosystem. But you can easily write your own.
3. Fewer developers, but it should be fairly easy to onboard Ruby developers.

Cheers.

27
agentgt 1 day ago 0 replies      
I realize this is off topic but how does Discord make money? I can't figure out their biz model (I'm not a gamer so I didn't even know about them).
28
myth_drannon 2 days ago 1 reply      
It's interesting how on StackOverflow Jobs Elixir knowledge is required more often than Erlang.

http://www.reallyhyped.com/?keywords=erlang%2Celixir

29
jaequery 2 days ago 6 replies      
Does anyone know if Phoenix/Elixir has something similar to Ruby's better_errors gem? I see Phoenix has a built-in error stack trace page which looks like a clone of better_errors, but it doesn't have the real-time console inside of it.

Also, I wish they had an ORM like Sequel. These two are really what is holding me back from going all in on Elixir. Would anyone care to comment on this?

30
zitterbewegung 2 days ago 1 reply      
Compared to Slack, Discord is a much better service for large groups. Facebook uses it for React.
31
grantwu 1 day ago 0 replies      
"Discord clients depend on linearizability of events"

Could this possibly be the cause of the message reordering and dropping that I experience when I'm on a spotty connection?

32
dandare 1 day ago 1 reply      
What is the business model behind Discord? They boast about being free multiple times, how do they make money? Or plan to make money?
33
framp 2 days ago 0 replies      
Really lovely post!

I wonder how Cloud Haskell would fare in such a scenario

34
brightball 2 days ago 1 reply      
I so appreciate write-ups that get into the details of microsecond-size performance gains at that scale. It's a huge help for the community.
35
KrishnaHarish 1 day ago 0 replies      
Scale!
36
KrishnaHarish 1 day ago 0 replies      
What are Discord and Elixir?
37
marlokk 2 days ago 0 replies      
"How Discord Scaled Elixir to 5M Concurrent Users"

click link

[Error 504 Gateway time-out]

only on Hacker News

38
orliesaurus 2 days ago 1 reply      
Unlike Discord's design team who seem to just copy all of Slack's designs and assets, the Engineering team seems to have their shit together, it is delightful to read your Elixir blogposts. Good job!
39
khanan 2 days ago 1 reply      
Problem is that Discord sucks since it does not have a dedicated server. Sorry, move along.
4
ECMAScript 2017 Language Specification ecma-international.org
590 points by samerbuna  2 days ago   238 comments top 27
1
thomasfoster96 2 days ago 4 replies      
Proposals [0] that made it into ES8 (what's new):

* Object.values/Object.entries - https://github.com/tc39/proposal-object-values-entries

* String padding - https://github.com/tc39/proposal-string-pad-start-end

* Object.getOwnPropertyDescriptors - https://github.com/ljharb/proposal-object-getownpropertydesc...

* Trailing commas - https://github.com/tc39/proposal-trailing-function-commas

* Async functions - https://github.com/tc39/ecmascript-asyncawait

* Shared memory and atomics - https://github.com/tc39/ecmascript_sharedmem

The first five have been available via Babel and/or polyfills for ~18 months or so, so they've been used for a while now; a quick example of a few of them is sketched below.

[0] https://github.com/tc39/proposals/blob/master/finished-propo...
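
For anyone who hasn't tried them yet, here is a minimal sketch of a few of these in plain JavaScript (assuming a runtime or Babel setup that already supports ES2017):

  // Object.values / Object.entries
  const scores = { alice: 3, bob: 5 };
  console.log(Object.values(scores));                 // [3, 5]
  for (const [name, score] of Object.entries(scores)) {
    console.log(`${name}: ${score}`);
  }

  // String padding
  console.log('7'.padStart(3, '0'));                  // "007"
  console.log('7'.padEnd(3, '.'));                    // "7.."

  // Async functions
  async function fetchJson(url) {
    const res = await fetch(url);                     // assumes a fetch implementation is available
    return res.json();
  }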

2
callumlocke 2 days ago 3 replies      
This is mostly symbolic. The annual ECMAScript 'editions' aren't very significant now except as a talking point.

What matters is the ongoing standardisation process. New JS features are proposed, then graduate through four stages. Once at stage four, they are "done" and guaranteed to be in the next annual ES edition write-up. Engines can confidently implement features as soon as they hit stage 4, which can happen at any time of year.

For example, async functions just missed the ES2016 boat. They reached stage 4 last July [1]. So they're officially part of ES2017 but they've been "done" for almost a year, and landed in Chrome and Node stable quite a while ago.

[1] https://ecmascript-daily.github.io/2016/07/29/move-async-fun...

3
HugoDaniel 2 days ago 5 replies      
I would really love to see an object map function. I know it is easy to implement, but since they seem to be gaining ranks through syntax sugar, why not just have an obj.map((prop, value) => ...)? :)
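
In the meantime a tiny stand-in is easy to build on top of Object.entries (ES2017). mapObject below is just a hypothetical helper name, not anything from the spec:

  // Map over an object's (key, value) pairs, returning a new object.
  function mapObject(obj, fn) {
    return Object.entries(obj).reduce((acc, [key, value]) => {
      acc[key] = fn(key, value);
      return acc;
    }, {});
  }

  console.log(mapObject({ a: 1, b: 2 }, (prop, value) => value * 10)); // { a: 10, b: 20 }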
4
ihsw2 2 days ago 2 replies      
Notably, with shared memory and atomics, pthreads support is on the horizon.

https://kripken.github.io/emscripten-site/docs/porting/pthre...

Granted it may be limited to consumption via Emscripten, it is nevertheless now within the realm of possibility.

For those who cannot grok the gravity of this -- proper concurrent/parallel execution just got a lot closer for those targeting the browser.

5
flavio81 2 days ago 2 replies      
What I wish ECMAScript had was true support for number types other than the default 64-bit float. I can use 32- and 64-bit integers using "asm.js", but this introduces other complications of its own -- basically, having to program in a much lower-level language.

It would be nice if ECMAScript could give us a middle ground -- the ability to use 32/64-bit integers without having to go all the way down to asm.js or wasm.
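
For the 32-bit case, the asm.js-style coercions already work in ordinary JavaScript without dropping down to asm.js proper; true 64-bit integers would still need something like the BigInt proposal that was working its way through TC39 at the time, if I recall correctly. A rough sketch of the 32-bit tricks:

  // "x | 0" truncates to a signed 32-bit integer; engines also treat it as a type hint.
  function add32(a, b) {
    a = a | 0;
    b = b | 0;
    return (a + b) | 0;                // wraps around at 32 bits
  }

  console.log(add32(2147483647, 1));   // -2147483648 (32-bit overflow)
  console.log(Math.fround(0.1));       // 0.10000000149011612 (nearest 32-bit float)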

6
pier25 1 day ago 2 replies      
In the last couple of years we've seen a small number of significant improvements like async/await but mostly small tepid improvements like string padding, array.map(), etc. It's like TC39 are simply polishing JS.

I'd like to see TC39 tackling the big problems of JS like the lack of static type checking. I'm tired of looking at a method and having to figure out if it is expecting a string, or an object.

We had ECMAScript 4 about 10 years ago with plenty of great features, but TC39 killed it. And yeah, it probably made sense, since the browser vendor landscape was very different back then. Today it would be possible to implement significant changes to the language, much like the WebAssembly initiative.

7
pi-rat 2 days ago 5 replies      
Really hate the naming for JS standards.. ES2017, ES8, ECMA-262. Way to confuse people :/
8
baron816 2 days ago 0 replies      
Regardless of what gets included in the spec, I hope people think critically about what to use and what not to use before they jump in. Just because something is shiny and new in JS, it doesn't mean you have to use it or that it's some sort of "best practice."
9
43224gg252 2 days ago 7 replies      
Can anyone recommend a good book or guide for someone who knows pre-ES6 javascript but wants to learn all the latest ES6+ features in depth?
10
pgl 2 days ago 2 replies      
Here's what's in it: https://github.com/tc39/proposals/blob/master/finished-propo...

And some interesting tweets by Kent C. Dodds: https://twitter.com/kentcdodds/status/880121426824630273

Edit: fixed KCD's name. Edit #2: No, really.

11
drinchev 2 days ago 1 reply      
For anyone wondering what NodeJS's support of ES8 looks like:

Everything is supported, except "Shared memory and atomics"

[1] http://node.green

12
speg 2 days ago 1 reply      
Is there a "What's new" section?
13
correctsir 1 day ago 0 replies      
I've been looking at the stage 2 and 3 proposals. I have a difficult time finding use for any of them except for Object spread/rest. The stage 4 template string proposal allowing invalid \u and \x sequences seems like a really bad idea to me that would inadvertently introduce programmer errors. I do hope the ECMAScript standardization folks will raise the barrier to entry for many of these questionable new features that create a maintenance burden for browsers and ES tooling and a cognitive burden on programmers. It was possible to understand 100% of ES5. I can't say the same thing for its successors. I think there should be a freeze on new features until all the browser vendors fully implement ES6 import and export.
14
rpedela 2 days ago 2 replies      
Has there been any progress on supporting 64-bit integers?
15
jadbox 2 days ago 1 reply      
I wish this-binding sugar would get promoted into stage 1.
16
gregjw 2 days ago 1 reply      
I should really learn ES6
17
ascom 2 days ago 1 reply      
Looks like ECMA's site is overloaded. Here's a Wayback Machine link for the lazy: https://web.archive.org/web/20170711055957/https://www.ecma-...
18
wilgertvelinga 2 days ago 2 replies      
Really interesting how bad the only JavaScript code used on their own site is: https://www.ecma-international.org/js/loadImg.js
19
emehrkay 2 days ago 2 replies      
I'd like to be able to capture object modifications like Python's magic __getattr__ __setattr__ __delattr__ and calling methods that do not exist on objects. In the meantime I am writing a get, set, delete method on my object and using those instead
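
ES2015's Proxy already covers a lot of this -- its get/set/deleteProperty traps fire even for properties that don't exist, which is roughly the __getattr__/__setattr__/__delattr__ territory. A minimal sketch:

  const magic = new Proxy({}, {
    get(target, prop) {
      // Called on every property read, including missing ones; return a function
      // here if you want "methods that do not exist" to be callable.
      return prop in target ? target[prop] : `no such property: ${String(prop)}`;
    },
    set(target, prop, value) {
      console.log(`setting ${String(prop)} = ${value}`);
      target[prop] = value;
      return true;
    },
    deleteProperty(target, prop) {
      console.log(`deleting ${String(prop)}`);
      delete target[prop];
      return true;
    },
  });

  magic.x = 1;            // logs "setting x = 1"
  console.log(magic.y);   // "no such property: y"
  delete magic.x;         // logs "deleting x"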
20
espadrine 2 days ago 0 replies      
I made a short sum-up of changes in this specification here: http://espadrine.github.io/New-In-A-Spec/es2017/
21
lukasm 2 days ago 1 reply      
What is up with decorators?
22
komali2 2 days ago 0 replies      
>AWB: Alternatively we could add this to a standard Dict module.

>BT: Assuming we get standard modules?

>AWB: We'll get them.

lol

23
j0e1 2 days ago 1 reply      
> Kindly note that the normative copy is the HTML version;

Am I the only one who finds this ironic..

24
idibidiart 2 days ago 0 replies      
Wait, so async generators and web streams are 2018 or 2016?
25
Swizec 2 days ago 3 replies      
Time to update https://es6cheatsheet.com

What's the feature you're most excited about?

26
bitL 2 days ago 2 replies      
Heh, maybe JS will finally become usable just before WebAssembly takes off, rendering it obsolete :-D
27
cies 2 days ago 2 replies      
Nice 90s style website ECMA!
5
Cloudflare's fight with a patent troll could alter the game techcrunch.com
709 points by Stanleyc23  2 days ago   268 comments top 32
1
jgrahamc 2 days ago 3 replies      
More detail on what we are doing from three blog posts:

Standing Up to a Dangerous New Breed of Patent Troll: https://blog.cloudflare.com/standing-up-to-a-dangerous-new-b...

Project Jengo: https://blog.cloudflare.com/project-jengo/

Patent Troll Battle Update: Doubling Down on Project Jengo: https://blog.cloudflare.com/patent-troll-battle-update-doubl...

2
JumpCrisscross 2 days ago 4 replies      
I've used Latham & Watkins. Just made a call to let a partner there know what I think about his firm's alumna and how it colors my opinion of him and his firm.

Encourage everyone to check with your firm's General Counsel about this. If you use Latham, or Kirkland or Weil, encourage your GC to reach out and make your views heard. It's despicable that these lawyers are harassing their firms' former and potential clients.

3
notyourday 2 days ago 3 replies      
It is all about finding a correct pressure point.

Long time ago certain Philadelphia area law firms decided to represent vegan protesters that created a major mess in a couple of high end restaurants.

A certain flamboyant owner of one of the restaurants targeted decided to have a good time applying his version of asymmetric warfare. The next time partners from those law firms showed up to wine and dine their clients in the establishment, the establishment(s) politely refused them service, to the utter horror of the lawyers.

Needless to say, the foie gras won...

[Edit: spelling]

4
tracker1 2 days ago 3 replies      
I think that this is absolutely brilliant. I've been against the patenting of generalistic ideas and basic processes for a very long time. Nothing in software should really be patentable: unless there is a concrete implementation of an invention, it's not an invention, it's a set of instructions.

Let software work under trade secrets, but not patents. Anyone can implement something they think through. It's usually a clear example of a need. That said, I think the types of patent trolling law firms such as this deserve every bit of backlash against them that they get.

5
avodonosov 2 days ago 6 replies      
It was a late summer night when I noticed that article on HN. I immediately noticed it's organized like a novel - this popular, lame style which has often annoyed me lately:

 Matthew Prince knew what was coming. The CEO of Cloudflare, an internet security company and content delivery network in San Francisco, was behind his desk when the emails began to trickle in ...
Was he really behind his desk?

Hesitated a little before posting - am I trying to self-assert by deriding others? But this "novel" article style is some new fashion / cliche which might be interesting to discuss. Let's see what others think.

6
siliconc0w 2 days ago 2 replies      
I'm not a fan of the argument that if Blackbird weren't an NPE it'd be okay, because Cloudflare could then aim its 150-strong patent portfolio cannon back at them. It's basically saying incumbents like Cloudflare don't really want to fix the system; they want to keep the untenable 'cold war' status quo which protects them but burdens new entrants.
7
oskarth 2 days ago 5 replies      
> So-called non-practicing entities or holders of a patent for a process or product that they don't plan to develop often use them to sue companies that would sooner settle rather than pay what can add up to $1 million by the time a case reaches a courtroom.

Why on earth aren't non-practicing entity patent lawsuits outlawed? Seems like a no-brainer, and I can't imagine these firms being big enough to have any seriously lobbying power.

8
mabbo 2 days ago 2 replies      
> [Is Blackbird] doing anything that is illegal or unethical? continues Cheng. For the most part, it's unethical. But it's probably not illegal.

If it's not illegal, more work needs to be done to make it illegal. Inventors always have avenues, more so today than ever before.

9
FussyZeus 2 days ago 3 replies      
I've never heard a good argument against this so I'll say it here: require that the plaintiff in these cases show demonstrable, actual, and quantifiable loss caused by the activity of the defendant. It seems like such a no-brainer that a business suing for damage to its business prospects after someone stole their idea would have to actually show how it was damaged. Even allowing very flimsy evidence would do a lot to dissuade most trolls, because as every article points out, they don't make anything. And if they don't make or sell a product, then patent or not, they haven't lost anything or been damaged in any way.
10
corobo 1 day ago 0 replies      
I'm hoping their fight actually leads to a defeat rather than a submission. I have faith that Cloudflare will see this through but I also had faith that Carolla would too.

https://www.eff.org/deeplinks/2014/08/good-bad-and-ugly-adam...

11
mgleason_3 2 days ago 2 replies      
We need to get rid of software patents. Patents were created to encourage innovation. Software patents simply reward the first person who patents what is almost always an obvious next step. That's not innovation.
12
tragomaskhalos 1 day ago 0 replies      
This reminds me of an altercation in the street that my neighbour reported overhearing some years ago:

Aggressive Woman: You need to watch your step, my husband is a criminal lawyer

Woman she was trying to intimidate: (deadpans) Aren't they all ?

13
ovi256 2 days ago 4 replies      
I've noticed a TechCrunch comment that makes this fight about software patents and states that forbidding them would be a good solution. I think that's a very wrong view to take. The software patent fight is worth fighting, but do not conflate the two issues. Abuse by patent trolls or non-practicing entities can happen even without software patents.

The law patch that shuts down patent trolls will have no effect on software patents, and vice-versa.

14
shmerl 2 days ago 2 replies      
Someone should figure out a way to put these extortionists in prison for running a protection racket.
15
anonjuly12 1 day ago 0 replies      
> It's for this reason that Prince sees Cloudflare's primary mission as figuring out how to increase Blackbird's costs. Explains Prince, We thought, if it's asymmetric, because it's so much cheaper for Blackbird to sue than for a company to defend itself, how can we make it more symmetric? And every minute that they spend having to defend themselves somewhere else is a minute they aren't suing us or someone else.

They should take it a step further and apply the Thiel strategy of finding people with grievances against the founders of the patent troll and support individual lawsuits against them.

16
drtillberg 1 day ago 0 replies      
This is a dysfunction in the patent and legal processes that cannot be fixed by even more dysfunctional tactics deployed against the NPE. The rules against champerty (buying a cause of action) have been relaxed considerably, to the extent in many jurisdictions of being a dead letter, and the litigation financing industry seems to have a better sound bite.

At least half of the problem is the "American Rule" of rarely shifting legal fees, which if you dig a bit you will find is of recent vintage. Back in time, for example in Massachusetts, there actually is a law for shifting legal fees as costs as a matter of course; the catch is that the fee is very low (even at the time it was enacted) of about $2.50 per case, which partly reflects inflation and partly antagonism toward legal fees.

I wonder whether a compromise solution would be to require a deposit for costs of a percentage of the demand for recovery like 2.5% of $34mm, which post-suit you could figure how to divvy up. That would make the demand more meaningful, and provide a tangible incentive to the plaintiff to think a little harder about pricing low-probability lottery-ticket-type litigation.

17
kelukelugames 2 days ago 1 reply      
I'm in tech but not in the valley. How accurate is HBO's representation of patent trolls?
18
unityByFreedom 2 days ago 0 replies      
> Blackbird is a new, especially dangerous breed of patent troll... Blackbird combines both a law firm and intellectual property rights holder into a single entity. In doing so, they remove legal fees from their cost structure and can bring lawsuits of potentially dubious merit without having to bear any meaningful cost

That's not new. It's exactly what Intellectual Ventures was (or is?) doing.

19
avodonosov 2 days ago 0 replies      
I've read the patent. But what part of CloudFlare's services does it claim to cover?

Also, the patent applies the same way to almost any proxy server (ICAP and similar https://en.wikipedia.org/wiki/Internet_Content_Adaptation_Pr...)

20
bluejekyll 1 day ago 0 replies      
Something needs to give on this stuff. It's probably going to be hard to get a significant change done, such as getting rid of software patents (following from no patents on Math).

I've wondered if one way to chip away at them would be to make patents non-transferable. This would preserve the intent, to protect the inventor's R&D costs, but not allow the patents to be exploited by trolls. This would have the effect of devaluing patents themselves, but it's not clear that patents were ever intended to carry direct value; rather, they exist to grant temporary monopolies for the inventor to earn back the investment.

21
fhrow4484 2 days ago 1 reply      
What is the state of "anti-patent troll" laws in different states? I know, for instance, Washington state has had a law like this in effect since July 2015 [1][2]. What is it like in other states, specifically California?

[1] http://www.atg.wa.gov/news/news-releases/attorney-general-s-...

[2] http://app.leg.wa.gov/RCW/default.aspx?cite=19.350&full=true

22
redm 2 days ago 0 replies      
It would be great if the "game" was really altered but I've heard that statement and hope many times over the last 10 years. While there has been some progress, patent trolling continues. Here's hoping...
23
bluesign 1 day ago 0 replies      
Tbh I don't think there is a practical solution for patent trolls.

Patents are basically assets, and they are transferable.

Making them non-transferable is not a solution at all. Basically, law firms can represent patent owners.

The system needs different validity periods for patents, which should be set after an evaluation and can be challenged in the courts.

Putting all patents in the same basket is plain stupid.

24
arikrak 2 days ago 0 replies      
Businesses usually settle rather than fight patent trolls, but I wonder if fighting is worth it if it can deter others from suing them in the future? I guess it depends somewhat on the outcome of the case.
25
SaturateDK 2 days ago 0 replies      
This is great, I guess I'm going "Prior art searching" right away.
26
draw_down 2 days ago 0 replies      
Unfortunately, I think this is written in a way that makes it hard to understand what exactly Cloudflare is doing against the troll. They're crowdsourcing prior art and petitioning the USPTO?
27
avodonosov 2 days ago 0 replies      
Can the Decorator design pattern be considered a prior art?
28
y0ssar1an 1 day ago 0 replies      
Go Cloudflare Go!
29
danschumann 2 days ago 0 replies      
Can I create 5 more HN accounts just to +1 this some more?
30
dsfyu404ed 2 days ago 1 reply      
31
subhrm 2 days ago 1 reply      
Long live patents !
32
ivanbakel 2 days ago 3 replies      
I don't see anything game-changing about their approach. Fighting instead of settling should definitely be praised, but the only differences between this legal challenge and any of the previous ones are the result of recent changes in the law or the judiciary, which are beyond Cloudflare's control. Nothing suggests that patent-trolling itself as a "game" is going to shift or go away after this, and until that is made to happen, it's going to be as lucrative as ever.
6
The Facebook Algorithm Mom Problem boffosocko.com
697 points by pmlnr  1 day ago   293 comments top 38
1
ryanbrunner 1 day ago 20 replies      
I find a lot of sites feel like they're overtuning their recommendation engines, to the detriment of using the site. YouTube is particularly bad for this - given the years of history and somewhat regular viewing of the site, I feel like it should have a relatively good idea of what I'm interested in. Instead, the YouTube homepage seems myopically focused on the last 5-10 videos I watched.
2
danso 1 day ago 1 reply      
tl;dr, as I understand it: when family members Like your Facebook content in relatively quick succession, FB apparently interprets it as a signal that it is family-specific content. I didn't see any metrics, but this seems plausible.

I think I'm more of a fan of FB than the average web geek, probably because I used it at its phase of peak innocence (college years) and have since weaned myself off to the point of checking it on a less-than-weekly basis. I also almost never post professional work there, nor "friend" current colleagues. Moreover, I've actively avoided declaring familial relationships (though I have listed a few fake relationships just to screw with the algorithm). But wasn't the feature of making yourself a "brand page" and/or having "subscribers" (which don't count toward the 5,000 friend limit) supposed to mitigate this a bit? I guess I'm so used to keeping Facebook solely for personal content (and using Twitter for public-facing content) that I'm out of touch with the sharing mechanics. That, and anecdotal experience of how baby/wedding pics seems to be the most Liked/Shared content in my friend network.

4
siliconc0w 1 day ago 1 reply      
I feel like these are just shitty models. A good recommendation model would get features like "is_mom" and learn that "is_mom" is a shitty predictor of relevance.

Similarly with Amazon, products should have some sort of 'elasticity' score where it should learn that recommendations of inelastic products is a waste of screen real-estate. I mean, I doubt the model is giving a high % to most of those recommends - it's likely more a business/UX issue in that they've decided it's worth showing you low-probability recommends instead of a cleaner page (or something more useful).

YouTube, on the other hand, seems to be precision-tuned to get you to watch easy-to-digest crap. You consume the crap voraciously but are generally left unfulfilled. This is a more difficult problem where you're rewarding naive views rather than a more difficult to discern 'intrinsic value' metric. As a 'long term' business play, the model should probably weight more intellectually challenging content, just like fast food restaurants should probably figure out how to sell healthier food, because by peddling crap you're only meeting consumers' immediate needs, not their long term ones.

5
mrleinad 1 day ago 8 replies      
I'd like Facebook to have an option to see all posts, without filtering, just as they're posted. It's not hard, it's a simple UX, but it's just not there.
6
pmlnr 1 day ago 1 reply      
Yesterday I sent one of my friends a link to an old - 4.5 years old, from 2013 dec - entry he wrote as a Facebook note. There were 70+ likes, 30+ commenters and 110 comments on it.

He added a new comment yesterday - I only saw it, because I randomly decided to read through the comments.

Those who commented on it should have received a notification - well, in the end, 2 people got something.

This is how you effectively kill conversation - which baffles me, because keeping conversations running = engagement, which is supposed to be one of the end goals.

I get the "need" for a filter bubble, even though I'd simply let people choke on the amount of crap they'd get if "follow" actually meant "get everything" - they might learn not to like things without thinking.

But not sending notifications at all? Why? Why is that good?

7
kromem 1 day ago 5 replies      
Facebook also has a serious problem in that its news feed is a content recommendation engine with only positive reinforcement but no negative reinforcement. So you end up with a ton of false positives even when actively interacting with the content, and their system doesn't even know how wrong it is.

And should you really not like some content, the solution is unfriending the poster, rather than simply matching against that type of content (political, religious, etc).

The fact there isn't a private dislike button (that no one sees you clicked other than Facebook), is remarkable at this point. It's either woefully obtuse, or intentional so that a feed of false positives better hides moderately targeted ads.

8
chjohasbrouck 22 hours ago 1 reply      
Some proof or data to back up the article's claim would be great. I'm not really buying it.

If moms auto-like every post, then how is that a relevant signal? Everyone has a mom. That would mean every post is getting penalized in the same way (which effectively means no posts are getting penalized).

And if circumventing this was as simple as excluding his mom, wouldn't the effect be even greater if he excluded all non-technical friends and family?

Which pretty much just means you're posting this for the greater public, which presumably a lot of users of Facebook's API already do. Since his intention is for his content to be seen by the greater public, then... go ahead and tell the API that?

It's a great angle for an article, and it's very shareable, but he provides no data (even though he seems like someone who would have all of the data).

9
paulcnichols 1 day ago 2 replies      
Honestly, if FB was just me and my mom I'd probably get more value out of the site. Smaller radius is better IMO.
10
ianhawes 1 day ago 3 replies      
There is a very simple solution for this issue. Create a Facebook Page for yourself as a brand, post links to your articles on that page, then share it from your personal Facebook page.
11
type0 12 hours ago 0 replies      
> Id love to hear from others whove seen a similar effect and love their mothers (or other close loved ones) enough to not cut them out of their Facebook lives.

I think the bigger issue is family members, friends and relatives who do cut out their non-FB-using close ones by ignoring all other methods of telecommunication. "Oh, you didn't know we planned a wedding? Too bad you're not on fb!"

12
aeturnum 1 day ago 3 replies      
I'm pretty sure this description is wrong. My impression is Facebook shows your content to a subset of friends and then classifies it based on likes received. If your mom likes 9/10 posts and your other friends like 3/10 posts, then 60% of your posts /are/ family content. Even if they're about mathematical theories.
13
robbles 22 hours ago 0 replies      
I'm not a machine learning expert, but isn't this an easily solved problem?

Similar to TF/IDF, where you mitigate common words by dividing by their overall frequency, you should be able to divide the weight of any particular "like" by the frequency of likes between the two people. That way a genuine expression of interest by an acquaintance is weighted far higher than a relative or close friend that reflexively likes everything you post.
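
A toy version of that weighting, with numbers made up purely for illustration:

  // Weight a like by how rarely this liker likes the author's posts,
  // analogous to inverse document frequency in TF/IDF.
  // likesFromPerson: how many of the author's recent posts this person liked
  // totalPosts: how many recent posts the author published
  function likeWeight(likesFromPerson, totalPosts) {
    return Math.log((totalPosts + 1) / (likesFromPerson + 1));
  }

  console.log(likeWeight(9, 10).toFixed(2)); // "0.10" - a mom who likes nearly everything
  console.log(likeWeight(1, 10).toFixed(2)); // "1.70" - an acquaintance who rarely likes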

14
harry8 19 hours ago 1 reply      
Dump Facebook; it sucks for basically everything. Ring your mom, tell her she's awesome and you love her. Wish I could ring mine...

Post a blog post to a real blog under your control. If you want a colleague to see it, email the link and ask for feedback.

There solved. It's a good algo too.

15
mullingitover 1 day ago 0 replies      
I generally hold facebook in contempt for the forced filtering that they subject me to. Making the 'sort posts chronologically' flag come unstuck is a dirty hack that they should be ashamed of.
16
anotheryou 1 day ago 0 replies      
Any proof of this happening? And does it utilize the user-provided relationship data or just group people?
17
firasd 1 day ago 0 replies      
Even aside from complicated questions like the Newsfeed algorithm, when a friend started hitting Like on nearly every post of mine I appreciated their caring but mentally discounted the meaning of their Like in terms of being a reaction to the content of my post. It's like "Like Inflation". So the algo should probably do the same about indiscriminate likes...
18
jondubois 12 hours ago 0 replies      
It's a more general problem than that. Maybe Facebook should penalise posts written by popular authors, celebrities or recognizable brands to offset the popularity (rich-get-richer) factor and select only for quality.
19
_greim_ 1 day ago 1 reply      
> Facebook, despite the fact that they know shes my mom, doesnt take this fact into account in their algorithm.

Wouldn't it also be possible to analyze the content of the post to determine if it's family-related? It seems like with a math or technical post, that should be easy for FB to do.

20
compiler-guy 1 day ago 1 reply      
For every social media site XX:

Question: "Why does XX do things this way?"

Answer: "Because it increases engagement."

Why does the Facebook algorithm do this? Because it increases clicks. Why does YouTube use autoplay? Because it increases watch time.

For every single social media site.

21
warcher 1 day ago 1 reply      
Unrelated but worse problem: top of feed livelock. If you're below the fold, cause, I dunno, you got shit to do, you are only going to get viewed by heavy scrollers, which overly favors (IMHO, as somebody with shit to do) folks who get on the thread first. Even a one-dimensional "rank by number of uplikes" filter still doesn't calculate your likes/views ratio, which is what you'd actually care about.
22
pdevine 23 hours ago 0 replies      
I'm pretty sure they're using a machine learning algorithm, and it's determining the way to handle your post. Can someone who understands the ML algorithms better than I explain how this would interact with the feature weights? I'd be curious as to how we think that would play out.
23
arikrak 23 hours ago 0 replies      
I haven't found Facebook to be very good at recommending things. They often don't seem to be able to tell what people are interested in, and they don't really let users control who they're posting to. For example, they should make it easier to just post to people who live in the same city...
24
rainhacker 1 day ago 0 replies      
I feel it's not limited to family. Empirically, I've noticed that liking of a post factors in how much someone likes the poster rather than just the contents of the post.
25
viraptor 21 hours ago 0 replies      
I find this interesting: So much talk about the technical solutions both in the post and in the comments here. Yet, it doesn't seem like he asked his mom not to like his tech posts. If that's the goal - why not start there?
26
smrtinsert 1 day ago 0 replies      
Beyond a 'mom' problem, this seems like a highly plausible cause of the incredibly siloed content on Facebook.
27
erikb 1 day ago 0 replies      
I also have a problem with social media in general, especially with following people instead of institutions/groups. Usually what 99.9% of people like is totally not what I like. So if you base the content I should consume on the assumptions that I like what my connections liked you are nearly going the opposite direction of what I want.

PS: Maybe some of you have the experience of having an active following. I notice that many social networks like Twitter, FB, and YouTube allow comments. But almost never does the content creator/sharer actually react to comments. Some may use comments in future content, but some don't react at all. Are these people not even reading the comments? Why are people commenting when it's so obvious that it's just going down a black hole? For instance, on Twitter a share with additional text is nearly the same amount of work as a comment. And it's obvious that you reach more people by share-commenting rather than simply writing your comment underneath the content. So why do people do that? And why does Twitter even have the option to comment?

28
piker 1 day ago 0 replies      
Interesting post. I clicked thinking someone had coined a clever new "NASCAR Dad" moniker about parents who read and parrot their Facebook echo chamber at Thanksgiving, but was pleasantly surprised.
29
Beltiras 1 day ago 0 replies      
I like the idea of "embargo from group for n days".
30
jrochkind1 1 day ago 0 replies      
Kind of fascinating.
31
iplaw 13 hours ago 0 replies      
No joke. My mom is the first person to LIKE anything that I post -- and then make an inappropriate comment on it.

I have added all of my family to a family group. I'll see if I can post to friends/publicly and exclude an entire group of contacts.

32
EGreg 1 day ago 1 reply      
Simple solution: hide post from family :)
33
carapace 1 day ago 1 reply      
I've said it before and I'll say it again, FB users are a kind of digital peasant or serf.

To me it feels like we're seeing the genesis of the divide between Morlocks and Eloi.

34
45h34jh53k4j 1 day ago 1 reply      
Facebook is disgusting.

* Delete your account
* Convince your mother to delete her account

35
AznHisoka 1 day ago 3 replies      
This is not a problem, and I certainly hope Facebook does not fix it. Why? Because it forced the OP to narrow down his audience and show the post only to those who would enjoy it.

That's a much better experience than everyone trying to push everything they publish to you.

36
rickpmg 1 day ago 1 reply      
>.. shame on Facebook for torturing them for the exposure when I was originally targeting maybe 10 other colleagues to begin with.

Seems like the facebook algorithm is actually working for the users by in effect blocking insipid idiots from posting their crap trying to game the system.

You don't 'target' colleagues.. colleagues are people you work with and respect.. not try to spam.

37
mamon 1 day ago 1 reply      
You are obviously giving Facebook too much information to act upon. Some suggestions:

1. Don't use Facebook :)

2. If you use it don't tag your family members as such. Or your close friends as FB "close friends"

3. Never tag anyone in photos

4. Never set "relationship status"

5. Never add info about where do you live, work, etc.

6. Having separate public profile for your company/work related stuff is probably a good idea.

7. Never post anything you wouldn't want to see in CNN news :)

38
andreasgonewild 1 day ago 2 replies      
It's simple really, just stop participating in that evil experiment. From the outside, you look like morons; talking to opposite sides of an algorithm while interpreting what comes out as reality. It's been proven over and over again that consuming that crap makes everyone feel bad and hate each other. There are plenty of alternatives, but this one is mine: https://github.com/andreas-gone-wild/snackis
7
Math education: It's not about numbers, it's about learning how to think nwaonline.com
549 points by CarolineW  2 days ago   326 comments top 52
1
d3ckard 2 days ago 18 replies      
Maybe I'm wrong, but I have always believed that if you want people to be good at math, it's their first years of education which are important, not the last ones. In other words, the push for STEM should be present in kindergartens and elementary schools. By the time people go to high school it is too late.

I never had any problems with math until I went to university, so I was merely a passive observer of the everyday struggle of some people. I honestly believe that foundations are the key. Either you're taught to think critically, see patterns and focus on the train of thought, or you focus on numbers and memorization.

The latter obviously fails at some point, in many cases sufficiently late to make it really hard to go back and relearn everything.

Math is extremely hierarchical and I believe schools do not do enough to make sure students are on the same page. If we want to fix teaching math, I would start there, instead of working on motivation and general attitude. Those are consequences, not the reasons.

2
gusmd 2 days ago 4 replies      
I studied Mechanical Engineering, and it was my experience that several professors are only interested in having the students learn how to solve problems (which in the end boil down to math and applying equations), instead of actually learning the interesting and important concepts behind them.

My wife went to school for Architecture, where she learned "basic" structural mechanics and some Calculus, but still cannot explain to me in simple words what an integral or a derivative is. Not her fault at all: her Calculus professor had them calculate polynomial derivatives for 3 months, without ever making them understand the concept of "rate of change", or what "infinitesimal" means.

For me that's a big failure of our current "science" education system: too much focus on stupid application of equations and formulas, and too little focus on actually comprehending the abstract concepts behind them.

3
Koshkin 2 days ago 9 replies      
Learning "how to think" is just one part of it. The other part - the one that makes it much more difficult for many, if not most, people to learn math - especially the more abstract branches of it - is learning to think about math specifically. The reason is that mathematics creates its own universe of concepts and ideas, and this universe, all these notions are so different from what we have to deal with every day that learning them takes a lot of training, years of intensive experience dealing with mathematical structures of one kind or another, so it should come as no surprise that people have difficulty learning math.
4
spodek 2 days ago 1 reply      
> it's about learning how to think

It's about learning a set of thinking skills, not how to think. Many people who know no math can think and function very well in their domains and many people who know lots of math function and think poorly outside of math.

5
J_Sherz 2 days ago 2 replies      
My problem with Math education was always that speed was an enormous factor in testing. You can methodically go through each question aiming for 100% accuracy and not finish the test paper, while other students can comfortably breeze through all the questions and get 80% accuracy but ultimately score higher on the test. This kind of penalizing for a lack of speed can lead to younger kids who are maximizing for grades to move away from Math for the wrong reasons.

Source: I'm slow but good at Math and ended up dropping it as soon as I could because it would not get me the grades I needed to enter a top tier university.

6
BrandiATMuhkuh 2 days ago 0 replies      
Disclaimer: I'm CTO of https://www.amy.ac an online math tutor.

From our experience, most people struggle with math because they forgot/missed a certain math skill they might have learned a year or two before. But most teaching methods only tell the students to practise more of the same. When looking at good tutors, we could see that a tutor observes a student and then teaches them the missing skill before they actually go to the problem the student wanted help with. That seems to be a useful/working approach.

7
Nihilartikel 2 days ago 0 replies      
This is something I've been pondering quite a bit recently. It is my firm belief that mathematical skill and general numeracy are actually a small subset of abstract thought. Am I wrong in thinking that school math is the closest to deliberate training in abstract reasoning that one would find in public education?

Abstract reasoning, intuition, and creativity, to me, represent the underpinnings of software engineering, and really, most engineering and science, but are taught more by osmosis alongside the unintuitive, often boring mechanics of subjects. The difference between a good engineer of any sort and one that 'just knows the formulas' is the ability to fluently manipulate and reason with symbols and effects that don't necessarily have any relation or simple metaphor in the tangible world. And taking it further, creativity and intuition beyond dull calculation are the crucial art behind choosing the right hypothesis to investigate. Essentially, learning to 'see' in this non-spatial space of relations. When I'm doing system engineering work, I don't think in terms of X Gb/s throughput and Y FLOPS... (until later at least) but in my mind I have a model of the information and data structures clicking and buzzing, like watching the gears of a clock, and I sort of visualize working with this, playing with changes. It wouldn't surprise me if most knowledge workers have similar mental models of their own. But what I have observed is that people who have trouble with mathematics or coding aren't primed at all to 'see' abstractions in their mind's eye. This skill takes years to cultivate, but it seems that its cultivation is left entirely to chance by orthodox STEM education.

I was just thinking that this sort of thing could be approached a lot more deliberately and could yield very broad positive results in STEM teaching.

8
mindcrime 2 days ago 1 reply      
This part really resonates with me as well:

"You read all the time, right? We constantly have to read. If you're not someone who picks up a book, you have to read menus, you've got to read traffic signs, you've got to read instructions, you've got to read subtitles -- all sorts of things. But how often do you have to do any sort of complicated problem-solving with mathematics? The average person, not too often."

From this, two deductions:

Having trouble remembering the quadratic equation formula doesn't mean you're not a "numbers-person."

To remember your math skills, use them more often.

What I remember from high-school and college was this: I'd take a given math class (say, Algebra I) and learn it reasonably well. Then, summer vacation hits. Next term, taking Algebra II, all the Algebra I stuff is forgotten because, well, who uses Algebra I over their summer vacation? Now, Algebra II is harder than it should be because it builds on the previous stuff. Lather, rinse, repeat.

This is one reason I love Khan Academy so much. You can just pop over there anytime and spend a few minutes going back over stuff at any level, from basic freaking fractions, up through Calculus and Linear Algebra.

9
jtreagan 2 days ago 0 replies      
You say "it's not about numbers, it's about learning how to think," but the truth is it's about both. Without the number skills and the memorization of all those number facts and formulas, a person is handicapped both in learning other subjects and skills and in succeeding and progressing in their work and daily life. The two concepts -- number skills and thinking skills -- go hand in hand. Thinking skills can't grow if the number skills aren't there as a foundation. That's what's wrong with the Common Core and all the other fads that are driving math education these days. They push thinking skills and shove a calculator at you for the number skills -- and you stall, crash and burn.

The article brings out a good point about math anxiety. I have had to deal with it a lot in my years of teaching math. Sometimes my classroom has seemed so full of math anxiety that you could cut it with a butter knife. I read one comment that advocated starting our children out even earlier on learning these skills, but the truth is the root of math anxiety in most people lies in being forced to try to learn it at too early an age. Most children's brains are not cognitively developed enough in the early grades to learn the concepts we are pushing at them, so when a child finds failure at being asked to do something he/she is not capable of doing, anxiety results and eventually becomes habit, a part of their basic self-concept and personality. What we should instead do is delay starting school until age 8 or even 9. Some people don't develop cognitively until 12. Sweden recently raised their mandatory school age to 7 because of what the research has been telling us about this.

10
jeffdavis 2 days ago 2 replies      
My theory is that math anxiety is really anxiety about a cold assessment.

In other subjects you can rationalize to yourself in various ways: the teacher doesn't like me, or I got unlucky and they only asked the history questions I didn't know.

But with math, no rationalization is possible. There's no hope the teacher will go easy on you, or be happy that you got the gist of the solution.

Failure in math is often (but not always) a sign that education has failed in general. Teachers can be lazy or too nice and give good grades in art or history or reading to any student. But when the standardized math test comes around, there's no hiding from it (teacher or student).

11
quantum_state 2 days ago 0 replies      
Wow ... this blows me away ... in a few short hours, so many people chimed in sharing thoughts ... It is great ... Would like to share mine as well. Fundamentally, math to me is like a language. It's meant to help us describe things a bit more quantitatively and to reason a bit more abstractly and consistently ... if it can be made mechanical and reduce the burden on one's brain, it would be ideal. Since it's like a language, as long as one knows the basics, such as some basic things of set theory, functions, etc., one should be ready to explore the world with it. Math is often perceived as a set of concepts, theorems, rules, etc. But if one gets behind the scenes to learn some of the original stories of these things, it becomes very natural. At some point, one would have one's mind liberated and start to use math or create math like we usually do with day-to-day languages such as English.
12
g9yuayon 2 days ago 2 replies      
Is this a US thing? Why would people still think that math is about numbers? Math is about patterns, which got drilled into us by our teachers in primary school. I really don't understand how the US education system can fuck up so badly on a fundamental subject like math.
13
monic_binomial 2 days ago 1 reply      
I was a math teacher for 10 years. I had to give it up when I came to realize that "how to think" is about 90% biological and strongly correlated to what we measure with IQ tests.

This may be grave heresy in the Temple of Tabula Rasa where most education policy is concocted, but nonetheless every teacher I ever knew was ultimately forced to choose between teaching a real math class with a ~30% pass rate or a watered-down math Kabuki show with a pass rate just high enough to keep their admins' complaints to a low grumble.

In the end we teachers would all go about loudly professing to each other that "It's not about numbers, it's about learning how to think" in a desperate bid to quash our private suspicions that there's actually precious little that can be done to teach "how to think."

14
ouid 2 days ago 0 replies      
When people talk about the failure of mathematics education, we often talk about it in terms of the students' inability to "think mathematically".

It's impossible to tell if students are capable of thinking mathematically, however, because I have not met a single (non-mathlete) student who could give me the mathematical definition of... anything. How can we evaluate students' mathematical reasoning ability if they have zero mathematical objects about which to reason?

15
yellowapple 2 days ago 0 replies      
I wish school curricula would embrace that "learning how to think" bit.

With the sole exception of Geometry, every single math class I took in middle and high school was an absolutely miserable time of rote memorization and soul-crushing "do this same problem 100 times" busy work. Geometry, meanwhile, taught me about proofs and theorems v. postulates and actually using logical reasoning. Unsurprisingly, Geometry was the one and only math class I ever actually enjoyed.

16
brendan_a_b 2 days ago 1 reply      
My mind was blown when I came across this Github repo that demonstrates mathematical notation by showing comparisons with JavaScript code https://github.com/Jam3/math-as-code

I think I often struggled or was intimidated by the syntax of math. I started web development after years of thinking I just wasn't a math person. When looking at this repo, I was surprised at how much more easily and naturally I was able to grasp concepts in code compared to being introduced to them in math classes.
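
To give a flavor of the idea (this is my own illustration in Go rather than the repo's JavaScript, so take the exact code as a sketch): the summation symbol that looks intimidating on a chalkboard is just an accumulating loop.

  package main

  import "fmt"

  // sum is the capital-sigma notation spelled out: add up f(i) for i = 1..n.
  func sum(n int, f func(int) float64) float64 {
      total := 0.0
      for i := 1; i <= n; i++ {
          total += f(i)
      }
      return total
  }

  func main() {
      // "sigma from i=1 to 4 of i squared" = 1 + 4 + 9 + 16 = 30
      fmt.Println(sum(4, func(i int) float64 { return float64(i * i) }))
  }

Seeing the loop next to the notation makes it clear the symbol is just shorthand for a procedure, which is exactly what the repo is going for.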

17
alistproducer2 2 days ago 1 reply      
I can't agree more. Math is about intuition of what the symbols are doing. In the case of functions, intuition about how the symbols are transforming the input. I've always thought I was "bad at math." It wasn't until my late 20's, when I took it upon myself to get better at calculus using "Calculus Success in 20 Minutes a Day"[0], that I finally realized why I was "bad" at it; I never understood what I was doing.

That series of books really put intuition at the forefront. I began to realize that the crazy symbols and formulas were stand-ins for living, breathing dynamic systems: number transformers. Each formula and symbol represented an action. Once I understood math as a way to encode useful number transformations, it all clicked. Those rules and functions were encoded after a person came up with something they wanted to do. The formula or function is merely a compact way of describing this dynamic system to other people.

The irony was I always thought math was boring. In retrospect it was because it was taught as if it had no purpose other than to provide useless mental exercise. Once I started realizing that derivatives are used all around me to do cool shit, I was inspired to learn how they worked because I wanted to use them to do cool shit too. I went through several years of math courses and none of them even attempted to tell me that math was just a way to represent cool real world things. It took a $10 used book from amazon to do that. Ain't life grand?

[0]:https://www.amazon.com/Calculus-Success-20-Minutes-Day/dp/15...

18
gxs 2 days ago 0 replies      
Late to the party but wanted to share my experience.

I was an Applied Math major at Berkeley. Why?

When I was in 7th grade, I had an old school Russian math teacher. She was tough, not one for niceties, but extremely fair.

One day, being the typical smart ass that I was, I said, why the hell do I need to do this, I have 0 interest in Geometry.

Her answer completely changed my outlook and eventually was the reason why I took extensive math in HS and majored in math in college.

Instead of dismissing me, instead of just telling me to shut up and sit down, she explained things to me very calmly.

She said that doing math, beyond improving your math skills, improves your reasoning ability. It's a workout for your brain and helps develop your logical thinking. Studying it now at a young age will help it become part of your intuition so that in the future you can reason about complex topics that require more than a moment's thought.

She really reached me on that day, took me a while to realize it. Wish I could have said thank you.

Wherever you are Ms. Zavesova, thank you.

Other benefits: doing hard math really builds up your tolerance for tackling hard problems. Reasoning through long problems, trying and failing, really requires a certain kind of stamina. My major definitely gave me this. I am a product manager now and while I don't code, I have an extremely easy time working with engineers to get stuff done.

19
Tommyixi 1 day ago 0 replies      
For me, math has always been a source of unplugging. I'd sit at my kitchen table, put in some headphones, and just get lost in endless math problems.

Interestingly, now as a master's student in a statistics graduate program, I've learned that I don't like "doing" math but get enjoyment from teaching it. I really like it when students challenge me when I'm at the chalkboard and I'll do anything for those "ah-ha!" moments. The best is at the end of the semester hearing students say "I thought this class was going to suck but I worked hard and am proud of the work I did." I'm hoping that on some small scale I'm shaping their views on math, or at least giving them the confidence to say, "I don't get this, but I'm not afraid to learn it."

20
taneq 2 days ago 6 replies      
As my old boss once said, "never confuse mathematics with mere arithmetic."
21
dbcurtis 2 days ago 0 replies      
Permit me to make a tangentially related comment of interest to parents reading this thread: This camp for 11-14 y/o kids: http://www.mathpath.org/ is absolutely excellent. My kid loved it so much they attended three years. Great faculty... John Conway, Francis Su, many others. If you have a math-loving kid of middle-school age, I encourage you to check it out.
22
simias 2 days ago 1 reply      
I completely agree. I think we start all wrong too. The first memories I have of maths at school are of learning how to compute an addition, a subtraction, and later a multiplication and division. Then we had to memorize the multiplication tables by heart.

That can be useful of course (especially back then when we didn't carry computers in our pockets at all times) but I think it sends some pupils on a bad path with regards to mathematics.

Maths shouldn't be mainly about memorizing tables and "dumbly" applying algorithms without understanding what they mean. That's how you end up with kids who can answer "what's 36 divided by 4" but not "you have 36 candies that you want to split equally with 3 other people, how many candies do you end up with?"

And that goes beyond pure maths too. In physics if you pay attention to the relationship between the various units you probably won't have to memorize many equations, it'll just make sense. You'll also be much more likely to spot errors. "Wait, I want to compute a speed and I'm multiplying amperes and moles, does that really make sense?".

23
jrells 2 days ago 0 replies      
I often worry that mathematics education is strongly supported on the grounds that it is about "learning how to think", yet the way it is executed rarely prioritizes this goal. What would it look like if math curriculum were redesigned to be super focused on "learning how to think"? Different, for sure.
24
lordnacho 2 days ago 4 replies      
I think a major issue with math problems in school is that they're obvious.

By that I don't mean it's easy. But when you're grappling with some problem, whatever it is, eg find some angle or integrate some function, if you don't find the answer, someone will show you, and you'll think "OMG why didn't I think of that?"

And you won't have any excuses for why you didn't think of it. Because math is a bunch of little logical steps. If you'd followed them, you'd have gotten everything right.

Which is a good reason to feel stupid.

But don't worry. There are things that mathematicians, real ones with PhDs, will discover in the future. By taking a number of little logical steps that haven't been taken yet. They could have gone that way towards the next big theorem, but they haven't done it yet for whatever reason (eg there's a LOT of connections to be made).

25
tnone 1 day ago 0 replies      
Is there any other subject that is given as much leeway for its abysmal pedagogical failures?

"Economics, it's not about learning how money and markets work, it's about learning how to think."

"Art, it's not about learning about aesthetics, style, or technique, it's about learning how to think."

"French, it's not about learning how to speak another language, it's..."

Math has a problem, and it's because the math curriculum is a pile of dull, abstract cart-before-the-horse idiocy posing as discipline.

26
alexandercrohde 2 days ago 0 replies      
Enough "I" statements already. It's ironic how many people seem to think their personal experience is somehow relevant on a post about "critical thinking."

The ONLY sane way to answer these questions:

- Does math increase critical thinking?

- Does critical thinking lead to more career earnings/happiness/etc.?

- When does math education increase critical thinking most?

- What kind of math education increases critical thinking?

is with a large-scale research study that defines an objective way to measure critical thinking and controls for relevant variables.

Meaning you don't get to weigh in with an anecdotal opinion based on your study-of-1, no-control-group, no-objective-measure personal experience.

27
dahart 2 days ago 4 replies      
I wonder if a large part of our math problem is our legacy fixation on Greek letters. Would math be more approachable to English speakers if we just used English?

I like to think about math as language, rather than thought or logic or formulas or numbers. The Greek letters are part of that language, and part of why learning math is learning a completely foreign language, even though so many people who say they can't do math practice mathematical concepts without Greek letters. All of the math we do on computers, symbolic and numeric, analytic and approximations, can be done using a Turing machine that starts with only symbols and no built-in concept of a number.

28
WheelsAtLarge 2 days ago 0 replies      
True, Math is ultimately about how to think but students need to memorize and grasp the basics in addition to making sure that new material is truly understood. That's where things fall apart. We are bombarded with new concepts before we ultimately know how to use what we learned. How many people use imaginary numbers in their daily life? Need I say more?

We don't communicate in math jargon every day, so it's ultimately a losing battle. We learn new concepts but we lose them since we don't use them. Additionally, a large number of students get lost and frustrated and finally give up. Which ultimately makes math a poor method to teach thinking, since only a few students attain the ultimate benefits.

Yes, math is important, and needs to be taught, but if we want to use it as a way to learn how to think there are better methods. Programming is a great one. Students can learn it in one semester, can use it for life, and can also expand on what they already know.

Also, exploring literature and discussing what the author tries to convey is a great way to learn how to think. All those hours in English class trying to interpret what the author meant were more about exploring your mind and your peers' thoughts than about what the author actually meant. The author lost his sphere of influence once the book was published. It's up to the readers of every generation to interpret the work. So literature is a very strong way to teach students how to think.

29
listentojohan 2 days ago 0 replies      
The true eye-opener for me was reading Number - The Language of Science by Tobias Dantzig. The philosophy part of math as an abstraction layer for what is observed or deduced was a nice touch.
30
yequalsx 2 days ago 3 replies      
I teach math at a community college. I've tried many times to teach my courses in such a way that understanding the concepts and thinking were the goals. Perhaps I'm jaded by the failures I encountered but students do not want to think. They want to see a set of problem types that need to be mimicked.

In our lowest level course we teach beginning algebra. Almost everyone has an intuition that 2x + 3x should be 5x. It's very difficult to get them to understand that there is a rule for this that makes sense, and that it is the application of this rule that allows you to conclude that 2x + 3x is 5x. Furthermore, and here is the difficulty, that same rule is why 3x + ax is (3+a)x.
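
Spelling the rule out (it is the distributive law; stating it once is the whole step students are missing):

  ax + bx = (a + b)x
  2x + 3x = (2 + 3)x = 5x
  3x + ax = (3 + a)x

Same rule, three instances; the only thing that changes is what sits in front of the x.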

I believe that for most people mathematics is just brainwashing via familiarity. Most people end up understanding math by collecting knowledge about problem types, tricks, and becoming situationally aware. Very few people actually discover a problem type on their own. Very few people are willing, or have been trained to be willing, to really contemplate a new problem type or situation.

Math education in its practice has nothing to do with learning how to think. At least in my experience and as I understand what it means to learn how to think.

31
katdev 1 day ago 0 replies      
You know what helps kids (and adults) learn math? The abacus/soroban. Yes, automaticity with math facts/basic math is important but what's really important is being able to represent the base-10 system mentally.

The abacus is an amazing tool that's been successful in creating math savants - here's the world champion adding 10 four-digit numbers in 1.7 seconds using mental math https://www.theguardian.com/science/alexs-adventures-in-numb...

Students are actually taught how to think of numbers in groups of tens, fives, ones in Common Core math -- however, most are not given the abacus as a tool/manipulative.
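
To make that grouping concrete, here is a toy sketch (mine, in Go, not from any abacus curriculum): each base-10 digit of a number becomes one "heaven" bead worth five plus up to four "earth" beads worth one, which is exactly the decomposition the soroban drills.

  package main

  import "fmt"

  // sorobanDigits prints each base-10 digit of n as a soroban column:
  // (digit / 5) five-beads plus (digit % 5) one-beads.
  func sorobanDigits(n int) {
      var digits []int
      for n > 0 {
          digits = append([]int{n % 10}, digits...)
          n /= 10
      }
      for _, d := range digits {
          fmt.Printf("digit %d = %d five-bead(s) + %d one-bead(s)\n", d, d/5, d%5)
      }
  }

  func main() {
      sorobanDigits(2017) // the 7 column, for example, is one five-bead plus two one-beads
  }

That digit-by-digit decomposition is the same tens/fives/ones grouping Common Core aims for, just with a physical tool behind it.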

32
lucidguppy 2 days ago 0 replies      
Why aren't people taught how to think explicitly? The Greeks and the Romans thought it was a good idea.
33
Mz 2 days ago 0 replies      
Well, I actually liked math and took kind of a lot of it in K-12. I was in my 30s before I knew there were actual applications for some of the things I memorized my way through without really understanding.

When I homeschooled my sons, I knew this approach would not work. My oldest has trouble with numbers, but he got a solid education in the concepts. He has a better grasp of things like GIGO than most folks. We also pursued a stats track (at their choice) rather than an algebra-geometry-trig track.

Stats is much more relevant to life for most people most of the time and there are very user-friendly books on the topic, like "How to lie with statistics." If you are struggling with this stuff, I highly recommend pursuing something like that.

34
0xFFC 2 days ago 0 replies      
Exactly. As an ordinary hacker I was always afraid of math. But after taking mathematical analysis I realized how wonderful math is. These days I am in love with pure mathematics. It literally corrected my brain's pipeline in so many ways, and it continues to do so further and further.

I have thought about changing my major to pure mathematics too.

35
jmml97 2 days ago 1 reply      
I'm studying math right now and I have that problem. Theorems and propositions are just vomited at us in class instead of making us think. There's not a single subject dedicated to learning the process of thinking in maths. So I think we're learning the wrong (the hard) way.
36
djohnston 1 day ago 0 replies      
Anecdotally, I was a pretty average math student growing up and a pretty good math student in university. One of the reasons I studied math in college was to improve what was objectively my weakest area intellectually, but I found that once we were working with much more abstract models and theories, I was more competent.
37
andyjohnson0 1 day ago 0 replies      
A couple of years ago I did the Introduction to Mathematical Thinking course on Coursera [1]. Even though I found it hard, I enjoyed it and learned a lot, and I feel I got some insight into mathematical thought processes. Recommended.

[1] https://www.coursera.org/learn/mathematical-thinking

38
keymone 2 days ago 1 reply      
I always found munging numbers and memorizing formulas discouraging. I think physics classes teach kids more math than math classes do, and in more interesting ways (or at least have the potential to).
39
JoshTriplett 2 days ago 0 replies      
One of the most critical skills I see differentiating people around me (co-workers and otherwise) who succeed and those who don't is an analytical, pattern-recognizing and pattern-applying mindset. Math itself is quite useful, but I really like the way this particular article highlights the mental blocks and misconceptions that seem to particularly crop up around mathematics; those same blocks and misconceptions tend to get applied to other topics as well, just less overtly.
40
cosinetau 2 days ago 0 replies      
As someone with a degree in applied mathematics, I feel the problem with learning mathematics is more often than not a problem or a fault of the instructor of mathematics.

Many instructors approach the subject with a very broad understanding of it, and it's very difficult (more difficult than math) to step back from that understanding and break it down into understandable chunks of knowledge or reasoning.

41
archeantus 2 days ago 0 replies      
If we want to teach people how to think, I propose that math isn't the best way to do it. I can't tell you how many times I complained about how senseless math was. The real-world application is very limited, for the most part.

Contrast that to if I had learned programming instead. Programming definitely teaches you how to think, but it also has immense value and definite real-world application.

42
bojo 1 day ago 0 replies      
When I first saw it I thought the sign in the mentioned tweet may have been because the deli was next to a mathematics department and the professors/students would stand around and hold up the line while discussing math.

Overactive imagination I guess.

43
k__ 2 days ago 0 replies      
I always had the feeling I failed to grasp math because I never got good at mid level things.

It took me reeeally long to grasp things like linear algebra and calculus and I never was any good at it.

It was a struggle to get my CS degree.

Funny thing is, I'm really good at the low level elementary school stuff so most people think I'm good at math...

44
dorianm 23 hours ago 0 replies      
Maths problems are cool too, like counting apples and oranges :) (Or gold and rubies)
45
GarvielLoken 1 day ago 0 replies      
tl;dr: A couple of numbers-nerds are sad and offended that math is not as recognized as reading and literature, where there are great works that speak of the human condition and illustrate life.

Also they have the mandatory "everything is really math!". "LeGrand notes that dancing and music are mathematics in motion. So ... dance, play an instrument."

Just because I can describe history through the perspective of capitalism or Marx's theories does not make history the same thing as either of those.

46
EGreg 2 days ago 0 replies      
There just needs to be faster feedback than once per test.

https://opinionator.blogs.nytimes.com/2011/04/21/teaching-ma...

47
CoolNickname 2 days ago 0 replies      
School is not about learning facts but about learning how to think. The way it is now, it's more about showing off than it is about anything actually useful. They don't reward effort, they reward talent.
48
humbleMouse 2 days ago 0 replies      
On a somewhat related tangent, I think about programming the same way.

I always tell people programming and syntax are easy - it's learning to think in a systems and design mindset that is the hard part.

49
crb002 2 days ago 2 replies      
Programming needs to be taught alongside Algebra I. Especially in a language like Haskell or Scheme where algebraic refactoring of type signatures looks like normal algebra notation.
50
calebm 2 days ago 0 replies      
I agree, but have a small caveat: math does typically strongly involve numbers, so in a way, it is about numbers, though it's definitely not about just memorizing things or blindly applying formulas.

It just bugs me sometimes when people make hyperbolic statements like that. I remember coworkers saying things like "software consulting isn't about programming". Yes it is! The primary skill involved is programming, even if programming is not the ONLY required skill.

51
pklausler 2 days ago 0 replies      
How do you "learn to think" without numbers?

Depressing.

52
bitwize 2 days ago 0 replies      
Only really a problem in the USA. In civilized countries, there's no particular aversion to math or to disciplined thinking in general.
8
Toward Go 2 golang.org
538 points by dmit  8 hours ago   433 comments top 30
1
dgacmu 53 minutes ago 0 replies      
I should send this to rsc, but it's fairly easy to find examples where the lack of generics caused an opportunity cost.

(1) I started porting our high-performance, concurrent cuckoo hashing code to Go about 4 years ago. I quit. You can probably guess why from the comments at the top of the file about boxing things with interface{}. It just got slow and gross, to the point where libcuckoo-go was slower and more bloated than the integrated map type, just because of all the boxing: https://github.com/efficient/go-cuckoo/blob/master/cuckoo.go
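
For anyone who hasn't hit this: here's a minimal sketch (mine, not the actual libcuckoo-go code) of what a container has to look like in Go 1 without generics. Every value is smuggled through interface{}, which costs an allocation (boxing) on the way in and a type assertion on the way out.

  package main

  import "fmt"

  // Table is a "generic" container the only way Go 1 allows: via interface{}.
  type Table struct {
      buckets map[uint64]interface{}
  }

  func (t *Table) Put(key uint64, value interface{}) { // value is boxed here
      t.buckets[key] = value
  }

  func (t *Table) Get(key uint64) (interface{}, bool) {
      v, ok := t.buckets[key]
      return v, ok
  }

  func main() {
      t := &Table{buckets: make(map[uint64]interface{})}
      t.Put(42, 12345)
      v, _ := t.Get(42)
      n := v.(int) // every caller pays a type assertion to recover the real type
      fmt.Println(n + 1)
  }

Multiply that boxing and asserting across every bucket operation in a hash table and the slowdown versus the built-in, specialized map is easy to believe.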

(my research group created libcuckoo.)

Go 1.9 offers a native concurrent map type, four years after we looked at getting libcuckoo on Go -- because fundamental containers like this really benefit from being type-safe and fast.

(2) I chose to very tightly restrict the set of operations we initially accepted into the TensorFlow Go API because there was no non-gross way that I could see to manipulate Tensor types without adding the syntactic equivalent of the bigint library, where everything was Tensor.This(a, b), and Tensor.That(z, q). https://github.com/tensorflow/tensorflow/pull/1237 and https://github.com/tensorflow/tensorflow/pull/1771

I love Go, but the lack of generics simply causes me to look elsewhere for certain large classes of development and research. We need them.

2
munificent 8 hours ago 3 replies      

> I can't answer a design question like whether to support generic methods, which is to say methods that are parameterized separately from the receiver.
I work on the Dart language. Dart was initially designed with generic classes but not generic methods. Even at the time, some people on the team felt Dart should have had both.

We proceeded that way for several years. It was annoying, but tolerable because of Dart's optional type system -- you can sneak around the type checker really easily anyway, so in most cases you can just use "dynamic" instead of a generic method and get your code to run. Of course, it won't be type safe, but it will at least mostly do what you want.

When we later moved to a sound static type system, generic methods were a key part of that. Even though end users don't define their own generic methods very often, they use them all the time. Critical common core library methods like Iterable.map() are generic methods and need to be in order to be safely, precisely typed.

This is partially because functional-styled code is fairly idiomatic on Dart. You see lots of higher-order methods for things like manipulating sequences. Go has lambdas, but stylistically tends to be more imperative, so I'm not sure if they'll feel the same pressure.

I do think if you add generic types without generic methods, you will run into their lack. Methods are how you abstract over and reuse behavior. If you have generic classes without generic methods, you lose the ability to abstract over operations that happen to use those generic classes.

A simple example is a constructor function. If you define a generic class that needs some kind of initialization (discouraged in Go, but it still happens), you really need that constructor to be generic too.

3
fusiongyro 8 hours ago 19 replies      
The paragraph I was looking for is this:

> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve. As a result, I can't answer a design question like whether to support generic methods, which is to say methods that are parameterized separately from the receiver. If we had a large set of real-world use cases, we could begin to answer a question like this by examining the significant ones.

This is a much more nuanced position than the Go team has expressed in the past, which amounted to "fuck generics," but it puts the onus on the community to come up with a set of scenarios where generics could solve significant issues. I wonder if Go's historical antipathy towards this feature has driven away most of the people who would want it, or if there is still enough latent desire for generics that serious Go users will be able to produce the necessary mountain of real-world use cases to get something going here.

4
bad_user 8 hours ago 4 replies      
Java's generics have had issues due to use-site variance, plus the language isn't expressive enough, leading its users into a corner where they start wishing for reified generics (although arguably it's a case of missing the forest for the trees).

But even so, even with all the shortcomings, once Java 5 was released people migrated to using generics, even though generics in Java are totally optional by design.

My guess as to why that happens is that the extra type safety and expressivity are definitely worth it, and without generics the type system ends up getting in your way. I personally can tolerate many things, but not a language without generics.

You might as well use a dynamic language. Not Python of course, but something like Erlang would definitely fit the bill for Google's notion of "systems programming".

The Go designers are right to not want to introduce generics though, because if you don't plan for generics from the get go, you inevitably end up with a broken implementation due to backwards compatibility concerns, just like Java before it.

But just like Java before it, Go will have half-assed generics. It's inevitable.

Personally I'm sad because Google had an opportunity to introduce a better language, given their marketing muscle. New mainstream languages are in fact a rare event. They had an opportunity here to really improve the status quo. And we got Go, yay!

5
didibus 6 hours ago 6 replies      
I get that everyone would love to have a functional language that's eager by default with optional lazy constructs, great polymorphism, statically typed with inference, generics, a great concurrency story, an efficient GC, and that compiles quickly to self-contained binaries with simple and effective tooling which takes only seconds to set up, while giving you performance that equals Java and can rival C, with a low memory footprint.

But I don't know of one, and maybe that's because the Go team is right: some tradeoffs needed to be made, they made them, and so Go is what it is. You can't add all the other great features you want and eat the Go cake too.

Disclaimer: I'm no language design expert. I'm just going by the fact that I've yet to hear of such a language.

6
EddieRingle 8 hours ago 3 replies      

> To minimize disruption, each change will require careful thought, planning, and tooling, which in turn limits the number of changes we can make. Maybe we can do two or three, certainly not more than five. ... I'm focusing today on possible major changes, such as additional support for error handling, or introducing immutable or read-only values, or adding some form of generics, or other important topics not yet suggested. We can do only a few of those major changes. We will have to choose carefully.
This makes very little sense to me. If you _finally_ have the opportunity to break backwards-compatibility, just do it. Especially if, as he mentions earlier, they want to build tools to ease the transition from 1 to 2.

> Once all the backwards-compatible work is done, say in Go 1.20, then we can make the backwards-incompatible changes in Go 2.0. If there turn out to be no backwards-incompatible changes, maybe we just declare that Go 1.20 is Go 2.0. Either way, at that point we will transition from working on the Go 1.X release sequence to working on the Go 2.X sequence, perhaps with an extended support window for the final Go 1.X release.
If there aren't any backwards-incompatible changes, why call it Go 2? Why confuse anyone?

---

Additionally, I'm of the opinion that more projects should adopt faster release cycles. The Linux kernel has a new release roughly every ~7-8 weeks. GitLab releases monthly. This allows a tight, quick iterate-and-feedback loop.

Set a timetable, and cut a release with whatever is ready at the time. If there are concerns about stability, you could do separate LTS releases. Two releases per year is far too few, I feel. Besides, isn't the whole idea of Go to go fast?

7
jimjimjim 5 hours ago 4 replies      
Here be Opinions:

I hate generics. also, I hate exceptions.

Too many people are wanting "magic" in their software. All some people want is to write the "Happy Path" through their code to get some Glory.

If it's your pet project to control your toilet with tweets then that's fine. But if it's for a program that will run 24/7 without human intervention then the code had better be plain, filled with the Unhappy Paths and boring.

Better one hour writing "if err" than two hours looking at logs at ohshit.30am.
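
Concretely, the "boring" style being defended looks like this (a generic sketch, not code from any particular project; the config path is made up):

  package main

  import (
      "fmt"
      "io/ioutil"
      "os"
  )

  func readConfig(path string) ([]byte, error) {
      data, err := ioutil.ReadFile(path)
      if err != nil {
          // the unhappy path is handled right where the failure happens
          return nil, fmt.Errorf("reading config %s: %v", path, err)
      }
      return data, nil
  }

  func main() {
      cfg, err := readConfig("/etc/myapp.conf")
      if err != nil {
          fmt.Fprintln(os.Stderr, err)
          os.Exit(1)
      }
      fmt.Printf("loaded %d bytes of config\n", len(cfg))
  }

Verbose, yes, but every failure has an explicit, greppable home, which is the point being made.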

8
zackmorris 6 hours ago 2 replies      
Go doesn't have const structs, maps or other objects:

https://stackoverflow.com/questions/43368604/constant-struct...

https://stackoverflow.com/questions/18342195/how-to-declare-...

This is a remarkable oversight which makes it impossible to write purely-functional code with Go. We also see this same problem in most other imperative languages, with organizations going to great lengths to emulate const data:

https://facebook.github.io/immutable-js/

Const-ness in the spirit of languages like Clojure would seem to be a relatively straightforward feature to add, so I don't really understand the philosophy of leaving it out. Hopefully someone here knows and can enlighten us!
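
A minimal illustration of the gap (nothing here beyond the standard language): const works for basic types, composite values have to fall back to a plain var, and nothing stops that var from being mutated.

  package main

  import "fmt"

  const maxRetries = 3 // fine: constants of basic types are allowed

  type Point struct{ X, Y int }

  // const origin = Point{0, 0}      // does not compile: composite values can't be const
  // const limits = map[string]int{} // does not compile either

  // The conventional workaround is a package-level var, which the compiler
  // will happily let anyone mutate.
  var origin = Point{0, 0}

  func main() {
      origin.X = 99 // compiles, silently breaking the intended "constant"
      fmt.Println(maxRetries, origin)
  }

Hence the libraries and conventions that try to emulate immutability instead of getting it from the language.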

9
loup-vaillant 6 hours ago 1 reply      
> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve. [...] If we had a large set of real-world use cases, we could begin to answer a question like this by examining the significant ones.

Not implementing generics, then suggesting that it would be nice to have examples of generics being used in the wild... You had it coming, obviously.

Now what's the next step, refusing to implement generics because nobody uses them?

> Every major potential change to Go should be motivated by one or more experience reports documenting how people use Go today and why that's not working well enough.

My goodness, it looks like that is the next step. Go users have put up with the absence of generics, so they're not likely to complain too loudly at this point (besides, I hear the empty interface escape hatch, while not very safe, does work). More exacting developers have probably dismissed Go from the outset, so they won't be able to provide those experience reports.

10
lemoncucumber 6 hours ago 2 replies      
As much as I want them to fix the big things like lack of generics, I hope they fix some of the little things that the compiler doesn't catch but could/should. One that comes to mind is how easy it is to accidentally write:

 for foo := range(bar)
Instead of:

 for _, foo := range(bar)
When you just want to iterate over the contents of a slice and don't care about the indices. Failing to unpack both the index and the value should be a compile error.
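
For anyone who hasn't been bitten by it, a tiny sketch of what the two forms actually yield:

  package main

  import "fmt"

  func main() {
      bar := []string{"a", "b", "c"}

      for foo := range bar {
          fmt.Println(foo) // prints 0, 1, 2 -- the indices, not the values
      }

      for _, foo := range bar {
          fmt.Println(foo) // prints a, b, c -- the values
      }
  }

With a slice of ints the first form even type-checks where you meant the second, which is why it slips through so easily.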

11
tschellenbach 7 hours ago 3 replies      
The beauty of Go is that you get developer productivity pretty close to Ruby/Python levels with performance that is similar to Java/C++

Improvements to package management is probably the highest item on my wishlist for Go 2.

12
Xeoncross 6 hours ago 1 reply      
Generics have never stopped me from building in Go... But without them I often do my prototyping in Python, JavaScript, or PHP.

Working with batch processing I'm often changing my maps to lists or hashes multiple times during discovery. Go makes me rewrite all my code each time I change the variable type.

13
alexandernst 8 hours ago 6 replies      
How about fixing all the GOPATH crap?
14
oelmekki 7 hours ago 0 replies      
As a ruby and go dev, I'm a bit sad to see backward-compatibility going. Thinking I could write code with minimum dependencies and that would just work as is years later was really refreshing compared to the high level of maintenance needed in a ruby app.

But well, I trust the core team to make the best choices.

15
nebabyte 5 hours ago 0 replies      
But I always heard never to use Go 2! :P
16
egonschiele 4 hours ago 0 replies      
Most of the discussion here seems to be around generics, and it sounds like they still don't see the benefit of generics.

I like Go, but the maintainers have a maddeningly stubborn attitude towards generics and package managers and won't ease up even with many voices asking for these features.

17
cdnsteve 8 hours ago 1 reply      
"We estimate that there are at least half a million Go developers worldwide, which means there are millions of Go source files and at least a billion of lines of Go code"
18
concatime 4 hours ago 0 replies      
The leap second problem reminds me of this post[0].

[0] https://news.ycombinator.com/item?id=14121780

19
beliu 3 hours ago 1 reply      
This was announced at GopherCon today. FYI, if folks are interested in following along other conference proceedings, there is no livestream, but there is an official liveblog: https://sourcegraph.com/gophercon
20
drfuchs 8 hours ago 2 replies      
"Go 2 considered harmful" - Edsger Dijkstra, 1968
21
elliotmr 5 hours ago 6 replies      
I must say that whenever there is a discussion about the merits of the Go programming language, it really feels hostile in the discussion thread. It seems that people are seriously angry that others even consider using the language. It is sort of painful reading through the responses which implicitly declare that anybody who enjoys programming with Go is clueless.

It also really makes me wonder if I am living in some sort of alternate reality. I am a professional programmer working at a large company and I am pretty sure that 95% of my colleagues (myself included, as difficult as it is for me to admit) have no idea what a reified generic is. I have run into some problems where being able to define custom generic containers would be nice, but I don't feel like that has seriously hindered my ability to deliver safe, functional, and maintainable software.

What I appreciate most about Go is that I am sure that I can look at 99% of the Go code written in the world and understand it immediately. When maintaining large code bases with many developers of differing skill levels, this advantage can't be overstated. That is the reason there are so many successful new programs popping up in Go with large open-source communities. It is because Go is accessible and friendly to people of varying skill levels, unlike most of the opinions expressed in this thread.

22
jasonwatkinspdx 7 hours ago 6 replies      
Disclaimer: I mean this with love

This post really frustrates me, because the lengthy discussion about identifying problems and implementing solutions is pure BS. Go read the years' worth of tickets asking for monotonic time, and see how notable names in the core team responded. Pick any particular issue people commonly have with golang, and you'll likely find a ticket with the same pattern: overt dismissal, with a heavy moralizing tone that you should feel bad for even asking about the issue. It's infuriating that the same people making those comments are now taking credit for the solution, when they had to be dragged into even admitting the issue was legitimate.

23
Analemma_ 8 hours ago 5 replies      
> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve.

This is sampling bias at work. The people who need generics have long since given up on Go and no longer even bother participating in Go-related discussions, because they believe it will never happen. Meanwhile, if you're still using Go, you must have use cases where the lack of generics is not a problem and the existing language features are good enough. Sampling Go users to try and find compelling use cases for adding generics is not going to yield any useful data, almost by definition.

24
AnimalMuppet 7 hours ago 0 replies      
There are many comments griping about generics. There are many comments griping about the Go team daring to even ask what problems the lack of generics cause.

But take a look at this article about the design goals of Go: https://talks.golang.org/2012/splash.article Look especially at section 4, "Pain Points". That is what Go is trying to solve. So what the Go team is asking for, I suspect, is concrete ways that the lack of generics hinders Go from solving those problems.

You say those aren't your problems? That's fine. You're free to use Go for your problems, but you aren't their target audience. Feel free to use another language that is more to your liking.

Note well: I'm not on the Go team, and I don't speak for them. This is my impression of what's going on - that there's a disconnect in what they're asking for and what the comments here are supplying.

(And by the way, for those here who say - or imply - that the Go team is ignorant of other languages and techniques, note in section 7 the casual way they say "oh, yeah, this technique has been used since the 1970s, Modula 2 and Ada used it, so don't think we're so brilliant to have come up with this one". These people know their stuff, they know their history, they know more languages than you think they do. They probably know more languages than you do - even pjmlp. Stop assuming they're ignorant of how generics are done in other languages. Seriously. Just stop it.)

25
notjack 6 hours ago 2 replies      
> We did what we always do when there's a problem without a clear solution: we waited. Waiting gives us more time to add experience and understanding of the problem and also more time to find a good solution. In this case, waiting added to our understanding of the significance of the problem, in the form of a thankfully minor outage at Cloudflare. Their Go code timed DNS requests during the end-of-2016 leap second as taking around negative 990 milliseconds, which caused simultaneous panics across their servers, breaking 0.2% of DNS queries at peak.

This is absurd. Waiting to fix a known language design issue until a production outage of a major customer is a failure of process, not an achievement. The fact that the post presents this as a positive aspect of Go's development process is beyond comprehension to me.
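
For context on the failure mode (a schematic sketch, not Cloudflare's actual code): before Go 1.9 added monotonic clock readings, elapsed-time math like the following used the wall clock, so a backwards step between the two readings (for example around a leap second, or an NTP correction) produces a negative duration, and downstream code that assumes durations are non-negative can panic.

  package main

  import (
      "fmt"
      "time"
  )

  func main() {
      start := time.Now()
      // ... perform the DNS lookup ...
      elapsed := time.Since(start) // wall-clock subtraction prior to Go 1.9

      if elapsed < 0 {
          // pre-1.9 this branch is reachable if the clock stepped backwards
          fmt.Println("negative duration:", elapsed)
      }
  }

The offending pattern is tiny, which is part of why it is so easy to write.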

26
jgrahamc 8 hours ago 0 replies      
I didn't expect to get namechecked in that.

Shows the value of constantly being transparent and supporting open source projects.

27
johansch 7 hours ago 5 replies      
My main wish:

Please don't refuse to compile just because there are unused imports.

Please do warn, and loudly say it's NON-CONFORMANT or whatever will embarrass me enough to keep me from sharing my piece of Go code with someone else.. but.. can I please just run my code, in private, when experimenting?

28
nickbauman 7 hours ago 1 reply      
Will there be gotos in Go 2? Asking for a friend.
29
EngineerBetter 6 hours ago 1 reply      
I suspect the authors of Golang got drunk, and challenged each other to see how many times they could get people to type 'if err != nil' in the next decade.
30
JepZ 8 hours ago 2 replies      
I know there is a lot of discussion about generics, but I am not sure if that is missing the point. I mean, 'generics' sounds like a complex concept from Java development, and I am uncertain if that's really what we need in Go.

From my experience I think we should talk about container formats, because they make up 80% of what we would like to have generics for. Actually, Go feels as if it has only two container data structures: slices and maps. And both feel as if they are pretty hard-coded into the language.

Yes, I am sure there are more container formats and it is possible to build your own, but I think it is not easy enough.

9
Reverse-engineering the Starbucks ordering API tendigi.com
614 points by nickplee  1 day ago   142 comments top 27
1
dsacco 1 day ago 10 replies      
Solid writeup. From someone who does/did a lot of this professionally:

1. Android typically is easier for this kind of work (you don't even need a rooted/jailbroken device, and it's all Java/smali),

2. That said, instead of installing an entire framework like Xposed that hooks the process to bypass certificate pinning, you can usually just decompile the APK and nop out all the function calls in the smali related to checking if the certificate is correct, then recompile/resign it for your device (again, easier on Android than iOS),

3. Request signing is increasingly implemented on APIs with any sort of business value, but you can almost always bypass it within an hour by searching through the application for functions related to things like "HMAC", figuring out exactly which request inputs are put into the algorithm in which order, and seeing where/how the secret key is stored (or loaded, as it were); a sketch of this kind of signing follows this list,

4. There is no true way to protect an API on a mobile app. You can only make it more or less difficult to abuse. The best you can do is a frequently rotated secret key stored in shared libraries, with weird parameters attached to the signing algorithm. To make up for this, savvy companies typically reduce the cover time required (i.e. they change the secret key very frequently by updating the app weekly or biweekly) or use a secret key with several parts generated from components in .so files, which are significantly more tedious to reverse.
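
Here's roughly what the signing in point 3 looks like from the client side (a made-up sketch: the header names, field ordering, and secret are all hypothetical, not Starbucks' actual scheme):

  package main

  import (
      "crypto/hmac"
      "crypto/sha256"
      "encoding/hex"
      "fmt"
      "time"
  )

  // signRequest concatenates a few request inputs in a fixed order and runs
  // them through HMAC-SHA256 with a secret that ships inside the app binary.
  func signRequest(method, path, body, secret string) (signature, timestamp string) {
      timestamp = fmt.Sprintf("%d", time.Now().Unix())
      mac := hmac.New(sha256.New, []byte(secret))
      mac.Write([]byte(method + "\n" + path + "\n" + timestamp + "\n" + body))
      return hex.EncodeToString(mac.Sum(nil)), timestamp
  }

  func main() {
      sig, ts := signRequest("POST", "/v1/orders", `{"item":"latte"}`, "embedded-app-secret")
      fmt.Println("X-Timestamp:", ts)
      fmt.Println("X-Signature:", sig)
  }

Because the secret and the concatenation order both live in the shipped binary, "reversing" the scheme is mostly a matter of finding them, which is why rotating the secret frequently is the usual mitigation.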

2
joombaga 22 hours ago 7 replies      
I did this with the Papa John's webapp a while back (which was waaaay simpler btw). They limited duplicate toppings to (I think) 3 of the same, but "duplicate_item" was just a numerical property on the (e.g.) "bacon" object. Turns out you could just add multiple "bacon" members to the toppings array to exceed the limit, and they didn't charge for duplicates, so I ordered a pizza with like 50 bacons.

It definitely didn't have 50x worth of bacon, but it did have more than 3x, maybe 5x-6x. The receipt was hilariously long though.

3
rkunnamp 19 hours ago 2 replies      
The Domino's ordering app in India had a terrible flow a while back. Once the products were added to the cart and you proceeded to checkout, the flow was as follows:

1. First a payment collection flow is initiated from the browser (asking for credit card details, PIN, etc.)

2. The payment confirmation comes to the browser

3. The browser then places the order(the pizza information) to another api end point, marking the payment part as 'Paid'

The thing is, one could add as many pizzas to the cart in a different tab while the original tab proceeded to payment. The end result is that you paid only for the pizzas that were initially in the cart, but could get any number of pizzas. For literally Rs100 one could order thousands of rupees' worth of pizza.

I discovered it accidentally and did report to them. Neither did they acknowledge nor did they send me a free pizza :(

They later fixed this by not allowing the cart to load in a different tab. But there is a high chance that there could be another hack even now. Since I had vowed not to eat junk food anymore, there was not much incentive for me to spend any more time on it.

4
heywire 11 hours ago 0 replies      
I've always wondered about the legality of doing things like this. Have there been cases where someone was taken to court (in the US) for reversing an API from a mobile app or website? Assuming no malicious action or intent, of course. Could the CFAA be used in this case, even if the intent was just to understand the API for personal use?

With so many IoT devices out there relying on 3rd party web services that may or may not be around a year from now, I expect that the right to inspect and understand these APIs will become more and more important. Not to mention wanting the ability to build interactions between devices where the manufacturer may not have interest (IFTTT, etc).

5
rconti 22 hours ago 6 replies      
Excellent writeup.

I have to take issue with the "Starbucks app is great" line, though. I think I've had more problems with it (on iOS) than any other app. It's the only app that (for a period of many months if not a year) was regularly unable to find my location. Even if I opened up Maps or some other location enabled app and found my location before launching Starbucks, it would just bomb out.

Overall the app seems to have tons of 'issues'. It's been better the past few months though. And it beats the hell out of standing in a 10 minute line. I honestly wouldn't stop at Starbucks anymore if it wasn't for the app.

6
zitterbewegung 1 day ago 3 replies      
If an open API existed, yes, there would be more integrations. Of course you would have to hire engineers to perform upkeep. Eventually, if the ordering API isn't profitable, you get a bunch of sunk costs and have to reassign people. It's not just "make this open" and POOF. Also, your access could be revoked for unofficially using the API, and/or they could just change it at any time.
7
spike021 20 hours ago 1 reply      
Maybe a bit off-topic, but APIs always make me wonder a bit when they can be reverse-engineered or.. for lack of a better word, misused.

I know of one website (site A) that sells items for sports and uses an API of a sports website (site B) to provide current statistics and other information.

Thing is, that sports website's API is now deprecated for public use, there's no way to request a token, and from what I can tell it's only to be used for commercial purposes/paid for by companies.

But, I can easily find the API token being used on site A, dig through the private/deprecated docs for the API of site B, and use any of their endpoints and data for pretty much whatever I want.

At least, this was the case roughly 4-6 months ago and I haven't looked into it since; perhaps they've changed it since then.

But I wonder how this works. Wouldn't it be a misuse of their API and something they wouldn't want allowed? Usually sports statistics APIs are fairly expensive, and the fact that some random person like me can get access easily for free seems unfair to site B, especially when they don't want normal people using their API anymore.

8
supergeek133 12 hours ago 0 replies      
The way to solve this is just for companies to give out their API in a public manner. You'll almost never be able to secure it from scrapers.

We experienced this at Honeywell. When I first started here, we were blocking users that scraped the app instead of giving them public access and teaching them how to use it correctly.

9
coworkerblues 21 hours ago 3 replies      
I want to write a similar write-up for a company which basically does everything over HTTP, with their own half-baked AES key hardcoded in the app for sending credit card info, and whose confirmation check (for SMS) is stupid and can be bypassed.

The problem is that their site's TOS forbids reverse engineering, and I am afraid their lawyers will go after me instead of fixing the security issues (even if I just contact them). Any tips for me?

10
bpicolo 1 day ago 4 replies      
Opening up an API like this is ripe for abuse, so taking care makes sense. Bad actors translate directly to lost money.

A real method of securing APIs would be a godsend, but in current tech it's just not possible. This is the one place where mediocre security-by-obscurity is your only choice =(

11
pilif 20 hours ago 2 replies      
I wonder why they went to such great lengths to prevent unauthorized clients (which is also fundamentally impossible; all you can do is make it harder for attackers). What would be so bad about third-party apps ordering coffee?

Generally, it's a good idea to be protective, but between cert pinning, that encrypted device fingerprint, and the time-based signature, this adds a lot of points of possible failure to an app you want to be as available as possible to as many people as possible.

What information does this API have access to that is so precious as to warrant all of this?

12
nailer 14 hours ago 0 replies      
From: https://github.com/tendigi/starbucks

> Once you've obtained your client_id, client_secret

I'd like to use this module. Question for author: what's the fastest way to get a CLIENT_ID and CLIENT_SECRET?

13
chx 1 day ago 0 replies      
Xposed is the reason I am never getting anything but an unlockable Android phone.
14
leesalminen 1 day ago 2 replies      
If there were an IoT button in my kitchen that could place my usual morning order, well, I'm not sure. I may or may not shout out in joy.
15
shortstuffsushi 22 hours ago 0 replies      
This is super interesting, especially the part about trying to reverse engineer their "security" measure. I did something like this a few months back for Paylocity, a time logging system that has a terrible web interface. After trying to talk with their sales people about them potentially offering an API, I was told "no API, just this mobile app." Turns out the mobile app is just an Ionic app with all of the resources (all, including tests and even test environment logins and stuff) baked in. Super easy to grab their API out of that (literally called /mobileapi/), but then the trouble was figuring out how they generated their security token header, which was also a little dance of timestamps and faked device info.

The best part was when I contacted them afterwards and warned them about the extra pieces of info they had baked in, their response was basically "yeah, we're aware that there's more in there than there should be, but it's not a priority." Oh well, they just have all of my personal information.

16
masthead 17 hours ago 0 replies      
This was already done before, in 2016

https://www.ryanpickren.com/starbucks-button

17
IshKebab 17 hours ago 0 replies      
Probably would have been easier to decompile the Android app than the iOS one. Even if they use proguard the code is much easier to read.
18
King-Aaron 21 hours ago 0 replies      
Good article, though one main take-home for me is some killer new Spotify playlists to run at work :D
19
kyle-rb 2 hours ago 0 replies      
HTCPCP implementation when?
20
Lxr 22 hours ago 1 reply      
What's the motivation for Starbucks to make it this difficult to reverse?
21
catshirt 23 hours ago 1 reply      
eyyyyy tendigi. hi Jeff! looks like you're doing some cool stuff.

- your old downstairs guy

22
jorblumesea 21 hours ago 0 replies      
Interesting, what's with the hardcoded new relic ID? Writeup didn't mention it, I assume that's analytics/monitoring related? Does it need to be set?
23
amelius 14 hours ago 0 replies      
Can somebody explain what the use/fun of this is, if Starbucks can push an update that invalidates the approach anytime they want?
24
turdnagel 1 day ago 4 replies      
Is there a good guide out there on reverse engineering mobile apps for iOS & Android?
25
wpovell 6 hours ago 0 replies      
Is there a good place to learn how to do this sort of reverse engineering?
26
githubcomyottu 15 hours ago 0 replies      
Starbucks is nice for dessert, I wish they sold decent coffee though.
27
dabockster 22 hours ago 0 replies      
>issue tracking turned off

Well, now how am I supposed to tell him that Tully's is better?

10
Students Are Better Off Without a Laptop in the Classroom scientificamerican.com
421 points by thearn4  2 days ago   240 comments top 58
1
zeta0134 2 days ago 5 replies      
Oh, okay, I thought the study was going to be on the benefits of attempting to use the laptop itself for classroom purposes, not for social media distractions. This would be more accurately titled, "Students Are Better Off Without Distractions in the Classroom." Though I suppose, it wouldn't make a very catchy headline.

I found my laptop to be very beneficial in my classroom learning during college, but only when I made it so. My secret was to avoid even connecting to the internet. I opened up a word processor, focused my eyes on the professor's slides or visual aids, and typed everything I saw, adding notes and annotations based on the professor's lecture.

This had the opposite effect of what this article describes: by focusing my distracted efforts on formatting the document and making my notes more coherent, I kept myself focused, and could much more easily engage with the class. Something about the menial task of taking the notes (which I found I rarely needed to review) prevented me from losing focus and wandering off to perform some unrelated activity.

I realize my experience is anecdotal, but then again, isn't everyone's? I think each student should evaluate their own style of learning, and decide how to best use the tools available to them. If the laptop is a distraction? Remove it! Goodness though, you're paying several hundred (/thousand) dollars per credit hour, best try to do everything you can to make that investment pay off.

2
makecheck 2 days ago 9 replies      
If students aren't engaged, they aren't going to become star pupils once you take away their distractions. Perhaps kids attend more lectures than before, knowing that they can always listen in while futzing with other things (and otherwise, they may skip some of the classes entirely).

The lecture format is what needs changing. You need a reason to go to class, and there was nothing worse than a professor showing slides from the pages of his own book (say) or droning through anything that could be Googled and read in less time. If there isn't some live demonstration, lecture-only material, regular quizzes, or some other hook, you can't expect students to fully engage.

3
ourmandave 2 days ago 5 replies      
This reminds me of the running gag in some college movie where the first day all the students show up.

The next cut some students come to class, put a recorder on their desk and leave, then pick it up later.

Eventually there's a scene of the professor lecturing to a bunch of empty desks with just recorders.

And the final scene there's the professor's tape player playing to the student's recorders.

4
imgabe 2 days ago 4 replies      
I went to college just as laptops were starting to become ubiquitous, but I never saw the point of them in class. I still think they're pretty useless for math, engineering, and science classes where you need to draw symbols and diagrams that you can't easily type. Even for topics where you can write prose notes, I always found it more helpful to be able to arrange them spatially in a way that made sense rather than the limited order of a text editor or word processor.
5
njarboe 2 days ago 1 reply      
This is a summary of an article titled "Logged In and Zoned Out: How Laptop Internet Use Relates to Classroom Learning" published in Psychological Science in 2017; The DOI is 10.1177/0956797616677314 if you want to check out the details.

Abstract: Laptop computers are widely prevalent in university classrooms. Although laptops are a valuable tool, they offer access to a distracting temptation: the Internet. In the study reported here, we assessed the relationship between classroom performance and actual Internet usage for academic and nonacademic purposes. Students who were enrolled in an introductory psychology course logged into a proxy server that monitored their online activity during class. Past research relied on self-report, but the current methodology objectively measured time, frequency, and browsing history of participants' Internet usage. In addition, we assessed whether intelligence, motivation, and interest in course material could account for the relationship between Internet use and performance. Our results showed that nonacademic Internet use was common among students who brought laptops to class and was inversely related to class performance. This relationship was upheld after we accounted for motivation, interest, and intelligence. Class-related Internet use was not associated with a benefit to classroom performance.

6
stevemk14ebr 2 days ago 2 replies      
I think this is a highly personal topic. As a student myself I find a laptop in class is very nice: I can type my notes faster and organize them better. Most of my professors' lectures are scatterbrained and I frequently have to go back to a previous section and annotate or insert new sections. With a computer I just go back and type; with pen and paper I have to scribble, or write in the margins. Of course computers can be distractions, but that is the student's responsibility; let natural selection take its course and stop hindering my ability to learn how I do best (I am a CS major, so computers are >= paper to me). If you cannot do your work with a computer, then don't bring one yourself; don't ban them for everyone.
7
baron816 2 days ago 0 replies      
Why are lectures still being conducted in the classroom? Students shouldn't just be sitting there copying what the teacher writes on the board anyway. They should be having discussions, working together or independently on practice problems, teaching each other the material, or just doing anything that's actually engaging. Lecturing should be done at home via YouTube.
8
zengid 2 days ago 0 replies      
Please excuse me for relating an experience, but it's relevant. To get into my IT grad program I had to take a few undergrad courses (my degree is in music, and I didn't have all of the pre-reqs). One course was Intro to Computer Science, which unfortunately had to be taught in the computer lab used for the programming courses. It was sad to see how undisciplined the students were. Barely anyone paid attention to the lectures as they googled the most random shit (one kid spent a whole lecture searching through images of vegetables). The final exam was open-book. I feel a little guilty, but I enjoyed seeing most of the students nervously flip through the chapters the whole time, while it took me 25 minutes to finish (the questions were nearly identical to those from previous exams).
9
shahbaby 2 days ago 1 reply      
"Thus, there seems to be little upside to laptop use in class, while there is clearly a downside."

Thanks to BS articles like this that try to overgeneralize their results, I was unsure if I "needed" a laptop when returning to school.

Got a Surface Book and here's what I've experienced over the last 2 semesters:

- Going paperless, I'm more organized than ever. I just need to make sure I bring my Surface with me wherever I go and I'm good.

- Record lectures, tutorials, office hours, etc. Although I still take notes to keep myself focused, I can go back and review things with 100% accuracy thanks to this.

- Being in 2 places at once, i.e.: make last-minute changes before submitting an assignment for class A, or attend the review lecture to prepare for next week's quiz in class B? I can leave the Surface in class B to record the lecture while I finish up the assignment for class A.

If you can't control yourself from browsing the internet during a lecture then the problem is not with your laptop...

10
Kenji 2 days ago 0 replies      
If you keep your laptop open during class, you're not just distracting yourself, you're distracting everyone behind you (that's how human attention works: if you see a bright display with moving things, your attention is drawn towards it), and that's not right. That's why at my uni there was an unspoken (de facto) policy that if you keep your laptop open during lectures, you sit in the back rows, especially if you play games or do stuff like that. It worked great; I was always in the front row with pen & paper.

However, a laptop is very useful to get work done during breaks or labs when you're actually supposed to use it.

11
rdtsc 2 days ago 2 replies      
I had a laptop and left it home most of the time, and just stuck with taking notes with a pen and sitting up front.

I took lots of notes. Some people claim it's pointless and distracts from learning, but for me the act of taking notes is what helped solidify the concepts better. Heck, due to my horrible handwriting I couldn't even read some of the notes later. But it was still worth it. Typing them out just wasn't the same.

12
alkonaut 2 days ago 0 replies      
This is the same as laptops not being allowed in meetings. A company where it's common for meeting participants to "take notes" on a laptop is dysfunctional. Laptops need to be banned in meetings (and smartphones in meetings and lectures).

Also, re: other comments: a video lecture is to a physical lecture what a conference call is to a proper meeting. A professor rambling for 3 hours is still miles better than watching the same thing on YouTube. The same holds for TV versus watching a film on a movie screen.

Zero distractions and complete immersion. Maybe VR will allow it some day.

13
brightball 2 days ago 1 reply      
Shocker. I remember being part of Clemson's laptop pilot program in 1998. If you were ever presenting you basically had to ask everyone to close their laptops or their eyes would never even look up.
14
tsumnia 2 days ago 1 reply      
I think it's a double-edged sword; it's not just paper > laptop or laptop > paper. As many people have already stated, it's about engagement. Since coming back for my PhD, I've subscribed to the pencil/paper approach as a simple show of respect to the instructor. Despite what we think, professors are human and flawed, and having been in their shoes, it can be disheartening to not be able to feed off your audience.

That being said, you can't control them; however, I like to look at different performance styles. What makes someone binge-watch Netflix episodes but want to nod off during a lecture? Sure, one has less cognitive load, but replace the Netflix binge with anything. People are willing to engage, as long as the medium is engaging (this doesn't mean easy or funny, simply engaging).

[Purely anecdotal, opinion-based discussion] This is one of the reasons I think flipping the classroom does work; they can't tune out. But if it's purely them doing work, what's your purpose there? To babysit? There needs to be a happy medium between work and lecture.

I like to look at the class time in an episodic structure. Pick a show and you'll notice there's a pattern to how the shows work. By maintaining a consistency in the classroom, the students know what to expect.

To tie it back to the article, the laptop is a great tool to use when you need them to do something on the computer. However, they should be looking at you, and you should be drawing their attention. Otherwise, you're just reading your PowerPoint slides.

15
wccrawford 2 days ago 3 replies      
I'd be more impressed if they also did the same study with notepads and doodles and daydreams, and compared the numbers.

I have a feeling that people who aren't paying attention weren't going to anyhow.

However, I'd also guess that at least some people use the computer to look up additional information instead of stopping the class and asking, which helps everyone involved.

16
emptybits 2 days ago 0 replies      
It makes sense that during a lecture, simple transcription (associated with typing) yields worse results than cognition (associated with writing). So pardon my ignorance (long out of the formal student loop):

Are students taught how to take notes effectively (with laptops) early in their academic lives? Before we throw laptops out of classrooms, could we be improving the situation by putting students through a "How To Take Notes" course, with emphasis on effective laptopping?

It's akin to "how to listen to music" and "how to read a book" courses -- much to be gained IMO.

17
LaikaF 2 days ago 0 replies      
My high school did the one-laptop loan-out thing (and later got sued for it) and I can tell you it was useless as a learning tool, at least in the way intended. I learned quite a bit, mainly about navigating around the blocks and rules they put in place. In high school my friends and I ran our own image board, learned about reverse proxying via meebo repeater, hosted our own domains to dodge filtering, and much much more. As far as what I used them for in class: if I needed to take notes I was there with notebook and pen. If I didn't, I used the laptop to do homework for other classes while in class. I had a reputation among my teachers for handing in assignments the day they were assigned.

In college I slid into the pattern they saw here. I started spending more time on social media, paying less attention in class, slacking on my assignments. As my burnout increased, the actual class times became less a thing I learned from and more just something I was required to sit in. One of my college classes literally just required me to show up. It was one of the few electives in the college for a large university. The students were frustrated they had to be there, and the teacher was tired of teaching to students who just didn't care.

Overall I left college burnt out and pissed at the whole experience. I went in wanting to learn it just didn't work out.

18
Fomite 2 days ago 1 reply      
Just personally, for me it was often a choice between "Laptop-based Distractions" or "Fall Asleep in Morning Lecture".

The former was definitely the superior of the two options.

19
free_everybody 2 days ago 0 replies      
I find that having my laptop out is great for my learning, even during lectures. If something's not clear or I want more context, I can quickly look up some information without interrupting the teacher. Also, paper notes don't travel well. If everything is on my laptop and backed up online, I know that if I have my laptop, I can study anything I want. Even if I don't have my laptop, I could use another computer to access my notes and documents. This is a HUGE benefit.
20
dalbasal 1 day ago 0 replies      
I think there is a mentality shift that may come with digitizing learning which might help here.

The discussion on a topic like this can go in two ways. (1) Talk about how a laptop can help if students use it to do xyz and avoid cba. It's up to the student. Bring a horse to water... (2) Compare outcomes, statistically or quasi-statistically. I.e., if laptops are banned we predict an N% increase in Z, where Z is (hopefully) a good proxy for learning or enjoyment or something else we want. I.e., think about improving a college course the same way we think about optimizing a dating site.

On a MOOC, the second mentality will tend to dominate. Both have downsides, especially when applied blindly (which tends to happen). In any case, new thinking tends to help.

21
jon889 1 day ago 0 replies      
I have had lectures where I have had a laptop/iPad/phone and ones where I've not had any. I did get distracted, but I found that if I didn't have, say, Twitter, I'd get distracted for longer. With Twitter I'd catch up on my news feed and then a few minutes later be back to concentrating. Without it I'd end up daydreaming and losing focus for 10-20 minutes.

The biggest problem isn't distractions, or computers and social media. It's that hour-long lectures are an awful method of transferring information. In my first year we had small groups of ~8 people and a student from 3rd/4th year, and we'd go through problems from the maths and programming lectures. I learnt much more in these.

Honestly, learning would be much improved if lectures were condensed into half-hour YouTube videos you can pause, speed up and rewind. Then have smaller groups in which you can interact with the lecturers/assistants.

22
BigChiefSmokem 2 days ago 0 replies      
I'll give you no laptops in the class if you give me no standardized testing and only four 15-20 minute lectures per day and let the kids work on projects the rest of the time as a way to prove their learning and experiences in a more tangible way.

Trying to fix the problem by applying only patches, as we technically inclined folks would say, always leads to horribly unreliable and broken systems.

23
kyle-rb 2 days ago 0 replies      
>students spent less than 5 minutes on average using the internet for class-related purposes (e.g., accessing the syllabus, reviewing course-related slides or supplemental materials, searching for content related to the lecture)

I wonder if that could be skewed, because it only takes one request to pull up a course syllabus, but if I have Facebook Messenger open in another tab, it could be receiving updates periodically, leading to more time recorded in this experiment.

24
TazeTSchnitzel 2 days ago 0 replies      
> In contrast with their heavy nonacademic internet use, students spent less than 5 minutes on average using the internet for class-related purposes

This is a potential methodological flaw. It takes me 5 minutes to log onto my university's VLE and download the course materials. I then read them offline. Likewise, taking notes in class happens offline.

Internet use does not reflect computer use.

25
fatso784 2 days ago 0 replies      
There's another study showing that students around you with laptops harm your ability to concentrate, even if you're not on a laptop yourself. This is in my opinion a stronger argument against laptops, because it harms those not privileged enough to have a laptop. (not enough time to find study but you can find it if you search!)
26
brodock 1 day ago 1 reply      
Any research that treats students as a homogeneous group is flawed. People fall (more or less) into one of the 7 different types of learning styles https://www.learning-styles-online.com/overview/.

So making claims like "doing X works better than Y" is meaningless without pointing to a specific learning style.

That's why you hear people defending writing on paper, while others prefer just hearing the lectures, or others perform better while discussing with peers (and some hate all of the other interactions and perform better by isolating themselves and studying on their own... which is probably the group that will benefit the most from having a laptop available).

27
homie 2 days ago 0 replies      
Instructors are also better off without computers in the classroom. Lecture has been reduced to staring at a projector while each and every student's eyes roll to the back of their skull.
28
Zpalmtree 1 day ago 1 reply      
I like having a laptop at uni just because I can program when the lectures are boring. I find the material is too easy in UK universities, in CS at least (dunno about other courses or countries), but the amount of effort you need to get good marks along with the amount you're paying is a bit silly, and mostly you'll learn more by yourself...

That said, if you're in a programming class, having a laptop to follow along and try out the concepts is really handy. When we were in a C++/ASM class, seeing the different ASM that GCC/G++ and Microsoft's C++ compiler spat out was quite interesting.

29
vblord 2 days ago 0 replies      
During indoor recess at my kids' school, kids don't eat their lunch and just throw it away because of the Chromebooks. There are only a few computers and they are first come, first served. Kids would rather go without lunch to be able to play on the internet for 20 minutes.
30
nerpderp83 2 days ago 1 reply      
Paying attention requires work; we need to be purposeful about using tools that are also distractions.
31
zokier 2 days ago 1 reply      
I love how any education-related topic brings the armchair pedagogues out of the woodwork. Of course a big aspect there is that everyone has encountered some amount of education, and in particular both courses they enjoyed and disliked. And there is of course the "think of the children" aspect.

To avoid making a purely meta comment: in my opinion the ship has already sailed; we are going to have computers in classrooms for better or worse. So the big question is how we can make the best use of that situation.

32
erikb 2 days ago 0 replies      
I'd argue that students are better off without a classroom as long as they have a laptop (and internet, but that is often also better at home/cafe than in the classroom).
33
thisrod 2 days ago 0 replies      
> First, participants spent almost 40 minutes out of every 100-minute class period using the internet for nonacademic purposes

I think that I'd be one of them; in the absence of a laptop, I'd spend that time daydreaming. How many people can really concentrate through a 100 minute nonstop lecture about differential geometry or the decline of the Majapahit empire?

34
_e 1 day ago 0 replies      
Politicians are also better off without a laptop during legislative sessions [0].

[0] http://www.snopes.com/photos/politics/solitaire.asp

35
kgilpin 2 days ago 0 replies      
It sounds like what students need are better teachers. I haven't been to school in a while but I had plenty of classes that were more interesting than surfing YouTube; and some that weren't.

The same is true for meetings at work. In a good session, people are using their laptops to look up contributing information. In a bad one... well... you know.

36
zitterbewegung 2 days ago 0 replies      
When I was in college I would take notes in a notebook with pen and paper. I audited some classes with my laptop using LaTeX, but most of the time I used a notebook. Also, sometimes I would just go to class without a notebook and absorb the information that way. It also helped that I didn't have a smartphone with cellular data for half of the time I was in school.
37
polote 2 days ago 0 replies      
Well, it depends on what you do in the classroom. When class is mandatory but you are not able to learn this way (by listening to a teacher), having a laptop lets you do other things and use your time efficiently, like doing some administrative work, sending email, coding...

Some students are of course better with a laptop in the classroom

38
jessepage1989 2 days ago 0 replies      
I find taking paper notes and then reorganizing on the computer works best. The repetition helps memorization.
39
mark_l_watson 1 day ago 0 replies      
In what universe would it be a good idea for students to use laptops in class?

Use of digital devices should be limited because the very use of digital devices separates us from what is going on around us. Students should listen and take notes (in a notebook) as necessary.

40
wh313 2 days ago 0 replies      
Could the intermittent requests to servers made by running apps, say Facebook Messenger or WhatsApp, be tracked as social media use? Because they all use HTTPS, I don't see how the researchers distinguished between idle traffic and sending a message.
41
qguv 1 day ago 0 replies      
Internet access, especially to Wikipedia, did wonders for me whenever the lecture turned to something I was already familiar with. That alone kept me from getting distracted and frustrated as I would in classes whose professors prohibited laptop use.
42
marlokk 2 days ago 0 replies      
Students are better off with instructors who don't bore students into bringing out their laptops.
43
Radle 1 day ago 0 replies      
If students think the class is boring enough, they'll watch YouTube; whether on the laptop or on their mobile is not really important.
44
dorianm 23 hours ago 0 replies      
Pen and paper are the best. Also, Chromebooks are pretty cool.
45
jonbarker 1 day ago 0 replies      
Students need a GUI-less computer like a minimalist linux distro.
46
Shinchy 1 day ago 0 replies      
I've always found the idea of taking a laptop to a lecture pretty rude. I'm there to give the person teaching my full attention, not stare at a laptop screen. So personally I never use them in any type of lecturing/teaching environment, simply as a mark of respect.
47
Glyptodon 2 days ago 2 replies      
I feel like the conclusion is a bit off base: that students lack the self-control to restrict the use of laptops to class-related activities is somehow a sign that the problem is the laptop and not the students? I think it's very possible that younger generations have big issues with self-control and instant gratification. But I think it's wrong to conclude that laptops are the party at fault.
48
alistproducer2 2 days ago 0 replies      
"Duh" - anyone who's ever been in a class with a laptop.
49
exabrial 2 days ago 0 replies      
Students are best off with the fewest distractions.
50
rokhayakebe 2 days ago 1 reply      
We really need to begin ditching most studies. We now have the ability to collect vast amounts of data and use that to draw conclusions based on millions of endpoints, not just 10, 100 or 1000 pieces of information.
51
partycoder 2 days ago 1 reply      
I think VR will be the future of education.
52
ChiliDogSwirl 2 days ago 1 reply      
Maybe it would be helpful if our operating systems were optimised for working and learning rather than for selling us crap and mining our data.
53
catnaroek 1 day ago 0 replies      
This is why I like to program in front of a whiteboard rather than in front of my computer: to be more productive.
54
aurelianito 1 day ago 0 replies      
Even better, just remove the classroom surrounding the laptop. Now we can learn anything anywhere. Having to go take a class where a professor recites something is ridiculous.
55
Bearwithme 2 days ago 0 replies      
They should try this study again, but with laptops heavily locked down. Disable just about everything that isn't productive, and add a strict web filter. I am willing to bet the results would be much better for the kids with laptops. Of course, if you let them have free rein they are going to be more interested in entertainment than productivity.
56
microcolonel 2 days ago 1 reply      
57
bitJericho 2 days ago 2 replies      
The schools are so messed up in the US. Best to just educate children yourself as best you can. As for college kids, best to travel abroad.
58
FussyZeus 2 days ago 0 replies      
Disengaged and uninterested students will find a distraction; yes, perhaps a laptop makes it easier, but my education in distraction-seeking during middle school, well before laptops were anywhere near schools, shows that the lack of a computer in front of me was no obstacle to locating something more interesting to put my attention to.

The real solution is to engage students so they don't feel the urge to get distracted in the first place. Then you could give them completely unfiltered Internet and they would still be learning (perhaps even faster, using additional resources). You can't substitute for an urge to learn; even if you strap them to their chairs, pin their eyeballs open, and strap down their individual fingers, it won't do anything. It just makes school less interesting, less fun, and less appealing, which by extension makes learning less fun, less appealing, and less interesting.

11
SFO near miss might have triggered aviation disaster mercurynews.com
477 points by milesf  2 days ago   412 comments top 37
1
ddeck 2 days ago 4 replies      
Attempts to take off from or land on taxiways are alarmingly common, including those by Harrison Ford:

 - Harrison Ford won't face disciplinary action for landing on a taxiway at John Wayne Airport [1]
 - Serious incident: Finnair A340 attempts takeoff from Hong Kong taxiway [2]
 - HK Airlines 737 tries to take off from taxiway [3]
 - Passenger plane lands on the TAXIWAY instead of runway in fourth incident of its kind at Seattle airport [4]
[1] http://www.latimes.com/local/lanow/la-me-ln-ford-taxiway-agr...

[2] https://news.aviation-safety.net/2010/12/03/serious-incident...

[3] https://www.flightglobal.com/news/articles/hk-airlines-tries...

[4] http://www.dailymail.co.uk/travel/travel_news/article-337864...

2
charlietran 2 days ago 3 replies      
There's an mp3 of the radio chatter here:

https://forums.liveatc.net/atcaviation-audio-clips/7-july-ks...

> Audio from the air traffic controller communication archived by a user on LiveATC.net and reviewed by this newspaper organization showed how the confused Air Canada pilot asks if he's clear to land on 28R because he sees lights on the runway.

> "There's no one on 28R but you," the air controller responds.

> An unidentified voice, presumably another pilot, then chimes in: "Where's this guy going? He's on the taxiway."

> The air controller quickly tells the Air Canada pilot to go around, telling the pilot "it looks like you were lined up for Charlie (Taxiway C) there."

> A United Airlines pilot radios in: "United One, Air Canada flew directly over us."

> "Yeah, I saw that, guys," the control tower responds.

3
Animats 2 days ago 1 reply      
Here's a night approach on 28R at SFO.[1] Same approach during the day.[2] The taxiway is on the right. It's a straight-in approach over the bay. The runway, like all runways at major airports worldwide, has the standardized lighting that makes it very distinctive at night, including the long line of lights out into the bay. This was in clear conditions. WTF? Looking forward to reading the investigation results.

The planes on the taxiway are facing incoming aircraft as they wait for the turn onto the runway and takeoff. So they saw the Air Canada plane coming right at them. That must have been scary.

[1] https://www.youtube.com/watch?v=rNMtMYUGjnQ

[2] https://www.youtube.com/watch?v=mv7_lzFKCSM

4
watson 2 days ago 5 replies      
English is not my native language, but shouldn't the headline have read "SFO near miss would have triggered aviation disaster"? "Might" seems to indicate that something else happened afterwards as a possible result of the near miss
5
tmsh 2 days ago 2 replies      
The moral of this story for me is: be that "another pilot." To be clear, "another pilot" of another aircraft. Not as clear as it could be just like the title of this article is ambiguous.

The moral of this story for me is: call out immediately if you see something off. He's the real hero. Even if the ATC controller immediately saw the plane being misaligned at the same time - that feedback confirming another set of eyes on something that is off couldn't have hurt. All 1000 people on the ground needed that feedback. Always speak up in situations like this.

6
WalterBright 2 days ago 4 replies      
In the early 1960s, a pilot mistook a WW2 airfield for Heathrow, and landed his 707 on it, barely stopping before the end of the runway.

The runway being too short to lift a 707, mechanics stripped everything out of it they could to reduce the weight - seats, interiors, etc. They put barely enough gas in it to hop over to Heathrow, and managed to get it there safely.

The pilot who landed there was cashiered.

7
mate_soos 2 days ago 3 replies      
Before crying pilot error, we must all read Sidney Dekker's A Field Guide to Understanding "Human Error" (and fully appreciate why he uses those quotes). Don't immediately assign blame to the sharp end. Take a look at the blunt one first. Most likely not a pilot error. Assigning blame is a very human need, but assigning it to the most visible and accessible part is almost always wrong.
8
cperciva 2 days ago 1 reply      
Can we have "might have triggered" changed to "could have triggered" in the title?
9
phkahler 2 days ago 0 replies      
A different kind of error... I was returning from Las Vegas in the middle of the day and the tower cleared us for departure on 9 and another plane on 27. We had taxied out and then the pilot pulled over, turned around and waited for the other plane to depart. He told us what had happened; there was a bit of frustration in his voice. Imagine pulling up and seeing another plane sitting at the opposite end of the runway ready to go. (It may not have been 9 and 27; I don't know which pair it was.) Earlier, waiting in the terminal, I had seen a different plane go around, but didn't know why. Apparently there was a noob in the tower that day. This is why you look out the window and communicate.
10
lisper 2 days ago 0 replies      
Possible explanation for why this happened: it was night, and the parallel runway 28L was closed and therefore unlit. The pilot may have mistaken 28R for 28L and hence the taxiway for 28R. This comes nowhere near excusing this mistake (there is no excuse for a screwup of this magnitude) but it makes it a little more understandable.
11
mikeash 2 days ago 0 replies      
I wonder just how likely this was to end in disaster. It feels overstated. The pilot in question seemed to think something was wrong, he just hadn't figured it out yet. I imagine he would have seen the aircraft on the taxiway in time to go around on his own if he hadn't been warned off.

I'm having trouble figuring out the timeline. The recording in the article makes it sound like this all happened in a matter of seconds, but it's edited down to the highlights so that's misleading. LiveATC has an archived recording of the event (http://archive-server.liveatc.net/ksfo/KSFO-Twr2-Jul-08-2017..., relevant part starts at about 14:45) but even those appear to have silent parts edited out. (That recording covers a 30 minute period but is only about 18 minutes long.) In the archived recording, about 40 seconds elapse between the plane being told to go around and the "flew directly over us" call, but I don't know how much silence was edited out in between.

Certainly this shouldn't have happened, but I wonder just how bad it actually was.

12
blhack 2 days ago 3 replies      
People "could" run their cars off of bridges every day, but they don't because they can see, and because roads have signs warning them of curves.

This sounds like a story of how well the aviation system works more than anything. The pilot is in constant communication with the tower. The system worked as intended here and he went around.

It seems like a non story.

13
vermontdevil 2 days ago 0 replies      
Found a cockpit video of a landing approach to 28R to give you an idea (daylight, good weather etc)

https://www.youtube.com/watch?v=I0Y6GTI9pg4

14
cmurf 2 days ago 0 replies      
Near as I can tell, HIRL could not have been on, they were not following another aircraft to land, and the runway and taxiway lighting must've been sufficiently low that the taxi lights (a low-intensity version of a landing light) on the queued-up airplanes on the taxiway made it look like the taxiway was the runway. Pilot fatigue and experience at this airport are also questions.

http://flightaware.com/resources/airport/SFO/IAP/ILS+RWY+28R...

All runways have high intensity runway lighting (HIRL) and 28R has touchdown zone and centerline lighting (TDZ/CL). Runway lights are white, taxiway lights are blue. If you see these elements, there's no way to get confused. So my assumption is the pilots, neither of them, saw this distinction.

HIRL is typically off for visual landings even at night. That's questionable, because night conditions are reduced-visibility situations and in many other countries night flying is considered operating under instrument rules, but not in the U.S. You do not need instrument-rated aircraft or pilot certification. For a long time I've thought low-intensity HIRL should be enabled briefly in the case of visual night landings, where an aircraft is not following behind another, at the time the "runway in sight" verbal verification happens between ATC and pilot.

15
URSpider94 1 day ago 0 replies      
Incidentally, I heard a story on KQED (SF Bay Area public radio) today that mentioned a potential clue. There are two parallel runways on this heading -- however -- the left runway is closed for repairs and therefore is currently unlit. If the pilot didn't remember this (it would have been included in his briefings and approach charts for the flight, but he may not have internalized it), he would likely have been looking for two parallel runways and would have lined up on the right one, which in this case would have been the taxiway...
16
mannykannot 2 days ago 0 replies      
AFAIK (not that I follow the issue closely) the problem of radio interference that ended the last-chance attempt to prevent the Tenerife crash has not been addressed [1]. If so, then it may be very fortunate that only one person called out that the landing airplane had lined up its approach on the taxiway, and not, for example, the crews of every airplane on the taxiway, simultaneously.

[1] http://www.salon.com/2002/03/28/heterodyne/

TL;DR: At Tenerife, both the Pan-Am crew and the tower realized that the KLM aircraft had started its take-off roll, and both tried to warn its crew at the same time, but the resulting radio interference made the messages unintelligible. The author states that a technical solution is feasible and relatively easily implementable.

17
ryenus 2 days ago 1 reply      
This reminds me of the runway incursion incident at Shanghai, in Oct 2016:

http://www.jacdec.de/2016/10/11/2016-10-11-china-eastern-a32...

18
rdtsc 2 days ago 4 replies      
Without knowing the cause, if I had to guess this looks like pilot error. At least statistically that's the leading cause of crashes.

I am surprised pilots still manually land planes. Is the auto-landing feature not implemented well enough? But then it's relied upon in low visibility. So it has to work; then why isn't it used more often?

19
radialbrain 2 days ago 0 replies      
The avherald article has a slightly more factual account of the event (with links to the ATC recording): https://avherald.com/h?article=4ab79f58
20
dba7dba 2 days ago 0 replies      
I'd like to suggest that if you are still interested in learning more about what happened, you should look for a video from "VASAviation" on youtube. I'm sure his subscribers have asked him already for analysis and he's working on the video.

The channel focuses on aviation comms channel.

I find it informative because the YouTube channel provides detailed voice/video/photo/analysis of incidents (actual and close calls) involving planes/passengers taxiing/landing/taking off in and around airports.

21
briandear 2 days ago 0 replies      
I wonder why on 35R they wouldn't have the taxiway to the left of the runway. Then the right side is always the runway; same for the left. Basically, have parallel taxiways on the opposite side of the R/L designation of the runway. So at SFO, the parallel taxiways would be inside the two runways.

However, approach lighting is pretty clear, though at dusk, I agree with another comment that it can be rather hard to distinguish depending on angles. I think that approach would be landing into the setting sun, so that could have some bearing.

22
4ad 2 days ago 0 replies      
It's not a near miss, it's a near hit.

https://www.youtube.com/watch?v=zDKdvTecYAM

23
exabrial 2 days ago 1 reply      
Wouldn't the phrase be "near hit" instead of "near miss"? If you were close to missing, you'd hit something...
24
milesf 2 days ago 3 replies      
How is this even possible? Is it gross negligence on the part of the pilot, a systems problem, or something else? (IANAP)
25
BusinessInsider 2 days ago 0 replies      
Theoretically, if the plane had landed, how many planes would it have taken out? It obviously wouldn't have been pretty, but I doubt the Air Canada would have reached the fourth plane, or maybe even the third.
26
perseusprime11 2 days ago 0 replies      
How will an autonomous system handle this issue? Will it figure out the light colors of runways vs. taxiways, or will it rely on close geolocation capabilities?
27
TheSpecialist 2 days ago 0 replies      
I always wondered what about SFO makes it so much more dangerous than the other airports in the area? It seems like they have a potential disaster every couple years.
28
heeen2 2 days ago 0 replies      
Aren't there lights that have to line up if you're on the right course for the runway, like with nautical harbors? Or warning lights that are visible when you're not aligned correctly?
29
jjallen 2 days ago 0 replies      
Does anyone know just how close of a call this was? Was the landing aircraft 100, 200 meters above ground?

How many more seconds until they would have been too slow to pull up?

30
TrickyRick 2 days ago 2 replies      
> Off-Topic: Most stories about politics, or crime, or sports, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic. [1]

Is it just me or is this blatantly off-topic? Or is anything major happening in the bay area automatically on-topic for Hacker News?

[1] https://news.ycombinator.com/newsguidelines.html

31
martijn_himself 2 days ago 1 reply      
I get that this was a manual (non-ILS) landing, but why is there no audio warning to indicate the aircraft is not lined up with the runway?
32
FiloSottile 2 days ago 11 replies      
I am just a passenger, but this looks very over-blown. A pilot aligned with the taxiway, that's bad. But no pilot would ever land on a runway (or taxiway) with 3 planes on it. Just search the Aviation Herald for "runway incursion". And indeed, he spotted them, communicated, went around.

Aviation safety margins are so wide that this does not qualify as a near-miss.

33
kwhitefoot 2 days ago 0 replies      
Why is instrument landing not routinely done? Is it because it is not good enough?
34
EGreg 2 days ago 0 replies      
35
leoharsha2 2 days ago 0 replies      
Reporting on disasters that didn't happen.
36
stygiansonic 2 days ago 0 replies      
Wow, I landed the next day on the same flight (AC 759).
37
petre 2 days ago 1 reply      
Paint the runway and the taxiway in different colors and also use different colors for the light signals that illuminate them at night. Blue/white is rather confusing. Use clearly distinguishable colors such as red/blue or orange/blue or magenta/yellow.
12
CSS and JS code coverage in Chrome DevTools developers.google.com
418 points by HearMeRoar  1 day ago   118 comments top 24
1
umaar 1 day ago 9 replies      
If you're interested in staying up to date with Chrome DevTools, I run this project called Dev Tips: https://umaar.com/dev-tips/

It contains around 150 tips which I display as short, animated gifs, so you don't have to read much text to learn how a particular feature works.

2
cjCamel 1 day ago 4 replies      
From the same link, being able to take a full page screenshot (as in, below the fold) is also very excellent. I notice from the YouTube page description there is a further shortcut:

 1. Open the Command Menu with Command+Shift+P (Mac) or Control+Shift+P (Windows, Linux, Chrome OS).
 2. Start typing "Screenshots" and select "Capture full size screenshots".
I needed this literally yesterday, when I used MS Paint to cut and paste a screen together like a total mug.
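If you need the same full-page capture from a script rather than the DevTools UI, headless Chrome can be driven to do it as well. A minimal sketch using puppeteer; the URL and output path are placeholders, and this assumes puppeteer is installed via npm rather than anything mentioned in the article:

  // Sketch: full-page (below-the-fold) screenshot via headless Chrome.
  // Assumes `npm install puppeteer`; URL and output path are placeholders.
  import puppeteer from 'puppeteer';

  (async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('https://example.com', { waitUntil: 'networkidle2' });
    // fullPage captures the entire document height, not just the viewport.
    await page.screenshot({ path: 'full-page.png', fullPage: true });
    await browser.close();
  })();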

3
TekMol 1 day ago 4 replies      
So I recorded my site for a while. Then sorted by unused bytes. What was on top?

Google's own analytics.js

4
err4nt 1 day ago 1 reply      
Interesting tool, but even more interesting results. I just tried it on a simple, one-page website I built recently and there is not a single line of _code_ that's unused, yet it's still showing me 182 lines unused.

Things it seems to consider unused: `style` tags and, if your CSS rule spans more than one line, the lines for the selector and the closing brace.

There should be 0 unused lines since there are 0 unused rules, and the opening and closing `style` tags are DEFINITELY being used, so until these false results get weeded out it will be noisy to try to use this to track down real unused lines.

5
orliesaurus 1 day ago 4 replies      
Chrome DevTools was the first reason I started using Chrome. I wonder if HN has any better alternatives to suggest? I'm curious to see what I could be missing out on!
6
laurencei 1 day ago 3 replies      
My vague recollection of the Google event where this was first announced (was it late 2016 or early 2017?) was that it was going to "record" your site usage for as long as you were "recording" and give the report at the end.

But this now sounds like a coverage tool for a single page?

Does anyone know if it can record over multiple pages and/or application usage (such as an SPA)?

7
wiradikusuma 1 day ago 1 reply      
How do I exclude "chrome-extension://" and "extensions::" from the list? I can't do anything with them anyway, so it's just clutter.
8
KevanM 1 day ago 4 replies      
A single-page solution for a site-wide issue.
9
TekMol 1 day ago 1 reply      
In the CSS file view, isn't it impractical that it marks whitespace as unused? That makes it much harder to find rules that are unused.
10
genieyclo 1 day ago 1 reply      
Is there an easy way to filter out extensions from the Coverage tab, besides opening it in incognito mode?
11
hacksonx 1 day ago 2 replies      
"{ Version 57.0.2987.98 (64-bit)

 Updates are disabled by your administrator
"}

Guess I will only be able to comment on these when I get home. The full-size screenshot feature is going to be a welcome addition. I will especially have to teach it to the BAs, since they always want to take screenshots to show to business when design is finished but test is still acting up.

12
indescions_2017 1 day ago 1 reply      
I like this, and it's addictive ;) Any way to automatically generate output that consists of the 100% essential code subset?

As suspected: a typical medium.com page contains approx 75% extra code. Most egregious offenders seem to be content loader scripts like embedly, fonts, unity, youtube, etc.

On the other hand, besides net load performance, I'm not really worried about the "coverage" metric. Compiling Unreal Engine via emscripten to build tappy dodo may result in 80%+ unused code, but near-native runtime speed is a healthy tradeoff.

Try, for example: http://webassembly.org/demo/

13
i_live_ther3 1 day ago 2 replies      
What happened to shipping everything in a single file and letting cache magic happen?
14
mrskitch 1 day ago 0 replies      
I wrote a tool to automate this (right now it's just JavaScript coverage) here: https://github.com/joelgriffith/navalia. Here's a walk through on the doing so: https://codeburst.io/capturing-unused-application-code-2b759...
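For anyone who wants to script the same measurement without navalia, the coverage data the DevTools tab shows can also be pulled out of headless Chrome directly. A rough sketch using puppeteer's coverage API; the URL is a placeholder and the byte counting is deliberately naive:

  // Sketch: collect JS + CSS coverage for one page load and report used bytes.
  import puppeteer from 'puppeteer';

  (async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    // Start tracking before navigation so loader scripts are included.
    await Promise.all([
      page.coverage.startJSCoverage(),
      page.coverage.startCSSCoverage(),
    ]);

    await page.goto('https://example.com', { waitUntil: 'networkidle2' });

    const [jsCoverage, cssCoverage] = await Promise.all([
      page.coverage.stopJSCoverage(),
      page.coverage.stopCSSCoverage(),
    ]);

    // Each entry carries the file text plus the byte ranges that were used.
    let usedBytes = 0;
    let totalBytes = 0;
    for (const entry of [...jsCoverage, ...cssCoverage]) {
      totalBytes += entry.text.length;
      for (const range of entry.ranges) {
        usedBytes += range.end - range.start;
      }
    }
    console.log(`Used ${usedBytes} of ${totalBytes} bytes`);

    await browser.close();
  })();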
15
arthurwinter 1 day ago 2 replies      
It'd be awesome if there was a button to download a file with the code that's used, and the code that's unused, instead of just having a diff. Hint hint :)
16
rypskar 1 day ago 1 reply      
Excellent timing. I had given up finding a good tool for coverage on JS and CSS and was right now using audits in Chrome to try to find unused CSS and searching through the code to find unused JS on our landing page. Even if it is hard for a tool to find everything that is unused on a page, it will show what is used, so we know what we don't have to check in the code.
17
foodie_ 1 day ago 1 reply      
Hurray! Now they just need to make it part of an analytics program so we can let the users tell us what code is never running!
18
dethos 1 day ago 0 replies      
Is there anything similar for Firefox? On the developer tools or as an external extension?
19
TekMol 1 day ago 1 reply      
Would be super useful if it recorded over multiple pageviews. To find unused CSS+JS and to measure the coverage of tests.

But it seems to silently forget what happened on the first page as soon as you go to the second page.

20
geniium 1 day ago 0 replies      
Very nice! The Coverage feature is something I have been waiting on for a long time!
21
mgalka 1 day ago 0 replies      
This is great, such a useful function. Thanks for posting.
22
wmkthpn 1 day ago 2 replies      
Can this be useful when someone uses webpack?
23
ajohnclark 1 day ago 0 replies      
Yesss!!
24
_pmf_ 1 day ago 0 replies      
When your developer experience depends on how much free time Chrome developers have ...
13
Bitcoin Potential Network Disruption on July 31st bitcoin.org
438 points by amdixon  22 hours ago   336 comments top 31
1
jpatokal 21 hours ago 16 replies      
Well, that's a remarkably uninformative announcement. Here's an attempt at a neutral tl;dr from a Bitcoin amateur.

Bitcoin is currently suffering from significant scaling problems, which lead to high transaction fees. Numerous fixes for the scaling issue have been proposed, the two main camps being "increase the block size" and "muddle through by discarding less useful data" (aka Segregated Witness/SegWit). However, any changes require consensus from the miners who create Bitcoins and process transactions, and because it's not in their interest to do anything that reduces those transaction fees, no change has received majority consensus.

In an attempt to break this deadlock, there is a "Bitcoin Improvement Proposal #148" (BIP148) that proposes a User-Activated Soft Fork (UASF) taking effect on August 1, 2017. Basically, everybody who agrees to this proposal wants SegWit to happen and (here's the key part) commits to discarding all confirmations that do not flag support for SegWit from this date onward. If successful, this will fork Bitcoin, because whether a transaction succeeded or not is going to depend on which side of the network you believe.

However, BIP148's odds of success look low, as many of the largest miners out there led by Bitmain have stated that they will trigger a User-Activated Hard Fork (UAHF) if needed to stop it. Specifically, if UASF appears successful, instead of complying with SegWit, they'll start mining BTC with large blocks instead: https://blog.bitmain.com/en/uahf-contingency-plan-uasf-bip14...

Anyway, it all boils down to significant uncertainty, and unless you've got a dog in the race you'll probably want to refrain from making BTC transactions around the deadline or purchasing new BTC until the dust settles down.

And an important disclaimer: this is an extremely contentious issue in the Bitcoin community and it's really difficult to find info that's not polarized one way or the other. Most notably, Reddit's /r/bitcoin is rabidly pro-BIP148 and /r/btc is equally rabidly against it. Here's one reasonably neutral primer: https://bitcoinmagazine.com/articles/bitcoin-beginners-guide...
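To make the "commits to discarding all confirmations that do not flag support" part concrete, here is a toy sketch of the one extra rule a BIP148 node adds on top of normal validation. This is an illustration only, not consensus code: it assumes the BIP9 version-bits encoding (top three version bits 001, SegWit signalled on bit 1) and uses a plain timestamp where a real node uses median-time-past:

  // Toy illustration of the BIP148 soft-fork rule; not real consensus code.
  const UASF_START = Date.parse('2017-08-01T00:00:00Z') / 1000; // unix seconds

  interface BlockHeader {
    version: number; // 32-bit block version field
    time: number;    // simplified; a real node checks median-time-past
  }

  // BIP9 signalling: top three bits must be 001, and bit 1 flags SegWit.
  function signalsSegWit(version: number): boolean {
    const usesVersionBits = (version & 0xe0000000) === 0x20000000;
    const bit1Set = (version & (1 << 1)) !== 0;
    return usesVersionBits && bit1Set;
  }

  // A BIP148 node keeps all the usual rules and additionally rejects any block
  // after the start date that does not signal SegWit. Nodes that don't enforce
  // this extra rule keep accepting such blocks, which is how the two sides can
  // end up following different chains.
  function bip148Accepts(header: BlockHeader): boolean {
    if (header.time < UASF_START) return true;
    return signalsSegWit(header.version);
  }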

2
buttershakes 7 hours ago 4 replies      
This will go down as a massive failure in governance. The Bitcoin Core guys have completely created this situation by taking a hard-line stance based on a non-issue. Committing to a 2-megabyte hard fork 2+ years ago would have averted this situation and kept control within the core dev team. Now we see miners taking a stance because SegWit doesn't necessarily benefit them. Further, payment channels and other off-chain scaling haven't really been tested or materialized, and the SegWit code itself is a series of changes to the fundamentals of Bitcoin made without requiring a hard fork. In other words, it is overly engineered to avoid having to have real consensus.

Further, the almost rabid attacks against a 2MB increase are bordering on complete insanity. No serious software engineer would say that an additional 1 megabyte of traffic every 10 minutes is a problem in any way. Instead we are stuck with a proportional increase in bandwidth and processing to support SegWit and a minor increase in block size, which, through some convoluted logic, is supposed to prevent centralization. This whole thing is a power grab, plain and simple.

Now the alternative implementations are racing to complete something the miners will agree with, the sole purpose being to wrest control away from the "Bitcoin Core" development group, which has made a complete mess of governance. Anyone who invested in Blockstream has to be seriously scratching their heads and wondering why they are killing the golden goose over some ideological BS instead of making what is really a trivial change. I think at this point they have screamed so loudly for so long that backtracking would reveal them to be hypocritical in the extreme. To couch this whole debate as a rallying cry against centralized interests instead of a corporate power grab is completely absurd.

3
sktrdie 15 hours ago 7 replies      
All this "unconsensus" is weird to me given that PoW was created to fix just that. I don't understand how any other group of people can decide what should happen other than the miners. After all, anybody can be a miner. Anything other than that just doesn't make it decentralized anymore.

If you trust the developers, exchanges or even users to make decisions, then why not just make a BitcoinSQL where the servers are controlled by these groups?

Mining specifically allows for this not to happen. one-CPU-one-vote as per Satoshi's paper. No matter the rules of the protocol, the chain with most work is the one that most people agreed upon. This seems to me the only true democratic solution and I don't understand how anything else is possible.

With regards to fees being too high and miners actually liking that, that's bullshit, because miners (who are also users!) care about the health of the entire system. If something like SegWit will bring many more users, that's a win for them.

Let's not forget that anybody can be a miner! Miners aren't just these chinese groups of people. It's the only true democratic way of reaching consensus - anything else is really not a way to reach trustless consensus in my opinion.

4
BenoitP 16 hours ago 2 replies      
To me, hashing power is not the process by which the outcome will be decided.

IMHO, the percentage of technical signalling will not even matter that much.

Two chains will get created quite quickly. And some BTC holders will try to take advantage of the situation.

Since transactions can get replayed on the other chain (and copying them from one chain to the other brings a stability advantage), the technical way things are going to occur is double-spending to different addresses.

... Which means services supporting different chains will be pitted against each other.

Users will empty out one wallet at the same time to one exchange on a chain they don't support, and to another address they control on the chain they support. In cashing out on the exchange, they will crash the market value of that chain.

... Which brings me to: exchanges should start signalling support and come to a consensus pretty quickly, in their own interest. They don't want to be the exchange everybody cashes out on.

Questions abound:

* Have they started signalling it?

* What software are they running?

* If you hold some BTCs: are you planning to double spend?

* How are you going to proceed?

* Which chain do you support, and how many BTC do you possess?

TL;DR: There will be a run. Exchanges will determine the outcome.

5
Twisell 20 hours ago 3 replies      
OK, just another proof that Bitcoin can definitely not be compared to gold. Or maybe it could?

"After the event you might end up with gold or lead; it all depends on whether your banker believes in transmutation or not (and on whether transmutation is actually achievable, which will be determined by the best alchemists of the kingdom, who need to agree together).

So, all in all, the guild of merchants recommends that you don't accept gold as payment on the last day before the new moon (and for a few days after that), because you will not be able to tell if you are getting real gold or lead during that period.

Well, to be honest, it won't technically be lead you would get but forked gold: a gold that could be gold but isn't until the alchemists say so. But you shall still be able to use it in a limited way with people who believe in the same dissident alchemists.

It's totally normal if it sounds complicated; it's magic after all!"

6
unabridged 20 hours ago 2 replies      
This is late FUD, a last-minute whine by the owners of "bitcoin.org", a.k.a. Core. The discussion over scaling has been happening for many months and consensus has actually just been reached in the last couple of weeks. 85% of the mining power is signalling for SegWit2x, and if this continues it will lock in before Aug 1st, completely avoiding the scary situation talked about in the post.
7
matt_wulfeck 19 hours ago 5 replies      
> This means that any bitcoins you receive after that time may later disappear from your wallet or be a type of bitcoin that other people will not accept as payment.

Can you imagine the uproar if Visa said the same thing? It would be totally unthinkable.

Bitcoin can get away with this type of "disruption" because it's not really being used for anything other than a speculative vehicle.

8
1ba9115454 19 hours ago 0 replies      
If you hold Bitcoin then you need to think about getting hold of your private keys or using a non-custodial wallet like https://strongcoin.com

If the network splits there will be 1 or 2 new types of Bitcoin.

If the exchanges decide to support the new types of Bitcoin you will be able to sell your holding on the new chains whilst still keeping coins on the main Bitcoin chain.

But to do this you need to manage your own private keys.

9
taspeotis 21 hours ago 0 replies      
https://github.com/bitcoin-dot-org/bitcoin.org/commit/9ddfa8...

 Alerts: BIP148/92: change title over objection

 Note: I object to this change, which I think makes the alert less clear, less forceful, and degrades alert usability.

10
dmitriid 20 hours ago 0 replies      
Ladies and gentlemen, we give you the most amazing stable scalable global tech of the future
11
Fej 21 hours ago 2 replies      
Do any significant number of people genuinely take Bitcoin to be the future of currency at this point?
12
atemerev 16 hours ago 2 replies      
For me, the lightning network is the obviously right solution, bringing more decentralization and totally removing the need for global consensus. Why it is not that popular, I don't know.
13
cableshaft 10 hours ago 1 reply      
> "Be wary of storing your bitcoins on an exchange or any service that doesn't allow you to make a local backup copy of your private keys."

I know a couple people who have some bitcoin on Coinbase and aren't too comfortable moving it to a local wallet (Coinbase is just easier for them, they don't have to worry about the security of their personal computer).

Does Coinbase allow making a local backup of private keys? I'm thinking they might not, but maybe they do.

14
gopz 12 hours ago 3 replies      
As someone with a basic Comp Sci understanding of crypto currencies could someone explain to me why there is a scalability problem? I thought one of the primary benefits of Bitcoin was that higher transaction fees will attract more miners and ergo the transactions can be processed at a higher rate. Why won't this problem be resolved naturally? Tinkering with the block size makes sense to me as a way to crank through more transactions per mined block, but again, why is it even a problem? The mining power is just not there?
15
isubkhankulov 21 hours ago 4 replies      
This post feels like propaganda. Prior Bitcoin upgrades have gone much more smoothly, and when they did go wrong the community banded together to spread the right information; albeit the userbase was likely a lot smaller back then.
16
frenchie4111 21 hours ago 3 replies      
Can someone give me some context on what is causing this?
17
roadbeats 21 hours ago 2 replies      
What strategy is best for small investors? Moving the money into altcoins or just pulling out completely? Should we expect a surge in altcoins (e.g. Litecoin, Ripple, Antshares a.k.a. NEO)?
18
Taek 19 hours ago 0 replies      
I wish there was a concise way to explain what is going on, but there really isn't. I'm going to do my best though.

Bitcoin is a consensus system. This means that the goal is to have everyone believe the exact same thing at all times. Bitcoin achieves this by having everyone run identical software which is able to compile a list of transactions, and from there decide what money belongs to which person.

As you can imagine, it's a problem if you have $10, and Alice believes she owns that $10, Bob believes he owns that same $10, and Charlie believes that the money was never sent to either of them. These three people can't interact with each other, because they can't agree on who owns the money. Spending money has no meaning here.

In Bitcoin, there are very precise rules that define how money is allowed to move around. These rules are identical on all machines, and because they are identical for everyone on the network, nobody is ever confused about whether or not they own money.

Unfortunately, there are now 3 versions of the software floating around (well... there are more. But there are only 3 that seem to have any real traction right now, though even that is hard to be certain about). Currently, all versions of the software have the exact same set of rules, but on August 1st, one of those versions of the software will be running a different set of rules. So, depending, people may not be able to agree on the ownership of money. If you are running one version, and your friend is running another, your friend may receive that money, or they may not. This is of course a bad situation for both of you, and its even worse if you are working with automated systems, because an automated system likely has no idea that this is happening, and it may have no way to fix any costly mistakes.

It gets worse. The version of the software that is splitting off actually has the power to destroy the other two versions of the software. I don't know how to put this in simple terms either.

In Bitcoin, it is possible to have multiple simultaneous histories. As long as all of the histories are mathematically correct (that is, they follow all of the formal rules of Bitcoin), you know which history is the real history based on how much work is behind it. The history with the most work wins. If the history is illegal, you ignore it no matter how much work is behind it.

So, this troublemaker version of the software (the UASF version) has a compatible set of rules with the other 2 versions. Basically, everything that it does, the other versions see as valid. So if its history has the most work, the other versions will treat that history as the one true history. The thing is, this troublemaker version of the software is stubborn, and so even if the histories of the other two versions have more work, it'll ignore them and focus only on its own version of history.

So, the dramatic / problematic situation happens if the UASF software initially has less work in its history. What'll happen is a split, and two different versions of Bitcoin will exist at the exact same time. But then, if the UASF software ends up with more work after some period of time (days, weeks, etc.), the other versions of the software will prefer its version of history over their own.

Basically, what happens there is that entire days, or weeks, etc. of history get completely obliterated. The UASF history becomes canonical, and the histories built by the other versions all get destroyed. Miners lose all of their money, people who accepted payments lose those payments, people who made payments get those payments back. Basically a lot of chaos where people end up losing probably millions and millions of dollars.
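
(To make the "most cumulative work wins, but only among rule-valid histories" rule described above concrete, here is a minimal sketch in plain Python. This is illustrative only, not Bitcoin Core's actual implementation; the block fields and the two rule functions are invented for the example.)

    def chain_work(chain):
        """Total proof-of-work behind a candidate history."""
        return sum(block["work"] for block in chain)

    def is_valid(chain, rules):
        """A history is only eligible if every block obeys this node's rules."""
        return all(rules(block) for block in chain)

    def select_best_chain(chains, rules):
        """Pick the rule-valid history with the most cumulative work.

        Nodes enforcing different rules (e.g. the UASF client vs. the other
        two versions) can therefore disagree about which history is canonical,
        and a later reorg to a heavier chain wipes out blocks the losing side
        had accepted.
        """
        valid = [c for c in chains if is_valid(c, rules)]
        return max(valid, key=chain_work, default=None)

    # Example: two competing histories with different amounts of work.
    legacy_rules = lambda block: True                    # accepts everything
    uasf_rules = lambda block: block["signals_segwit"]   # stricter subset

    chain_a = [{"work": 10, "signals_segwit": False}] * 5  # heavier, non-signalling
    chain_b = [{"work": 10, "signals_segwit": True}] * 4   # lighter, signalling

    print(select_best_chain([chain_a, chain_b], legacy_rules) is chain_a)  # True
    print(select_best_chain([chain_a, chain_b], uasf_rules) is chain_b)    # True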

----

I hope that helps. This whole situation is screwed up, and really the best thing to do is to put your coins in a cold wallet (one that you control, not an exchange), and then just not send or receive any coins for a few weeks. Let the dust settle, and then resume using Bitcoin once it's clear that the turmoil is over.

----

The most likely situation here is that nothing interesting happens at all. My personal opinion is that the vast majority of people who matter in Bitcoin aren't even paying attention to the drama, and something dramatic is really only possible if the majority of Bitcoin users opt in to doing something. I don't think that's the case at all, which means essentially nothing interesting is going to happen.

But, I could be wrong. There's a non-zero chance that something very unfortunate happens, and there's a pretty easy way to isolate yourself: don't send or receive any Bitcoins starting July 31st, and don't resume until it's clear that the storm has passed. It'll likely take less than a week to come to a well-defined conclusion.

19
wittgenstein 9 hours ago 0 replies      
Does anyone know how Coinbase is going to handle this?
20
modeless 19 hours ago 2 replies      
The problems described in this post are unlikely to happen. There is an attempt to split ("fork") the network scheduled for August 1. The people forking will force activation of a new feature, Segwit, while the non-forkers won't. However, the non-forkers are currently planning to activate Segwit as part of a compromise plan before the deadline. If this compromise happens as planned, there will be no need to force-activate Segwit with a fork, and so no fork will happen on August 1.

Frankly, even if the compromise solution fails and the fork does happen on August 1, it will be a complete non-event. Bitcoin.org is biased as they are affiliated with people who support the August 1 fork, and so they're attempting to publicize it. However, the fork has practically zero support from Bitcoin miners or exchanges. On Aug 1 the vast majority of miners and exchanges will stay with the current network. Without significant miner support the forked network will run extremely slowly, and it will be vulnerable to several kinds of attacks. Without exchange support the forked network will not have economic value, and will quickly become irrelevant.

Although August 1 will likely not be a problem either way, there is another date that will be. Around the end of October, another proposal to fork the network is scheduled, and this one is supported by miners and exchanges. What will happen then is much more murky. It will become clearer as the date approaches.

21
pyroinferno 6 hours ago 0 replies      
Good, good. Bitcoin will fall, and eth will become king.
22
jancsika 14 hours ago 0 replies      
I haven't kept up with Bitcoin tech for a while. Hopefully the following questions are relevant here:

1. What percentage of Bitcoin's PoW belongs to Bitmain?

2. Are the drivers for Bitmain's hardware free-as-in-freedom?

3. Is mining hardware in the same class as Bitmain's manufactured anywhere in the world other than China?

Edit: Bonus question: If all cutting edge hardware tends to be developed and manufactured in one particular spot in one particular nation state, and if Bitcoin mining efficiency now depends mainly upon the manufacture of newer, more powerful hardware, does that change any of the implicit assumptions made in the Bitcoin whitepaper? (Esp. considering that same nation state has put a hard speed limit on all data moving in/out of its borders.)

23
jstanley 17 hours ago 1 reply      
I wrote this in case anyone wants more information: https://smsprivacy.org/bip148-uasf

Should be a bit more informative than TFA.

24
nthcolumn 15 hours ago 1 reply      
I obviously don't understand this at all. I thought it was distributed and that that was the whole point.
25
davidbeep 11 hours ago 0 replies      
Terribly uninformative. I'm actually surprised the coin is trading as high as it is considering all the uncertainty. I expected a greater freak-out from technologically inept investors/speculators.
26
nemoniac 16 hours ago 1 reply      
What time zone is "GMT+0100 (IST)" supposed to be?

India Standard Time is something like GMT+5.

27
ented 20 hours ago 1 reply      
What is the max tx/s speed? Still no consensus???
28
jageen 19 hours ago 1 reply      
It will surely affect ransomware collectors.
29
ragelink 18 hours ago 2 replies      
anyone know why, out of all timezones, they picked Central America Time?
30
cgb223 21 hours ago 1 reply      
What is the disruption?

Why is this happening?

31
wyager 19 hours ago 0 replies      
This looks like FUD. As I recall, the owners of Bitcoin.org are mad that their exact proposed scaling solution didn't go through.

85+% of miners are signaling support for segwit2x, so it's extremely unlikely that there will be any disruption. https://coin.dance/blocks

14
AMD Ryzen Threadripper 1920X and 1950X CPUs Announced anandtech.com
348 points by zdw  12 hours ago   280 comments top 21
1
ChuckMcM 9 hours ago 7 replies      
I really hope the ECC carries through. It irritates me to have to buy a "server" CPU if I want ECC on my desktop (which I do) and it isn't that many gates! It's not like folks are tight on transistors or anything. And on my 48GB desktop (currently using a Xeon CPU) I'll see anywhere from 1 to 4 corrected single bit errors a month.

For things like large CAD drawings, which are essentially one giant data structure, flipping a bit in the middle of them somewhere silently can leave the file unable to be opened. So I certainly prefer not to have those bits flip.

2
walkingolof 11 hours ago 3 replies      
Best thing about this is that competition is back (in the high-end x86 market) and the winner is the consumer; the CPU market has been stale for a while.
3
jokoon 11 hours ago 3 replies      
A CPU that large reminds me of the famous remark made by Grace Hopper about how light can move 30cm in one nanosecond. I guess that theoretically means a CPU could have some kind of maximum size.

Of course, since current CPUs contain multiple cores, it doesn't apply.
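
(As a quick back-of-the-envelope check of that remark, in Python; the 3.4 GHz clock below is an assumed figure for illustration, not a spec from the article, and on-chip signals actually propagate well below the speed of light, so the practical limit is tighter still.)

    c = 299_792_458            # speed of light, m/s
    print(c * 1e-9 * 100)      # ~30 cm travelled per nanosecond

    clock_hz = 3.4e9           # assumed clock speed for illustration
    print(c / clock_hz * 100)  # ~8.8 cm of light travel per clock cycle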

4
shmerl 11 hours ago 3 replies      
I'm still waiting for this bug to be fixed: https://community.amd.com/message/2796982

Note: this isn't a bug in gcc, but looks like a hardware bug related to hyperthreading.

5
SCdF 11 hours ago 14 replies      
How do people with many CPU cores find it helps their day-to-day, excluding people who run VMs, or do highly parallelisable things as their 80% core job loop (i.e. you run some form of data.parallelMap(awesomeness) all day)?

Does it help with general responsiveness? Do many apps / processes parallelise nicely? Or is it more "Everything is 99% idle until I need to run that Photoshop filter, and then it does it really fast"?

6
InTheArena 11 hours ago 3 replies      
What I am going to be interested in is this versus EPYC parts. I think the higher clocks are mainly to achieve some of the more insane (and useless) FPS counts for games. If you are willing to ramp down the FPS to a number that your monitor can actually display, it may be better to find a general purpose EPYC MB and chipset, and use that. Especially if homelab / big data / compiling linux / occasional gaming is your cup of tea.
7
strong-minded 6 hours ago 0 replies      
A simple formula: The 1920X beats the 7920X by a few hundred in Cinebench and a couple of hundred in the pocket.

I wonder if the 'Number Copy War' (started with the X299 vs. X399 Chipset) will continue throughout the year.

8
johnbellone 10 hours ago 4 replies      
It's been a while since I've built a computer with my own two hands, but either that man's hands are really small or hot damn, AMD Ryzen CPUs are huge.
9
arcaster 11 hours ago 3 replies      
I'm still waiting for a more diverse set of synthetic and real-world benchmarks. It'll be interesting to see how IPC performance holds up with Threadripper; however, I think the most interesting debate will be whether the 1920x or the lowest-end Epyc CPU is the better buy.

Unfortunately, even as an enthusiast, $799 is more than I'm willing to spend on a CPU. I'm also still hard-pressed to build a Ryzen 1700 system since I can purchase an i7 7700 from MicroCenter for about $10 less than the Ryzen part (and have equal or better general performance with notably better IPC).

10
thoughtexprmnt 11 hours ago 10 replies      
Since the article does refer to these as desktop CPUs, I'm curious what kind of desktop workloads people are running that could benefit from / justify them?
11
drewg123 11 hours ago 3 replies      
It is great that they're announced for an August release, but when can I actually BUY one?

Given that Naples (aka Epyc) was "released" in June, I went looking to actually buy one, and I could not find a single place selling them. Not Newegg, nothing local, nothing in Google shopping, etc.

12
dis-sys 9 hours ago 3 replies      
$999 list price translates to a $1100-$1150 retail price in countries where you have a GST-style tax; then you factor in an expensive motherboard plus heat sink and 64GB of RAM, and the upgrade is like $2k.

the problem is, with this confirmed return of competition between Intel and AMD, I am no longer sure whether it is a good idea to upgrade now as it is basically the first iteration between those two. Are they going to release something even better in 6-12 months' time?

13
crb002 10 hours ago 4 replies      
AMD needs to come out with a few AVX-1024 instructions for vector ops. Essentially make one core into a GPU that doesn't suck at branching.
14
eemax 11 hours ago 1 reply      
The comparisons in this article are mostly against the high-end Intel core line, but these CPUs support server / enterprise type features like ECC memory, lots of PCI-E lanes, and virtualization features (I think?).

Shouldn't Threadripper be compared to Xeons?

EDIT: Or rather, what I'm really wondering is what these CPUs lack that AMD's server line (EPYC) has.

15
sergiotapia 9 hours ago 1 reply      
I'm waiting for these to launch so I can build a great multi-threaded computer. My Elixir apps are waiting for all these threads! :)

Does anyone know if Plex is going to see much benefit transcoding video files on the fly?

16
gigatexal 8 hours ago 0 replies      
Soon as I have some funds I will be getting one, but only if ECC is supported -- what would be even better is if one could do a mild OC on the part but also have ECC.
17
balls187 10 hours ago 1 reply      
Quad channel, so you have to install RAM with 4 matched sticks at a time?
18
DonHopkins 8 hours ago 0 replies      
How long does it take to drip a threa?
19
api 9 hours ago 0 replies      
I did a lot of work with artificial life and evolutionary computation in the early 2000s. Wish we had these chips back then.
20
jhoutromundo 10 hours ago 0 replies      
Opteron feels o//
21
mrilhan 5 hours ago 1 reply      
I recently tried to go the AMD/Ryzen route. I like an underdog comeback story as much as the next guy.

But be warned: Motherboards that "support" Ryzen do not in fact support Ryzen out of the box. You have to update the BIOS to support Ryzen. How do you POST without a supported CPU, you ask? Who knows? Magic, possibly.

I still don't understand how AMD expects their customers to have more than one CPU (and possibly DDR4-2133 sticks) to be able to POST and update the BIOS.

I returned everything AMD and went back to safe, good ole Intel. Worked on first try. I'm never getting sucked into AMD hype again.

Also, when I went back to return the AMD components to Fry's, the manager said they were aware/used to getting Ryzen returns because of this.

15
Improving air conditioner efficiency could reduce worldwide temps nytimes.com
261 points by aaronbrethorst  6 hours ago   301 comments top 33
1
djsumdog 5 hours ago 14 replies      
So we curb emissions by building a bunch of new A/C units? Sorry, that's silly.

CO2 is just one of many many forms of pollution. Think you're doing your part by purchasing a hybrid or electric vehicle? There are barrels of oil that go into those tires, the plastics, not to mention all the pollution that goes into battery production. If your car is fuel efficient, the best thing you can do for the environment is drive it until the wheels fall off. When you do need to purchase a replacement, get a used hybrid or electric.

Climate change/CO2 is not the problem. It's the symptom of rampant consumerism. We can't buy and purchase our way out of destroying the planet. We have to consume less, build cell phones that are upgradable and last a decade instead of 2 ~ 3 years. Companies need to be praised for smaller factories and lower sales for products that cost more and last longer.

That is a very very huge shift in the way we think. I'm not sure if it's even remotely feasible or what it would take to convince people, industry, the world to simply consume less.

2
cannonpr 5 hours ago 3 replies      
I hate to say it but AC always felt like thoughtless engineering and consumerism, especially the electric varieties. It's ironic that in a sunny, energy-rich environment, you spend extra energy on a heat pump. In a lot of environments some better architecture will take care of the problem via passive methods; additionally, evaporative methods work pretty well in dry environments and pollute considerably less.

Failing all of the above, stoicism isn't that bad, honest; neither are lifestyle changes that shift high-activity periods to later in the day, which are widely practised in Mediterranean countries.

3
clumsysmurf 6 hours ago 3 replies      
Trump's 2018 budget zeroes funding for Energy Star, which, among other things, helps consumers pick the most efficient devices and save money in the long run.

What other ways can consumers compare the efficiency of A/C units? I would think some standardized testing and labels would be required.

4
bradlys 6 hours ago 8 replies      
> The Lawrence Berkeley study argues that even a 30 percent improvement in efficiency could avoid the peak load equivalent of about 1,500 power plants by 2030.

Okay, but where is this 30% jump in efficiency going to come from? That seems like a pretty big leap!

5
davidw 5 hours ago 3 replies      
People in the US consume way too much air conditioning. It's pretty common where I work for people to have sweaters to put on inside due to the AC. Outside it's in the 80s, with something like 10% humidity in the summer - absolutely perfect unless you're doing hard labor in direct sunlight.
6
SilasX 5 hours ago 0 replies      
... only if this doesn't temporarily bid down energy prices and lead others to use the same energy somewhere else.

https://en.wikipedia.org/wiki/Jevons_paradox

Note: the more potential uses of a resource, the more vulnerable it is to Jevons effects, where people use a resource more in response to being able to use it more efficiently.

The real benefit of energy efficiency is not that it reduces energy use by itself, but that it reduces the utility loss from implementing the caps and taxes necessary to actually reduce total usage.

7
rb808 6 hours ago 5 replies      
Half the article wasn't about a/c efficiency; it was about HFCs being a more potent greenhouse gas than CO2, and the agreement to phase them out.

Is there really an HFC replacement - what is it? I wasn't aware.

8
SmellTheGlove 6 hours ago 3 replies      
I have a Fujitsu mini split that has been awesome in terms of bringing my electric bill down versus window units (we live in Maine, central A/C is less common here, and few homes were built with it until the 2000's). It does, like most splits, use R-410A, but I'd be happy to use something else if it didn't kill the efficiency.

In parallel with refrigerants and efficiency, though, I wonder if the article misses on mentioning geothermal cooling. Those systems are expensive, but if you can bring down the install cost and power them with cleaner energy, you solve some other problems. In developing nations, maybe you try and build larger systems designed to cool multiple residential units - and start to require it for mid/high-rise residential construction?

9
quadrangle 1 hour ago 0 replies      
We already have solutions for dramatically more effective conditioning of indoor spaces: simply do other effective things to cool them. Modern insulated whole-house fans like Airscape, exterior shades, etc.; see http://www.treehugger.com/sustainable-product-design/10-over...

The efficiency focus is itself misguided in several ways. http://freakonomics.com/podcast/how-efficient-is-energy-effi...

10
Dangeranger 6 hours ago 4 replies      
Could higher efficiency cooling be done by using more evaporative cooling systems (Swamp Coolers)[0] rather than traditional AC units?

There are climates where evaporative cooling is not effective, but perhaps they would be useful in the majority of climate regions.

[0] https://en.wikipedia.org/wiki/Evaporative_cooler

11
zackmorris 2 hours ago 0 replies      
One of the most wasteful components is the condenser. Salt water air conditioners can accomplish the same thing much more easily (50-75% savings):

http://www.happonomy.org/get-inspired/salt-water-air-conditi...

https://www.cnet.com/news/salt-driven-air-conditioner-looks-...

This is very old technology so people probably chose aesthetics over cost. Although when I think tacky, I think window air conditioning units..

12
Element_ 5 hours ago 1 reply      
Toronto has a deep lake water cooling system that pumps cold water from the bottom of Lake Ontario and circulates it around the downtown core. It is capable of cooling 100 high-rise buildings. I believe when it was constructed it was the largest system in North America.

https://en.wikipedia.org/wiki/Enwave

13
dmritard96 5 hours ago 2 replies      
One thing missing from this article is demand response:

"It matters, researchers say, because cooling has a direct relationship with the building of coal-fired power plants to meet peak demand. If more air-conditioners are humming in more homes and offices, then more capacity will be required to meet the demand. So 1.6 billion new air-conditioners by 2050 means thousands of new power plants will have to come on line to support them."

We (https://flair.co) offer demand response tech for minisplit control that can help prevent having to build all the 'peaker plants'. This gets extra interesting when you add intermittent supply (solar/wind) and grid-tied storage (Tesla has been making big pushes here among others). Hopefully, we are able to scale these up in parallel to prevent a bunch of coal-fired plants from being built for the 1-3% of the year with the hottest days.

14
adgqet 6 hours ago 1 reply      
Misleading headline. Research found that the temperature increase could be lowered by one degree centigrade.
15
pdelbarba 5 hours ago 0 replies      
I'm a little confused why solar isn't mentioned. Peak temperature and peak solar flux are highly correlated so this isn't some weird grid storage problem. Tighten standards for new systems and construction to be a little more efficient and let economics go to work.
16
pierrebeaucamp 5 hours ago 2 replies      
I'm pretty disappointed in the numbers they chose for a vegetarian diet. It feels to me as if they actively went ahead and picked the lowest values they could find in their source. (Btw the source itself is a good read imo: http://www.drawdown.org/solutions/food/plant-rich-diet)

You could argue that people are not willing to go vegetarian or even vegan - but at least level the numbers when comparing it with other solutions: if everyone went vegetarian, their source states 132 gigatons of CO2 reductions.

I also liked this quote from the report:

> As Zen master Thich Nhat Hanh has said, making the transition to a plant-based diet may be the most effective way an individual can stop climate change.

17
uses 4 hours ago 0 replies      
It's funny how, almost without fail on HN, I can go to the comments, and one of the #1-5 comments is someone who quickly dismisses the main premise of the linked article. It's ridiculous how common this is. I've been reading HN for over a decade and I don't remember if it was always like this?
18
thomk 5 hours ago 0 replies      
Slightly offtopic but I just had a new HVAC system put in my house and one of the things the tech pointed out to me is that effective AC has a lot to do with effective dehumidifying.

I don't know why it never crossed my mind before but now when I transition from indoors to outdoors (and back) I notice the humidity delta as much as the temperature delta.

19
afinlayson 4 hours ago 0 replies      
Air conditioners are really inefficient, and people run them in excess. And because there's no carbon tax, running them is too cheap to curb usage. Sure it won't solve the whole problem, but solving this issue would be very valuable to the planet.
20
bcatanzaro 4 hours ago 0 replies      
The planet would be better off if people moved out of the cold North and instead used more air conditioning. That's because heating is incredibly carbon intensive. Think about the temperature gradients in New York in the winter time. Going from 20 or 30 degrees F to 70 degrees is more carbon intensive than going from 90 degrees to 70 degrees, and the number of days it's cold in the winter is often greater than the number of days it's hot in the summer. The overall carbon burden of heating is greater than that of cooling.

This means that the carbon angst directed at AC is primarily a puritanical impulse. It's a new thing! It feels nice! So it must be a sin!

However, refrigerants are bad for climate because they have huge greenhouse gas potential multipliers.

So the solution isn't really to improve air conditioner efficiency, it's rather to find refrigerants with less warming potential.

And move everyone out of New York and Boston - their climate conditioning is very carbon intensive.

21
clenfest 6 hours ago 0 replies      
In this house we obey the laws of thermodynamics!
22
Mz 5 hours ago 1 reply      
Passive solar and vernacular architecture makes vastly more sense. I get so tired of these schemes to make our broken lifestyles "more efficient." Just adopt a better method entirely and quit quibbling about tiny efficiency gains.
23
grogenaut 5 hours ago 0 replies      
If we bumped efficiency 30%, how many more people would run the AC 30% more?
24
kylehotchkiss 6 hours ago 3 replies      
Wouldn't switching to DC motors for both the fan and the compressor save a lot of power?
25
axelfontaine 5 hours ago 12 replies      
American air conditioners running at full power, chilling the interior and dripping on the sidewalk below on a hot day always deeply trouble me. Maybe it's my European view on things, but for contrast here in Munich we aren't just building out a city-wide heat network, but we also have a cold network! Cold river water flows through the pipe network that traverses the city and large office buildings can get connected to it. This way they can save massively on electricity for air conditioning by having the water do the cooling instead. And then once the water has traversed all pipes, it simply gets released back into its stream on the other end of town, just as clean as when it entered, and only slightly warmer.
26
petre 4 hours ago 0 replies      
Using a white roof and employing other passive cooling techniques could improve AC efficiency, or even make it redundant.
27
maxxxxx 5 hours ago 2 replies      
Just insulate the houses in the US. I am always shocked at how badly built US houses are, even ones that cost 600k.
28
return0 4 hours ago 0 replies      
let's just build a giant A/C and put the external unit on the moon
29
jwilk 5 hours ago 1 reply      
Wrong symbol in the title:

º = ordinal indicator

° = degree

30
PhantomGremlin 5 hours ago 0 replies      
We need a corresponding article telling us how many power plants we can avoid building by not mining Bitcoin. I love the general idea of cyber currency / bitcoin / block chain, but I hate that the implementation requires so much energy.
31
EGreg 6 hours ago 3 replies      
Not for nothing, but ain't greenhouse gases only the short term problem?

The Earth radiates a fixed amount of energy into space every year. But when we produce electricity etc., no matter how we do it, more than half of the energy escapes as heat - a byproduct of boiling the water or whatever!

This isn't sustainable in the long run either! We are basically raising the temperature of the atmosphere even without greenhouse gases.

Tell me where I'm going wrong:

https://dothemath.ucsd.edu/2012/04/economist-meets-physicist...

https://dothemath.ucsd.edu/2011/10/why-not-space/

32
33W 6 hours ago 3 replies      
[SPOILER ALERT]

Can we change the post title to match the article?

"If You Fix This, You Fix a Big Piece of the Climate Puzzle"

33
Zarath 3 hours ago 0 replies      
Open a damn window. I'm sure in some places AC is necessary, but way too often I hear/see people running it when there is absolutely no reason other than they are even mildly uncomfortable.

Seriously, this problem isn't going to be fixed until people actually pay the true cost of what they are doing: Electricity + Global Warming externalities.

16
How To Go Viral By Using Fake Reddit Likes hack-pr.com
434 points by scribu  3 days ago   188 comments top 35
1
jawns 3 days ago 6 replies      
The entire stunt appeals to people's sense of moral outrage over businesses buying influence in the form of political donations. The reason people find it morally outrageous is because it corrupts the political process: politicians are supposed to represent their constituents, not the whims of the highest corporate bidder. Politicians who engage in this kind of quid pro quo behavior put selfish gain ahead of the good of the community.

Which is why I found it particularly galling that the PR firm relied on people's moral outrage about paying for influence to peddle their message ("tell them you like our initiative and are TIRED of politicians taking legal bribes") -- while doing exactly the same thing: paying for influence, in the form of purchased Reddit upvotes, which corrupts the upvote process and puts selfish gain ahead of the good of the Reddit community.

Normally, when PR firms use "hacking" to describe their techniques, they're talking about novel approaches to getting coverage, sort of like how "life hacks" are novel solutions to life's problems. But in this case, the firm is using "hacking" very literally -- infiltrating and taking control of a system by illicit means. They are black hats, and we should view them not only as morally bankrupt but also very dangerous.

I'm expecting that any day now they'll run a follow-up post, "How we hacked the U.S. media to help an anonymous powerful Russian client sway the presidential election."

2
0x00000000 3 days ago 6 replies      
People get extremely defensive on Reddit if you insinuate that this is common. But it really doesn't take a whole lot of skepticism to see through the more blatant ones.

Reddit is still a really great site when you unsubscribe from all default subs and any sub that has gone "critical shill" at about 100k or more subs.

3
illys 2 days ago 2 replies      
Is this article for real?

On a topic where one would expect citizens pursuing the public good, we find marketers and advertisers working for a wealthy businessman, paying for a convictionless campaign to become famous!

And the advertisers are so proud of it, they give all the details of their Reddit cheating, and worse, all the details of the absence of political conviction of their wanna-be-politician client.

Maybe the story is real, but I cannot believe the advertisers are dumb enough to be the ones writing this article.

I would sooner think someone related to Fiverr.com is behind it... [edit: or an enemy/competitor of the politician]

4
paultopia 3 days ago 9 replies      
Didn't they just massively throw their client under the bus? Not hard to find the guy's name, and now everyone knows:

- his big political stunt wasn't even his own idea, and

- he paid people a ton of money to fraudulently promote it.

What a way to burn your clients...

5
flashman 3 days ago 2 replies      
Look, I give them credit for coming clean to the public. And a lot of people use Reddit to promote their business, band or other brand (though they do it honestly, not by purchasing a boost). But the more well-known the technique of buying upvotes becomes, the worse the site will be for myself and other users.

Early paid upvotes are the seed for later organic upvotes. You don't even need to spend $200 to get them.

6
Haydos585x2 3 days ago 4 replies      
This was an interesting read. I'm not sure it's the best idea as a blog post because I'm sure Reddit staff will get onto it then keep a much closer eye on this firm. I feel like journalists will be the same too. If I received 10 emails about these guys I'd be a bit skeptical that there is any actual interest.

As an aside, I wonder if they're using the same tactics here.

7
minimaxir 3 days ago 3 replies      
> This gave the campaign the boost we needed and it was all the direct result of one thing: hustle.

Deliberately breaking the rules that exist for a good reason isn't "hustle." It's just cheating.

8
scotchio 3 days ago 5 replies      
Speaking of fake Reddit stuff...

Reddit has a SERIOUS political astro-turfing problem.

Some would argue it swayed the US election. Some would argue Reddit is bought and sold.

The /r/popular or /r/all experience is completely different. When commenting, you don't even know if it's a real person or not.

Does anyone know a forum similar to this or Reddit where it's ALL verified accounts?

9
ricksharp 2 days ago 4 replies      
Dear Reddit, Maybe this idea would help slow down this type of abuse:

It seems like it would be easy enough and cheap enough to build a honeypot to identify accounts used for the purchased Reddit upvotes.

For example, Reddit could set up some honeypot posts to track paid upvote accounts.

They then go and pay these upvoters to upvote the honeypot post and identify the accounts used. (It would be helpful if the post was hidden so other people don't find it accidentally. In fact, it is possible to just use a tracking redirect page given only to the paid upvoters and use any post as the upvote "job" so it would be hard to identify by the upvoters.)

Then Reddit could ghost those identified accounts. Simply ignore their votes in the system, but don't tell the account owners, so the owners continue using the accounts without realizing the problem.

This would make it very difficult for the account owners to know which of their accounts were compromised.

Then on any new posts where these upvoter accounts are heavily used, other accounts can be found. The other accounts that also similarly upvoted on this article could represent other paid upvote accounts.

Track those other accounts and how often they appear beside the ghosted paid accounts, and voila, you have found more paid upvoters.

Keep doing this and it makes the paid upvoters ineffective because although they can work the system, their work is only being used to find other paid upvote accounts and also clients who are paying for paid upvotes.
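
(A minimal sketch of that co-vote scoring step, in Python. The post -> upvoter data structure is an assumption about what Reddit would have internally, not a real API, and the threshold is arbitrary.)

    from collections import Counter

    def suspect_scores(post_votes, ghosted, min_ghosted=3):
        """Count how often each unknown account co-votes alongside known paid accounts."""
        scores = Counter()
        for voters in post_votes.values():
            if len(voters & ghosted) < min_ghosted:
                continue  # not enough known-paid activity to treat this post as a paid "job"
            for account in voters - ghosted:
                scores[account] += 1
        return scores

    # Example: accounts that repeatedly appear beside the ghosted ones float to the top.
    post_votes = {
        "post1": {"ghost1", "ghost2", "ghost3", "susp1", "organic1"},
        "post2": {"ghost1", "ghost2", "ghost3", "susp1"},
        "post3": {"organic1", "organic2"},
    }
    ghosted = {"ghost1", "ghost2", "ghost3"}
    print(suspect_scores(post_votes, ghosted).most_common(2))
    # [('susp1', 2), ('organic1', 1)]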

After a time period, the clients could be sent a warning:

It has been detected that you are using paid upvote services which are against Reddit TOS. Please contact customer service so we can work together to remedy the problem. Failure to do so may cause your account to be banned and all your posts removed from Reddit. Have a good day.

Of course Reddit doesn't have to do this, and really anyone could do the same process to build a list of paid upvoter accounts and a list of articles and clients that use those services...

So what do you think, would this put a dent in the upvoters effectiveness?

10
imron 3 days ago 4 replies      
> these are the types of things we do several times a day now

And this type of marketing posing as news, pushed to the front page by bots and fake accounts, is precisely why /r/politics is now a shitbed.

Thanks Hack-PR.

11
visarga 2 days ago 0 replies      
The OP is using techniques that used to work on the wild wild web 10-15 years ago. I thought by now everything had been normalized, or at least that serious people don't use spamming techniques to launch a business.

If all these bought upvotes come from new accounts, or from the same few IP ranges, or have a lower ratio of comments to upvotes, or are interacting only with each other and not with the larger community -> reddit can detect them and turn them into ghost accounts.

Reddit needs to open up a Kaggle challenge for detecting rented upvotes and other abuses, using the data it has already shared with the AI community (the reddit dataset) to detect such attempts as they happen.

12
gehsty 2 days ago 0 replies      
Maybe this is all just another 'viral' advertisement for a guy selling upvotes on Fiverr?
13
soared 2 days ago 0 replies      
Post was deleted, here's a cached link. mirror

http://webcache.googleusercontent.com/search?q=cache:rGmQFEr...

14
rmc 2 days ago 0 replies      
The title of the article is "How we hacked reddit...", this submission currently says "How to go viral by using fake reddit likes", and is more accurate. They didn't hack reddit, they bought upvotes.
15
JonDav 3 days ago 1 reply      
16
oDot 3 days ago 1 reply      
Comments here are missing one crucial thing -- it's a shame that success on Reddit depends so much on initial upvotes.
17
known 2 days ago 0 replies      
VERIFY and TRUST;

"Media does not spread free opinion; It generates opinion" --Oswald,1918 https://en.wikipedia.org/wiki/Decline_of_the_West

18
joelthelion 2 days ago 0 replies      
Fake likes only explain part of this initiative's success. This would never have worked with an idea that doesn't appeal to redditors.
19
lsmarigo 2 days ago 1 reply      
Everyone does this, including the reddit founders themselves in the early days of the site (https://motherboard.vice.com/en_us/article/z4444w/how-reddit...).
20
Simulacra 1 day ago 0 replies      
I don't know if this is a hack, per se. I work in media and PR, and this is just one of those things you do. Pump up the issue, get eyeballs on the campaign, find a way to jazz the reporters, and off to the races. What may have made this fly is that the idea was already in the minds of the public, and the media. It's a LOT easier when that happens.
21
danso 3 days ago 0 replies      
I don't get the impression that there's any substantial vote monitoring, and so it surprises me that it even cost money to do this kind of astroturfing. How hard would it be to set up and maintain a dozen Reddit accounts and spread them over a VPN service? 10 min initial startup, and not more than a minute a day of doing innocuous activity on those accounts, occasionally. When a campaign rolls out, then have the accounts work in concert.

Sure, it might not be as 100% successful as Fiverr (though I imagine it's fairly easy for Reddit to ad-hoc identify voting blocs if something was known to be bought). But you could employ additional optimization techniques, such as the one used by most high-karma users (e.g. Gallowboob): if a post fails to hit critical upvote mass, then delete and resubmit later in the day.

To give you an idea of how things seem to be relatively unmonitored until users flag it, there's the story of Unidan:

https://www.reddit.com/r/MuseumOfReddit/comments/2m5q11/a_fe...

And as a more recent, obscure example, there was the mystery of why the mod of r/evilbuildings had something like 499 of the 500 most upvoted posts in his own subreddit. The math was so laughably in favor of manipulation but a Reddit admin, using whatever shit tools they have to investigate this, acquitted the mod:

https://www.reddit.com/r/SubredditDrama/comments/67r1ht/the_...

Follow up:

https://www.reddit.com/r/SubredditDrama/comments/6ao8cv/dram...

The details of how this mod was able to boost his own posts without being called out for vote manipulation are too banal to explain in detail (basically, he would shadowdelete other popular posts so that his would get picked up by the Reddit front page, and then undelete the popular posts before anyone noticed). But the fact that a Reddit admin (i.e. a paid employee) thought that the evilbuildings mod always having the top post in his own forum for 6 months straight was just a coincidence, and/or because that mod was just apparently an amazing content submitter, spoke hugely about how uncreative the Reddit admins might be in detecting fraud.

Edit: if you are interested in subreddit drama details, here's a thread that combines the evilbuildings drama and Gallowboob: https://www.reddit.com/r/SubredditDrama/comments/6d3syc/evil...

If this is the kind of effort users put toward imaginary points (though arguably raising karma is part of Gallowboob's professional work), I'm nervous to think about the schemes that PR firms will construct when they realize the easy return on investment offered by Reddit popularity.

22
rnprince 3 days ago 2 replies      
If you're into this kind of thing, I enjoyed reading "Trust Me I'm Lying" by Ryan Holiday.
23
Doubletough 2 days ago 0 replies      
Looks like they've been shamed into submission and have pulled the article. It was getting hammered with comments earlier. Well deserved ones.
24
RileyJames 2 days ago 1 reply      
It seems that everyone is aware that likes, follows, upvotes, etc can all be bought, and therefore these numbers are manipulated regularly. But does anyone care to see the problem solved?
25
seoseokho 2 days ago 1 reply      
Anybody have a copy of this? link is 404 now
26
blackice 2 days ago 0 replies      
Reddit should really try proxy / VPN / bot detection, because I'm willing to bet the people on fiverr are using large proxy networks to achieve this.
27
meant2be 2 days ago 0 replies      
What would be the proper way of gaining traction on reddit? Is that even possible anymore? I mean, if the game is already rigged, what chance do honest businesses stand in this environment? I don't have an account on reddit (been there for what? 7 years now?) and I always wondered how somebody goes viral and gets traction; now this stuff makes me think everything is basically done and paid for.
28
dchuk 3 days ago 3 replies      
So I'm working on a side project that basically has an HN/Reddit interface. One monetization idea I had for down the road is basically a legitimate means to boost certain posts for certain periods of time, giving them prominence on the site in a clearly labeled area for such purposes.

Is this something people would be interested in?

29
logicallee 2 days ago 1 reply      
A lot of people don't seem to realize that being the top link on r/politics is a public good that's available to everyone. Just because someone pays $200 to make some politician's publicity stunt that nobody cares about be the top link there (I mean really, nobody cared - the idea of a law forcing politicians to walk around wearing the logos of their top ten donors is beyond silly), doesn't mean that everyone else can't also be the top link there at the same time, with other publicity stunts nobody cares about!

The great thing about being a top link is everyone can do it at the same time. It doesn't corrupt the process at all or waste anyone's time. Everyone can benefit from it and it doesn't make things worse for anyone.

For example imagine if all the top links on hacker news were just corporate advertisements disguised as stories. Would it be a worse place or cause any of us harm? Of course not.

30
ameister14 2 days ago 0 replies      
While I understand people finding this distasteful, it's exactly the kind of rule-breaking that they should be doing. Cheating? Airbnb broke Craigslist's rules to good effect, among others.

It's naughty without being outright evil. When did that become a bad thing on HN?

31
visarga 2 days ago 1 reply      
What I'm worried about is that the reddit database is used by AI for learning dialogue, and this kind of spamming just pollutes the dataset.
32
silimike 2 days ago 1 reply      
This story brought to you by Fiverr.com
33
HearMeRoar 3 days ago 1 reply      
>How we hacked Reddit

Really? Hacked?

34
paulpauper 2 days ago 0 replies      
How about all the times this failed?
35
notananthem 3 days ago 0 replies      
That is the least hacky and also least efficient way to do that, and it also makes you look like a total goober.
17
'Living Drug' That Fights Cancer by Harnessing Immune System Clears Key Hurdle npr.org
332 points by daegloe  13 hours ago   117 comments top 9
1
jfarlow 10 hours ago 2 replies      
Congratulations! The Chimeric Antigen Receptor (CAR) deployed here is very much unlike the standard 'small molecule' drug that 'disrupts a bad thing', and much more like a rationally engineered tool using the body's very own technologies to overcome a particular limitation. In this case, it gives the patient's own immune system a notion of what the cancer looks like.

If you want to build your own 'living drugs', we've built a digital infrastructure to let you. Though we just made public our generic protein design software (thanks ShowHN! [1]), we're employing the same underlying digital infrastructure to build, evaluate, and manage CAR designs in high throughput [2]. The drug approved here was painstakingly designed by hand, while we think the technology now exists to permit many more such advances to be created at a much more rapid pace.

[1] https://news.ycombinator.com/item?id=14446679

[2] https://serotiny.bio/notes/applications/car

Design your own 'living' protein drugs here right now: https://serotiny.bio/pinecone/ (and let us know what you think, and how we can make it better!)

2
Young_God 6 hours ago 0 replies      
A friend of mine is alive today because he was part of one of the early trials. He had been told by his doctor, just before he was accepted into the trial, that he should start putting his affairs in order.
3
stillfinite 10 hours ago 4 replies      
The significant thing about CAR-T cell therapy is that it's not very specific to the type of cancer - all cancer cells have damaged DNA that leads to the production of antigens. Leukemia is the low-hanging fruit because it's easy to inject the T-cells back into the body right where the cancer cells are. It's hard to tell whether you could get enough T-cells to diffuse out of the bloodstream to have an effect on something like prostate cancer. It would be a real breakthrough if you could overcome that hurdle, because then you would have a treatment that works on many different cancers without much modification.
4
eatbitseveryday 12 hours ago 3 replies      
NYTimes also covers the story (https://www.nytimes.com/2017/07/12/health/fda-novartis-leuke...) with more discussion about individual patients.

From the NYT article:

> The panel recommended approving the treatment for B-cell acute lymphoblastic leukemia that has resisted treatment, or relapsed, in children and young adults aged 3 to 25.

Why so young?

5
JoeAltmaier 12 hours ago 8 replies      
From the article:

 "Scientists use a virus to make the genetic changes in the T cells, raising fears about possible long-term side effects"
Is this a real risk? Is 'using a virus' in this way still risky at all? Or is it just the word 'virus' that makes writers put this line in every article about gene therapy?

{edit: real risk}

6
sjbase 12 hours ago 0 replies      
Does anyone know: what are the failure rates like for the gene editing technology being used for this? Thinking like a software engineer, are there transposition errors (GATC --> GTAC), atomicity issues (GATC --> GA)? Mutations afterward?
7
judah 4 hours ago 1 reply      
Is this the same CAR-T treatment that Juno Therapeutics tried and scrapped[0] after 5 trial patients died after receiving the treatment?

[0]: http://www.xconomy.com/seattle/2017/03/01/after-trial-deaths...

8
ceejayoz 13 hours ago 3 replies      
> Another big concern is the cost. While Novartis will not estimate the price it will ultimately put on the treatment, some industry analysts project it will cost $500,000 per infusion.

Welp, guess my insurance premiums aren't stabilizing anytime soon.

9
aaronbrethorst 6 hours ago 2 replies      
"While Novartis will not estimate the price it will ultimately put on the treatment, some industry analysts project it will cost $500,000 per infusion."

Meanwhile, the latest version of the US Senate's healthcare bill includes the so-called Cruz Amendment[1], which would allow insurance companies to offer health insurance plans without essential health benefits, which would allow lifetime caps on insurance[2], which could mean that your six-year-old with recurring leukemia gets pulled off their treatment when they're halfway through. Not because you did anything wrong, per se, but because maybe your employer refuses to spring for health care plans with more than an $x cap. Or you never anticipated something so horrific and catastrophic happening to your family.

[1] https://www.nytimes.com/2017/07/13/us/politics/senate-republ...

[2] https://www.brookings.edu/2017/05/02/allowing-states-to-defi...

18
NASA admits it doesn't have the funding to land humans on Mars arstechnica.com
271 points by chha  12 hours ago   312 comments top 25
1
habosa 10 hours ago 10 replies      
In many science fiction books we assume that if an alien planet ever got a whiff of us they'd quickly board their space ships and come see us.

It's somewhat comforting to think the alien planet could also be in a perpetual bureaucratic budget crisis and they've dismantled their space program to make more room for tax cuts.

Politics could save us from an alien invasion!

2
FiatLuxDave 11 hours ago 8 replies      
I had a conversation a few years ago with Buzz Aldrin. He was talking about his idea for a Mars Cycler which would travel continuously between Earth and Mars. I told him that I thought it was a great idea (especially as it would be investing in 'permanent' space infrastructure instead of a single-shot mission) but that I thought it was unlikely that the government would allocate enough resources to build it. He seemed very disappointed with me, as if by making a realistic assessment of today's politics I was voting for it to be that way. I'm all for spending money in Chryse Planitia instead of Helmand province. So is almost everyone I know. But I feel like the chance of the US government actually funding something serious in space is pretty much nil. And I have no idea how to go about changing that.
3
maxxxxx 12 hours ago 4 replies      
I have been watching this since 2000. New president comes in, scraps old programs, declares new "vision". NASA does a few incoherent things and the whole thing restarts after a few years. It's pretty sad. I wish they would commit to something and actually finish it.
4
thearn4 11 hours ago 4 replies      
I'm a bigger fan of putting a semi-permanent ISS 2.0 on the surface of the moon vs. boots on Mars. I don't work directly in exploration systems, but I'm not the only one at NASA who feels this way.

But more than anything, I think we and the other executive agencies would take any strong commitment from Congress on an exploration and human spaceflight direction that survives across presidential administrations over any specific technical consideration.

I.e. we're waiting for strong elected leadership.

5
Robotbeat 10 hours ago 2 replies      
The main thing NASA needs to land people on Mars (or the Moon for that matter) is a lander. NASA does not have one, nor is one being funded. All the other details for Mars can be done with variants of what already exists or will fly shortly (commercial crew vehicles or even Soyuz, ISS modules for a transfer craft, launch vehicles like the EELVs used by the military or Falcon 9 or Falcon Heavy, in-orbit docking and propellant transfer which is commonly used on the Space Station, etc). If you see a lander being developed and tested, then you know you have a serious human space exploration program.

NASA has sufficient funding for accomplishing a human Mars landing. But not the political freedom to direct that funding where it's most critical (i.e. a lander).

SpaceX, on the other hand, is developing this technology for a lander. Their reuse technology for Falcon 9 proved for the first time the feasibility of supersonic retropropulsion, a CRITICAL technology needed for a human-scale Mars lander. A vertically landing reusable upper stage, which SpaceX intends to develop next (after block 5 Falcon 9) as part of their Mars rocket plans, is essentially a Mars lander prototype.

SpaceX, even though they have less funding and have to rely on funding from commercial launches (as well as capital used for developing commercially viable hardware, like the constellation) to develop their Mars lander, is thus on a better and surer path to Mars than NASA.

This is SpaceX's Mars architecture in a nutshell: https://www.youtube.com/watch?v=0qo78R_yYFA

In order to pay for it, they will develop a smaller (but still tremendously huge) and more economical version of the rocket shown in that video to replace Falcon 9 and Falcon Heavy. They will use it to launch and maintain their 12,000 satellite megaconstellation (thousands of satellites per year), something that would BARELY be feasible with their partially reusable Falcon architecture (but not feasible with expendable rockets) but which fits nicely and economically into the capability of their subscale Mars rocket. This way, they can leverage capital they'll raise for their megaconstellation to build the primary pieces of their human Mars transportation architecture.

6
John23832 12 hours ago 1 reply      
I think anyone who remotely follows space exploration or NASA knew this.
7
Tepix 11 hours ago 2 replies      
NASA has enough funding, they are just spending it on SLS and Orion (*). It appears that if they were to pay the "new space" companies to get them to Mars, the money would be enough.

(*) because Congress wants them to, because ... jobs (as if "new space" companies weren't creating jobs, too... even competitive ones)

8
Asdfbla 8 hours ago 0 replies      
While this would certainly be a blow for science in general, I don't understand why people are so enamoured with human space travel and think it's a realistic avenue for humanity to get out of the responsibility we have for Earth (there's the strange defeatist sentiment on the internet that we have to leave this planet in the foreseeable future).

Fact is, the laws of physics probably dictate that we won't ever leave the solar system and in our solar system there's not much we can work with to make the other planets habitable. It's comparatively soooo much easier to simply make life sustainable on Earth and then figure out space travel in the thousands/millions of years we have left until some external disaster (asteroid, exploding sun, whatever) threatens us. In the meantime, we can explore space efficiently with robots.

9
jankotek 11 hours ago 1 reply      
Better science could be done with automated machines. We could explore the entire solar system for the price of a single mission.
10
kilroy123 8 hours ago 0 replies      
Of course, they don't have the money. They literally need up to 100 billion USD to make it happen. Alternatively, partner with China and a few other countries.

We could wait until private enterprises can get us there, but that probably won't happen for a long time.

http://www.houstonchronicle.com/news/houston-texas/houston/a...

11
Navigator 6 hours ago 0 replies      
Considering NASA costs next to nothing (about 0.5% of the US govt's total budget), and the studies I've seen referenced show its return on investment to be about $10 for every $1 used (granted, it's a difficult figure to calculate, but even if assuming a huge error margin that's still great ROI), it's no wonder you chose to post that anonymously.
12
valuearb 2 hours ago 0 replies      
If NASA built their manned space program around the SpaceX Falcon Heavy and Blue Origin New Glenn (and future uprated versions of both), they could start launching crewed vehicles into deep space next year at less than 1/10 the cost of the SLS that won't be launching humans for at least 4 years.

It's not just that they have been held hostage by congress to build the SLS as a pork delivery service. They've also become risk averse. The Saturn V was built with "all up" testing, rushed to testing a completed rocket instead of focusing on component testing. They only flew two Saturn V unmanned missions before they launched one with men on it. Today, SpaceX has launched the Falcon 9 over 20 times, and has a capsule with the safest abort mode ever, and NASA still hasn't man-rated it.

NASA could take a fraction of the money they are spending on the SLS, and start doing monthly deep space launches by the end of next year. They could use 140,000 lb capacity Falcon Heavies and Dragon Capsules to do lunar missions. They could put astronauts back on the moon, build a constantly manned moon base, develop and test rovers and other equipment they want to use on mars.

Astronauts would be lined up to volunteer, even if the Falcon Heavy only has unmanned two test flights. They are far more rational judges of what the safety levels should be than the PR department at NASA.

Then within a few more years, NASA could shift to doing Mars missions when SpaceX and Blue Origin or anyone else can start giving them 300,000+ lb cargo capacity launches for less than $1,000/lb. At that price again they could average a dozen or two dozen launches a year. All that launch capacity would enable them to launch a group of Aldrin Cyclers to provide regular transport to mars and back with heavy radiation shielding, supply storage and room for big crews. Other robot launches can pre-cache supplies, equipment and return fuel on Mars.

But they can never do it using the SLS path. It's going to start off costing near $20,000 per lb for LEO access, and even the later versions will still cost over $10,000 per lb. That just makes Mars missions almost economically impossible. The SLS could only do an Apollo style program, where a decade from now they launch a handful of all-in-one missions (two orbiters, a couple that land) before congress wilts under the enormous costs.
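
(A rough sanity check of those $/lb figures in Python; the per-launch prices and payloads below are assumed ballpark numbers for illustration, not figures from this comment or official NASA/SpaceX pricing.)

    # $/lb to LEO = launch price / payload capacity
    launches = {
        "Falcon Heavy (assumed ~$90M, ~140,000 lb to LEO)": (90e6, 140_000),
        "SLS Block 1 (assumed ~$1.5B, ~150,000 lb to LEO)": (1.5e9, 150_000),
    }
    for name, (price, payload_lb) in launches.items():
        print(f"{name}: ~${price / payload_lb:,.0f}/lb")
    # Falcon Heavy ... ~$643/lb
    # SLS Block 1 ... ~$10,000/lb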

13
Kazamai 11 hours ago 3 replies      
I don't see the purpose of landing people on Mars. Just to say, "we did it". Wouldn't it be a much more rewarding goal to research and execute systems that could send humans one way to planets in our galaxy? Even seeding organisms on other planets in the hope that they evolve into intelligent life.
14
squarefoot 9 hours ago 1 reply      
If the bean counters at NASA read "Buy Jupiter!" by Isaac Asimov they'd already have the solution at hand. Ok, building flying billboards is still a bit hard, but advertising is the point. What about sending probes and ships named after the highest bidder's name/company? Of course they would also have a name for the scientific community and those of us who would never ever accept saying "Coca Cola has landed on Mars".
15
veeragoni 12 hours ago 2 replies      
Bill Nye has a different argument: https://youtu.be/5ekUbzciyKg
16
coss 9 hours ago 2 replies      
I like to imagine a world where taxpayers can choose where they wish to allocate their taxes.
17
ninguem2 11 hours ago 0 replies      
From the body of the article, it seems that landing humans on Mars is not the problem. It's bringing them back.
18
MightyPowerful 6 hours ago 0 replies      
Come on Trump! Are you a man or what? Are you really going to allow other countries to beat us in yet another thing? Get these guys the funding we need to make America great again!
19
beachbum8029 7 hours ago 0 replies      
Shouldn't we be researching ways to terraform Mars from afar rather than ship a couple humans to go live in the middle of red rocks for a few decades and then die?
20
princetontiger 7 hours ago 0 replies      
Last I checked, Bolden/NASA has spent time gallivanting around the world trying to be inclusive (Middle East, Asia, Africa, etc.). NASA is a larger allegory for the USA. This entire country is toast in 100 years. It's similar to the last Spanish galleons leaving Cordoba. In 2011, we had a similar Galleon Moment. STS-135 will end up being the last flight from NASA, ever. To digest this fact makes me extremely sad.

There is a reason that the UK did not have a space program, but led the exploration of the West in the 1800s. Britons had intestinal fortitude during the Victorian era, and this same urge moved to Americans after WW2.

21
indigo0086 11 hours ago 2 replies      
And it should never have the funding. Let the private sector invest in that venture.
22
louithethrid 6 hours ago 0 replies      
NSA admits it does have the funding to land humans on Mars - but lacks the A to go there
23
spiritomb 12 hours ago 1 reply      
such a waste of time and (other ppl's) money.
24
BlackjackCF 11 hours ago 2 replies      
Is it possible for NASA to Kickstart this? I'd throw money at them.
25
shams93 10 hours ago 1 reply      
Landing humans on other planets is the wrong way to go, at least at this stage of technological development. You combine tele-present robots with upcoming quantum teleportation of photons and you have instant communication between the drone on Mars and the human operator on Earth. It's going to cost even more to terraform Mars to make it even remotely do-able for human habitation.
19
Life Is About to Get Harder for Websites Without HTTPS troyhunt.com
281 points by finnn  2 days ago   349 comments top 19
1
userbinator 1 day ago 5 replies      
I'm most worried about the "long tail" of often very interesting, useful, and rare content (a lot of it from a time when the Internet was far less commercialised) that is unlikely to be hosted on HTTPS, and whose owner may have even forgotten about or can't be bothered to do anything about, but which still serves a purpose for visitors. The "not secure" label will drive a lot of visitors away, and even lead to the death of many such sites.

Imagine someone who knew enough to set up a site on his own server a long time ago and had left it alone ever since. Maybe he'd considered turning it off a few times, but just couldn't be bothered to. Now he suddenly gets contacted by a bunch of people telling him his site is "not secure". Keep in mind that he and his visitors are largely not highly knowledgeable in exactly what that means, or what to do about it. It could push him over the edge.

...and then there's things like http://www.homebrewcpu.com/ which might never have existed if HTTPS was strongly enforced all along.

I understand the security motivation, but I disagree very very strongly with these actions when it also means there's a high risk of destroying valuable and unique, maybe even irreplaceable content. In general, I think that security should not be the ultimate and only goal of society, contrary to what seems the popular notion today. It somewhat reminds me of https://en.wikipedia.org/wiki/Slum_clearance .

(I also oppose the increased centralisation of authority/control that CAs and enforced HTTPS will bring, but that's a rant for another time...)

2
y0ghur7_xxx 1 day ago 5 replies      
I hope LANs are excluded? I'm scared that I will get security warnings everywhere in my LAN.

- when I log in to my webcams it says the connection is not secure

- when I log in on my nas it says the connection is not secure

- when I log in on my router it says the connection is not secure

- when I log in on the web interface of mythtv it says the connection is not secure

- when I log in on my self hosted gitea instance it says the connection is not secure

- when I log in to my self hosted nextcloud it says the connection is not secure

- when I log in to the configuration page of my toaster it says the connection is not secure

All these things are on my lan, and on most things there is no way to install a tls cert on them, nor would I want to do that.

Firefox already nags me that the connection is not secure when I enter a username and a password in any of those sites.

3
eliben 1 day ago 5 replies      
Serious question: if I just run a simple blog with static HTML hosted with Apache, do I really need HTTPS? Will I be penalized by not having it?
4
vmp 1 day ago 4 replies      
Off-topic: If only IPv6 adoption had as much momentum as HTTPS.
5
cryo 1 day ago 19 replies      
HTTPS is a pain in the neck and _currently_ I hate it from the bottom of my heart.

TL;DR: if you have a commercial service or device running in a local network, forget HTTPS and service workers; use HTTP and HTML5 appcache.

-- RANT starts here --

It would be lovely when every website and webapp uses HTTPS. But for a significant amount of them it's just not f..... possible without driving users completely insane.

If the HTTPS server doesn't (and never will) have a public domain, forget about encryption and security, forget about using service workers. The following examples can't, for the love of god, ever provide HTTPS without completely f..cking up user experience due to self-signed certificate warnings:

1) internal corporation services, websites and webapps.

2) services that run in a local private network like on a Raspberry Pi.

3) webapps which are served via a public HTTPS website, but need to talk via CORS to local unsecured services, like a Philips hue bridge, or any other IoT device which is in the local network but only provides HTTP. These will enlighten the users with a shiny mixed-content warning.

.... JUST use self-signed certificates, they said.

NO.

For normal users the UX of self-signed certificates is just non-existent, it's a complete mess! It will scare the sh't out of users and will almost always look like your service is plain malware.

It looks much more secure to serve a good'ol HTTP site with no encryption at all.

6
makecheck 1 day ago 0 replies      
I hope they did some user testing to see how people actually behave in the presence of such warnings but in my experience it does nothing. Worse, it's in an environment that is already rife with little messages in corners trying to get your attention (ads) so users may be more "blind" when browsing than usual.

The success of "Let's Encrypt" suggests that a key part of the problem wasn't a lack of user complaints about security. Rather, it was a lack of a sane model (both technically and economically) for setting up and maintaining certificates. In the end, people maintaining sites already had 100 other things to worry about and weren't going to get around to HTTPS with anything less.

7
sebcat 1 day ago 2 replies      
I wish people would stop equating "secure" with "HTTPS".
8
milankragujevic 1 day ago 4 replies      
With Cloudflare's easy-to-use free SSL first and later Let's Encrypt, I think there are no more excuses for not being secure.
9
a_imho 1 day ago 1 reply      
I deploy ssl on all my sites, but imo the article is way overestimating the importance of browser notifications.
10
wfunction 1 day ago 2 replies      
How is a gateway serving a configuration page at 192.168.1.1 to internal users supposed to eventually get an HTTPS certificate for that address...?
11
daxfohl 18 hours ago 0 replies      
How about a warning in Chrome that says "You're about to use Chrome to visit this website, and thus send everything about yourself to Google to do whatever they want with", for all websites starting in Chrome ~67?
12
TekMol 1 day ago 6 replies      
How hard is it to provide HTTPS these days?

Say you have a plain Debian 8 install, running a typical LAMP stack serving a single domain.

If you want to make it use a LetsEncrypt cert and serve the domain over HTTPS - what would be the minimum number of steps on the command line to make it do that?

13
epalmer 1 day ago 2 replies      
I have been anticipating this but have had better things to spend my limited time on. I have more than 135 sites I need to convert to https and they are load balanced. I don't think letsencrypt handles load balanced sites yet. My management is against wildcard certs. This might push them over the edge in favor of wildcard certs.
14
KevinEldon 1 day ago 0 replies      
HTTPS gives your ISP less of your information to collect, analyze and sell to advertisers which in turn protects the value of Google's information about you. I think the changes to Chrome are well-intentioned, but can't help but smile at how this side-effect favors Google's business.
15
gator-io 1 day ago 0 replies      
Here's another take on how much of the web is HTTPS:

https://truemarketshare.com/report?id=https

16
kylehotchkiss 1 day ago 1 reply      
Bleh. Wish I could use ssl on my GitHub pages site with custom domain.
17
fatzombi_ 1 day ago 2 replies      
what about self-signed certificates? wouldn't it be great if these websites were treated like http ones, without any security flags
18
idibidiart 1 day ago 4 replies      
19
chillingeffect 1 day ago 3 replies      
Bit of a scare tactic. Page is an ad for Mr. Hunt's $299 course.

It's all true. However, I would make the case for Pat Q. Mainstream feeling less alarmed by "Not Secure" messages than most HN readers.

Note the Twitter example is from Mr. Hunt, not a random internet user.

20
Toward a Reasonably Secure Laptop qubes-os.org
337 points by doener  2 days ago   98 comments top 11
1
HugoDaniel 2 days ago 2 replies      
"Finally, we are going to require that Qubes-certified hardware does not have any built-in USB-connected microphones (e.g. as part of a USB-connected built-in camera) that cannot be easily physically disabled by the user, e.g. via a convenient mechanical switch. However, it should be noted that the majority of laptops on the market that we have seen satisfy this condition out of the box, because their built-in microphones are typically connected to the internal audio device, which itself is a PCIe type of device. This is important, because such PCIe audio devices are by default assigned to Qubes (trusted) dom0 and exposed through our carefully designed protocol only to select AppVMs when the user explicitly chooses to do so."

This made me download Qubes. Amazing project that seems to care.

2
x86insecure 2 days ago 5 replies      
There are things we can do to help get us out of this Intel ME rut.

* Let AMD know that open-sourcing/disabling PSP is important to you [1].

* Contribute to RISC-V. You can buy a RISC-V SoC today [2]. Does your favorite compiler have a RISC-V backend?

[1] https://www.reddit.com/r/linux/comments/5xvn4i/update_corebo...

[2] https://www.sifive.com/products/hifive1/

3
cyphar 2 days ago 0 replies      
> Another important requirement we're introducing today is that Qubes-certified hardware should run only open-source boot firmware (aka the BIOS), such as coreboot.

I recently flashed coreboot on my X220 (and it worked surprisingly enough). However, I couldn't find any solid guides on how to set up TianoCore (UEFI) as a payload -- does Qubes require Trusted Boot to be supported on their platforms (I would hope so)? And if so, is there any documentation on how to set up TianoCore as a payload (the documentation is _sparse_ at best, with weird references to VBOOT2 and U-Boot)?

Otherwise I'm not sure how a vendor could fulfill both sets of requirements.

4
d33 2 days ago 10 replies      
If I read that right, they're allowing Intel ME, which sounds like a sad compromise to me. Given that it's a pretty big complex black box that one can't easily disable, would you agree that x86 is doomed when it comes to security? If that's the case, is there any hope we could have a CPU with competitive capabilities? (For example, is there an i7 alternative for ARM?)

What could one do to make it possible to have ME-less x86 in the future?

5
Taek 2 days ago 3 replies      
Is this something we could achieve with a corporate alliance? I know a lot of tech companies would like to give their employees secure laptops. I also know that there are large costs associated with making hardware, especially if you are talking about dropping ME.

A dozen companies with 1000 employees each and a budget of $2,500 per employee gets you $30 million, which is surely enough to get a decent, qubes-secure laptop with no ME. You aren't going to be designing your own chips at that point, but you could grab power8 or sparc or arm.

Are there companies that would reasonably be willing to throw in a few million to fund a secure laptop? I imagine at least a few. And maybe we could get a Google or someone to put in $10m plus.

6
ashleysmithgpu 2 days ago 5 replies      
Looks like Qubes makes you pay to get certified: https://puri.sm/posts/ "The costs involved, requiring a supplementary technical consulting contract with Qubes/ITL (as per their new Commercial Hardware Goals proposal document), are not financially justifiable for us."
7
Aissen 2 days ago 1 reply      
> The vendor will also have to be willing to freeze the configuration of the laptop for at least one year.

This is one of the most important points. The speed at which laptop vendors are releasing new SKUs is staggering. I know the whole supply chain is to blame, but apart from a few models, the number of different SKUs is way too high.

8
notacissp 2 days ago 0 replies      
This article helped me get up and running with Qubes:

https://medium.com/@securitystreak/living-with-qubes-os-r3-2...

9
digi_owl 2 days ago 1 reply      
Once more i get the impression that computer security people are off in a different universe where a computer at the bottom of the ocean is a "reasonable" way to do computing.
10
listic 2 days ago 0 replies      
Looks like even Purism is not interested in certifying compatibility with Qubes anymore. That's sad.
11
awinter-py 2 days ago 0 replies      
It's a shame that chromebook's boot verification isn't easily extensible to open source.
21
ZFS Is the Best Filesystem For Now fosketts.net
290 points by ingve  1 day ago   257 comments top 22
1
floatboth 1 day ago 13 replies      
> ZFS never really adapted to today's world of widely-available flash storage: Although flash can be used to support the ZIL and L2ARC caches, these are of dubious value in a system with sufficient RAM, and ZFS has no true hybrid storage capability.

How is L2ARC not "true hybrid"?

> And no one is talking about NVMe even though it's everywhere in performance PCs.

Why should a filesystem care about NVMe? It's a different layer. ZFS generally doesn't care if it's IDE, SATA, NVMe or a microSD card.

> can be a pain to use (except in FreeBSD, Solaris, and purpose-built appliances)

I think it's just a package install away on many Linux distros? Also installable on macOS I had a ZFS USB disk I shared between Mac and FreeBSD.

Also it's interesting that these two sentences appear in the same article:

> best level of data protection in a small office/home office (SOHO) environment.

> It's laughable that the ZFS documentation obsesses over a few GB of SLC flash when multi-TB 3D NAND drives are on the market

Who has enough money to get a multi-TB SSD for SOHO?!

2
mixmastamyk 1 day ago 4 replies      
I've been disappointed in linux filesystems and Intel hardware lately. Little integrity checking in ext4, and btrfs is still having growing pains. A recent search for a svelte laptop with ECC memory yielded nothing. Sheesh, wasn't this stuff invented like 30+ years ago?

I understand Intel is segmenting reliability into higher-priced business gear, but as a developer that depends on this stuff for their livelihood the current status quo is not acceptable.

Linux should have better options since profit margins are not an impediment.

3
peapicker 1 day ago 2 replies      
ZFS, at least on Solaris, has issues with many simultaneous readers of the same file, blocking after ~31 simultaneous readers (even when there are NO writers). Ran into this with a third party library which reads a large TTF to produce business PDF documents. The hundreds of reporting processes all slowed to a crawl when accessing the 20Mb Chinese TTF for reporting because ZFS was blocking.

I can't change the code since it is third party. The only way I saw to easily fix it was, on system startup, to copy the fonts under a new subdir in /tmp (so in tmpfs, i.e. RAM, no ZFS at all there) and then softlink the dir the product was expecting to the new dir off of /tmp, eliminating the ZFS high-volume multiple-reader bottleneck.

Never had this problem with the latest EXT filesystems on my volume groups on my Linux VMs with the same 3rd party library and same volume of throughput.
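For what it's worth, that startup workaround is easy to script. Here is a minimal sketch in Python; the paths and the product's font directory are purely hypothetical placeholders, only the copy-to-tmpfs-and-symlink trick itself comes from the description above:

    import os
    import shutil

    # Hypothetical paths; the real font directory depends on the third-party product.
    FONT_DIR = "/opt/reporting/fonts"        # directory the product expects
    BACKUP_DIR = FONT_DIR + ".on-zfs"        # original ZFS-backed copy, kept as the source of truth
    TMPFS_DIR = "/tmp/reporting-fonts"       # lives in tmpfs (RAM), so ZFS is never touched at read time

    def relocate_fonts_to_tmpfs():
        # First run only: move the real directory aside and replace it with a symlink.
        if not os.path.islink(FONT_DIR):
            shutil.move(FONT_DIR, BACKUP_DIR)
            os.symlink(TMPFS_DIR, FONT_DIR)
        # Every boot: repopulate tmpfs from the ZFS-backed copy (/tmp is cleared on reboot).
        if not os.path.isdir(TMPFS_DIR):
            shutil.copytree(BACKUP_DIR, TMPFS_DIR)

    if __name__ == "__main__":
        relocate_fonts_to_tmpfs()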

4
Mic92 1 day ago 5 replies      
The article does not mention bcachefs as a future alternative: http://bcachefs.org/
5
conductor 1 day ago 1 reply      
DragonFlyBSD's HAMMER [0] is another viable alternative.

Unfortunately the next generation HAMMER2 [1] filesystem's development is moving forward very slowly [2].

Nevertheless, kudos to Matt for his great work.

[0] https://www.dragonflybsd.org/hammer/

[1] https://gitweb.dragonflybsd.org/dragonfly.git/blob_plain/HEA...

[2] https://gitweb.dragonflybsd.org/dragonfly.git/history/HEAD:/...

6
alyandon 1 day ago 5 replies      

 "Once you build a ZFS volume, its pretty much fixed for life."
The ease of growing/shrinking existing volumes and adding/removing storage is why I made the decision to go with btrfs when I rebuilt my home file server.

7
Perseids 1 day ago 1 reply      
(Near) zero-cost snapshots and filesystem-based incremental backups are amazing. Just today I was saved by my auto snapshots [1]. Apparently I didn't `git add` a file to my feature branch and without the snapshot I wouldn't have been able to recover it after some extensive resetting and cleaning before I switched back to the feature branch. It's really comforting to have this easy to access [2] safety net available at all times.

Now that Ubuntu has ZFS built in by default, I'm seriously considering switching back, and since I too have been burned by Btrfs, I guess I'll stay with ZFS for quite some time. Still, the blog post's criticism is fair, e.g. I was only able to get the RAM usage under control after I set hard lower and upper limits for the ARC as kernel boot parameters (`zfs.zfs_arc_max=1073741824 zfs.zfs_arc_min=536870912`).

[1] https://github.com/zfsonlinux/zfs-auto-snapshot

[2] The coolest feature is the virtual auto mount where you can access the snapshots via the magical `.zfs` directory at the root of your filesystem.

8
Veratyr 1 day ago 8 replies      
This might be somewhat off topic but I'm desperate. I've been looking for a way to store files:

- Using parity rather than mirroring. I'm happy to deal with some loss of IOPS in exchange for extra usable storage.

- That deals with bitrot.

- That I can migrate to without somehow moving all of my files somewhere first (i.e. supports addition/removal of disks).

- Is stable (doesn't frequently crash or lose data)

- Is free or has transparent pricing (not "Contact Sales").

- Ideally, supports arbitrary stripe width (i.e. 2 blocks data + 1 block parity on a 6 disk array)

Unfortunately it doesn't appear that a solution for this exists:

- ZFS doesn't support addition of disks unless you're happy to put a RAID0 on top of your RAID5/6 and it doesn't support removal of disks at all when parity is involved. It is possible to migrate by putting giant sparse files on the existing storage, filling the filesystem, removing a sparse file, removing a disk from the original FS and "replacing" the sparse file with the actual disk but this is somewhat risky.

- BTRFS has critical bugs and has been unstable even with my RAID1 filesystem.

- Ceph mostly works but I always seem to run into bugs that nobody else sees.

- I couldn't even figure out how to get GlusterFS to create a volume.

- MDADM/hardware RAID don't deal with bitrot.

- Minio has hard coded N/2 data N/2 parity erasure coding, which destroys IOPS and drastically reduces capacity in exchange for an obscene level of resiliency I don't need.

- FlexRAID either isn't realtime or doesn't deal with bitrot depending which version you choose.

- Windows storage spaces are slow as a dog (4 disks = 25MB/s write).

- QuoByte, the successor to XtreemFS has erasure coding but has "Contact Us" pricing and trial.

- Openstack Swift is complex as hell.

- BcacheFS seems extremely promising but it's still in development and EC isn't available yet.

I'm currently down to fixing bugs in Ceph, modifying Minio, evaluating Tahoe-LAFS and EMC ScaleIO or building my own solution.

9
cryptonector 1 day ago 1 reply      
Illumos has a way to expand pools, FYI. IDK if that's in OpenZFS yet.

It works thusly: ZFS creates a vdev inside the new larger vdev, then moves all the data from the old vdev to the new vdev, then when all these moves are done the nested vdevs are enlarged.

What should originally have happened is this: ZFS should have been closer to a pure CAS FS. I.e., physical block addresses should never have been part of the ZFS Merkle hash tree, thus allowing physical addresses to change without having to rewrite every block from the root down.

Now, the question then becomes "how do you get the physical address of a block given just its hash?". And the answer is simple: you store the physical addresses near the logical (CAS) block pointers, and you scribble over those if you move a block. To move a block you'd first write a new copy at the new location, then overwrite the previous "cached" address. This would require some machinery to recover from failures to overwrite cached addresses: a table of in-progress moves, and even a forwarding entry format to write into the moved block's old location. A forwarding entry format would have a checksum, naturally, and would link back into the in-progress-move / move-history table.

During a move (e.g., after a crash during a move) one can recover in several ways: you can go use the in-progress-moves table as journal to replay, or you can simply deref block addresses as usual and on checksum mismatch check if you read a forwarding entry or else check the in-progress-moves table.

For example, an indirect block should be not an array of zfs_blkptr_t but two arrays, one of logical block pointers (just a checksum and misc metadata), and one of physical locations corresponding to blocks referenced by the first array entries. When computing the checksum of an indirect block, only the array of logical block pointers would be checksummed, thus the Merkle hash tree would never bind physical addresses. The same would apply to znodes, since they contain some block pointers, which would then have three parts: non-blockpointer metadata, an array of logical block pointers, and an array of physical block pointers.

The main issue with such a design now is that it's much too hard to retrofit it into ZFS. It would have to be a new filesystem.
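A toy sketch of the layout proposed above, in Python and purely illustrative (this is the commenter's hypothetical design, not ZFS's actual on-disk format): the Merkle checksum covers only the logical pointers, so a cached physical address can be scribbled over when a block moves without rewriting anything up the tree.

    from dataclasses import dataclass, field
    from hashlib import sha256
    from typing import List, Optional

    @dataclass
    class LogicalPointer:
        checksum: bytes          # content hash of the child block
        size: int                # plus whatever other metadata is needed

    @dataclass
    class IndirectBlock:
        logical: List[LogicalPointer] = field(default_factory=list)
        physical: List[Optional[int]] = field(default_factory=list)  # "cached" addresses, never hashed

        def merkle_checksum(self) -> bytes:
            h = sha256()
            for ptr in self.logical:             # physical addresses are deliberately excluded
                h.update(ptr.checksum)
                h.update(ptr.size.to_bytes(8, "little"))
            return h.digest()

        def move_child(self, index: int, new_address: int) -> None:
            # Moving a block only overwrites the cached address; the Merkle tree
            # (and therefore every ancestor block) is untouched.
            self.physical[index] = new_address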

10
gulikoza 1 day ago 1 reply      
The thing I'm struggling with is 4K sector support. It's horribly inefficient with ZFS. RAIDZ2 wastes a ton of space when pool is made with ashift=12. And everybody knows 512e on AF disks is horribly slow...so ZFS is either very slow or wastes 10% of total space...Or both (ZVOL :D)

According to some bug reports, nobody has touched this since 2011...

11
fulafel 21 hours ago 1 reply      
He is talking about "best level of data protection in a small office/home office (SOHO) environment".

Trying to do this with FS features is misguided.

You need to have backups, and have regular practice in restoring from backups.

Some organizations need fancy filesystems in addition to backups, because they want to have high availability that will bridge storage failures. But that has a high cost in complexity, you should only consider it if you have IT/sysadmin staff and the risk management says it's worth the investment in cognitive opportunity cost, IT infrastructure complexity and time spent.

12
snakeanus 1 day ago 1 reply      
I am really excited for bcachefs. It is also the only fs that has support for chacha20-poly1305 encryption.
13
Cieplak 1 day ago 1 reply      
On my current laptop, I'm seeing a 20% reduction in disk usage relative to the filesystem size because of ZFS's built-in compression.
14
carlob 1 day ago 2 replies      
> Many remain skeptical of deduplication, which hogs expensive RAM in the best-case scenario. And I do mean expensive: Pretty much every ZFS FAQ flatly declares that ECC RAM is a must-have and 8 GB is the bare minimum. In my own experience with FreeNAS, 32 GB is a nice amount for an active small ZFS server, and this costs $200-$300 even at today's prices.

I use nas4free with much less ram

15
jerry40 1 day ago 2 replies      
Does anybody use ZFS as a replacement for database backup/restore on a test environment? I'm not sure, but it seems that it's possible to use ZFS snapshots in order to quickly restore a previous database state. Note: it's just a question, I'm not advising anyone to try that.
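A minimal sketch of how that could look, assuming a PostgreSQL instance whose data directory lives on a hypothetical dataset named tank/pgdata; the zfs snapshot/rollback commands are real, everything else (dataset, snapshot and service names) is a placeholder:

    import subprocess

    # Hypothetical names; the database must be stopped (or otherwise quiesced)
    # during rollback so it does not hold its files open mid-restore.
    DATASET = "tank/pgdata"
    SNAPSHOT = DATASET + "@test-baseline"

    def zfs(*args):
        subprocess.run(["zfs", *args], check=True)

    def take_baseline():
        zfs("snapshot", SNAPSHOT)

    def restore_baseline():
        subprocess.run(["systemctl", "stop", "postgresql"], check=True)
        zfs("rollback", "-r", SNAPSHOT)   # -r discards snapshots taken after the baseline
        subprocess.run(["systemctl", "start", "postgresql"], check=True)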
16
Quequau 17 hours ago 0 replies      
I have to wonder what's going to happen once those storage-level random-access non-volatile memory technologies finally make it out of R&D and into the market.

I mean, as it is now it seems like we have a hard enough time dealing with comparatively simple hybrid memory systems.

17
thibran 1 day ago 0 replies      
Another future alternative, TFS: https://github.com/redox-os/tfs
18
Koshkin 1 day ago 4 replies      
A logical issue that I have with the existence of such filesystems as ZFS and BTRFS is that the problem of "bit rot" should be addressed at a lower abstraction level - hardware or the driver - rather than at the level that should be primarily responsible for user-visible organization of files, directories, etc.
19
cmurf 1 day ago 0 replies      
Btrfs might just become the ZFS of Linux but development has faltered lately, with a scary data loss bug derailing RAID 5 and 6 last year and not much heard since.

It was not a per se data loss bug. It was Btrfs corrupting parity during scrub when encountering already (non-Btrfs) corrupted data. So a data strip is corrupt somehow, a scrub is started, Btrfs detects the corrupt data and fixes it through reconstruction with good parity, but then sometimes computes a new wrong parity strip and writes it to disk. It's a bad bug, but you're still definitely better off than you were with corrupt data. Also, this bug is fixed in kernel 4.12.

https://lkml.org/lkml/2017/5/9/510

Update, minor quibbles:

> lacking in Btrfs is support for flash

Btrfs has such support and optimizations for flash; the gotcha, though, if you keep up with Btrfs development, is there have been changes in FTL behavior and it's an open question whether or not these optimizations are effective for today's flash, including NVMe. As for hybrid storage, that's the realm of bcache and dm-cache (managed by LVM), which should work with Btrfs as with any other Linux file system.

> ReFS uses B+ trees (similar to Btrfs)

XFS uses B+ trees, Btrfs uses B-trees.

20
throw2016 1 day ago 3 replies      
The filesystem as basic infrastructure has to be robust and fuss free. The complex stuff is going to be built on top of that.

After years of btrfs I realized that, while all the features around snapshotting, send/receive etc. are great, the cost in performance and other issues is too high.

And using plain old ext4 is more often than not the best compromise, so you can forget about the fs and focus on higher layers.

21
moonbug22 1 day ago 2 replies      
I'll stick with GPFS, thanks.
22
zzzcpan 1 day ago 2 replies      
Ceph, gluster, object storages, all would do a better job serving SOHO. ZFS is a 90s way of thinking about storage, "a box" way. I don't think it deserves any of that HN hype.
22
Snap falls to IPO price usatoday.com
280 points by prostoalex  3 days ago   238 comments top 21
1
habosa 3 days ago 7 replies      
$17.00 was the IPO price but only for investors with access. Your average investor with an eTrade account saw a price of $24.00+ when the market opened that morning, and it hit almost $27.00 that day. So those folks have seen a 30%+ drop since IPO.

Given that Snap paid out billions in IPO bonuses to executives and other employees, it's turned out to be a pretty big wealth transfer from retail investors to Snap employees.

2
skywhopper 3 days ago 4 replies      
As skeptical as I am that Snap will ever be a moneymaker, this data point is not meaningful in any way. Facebook traded below (often _well_ below) its IPO price for the first 15 months on the market.
3
cft 3 days ago 1 reply      
Financially it does not matter for the founders that much anymore. Significant wealth has already been transferred and cashed out:

http://www.nasdaq.com/quotes/insiders/spiegel-evan-1016829

4
beager 3 days ago 11 replies      
What does $SNAP need to do to deliver on the hype? Is there anything that can make $SNAP a good investment for anyone other than the parties involved in trading the IPO?

I tend to be bearish on $SNAP in general, but I'm interested in the discussion. How do they right the ship and boost back up to that $25-30 range? What's their play?

5
mmanfrin 3 days ago 1 reply      
Pardon my language but: no fucking shit. $20bil was unbelievably overpriced.

This is the one major tech stock that I simply do not get. ~$125 a user is insane.

6
stevenj 3 days ago 0 replies      
I think Snap's stock price will continue to fall (to under $10) and then at some point it'll be acquired, possibly by an Asian company or investment group.
7
JumpCrisscross 3 days ago 1 reply      
Lock-up expires 29 August [1]. On the other hand, they have $3.2bn of cash and short-term investments (as of 31 March) while burning about $600MM of cash from their operations (FYE 31 December 2016) a year. That's 5 years to get it right.

EDIT: insider lock-up at T+150 comes in at the end of July.

[1] http://www.nasdaq.com/markets/ipos/company/snap-inc-899497-8...

8
korzun 3 days ago 2 replies      
I think it is safe to say that the IPO was a total scam. The company was never profitable, and numbers never made any sense.

I have a bridge in Brooklyn up for grabs (cheap) if you still think the valuation was based on ridiculous data points such as active users, etc.

People already lined their pockets up and you will be reading another P.R piece on how great of a businessman Evan is within the next couple of months.

Spend all of the funding on aggressive marketing to get the numbers up pre-IPO, file for an IPO and cash out. Rinse & repeat.

9
chollida1 3 days ago 0 replies      
Not sure this is really news worthy.

They've been heavily shorted for a while now and their puts are expensive. But really 1 Billion new shares could flood the market in just a few more weeks.

To put it in perspective I believe Twitter, the other poster child for giving away options, had about half that amount.

Even the IPO underwriters are starting to crack. Credit Suisse used the stock drop to keep an "outperform" rating on the stock while lowering its target from $30 to $25.

IMHO Snap will do just fine for the next year. They are big enough that companies will carve out a piece of their marketing budgets for Snap. It won't be until a year later, when they have enough data on how well their Snap advertising is working, that they'll decide if it's worth it or not.

10
zitterbewegung 3 days ago 1 reply      
Following the fluctuations in Snapchat's stock price is somewhat interesting. To be honest, though, I'm getting fatigued by this. I remember when Facebook IPOed and we were getting similar posts. Whether the money Snap raised in the IPO makes it more competitive is something we'll have to wait a few quarters to see, though.
11
bsaul 3 days ago 0 replies      
I wonder if the community realizes how bad those kinds of overhyped companies make us look to the general audience, with founders cashing out shortly after the stock gets public and everybody realizing valuations were simply absurd.

I don't think we'll have to wait long before we see traditional investment funds and banks unwilling to take part in that game anymore.

12
andirk 3 days ago 0 replies      
With no knowledge other than using the app and from general bar talk, I saw this as a huge flop. I wish I shorted it. It may have potential, but I don't see what that potential is when their features are so incredibly easy to mimic. And no, SnapChat, I don't care what Puff Daddy did with Beyonce or whatever it is their news feed tells me.
13
dfrey 3 days ago 6 replies      
snapchat baffles me. It's just another instant messaging platform except that they came up with the idea of messages that delete themselves after a time period. The problem is that this killer feature is easily subverted by taking a picture of your screen. So basically, they have provided an instant messaging platform with one extra useless feature.
14
havella 3 days ago 0 replies      
As an exercise for the reader, it would be interesting to test the performance of a buy-and-hold strategy on stocks a) going below the IPO price, b) going below 50% of the IPO price. A second filter that could be applied is the time span between the first day of trading and such an event.
15
Rjevski 3 days ago 2 replies      
I can't understand what they are doing to not make any profits. They are selling ads. Server resources to exchange the pictures are a minor cost, so where is all that ad money going?

If they stopped wasting money on development time making their UX even worse, or stupid stuff like Spectacles, or this: https://www.recode.net/2017/6/17/15824222/snapchat-ferris-wh... - maybe they would actually be making profits right now.

16
smpetrey 3 days ago 1 reply      
For your consideration, not a single share purchased is a voting share. SNAP's shares should be closer to $13.
17
gigatexal 3 days ago 0 replies      
maybe twitter should buy them
18
mmmpop 3 days ago 1 reply      
Invest in the things you love and use most! What could go wrong?!
19
opendomain 2 days ago 0 replies      
It is much below the IPO price now

currently 15.62 but trending lower

This is before the lockup period ends, when new stocks usually drop

20
nicolrx 2 days ago 0 replies      
All my friends are turning off their Snapchat accounts to keep using Instagram, which does the same and has better photo features.
21
jdavid 3 days ago 1 reply      
Based on revenue and flat growth, the company is worth about $4 a share. If it gets to that price I'll think about buying in, if and only if I think the company is going to turn it around.

I've made good money waiting for the time to be right before buying in. This stock is worthless above $8 a share.

23
p5.js A library to make coding accessible for artists, designers, educators p5js.org
342 points by joeyespo  2 days ago   75 comments top 27
1
bluetwo 1 day ago 3 replies      
I give them credit for putting together a fun explanation of what they are trying to do.

I've used Processing Java and Processing.js, so I assume this is some continuation or extension of those projects.

It might be more effective if the explanation focused on the benefits of using p5, rather than just saying "It makes a circle" or "It draws a slider" (which are features).

For instance, maybe some of these are true:

- p5 speeds the creation of animations on your site

- p5 allows beginners to create complex interactions compatible across devices

- p5 allows low-cost prototyping of game designs

- p5 shrinks site size by replacing videos with animations

Of course you would have to look to the community and beyond to figure of which resonate with potential users.

2
jamesrom 1 day ago 4 replies      
I've built all kinds of things with d3, I've been using it for over 4 years. It's a seriously great library.

However, there's a cognitive overhead of thinking in selections and update patterns... It's hard to remove state and make composable components that can work well in React, Angular, et al.

While p5 sounds like the answer, we've been stripping away imperative programming on the web for the best part of its history. Modern web development is more and more declarative every day... I just can't shake the feeling it's a step backwards.

3
nrjames 1 day ago 0 replies      
I made a few fun tools with p5.js a little over a year ago.

Image quilting: http://clayheaton.github.io/p5jsTiler/index.html

Genetic algorithm cartoon generator (that I never really finished): http://clayheaton.github.io/generative_cartoons/index.html

I love both Processing and p5.js. They're great tools for fun creative coding, and both are also useful for other types of prototyping and app development.

Some day I'll get around to extending the image quilting sketch to generate Wang tiles.

4
krat0sprakhar 1 day ago 2 replies      
p5js is an awesome library! If you're looking for ideas to play with, check out Dan Shiffman's Youtube Channel - Coding Train[0], a series in which he builds ML libraries, games and lots of fun mini-projects.

0: https://www.youtube.com/user/shiffman

5
NickRRau 1 day ago 1 reply      
For anyone who has previously read or seen Shiffman's book 'The Nature of Code', he's also ported the examples in the book(Processing) to p5.js

https://github.com/shiffman/The-Nature-of-Code-Examples-p5.j...

http://natureofcode.com/

6
cocktailpeanuts 1 day ago 5 replies      
I don't get why that guy in the video is so excited about this. Isn't this just some interactive JS animations overlaid on top of video?

I say this because at first I was also excited just by watching that guy get excited, and then suddenly I was like "wait..isn't this already possible with pretty much 100s of libraries out there?"

Maybe someone can explain what makes this unique (so much so that the guy is so excited about it)?

7
uptown 2 days ago 0 replies      
8
ryan-allen 2 days ago 1 reply      
Developer friendly intro: https://p5js.org/, examples https://p5js.org/examples/ (this library is ace!)
9
greggman 1 day ago 0 replies      
Both Processing and p5.js are amazing, but I'm curious: are they amazing by design, or by effort and luck?

Bret Victor went over many of the reasons why Processing is poorly designed in his opinion

http://worrydream.com/LearnableProgramming/

It kind of made me feel like its success was more right place at the right time rather than by design

Not that I have any hope of supplanting it with something better, any more than 8086 assembly being replaced by something more elegant.

note: much of that linked article is not related so search in page for Processing

10
a1371 1 day ago 1 reply      
At first I thought it was a video until he said that the clusters are avoiding his head. It is nice to have video interactivity like this.

Also, my opinion might not be popular but kudos to them for making the introduction so dumb-proof with the video and the visuals. More projects have to do this.

11
Xoros 1 day ago 1 reply      
There was a video??? Waited 40s and nothing happened (iPhone + Safari)
12
Joboman555 1 day ago 0 replies      
Link took over 10 seconds to load on iPad before I gave up.
13
thomasfl 1 day ago 0 replies      
One upvote for the interactive playground on top of a video of a friendly fellow named Dan that tells you what you can do.
14
franciscop 1 day ago 0 replies      
About the web design: please value readability a lot more; now it is quite difficult to read the text with that background. Besides that, Processing was really fun to play with back when I did, I wish you the best as well!
15
toisanji 1 day ago 0 replies      
Love p5js. I put together this site for practicing drawing with p5.js: you get challenges to draw: http://www.pushpopchallenge.com/
16
falsedan 1 day ago 0 replies      
Did did they render the text on the landing page as an image?

edit oh no, I see the <p> elements but the page makes the text non-selectable, and the right-click context menu acts like I clicked on the background image.

17
jwarren 1 day ago 0 replies      
I've seen Mike Brondbjerg speak a couple of times about iterative artistic development using Processing and p5.js. It's really cool stuff, and I'd suggest seeking it out if the area interests you.
18
aembleton 2 days ago 0 replies      
Firefox on MacOS and it sounds like it's about to take off.
19
bcruddy 1 day ago 0 replies      
I get that this is designed to facilitate "learning to code within the context of the visual arts" and I think that's great but holy shit tone down the javascript nonsense on your website. Granted, I'm not the target audience but the site was gorgeous before the animations started. Simple, complementary colors made me want to read the carefully chosen font face.
20
JelteF 1 day ago 0 replies      
Looks cool, but the page uses one full CPU core on Chrome on my Linux machine.
21
lousken 1 day ago 0 replies      
Can't click on links in Edge (except the top menu)
22
efficax 1 day ago 0 replies      
But the arrows don't point to their heads.
23
eng_monkey 1 day ago 0 replies      
They are really excited people.
24
desireco42 1 day ago 1 reply      
This is by far the most novel way to introduce a library that I've seen in a long time and also very impressive. It isn't accessible, and I don't think p5 can be accessible either, so I guess that is OK. I really bow to such an original presentation.
25
colemickens 1 day ago 0 replies      
I'm a bit surprised that there aren't more comments about following the submission link and then being dropped on, what is apparently a technical product/project, with nothing but a full screen, long-form, non-transcribed video. If I hadn't read the comments here, I would've never assumed it was something technical. And I'm still not going to watch that video.

Hopefully they can put together some text that I can digest accessibly...

26
jnbiche 1 day ago 3 replies      
Can we please get this link changed to the text intro, dang, or anybody? It's at: https://p5js.org/ It's a cool project, so I don't want to downvote it, but the sound is loud and not opt-in.

Or at least add a [video] tag or something. It woke up half my family.

27
breerly 1 day ago 0 replies      
I just spent 30 seconds looking at a loading spinner - no thx bye p5.js
24
Scientists Design Solar Cell That Captures Nearly All Energy of Solar Spectrum rdmag.com
250 points by 3eto  1 day ago   123 comments top 9
1
sbierwagen 1 day ago 2 replies      
As usual with press releases, this pretends there is no prior art. Of course, stacking solar cells to increase efficiency has been a thing for five decades: https://en.wikipedia.org/wiki/Multi-junction_solar_cell
2
matt_wulfeck 1 day ago 1 reply      
I just had my panels turned on. I love solar. It's still difficult to justify it short-term on a cost-basis, but I'm saving about a dollar a day after all things are said and done.

That being said, I'm generating my own electricity and my panels will run for a very long time. The best is cranking the AC and still watching the meter run in reverse during really scorching days.

3
meri_dian 1 day ago 4 replies      
>"This particular solar cell is very expensive, however researchers believe it was important to show the upper limit of what is possible in terms of efficiency. Despite the current costs of the materials involved, the technique used to create the cells shows much promise. Eventually a similar product may be brought to market, enabled by cost reductions from very high solar concentration levels and technology to recycle the expensive growth substrates."

We will end our reliance on fossil fuels not by forcing masses of people to change their lifestyles and inconveniencing them, but by developing green energy tech that is simply more efficient and cost effective than fossil fuels. Once this happens the transition away from carbon based energy sources will be swift.

Given the rate of progress, I believe we'll see widespread adoption of renewable energy far before climactic conditions on earth become dire for humanity.

4
adamwong246 1 day ago 4 replies      
I always wondered why we did not just use prisms to separate the different wavelengths, then capturing selections of the spectrum with a variety of simpler, unstacked panels. Perhaps one could even deflect the infrared into a more conventional, presumably more efficient, heat collector while the higher frequencies are directed to true photovoltaics.
5
vectorjohn 1 day ago 4 replies      
What is it that makes solar panels cost what they do, ultimately? Not materials, right? Those are all basically sand and other not so special things. Labor? Isn't it mostly automated? Upkeep of the factories? Input energy?

Maybe it's just all those things together. But it sure seems like if we wanted to it wouldn't be that hard to ramp up production and drive costs down a couple fold. Not that I know how.

6
grandalf 12 hours ago 1 reply      
Would these panels capture energy from the signal being radiated by my mobile phone? What about gamma rays?

In other words, is a solar cell something that captures energy from photons and converts it into usable electricity? Or from some subset of photons?

7
Gys 1 day ago 1 reply      
'The new design converts direct sunlight to electricity with 44.5 percent efficiency, giving it the potential to become the most efficient solar cell in the world.'
8
philipkglass 1 day ago 2 replies      
The abstract is more informative than the press coverage:

http://onlinelibrary.wiley.com/doi/10.1002/aenm.201700345/ab...

The cell is assembled in a mini-module with a geometric concentration ratio of 744 suns on a two-axis tracking system and demonstrated a combined module efficiency of 41.2%, measured outdoors in Durham, NC. Taking into account the measured transmission of the optics gives an implied cell efficiency of 44.5%.
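Back-of-the-envelope, assuming the implied cell efficiency is simply the module efficiency with optical losses divided back out, the optics transmission they measured works out to roughly 93%:

    # Quick check of the numbers quoted above.
    module_efficiency = 0.412        # measured outdoors, 744-sun mini-module
    implied_cell_efficiency = 0.445

    optics_transmission = module_efficiency / implied_cell_efficiency
    print(f"Implied optics transmission: {optics_transmission:.1%}")   # ~92.6%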

Since this is a concentrating cell, compare to the concentrator cell records tracked on NREL's PV efficiency records chart:

https://www.nrel.gov/pv/assets/images/efficiency-chart.png

The current record for 4-junction-or-more concentrator cells is 46.0%. This isn't a record-setting cell even if the implied efficiency holds up under standardized test conditions.

This cell like all high-concentration cells is unlikely to see mass market acceptance on Earth. The module needs precise two-axis sun tracking to work effectively even under perfect clear-sky conditions. That's significantly more expensive than fixed arrays or single-axis sun tracking as used by conventional large scale PV. And there's a vicious feedback loop: since two-axis tracking is significantly more expensive, it doesn't get developed/scaled, so the cost gap gets even wider over time WRT its competitors.

But that's not actually the worst problem of high-concentration PV for terrestrial use. The worst problem is that HCPV can use only direct normal irradiance. Ordinary non-concentrating PV produces very nearly 25% of its rated output if it receives 25% of test-condition illumination under non-ideal conditions (due to some combination of clouds, air pollution haze, dusty glass, etc.). Concentrating cells will produce close to 0% of rated output under the same non-ideal conditions. Few regions have clear enough skies to work with HCPV, but those same regions tend to be dusty, which the concentrating optics cannot tolerate. Mechanical and optical complications make HCPV higher-maintenance than ordinary flat PV and more expensive to install initially.

That's why there were a dozen+ companies working on concentrating PV in 2008 and all of them are now bankrupt or have exited HCPV manufacturing. Eking out another cell-level improvement wouldn't have rescued the value proposition of their complete systems. The refined polysilicon price spike that made exotic technologies look briefly promising only lasted a few years and then it became clear again that crystalline silicon is very hard to beat.

9
afeezaziz 1 day ago 2 replies      
If the cost of making this kind of solar cell can be lowered enough through scale, then they should communicate this process to Chinese solar companies. I am sorry for my poor understanding of chemical processes; if the materials of the solar cell are roughly the same, then it would be quite easy for the existing manufacturers to switch to producing this solar cell.

I cannot wait for the era of super cheap electricity!

25
Ask HN: Why is Bluetooth so unreliable?
337 points by whitepoplar  1 day ago   251 comments top 33
1
bjt2n3904 1 day ago 9 replies      
This isn't the first time I've talked on this. I've had some experience with bluetooth on Linux, and as a radio guy. The answer is there are problems from Layer 1 to Layer 7, needless complexity, and design by committee.

Bluetooth is an EXTREMELY complex radio protocol on Layer 1. It's like a mating dance between scorpions in the middle of a freeway. High chance something gets messed up.

Layer 1 keeps drastically changing too. Bluetooth 1 and 2 use completely different modulations, and are not backwards compatible. Bluetooth 3 simply was an extension to 2. "Let's agree over Bluetooth 2.0 to use WiFi instead." Bluetooth 4, while much simpler, uses an entirely different scheme.

Instead of a "general purpose" wireless network like WiFi, Bluetooth tried to be application specific. Except the only profiles everyone wants are mice, wireless audio, and fitness trackers. If you look at the application layer spec, it reeks of design by committee. Everyone haphazardly jammed their pet projects together, and there are redundant and vestigial parts everywhere.

The Linux side of BlueZ is abysmal. Honestly, I don't even know how anyone does anything with Bluetooth on Linux besides a mouse and keyboard. And barely even that.

As much as I hate on the protocol, the Layer 1 spec is truly ahead of its time, in some areas. Watching two radios frequency hop, and negotiate to avoid a congested wifi channel was unreal.

2
Duhck 1 day ago 4 replies      
I've spent the better half of 3 years building products on the 2.4ghz spectrum (WiFi and BLE).

Most of the issues in this thread are related to poor hardware design more than a crowded spectrum. While the spectrum is in fact crowded in metropolitan areas, most Bluetooth communication doesn't require much bandwidth and can handle error prone areas with ease.

While the frequency hopping helps a ton on BL (and WiFi for that matter), the issues people outlined are due to:

1) Shitty firmware

2) Shitty hardware

Antenna design is black magic and only a few firms in the US do it well. It took us almost 10 months to fully design and test our antenna assembly(s) with a very capable third party firm.

It took dozens of trips to a test chamber, a dozen computer simulations that take a day to run, and PCB samples that take days to verify. They have to be tuned every time copper or mechanical parts move as well.

It's a real pain and most Bluetooth products use garbage chip antennas and baluns or reference designs for antennas. This increases the sensitivity to failure and provides a generally shitty experience.

Most of your product interactions around bluetooth are budget products connected on one side of the equation (e.g. a $50 bluetooth headset). So despite how capable your Mac or iPhone is, if you have a garbage headset on the other side with poor antenna design, it'll be a disaster of an experience.

3
IgorPartola 1 day ago 5 replies      
This is a daily goddamn struggle for me. My Macbook Pro routinely forgets about my Apple trackpad, and the only thing that fixes it is restarting the laptop. The sound system on the laptop often selects the wrong mic for the input when a BT headset is connected. My iPhone keeps switching between headset and internal speaker/mic when on a call. Pairing the same device to multiple hosts (laptop and phone) is like birthing a hedgehog backwards. And let's not forget where you try to initiate pairing from a laptop or phone instead of the device. Why even provide the damn buttons to do it if they never work?
4
drewg123 1 day ago 9 replies      
For me, the biggest problem with BT is that BT audio is almost entirely unbuffered. I wear a set of BT headphones connected to a fitness watch (Polar M600) when running. When the BT connection from the watch to the headphones is briefly blocked by part of my sweaty body (think arm movements when running), the BT signal is interrupted and the music cuts in and out with every stride. If BT audio could be buffered for 15-20 seconds, this would not be a problem.
5
api_or_ipa 1 day ago 3 replies      
I used to absolutely abhor BT, but in the past few years it seems to have gotten really, really good about picking up, and maintaining a decent connection. Since then, I've picked up BT headphones, BT keyboard + mouse (Apple), and a nifty little waterproof BT speaker. Now, the only issue I sometimes have is when I want to connect to a new host device. Other than that, BT has been really nice to me.
6
blumomo 1 day ago 0 replies      
For programmers using Bluetooth Low Energy (BLE 4.0) on Linux/BlueZ, we're working on this handy BLE GATT library for Python: https://github.com/getsenic/gatt-python

BlueZ/GATT is very stable in our experience and supports most functions such as BLE device discovery, listing services and characteristics, and reading/writing/subscribing values from/to characteristics.
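A minimal connect-and-enumerate sketch based on the project's README at the time; the adapter name and MAC address below are placeholders:

    import gatt

    class AnyDevice(gatt.Device):
        def connect_succeeded(self):
            super().connect_succeeded()
            print("Connected to", self.mac_address)

        def services_resolved(self):
            super().services_resolved()
            # Walk every GATT service and characteristic the peripheral exposes.
            for service in self.services:
                print("Service:", service.uuid)
                for characteristic in service.characteristics:
                    print("  Characteristic:", characteristic.uuid)

    manager = gatt.DeviceManager(adapter_name="hci0")      # "hci0" is a placeholder adapter
    device = AnyDevice(mac_address="AA:BB:CC:DD:EE:FF",    # placeholder MAC of the peripheral
                       manager=manager)
    device.connect()
    manager.run()   # enters the event loop; Ctrl-C to stop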
7
evilduck 1 day ago 0 replies      
I have a lot of gear in the Apple ecosystem that uses Bluetooth and I don't consider it unreliable at all. I use at least 6 different Bluetooth devices all day, every day (MBP, keyboard, trackpad, iPhone, Watch, Airpods, with additional car pairing, portable speaker and iPad pairings) in close proximity to a bunch of other developers behaving similarly. Looking around I can count at least 40 BT devices in active use around the office and I would still characterize my Bluetooth devices as more reliable than any wifi AP I've ever used.
8
ComputerGuru 1 day ago 0 replies      
A big part of the reason "Bluetooth" is unreliable is that there is no one "Bluetooth." Each manufacturer's implementation differs in strength and weakness, and depending on the potentially shoddy chips in the devices you are connecting to, a different Bluetooth standard will be used.

I have Bluetooth devices years old that I've never had problems with, and others that are a constant nightmare. The software stack behind the Bluetooth is also a major component in the reliability question.

9
Spooky23 1 day ago 2 replies      
Is it? I'm a pretty heavy user of Bluetooth in a few different use cases and it's pretty reliable for me.

Best way to improve reliability is to avoid dodgy or counterfeit radios in crappy electronics.

10
synaesthesisx 1 day ago 2 replies      
Not 100% sure on this, but I believe devices utilizing Apple's W1 chip use a combination of Bluetooth + WiFi (WiFi for the initial handshake upon connecting probably or something like that). If anyone's ever used AirPods it's amazing how reliable they are compared to other bluetooth headsets.
11
AceyMan 1 day ago 0 replies      
Disclaimer: non-technical anecdotal evidence here

I had a colleague for a time whose dad was a hardware engineer with Toshiba & worked with/on their part of the specification Working Group.

His pop said that the whole BT stack was unambiguously a steaming pile of poo from the get-go, and it was nearly miraculous it functioned as well as it did.

At that I had to chuckle, seeing how I'd wager that each of us has had enough woggy experiences with the tech to agree with the point he made so plainly.

But I do love the chosen icon & the history behind it, vis-à-vis the name ("Bluetooth"), so it's not all bad <wink>. ---

this was around 2010 or so, to add some context wrt the relevant timeline(s).

12
js2 1 day ago 3 replies      
The last few years, I have not had trouble with BT, but maybe it's because I simplified my use cases to ones which work after early failures. Here's what works for me and never fails:

- Macbook to Apple bluetooth mouse

- iPhone 6s to late model Mazda infotainment system

- iPhone 6s BTLE connection to Garmin Forerunner watch

13
linsomniac 1 day ago 2 replies      
I gave up on Bluetooth at home around a year ago. Not sure what it is, but I'd pretty much have to put my phone right next to the bluetooth speaker for it to work reliably. Might as well use a cable.

I had high hopes for Google Chromecast Audio for my music at work and at home. Probably my fault for jinxing myself by asking "What could possibly be worse than Bluetooth?" Chromecast Audio has definitively answered that.

For one thing, you can't limit who can interact with the Chromecast. Anyone on the network can see it. At work, my music would usually pause ~4 times a day as someone else's phone saw it and connected to it; since I only listen to music a few hours a day, that's pretty frequent. To fix this I'd have to set up a new wifi network that only I could use.

It also gets confused if I leave work and then try to use Google Play Music elsewhere: my Google Home in the bathroom will play a song and then stop, I think because "google play is being used on another device", but it doesn't tell you that.

Maybe I should just go back to using something like a Raspberry Pi with access to my music library. It's still mostly songs I have the CDs for and ripped (my 600 CDs are all in FLAC), though I've added probably 50-100 songs over the last year on Google Play.

14
howard941 1 day ago 0 replies      
I switched from a bluetooth dongle of unknown provenance to a more powerful Zoom (brand) class 1 dongle and hung it from a usb cable off of a lighting fixture in my home office. I get complete coverage of a rather large screened-in porch with a Jabra headset, despite the signal having to penetrate my pudding head, two interior walls, and one exterior wall. The class 2 dongle barely worked outside.
15
thewhitetulip 19 hours ago 0 replies      
Despite multiple apps of the likes of SHAREit, I find Bluetooth to be the only mechanism of data transfer that just works. SHAREit and the likes of it get new versions which break on my Android 7 with each upgrade, and it is a PITA to get them working across different Android versions; for some reason it does not show my device on a Moto phone, and I have to resort to hacks like receiving a file from the other device onto my device and then sending something back on the established connection.

But there is one thing: Bluetooth is not useful if the file is big.

16
jbg_ 1 day ago 0 replies      
I know this is a little tangential, but this brought some simmering anger back to the surface. Just today I was trying to communicate with a bluetooth device that provides a serial channel.

Back in the day I used to just run "rfcomm bind <mac-address> <channel>". But it turns out BlueZ decided to deprecate (read: stop building by default) the rfcomm command in favour of... wait for it... a barely-documented D-Bus interface.

How much do you have to hate your users before you decide to take away a useful, functional executable and replace it with an unspecified interface over D-Bus that requires hours of research to use rather than 30 seconds of reading a manpage?

17
jimmies 1 day ago 1 reply      
I have a Linux computer (Dell Chromebook 13) connected to the Microsoft Mouse 3600 Bluetooth (BLE 4?) and it was rock solid. The mouse picks up immediately whenever the computer is on. It was almost miraculous how well it works. The mouse is really quite darn responsive too.

That is, I use the cutting edge Linux distribution (Ubuntu 17.10) -- it was pretty darn painful even on 17.04. I have another keyboard that is on Bluetooth 3.0 that fucking disconnects every other day.

So YMMV - I think BLE mice and keyboards are much better in terms of 'just works' unless you pair them with a whole bunch of devices.

18
bhouston 1 day ago 2 replies      
I never have Bluetooth issues in my Rav4 with any of my phones (ZTE, OnePlus); it is always perfect. I cannot emphasize enough how amazing it is.

My Fitbit and my wife's have constant Bluetooth issues connecting to our phones. This is completely and utterly annoying.

Driver related? Not sure.

19
jonbarker 1 day ago 5 replies      
I would buy a wireless audio speaker that uses NFC instead of bluetooth to connect to Android or iPhone. You would have to set the phone on the device but that would be a small price to pay if the connection were more reliable.
20
kahlonel 22 hours ago 0 replies      
I'll just leave here that the "official" linux bluetooth stack (i.e. BlueZ) has dogshit documentation.
21
gjvc 1 day ago 1 reply      
From an experiential view, I say "crowded spectrum". My bluetooth keyboard takes ages to associate at work (which is close to a mainline rail station), but at home, in the relative countryside, it works smoothly.
22
FRex 1 day ago 0 replies      
Oh wow. And I thought it was reliable. I used it only a few times on smartphones and laptops (I like my mice and keyboards with cables) but I still remember what a big deal it was compared to infrared, and how mobile phones in the early 2000s would lose connection, and the only sure way to use IR was putting them next to each other on a flat table with their IR thingies physically touching(!).

That makes me a little less excited about my plans of getting Dual Shock 4 for my PC for gaming.

23
nthcolumn 1 day ago 0 replies      
I have nothing to add except "yes, me too, my how I have suffered": the countless crappy bluetooth devices I have connected and disconnected, and the hours and hours I have wasted trying to get them paired with various linux boxes, nearly all in short order choosing death rather than do my bidding. I am looking at one right now, currently unconnected. 'Dummy device'. Why indeed.
24
gargravarr 1 day ago 1 reply      
Part of the issue is that bluetooth as a whole is nothing more than a wireless serial connection. It's the various protocols built on top of it that determine its stability. The Bluetooth SIG only really control the pairing between the two devices, a low layer. You're hoping that the company you buy stuff from has implemented the protocol correctly, over which the SIG has no control.
25
jdlyga 1 day ago 0 replies      
I've always had trouble with bluetooth devices until I got AirPods. Whatever bluetooth setup they're using is very reliable. I use them with my phone, windows computer, ubuntu work machine, and I rarely ever have connection issues.
26
80211 1 day ago 0 replies      
I learned a lot about Bluetooth with an Ubertooth Bluetooth dongle. It also let me realize how many security issues (side channel leaks, especially) exist that can't be easily fixed.
27
moonbug22 1 day ago 1 reply      
You only need to look at the page count of the specs to know why.
28
digi_owl 1 day ago 0 replies      
I would try eliminating Bluez5 and Pulseaudio first...
29
rikkus 1 day ago 1 reply      
As much as I dislike proprietary protocols, I'd be greatly in favour of Apple deciding to make a replacement for Bluetooth that works with all their products - and Just Works. It'd be no use to me, as my only Apple product currently is an iPhone, but if I saw that Appletooth Just Worked, I'd be looking at diving (back) into their ecosystem.

I know some people are saying Bluetooth works perfectly between their Apple products, but plenty of people are saying it doesn't, too.

30
linuxlizard 1 day ago 8 replies      
Because it's not as popular as WiFi or Ethernet or USB. It hasn't had the decades of hard-core, hard-knocks field usage of WiFi/Ethernet/USB. So the chipsets are less robust to errors and cope less well with highly noisy environments. The drivers aren't as battle-tested as the other connectivity options.

WiFi in its initial days (802.11b) reminds me of bluetooth right now. Quirky, bad tools, weird errors. But WiFi caught on and manufacturers started throwing $B at R&D for better chips and better drivers for those chips.

Bluetooth just has a problem with scale.

31
mchannon 1 day ago 3 replies      
Simple: it inhabits the same band almost everything else inhabits, 2.4GHz. To an extent, the reason Bluetooth is unreliable is the same reason most Wifi is unreliable in crowded areas. There are a lot of appliances that use that bandwidth over incompatible standards.

Even worse are the "spark" kind of 2.4GHz appliances that don't play nice, like wireless camera systems and baby monitors. If your strong-signal wifi or bluetooth keeps dropping, it's far more likely to be one of those at fault than anything else.

32
baybal2 1 day ago 0 replies      
1. fragile encoding schemes

2. fragile modulation techniques (uwb would've been a "final solution" to the problem, but died to patent trolls)

3. interference from wifi (try using bt mouse while downloading an hd movie)

4. three different "wire protocols"

But the upside is that BT is super cheap to implement, and thus ubiquitous

33
gdulli 1 day ago 2 replies      
My company bought me a $150 pair of noise canceling headphones last year; it was my first experience with Bluetooth. After a month I was back to using the $10 earphones that I've had for over 10 years. It turns out reliability and convenience were more important than blocking noise.

To be fair, there were problems other than Bluetooth. The headphones were trying to be smart: if they sensed you taking them off, they'd pause the music for you. Except it didn't always work, so instead of pausing the music myself when I took off the headphones (which is ingrained and reflexive and automatic and no trouble at all), I now had to pay attention every time to whether the auto-pause had worked, and then either pause myself or not.

And sometimes I'd adjust the headphones slightly to scratch my ear or something and the music would pause. Sigh.

26
Ask a Repair Shop yurchuk.com
290 points by liquidcool  23 hours ago   116 comments top 15
1
xapata 21 hours ago 2 replies      
If you ask an enterprise vendor if their software has feature X, the answer is always, "Yes!" You'll find their software is infinitely customizable with just a bit of configuration. What they're not telling you is that their configuration tool is really a poorly implemented, proprietary programming language. You won't be able to configure the software yourself and must now hire consultants to read your watch and tell you the time.

The good news is that the enterprise sales folks know all the best restaurants in town. Cocktail bars, too.

2
richdougherty 16 hours ago 6 replies      
I did this when I was choosing a laptop. I called up a few laptop repair shops. It was so helpful! They could tell me what was junk and what was OK.

I also do something like this when choosing a new ISP. I call the support line instead of the sales line. Somehow ISPs can answer sales enquiries instantly while support calls take 45 minutes to answer. This strategy has led me to use some of the smaller (slightly more expensive) ISPs, because I know they'll answer almost straight away.

3
inthewoods 14 hours ago 3 replies      
My experience is the opposite. Spoke with two system integrators regarding an implementation of a marketing automation tool. In both cases, the proposed lead on the project made statements that I knew were incorrect regarding the software. We also had a challenging requirement that was not part of any out-of-the-box solution, and we were told by both that it wasn't possible with one of the vendors we were considering. Simply using Google, I was able to find a solution.

I'd add (and the author mentions this) that most system integrators have a bias (whether financially driven or not) towards particular software. That makes it challenging to assess "is this the best software, or just what they're pushing me toward?"

I don't see how this is that much different from buying from the vendors.

For me, I usually take a vendor's customer page and start calling people myself. I also reach out to my network to see if anyone has an opinion. And if I can find a list of companies using the software (vs. who the company says they work with) then I call/reach out to them as well.

4
kd5bjo 19 hours ago 2 replies      
This is mostly the result of our checkbox-grid comparison shopping culture. "Features" like extra coats of paint and thicker metal cost the manufacturer more than they increase the market value. On the other hand, throwing in a dozen cheap bits of plastic with every vacuum cleaner pays for itself because it can now ostensibly do a dozen more things.
5
busterarm 20 hours ago 0 replies      
This reminds me so much of my experiences working in a computer repair shop ~15 years ago.

Funniest thing about it is that as a sales rep I sucked, but once I got moved into repair, I was absolutely destroying our sales team in sales often by 4x their best rep without even trying.

There were a couple of Black Fridays where the store made all computer sales take place at the repair center because of it.

6
8ig8 12 hours ago 1 reply      
Loosely related, here's a great Reddit AMA with a vacuum repair technician...

https://www.reddit.com/r/IAmA/comments/1pe2bd/iama_vacuum_re...

7
siliconc0w 21 hours ago 1 reply      
Ask the company for other customers you can talk to and then take a few members of the system engineering team responsible for supporting it out for dinner to learn the real story.

In truth though, a lot of enterprise software sucks and it sucks to support, but there are usually few better options. Often velocity is the 1st, 2nd, and 3rd priority, so it's easier to pick up some shitty software and spend some engineering resources to 'make it work' than it is to try to internally sell investing the resources needed to build a better bespoke solution.

8
jaredandrews 21 hours ago 4 replies      

 I got a full education on washers, including a lot of industry dirty laundry. 
Please tell us more.

9
nedwin 21 hours ago 0 replies      
Always nice to read a post from someone with a) experience and b) succinct writing style.
10
staofbur 17 hours ago 0 replies      
Another bit of advice I can give from dealing with a few particularly shitty vendors is that if you can't actually download a copy from their web site or extract one from their sales team and see it in action yourself, they have something to hide.

This is usually cost escalators, a really poor deployment and management story, an upsold incomplete product or just a wall of lies.

Also refuse to buy a license until you trial it on your own kit.

11
PeterStuer 15 hours ago 0 replies      
The multi-vendor/best-of-breed system integrator has been for the most part replaced by exclusive partnerships. The platform owner's demand for 'loyalty' has grown significantly over the last decade.
12
shopnearby 10 hours ago 0 replies      
This was a great article and I love how the author related it to a common purchase most people have made in their lives already.
13
Kenji 18 hours ago 1 reply      
It's actually much simpler than that: Never rely on the opinion of someone who can make immediate profit off your decision.
14
notrealname1 13 hours ago 0 replies      
The caveat to this approach on the software side is that some SIs, including some of the biggest names, are just effing awful: custom proprietary frameworks, low quality development, etc. Fortunately, if you talk to their customers you can get a realistic assessment of quality.
15
gist 10 hours ago 0 replies      
> Oh, we don't repair GE anymore. They're pretty much throwaways now. When they break, you just buy a new one.

Reverse engineering of motive. If it were only that simple.

Although this 'business response' could be correct, I wouldn't just assume that is the case, as if the repair shop had no axe to grind or other reason to make that statement.

Could have also lost their authorization or access to parts to repair GE appliances. Or perhaps they aren't listed on the approved list of repair shops (could be for various reasons).

Way back when you used to buy a fair amount of products that were typically repaired there were certain vendors that the manufacturer shuttled the most repair work to. The other shops could get access to parts however it wasn't typically cost effective for them to do so.

27
3D scanning by dipping into a liquid sdu.edu.cn
342 points by jakobegger  2 days ago   117 comments top 27
1
zellyn 2 days ago 4 replies      
I expected they'd be either visually watching the changing contours of an opaque liquid, or somehow using refraction to get multiple visual angles of the same features, but

they're repeatedly dipping it, and using the volume displacement to reconstruct the shape. Amazing. The site is hammered right now so I can't get more details: anyone see how many dips are required to get the highest-detail models they show on the landing page?

2
AndrewKemendo 2 days ago 2 replies      
That is amazing. I think I've looked at every photogrammetry, deconstruction, hand modeling etc... technique for 3D reconstruction and this one takes the cake for ingenuity, quality and capability.

Not sure how practical it is right now, but I wonder: if you could do this with air volume at a high enough delta-measurement resolution, you might get some amazing results.

3
proee 2 days ago 1 reply      
My friend created an advanced fluid scanner using a dodecahedron. His method is novel in that it:

1. Does not require rotation of the DUT, but instead uses just rising fluid level.

2. Uses permeable fluid so it achieves full density scans.

He spent a number of years trying to get the product to market as a startup, but ran out of personal funding.

He believes Archimedes may have used the Roman dodecahedron as a fluid scanner to test the quality of their projectiles to improve accuracy.

See http://www.romansystemsengineering.com/our_product.html

4
papercrane 2 days ago 1 reply      
Found a presentation on this technique on youtube:

https://www.youtube.com/watch?v=yHvyPnkuAiw

5
azernik 2 days ago 1 reply      
Comment on the Hacker News system - most combinations of {edu,ac,gov,mil}.{$ccTLD} should probably be collectively treated as a TLD for site-display purposes. e.g. sdu.edu.cn (Shandong University) would be more descriptive than plain edu.cn (some academic institution in China).
6
anfractuosity 2 days ago 2 replies      
Wow this is very impressive. I was thinking originally they were using the milk scanning technique - http://www.instructables.com/id/GotMesh-the-Most-Cheap-and-S...
7
gene-h 2 days ago 2 replies      
I once scanned myself at a maker fair in a similar manner. A swimming pool of blue dye with a camera above it was used, so that objects could be scanned by looking at their outline in the blue dye as they were dipped in (a different approach to the volume transforms presented here). Now, doing this with a person involved strapping that person to a board and slowly dunking them in. Overall, the experience was unpleasant and what I imagine waterboarding is like, but hey, at least I got a 3d scan of myself.
8
simon_acca 2 days ago 2 replies      
Hey, if you add a force sensor to the dipping arm, couldn't you, in principle, obtain a 3d density map of the scanned object as well, using Archimedes' principle?
9
nullc 1 day ago 0 replies      
Fun. So make a closed spherical assembly with lidar or something to range the fluid height, and put it in a three-axis mount so the direction of down can be continually and smoothly changed, then drain the water very slowly.

Then change the fluid for something with less surface tension (hurray more uses for chlorofluorocarbons), and put it in a 20g centrifuge, and perhaps scanning times will be reasonable. :)

10
deepnet 2 days ago 1 reply      
11
xixixao 2 days ago 0 replies      
Suggestion for increasing applicability: Start with the optical scan, then only use the method to nail the occluded parts. And instead of just gathering data consider what angle will give you the biggest amount of new information next. Not sure if authors tried either.
12
dingo_bat 2 days ago 2 replies      
Awesome technique! But I cannot imagine this ever being a fast process. 1000 dips for a small model, and you cannot dip it with force.
13
beagle3 2 days ago 1 reply      
Isn't this "dip transform" basically the (inverse) Radon transform[0] used in CT and MRI?

[0] https://en.wikipedia.org/wiki/Radon_transform
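
A hedged aside on why the comparison seems apt (my own notation and sketch, not the paper's): if f is the indicator function of the object and θ is the unit vector along the dip direction, the displaced volume at immersion depth s is the integral of f over the half-space below the waterline, and its derivative in s is a plane integral, i.e. the 3D Radon transform:

  V_\theta(s) = \int f(x)\, H(s - x\cdot\theta)\, dx, \qquad
  \partial_s V_\theta(s) = \int f(x)\, \delta(s - x\cdot\theta)\, dx = (Rf)(\theta, s)

where H is the Heaviside step and \delta the Dirac delta. So the per-dip volume curves, once differentiated, give plane integrals of the shape, and reconstruction is an inverse-Radon-type problem, modulo the trapped-air, wetting and surface-tension effects mentioned elsewhere in the thread.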

14
randyrand 2 days ago 1 reply      
I wonder if they are taking water cohesion into account.

e.g, some of the water will stick to the sides of the object.

15
hemmer 2 days ago 1 reply      
I wonder how much of a role wetting/capillary effects play in this? The liquid interface will distort as it approaches the object, and will try to meet at a certain contact angle (based on surface tensions etc). Correcting for this might help improve the resolution of the scans?
16
skykooler 2 days ago 0 replies      
This is a really clever solution!
17
joshontheweb 1 day ago 1 reply      
Can someone ELI5 how this gathers the spatial data for the samples? In my ignorance, it doesn't seem clear to me how this works. I get that you would be gathering different volume data with each dip, but it seems to me that this information would look like a graph that rises and falls back to zero. In the video, they showed each dip as gathering an accurate 2d cross section of the object. For instance, the 2d slices of the elephant on each dip graph. They seem to be able to shape a closed 2d polygon slice per dip and it is mystifying to me. Am I missing a part of the process or what? Is there some imaging going on as well on top of the volumetric sampling?

Edit: I should add that this really impresses me regardless of how they do it. I've always thought it was a pretty big bummer how optical 3d scanning looks so incomplete in a lot of cases.

18
anotheryou 2 days ago 2 replies      
Hugged to death :/

How do they handle overhangs that trap bubbles?

Maybe shaking, and scanning in reverse? (Can still cause weird effects when the air can't get back in, but should be more detectable.)

19
Animats 1 day ago 1 reply      
That is very clever. So slow you'd have to run it overnight, but that could be OK for some applications.

A good test: run it on an auto throttle body. Those have lots of voids and holes, and some people need to duplicate existing ones.

20
dre85 2 days ago 0 replies      
Is the software open sourced? This looks like it would translate into a fun hack-a-day project. At first sight, the required hardware seems pretty basic, no? It would be awesome if someone replicated it with a RaspPi or something and posted a step-by-step tutorial.
21
SubiculumCode 1 day ago 0 replies      
I half expected this to be a tongue-in-cheek exposition on creating molds. I guess it was on my mind after watching an artist friend create sculpture molds for pewter casting. https://www.joshhardie.com/fullscreen-page/comp-izfweszu/f59...
22
antman 2 days ago 1 reply      
That seems equivalent to trying to get a joint distribution from its marginal distributions. So the constraint is probably that either it needs to be convex or you need to have a prior estimation of the object's cavities which means you need to know the 3d shape beforehand to have a mathematically guaranteed measurement.
24
rsp1984 2 days ago 1 reply      
Note that this only works for objects that don't have any flexible parts and don't interact with the water in any other way than pushing it aside (e.g. soaking up water).
25
samstave 2 days ago 1 reply      
How does it determine shape when measuring volume displacement? is it only measuring the displacement of the top surface-tension-layer of water as if it were a slice?
26
nom 2 days ago 0 replies      
How does one come up with something like this? The method is anything but straightforward and not practical at all, but it still produces good results. Amazing work.
27
dorianm 23 hours ago 0 replies      
Epic!
28
Using Deep Learning to Create Professional-Level Photographs googleblog.com
275 points by wsxiaoys  7 hours ago   63 comments top 21
1
parshimers 0 minutes ago 0 replies      
This is cool but I really don't get why one could call this actually creating "Professional-Level" photographs. It's more like a very good auto-retouch. There's still the matter of someone actually being there, realizing it is a beautiful place, and dragging a large camera with them and waiting for the right light.
2
wsxiaoys 6 hours ago 5 replies      
For those who think it's just another lame DL based instagram filter...

The method proposed in the paper (https://arxiv.org/abs/1707.03491) mimics a photographer's work, from taking the picture (image composition) to post-processing (traditional filters like HDR and saturation, but also GAN-powered local brightness editing). In the end it also picks the best photos (aesthetic ranking).

Selected comments from professional photographers at the end of the paper are very informative. There's also a showcase of model-created photos at http://google.github.io/creatism

[Disclaimer: I'm the second author of the paper]

3
brudgers 2 hours ago 1 reply      
It is an interesting project and shows significant accomplishment. I'm not sold on the idea of "professional level" except in so far as people getting paid to make images. I am not sold because the little details of the images don't really hold up to close scrutiny (and I don't mean pixel peeping).

1. The diagonal lines in the clouds and the bright tree trunk at the extreme right of the first image are distractions that don't support the general aesthetic.

2. The bright linear object impinging on the right edge of the cow image and the bright patch of the partial face of the mountain on the extreme left. Probably the gravel at the left too since it does not really support the central theme.

3. The big black lump that obscures the 'corner' where the midground mountain meets the ground plane in the house image.

4. The minimal snow on the peaks in the snow capped mountain image is more documenting a crime scene than creating interest. I mean technically, yes there is snow and the claim that there was snow would probably stand up in a court of law, but it's not very interesting snow.

For me, it's the attention to detail that separates better than average snapshots from professional art. Or to put it another way, these are not the grade of images that a professional photographer would put in their portfolio. Even if they would get lots of likes on Facebook.

Again, it's an interesting project and a significant accomplishment. I just don't think the criteria by which images are being judged professional are adequate.

4
fudged71 2 hours ago 2 replies      
Very impressed by the results.

I hope that one day our driverless cars will alert us when there is a pretty view (or a rainbow) so we take a moment to look up from our phones. Every route can be a scenic route if you have an artistic eye.

5
andreyk 5 hours ago 3 replies      
Talking as a semi-pro (I've put some money into cameras and lenses and spent a good bit of time on photo editing), this is a bit underwhelming. For landscapes (which this seemed to focus on), I've found that opening up the Windows photo editing programs and clicking 'enhance', or Gimp and clicking some equivalent, already gets you most of the way there in terms of editing for aesthetic effect. The most tricky bit is deciding on the artistic merit of a particular crop or shot, and as indicated by the difference between the model's and the photographer's opinion at the end of the paper, the model is not that great at it. Still, pretty cool that they did that analysis.
6
d-sc 6 hours ago 4 replies      
As someone who lives in a relatively rural area with similar geography to much of the mountains and forests in these pictures I have noticed previously how professional pictures of these areas have a similar feeling of over saturating the emotion.

It's interesting to see algorithms catching up to being able to replicate this. However, when you mention these kinds of abilities to photographers, they get defensive, almost like you are threatening their identity by saying a computer can do it.

7
jff 6 hours ago 1 reply      
Automatically selecting what portion to crop is impressive, but just slamming the saturation level to maximum and applying an HDR filter is the sign of "professional" photography rather than good photography.
8
matthewvincent 4 hours ago 1 reply      
I don't know why but the "professional" label on this really irritates me. I'm curious to know how the images that got graded on their "professional" scale were selected for inclusion in the sample. Surely by a human who judged them to be the best of many? I'd love to see the duds.
9
wonderous 5 hours ago 1 reply      
Interesting how hi-res a small section of a Google Street View car photo can be, compared to what users see online; here's an example from the linked article:

https://2.bp.blogspot.com/-6bVWUgA8NEI/WWe1uoW8ayI/AAAAAAAAB...

10
jtraffic 4 hours ago 1 reply      
When a photographer takes or edits a picture, she doesn't need to predict or simulate her own reaction. There is no model or training necessary, because the real outcome is so easily accessible. However, she is only one person, and perhaps can't proxy well for a larger group.

The model has the reverse situation, of course: it cannot perfectly guess the emotional response for any one person, but it has access to a larger assortment of data.

In addition, in different contexts it may be easier/cheaper to place a machine vs. a human in a certain locale to get a picture.

If my theorizing makes any sense, it suggests that this technology would be useful in contexts where: the locale is hard to reach and the topic is likely to evoke a wide variety of emotional responses.

11
bitL 6 hours ago 0 replies      
Retouching is another field to play with - I am experimenting with CNN/GANs to clone styles of retouchers I like. If you are a photographer, you know that most studio photos look very bland and retouching is what makes them pop; for that everyone has a different bag of tricks. If you use plugins like Portraiture or do basic manual frequency separation followed by curves and dodge/burn adjustments, you leave some imprint of your taste. This can be cloned using CNN/GANs pretty well; the main issue is to prevent spills of retouched area to areas you want to stay unaffected.
12
mozzarella 2 hours ago 0 replies      
this is amazing, but 'professional photographers' aren't really the best arbiters of what a 'good' photograph is. Also, training on national parks binds the results to a naturally bland subject, no pun intended. While an amazing achievement, nothing shown here demonstrates ability beyond a photographer's assistant/digital tech adjusting settings to a client's tastes in Capture One Pro. Jon Rafman's 9 Eyes project comes to mind as something that produced interesting photographs, as does the idea to find a more rigorous panel of 'experts' (e.g. MoMA), or training the model on streets/different locations than national parks.
13
jonbarker 1 hour ago 1 reply      
From the article, the caption of the first picture was interesting: "A professional(?) photograph of Jasper National Park, Canada." Is that the opening scene from The Shining? If so, I wonder why the question mark; is Stanley Kubrick not a professional photographer?
14
seany 6 hours ago 0 replies      
Would be interesting to see how well you could train this kind of thing off of a large catalog of Lightroom edit data to then mimic a specific editor's style.
15
tuvistavie 2 hours ago 0 replies      
Up to what point can the output be controlled? Can complex conditions be created? E.g. a lake with a mountain background during the evening.
16
k__ 3 hours ago 0 replies      
Is deep learning comparable to perceptual exposure?
17
Kevorkian 6 hours ago 0 replies      
Lately, there has been lots of talk of deep learning applied to create tools which can generate requirements, designs, and software code, create builds, test builds, as well as help with deploying builds to various environments. I'm excited for the future developments possible with ML.
18
olegkikin 4 hours ago 2 replies      
[deleted]
19
anigbrowl 2 hours ago 0 replies      
For example, whether a photograph is beautiful is measured by its aesthetic value, which is a highly subjective concept.

Oh really.

20
mozumder 5 hours ago 0 replies      
If they're doing dodging/burning, then they could really use the processing on raw files instead of jpegs. The dynamic range is obviously limited when dodging/burning jpegs, as you can see from the flat clouds and blown highlights on the cows.
21
mtgx 6 hours ago 1 reply      
Great, now all we need is specialized machine learning inference accelerators in our mobile phones. I wonder if Google has even considered making a mobile TPU for its future Pixel phones.
29
In Blow to Tech Industry, Trump Shelves Startup Immigrant Rule nytimes.com
238 points by aaronbrethorst  2 days ago   237 comments top 24
1
djb_hackernews 2 days ago 4 replies      
The rules of this startup visa never made any sense. They basically ensured that very few potential immigrant founders would qualify and the ones that did were in a place where they probably didn't see the value in that path as opposed to other business related visas.
2
geebee 2 days ago 1 reply      
I can't say I'm tremendously enthusiastic about this visa. My objection to it is the same as my objection to almost all specialized, employer sponsored visas - I don't think employers, universities, or investors should be empowered to decide who is allowed to live in the US. If you grant them this power, they will abuse it.

Think about it this way: you're a startup founder with US citizenship. You go to an investor who offers you 100K in seed funding in exchange for X% of your business. You say no deal. Ok, says the investor, but that does mean I won't give you my money.

Now imagine you don't have citizenship. Same scenario. Ok, says the investor, but that does mean I won't give you the right to live in the United States.

One point I've emphasized over and over here on HN is that you can be very pro-immigration and still be very opposed to programs that put private citizens (employers, investors) or corporations in a position of government sanctioned power over would-be immigrants. I think employers and investors should be allowed to decide who gets a job or money, but the power ends there. They absolutely should not be empowered to run the US immigration system for their personal benefit.

These visas are often described as "allows foreign engineers/entrepreneurs/etc to live in the US." That's misleading. It should be described as "allows corporations and investors to decide who is and isn't allowed to live in the US."

So, the tech industry is in favor of a regulatory regime that allows tech corporations to be gatekeepers for who is and isn't allowed to immigrate? What a surprise.

Think of it this way - I can accept that google is allowed to no-hire someone who contributed to open source but can't reverse a binary tree on the spot at a whiteboard. Google's call.

But should google be making decisions about who is allowed to live in the US based on these interviews? That changes the question dramatically for me.

3
brudgers 2 days ago 5 replies      
Based on the description of the rule in the article, the emphasis is on fundraising rather than producing jobs or value. The thresholds of $250k and $100k remind me of Dr. Evil's $1 million. It's just not enough of a bar to separate friends and family funds from legitimate business rationales.

Edit: In response to comments, currently $100k and $250k are not uncommon amounts for a person to incur attending university in the US. Solo operational costs in The Bay can swallow much of either in a year without hiring anyone or building anything...but in fairness either is a lot of money for most people if it's yours and not very much if it is someone else's.

4
bitL 2 days ago 2 replies      
Alright, so I have companies in the US (Delaware/Nevada), never resided in the US and if needed I could go with one of those E visas given I pay $100-500k indirectly for the privilege. I am still able to visit US 2x year for 3 months at a time. What's all the fuss about?
5
kasey_junk 2 days ago 1 reply      
The bigger issue here is that it shows that the tech leaders that have been "working within the system" do not have very much sway over the Trump administration, as this was the only concession they were able to get from him. That's a bigger blow than this specific rule.
6
threepipeproblm 2 days ago 2 replies      
I believe this hurts some clients of mine, who were looking at it as a possible answer. They have been spending significant $$$ in the US economy, developing a product that it's unlikely any domestic firm could, and struggling to find a way to stay here so that they have access to American talent. They are currently here on an importer's license or something, which makes it illegal for them to visit their home country.

While I don't want to be taken for one who is just losing their mind over Trump (since they have all typically seemed pretty lame to me), xenophobia is actively harmful to US economic interests.

7
goodroot 2 days ago 4 replies      
This is wonderful for Canada. Vancouver tech has already seen a nice bump from the American political hostility towards immigrants.
8
malchow 2 days ago 6 replies      
Is it a "blow" if the policy was never actually in effect?
9
luckydata 2 days ago 1 reply      
I'm no fan of Donald Trump, but this "buy your own visa" scheme always left me very perplexed. I was trying to figure out who would benefit from it, and aside from a few wealthy, mostly Asian citizens, I couldn't see the reason for it.

I'm glad it's dead.

p.s.: Before you call me a xenophobe, keep in mind I'm a foreign-born US citizen; I immigrated through the normal procedures.

10
dmode 2 days ago 0 replies      
A lot of people are complaining about the original rule itself, but as someone who deals with US immigration laws every day, I can tell you why it is the way it is. Immigration law is defined by Congress, and there is very clear legislation on how immigrants can be granted visas and citizenship. Given that, the executive branch has very little room to maneuver on visa rules. Thus the startup visa rules were written in a way to stay within the context of the original Congressional intent of creating the EB visa. Otherwise, it would get challenged in court (which every new visa rule does, including the H4 visa EAD). I was hoping that with Congressional majorities, Trump could actually create a legal start-up visa akin to what we have in other countries. But I guess that won't happen anymore.
11
geff82 2 days ago 2 replies      
Berlin will have a place for you ;)! Honestly, just more bad news from a country that has now begun to close its doors, content to be happy alone. Instead of embracing change and new people, it now makes it extremely difficult for anyone to legally come in. American culture became so widespread not because of some interesting closedness, but because it gave many, many people a chance and because it incorporated cultures into this enormous melting pot. I think conservative Americans have not yet realized to what extent the current Administration is destroying the glorious brand the country once was. Last time I checked the USA on Google Maps, there were large areas of empty land that could be the home of a hundred million more without disturbing anyone. At least the acre of Texas prairie that we own is a very lonely space to camp on...
12
xutopia 2 days ago 4 replies      
This will help Canadian startups a lot.
13
chrshawkes 2 days ago 1 reply      
I don't see how allowing somebody from overseas to come to the USA to create a company for 2 to 5 years while attracting hundreds of thousands in American dollars, only to be sent back to their country after the visa expires (5 years), is a good idea.

How does it benefit the USA to build foreign companies with United States dollars so they can empower their own country and their own people, while middle and lower class Americans continue to grasp at the scraps?

I think I side with Trump on this one.

14
kareemsabri 2 days ago 0 replies      
So how do YC companies with foreign founders live in the US? I know a few Canadian ones that do.
15
tbking 2 days ago 0 replies      
I see it more as a blow to the tech industry in the U.S., especially when most other countries, like Canada, are opening up to skilled workers.
16
idlewords 2 days ago 0 replies      
The startup visa is a bad idea (it makes immigration status conditional on funding, which adds perverse incentives to an already dysfunctional funding culture), but the fact that the tech industry can't even get small concessions like this from Trump vitiates the argument that it's worth attending very public meetings with him.
17
ghostbrainalpha 2 days ago 0 replies      
So the rule was delayed by "Homeland Security"...

Does this mean that they are protecting us from terrorists? Because I would have thought the protecting American jobs argument was stronger.

18
ajsharp 2 days ago 0 replies      
Mediocre US-born programmers, rejoice!
19
bharadwajk 2 days ago 1 reply      
Fk it, lets move to china
20
hildaman 2 days ago 8 replies      
The US skills based immigration system is based on bringing in indentured labor to enrich corporations.

L-1 visa holders _cannot_ change jobs. H-1B visa holders have to go through an expensive and cumbersome process to change jobs, which effectively restricts their job mobility.

Obviously, outside super-hot job markets like silicon valley, US workers have a hard time competing with indentured labor.

This visa would have created a brand new class of "entrepreneur" indentured to deep-pocketed VC firms. They would actually be worse off than H-1B and L-1 workers because not only would they be beholden to the VC firms for funding (and thereby their jobs) in the United States, but they would also have to part with their ideas.

No nativist here - just saying how things are working on the ground once you shave off the corporate propaganda.

21
pikzen 2 days ago 1 reply      
22
elmar 2 days ago 4 replies      
How is it possible that Trump, an entrepreneur himself, doesn't see the stupidity of killing this "Startup Visa"? I am speechless.
23
bharadwajk 2 days ago 1 reply      
We need a parallel world without borders. A way businesses can operate without govts meddling. Ethereum? Bitcoin? VR?
24
pbreit 2 days ago 0 replies      
Not sure how it's that much of a "blow" since the program hasn't even started yet.
30
Pell - A simple and small rich-text editor for the web github.com
298 points by thmslee  18 hours ago   94 comments top 26
1
marijn 15 hours ago 7 replies      
Nonsense, I've written a much smaller one: `function tinyEditor(element) { element.contentEditable = true }`

That, plus a few crude buttons, is all this does. As several other comments point out, there are good reasons why real WYSIWYG packages are bigger: the user experience of working with a plain contentEditable element is still terrible, and the output HTML is still a complete mess. If you don't care much about that, you don't need an editor component, since setting an attribute and wiring up some buttons to call `execCommand` is easy enough to do from scratch.

Disclaimer: I work on one of those 'bloated' editors, http://prosemirror.net , and find it a little annoying when someone implies their crude afternoon hack is somehow equivalent and we're crazy for putting in all that effort.

2
Tade0 17 hours ago 1 reply      
As a person who used to work for a company developing one of the more popular in-browser text editors I can tell you that all that "bloat" is there so that you'll get consistent results across browsers.

Unfortunately there's no way around this short of not using contenteditable - which is even worse sometimes.

Most editors feature customized builds which let you reduce the footprint of the editor. If you're looking for quick gains, first and foremost disable support for pasting from other applications, e.g. MS Word. It's always a major feature.

3
anilgulecha 18 hours ago 5 replies      
This is cool :)

That said (and not to take thunder away from your page), this is possible because the HTML spec itself allows for any element to be made into a "WYSIWYG" editor by adding the "contenteditable" attribute.

What most other larger editors are doing is working around some of the horrendous non-standardized versions of execCommand which varies across browsers.

Ideally a web standards body should come out with a <input type="richtext"> element, which is controlled through a sane API, and the spec spelled out to support everything by default. And get rid of the bane that is rich text editing on the web.
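
For anyone who hasn't played with it, the minimal wiring being alluded to here is roughly the sketch below (an illustration only: the element ids and button set are made up, and execCommand is the same non-standardized, now-deprecated API whose cross-browser quirks this thread is complaining about):

  <div id="toolbar">
    <button data-cmd="bold"><b>B</b></button>
    <button data-cmd="italic"><i>I</i></button>
    <button data-cmd="insertUnorderedList">list</button>
  </div>
  <div id="editor" contenteditable="true"><p>Type here...</p></div>
  <script>
    // Wire each button to run execCommand against the current selection
    // inside the contenteditable region.
    document.querySelectorAll('#toolbar button').forEach(function (btn) {
      btn.addEventListener('click', function () {
        document.getElementById('editor').focus();
        document.execCommand(btn.dataset.cmd, false, null);
      });
    });
  </script>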

4
jaredreich 14 hours ago 2 replies      
Hi everyone, author of pell.js here, thanks for all the positive AND critical feedback. Hey, this is Hacker News, how boring would it be if everyone's comment was just "Cool" or "Good job"? I love hearing the gripes and individualized complaints of smart people that have worked on similar (and much larger) projects.

I'd like to mention a few things about pell.js and its goals:

- It was not made to be the most full-featured editor out there

- It was not made to fight with or worry about browser inconsistencies (browsers are converging, quickly too)

- It was made to demonstrate the underlying simplicity of something that looks like magic to most beginner developers (including myself) and to teach people how to use `contentEditable` and `execCommand` via a tiny, extremely readable codebase

- It was made for people who need just a basic WYSIWYG editor in their app/site/whatever and who care about bundle size (PWA's anyone?)

Over time, many of the inconsistencies and issues will get fixed or implemented in pell, but the goal of remaining tiny yet functional will always remain paramount.

5
notzorbo3 16 hours ago 2 replies      
It's great that it's small, but it suffers from the same horrible interactions that most other web editors also suffer from.

Try quoting the last paragraph in the editor. It's now impossible to ever escape from the quote. All new text at the bottom stays "quoted".

From a cursory test of the editor, it suffers from basically all the issues that contenteditable suffers from.

The reason other editors are so big is that they take editing behaviour into account.

6
betageek 17 hours ago 0 replies      
Also the editor that will output the most inconsistent HTML across browsers. There's a reason all those other editors try and fix contenteditable (and are therefore > 1k), for more info see https://medium.engineering/why-contenteditable-is-terrible-1...
7
jwr 16 hours ago 3 replies      
Every time I see a new WYSIWYG text editor, I wonder: do people really need to set single words in bold, italic, underline, and strike-through?

I feel like this is a leftover of the old times, back when first WYSIWYG editors appeared and these capabilities were impressive. I think these days what one needs is not bold/italic, but facilities for editing structure, inserting links, and positioning images, all according to a style template.

8
codebeaker 12 hours ago 2 replies      
I'm surprised to see no mention of the Trix editor here amongst the comments about how contentEditable is the root of all evil. Trix has a nice solution, from their readme:

  Trix sidesteps these inconsistencies by treating contenteditable as an I/O device: when input makes its way to the editor, Trix converts that input into an editing operation on its internal document model, then re-renders that document back into the editor. This gives Trix complete control over what happens after every keystroke, and avoids the need to use execCommand at all.
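
(Very loosely, that input-to-model-to-re-render pattern looks something like the sketch below. This is an illustration of the general idea only, not Trix's actual code; the model shape and helper functions are invented for the example.)

  // Sketch of "contenteditable as an I/O device": intercept input before the
  // browser mutates the DOM, apply it to an internal model, re-render from it.
  var editorEl = document.querySelector('[contenteditable]');
  var model = { blocks: [{ type: 'paragraph', text: '' }] };

  editorEl.addEventListener('beforeinput', function (event) {
    event.preventDefault();                            // don't let the browser edit the DOM directly
    applyToModel(model, event.inputType, event.data);  // interpret the input as an edit operation
    editorEl.innerHTML = renderModel(model);           // re-render from the model
    // (a real editor would also restore the selection/caret after re-rendering)
  });

  function applyToModel(doc, inputType, data) {
    // Minimal illustration: only plain character insertion is handled.
    if (inputType === 'insertText' && data) {
      doc.blocks[doc.blocks.length - 1].text += data;
    }
  }

  function renderModel(doc) {
    return doc.blocks.map(function (b) { return '<p>' + b.text + '</p>'; }).join('');
  }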
Trix is from basecamp, and has a super healthy plugin/extension ecosystem - https://github.com/basecamp/trix

Full disclosure: not affiliated at all, but the rare times I need a WYSIWYG editor I tend to look for Trix and be very pleased.

9
sheepy 18 hours ago 0 replies      
TL;DR that is `data:text/html,<div contenteditable>` with buttons
10
oblio 13 hours ago 0 replies      
This is still relevant: https://www.joelonsoftware.com/2003/08/01/rick-chapman-is-in...

> When Pepsi-pusher John Sculley was developing the Apple Newton, he didn't know something that every computer science major in the country knows: handwriting recognition is not possible. This was at the same time that Bill Gates was hauling programmers into meetings begging them to create a single rich text edit control that could be reused in all their products. Put Jim Manzi (the suit who let the MBAs take over Lotus) in that meeting and he would be staring blankly. What's a rich text edit control? It never would have occurred to him to take technological leadership because he didn't grok the technology; in fact, the very use of the word grok in that sentence would probably throw him off.

My point: Microsoft has identified and satisfied this need 20 years ago. Although I'm a big supporter of the Open Web and open systems in general, sometimes I do long for a bit of dictatorship to get things done. But then I come to my senses :p

11
StavrosK 15 hours ago 1 reply      
As another alternative, I used SimpleMDE (https://simplemde.com/) for IPFessay (https://gitlab.com/stavros/IPFessay), and, while not this light, it's pretty good and featureful. It doesn't expose all the features I'd like it to expose, but I'm quite satisfied by it.
12
koehr 17 hours ago 1 reply      
I actually started a simple WYSIWYG editor that doesn't use contentEditable but instead a tiny subset of HTML and some virtual DOM like technology. Thanks for this post. It makes me want to work more on it!
13
kronos29296 18 hours ago 1 reply      
I am impressed by both the size and the features, and no dependencies. Kinda like a GUI Linux under 20 MB. People need to do more things like this instead of huge bloated single-page web apps with features that nobody wants.

Great project.

14
_ao789 18 hours ago 2 replies      
1k does sound quite appealing. But: most of the time a WYSIWYG editor is called for is in 'admin pages', where next to nobody gives a shit about page load time. It's more about CRUD performing as expected.
15
Ciantic 16 hours ago 0 replies      
Copy & Pasting needs some work. Usually the biggest hurdle is people pasting content from other pages and the editor can't sanitize it properly. This doesn't appear to do any cleaning, which in practice never works because regular people do not know how to use the plain pasting mechanism (Ctrl+Shift+V instead of Ctrl+V).
16
agentgt 13 hours ago 1 reply      
I have always wanted a rich text editor where you could see the markdown as well as the formatting it produces at the same time (and in the same edit pane... ie not realtime rendering in a pane above).

That is if you typed:

 **blah**
It would show blah and still show the asterisk but in bold (apologies for using code format but HN will strip the asterisks).

There are of course some editor plugins, and IDEs that do this but I haven't seen one on the web.

EDIT Oh apparently there is stackedit (I hadn't googled in awhile).

17
dhosek 11 hours ago 1 reply      
What I really want is a WYSIWYG/M editor which saves/reads from markdown. I suspect that one or more of the various editors out there can do it, but it seems that all I can find are editors that will substitute the wysiwyg part for MD rather than the HTML part.
18
oneeyedpigeon 17 hours ago 1 reply      
Appears to fail my semantics test right off the bat. Typing "Line one\nLine two" results in:

 Line one <div>Line two</div>

19
fovc 13 hours ago 0 replies      
What I really wish for is not a way to make textarea WYSIWYG, but a much lighter way to add bold/italic/underline to text inputs, in such a way that I can load lots of these in a single page. It sounds like I have a very niche need, since I haven't found anything like this
20
sethammons 14 hours ago 0 replies      
A tangent: does anyone have recommendations on a WYSIWYG editor that allows easy math notation?
21
kragen 6 hours ago 0 replies      
You can't pell at that lutt. It's crenned with glauds.
22
xchip 17 hours ago 1 reply      
Fantastic, I was in fact wondering why other editors were so big. I love it!
23
erikb 13 hours ago 0 replies      
Um, the simplest is a textbox and a send button.
24
h2onock 18 hours ago 0 replies      
Looks great, less bloat on the Internet is what we all need.
25
superqwert 16 hours ago 0 replies      
That min file could easily be minimised further....
26
chrismorgan 15 hours ago 1 reply      
Just for the fun of it, I dove in and manually minified the code further, without changing any semantics at all. With the following, I got it down from 2878 bytes to 2010 bytes which is a 30% saving:

 !function(e,t){"object"==typeof exports&&"undefined"!=typeof module?t(exports):"function"==typeof define&&define.amd?define(["exports"],t):t(e.pell={})}(this,function(e){"use strict";var t,n=Object,i=document,o=prompt,r="pell-",a="button",u="className",l="appendChild",d="createElement",c="insert",s="rderedList",f="Horizontal",p="formatBlock",m="Enter the ",b=n.assign||function(e,i,o,r){for(i=arguments,t=1;t<i.length;t++)for(r in o=i[t])n.prototype.hasOwnProperty.call(o,r)&&(e[r]=o[r]);return e},g=function(e,t,n){return{icon:e,title:t,result:n}},h=function(e,t){return i.execCommand.bind(i,e,!1,t)},k=function(e){return/^https?:\/\//.test(e)?e:"http://"+e},L={bold:g("<b>B</b>","Bold",h("bold")),italic:g("<i>I</i>","Italic",h("italic")),underline:g("<u>U</u>","Underline",h("underline")),strikethrough:g("<s>S</s>","Strike-through",h("strikeThrough")),heading1:g("<b>H<sub>1</sub></b>","Heading 1",h(p,"<H1>")),heading2:g("<b>H<sub>2</sub></b>","Heading 2",h(p,"<H2>")),paragraph:g("","Paragraph",h(p,"<P>")),quote:g(" ","Quote",h(p,"<BLOCKQUOTE>")),olist:g("#","Ordered List",h(c+"O"+s)),ulist:g("","Unordered List",h(c+"Uno"+s)),code:g("&lt;/&gt;","Code",h(p,"<PRE>")),line:g("",f+" Line",h(c+f+"Rule","<PRE>")),link:g("","Link",function(){(t=o(m+"link URL"))&&h("createLink",k(t))}),image:g("","Image",function(){(t=o(m+"image URL"))&&h(c+"Image",k(t))}),undo:g("","Undo",h("undo")),redo:g("","Redo",h("redo"))};e["default"]={init:e.init=function(e){var o=e.classes,c=e.actions,s=i.getElementById(e.root),f=i[d]("div"),p=i[d]("div");p.contentEditable=!0,f[u]=o.actionbar||r+"actionbar",p[u]=o.editor||r+"editor",p.oninput=function(t){return e.onChange&&e.onChange(t.target.innerHTML)},s[l](f),s[l](p),(c?c.map(function(e){return"string"==typeof e?L[e]:b({},L[e.name],e)}):n.keys(L).map(function(e){return L[e]})).forEach(function(e){t=i[d](a),t[u]=o.button||r+a,t.innerHTML=e.icon,t.title=e.title,t.onclick=e.result,f[l](t)})}},n.defineProperty(e,"__esModule",{value:!0})})
However, when you consider gzipping, this represents savings of roughly 1216 bytes (it depends a little on whether the files are noeol and whether you avoid file metadata going in the gzip stream; pro tip: make your gzipped files smaller with `gzip < x > x.gz` instead of `gzip x`). Quite a few of the latter "save another one or two bytes" tricks that I employed not only increase runtime trivially (e.g. concatenating two string literals instead of using one string literal), but they also increased gzip size by four or more bytes. (The tricks for the strings `Enter the `, `Horizontal`, `rderedList` and `insert` had saved seven bytes ungzipped at the cost of about 21 bytes gzipped, and `pell-` and `button` had saved one byte at the cost of eleven gzipped.) The lesson there is: gzip is pretty good, and deduplication is normally a waste of time in tiny files! I was hoping to get it under 2048 bytes raw and 1024 bytes gzipped; I achieved 2048 bytes raw, but after undoing some of the silly tricks gzipped is still 60 bytes off.

Of course, if you decide to abandon AMD/CommonJS support, you can quickly whip 210 bytes (100 gzipped) off, but that's a change in semantics so I can't count that despite it getting it under 1000 bytes.

       cached 14 July 2017 02:11:01 GMT