Hacker News with inline top comments - Best - 9 May 2017
1
Uber faces criminal probe over software used to evade authorities reuters.com
881 points by techlover14159  4 days ago   558 comments top 40
1
gotothedoctor 4 days ago 12 replies      
Seems like a lot of people are confused/have questions about why Uber is being criminally investigated by the Department of Justice.*

Here's why:

1. Uber is subject to the laws in the jurisdictions in which it operates. Evading authorities is textbook obstruction of justice. Not only did Uber build software that they used to evade authorities & break local laws in multiple states and countries, but they profited from it (which has a variety of other RICO implications.)

2. Sure, corporations are people too, but, nonetheless, only people engage in civil disobedience. Relatedly, for the courts, a company that profits from violating local laws is not a protester or a freedom fighter battling injustice; it is a criminal enterprise.

3. Charging Uber under RICO would by no means be unusual or a stretch; this is a quite run of the mill application of these laws. (eg see Preet Bharara's RICO prosecutions: https://www.google.com/search?q=preet+bharara+rico+prosectio... )

4. This is unquestionably a federal matter, within the DOJ's jurisdiction. Uber operates across state lines--and used Greyball in multiple jurisdictions. That said, Uber could & likely will face criminal investigation in other jurisdictions.

5. Finally, this is definitely not Trump's revenge on Travis. Not only does it simply not work that way--USAs are independent & it'd be beyond illegal, but this specific USA was appointed by Obama. (There were two Trump didn't fire; USA Stretch is one of them)

(*And, yes, I am a lawyer. And, many years ago, I worked at the Department of Justice)

2
naskwo 3 days ago 7 replies      
Last weekend, I visited Hamburg with my wife. I was surprised when I was told that I couldn't catch an Uber. However, on each German taxi (you know, the beige ones) there was a sticker prompting me to download the "EUTaxi" (or similarly named) app, which I did.

Brilliant. I was able to summon a car within minutes, and this app also allows for paying via the app.

Uber's biggest threat is, IMO, the creation of a well-working "push to ride & pay" taxi app in other countries that are as well organised as Germany.

For me as a consumer, I couldn't care less whether I download Uber or EUTaxi, as long as I get my ride on time and with a professionally licensed driver.

3
samrap 4 days ago 10 replies      
At first, it seemed like legitimate software to assess spam until:

> For example, it mined credit card information to see if the owner was affiliated with a credit union used by police and checked social media profiles to assess the likelihood that the person was in law enforcement.

Yikes. I know people who have the mentality that this sort of thing is OK. Whether you're a startup that never makes it or one worth billions, at some point this kind of stuff surfaces. You can't run a successful company and get away with this stuff, especially as a startup, when everyone is out to get you even more.
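For anyone curious what a check like that might look like mechanically, here's a minimal sketch in Python. Uber's actual code has never been published, so everything below (the function name, the BIN list) is invented illustration of the general technique, not Greyball itself:

    # Hypothetical sketch. The first six digits of a card number (the BIN,
    # or bank identification number) identify the issuing institution, so
    # flagging accounts tied to a credit union used by police can be a
    # simple set-membership test. These BINs are made up.
    FLAGGED_CREDIT_UNION_BINS = {"411111", "422222", "433333"}

    def card_looks_like_law_enforcement(card_number: str) -> bool:
        bin_prefix = card_number.replace(" ", "")[:6]
        return bin_prefix in FLAGGED_CREDIT_UNION_BINS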

I'm still waiting for the big one that makes me quit using Uber though.

4
nostromo 4 days ago 9 replies      
I'm not a fan of Uber, but it seems like it's their right to decide who is allowed to use their service. (Barring protected classes, of course.)

If a regulator started creating fake user accounts in order to scrape Uber's data, I don't see why Uber can't put a stop to that.

And if a legislator is a vocal critic of Uber, I don't see why Uber should be forced to allow them to use their service.

5
pimmen89 3 days ago 0 replies      
I see the civil disobedience argument thrown around here now. Are you seriously saying that running a business without complying with regulations is a civil right?

Yes, the regulations can be argued to be wacky, but I don't see how they infringe on civil rights. Rosa Parks couldn't stop being African American, but Uber can switch to another business plan.

6
sillysaurus3 4 days ago 1 reply      
It's so strange how quickly all of this happened.

Actually, I guess it's been a while. https://twitter.com/dhh/status/504374011711594496 was in 2014, wow.

7
ianamartin 4 days ago 1 reply      
Kalanick must have known this was on its way when he backed out of the Code Conference a few days ago. Even with just the events of the last few months, he was already going to be on the hot seat facing Kara Swisher and Walt Mossberg.

And you know neither of them were going to pull any punches.

John Gruber quipped that cancelling that interview was probably the smartest thing Kalanick has done in a while. I see that point, but I slightly disagree. If Uber could come up with some kind of decent response to all of the recent shitstorms including this one, that would be a great platform to spread the message.

Cancelling reads to me like they are afraid to answer tough questions because they have no good answers.

On the other hand, I wouldn't want to tangle with Kara Swisher in particular. She's tenacious; she doesn't care who you are or how big your company is; she will tear your intestines out your butthole and feed them to you while the entire technology world watches. I love her. She and Walt are shining examples of a free press holding the feet of the powerful to the fire when needed.

The rest of the media could learn a thing or two from them.

But back to the point, yeah, I bet he's glad he doesn't have to respond to this as well.

8
linkregister 4 days ago 6 replies      
This is really big; if they were misusing credit card records like that, they risk losing their PCI accreditation. There is probably some criminal aspect to violating PCI as well. Does anyone here know more about how PCI works?
9
xixi77 4 days ago 5 replies      
What part of this, exactly, is criminal? I mean, isn't any business free to discriminate and refuse service to anyone they like (for example government employees, hackers, or Democrats), as long as those people are not in one of several protected categories like sex/race/national origin/etc.? Or is this violating consumer privacy laws, given how the CC info is used?
10
InclinedPlane 2 days ago 0 replies      
Uber has nowhere to hide here. Let's say you're an ordinary schlub engaging in illegal activity at work: what's the dumbest possible thing you could do? Leave evidence, of course, or worse, create evidence. Such as talking about it over email. That would be super, super dumb.

Let's look at the stratospheric levels of dumbness that Uber got up to here. This is a handy checklist of things not to do if you're engaging in criminal activity:

Document your crimes by talking about them openly on official, archived communication channels such as email.

Make the criminal activity official corporate policy.

Write software to support your criminal activity, with no reasonably believable cover story.

Give your criminal conspiracy a project codename.

Even garden-variety street gangs aren't this idiotic. Imagine the police pulling someone in for questioning and opening up their bag to find a notebook. Page 1 of the notebook begins with this heading: "Project Keys: Smuggling Heroin into the United States". Page 2 of the notebook is an extended description of the exact methods used to smuggle heroin past border security. Page 3 of the notebook is a list of dates, times, and individuals who have smuggled heroin into the US. And so on. No drug dealer is that stupid, because a notebook like that would put them in jail for a very long time if it fell into the hands of law enforcement. And yet, here we are: Uber really is that stupid.

11
stepitup 4 days ago 6 replies      
What's really strange is that I bet a good percentage of the people reading this comment will know exactly what I mean when I refer to a "startup that employs mafia tactics and forces businesses to pay a protection fee, or actively fucks them up and attempts to hurt them."

Why aren't the authorities going after the ACTUAL, honest-to-god mafia startup? Sorry, I realize it's off topic here, but my first thought when I read "Justice Dept begins criminal probe" was: FINALLY!

We all know which company I'm referring to. And all I said was "employs mafia tactics and forces businesses to pay a protection fee, or actively fucks them up and attempts to hurt them". There's only one company like that in our "community". Why are they allowed to behave that way with impunity?

(Sorry to hijack this thread. Also, I have no disclaimer to make and am not associated with either company.)

12
rajathagasthya 4 days ago 1 reply      
Uber is having one hell of a year. Anyone have an idea whether it has significantly affected their recruitment of new people?
13
maverick_iceman 4 days ago 1 reply      
Intuitively, it seems that Uber committed fraud, but is there a specific law that they violated? It's not a rhetorical question; I'm genuinely curious to know.
14
hackuser 4 days ago 0 replies      
> If a ride request was deemed illegitimate, Uber's app showed bogus information and the requester would not be picked up, the employees told Reuters.

Ignoring the illegality of interfering with law enforcement, this is bad customer service. Not only would I resent the invasion of privacy, but that algorithm is going to have a meaningful number of false positives. It's a crappy way to deal with customers: if you think there's a problem, tell them you are denying service.

It's hard to imagine another business doing this. Does a restaurant just not bring food? An online store just not send the clothes? I know you are thinking about shadowbanning, which I also think is crappy, but at least it's much less consequential.

15
terribleplan 4 days ago 2 replies      
OK, what's the over/under on the people who coded this getting used as scapegoats? I'm at about 40/60 against.

Morality and legality in software are a massive frontier we hardly pause to think about.

16
beedogs 3 days ago 0 replies      
It's fun watching the world's worst unicorn startup being turned into dog food.
17
ausjke 3 days ago 1 reply      
Austin, where I live, rejected Uber, and it seems that was the right move.
18
bogositosius 4 days ago 7 replies      
Evading local regulators is a federal matter?
19
raspasov 3 days ago 2 replies      
How is "software used to evade authorities"

different from

.. using a "radar used to detect highway patrol"?

20
wonderwonder 3 days ago 0 replies      
Uber needs new leadership yesterday.

The current team did the impossible and bootstrapped the company to where it is via hustle, working in the grey (and, apparently, the black), and pure drive. They should be commended not for their methods but just for the fact that they actually succeeded in the environment they worked in, one stacked against them.

Uber is a real company now, and they need to hand it off to a proven leadership team that can guide it moving forward. Give the current leaders a golden parachute, but the time has come to transition.

21
rdxm 4 days ago 0 replies      
lol... I believe the proper phrase here is "chickens coming home to roost".
22
S3curityPlu5 3 days ago 0 replies      
Uber needs to go down; enough is enough already. How many times can you get away with criminal acts? Most likely they will just have to pay a fine again, though. It seems like corporations can get away with anything these days, and if they get caught they just pay a fine and go on operating.
23
dkarapetyan 4 days ago 2 replies      
So Uber is now Theranos?
24
snappyTertle 3 days ago 1 reply      
Sure, Uber may have broken the law (or evaded it); however, just because a law exists doesn't make it correct. We should also question whether the law should be there in the first place.
25
mirimir 4 days ago 0 replies      
If DPR had played a tighter game, maybe they could have managed a fully subversive ride service. But no matter how well executed, drivers would still be putting their vehicles on the line. Maybe look like hitchhiking?
26
jeffdavis 4 days ago 1 reply      
Doesn't seem very likely to be a serious problem for Uber. Basically bad PR.

If I were them, I would be more worried that they can't kill competitors well enough to ever be highly profitable.

27
bbcbasic 4 days ago 0 replies      
Another one! Can't wait to watch the Uber movie.
28
Jabanga 4 days ago 2 replies      
The laws Uber is accused of facilitating the violation of are themselves tyrannical abridgements of personal liberty. The entire exercise, from the local laws that punish Uber drivers to the DOJ's prosecution of Uber for helping its drivers evade economic persecution, is a disgusting exercise in majority-supported tyranny.
29
SCAQTony 3 days ago 1 reply      
Why is the Uber board allowing their CEO to continue to make sketchy ethical decisions? You would think they would ask Travis Kalanick to fall on his sword so the company can reboot and perhaps rebrand its image so it would seem less "sinister" towards regulation, its drivers, and the law.
30
ransom1538 3 days ago 1 reply      
Meanwhile, 36% of US homicides are never solved.
31
nafizh 4 days ago 1 reply      
Might this be blowback for Kalanick resigning from Trump's advisory council?
32
LordHumungous 3 days ago 0 replies      
Theranuber
33
kafkaesq 4 days ago 0 replies      
Finally.
34
amelius 3 days ago 1 reply      
It seems like their company motto is: be evil.
35
laughingman2 3 days ago 3 replies      
Lots of Anarcho-Kapitalists drunk on Ayn Rand Kool-Aid.

Travis is a criminal who has marketed his breaking of labor laws and civil laws as "disruption".

Maybe this is what happens when you get billions of investment dollars without earning them the old-fashioned way, by making profits. Financialism has blinded America.

36
thr0waway1239 3 days ago 0 replies      
Mark Zuckerberg must be feeling ecstatic. You don't need to outrun the negative PR bear, you only need to outrun your idiot fratbro friend.
37
topitguys 3 days ago 1 reply      
What?? Looks like the sharing economy is really taking a lot of hits. I read somewhere that there is a conspiracy to defame companies like Uber and Airbnb. Both are doing so well and helping people big time in such an overpriced market.
38
grandalf 4 days ago 5 replies      
Edit: Please don't down-vote this. Up-vote it and argue articulately against it!

The only thing more embarrassing for authorities than having propped up a corrupt medallion taxi system for decades is this sort of probe.

In order to disrupt the corrupt medallion system, it took billions of dollars and algorithms to evade the officials whom the corrupt medallion industry had tasked with leveraging small compliance technicalities to sabotage Uber in specific markets.

Every municipality that had a medallion system that was disrupted by Uber was effectively humiliated. Uber revealed just how inefficient and profligate those systems are.

The quality of car service everywhere Uber operates is vastly better than it was before Uber. We can now get a car in minutes and watch the ETA update as the driver approaches.

So many of us found it infuriating to call 333-TAXI (or the local equivalent) and be told "5 to 30 minutes" no matter how high demand was. Then, when the cab failed to show up after 40 minutes, a follow-up call would yield "it should be another 5 to 30 minutes", after which the operator would simply hang up.

It took Uber's vision (and YC's vision in supporting it) to move the world forward into the future. We should all realize that the officials Uber had to fool using its algorithms were the foot soldiers of backwardness and corruption.

39
Kinnard 4 days ago 4 replies      
>the Greyball technique was also used against suspected local officials who could have been looking to fine drivers, impound cars or otherwise prevent Uber from operating, the employees said

Doesn't Uber have a responsibility to protect itself and its drivers from fake riders looking to do harm even if they're government employees??

I think going after average or, in many cases, poor people trying to make a buck driving for Uber is an "aggressive tactic".

40
iloveluce 4 days ago 3 replies      
Everyone really is piling on Uber. I really hope this isn't some sort of Justice Department revenge against Travis for having left the Trump advisory council [0]

[0] https://www.nytimes.com/2017/02/02/technology/uber-ceo-travi...

2
Wikimedia Foundation spending wikipedia.org
1027 points by apsec112  1 day ago   387 comments top 58
1
cs702 1 day ago 13 replies      
Also known as "the institutional imperative." Quoting Warren Buffett's 1989 letter to shareholders:[1]

"My most surprising discovery: the overwhelming importance in business of an unseen force that we might call 'the institutional imperative.' In business school, I was given no hint of the imperative's existence and I did not intuitively understand it when I entered the business world. I thought then that decent, intelligent, and experienced managers would automatically make rational business decisions. But I learned over time that isn't so. Instead, rationality frequently wilts when the institutional imperative comes into play.

For example: (1) As if governed by Newton's First Law of Motion, an institution will resist any change in its current direction; (2) Just as work expands to fill available time, corporate projects or acquisitions will materialize to soak up available funds; (3) Any business craving of the leader, however foolish, will be quickly supported by detailed rate-of-return and strategic studies prepared by his troops; and (4) The behavior of peer companies, whether they are expanding, acquiring, setting executive compensation or whatever, will be mindlessly imitated.

Institutional dynamics, not venality or stupidity, set businesses on these courses, which are too often misguided. After making some expensive mistakes because I ignored the power of the imperative, I have tried to organize and manage Berkshire in ways that minimize its influence. Furthermore, Charlie and I have attempted to concentrate our investments in companies that appear alert to the problem."

[1] http://www.berkshirehathaway.com/letters/1989.html

2
ordinaryperson 1 day ago 8 replies      
Pretty sure this article could have been called "Wikipedia's Costs Growing Unsustainably" instead of the clickbait headline.

But overall this op-ed is misplaced. Running the leanest possible operation shouldn't be Wikipedia's focus at this stage in its lifecycle; improving the quality of its content should be.

Back in 2005 Wikipedia had 438k articles, and the focus was expanding the reach of its content to cover all topics; today the article count is 5.4 million, and it's quality that matters more. You can't improve quality based on crowd-sourcing alone (see: Yelp, Reddit, etc.), and the bigger it's gotten, the more of a target it's become for disinformation activists.

This attitude of budgets over value strikes me as a classic engineer's POV. The OP is nostalgic for a time when the site was run by a single guy in his basement, but could one guy handle the assault of an army of political zealots or Russian hackers? DDoS attacks? Fundraising? Wikipedia is arguably one of the most coveted truth sources the world over; protecting and improving its content is more important than an optimal cost-to-profit ratio.

If the OP has specifics, by all means, share them, but this kind of generalized fearmongering about budgets isn't spectacularly useful, IMHO.

3
avar 1 day ago 6 replies      
I was very actively involved in MediaWiki development & Wikimedia ops (though less so in ops) in 2004-2006, back when IIRC there were just 1-4 paid Wikimedia employees.

It was a very different time, and the whole thing was run much more like a typical open source project.

I think the whole project has gone in completely the wrong direction since then. Wikipedia itself is still awesome, but what's not awesome is that the typical reader / contributor experience is pretty much the same as it was in 2005.

Moreover, because of the growing number of employees & the need for revenue, the foundation's main goal is still to host a centrally maintained site that must get your pageviews & donations directly.

The goal of Wikipedia should be to spread the content as far & wide as possible; the way OpenStreetMap operates is a much better model. Success should be measured as a function of how likely any given person is to see factually accurate content sourced from Wikipedia, and it shouldn't matter if they're viewing it on some third-party site.

Instead it's still run as one massive monolithic website, and it's still hard to get any sort of machine-readable data out of it. This, IMO, should have been the main focus of Wikimedia's development efforts.

4
contingencies 1 day ago 1 reply      
Administrator since 2003 here. I have contributed to Wikipedia in various languages, Wikimedia Commons, Wikibooks, Wiktionary, Wikisource, etc. Three core points, particularly on Wikipedia:

(1) Bad experiences for new and established contributors mean less motivated contributors. This is due to factors such as too much bureaucracy, too many subjective guidelines, too much content being deleted (exclusionism), and an overwhelming mess of projects and policies.

(2) Not enough focus. By starting many projects the foundation has muddied its mission and public identity. In addition, it has broad and potentially mutually conflicting goals such as educating the public about various issues, educating the public about how to work with others to contribute to projects, asking the public for money, agitating governments and corporations for policy change and support, monitoring public behavior looking for evidence of wrongdoing, and engaging with education. Why not leave education to the educationalists, politics to the politicians, spying to the government and motivated contributors and fundraising to donors?

(3) Non-free academic media is hurting the project. Given that only a small number of editors have true access to major academic databases, it is often hard for contributors to equally and fairly balance an article.

Having said that, I still have tremendous respect for the project. Comparing its costs to those of the prior systems, which necessarily incorporated manual preparation, editing, production, and distribution of printed matter by 'experts', the opportunity costs for access alone justify the full expenditure. It's not a lot of money in global terms.

5
idorosen 1 day ago 2 replies      
Reproducing the table from the article with one extra column for clarity (the ratio of expenses to revenue), it looks like they're still operating with a very comfortable margin. Yes, the 19% margin is tighter than the roughly 50% margin of 12 years ago, and their existence depends on donations now more than ever ($23,463/yr is sustainable on a single engineer's salary; $65,947,465/yr is not), but Wikipedia and the other Wikimedia projects also serve a much wider audience and a broader purpose. This isn't scary in and of itself, especially if they've got cash reserves to give them time to tighten the belt later on if it becomes a problem, and someone in a leadership position is monitoring their finances to act if their burn rate gets too high. I've seen plenty of nonprofits with tighter margins survive and succeed.

  Year       Revenue      Expenses     Net Assets   Expense Ratio (1-margin)
  2003/2004  $80,129      $23,463      $56,666      29%
  2004/2005  $379,088     $177,670     $268,084     47%
  2005/2006  $1,508,039   $791,907     $1,004,216   53%
  2006/2007  $2,734,909   $2,077,843   $1,658,282   76%
  2007/2008  $5,032,981   $3,540,724   $5,178,168   70%
  2008/2009  $8,658,006   $5,617,236   $8,231,767   65%
  2009/2010  $17,979,312  $10,266,793  $14,542,731  57%
  2010/2011  $24,785,092  $17,889,794  $24,192,144  72%
  2011/2012  $38,479,665  $29,260,652  $34,929,058  76%
  2012/2013  $48,635,408  $35,704,796  $45,189,124  73%
  2013/2014  $52,465,287  $45,900,745  $53,475,021  87%
  2014/2015  $75,797,223  $52,596,782  $77,820,298  69%
  2015/2016  $81,862,724  $65,947,465  $91,782,795  81%
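The extra column is easy to recompute from the published figures; a quick sketch in Python (first and last rows shown, using the table above):

    # Expense ratio = expenses / revenue, rounded to a whole percent.
    rows = [
        ("2003/2004", 80_129, 23_463),          # -> 29%
        ("2014/2015", 75_797_223, 52_596_782),  # -> 69%
        ("2015/2016", 81_862_724, 65_947_465),  # -> 81%
    ]
    for year, revenue, expenses in rows:
        print(year, f"{expenses / revenue:.0%}")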
How sure are we that these numbers are accurate, anyhow?

6
elect_engineer 1 day ago 3 replies      
I am the author of this op-ed, which I will prove by posting a comment on my Wikipedia talk page [ https://en.wikipedia.org/wiki/User_talk:Guy_Macon#Hacker_New... ] before saving this. I am open to any questions, criticism, or discussion. BTW, as I noted in the op-ed, at the request of the editors of The Signpost, the version linked to at the top of this thread has fewer citations and less information in the graphic. The original version is at [ https://en.wikipedia.org/wiki/User:Guy_Macon/Wikipedia_has_C... ]
7
heydenberk 1 day ago 3 replies      
Wikimedia publishes independently-audited financial statements. Here's the latest one. https://upload.wikimedia.org/wikipedia/foundation/4/43/Wikim...

It's clear that salaries and awards and grants are driving the increase in cost. Maybe this is damning evidence of a decadent culture, as the author of this op-ed clearly presumes, but I doubt it. I would expect that Wikipedia's employees have been working very hard for a long time to keep the site running, and that they've cultivated expertise in governing the site in a way that avoids controversy and maintains credibility. I'd rather Wikipedia spend to retain long-tenured experts who have paid their dues than be an underpaid-college-graduate mill like so many non-profits are. It seems that they've done that, and that they waited until the organization was financially stable to do so.

8
cwyers 21 hours ago 4 replies      
It just... really bothers me that Wikipedia has grown into this massive thing, with $60 million in cash reserves and $31 million in salaries a year... and the people who aren't getting paid are the ones actually writing an encyclopedia. For that kind of money, you'd think they could actually pay people to write an encyclopedia, like Britannica used to. Now Britannica is circling the drain, Wikipedia is raking in money, and instead of paying the writers, there's this whole bureaucracy slurping up the cash and not giving it to the people doing the actual work. I hate all this digital sharecropping. I hate all these businesses based on paying millions of amateurs nothing or next to nothing for large volumes of low quality labor, making it up on volume, and paying a handful of people large sums of money to "administer" it. You'd think for that kind of money you could pay some writers.
9
aaronharnly 1 day ago 4 replies      
> ...I have never seen any evidence that the WMF has been following standard software engineering principles that were well-known when Mythical Man-Month was first published in 1975. If they had, we would be seeing things like requirements documents and schedules with measurable milestones.

This part of the critique seems a little off, doesn't it? I don't know the state of WMF engineering; it very well may have problems, and a complete lack of documentation or planning is not a good sign. But the particular artifacts mentioned here (requirements documents, schedules with milestones) come more from the pre-Agile, waterfall school of thought. Can anyone familiar with WMF engineering comment?

10
hn_throwaway_99 1 day ago 11 replies      
This op-ed is nonsensical. According to the author, every successful startup in history is "cancer". Wikipedia's costs have grown because its usage has grown exponentially (comparing costs to economy-wide inflation is particularly baffling).

If anything, I got from this article that Wikipedia has kept cost growth well below revenue growth, which is normally the sign of a healthy organization.

11
rawland 1 day ago 0 replies      
According to the Wikimedia strategic plan summary [1], it seems the spending is indeed in line with what was (mis?)planned:

https://strategy.wikimedia.org/wiki/Wikimedia_Movement_Strat...

And some salaries (from 990s):

https://meta.wikimedia.org/wiki/Wikimedia_Foundation_salarie...

--

Sources:

[1]: https://strategy.wikimedia.org/wiki/Wikimedia_Movement_Strat...

12
brudgers 1 day ago 1 reply      
I am not sure I understand what problem I am supposed to see when I look at the table. It looks to me like Wikipedia has income in excess of expenses and a reserve to cover unforeseen events. Remembering what Wikipedia was like in 2005, when Wales thought it didn't need employees, makes me think that Wales could not imagine the scale at which Wikipedia is an important asset of humanity today.

The essay's criticism of Wikipedia's software engineering methodology further down is similarly myopic. There's a scale of project at which waterfall greatly increases the odds of success: without a plan, a pyramid doesn't get nice sharp corners or come to a point, the dam does not turn the turbine, and Armstrong does not boot-print moon dust (never mind landing back in the vicinity of ships and helicopters and medical staff).

A big Wikipedia changes slowly. That's good. One person's rant doesn't cause it to pivot on a dime. One person's rant doesn't suddenly make it ad-funded. One person's rant doesn't suddenly remove a category of articles.

13
easilyBored 1 day ago 1 reply      
Nah: the companies that profit from Wikipedia (hello, Google!) and non-profits should pony up. Why shouldn't a person doing great work to make sure Wikipedia runs smoothly get paid when there's so much money going around?

Google should "adopt" 150 of their employees, MSFT 50, Facebook 50 and so on. It's tax deductible too...

14
praneshp 1 day ago 3 replies      
> The modern Wikipedia hosts 1112 times as many pages as it did in 2005, but the WMF is spending 33 times as much on hosting, has about 300 times as many employees, and is spending 1,250 times as much overall. WMF's spending has gone up by 85% over the past three years.

Can someone analyze this? It sounds a lot like he has a negative feeling about WMF, and threw in numbers to validate his opinion. I'd expect non-linear spending (in terms of pages hosted) at some point (because other things related to pages like links probably grow non-linearly).

15
skdotdan 1 day ago 2 replies      
Maybe a bit off-topic, but I don't think we should use the word "cancer" that way.
16
nullc 1 day ago 0 replies      
IMO, as someone who was directly involved at the time, the big mistake was relocating to SF. That one decision was the beginning of a cascade of ever-increasing spending which marked the end of an era of fiscally conservative operations.

There were many positive outcomes too: the increase in spending has resulted in many benefits, but not at all proportional to the increase in costs.

17
ppod 1 day ago 0 replies      
I don't know much about Wikipedia or the WMF, but the spending growth in that table is not exponential. And surely the correct way to measure the scale of the service provided is page views rather than number of pages.
18
mycall 1 day ago 1 reply      
Expenses (2016/2015) [1]

  Expense category                  2016        2015
  Salaries and wages              31,713,961  26,049,224
  Awards and grants               11,354,612   4,522,689
  Internet hosting                 2,069,572   1,997,521
  In-kind service expenses         1,065,523     235,570
  Donations processing expenses    3,604,682   2,484,765
  Professional service expenses    6,033,172   7,645,105
  Other operating expenses         4,777,203   4,449,764
  Travel and conferences           2,296,592   2,289,489
  Depreciation and amortization    2,720,835   2,656,103
  Special event expense, net         311,313     266,552
  Total expenses                  65,947,465  52,596,782

[1](https://upload.wikimedia.org/wikipedia/foundation/4/43/Wikim...)
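As a sanity check on the transcription, the 2016 line items do sum to the reported total; in Python:

    expenses_2016 = [
        31_713_961,  # salaries and wages
        11_354_612,  # awards and grants
        2_069_572,   # internet hosting
        1_065_523,   # in-kind service expenses
        3_604_682,   # donations processing
        6_033_172,   # professional services
        4_777_203,   # other operating expenses
        2_296_592,   # travel and conferences
        2_720_835,   # depreciation and amortization
        311_313,     # special event expense, net
    ]
    assert sum(expenses_2016) == 65_947_465  # matches "Total expenses"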

19
a_imho 20 hours ago 0 replies      
I'm very conflicted about the guilt-tripping donation appeals. On one hand, these organizations need cash to operate and they do a world of good; but on the other, reading their financial reports, I'm just not convinced they are efficient with the money.
20
wauzars 22 hours ago 0 replies      
This is one of the reasons I hated working there. They waste money and every year beg for more. Why do they even need an office in a prime San Francisco location? On top of that, they work on stupid projects, and internally the organization is run by imbeciles.
21
tuna-piano 1 day ago 0 replies      
As I understand it, Wikipedia's software and content are completely open source. You could make a foundation called BetterWiki and run it on $1M (if you get the SEO and the volunteers on your side). Is that right?

Now, if the people donating (Cards Against Humanity, mom-and-pops, etc.) feel good supporting Wikipedia, and there are more people who want to feel good supporting Wikipedia than Wikipedia needs, maybe Wikipedia should invest money in more missions that go along with its general values?

https://en.wikipedia.org/wiki/Wikipedia:Reusing_Wikipedia_co...

22
zby 1 day ago 0 replies      
The cancer metaphor seems very artificial: it is only about the exponential growth, with no underlying model. Without a model, exponential growth means nothing, because you don't know how to extrapolate the current trend.
23
antr 1 day ago 1 reply      
Can someone explain to me what assets worth $91M the WMF has? It seems like an awful lot of capex.
24
soheil 1 day ago 0 replies      
Why were there so many sites that failed before Wikipedia took off? It looks like the service Wikipedia provides should be part of any search engine; hence the Google love affair. Can the same model be applied to other things, i.e. user-created content + user-curated content + an indefinite feedback loop: the more users use it, the more content is created/curated (a percentage of new users go on to create/curate content)?
25
xchip 1 day ago 0 replies      
"I have never seen any evidence that the WMF has been following standard software engineering principles [...]. If they had, we would be seeing things like requirements documents and schedules with measurable milestones. This failure is almost certainly a systemic problem directly caused by top management, not by the developers doing the actual work."
26
Fiahil 1 day ago 1 reply      
> If we want to avoid disaster, we need to start shrinking the cancer now, before it is too late. We should make spending transparent, publish a detailed account of what the money is being spent on and answer any reasonable questions asking for more details.

I never searched for it, but I'm really surprised the foundation has not yet made its spending transparent. Aren't they supposed to be a non-profit?

27
lottin 1 day ago 0 replies      
In short: despite the alarming headline, financially Wikipedia is doing very well.
28
dandare 16 hours ago 0 replies      
Does anyone know if the Wikimedia Foundation releases detailed spending data? The financial reports are too high-level.

I would like to visualize Wikipedia's budget using wikiBudgets.org - something like this: https://us.wikibudgets.org/w/united-states-budget-2016

29
Svekax 1 day ago 1 reply      
> It could be the WMF taking a political position that offends many donors.

We have a winner. It's the same thing that's killing ESPN. It's not wise for companies to take political positions, on the left or on the right, that will alienate half their users. Just don't do it.

30
fasteo 14 hours ago 0 replies      
>>> We should make spending transparent, publish a detailed account of what the money is being spent on and answer any reasonable questions asking for more details

Isn't this an obligation for foundations?

31
bootload 1 day ago 0 replies      
"The modern Wikipedia hosts 1112 times as many pages as it did in 2005, but the WMF is spending 33 times as much on hosting, has about 300 times as many employees, and is spending 1,250 times as much overall."

Are there any comparable data on the costs of serving this kind of load (16 billion PV/month)? (I'm thinking Google/Amazon here)

"their poor handling of software development has been well known for many years."

So is the problem inefficiency in the code/HW setup? That is solvable. Any pointers to the hosting solution used?

32
devwastaken 1 day ago 0 replies      
Oh, I see it all the time: random extraneous projects and really niche projects that don't fit well in the Wikimedia ecosystem, built on top of the tech-debt-ridden remains of MediaWiki, while the core devs are tasked with making helper tools when they should be paid to keep improving MediaWiki's core functionality so anyone other than Wikipedia can use it.
33
theprop 22 hours ago 0 replies      
Whoa, nearly $100mn per year. Did they start paying all the "volunteers" who create all their content?
34
roadbeats 1 day ago 0 replies      
There are articles claiming Wikipedia is actually backed by investors and corporations who want to control it. WikiScanner results give a hint as to how.

https://www.theregister.co.uk/2012/12/20/cash_rich_wikipedia...

35
mastazi 1 day ago 1 reply      
> The modern Wikipedia hosts 1112 times as many pages as it did in 2005, but the WMF is spending 33 times as much on hosting

I stopped reading right there.

36
eriknstr 1 day ago 0 replies      
>Nothing can grow forever. Sooner or later, something is going to happen that causes the donations to decline instead of increase. It could be a scandal (real or perceived). It could be the WMF taking a political position that offends many donors. Or it could be a recession, leaving people with less money to give.

Does the Archive Team back up Wikipedia? They probably do, but if not, I guess they should.

37
rayiner 1 day ago 0 replies      
Almost all growth you see in the real world is exponential in its early stages. The numbers the article points to show revenues exceeding costs and a growing surplus. There is no story here other than Wikipedia is a rapidly growing organization and is doing it while running in the black.
38
vinceguidry 1 day ago 1 reply      
I have zero problems with Wikipedia's resorting to nagging the everloving fuck out of people who don't donate.

I just wish I could get them to not nag the everloving fuck out of me. Every year when I see the donation banner, I donate perhaps $20. I would maybe double that donation if it could stop fucking nagging me.

39
rubatuga 1 day ago 4 replies      
How true are his statements? I seriously don't know enough about the WMF or the fiscal policies in place to make even a guess.
40
fareesh 1 day ago 2 replies      
From an engineering point of view, would changing any part of their stack reduce the hosting burden? I did some tinkering with MediaWiki in late 2006; it was a bit convoluted at the time. I imagine it is a gigantic project now.
41
Markoff 15 hours ago 0 replies      
TL;DR: Wikipedia doesn't need your money; don't donate anytime soon.
42
yellow103 1 day ago 1 reply      
Perhaps they view themselves as a charity, and that spending is the goal.
43
postit 1 day ago 0 replies      
I keep wondering how much money Wikimedia would save if they implemented WebTorrent and progressive loading.
44
systematical 1 day ago 3 replies      
I honestly wouldn't have a problem with them running ads, as long as it was a single ad per page and non-obtrusive.
45
y1426i 1 day ago 1 reply      
Not surprising. Its quality has deteriorated and it's not dependable. Its high SEO ranking has led to it being used almost exclusively for pushing an opinion or agenda. Articles on popular topics eventually end up being biased towards a viewpoint rather than being factual and chronological.
46
sparkzilla 1 day ago 0 replies      
Good on Guy for bringing this up at last, but others have been questioning Wikipedia's fundraising for some time now [1][2][3] (note: I wrote the last one). In the same way that many charities exist to line the pockets of their staff while giving little to their cause, Wikipedia has become a bloated fundraising bureaucracy that happens to have an online encyclopedia. The emphasis on fundraising leads to corrupt actions, for example fundraising banners that say the site is in imminent danger of collapse despite it having over $100 million in the bank.

[1] https://www.dailydot.com/business/wikipedia-fundraiser-banne...
[2] https://news.slashdot.org/story/16/12/16/1631223/wikipedia-e...
[3] http://newslines.org/blog/stop-giving-wikipedia-money/

47
profalseidol 1 day ago 1 reply      
Wikipedia should transform into a blockchain dApp.
48
aerovistae 1 day ago 0 replies      
For anyone who hesitates like I did: WMF is the Wikimedia Foundation, not the World Monetary Fund. There is no World Monetary Fund; it's the IMF, the International Monetary Fund.
49
erikb 1 day ago 1 reply      
Hm? Have you seen the annoying "please sponsor us" ads over the last 5 or so years? And you didn't recognize the cancer back when that started?
50
glasz 1 day ago 0 replies      
I thought I would be reading about the beginning of the end of political censorship and revisionism on Wikipedia.

Anyway, WP is a business, and money is to be made with all kinds of shit.

51
freshflowers 1 day ago 1 reply      
> I have never seen any evidence that the WMF has been following standard software engineering principles that were well-known when Mythical Man-Month was first published in 1975. If they had, we would be seeing things like requirements documents and schedules with measurable milestones.

This person appears to be completely ignorant of the changes in software engineering since, let's say, the mid 90s. (Which kind of discredits everything else he writes.)

However, he's introduced as "He runs a consulting business, rescuing engineering projects that have gone seriously wrong."

So basically this is just a consultant's sales pitch.

52
lossolo 1 day ago 2 replies      
They are handling around 2.9k req/s. In April they had 7.6 billion page views. English articles: 5,400,448. The English Wikipedia download is 13 GB compressed (expands to over 58 GB uncompressed).
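Those figures are mutually consistent; a back-of-the-envelope check in Python:

    # 7.6 billion page views over a 30-day month is roughly 2.9k req/s.
    page_views = 7_600_000_000
    seconds_per_month = 30 * 24 * 3600  # 2,592,000
    print(page_views / seconds_per_month)  # ~2932 requests per second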
53
peterwwillis 1 day ago 2 replies      
ISPs use Wikipedia as a killer app (free access from mobile providers), so make them pay for it. Charge them for access, or demand they host an official mirror. In exchange you don't show begware to the ISP's customers. ISPs will end up competing to provide access, eliminating the bulk of hosting requirements.
54
darod 1 day ago 1 reply      
As someone who's recovering from cancer, this is a horrible title.
55
grandalf 1 day ago 2 replies      
Is this a case where the HN story originally appeared with a title matching the essay and was subsequently changed to a more descriptive title? Quite the opposite of the typical pattern :)
56
fictioncircle 1 day ago 2 replies      
Frankly, the problem with this op-ed is that there is a very simple solution available to anyone who is concerned about a "doomsday" scenario for Wikipedia:

https://en.wikipedia.org/wiki/Wikipedia:Database_download#Wh...

Mirror it and make it available read-only until that day comes, as a separate "disaster recovery" organization; raising the minimal funding required for that function (1 employee + 1 dedicated server) would be sufficient to the task. More if you wanted to make it usable, but at that point you aren't really a dumb mirror.

> pages-articles.xml.bz2 Current revisions only, no talk or user pages; this is probably what you want, and is approximately 13 GB compressed (expands to over 58 GB when uncompressed).

You just need to mirror this regularly and maintain a reasonable depth of backups (say, once a month for the past 12 months), as sketched below.
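A minimal sketch of that mirror job in Python. The dump URL is the standard Wikimedia dumps location; the local directory, filename scheme, and run-it-monthly-from-cron setup are just assumptions for illustration:

    import datetime
    import pathlib
    import urllib.request

    DUMP_URL = ("https://dumps.wikimedia.org/enwiki/latest/"
                "enwiki-latest-pages-articles.xml.bz2")
    MIRROR_DIR = pathlib.Path("/srv/wikipedia-mirror")  # assumed path

    def mirror_once(keep: int = 12) -> None:
        # Fetch the current dump, then rotate out anything older than
        # the newest `keep` copies (12 months of depth, per the above).
        MIRROR_DIR.mkdir(parents=True, exist_ok=True)
        stamp = datetime.date.today().strftime("%Y-%m")
        dest = MIRROR_DIR / f"enwiki-{stamp}.xml.bz2"
        urllib.request.urlretrieve(DUMP_URL, dest)  # ~13 GB download
        for old in sorted(MIRROR_DIR.glob("enwiki-*.xml.bz2"))[:-keep]:
            old.unlink()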

Then, whenever this terrible event destroys Wikipedia, you are in the clear, since you operate under the past licenses for the data, and people will need a new place to go if Wikipedia is genuinely destroyed.

57
Hambonetasty 1 day ago 1 reply      
It's a website, not a person. Keep it clean, fool.
58
nathan_f77 1 day ago 4 replies      
> After we burn through our reserves, it seems likely that the next step for the WMF will be going into debt to support continued runaway spending, followed by bankruptcy. At that point there are several large corporations (Google and Facebook come to mind) that will be more than happy to pay off the debts, take over the encyclopedia, fire the WMF staff, and start running Wikipedia as a profit-making platform. There are a lot of ways to monetize Wikipedia, all undesirable. The new owners could sell banner advertising, allow uneditable "sponsored articles" for those willing to pay for the privilege, or even sell information about editors and users.

I honestly wouldn't complain if Wikipedia was owned and monetized by Google. I think they would recognize the importance of Wikipedia and handle it very carefully. I also think there is a small and very vocal minority on Hacker News who would be outraged, but most of the world wouldn't think twice.

3
Net neutrality is in jeopardy again mozilla.org
874 points by kungfudoi  14 hours ago   347 comments top 41
1
shawnee_ 13 hours ago 9 replies      
> Net neutrality is fundamental to free speech. Without net neutrality, big companies could censor your voice and make it harder to speak up online. Net neutrality has been called the First Amendment of the Internet.

Not just harder. Infinitely more dangerous. Probably the scariest implications for NN being gutted are those around loss of anonymity through the Internet. ISPs who are allowed to sell users' browsing history, data packets, personal info with zero legal implications --> that anonymity suddenly comes with a price. And anything that comes with a price can be sold.

A reporter's sources must be able to remain anonymous in many instances, where the release of information about corruption creates political instability, endangers the reporter, endangers the source, or endangers the truth being revealed. These "rollbacks" of regulations make it orders of magnitude easier for any entity in a corporation or organization to track down people who attempt to expose its illegal actions / skirting of laws. Corporations have every incentive to suppress information that hurts their stock price. Corrupt local officials and governments have every incentive to suppress individuals who threaten their "job security". Corrupt PACs have every incentive to drown out that one tiny voice that speaks the truth.

A government that endorses suppression cannot promote safety, stability, or prosperity of its people.

EDIT: Yes, I am also referring to the loss of Broadband Privacy rules as they have implications in the rollback of net neutrality: https://www.theverge.com/2017/3/29/15100620/congress-fcc-isp...

Loss of broadband privacy: Yes, your data can and will be sold

Loss of net neutrality: How much of it and for how much?

2
guelo 9 hours ago 1 reply      
It's insane the number of comments on HN, of all places, that don't understand that the end of net neutrality is the end of the open web. People who never got a peek at CompuServe have no idea what fire we're playing with here. The open web is the most significant human achievement since the transistor, and we're about to kill it happily.
3
em3rgent0rdr 12 minutes ago 0 replies      
Title II "Net Neutrality" is a dangerous power grab -- a solution in search of a problem that doesn't exist, with the potential to become an engine of censorship (requiring ISPs to non-preferentially deliver "legal content" invites the FCC and other regulatory and legislative bodies to define some content as "illegal").

Title II "Net Neutrality" is also an instance of regulatory capture through which large consumers of bandwidth (such as Google and Netflix) hope to externalize the costs of network expansions to accommodate their ever-growing bandwidth demands. To put it differently, instead of building those costs into the prices their customers pay, they want to force Internet users who AREN'T their customers to subsidize their bandwidth demands.

4
jfaucett 13 hours ago 27 replies      
Does anyone else find the internet market odd? Up until now net neutrality and other policies have given us the following:

1. Massive monopolies which essentially control 95% of all tech (Google, Facebook, Amazon, Microsoft, Apple, etc.)

2. An internet where every consumer assumes everything should be free.

3. An internet where there's only enough room for a handful of players in each market globally, i.e. if you have a "project-management app" there will not be a successful one for each country, much less hundreds for each country.

4. Huge barriers to entry for any new player in many of the markets (no one can even begin competing with Google search for less than $20 million).

I think there's still a lot of potential to open up new markets with different policies that would make the internet a much better place for both consumers and entrepreneurs, especially the small guys. I'm just not 100% sure maintaining net neutrality is the best way to help the little guy and bolster innovation. Anyone have any ideas how we could alleviate some of the above-mentioned problems?

EDIT: another question :) If net-neutrality has absolutely nothing to do with the tech monopolies maintaining their power position then why do they all support it? [https://internetassociation.org/]

5
bkeroack 13 hours ago 9 replies      
I've written it before and I'll write it again (despite the massive downvotes from those who want to silence dissent): Title II regulation of the Internet is not the net neutrality panacea that many people think it is.

That is the same kind of heavy-handed regulation that gave us the sorry copper POTS network we are stuck with today. The free market is the solution, and must be defended against those who want European-style top-down national regulation of what has historically been the most free and vibrant area of economic growth the world has ever seen.

The reason the internet grew into what it is today during the 1990s was precisely that it was so free of regulation and governmental control. If the early attempts[1] to regulate the internet had succeeded, HN probably wouldn't exist and none of us would have jobs right now.

1. https://en.wikipedia.org/wiki/Communications_Decency_Act (just one example from memory--there were several federal attempts to censor and tax the Internet in the 1990s)

6
pbhowmic 13 hours ago 3 replies      
I tried commenting on the proceeding at the FCC site, but I keep getting "service unavailable" errors. The FCC site itself is up, but conveniently we, the public, cannot comment on the issue.
7
rosalinekarr 11 hours ago 0 replies      
The propaganda Comcast is tweeting right now is absolutely ridiculous. [1]

[1]: https://twitter.com/comcast/status/859091480895410176

8
SkyMarshal 10 hours ago 1 reply      
Looking at Comcast's webpage on this:

http://corporate.comcast.com/comcast-voices/comcast-supports...

They're arguing that Title II Classification is not the same as Net Neutrality, with the following statement:

"Title II is a source of authority to impose enforceable net neutrality rules. Title II is not net neutrality. Getting rid of Title II does not mean that we are repealing net neutrality protections for American consumers.

We want to be very clear: As Brian Roberts, our Chairman and CEO stated, and as Dave Watson, President and CEO of Comcast Cable writes in his blog post today, we have and will continue to support strong, legally enforceable net neutrality protections that ensure a free and Open Internet for our customers, with consumers able to access any and all the lawful content they want at any time. Our business practices ensure these protections for our customers and will continue to do so."

So if Title II goes away, where do those strong, legally enforceable net neutrality protections come from? Wasn't that the reasoning behind Title II in the first place: that it's the only effectively strong, legally enforceable way of protecting net neutrality (vs. other methods with loopholes)?

9
stinkytaco 12 hours ago 6 replies      
Honest question, but is Net Neutrality the answer to these problems?

A few weeks ago on HN, someone made an analogy to water: someone filling their swimming pool should pay more for water than someone showering or cooking with it. This seems to make sense to me; water is a scarce resource and it should be prioritized.

Is the same true of the Internet? I absolutely agree that ISPs that are also in the entertainment business shouldn't be allowed to prioritize their own data, but that seems to me an anti-trust problem, not a net neutrality problem. I also agree that ISPs should be regulated like utilities, but even utilities are allowed to limit service to maintain their infrastructure (see: rolling blackouts).

Perhaps I simply do not understand NN, and perhaps organizations haven't done a good job of explaining it, but I suspect these problems are best solved by the FTC, not the FCC.

10
pc2g4d 2 hours ago 0 replies      
The top comments here seem to misunderstand net neutrality. It's not about companies selling your browsing history (that was recently approved by Congress in a separate bill [1]) but rather about whether ISPs can prioritize the data of different sites or apps. IIUC net neutrality doesn't really provide any privacy protections, though it's likely good for privacy by creating a more competitive market that motivates companies to act more (though not always) in consumers' interests.

1: https://arstechnica.com/information-technology/2017/03/how-i...

11
Sami_Lehtinen 9 hours ago 0 replies      
My Internet connection contract already says that they reserve the right to queue, prioritize, and throttle traffic. This is used to optimize traffic. Doesn't sound too neutral to me. It's also clearly stated that some traffic on the network gets absolute priority over secondary classes.
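"Absolute priority" over secondary classes usually means strict-priority scheduling: the lower class is served only when the higher class's queue is empty, so it can be starved entirely. A toy model in Python (the class names are mine, not the ISP's):

    from collections import deque

    priority_q, secondary_q = deque(), deque()

    def next_packet():
        # Strict priority: secondary traffic goes out only when no
        # priority traffic is waiting.
        if priority_q:
            return priority_q.popleft()
        if secondary_q:
            return secondary_q.popleft()
        return None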

Interestingly, at one point a 100 Mbit/s connection wasn't nearly fast enough to play almost any content from YouTube. Maybe there's some kind of relation, maybe not.

12
wehadfun 13 hours ago 1 reply      
Trump's appointees disappoint me a lot. This guy, and the one over the EPA.
13
alexanderdmitri 12 hours ago 0 replies      
I think a great thing to do (if you are for net neutrality) is to pick specific parts of the NPRM filed with this proceeding and comment directly on it [1], to help do some work for the defense. I feel sorry for anyone who might actually need to address this document point for point to defend net neutrality.

I tried my hand at the general claim of regulatory uncertainty hurting business, then Paragraphs 45 and 47:

-> It is worth noting that by bringing this into the spotlight again, the NPRM is guilty of igniting the same regulatory uncertainty it repeatedly claims has hurt its investments.

-> Paragraph 45 devotes 124 words (94% of the paragraph), gives 3 sources (75% of the references in the paragraph), and cites a number of figures (100% of the explicitly hand-picked data) to make the claim that Title II regulation has suppressed investment. It then ends with 8 words and 1 reference vaguely stating "Other interested parties have come to different conclusions." Given the NPRM's insistence on both detail and clarity, this is absolutely unacceptable.

-> There are also a number of extremely misleading and unsubstantiated arguments throughout. Reference 114 in Paragraph 47, for example, is actually a haphazard mishmash of 3 references with clearly hand-picked data from somewhat disjointed sources and analyses. The next two references [115, 116] in the same paragraph point to letters sent to the FCC over 2 years ago by small ISPs, before regulations were classified under Title II. Despite discussing the fears raised in these letters, the NPRM provides little data on whether those fears were actually borne out. In fact, one of the providers explicitly mentioned in reference 115, Cedar Falls Utilities, has not in any way been subject to these regulations (they have fewer than 100,000 customers; in fact, the population of Cedar Falls isn't even half of the 100,000-customer exemption the FCC has provided!). This is obviously feigned concern for small ISP businesses, and again, given the NPRM's insistence on both detail and clarity, this is absolutely unacceptable.

[1] makes a great point on specifically addressing what's being brought up in the NPRM: https://techcrunch.com/2017/04/27/how-to-comment-on-the-fccs...

14
albertTJames 2 hours ago 0 replies      
Neutrality does not mean anything should be authorized: international law should allow ISPs to submit to judicial surveillance of individuals if those individuals are suspected of serious crimes, terrorism, pedophilia, black-hat hacking, or psychological operations/fake news. I don't think that because policemen can stop me in the street, my freedom is being violated. Moreover, the article is extremely vague and uses argumentum ad populum to push its case while remaining quite unclear about what is really planned: "His goal is clear: to overturn the 2015 order and create an Internet thats more centralized."
15
vog 12 hours ago 1 reply      
It is really a pity that in the US, net neutrality was never established by law, but "just" at the institutional level.

Here in the EU, things are much slower, and the activists were somewhat envious of how fast net neutrality was established in the US, while in the EU it is a really slow legislative process. But now it seems that this slower way is at least more sustainable. We still don't have real net neutrality in the EU, but the achievements we have so far are more durable, and can't be overthrown that quickly.

16
sleepingeights 12 hours ago 1 reply      
Many of these articles are missing an easily exploitable position. The key term is "bandwidth" which is the resource at stake. What is being fought over is how to define this "bandwidth" in a way that will be enforceable against the citizen and favorable to the corporation (i.e. "government").

One way they could do this is to divide it like they did the radio spectrum by way of frequency, where frequency is related to "bandwidth". The higher the frequency, the greater the bandwidth. With communication advances, the frequencies can be grouped just like they did with radio, where certain "frequencies" are reserved by the government/military, and others are monopolized by the corporations, and a tiny sliver is provided as a "public" service.

This way would be the most easily enforceable for them to attack NN and the first amendment, as it already exists by form of radio.

* It is already being applied by cable providers through "downstream/upstream", where your participation by "uploading" your content is viewed as inferior to your consumption of it. I.e. your contribution (or upload) is a tiny fraction of your consumption (or download).

* Also, AWS, Google and other cloud services charge your VPS for "providing" content (egress) and charge you nothing for consuming (ingress). On that scale, the value of what you provide is so minuscule it is almost non-existent compared to the value of what you consume.

tldr; NN is already partly destroyed.

17
justforFranz 11 hours ago 0 replies      
Maybe we should give up and just let global capitalists own and run everything and jam web cameras up everyone's asshole.
18
notadoc 6 hours ago 0 replies      
Is the internet a utility? And are internet providers a utility service? That's really what this is about.
19
weberc2 13 hours ago 10 replies      
I wasn't impressed with this article; it reads like fear mongering. More importantly, I don't think the fix is regulation, I think it's better privacy tech + increased competition via elimination of local monopolies. Do we really want to depend on government to enforce privacy on the Internet?
20
akhilcacharya 8 hours ago 0 replies      
It's interesting how much could have changed if ~175k or fewer people in the Great Lakes region had voted differently...
21
FullMtlAlcoholc 12 hours ago 1 reply      
Why does anyone want to give more power to Comcast or AT&T? Neither has ever been described as innovative... unless you count clueless members of Congress.
22
mbroshi 12 hours ago 2 replies      
I am agnostic on net neutrality (i.e. neither for nor against; just admitting my own lack of ability to assess its fallout).

I read a lot of sweeping, but hard to measure, claims about its effects (such as in the linked article). Are there any concrete, measurable effects that anyone is willing to predict?

Examples might be:

* Average load times for the 1000 most popular webpages will decrease.

* There will be fewer internet startups over the next 5 years than the previous.

Edit: formatting

23
jwilk 13 hours ago 2 replies      
If you want people to take your article seriously, then maybe you shouldn't put a pointless animated GIF in it.
24
tycho01 13 hours ago 1 reply      
I'm curious: to what extent could a US ruling on this affect the rest of the world?
25
arca_vorago 6 hours ago 0 replies      
Someone tell me again why we don't have a public internet backbone like we do roads?
26
WmyEE0UsWAwC2i 8 hours ago 0 replies      
Net neutrality should be in the constitution, safe from lobbyists and the politicians in office.
27
twsted 7 hours ago 0 replies      
It's sad that this article stayed in the top positions for so little time. And we are on HN.

But is this HN folks' fault?

At the time of my writing, "Kubernetes clusters for the hobbyist" - who thinks it is as important as this one? - with 470 fewer points and almost 300 fewer comments, both posted 6/7 hours ago, is six positions above.

28
c8g 9 hours ago 0 replies      
> Net neutrality is fundamental to competition.

so, I won't get 20 times faster youtube. fuck that net neutrality.

29
billfor 11 hours ago 0 replies      
I'm not sure putting the internet into the same class of service as a telephone made sense, given all the unintended consequences. Everyone is fine until they wind up paying $50/month for their internet and then see another $15 in government fees added to their bill. From a pragmatic point of view, I'm sure the government will always have the option to regulate it later on.
30
kristopolous 10 hours ago 0 replies      
Freedom is never won, only temporarily secured.
31
M2Ys4U 4 hours ago 0 replies      
(In the US)
32
1ba9115454 13 hours ago 13 replies      
How much of this can just be fixed by the free market?

If I feel an ISP is limiting my choices wouldn't I just switch?

33
MichaelBurge 13 hours ago 1 reply      
> The order meant individuals were free to say, watch and make what they want online, without meddling or interference from Internet service providers.

> Without net neutrality, big companies could censor your voice and make it harder to speak up online.

Hmm, was it ever prohibited for e.g. some Twitter user to write your ISP an angry letter calling you a Nazi, so they shut your internet off to avoid the headache?

I've only heard about "net neutrality" in the context of bandwidth pricing. It's very different if companies are legally required to sell you internet (except maybe short of an actual crime).

34
laughingman2 12 hours ago 2 replies      
I never thought Hacker News would have so many people opposing net neutrality. Is some alt-right group brigading this thread? Which is ironic, considering how they claim to care about free speech.
35
boona 13 hours ago 1 reply      
If the internet is fundamental to free speech, maybe it's not a good idea to hand its freedom over to state control, and in particular to an agency that has historically gone beyond its original mandate and censored content.

When you hand over control to the government, don't ask yourself what it would look like if you were creating the laws; ask yourself what it'll look like when self-interested politicians create them.

36
13 hours ago 3 replies      
37
marcgcombi 13 hours ago 1 reply      
38
grandalf 13 hours ago 2 replies      
39
JustAnotherPat 13 hours ago 0 replies      
November 8th was critical to the Internet's future, not today. People made their bed when they refused to get behind Clinton. Now you must accept the consequences.
40
bjt2n3904 12 hours ago 2 replies      
So, which is it, HackerNews? Are we OK with companies deciding what gets on the internet, or are we not? On one hand, we laud Facebook et al. for suppressing "fake news", and then we get upset when ISPs do the same.

Furthermore, the FCC has historically engaged in content regulation. Anyone wonder why there are no more cartoons on broadcast television? Or why the FCC is investigating Colbert's Trump jokes? If we're so concerned about content freedom, the FCC is not the organization to trust.

41
vivekd 11 hours ago 2 replies      
>The internet is not broken, There is no problem for the government to solve. - FCC Commissioner Ajit Pai

This is sooo true. If internet carriers were preferring some kinds of content, or censoring, or giving less bandwidth to certain content, or charging for certain content - and this was causing the problems described in the Mozilla article - then yes, we could have legislation to solve that problem.

What gets to me about the net neutrality movement is that the legislation they are pushing for is based on vague fears and panic. Caring about net neutrality has become some sort of weird Silicon Valley techno-virtue signaling.

If ISPs start behaving badly or restricting free speech, I would happily be on board with legislation to address that. This has not happened, and there is no evidence of any imminent threat of it happening. Net neutrality legislation is a solution to a vague, non-existent, speculative problem.

4
Prepack helps make JavaScript code more efficient prepack.io
823 points by jimarcey  5 days ago   220 comments top 36
1
chmod775 5 days ago 4 replies      
I just ran this on a huge JS project that has a quite intensive "initialization" stage (modules being registered, importing each other, etc.), and prepack basically pre-computed 90% of that, saving some 3k LOC. I had to replace all references to "window" with a fake global object that only existed within the outer (function() {..})() though (and move some other early stuff with side effects to the end of the initialization), to get it to make any optimizations at all.

Very impressive overall.

2
yladiz 5 days ago 5 replies      
I hate to bring this up whenever I see a Facebook project, but it still warrants saying: the patents clause in this project, like in others including React, is too broad. I really wish they made a "version 3" that limited the scope of the revocation to patents pertaining to the software in question (e.g. Prepack, React), rather than a blanket statement that covers any patent assertion against Facebook. While I suppose the likelihood of this occurring is small, I can imagine a company that holds some valid patents (say, something related to VR) unrelated to Prepack but infringed by Facebook, and that also uses software Facebook produces like Prepack, suing Facebook for infringement and then losing the right to use Prepack as a result. From my understanding these kinds of clauses are beneficial overall, but the specific one that Facebook uses is too broad.

Tangentially related: what would happen if you did sue Facebook for patent infringement, and continued to use this software?

3
dschnurr 5 days ago 2 replies      
This is cool - it's worth mentioning that you might be trading runtime performance for bundle size though, here's a contrived example to demonstrate: http://i.imgur.com/38CR3Ws.jpg
4
chime 5 days ago 3 replies      
This has promise but still needs more work. I added one line to their 9 line demo ( https://prepack.io/repl.html ) and it ballooned to 1400+ lines of junk:

  (function() {
    function fib(x) {
      y = Date.now(); // the useless line I added
      return x <= 1 ? x : fib(x - 1) + fib(x - 2);
    }
    let x = Date.now();
    if (x * 2 > 42) x = fib(10);
    global.result = x;
  })();
I understand Date might not be acceptable for inner loops but a lot of my code that deals with scheduling would benefit significantly if I could precompute some of the core values/arrays using a tool like prepack.
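For comparison, this kind of folding is what constant evaluation gives ahead-of-time-compiled languages; a rough C++ analogue of the fib demo (illustrative only - Prepack's own mechanism, partially evaluating the JS heap, is different):

  // With the impure Date.now() call removed, the whole computation is a
  // compile-time constant - the binary just stores 55, no runtime recursion.
  constexpr long fib(long x) { return x <= 1 ? x : fib(x - 1) + fib(x - 2); }
  static_assert(fib(10) == 55, "folded at compile time");
  long result = fib(10);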

5
NTillmann 5 days ago 17 replies      
Hi, I am Nikolai Tillmann, a developer on the Prepack project. I am happy to answer any questions!
6
jamescostian 5 days ago 1 reply      
The examples are very far from the JS I see and read, but this is definitely a very useful tool. It seems like gcc -Olevel. It would be interesting to incorporate some sort of tailoring for JS engines into this, like how a compiler might try to make x86-specific optimizations. For example, if you know your target audience mostly runs Chrome (or if the code is to be run by node), you might apply optimizations to change the code to be more performant on V8 (see https://github.com/petkaantonov/bluebird/wiki/Optimization-k... for example).

I love it and can't wait to use it on some projects!

7
ianbicking 5 days ago 2 replies      
A long time ago there was a theory about using Guile (the GNU Scheme) as a general interpreter for languages using partial evaluation: you write an interpreter for a language in Scheme, use a given program as input, and run an optimizer over the program. This turns your interpreter into a compiler. I played around with the concept (making a Tcl interpreter), and it even kind of worked, often creating reasonably readable output.

Prepack looks like the same kind of optimizer - it could be a fun task to write an interpreter and see if this can turn it into a compiler/transpiler.

8
xg15 4 days ago 1 reply      

  function define() {...}
  function require() {...}
  define("one", function() { return 1; });
  define("two", function() { return require("one") + require("one"); });
  define("three", function() { return require("two") + require("one"); });
  three = require("three");
--->

 three = 3;
There is a certain irony that now it's possible to do optimisations like that in javascript - a dynamically typed language with almost no compile time guarantees.

Meanwhile, Java used to have easy static analysis as a design goal (and I think a lot of its boilerplate is due to that goal), but the community relies so much on reflection, unsafe access, dynamic bytecode generation, bytecode parsing, etc. that such an optimisation would be almost impossible to get right.

9
untog 5 days ago 1 reply      
This should have a big impact on the "cost of small modules", as outlined here:

https://nolanlawson.com/2016/08/15/the-cost-of-small-modules...

Which is to say, one of its most effective use cases will be making up for deficiencies in Webpack, Browserify and RequireJS. Which I'm a little ambivalent about - I wish we could have seen improvements to those tools (it's possible, as shown by Rollup and Closure Compiler) rather than adding another stage to filter our JavaScript through. But progress is progress.

10
gajus 5 days ago 0 replies      
11
tyingq 5 days ago 1 reply      
How "safe" is it? I'm thinking, for example, of Google's closure compiler and the advanced optimizations, which can break some things.

Or roughly, if it compiles without errors, is it safe to assume it won't introduce new bugs?

12
bthornbury 5 days ago 2 replies      
Awesome project, the performance gains seem real, but why wouldn't these optimizations be happening at the javascript JIT level in the vm? (serious question)

React / javascript programming, is the most complex environment I've ever dug into, and it's only getting more complex.

create-react-app is great for hiding that complexity until you need to do something it doesn't support and then it's like gasping for air in a giant sea of javascript compilers.

13
vikeri 5 days ago 5 replies      
I was under the impression that V8 and the like are so optimized that this would give marginal gains. Would love to be wrong though. Do you have any performance benchmarks?
14
Kamgunz 5 days ago 1 reply      
Very interesting. Nobody has mentioned how formal and technical this README is; it goes into real detail about what the tool does, and even lays out future plans in three sections across 30 bullet points. One bullet point in the really-far-future section said "instrument JavaScript code [in a non observable way]" (emphasis mine), and that part was noted in several other bullet points. It seems to me every compiler/transpiler/babel-plugin changes JavaScript code in a non-observable way, no? Just a theory, but that undertone sounds to me like the ability to change/inject into JavaScript code undetectably, on the fly, in some super easy way.

Just another day at Facebook's office...

15
arota 5 days ago 2 replies      
This is exciting, and has a lot of potential to significantly improve JS library initialization time.

I wonder if this is the same project[0] Sebastian McKenzie previewed at React Europe 2016?

[0] https://www.youtube.com/watch?v=xbZzahWakGs

16
dandare 5 days ago 3 replies      
What is the business model for a tool like this? Who has the resources to spend man-years of work while also creating such a fantastic, simple yet comprehensive landing page?
17
drumttocs8 5 days ago 3 replies      
Coming from a non-CS background, I've always wondered why you can't "convert" code from one framework or paradigm to another. For instance, converting a project from jQuery to React. If you can define the outputs, why can't you redefine the inputs? That's what it seems like this project does... I suppose converting frameworks would be a few orders of magnitude harder though.
18
mstade 4 days ago 0 replies      
I'm happy to see there's an option to use this with an AST as input, more tools like this should follow suit. Hopefully it can then push us to a place where there's a standard JS AST such that we don't reinvent that wheel over and over. Babel seems to be winning here, but I don't think it matters so much which one wins so long as any one does.

This tool looks interesting, particularly its future direction, but I'm wary about efficiency claims without a single runtime metric posted. The claims may be true - initializing is costly, but so is parsing huge unrolled loops. For an optimization tool, I'd hope to see pretty strong metrics to go along with the strong claims, but maybe that is coming?

Interesting work, nonetheless!

19
aylmao 5 days ago 0 replies      
Facebook's javascript / PL game doesn't disappoint. This is awesome!
20
ericmcer 5 days ago 0 replies      
Pretty cool, it did not make much difference in my application size, as it has very little static data in it. It seems pretty rare to do something like:

 fib(2);
and more common to do:

 getInputOrHttpOrSomethingAsync().then(function(a){fib(a)});

21
iamleppert 5 days ago 6 replies      
Not a comment about the tool, which looks cool and well done.

It's sad that there are developers and projects who write the type of code that causes these sorts of performance trade offs. I stopped writing this kind of fancy code a long time ago when I realized it wasn't worth it. You're just shooting yourself in the foot in the long run.

I think static analysis performance optimization tools are great but a certain part of me thinks it just raises the waterline for more shitty code and awful heavy frameworks that sacrifice the user experience for the developer experience.

"Just run it through the optimizer" so we don't actually have to think about what a good design looks like...

22
frik 5 days ago 2 replies      
How does it compare to Google's Closure Compiler? It is considered by many best in class. It understands the code (it uses the Java-based Rhino JavaScript engine), while most alternatives (UglifyJS & co) just monkey-patch things. You can trust the Closure Compiler's output.

Edit: @jagthebeetle: have you tried "advanced mode"? It's really a game changer, but one should read the documentation before using it.

23
Waterluvian 5 days ago 0 replies      
What percentage of typical code is packable like this? What I really need is a way to easily determine, "is it worth bothering with a tool like this?"
24
kasper93 5 days ago 0 replies      
I think that just-in-time compilers are better at doing their thing. Sure, it is a nice project that can interpret and print preprocessed JS, but I think it might in fact not bring a speedup in most cases.

And the current state doesn't even know how to constant fold this loop.

  function foo() {
    const bar = 42;
    for (let i = 0; i <= bar; i++) {
      if (i === bar) {
        return bar;
      }
    }
  };

25
hdhzy 5 days ago 1 reply      
This looks very good indeed, but the lack of an initial data model severely limits the production usability of this tool. You can't use "document" and "window" ...

It's the same problem TypeScript has/had: for external libs you need definition files for it to work. Now if we had a TypeScript-to-assumeDataProperty generator, that would be VERY interesting!

26
kamranahmed_se 4 days ago 0 replies      
> helps make javascript code more efficient

https://github.com/facebook/prepack/issues/543

Are you sure?

27
kccqzy 5 days ago 0 replies      
This reminds me of Morte, an experimental mid-level functional language created by Gabriel Gonzalez. They both seem to be super-optimizing, that is partially executing the program in question. Of course it is a great deal easier to do in a functional language than JavaScript.

http://www.haskellforall.com/2014/09/morte-intermediate-lang...

28
KirinDave 5 days ago 0 replies      
I wonder what this would do to Purescript code?
29
jlebrech 4 days ago 0 replies      
I want something that can separate my code into what can be precompiled into wasm and what has to stay in JS. Maybe just insert comments so I can see what needs to be done.
30
Traubenfuchs 4 days ago 0 replies      
I can't get anything to work in it. Just for fun I put the non-minified vue.js source inside and I get:

  null or undefined
  TypeError
    at repl:537:23
    at repl:5:16
    at repl:2:2

31
reaction 5 days ago 0 replies      
Has anyone used this with webpack + reactjs?
33
avodonosov 5 days ago 0 replies      
How does one measure the performance improvement for a web page gained from such tools?
34
k__ 4 days ago 0 replies      
Just throw your webpack bundles in and be amazed.
35
Hydraulix989 5 days ago 4 replies      
36
iMark 5 days ago 0 replies      
The destination page looks uncomfortably like Webpack's.

Not the best idea, imho.

5
Puerto Rico files for biggest ever U.S. local government bankruptcy reuters.com
617 points by chollida1  5 days ago   459 comments top 33
1
knob 5 days ago 19 replies      
I have lived all my life in Puerto Rico, and as you can imagine, this issue is quite controversial. We owe money to the creditors... and what you do with a debt is pay it off. Yet the amount is so staggering that I wonder if it's actually possible.

As is typical, decisions by politicians placed us in this situation. Decade after decade, 4-year term after 4-year term, the government has spent money it does not have. Be one political party or the other, it is the same thing.

There was a big demonstration this past Monday, where various unions, groups, and students did a "Paro Nacional" (national strike). Truthfully, I don't think they accomplished anything, other than various idiots vandalizing property they don't own. We are in deep shit, and it's going to get worse.

Lots of people are leaving the island, which just compounds the problem (less revenue).

I don't leave because: it's where I was born, where I have lived my entire life, and it is, honestly, paradise.

Obligatory John Oliver's Puerto Rico segment: https://www.youtube.com/watch?v=Tt-mpuR_QHQ

What will happen? I have no idea. Good to see this on HN.

2
6stringmerc 5 days ago 4 replies      
All that text about Puerto Rico's financial difficulties, and not one mention of the rampant fraud that spun the debt up from a reasonable amount to the $XB that is likely to default - the result of graft, corruption, and fraud enabled by many US and international actors.

Want to take a guess how many of those responsible for the transactions, debt issues, and back-office fund transfers are in jail and all their assets seized to repay the debt, a la Civil Forfeiture for drug crimes in the US?

I feel bad for the Citizens of Puerto Rico, because here in Dallas the Fire and Police Pension fund is probably insolvent due to a combination of graft and incompetence as well, and those responsible in both these circumstances seem to just shrug it off and go on with life, their pockets lined more than 90% will ever see in 20 years of hard work.

Talking about the debt in Puerto Rico without firmly acknowledging the conditions that led to it is irresponsible in my opinion. It is hard to separate the two. And if we're going to talk about accountability for the latter, then we should nail those who committed the former first. Likelihood of that? Heh, yeah, I'm not optimistic - but that doesn't mean I can't espouse what I believe to be a justifiable alternate course.

3
grandalf 5 days ago 3 replies      
When issues of local government financial problems come up, I think it's important to remember that local governments are typically not able to undertake deficit spending the way the Federal government is.

If the US Federal government were prevented from running a budget deficit, we'd likely see a lot more solvency issues and financial failures of many sub-organizations within government due to bad budgeting. It would also be a lot harder for our leaders to start wars or get away with sloppy cost estimates of their grandiose ideas.

My argument is not that government should be small, simply that its spending should reflect its income and the results of its spending should be easily correlated with their cost.

Since the Federal government takes the lion's share of income taxes, local governments are constrained in their ability to generate income. Yet for most people, local governments provide the vast majority of the useful government services they enjoy.

So while it may seem that the Federal government is comparatively stable and responsible, the reality is that it's simply far less accountable to anyone and is able to use that lack of accountability to launder its reputation. States (or local governments) do not have the same luxury.

4
EternalData 5 days ago 3 replies      
I feel like local governments are going to be the first to really be affected by significant pension outlays that are not properly accounted for -- a lot of pension funds assume rates of return that are historically farcical. I think the industry average used to be about 8%.

"During the 20th Century, the Dow advanced from 66 to 11,497. This gain, though it appears huge, shrinks to 5.3% when compounded annually."

http://davidgcrane.org/?page_id=702

It doesn't augur well for the future of stable financial markets. Weak localities will fail, and eventually the states that have to support them. Puerto Rico is the canary in the coal mine.

5
creaghpatr 5 days ago 6 replies      
A few months ago I discussed this with friends and suggested the government subsidize cheap flights to Puerto Rico, say $30 for a round trip flight from Atlanta, scaled up depending on domestic distance.

This would cost taxpayer money but would drive a ton of tourism to Puerto Rico (albeit temporarily) and pump a bunch of money into their economy, defibrillator style. I've never been there but given a cheap round trip flight I would easily go and spend money.

The alternative appears to be some kind of bailout and/or debt restructuring and I don't see that working out in the long term, which would presumably cost taxpayers even more money.

6
rburhum 5 days ago 1 reply      
For the Spanish speakers of HN, NPR's Radio Ambulante has an excellent episode (called "Deuda" - Spanish for "debt") that they did last year explaining the complexity of the situation.

The short version is that a lot of the debt is held by the people of PR themselves in form of bonds that were sold when the economy was good.

The reason it was artificially good is that for a long period it had great US tax benefits compared with other US territories/states. During that time several corporations (particularly pharma) had factories and jobs there, and there was a boom in bond sales - a lot of predatory practices reminiscent of the US housing bubble were done there, too. Once the tax loophole was closed, corporations left the island, jobs went to shit, the bonds could not be repaid, and, well, you get the picture. Except that in this case, "not paying debt" literally means not paying a huge chunk of all the life savings that people poured (arguably even patriotically) into their own bond system (see the parallels to the housing crisis?).

You could technically boost the economy there by reintroducing similar tax loopholes, but it would be a temporary fix, and there would be significant struggles in the Senate and Congress... Taxation without representation sucks big time...

Link to podcast: https://16683.mc.tritondigital.com/NPR_510315/media-session/...

7
djsumdog 5 days ago 6 replies      
Puerto Rico should really be a state by now. Would being a state have given it any distinct advantages (or are there any specific problems in light of it not being a state) in this type of situation?
8
bradleysmith 5 days ago 0 replies      
I spent several months in Puerto Rico working on operations with Google's Project Loon, launching balloons out of Ceiba.

Seeing the island during the September 2016 power outage was eye-opening. It was admittedly a pretty bad event that spurred it, but portions of the island were without power for several days. Infrastructure development is definitely necessary, particularly considering the possibility of storm hits there.

It is a lovely island, I hope this manages to nudge along solutions from what seemed to be a stagnating problem.

9
oneguynick 5 days ago 0 replies      
As you hear about Puerto Rico the next few days, take some time to get smart on the history and dynamic. Highly recommend Congressional Dish Podcast that came out a few weeks ago. Very good - http://www.congressionaldish.com/cd147-controlling-puerto-ri...
10
sidmitra 5 days ago 0 replies      
There is an excellent episode on the podcast Radio Ambulante on exactly this. There are accounts of a few people living there, small business owners, etc., if I recall correctly.

http://radioambulante.org/en/audio-en/debt

English translation: http://radioambulante.org/en/audio-en/translation/translatio...

11
elihu 4 days ago 0 replies      
I'm not super knowledgeable, but this seems to me like a good thing. If investors lend money to a government that can't repay, they should lose some of that money. I don't think whole economies should remain in a state of economic slavery.

Is there some way in which this move might turn out bad for the people of Puerto Rico, aside from it probably being more difficult to raise debt in the future at a reasonable rate?

12
WillyOnWheels 5 days ago 0 replies      
The Marxist hidden in me says:

* listen to Richard Wolff's views on Puerto Rico: http://www.rdwolff.com/tags/puerto_rico

* listen to David Graeber: https://www.amazon.com/Debt-First-5-000-Years/dp/1612191290

13
gwbas1c 5 days ago 1 reply      
A lot of people forget that government debt is often funny-money. Defaulting on debt is very different than not paying your friend back.
14
kqr2 5 days ago 0 replies      
For a good intro to the Puerto Rico financial crisis, see this Planet Money podcast :

http://www.npr.org/sections/money/2016/04/01/472733338/episo...

15
kirse 5 days ago 0 replies      
Are there any investment opportunities amidst this news? I've always wondered how these massive bankruptcies play out.
16
ComodoHacker 4 days ago 0 replies      
TIL there are territories of the United States where the U.S. Constitution is not fully applicable.
17
ksec 4 days ago 0 replies      
Ok, so China's economy is on the verge of collapse (as the media likes to paint it); it is not sustainable, but they are carefully trying to control it.

The EU has Brexit, Greece (tiny compared to Puerto Rico)...

And now the U.S.

It seems we have more problems than we had after the money-printing machine turned to full power in 2008.

This may sound naive: did we actually have a recession in the past 20-30 years? We may have had contractions, but an actual recession? 2008 was like a come-and-go event, over in a very short time frame.

And an even more stupid question: is constant GDP growth sustainable? Or, if we include the lowered value of currency, have we been in the negative already?

18
justforFranz 5 days ago 0 replies      
Does anyone know how much the US federal govt is on the hook for?
19
danschumann 4 days ago 1 reply      
Their governor looks like Michael Scott. I've never understood why governments go into debt. It's just stupid.
20
nom 5 days ago 0 replies      
http://www.nationaldebtclocks.org

I'll just leave this here.

21
masterleep 5 days ago 3 replies      
Illinois badly needs to do this as well.
22
rrggrr 5 days ago 2 replies      
> As is typical, decisions by politicians placed us in this situation. Decade after decade, 4-year term after 4-year term, the government has spent money it does not have.

Every time any one of us votes for any politician who favors any deficit spending - and this is most of us - we give aid and comfort to this outcome. We all want fiscal responsibility until it's a cause that touches us emotionally.

23
faragon 4 days ago 0 replies      
Apple could buy that debt and rename the island to "Apple Island", etc.
24
cpr 5 days ago 0 replies      
As Maggie Thatcher said: eventually you run out of other people's money.
25
azinman2 5 days ago 0 replies      
On the bright side, perhaps it's now a great time to buy property in PR?
26
guelo 5 days ago 0 replies      
The PROMESA bill is disgusting. An unelected board filled with bankster-friendly technocrats will now overrule democracy in Puerto Rico. The island is in for a generation of recessions and low growth, as the entire economy will be permanently directed towards paying off bondholders.
27
hindsightbias 5 days ago 0 replies      
Why not sell off the island, or part of it, for seasteading?
28
digitalneal 5 days ago 1 reply      
Wonder how bad it would be if they didn't subsidize rum production so heavily.
29
_delirium 5 days ago 2 replies      
30
golemotron 5 days ago 0 replies      
Debt: The Next 5000 Years
31
kyleblarson 5 days ago 0 replies      
In 30 years bankruptcies like this one will look like peanuts compared to the collapse of massively underfunded government employee pension funds that will begin happening soon.
32
Taylor_OD 5 days ago 0 replies      
I was in Puerto Rico a year or so ago. I'm by no means an expert, but it seemed like an odd place to live. Mostly because it's not a US state, yet a good chunk of the people seem to want it to be.
33
mirekrusin 5 days ago 1 reply      
Should I cash in my USD now, or should it be safe for a few more months?
6
The Horror in the Standard Library zerotier.com
813 points by aw1621107  3 days ago   209 comments top 33
1
bluejekyll 2 days ago 2 replies      
OMG, as I was reading this I thought, "man, this reminds me of a bug I ran into with std::string back in 2000", A few sentences later, and this is also about std::string and the STL.

Mine was different though. After tracking down a memory leak that was happening with the creation of just a new empty string, I discovered that in the stdlib there was a shared pointer to the empty string, with a reference count of how many locations were using it (ironic, given that this was intended to save allocations). This was on Intel, and we had what was rare at the time: a multi-processor system. It turned out that the std::string empty-string reference count was just doing a vanilla ++ - no locking, nothing, the variable not even marked volatile.

A few emails with a guy in Australia, a little inline assembly to call a new atomic increment on the counter, and the bug was fixed. That took two weeks to track down, mostly because it didn't even cross my mind that it wasn't in my code.

From that point on, I realized you can't trust libraries blindly, even one of the most used and broadly adopted ones out there.
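To make the race concrete, here is a minimal sketch (hypothetical types, not the actual libstdc++ code) of why a vanilla ++ on a shared count loses increments on a multi-processor machine, and the std::atomic fix that modern C++ offers in place of hand-written inline assembly:

  #include <atomic>
  #include <iostream>
  #include <thread>
  #include <vector>

  struct RepRacy  { long count = 0; };              // plain ++: increments can be lost
  struct RepFixed { std::atomic<long> count{0}; };  // atomic ++: a locked add

  template <typename Rep>
  long hammer(Rep& rep) {
      std::vector<std::thread> workers;
      for (int t = 0; t < 4; ++t)
          workers.emplace_back([&rep] {
              for (int i = 0; i < 1000000; ++i)
                  ++rep.count;  // a data race for RepRacy, safe for RepFixed
          });
      for (auto& w : workers) w.join();
      return rep.count;
  }

  int main() {
      RepRacy racy;
      RepFixed fixed;
      std::cout << "plain long:  " << hammer(racy) << " (want 4000000)\n";
      std::cout << "atomic long: " << hammer(fixed) << "\n";
  }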

2
faragon 2 days ago 2 replies      
The problem is forgetting that dynamic memory usage is not "free" (as in "gratis" or "cheap"). In fact, using std::string in long-lived server processes doing intensive string processing (e.g. parsing, text processing, etc.) has been known to be suicidal since forever, because of memory fragmentation.

For high-load backends processing data, you need at least a soft real-time approach: avoid dynamic memory usage at runtime (use dynamic memory just at process start-up or reconfig, and rely on stack allocation for small stuff, when possible).

I wrote a C library with exactly that purpose [1], in order to work with complex data (strings -UTF8, with many string functions for string processing-, vectors, maps, sets, bit sets) on heap or stack memory, with minimum memory fragmentation and suitable for soft/hard real-time requirements.

[1] https://github.com/faragon/libsrt

3
arunc 2 days ago 2 replies      
I encountered exactly the same issue a few years ago at UIDAI in one of our large-scale biometric matchers, and the resolution was exactly the same. After a week of debugging I found that the libstdc++ allocator was the culprit. I found [1], which confirmed the same and helped in fixing this issue.

The thing that was more interesting (or sad) was learning that the GCC developers didn't expect multithreaded applications to be long-running.

"Operating systems will reclaim allocated memory at program termination anyway. "

[1] https://gcc.gnu.org/onlinedocs/libstdc++/manual/mt_allocator...

4
alyandon 3 days ago 1 reply      
I myself ran across this same scenario many years ago with a similar amount of hair pulling and eventually concluding that the GNU libstdc++ allocator wasn't reusing memory properly. Unfortunately, I was never able to pare down the application to the point that I had a reproducible test case to report upstream.

GLIBCPP_FORCE_NEW was the solution for the near term and since I was deploying on Solaris boxes I eventually switched to the Sun Forte C++ compiler.

It really bugs me that this problem still exists. :-/

5
bgd11 2 days ago 1 reply      
All the technicalities aside, the writing style of the author is amazing. I would never have thought that someone could create such an intense narrative with 'malloc' as the main character.
6
firethief 3 days ago 1 reply      
> Nothing made any sense until we noticed the controller microservice's memory consumption. A service that should be using perhaps a few hundred megabytes at most was using gigabytes and growing... and growing... and growing... and growing...

Not identifying this until many hours after symptoms were impacting users sounds like a pretty big monitoring blind spot.

7
cyphar 3 days ago 4 replies      
Did you report the issue upstream with a patch? The solution to "the standard library is broken" is to fix the standard library, no? It's all free software after all.
8
stephen_g 2 days ago 1 reply      
Things like this is why I was happy to see the LLVM project write their own C++ standard library. libstdc++ has always seemed a bit hacky and fragile to me. It's great to have an option which is a more modern, clean codebase.

Have you tested to see if this works better with LLVM libc++?

9
rurban 2 days ago 0 replies      
The "make malloc faster" part was done over a decade ago with the followup from ptmalloc2 (the official glibc malloc) to ptmalloc3. But it added one word overhead per region, so the libc people never updated it to v3. perf. regression.They rather broke the workarounds they added. And now they are breaking emacs with their new malloc.
10
consultSKI 2 days ago 0 replies      
>> Most operators in C++, including its memory allocation and deletion operators, can be overloaded.

Have I mentioned lately how much I hate C++?

Great read.
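For anyone who hasn't seen it done, a minimal sketch of what overloading the global allocation operators looks like - this is the hook a custom or pooling allocator slots into:

  #include <atomic>
  #include <cstdio>
  #include <cstdlib>
  #include <new>

  // Replacement global operator new/delete that tallies live allocations.
  // Every plain `new`/`delete` in the program now flows through these.
  static std::atomic<long> g_live{0};

  void* operator new(std::size_t n) {
      if (void* p = std::malloc(n)) { ++g_live; return p; }
      throw std::bad_alloc();
  }
  void operator delete(void* p) noexcept {
      if (p) { --g_live; std::free(p); }
  }

  int main() {
      int* p = new int(42);
      delete p;
      std::printf("live allocations: %ld\n", g_live.load());
  }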

11
spyder81 2 days ago 0 replies      
"Then I remembered reading something long ago" is when experienced programmers are worth their weight in gold.
12
brynet 2 days ago 0 replies      
Interestingly for recent versions of GCC (>=4.0) GLIBCXX_FORCE_NEW is defined for libstdc++, not GLIBCPP_FORCE_NEW.

https://gcc.gnu.org/onlinedocs/libstdc++/manual/mt_allocator...

13
charles-salvia 3 days ago 2 replies      
I'm a bit confused here.

>> Most operators in C++, including its memory allocation and deletion operators, can be overloaded. Indeed this one was.

Okay, well, firstly - the issue here seems to be a problem with the implementation of std::allocator, rather than anything to do with overloading global operator new or delete. Specifically, it sounds like the blog author is talking about one of the GNU libstdc++ extension allocators, like "mt_allocator", which uses thread-local power-of-2 memory pools.[1] These extension allocators are basically drop-in extension implementations of plain std::allocator, and should only really affect the allocation behavior of the STL containers that take Allocator template parameters.

Essentially, libstdc++ tries to provide some flexibility in terms of setting up an allocation strategy for use with STL containers.[2] Basically, in the actual implementation, std::allocator inherits from allocator_base, (a non-standard GNU base class), which can be configured during compilation of libstdc++ to alias one of the extension allocators (like the "mt_allocator" pool allocator, which does not explicitly release memory to the OS, but rather keeps it in a user-space pool until program exit).

However, according to the GNU docs, the default implementation of std::allocator used by libstdc++ is new_allocator [3] - a simple class that the GNU libstdc++ implementation uses to wrap raw calls to global operator new and delete (presumably with no memory pooling.) This allocator is of course often slower than a memory pool, but obviously more predictable in terms of releasing memory back to the OS.

Note also that "mt_allocator" will check if the environment variable GLIBCXX_FORCE_NEW (not GLIBCPP_FORCE_NEW as the author mentions) is set, and if it is, bypass the memory pool and directly use raw ::operator new.

So, it looks like the blog author somehow was getting mt_allocator (or some other multi-threaded pool allocator) as the implementation used by std::allocator, rather than plain old new_allocator. This could have happened if libstdc++ was compiled with the --enable-libstdcxx-allocator=mt flag.

However, apart from explicitly using the mt_allocator as the Allocator parameter with an STL container, or compiling libstdc++ to use it by default, I'm not sure how the blog author is getting a multi-threaded pool allocator implementation of std::allocator by default.
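For reference, the new_allocator shape described above is small enough to sketch - a stateless allocator that forwards straight to the global operators with no pooling (simplified; the real class lives in <ext/new_allocator.h>):

  #include <cstddef>
  #include <new>
  #include <vector>

  template <typename T>
  struct plain_allocator {
      using value_type = T;
      plain_allocator() = default;
      template <typename U> plain_allocator(const plain_allocator<U>&) {}
      T* allocate(std::size_t n) {
          return static_cast<T*>(::operator new(n * sizeof(T)));
      }
      void deallocate(T* p, std::size_t) noexcept { ::operator delete(p); }
  };
  template <typename T, typename U>
  bool operator==(const plain_allocator<T>&, const plain_allocator<U>&) { return true; }
  template <typename T, typename U>
  bool operator!=(const plain_allocator<T>&, const plain_allocator<U>&) { return false; }

  int main() {
      // No user-space pooling: every growth of v hits ::operator new directly,
      // which is roughly the behavior GLIBCXX_FORCE_NEW buys you.
      std::vector<int, plain_allocator<int>> v = {1, 2, 3};
  }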

[1] https://gcc.gnu.org/onlinedocs/gcc-4.9.4/libstdc++/manual/ma...

[2] https://gcc.gnu.org/onlinedocs/gcc-4.9.4/libstdc++/manual/ma...

[3] https://gcc.gnu.org/onlinedocs/gcc-4.9.4/libstdc++/manual/ma...

14
bboreham 2 days ago 1 reply      
The post doesn't actually say what was broken, or indeed prove the location of the brokenness - just that it went away with a different compile option.

Exciting writing, but lacking a point.

15
halayli 3 days ago 1 reply      
This conclusion might be wrong. While the code in question might not be allocating/freeing memory itself, it might be stomping on memory blocks and corrupting memory-management structures. Turning the flag on might fix the issue by mere luck, because the memory allocations, locations, and structures would then be different.
16
jonnycomputer 6 hours ago 0 replies      
The idea that we should always blame ourselves first has merit. But frankly, some bugs just p e r s i s ttt.

like this one, with fputcsv in PHP. https://bugs.php.net/bug.php?id=43225

17
squidlogic 3 days ago 2 replies      
Amazing write-up. Informative and gripping in its prose.
18
TimJYoung 2 days ago 1 reply      
I'm not sure if the other debug tools mentioned offer this, but AQTime Pro:

https://smartbear.com/product/aqtime-pro/overview/

has an allocation profiler that can be used to track down this sort of problem. You can take allocation snapshots while the application is running to see where the allocations are coming from (provided that you can run AQTime Pro against a binary with debug symbols/info).

I'm not affiliated with the company - just a happy customer that has used them for years with Delphi development.

19
Pica_soO 1 day ago 0 replies      
One thing not even mentioned in the article is the component-scouting & profiling phase, which completely failed. You do not go all in with a project on a crucial library that you did not profile with the real workload.

One small prototype, never run under full load, with mock-up classes (not even the size of the future classes), mock-up operations (not even close to the real workload), and sometimes not even the final DB type attached. Yeah, it's hard to see the future, but why not drive the test setup to its limits and go from there?

Instead the whole frankenservice is run once for ten minutes and declared by all involved worthy to bear the load of the project.

Here's to lousy component scouting and then blaming it on the architect.

20
tripzilch 2 days ago 0 replies      
Upvoted for the Lovecraft and pulp horror lit references, and starting with "It was a dark and stormy night ..." :-)

Great writing, great read.

21
SFJulie 2 days ago 2 replies      
Memory fragmentation due to dynamically sized data structures plus multithreading is an old foe. It may not be fixable in C/C++.

Worker A allocates dynamic stuff; the allocator takes a segment [0, ofA]. Worker B allocates to create the same kind of data structure (a fragment of a JSON): [ofA, ofB]. Worker A resumes allocating; the boundary of [0, ofA] is exceeded, and with no free contiguous space up or down, [ofB, ofC] is allocated. Worker C enters and wants to alloc, but sizeof(string) makes it bigger than [0, ofA], so [ofD, ofE] is asked for... and the more concurrent workers, the more interleaving of memory allocations goes on, leaving fragmented memory.

Since mallocs are costly and the problem is known, a complex allocator was created, with pools of slabs and more, probably having one edge case - very hard to trigger - in its PhD-proven, really complex heuristics.

CPU power increases, more load brings more workers, interleaving comes in, the edge case gets triggered.
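A toy version of that interleaving (assuming a glibc-style heap; exact behaviour is allocator-dependent):

  #include <cstdlib>
  #include <vector>

  // Workers A and B take turns allocating, then A frees everything it owns.
  // Half the heap is now free, but only as 64-byte holes between B's live
  // blocks, so a modest contiguous request still can't be satisfied from
  // them and the arena has to grow anyway.
  int main() {
      std::vector<void*> a, b;
      for (int i = 0; i < 100000; ++i) {
          a.push_back(std::malloc(64));  // worker A
          b.push_back(std::malloc(64));  // worker B lands right after A's block
      }
      for (void* p : a) std::free(p);    // A done: ~6 MB free, all in small holes
      void* big = std::malloc(100000);   // too big for any hole
      std::free(big);
      for (void* p : b) std::free(p);
  }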

And C/C++ makes fun of Fortran with its fixed-size data structures, embracing any new arbitrary-size, arbitrary-depth data structure for the convenience of skipping a costly waterfall model before delivering a feature or a change in the data structure, and of avoiding bikeshedding in committees.

Humans want to work in a way that is more agile than what computers are under the hood.

Alternative:

Always allocate fixed-size memory ranges for data handling, and make sure they will be enough. When doing REST, make sure you have an upper bound and use paging/cursors - which require FSMs; then have all the FP programmers say mutables are bad, the sysadmins say FSMs are a pain to handle when HA is required, the CFO say the SLA will not be reached and the business model is trashed, and the REST fans say REST is dead once it's stateful.

Well REST is a bad idea.

22
bogomipz 2 days ago 1 reply      
This was a nice write-up; however, I didn't follow how memory fragmentation was related to a memory leak. Can someone explain? I understand that alternate memory allocators would help with the fragmentation issue, but how does the choice of allocator affect memory leakage?
23
jcalvinowens 2 days ago 0 replies      
I don't understand the point of this article... if you think there's a bug in the library, fix it. Don't write a melodramatic blog post lamenting how horrible it is in the hope somebody else will do it for you.

This isn't particle physics, it's code: we don't have to guess, we can look at it and see how it works.

24
grandinj 2 days ago 1 reply      
I'm guessing the cpp thing is a holdover from the days when the glibc maintainer was less than entirely helpful. There have been actual improvements in glibc in this area lately, so hopefully these kinds of hacks will slowly go away.
25
selimthegrim 2 days ago 0 replies      
James Mickens, move over. There's a new sheriff in town.
26
Safety1stClyde 2 days ago 1 reply      
It was only yesterday that I was reading another discussion on Hacker News about problems with the GNU C library.

https://news.ycombinator.com/item?id=14271305

27
logicallee 3 days ago 2 replies      
People forget that C++ is just a tool, like a screwdriver or a hammer. A good carpenter knows when it's time to take a metallurgy class and resmelt his hammer, because its composition is not correct for being a hammer.
28
Ono-Sendai 2 days ago 0 replies      
So where's the bug report with repro test code?
29
odbol_ 1 day ago 0 replies      
Is C++ now going the way of PHP, where to have an actual working program you have to disable all the defaults in some mysterious but crucial ritual?
30
tbodt 3 days ago 1 reply      
Maybe it'll get fixed now that a post saying "libc++ is broken" got hackernewsed
31
ris 2 days ago 0 replies      
The correct response is to file a bug report, not write a clickbait-y article.
32
mtanski 3 days ago 2 replies      
Yeah, malloc() is pretty terrible in glibc by modern standards. For some workloads it just can't keep up, and it ends up fragmenting space in such a way that memory can't be returned to the OS (and thus be used for the page cache), and you end up in this performance spiral.

I always deploy C++ servers on jemalloc. I've been doing it for years, and while there have been occasional hiccups when updating, it has provided much more predictable performance.

33
nly 3 days ago 5 replies      
Actually it is C's malloc and free that are "broken": malloc() takes a size parameter, but free() doesn't. This imbalance means it can never be maximally efficient. Whatever GNU libstdc++ is doing is probably, on balance, a net win for most programs.
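C++14's sized deallocation is a direct patch for exactly this imbalance; a sketch of the replaceable sized form (the compiler supplies the size back, so an allocator can skip the metadata lookup free() must do; note that clang needs -fsized-deallocation to call it):

  #include <cstdio>
  #include <cstdlib>
  #include <new>

  void* operator new(std::size_t n) {
      if (void* p = std::malloc(n)) return p;
      throw std::bad_alloc();
  }
  void operator delete(void* p, std::size_t n) noexcept {
      std::printf("freeing %zu bytes (size supplied by the compiler)\n", n);
      std::free(p);
  }
  void operator delete(void* p) noexcept { std::free(p); }  // fallback, size unknown

  int main() {
      long* p = new long(7);
      delete p;  // static type is known, so the sized overload can be used
  }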

It's not exactly roses in C++ either of course. You can do better than the standard library facilities. Andrei Alexandrescu gave a great, entertaining, and technically elegant talk on memory allocation in C and C++ at Cppcon 2015 that is well worth watching

https://www.youtube.com/watch?v=LIb3L4vKZ7U

7
Something is wrong when the telephone app on your phone becomes 3rd party martinruenz.de
591 points by guy-brush  4 days ago   335 comments top 54
1
Voloskaya 4 days ago 10 replies      
Or maybe it just means that we still call the device in our pocket a "phone" for legacy reasons. If we are okay with having a third party handle our messages, VoIP, etc., why not the phone app?
2
mcherm 4 days ago 8 replies      
I completely disagree with the title. The fact that "telephony" can be an app on the phone is a WONDERFUL thing. It means that the author of this article has a choice, as opposed to NOT having a choice.
3
NeutronBoy 4 days ago 3 replies      
> This leaves me in an unpleasant spot as I, where I can, avoid using google services and now need to find an alternative dialling application. Isn't this sweet? I am searching for a dialling application for my smartphone. A DIALLING application

You bought a Google phone and don't want to use Google services, but complain that the dialler can be provided by a third party? Isn't that a good thing?

4
tomkarlo 4 days ago 0 replies      
It's always been a "telephone app" on smartphones. And in many cases, it's already been somewhat "3rd party", because in reality it's from the ODM who made your phone or the SoC vendor, not the company that branded your phone. The author is only just becoming aware that this is "3P" because he happens to have a branded dialer added to his phone, versus a "white label" dialer that was already there.

I'm fairly certain Wileyfox didn't make the dialer that previously came with his phone, either. (They're a smaller OEM.) They probably just used the one provided by the ODM assembling the device for them.

5
Animats 4 days ago 2 replies      
It's a real problem. Are there any voice dialing programs for Android phones which do voice recognition locally and don't require Google services? That's the way it used to work until Google broke it so they could monitor all your dialing.
6
gruez 4 days ago 3 replies      
https://www.wileyfox.com/the-brand

>UNRIVALLED PRIVACY AND SECURITY

>Choose precisely the data you wish to share; protect apps with additional PINs; prevent spam with Truecaller Integrated Dialler.

So their idea of privacy is privacy from everyone except the manufacturer (and "trusted" third parties)

>Of course, you can always root the phone and install custom roms. But this process takes some time and the development and compatibility with these roms is less than satisfactory.

Maybe he shouldn't have bought a device with such a small userbase?

7
gumby 4 days ago 2 replies      
I don't agree. It's simply that some vendors are untrustworthy.

Frankly if the telephone app on my "phone" stopped working I wonder how long it would take me to notice.

8
walrus01 4 days ago 2 replies      
I have never heard of a "Wileyfox Swift". If you need an awesome and capable Android 7.0 based dual SIM phone with zero carrier crapware/bloatware and a close to stock Android experience, the OnePlus 3T (64 or 128GB version) is a good choice.

Since Oneplus' falling out with Cyanogen Inc, and the financial failure of Cyanogen, Oneplus' own OxygenOS is essentially a re-implemented CyanogenMod that has all of the same features.

9
leeoniya 4 days ago 0 replies      
i just bought a Volkswagen, which comes with Car-Net, which apparently relays telemetry to Verizon Telematics via a 3G connection. oh and there's a gps reciever and an in-car microphone.

their privacy policy is scary.

https://carnet.vw.com/web/vwcwp/privacy-policy

dealer refuses to disable or remove this carnet module. their "solution" is to tell me just not to sign up for the services. ummm..lol no. needless to say i'm taking matters into my own hands via vw message boards and:

http://www.ross-tech.com/vag-com/

may have to physically remove the module and/or neuter the antenna.

10
bostand 4 days ago 1 reply      
This is why we need [edit] the GDPR.

Next time someone tries to harvest my personal data using an "all-inclusive" EULA, I'm going to sue his ass in EU-land.

11
ranveeraggarwal 3 days ago 0 replies      
Which is why I switched to Google-made phones (Nexus). No bloat, frequent security updates, and default apps. Sure, Google knows what I'm having for dinner, but it's a single entity I trust. At least I don't have to go through apps one by one, reading their TnCs.
12
thomseddon 4 days ago 0 replies      
Found myself in the exact same situation (have a WileyFox, updated and ended up with the awful true caller app as the default dialler + found I couldn't use the default google dialler), I've since been using: https://play.google.com/store/apps/details?id=com.contapps.a... but it does have adverts, I'd welcome any other suggestions (aside from re-flash).

Only today I was joking about how absurd it is that I was struggling with such a fundamental feature.

13
timsayshey 4 days ago 0 replies      
Too much traffic -- couldn't access the page.

Here's the cached version: http://archive.is/vs4a7

14
tdicola 4 days ago 2 replies      
Looking at my cell phone bill I use far more data than I do actual voice a month. It's kind of an anachronism to call it a phone anymore. It's a pocket computer.
15
nemoniac 4 days ago 0 replies      
Can someone recommend a good, safe, privacy-respecting 3rd party phone app for Android, preferably for Cyanogenmod or LineageOS?
16
bighi 4 days ago 0 replies      
You're not only using Android, but you're using an Android phone from a smaller company.

While this is bad, it's not really unexpected.

17
xenithorb 1 day ago 0 replies      
You don't need to root a phone to install a new operating system. What you do likely need to do is unlock the bootloader, if at all possible.

If it's supported by LineageOS, use that instead. If not - perhaps consider buying a phone that is. If you're really privacy conscious, perhaps look at the CopperheadOS options - they're specifically known for privacy.

Other than that, this is the sad state of big data and advertising. Unless you fight back against it, everything about you is going to be poked, prodded, and collected for further processing and aggregation. Hell, a lot of you probably work in that industry here.

You should not assume that any device manufacturer has your interests in mind, as the trends have overwhelmingly shown that anything proprietary always leads to them finding a way to further line their pockets as interest for their product wanes, or their investors get greedier.

These are facts of life; take action, but don't be surprised when you're being misled and taken advantage of after putting your trust in a proprietary party for something so integral.

Godspeed.

18
on_and_off 4 days ago 1 reply      
What exactly is wrong with the dialler being an app?

It allows exactly what the author wants: a dialler that does not rely on Play Services or Truecaller.

20
notsohuman 3 days ago 0 replies      
Something's wrong with the idea that there can be apps installed on your phone that do things you don't know about.
21
accountyaccount 4 days ago 1 reply      
Disagree entirely. The stock phone app is likely the least used app I keep on my phone.
22
bennyp101 4 days ago 0 replies      
When I updated it at the weekend, I too noticed that the dialer had gone - which is annoying, as I find the interface of Truecaller a bit of a pain - but as I rarely make calls it's something I can live with. The incoming-call filtering and the showing of names not in my phonebook are actually pretty useful.

When I bought my Wileyfox Swift 18 months ago it came with Truecaller as an app already installed, and it integrated with the standard dialer well - i.e. I could use the dialer app to make calls, and incoming calls would show info from Truecaller. Not sure why they couldn't keep it like that.

23
mayneack 4 days ago 0 replies      
Many have already posted about how the phone is less important than email/browser, which we're OK with being 3rd party.

I didn't really think there could be a difference between phone apps until I got project fi from Google. Their phone app comes with voicemail transcription and spam detection. Some of that is obviously from the service itself, but these features seem like things that I'd want to be able to acquire even if I had an AT&T Samsung (which is what I moved from and didn't have by default). Third party seems fine with me.

24
bbulkow 3 days ago 0 replies      
One thought. A company like WileyFox must spend significant bucks on after-the-fact engineering to provide updates. It would be a classic error to underestimate the cost of ongoing engineering, offer a lower up-front price, and then be strapped for cash later. At that point, WileyFox management needs a revenue stream to fund the engineering to push 7.1.1. It would - again, I am guessing - typically turn to outside companies who pay for placement, like the dialer. Those companies make money by selling data.

Thus I am not surprised that this is happening, although I had not heard of WileyFox until today. It's also possible WileyFox is greedy, but I'm going with incompetence; I will believe first that someone made a mistake in pricing and planning, and they're looking for the best outcome for customers, company, and brand.

I am, perhaps uncomfortably, saying that "you got an unexpectedly low price" going with a company like WileyFox. They gave you a cheaper phone and frequent updates, and now they need to pay for it. They could either start charging subscriptions for the 7.1.1 update (which would be fair, no?), or they could do what they are doing by making a deal with what sounds like a shady dialer company, or they could bail out and not provide the updates they promised (like most handset providers do).

I could get all high-horse about software consumers being trained to expect free software. That expectation being set by the billions of dollars in VC money that has been spent to capture markets - markets that later require "exploitation". But I won't go there - it's a complicated argument, there are other market forces, there are models that work, companies can be "up front" about data-driven business models.

My way of saying: you do have to pay for software. You took the 7.1.1 update; you should expect to pay... right?

25
mmmBacon 3 days ago 2 replies      
I was out riding motorcycles with a friend. He went down and was pretty broken up. As the ambulance loaded him up, I tried to use his Android phone to call his wife to let her know he had been hurt and what hospital he was being taken to. It was a stressful situation, and at first I could not easily find the phone function (no idea which distribution). The phone function is still really important!
26
10165 3 days ago 1 reply      
Is it true that carriers are increasingly switching to calling over WiFi? I understand the new iPhones have this feature, with switchover happening automatically.

Did the popularity of Facetime, WhatsApp, etc. have anything to do with this? I have seen claims from carriers that WiFi calling "improves coverage".

At some stage, will anyone question what is the point of the cellular network? Especially in urban areas.

What is the difference between

a. a "smartphone" and

b. a portable, pocket-sized computer with a rechargeable battery, where the user chooses what to install and can remove any pre-installed software, where the user gets a choice of 1. using pre-installed software and default settings or 2. using her own bootloader, kernel and userland, and where the user can easily open the case and tinker.

Does a computer need to have any association with the company selling internet access? Today's "phones" are manufactured for carriers (who are often ISPs, too), not for users. The carriers in turn sell these customized "phones" to users.

27
skewart 3 days ago 1 reply      
One of the central complaints in the blog post is that it's too hard to get your hands on an Android phone that doesn't come with impossible-to-remove bloatware. I've been amazed and annoyed by this too.

I understand all the reasons why manufacturers add these apps to the phone. But still, I would gladly buy a phone that came with a stripped down version of Android, no bloatware, and received regular updates and security patches.

Sure, you can root your phone, and there are various ROMs out there that you can install. But as someone who's never done this it seems kind of complicated and annoying.

I want a simple experience where it just works right out of the box. Heck, I'd be happy to buy a phone from someone who just resells devices after flashing a decent ROM onto them and somehow makes updates easy.

28
ris 4 days ago 1 reply      
So what makes you think Apple or Google aren't using your call information for advertising purposes in the default diallers?
29
AngeloAnolin 4 days ago 1 reply      
Did I read that correctly?

Isn't this sweet? I am searching for a dialling application for my smartphone. A DIALLING application.

Does this mean he can't practically call someone outside his contacts list as there's no way to key in phone numbers? Or would he still be able to make regular voice calls?

30
shipintbrief 3 days ago 0 replies      
Something is wrong when the OP writes about a dialer app that sells your private info being installed on the device without any warning, and everyone tells the OP that they don't really use the phone app. Makes me question humanity.
31
Fej 4 days ago 1 reply      
> This leaves me in an unpleasant spot as I, where I can, avoid using google services and now need to find an alternative dialling application.

AOSP dialer. Done. Don't want the truedialer app? Run CM, like it says in the post.

32
dotBen 4 days ago 1 reply      
No, this is simply proof that consumers must carefully pick the hardware vendor they buy from, as the software it comes with will vary greatly between devices. This is simply not an issue with Google Pixel/Nexus devices, Samsung, or any of the other major vendors.

Running Cyanogen (which is now no longer developed) without 'GAPPS' (Google's standard package of apps that gives you access to the Play Store and the stock dialer) is a pretty fringe use of Android.

33
TheStrangBird 4 days ago 0 replies      
Well, in the end modern smartphones are just general-purpose computers which have some artificial restrictions on what you can do with them and happen to have a touchscreen/modem/speaker/microphone etc.

So the dialer just being an app is a direct consequence of smartphones not being any kind of "special/magical" embedded device.

Still, silently overriding the dialer with a program I would normally suspect to be malware that sneaked onto my phone is a horrible thing to do...

34
loueed 3 days ago 0 replies      
Many companies are betting that phones will be phased out for augmented reality glasses. Magic Leap, Microsoft, Facebook, Google, and Apple all have teams working on it.

People expect these devices to accept SIM cards - will that clear up the confusion? My smartglasses are a personal computer that also has phone functionality.

35
meroje 4 days ago 0 replies      
The Swift originally shipped with CyanogenOS, which is not the same thing as CyanogenMod. CyanogenOS already featured Truecaller on the dialler to show names on incoming calls. I was confused too when the dialler disappeared from my launcher, but I use it very rarely and essentially to receive calls so not a big deal for me. I understand OP's concerns though.
36
steveharman 3 days ago 0 replies      
Aren't there loads of free "phone" (dialer) apps in the Play store? Just use one of those?
37
jumpkickhit 4 days ago 0 replies      
Funny, I'm reminded of my HP iPAQ back in 2005 or so. An 800MHz smartphone with a stylus, wifi, running Windows CE.

It struck me as a portable computer, with "cellphone" functionality added as an almost afterthought.

Guess that's still the case if you think about it.

38
smnplk 4 days ago 0 replies      
I have an idea for a VOIP app called The Stallminator. The app would feature a nice backdrop of Richard Stallman's face http://tinyurl.com/msbkwb3
39
InclinedPlane 3 days ago 0 replies      
It's fine. If you want a phone just for voice go buy a flip phone, they are basically free and plans are cheap. If you want a smartphone, then deal with the implications.
40
alinspired 4 days ago 0 replies      
This issue echoes many privacy-related discussions, and the same approach for Android applies here:

  - unlock
  - install a minimal 3rd-party ROM (i.e. LineageOS)
  - choose what you want from Google via Open GApps, or use F-Droid

41
thuruv 3 days ago 0 replies      
Whether that's a Google service or not, the OP is clearly making the point that the problem he's facing now will evolve and affect even us in the future.
42
johnhenry 4 days ago 0 replies      
Unfortunately, in this day and age, I don't think it's reasonable to expect privacy even from a first party application.
43
guy-brush 3 days ago 0 replies      
I just updated the article to include further emails from Truecaller and WileyFox.
44
mdekkers 4 days ago 0 replies      
Truecaller is something altogether evil.
45
codewiz 4 days ago 0 replies      
Double-check which software you get with which device. Not being an Apple fanboy, I have to admit: at least you know what you get when buying an iPhone.

I don't understand why some people compare a $650 iPhone 7 with cheap phones loaded with some half-assed vendor fork of Android.

If you wanted stock Android, you should have bought a $650 Pixel phone.

46
ianseyler 4 days ago 1 reply      
My iPhone is my pocket computer. It's on a tablet plan (data only - up to 1GB for $20 CAD) because voice and text are something I would hardly use. TextNow worked fine for me in the past for phone calls and texting non-Apple devices, but I've since switched to Hushed as my go-to. I've never used the built-in "Phone" app.
47
jaimex2 3 days ago 0 replies      
And this is yet another reason why Cyanogenmod no longer exists.
48
znpy 4 days ago 0 replies      
I am still wondering how come nobody has come up with a phone application for Linux computers.

I have a 3G modem in my ThinkPad X220 and I am fairly sure it is technically capable of making and receiving phone calls.

49
dorianm 4 days ago 0 replies      
I thought it was about LinkedIn selling people's profiles :)
50
fulafel 4 days ago 1 reply      
How do I find a dialer app that's cloud-free?
51
agumonkey 4 days ago 0 replies      
I miss my Motorola v3650.
52
lujingfengjeff 3 days ago 0 replies      
paixun
53
pilot72 4 days ago 0 replies      
You purchased a Chinese mobile phone and it installed spyware in an update.

Anybody who's ever purchased a mobile from eBay or AliExpress has already seen that. They need to get their revenue from somewhere. Next time stick to a known, trustworthy brand.

54
mtkd 4 days ago 4 replies      
I fought having a phone through to 2008 - now I'm reconsidering wanting one again

if there was a pure messaging device right now with no voice - I'd bite their hand off

I don't need a mobile browser or even apps - all they do is stop me from disconnecting from work for a few minutes or hours - I don't think always-on is good for anyone over time

8
Uncensorable Wikipedia on IPFS ipfs.io
543 points by bpierre  12 hours ago   190 comments top 24
1
cjbprime 11 hours ago 10 replies      
Strategically, this (advertising IPFS as an anti-censorship tool and publishing censored documents on it and blogging about them) doesn't seem like a great idea right now.

Most people aren't running IPFS nodes, and IPFS isn't seen yet as a valuable resource by censors. So they'll probably just block the whole domain, and now people won't know about or download IPFS.

We saw this progression with GitHub in China. They were blocked regularly, perhaps in part for allowing GreatFire to host there, but eventually GitHub's existence became more valuable to China than blocking it was. That was the point at which I think that, if you're GitHub, you can start advertising openly about your role in evading censorship, if you want to.

But doing it here at this time in IPFS's growth just seems like risking that growth in censored countries for no good reason.

2
smsm42 3 hours ago 1 reply      
Ironically, I've just discovered that https://ipfs.io/ has a certificate signed by StartCom, known for being a source of fake certificates for prominent domains [1]. So in order to work around censorship, I have to go to a site that, to establish trust, relies on a provider known for issuing fake certificates. D'oh.

[1] https://en.wikipedia.org/wiki/StartCom#Criticism

3
badsectoracula 7 hours ago 3 replies      
Correct me if I'm wrong, but if accessing some content through IPFS makes you a provider for that content, doesn't that mean that you are essentially announcing to the world that you accessed the content, which in turn can be used for targeting you by those who do not want you to access it?

In other words, if someone from Turkey (or China or wherever) uses IPFS to bypass censored content, wouldn't it be trivial for the Turkish/Chinese/etc government to make a list with every single person (well, IP) that accessed that content?

4
k26dr 8 hours ago 1 reply      
The following command will allow you to pin (i.e. seed/mirror) the site on your local IPFS node if you'd like to contribute to keeping the site up:

ipfs pin add QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX
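
Once it's pinned, you should (assuming a default daemon config) also be able to browse your local copy through the gateway your node runs on port 8080:

  http://localhost:8080/ipfs/QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX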

5
TekMol 10 hours ago 3 replies      
How is Wikipedia censored in Turkey? Are providers threatened with punishment if they resolve DNS queries for wikipedia.org? Or are they threatened with punishment if they transport TCP/IP packets to IPs that belong to Wikipedia?

Wouldn't both be trivial to get around? For DNS, one could simply use a DNS server outside Turkey. For TCP/IP packets, one could set up a $5 proxy with any provider around the world.
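
For the DNS case you could verify from inside Turkey by comparing the local resolver against an outside one, e.g.:

  dig wikipedia.org @8.8.8.8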

6
mirimir 2 hours ago 0 replies      
Some additional information may help in the duty vs prudence debate. It's true that IPFS gateways can be blocked. But as noted, anyone can create gateways, IPFS works in partitioned networks, and content can be shared via sneakernet. Content can also be shared among otherwise partitioned networks by any node that bridges them.

For example, it's easy to create nodes on both the open Internet and the private Tor OnionCat IPv6 /48. That should work for any overlay network. And once nodes on such partitioned networks pin content, outside connections are irrelevant. Burst sharing is also possible. Using MPTCP with OnionCat, one can reach 50 Mbps via Tor.[0,1]

0) https://ipfs.io/ipfs/QmUDV2KHrAgs84oUc7z9zQmZ3whx1NB6YDPv8ZR...

1) https://ipfs.io/ipfs/QmSp8p6d3Gxxq1mCVG85jFHMax8pSBzdAyBL2jZ...

7
eberkund 10 hours ago 2 replies      
These distributed file systems are really interesting. I'm curious to know if there is anything in the works to also distribute the compute and database engines required to host dynamic content. Something like combining IPFS with Golem (GNT).
8
awqrre 16 minutes ago 0 replies      
until they create laws...
9
kibwen 11 hours ago 7 replies      
But Wikipedia allows user edits, and so is inherently censorable. You don't need to block the site, you can just sneak in propaganda a little at a time.
10
Spooky23 10 hours ago 0 replies      
Why bother with a technological anti-censorship solution for Wikipedia when the obvious move is to just attack the content directly?

If a censoring body wants some information gone, just devote some attention to lobbying the various gatekeepers in Wikipedia.

11
y7 9 hours ago 1 reply      
Does IPFS work properly with Tor these days? Last I checked support was experimental at best.

Without proper support for an anonymity overlay, using IPFS to get around your government's censorship doesn't sound like a very wise idea.

12
slitaz 11 hours ago 1 reply      
Didn't you mean "unblockable" instead?
13
maaaats 10 hours ago 1 reply      
When browsing the content, how does linking work? I mean, don't they kinda have to link to a hash? But how can they know the hash of a page when the links on that page depend on the other pages, and this may be circular?
14
treytrey 11 hours ago 4 replies      
I'm not sure this thought makes sense, but just putting it out there for rebuttals and to understand what is really possible:

I assume IPFS networks can be disrupted by a state actor, and the only thing that a state actor like the US may have some trouble with is strong encryption. I assume it's also possible that quantum computers, if and when they materialize at scale, would defeat classical encryption.

So my point in putting forward these unverified assumptions is to question whether ANY technology can stand in the way of a determined, major-world-power-type state actor. Personally, I have no reason to believe that's realistic, and all these technologies are toys relative to the billions of dollars in funding that the spy agencies receive.

15
LoSboccacc 8 hours ago 1 reply      
> In short, content on IPFS is harder to attack and easier to distribute because its peer-to-peer and decentralized.

> port 4001 is what swarm port IPFS uses to communicate with other nodes

uhm.

16
pavement 10 hours ago 3 replies      
Listen, I get that there are other parts of the world experiencing serious "technical difficulties" lately...

But I can only read English! Where's the English version?

https://ipfs.io/ipfs/QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34is...

This hash doesn't do much for me:

 QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX
How do I find the version I want?

If I can't read it in my language, it's still censored for me.
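
(Presumably one could poke at the directory listing with something like "ipfs ls QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX" to see what's actually in there, but a language picker would be a lot friendlier.)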

17
hd4 11 hours ago 4 replies      
Maybe a very dumb question, but why didn't they build anonymity into it rather than advise users to route it over Tor? My guess is it may have something to do with the Unix philosophy. It's still a great tool regardless.
18
devsigner 5 hours ago 0 replies      
Here it is on Archive.is just for good measure and posterity purposes: https://archive.is/GnjGT
19
captn3m0 12 hours ago 4 replies      
The SSL cert chain is broken for me.
20
DonbunEf7 11 hours ago 2 replies      
Isn't IPFS censorable? That's the impression I got from this FAQ entry: https://github.com/ipfs/faq/issues/47
21
davidcollantes 5 hours ago 1 reply      
Will it be available if the domain (ipfs.io) stops resolving, gets seized or is blocked?
22
amelius 11 hours ago 1 reply      
Sounds good, but isn't this a fork of Wikipedia?
23
forvelin 9 hours ago 2 replies      
At this moment, it is enough to use Google DNS or some VPN to reach Wikipedia in Turkey. This is a good cause, but IPFS is just overkill.
24
nathcd 11 hours ago 2 replies      
I'd be really curious to hear more about how Goal 2 (a full read/write wikipedia) could work.

IIRC, writing to the same IPNS address is (or will be?) possible with a private key, so allowing multiple computers to write to files under an IPNS address would require distributing the private key for that address?

Also, I wonder how abuse could be dealt with. I've got to imagine that graffiti and malicious edits would be much more rampant without a central server to ban IPs. It seems like a much easier (near-term) solution would be a central server handling writes that publishes to a (read-only) IPNS address, where the load could be distributed over IPFS users.
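
For reference, publishing to your node's IPNS name is just

  ipfs name publish /ipfs/<hash>

(hash placeholder, obviously) - the hard part is the bit above: safely sharing the signing key across multiple writers.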

9
Why does Google prepend while(1); to their JSON responses? stackoverflow.com
584 points by vikas0380  2 days ago   107 comments top 13
1
winteriscoming 2 days ago 5 replies      
Every time I read about such constructs, it makes me realize, as a regular developer, how complex web application security is and how difficult it is to think about and protect your application against each and every such potential problem.
2
c0achmcguirk 2 days ago 0 replies      
I believe this hack (JSON Hijacking) was discovered by Jeremiah Grossman in 2005[1].

It's fascinating to read how he discovered it and how quickly Google responded.

[1] - http://blog.jeremiahgrossman.com/2006/01/advanced-web-attack...

3
samfisher83 2 days ago 9 replies      
Why don't browsers strip cookies when they are doing cross-domain JavaScript fetches?
4
westoque 2 days ago 4 replies      
I wondered the same thing years ago. I always thought that browsers would have implemented other security measures so that websites could avoid doing this.

Around 90-something percent of websites I visit don't implement that `for(;;)` or `while(1)` solution.

So are we saying that they're vulnerable sites?

5
frik 2 days ago 2 replies      
FB prepends a "for(;;);" which is 1 char shorter than "while(1);", has been the case since 2012/13.

Firebug v2 and ChromeTools know how to parse such JSON and ignore that first part. (IE11 and Firefox newer DevTools can't "handle" it aka show just a plain text string)

6
xg15 2 days ago 3 replies      
I had a hunch that this is to prevent people from including the resource in a script tag - but I always wondered how they'd access the data, as a JSON expression on its own should technically be a no-op when interpreted as JS (or so I thought).

The overridden array constructor was the missing link.

Though couldn't you have it easier by making sure your top-level JSON structure is always an object?

As far as I know, while a standalone array expression []; is a valid JS statement, a standalone object expression {}; is not and would produce a syntax error.
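
To make the missing link concrete, here's my understanding of the (long since patched) attack shape, with made-up URLs:

  <!-- attacker's page; the victim is logged in to mail.victim.example -->
  <script>
    var stolen;
    // In old engines, array literals invoked the overridable Array
    // constructor, so evaluating the victim's JSON ran attacker code:
    Array = function () { stolen = this; };
  </script>
  <!-- The browser attaches the victim's cookies to this request. Without
       the while(1); prefix, the response (a bare JS array) executes and
       "stolen" ends up holding the data: -->
  <script src="https://mail.victim.example/contacts.json"></script>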

7
zoren 2 days ago 0 replies      
That is one weird array in Google's reply. Looks like it could have been an object instead, whereby JSON hijacking wouldn't be a problem.
8
maambmb 1 day ago 0 replies      
I feel like the browser could use the Content-Type header to check whether the response is JSON or actual executable JavaScript, throwing an error if it's the former.
9
CaliforniaKarl 2 days ago 1 reply      
I haven't worked with JSON like that before. Do JSON parsers properly ignore the stuff Google puts in, or do you have to strip it out before parsing?
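
Edit: from what I can tell, you have to strip it out yourself before parsing - JSON.parse() will throw on the prefix. Something like this (prefix assumed):

  function parseGuardedJson(body) {
    var prefix = "while(1);";
    if (body.indexOf(prefix) === 0) body = body.slice(prefix.length);
    return JSON.parse(body);
  }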
10
NewEntryHN 2 days ago 1 reply      
Google use cookies to authenticate API requests?
11
the_mitsuhiko 2 days ago 1 reply      
Pretty sure browsers no longer permit overriding ctors for literals.
12
Animats 2 days ago 1 reply      
Why not "while(0)"? Then an eval wouldn't do anything.
13
tossaway322 2 days ago 1 reply      
Jeez, why not live w/o JavaScript?

We keep trying to accommodate a defunct language with insoluble problems. Isn't that an error in our thinking processes?

https://www.wired.com/2015/11/i-turned-off-javascript-for-a-...

10
SCOTUS Rejects 'Guilty Until Proven Innocent', Can't Keep Money from the Innocent forbes.com
434 points by rrauenza  5 days ago   186 comments top 12
1
jdc0589 5 days ago 5 replies      
Does this have any implications for the ridiculous seizures of property police carry out every now and then when they pull someone over who has a large amount of cash on them?

The whole idea of a court case titled, e.g., "State of Texas vs $45,000" is asinine. I'd love to see the SCOTUS lay down the law on that bullshit.

2
pc86 5 days ago 5 replies      
Who dissented? I would be interested in reading the dissent and seeing if it was based off of some technicality of this case, or a belief that requiring someone to prove their innocence wasn't a violation of their Constitutional rights.
3
rayiner 5 days ago 4 replies      
The article does a piss-poor job of actually framing the issue that was decided: http://www.scotusblog.com/case-files/cases/nelson-v-colorado. In particular, the article's characterization of the question presented is so hand-wavy that it makes the dissent seem incomprehensible.

To understand Thomas's logic, start with Section I of the dissent: https://www.supremecourt.gov/opinions/16pdf/15-1256_5i36.pdf. Thomas is not saying that the state should be able to "keep money from the innocent." Thomas's reasoning is roughly the following:

1) The 14th amendment requires the government to give you adequate process before depriving you of a property right.

2) At the time petitioner was convicted, he paid over a sum of money to the state. He lost the property right in that money, and the state gained the property right in that money.

3) After the conviction was overturned on appeal, the state required petitioner to go through a process to get his money back. Petitioner alleges that the process was inadequate under the 14th amendment.

4) To make a 14th amendment deprivation claim, you have to show deprivation of a property right. At the time petitioner sought his money back, he had no property right. That transferred to the state under step (2).

5) Thomas asks: where is the property right that is the basis for the 14th amendment claim? It can't come from the 14th amendment itself, because that only comes into play once petitioner has a property right.

To get around (5), you can theorize that the initial payment automagically became null and void when the conviction was vacated. But that would be an unusual result, because legal judgments don't ordinarily have that kind of effect. Say you buy some property, then sue the seller and get a judgment saying you overpaid. You can assert that judgment against the seller to collect, but at most the judgment means the seller owes you money back; it does not automatically transfer back to you the property right in some of the money you paid.

I think the majority is right in that the overall effect of the Colorado law would be a due process violation. But the majority's reasoning kind of requires thinking of the whole process as a black box and not thinking too hard about what happens inside.

4
ars 5 days ago 2 replies      
I don't understand this:

> Ali had previously sold the Chevy but still held title to it and it was registered in his name ..... In order to regain his Chevy, Ali was asked to prove his innocence.

Why does he need to regain "his" Chevy? It said he sold it, so it's not his anymore. Just the title transfer was not registered yet (which in my state is pretty common, especially for junk cars, since the state charges a fee and sales tax to record the transfer).

5
rhizome 5 days ago 0 replies      
A little more of an educated take from a couple weeks ago:

http://www.scotusblog.com/2017/04/opinion-analysis-states-ca...

6
Neliquat 5 days ago 5 replies      
This is a huge step forward. I only hope asset forfeiture is next.
7
davrosthedalek 5 days ago 3 replies      
It was a 7:1 vote. Is there a resource to look up how individual judges voted?
8
OliverJones 5 days ago 0 replies      
Sarah Stillman wrote up civil forfeiture in The New Yorker in the summer of 2013. John Oliver's TV piece on the topic must have drawn heavily from this article.

http://www.newyorker.com/magazine/2013/08/12/taken

9
inlined 4 days ago 0 replies      
> Ali had previously sold the Chevy but still held title to it and it was registered in his name. When the buyer was arrested for a DWI and drug possession, police seized the truck and filed a civil forfeiture action against it, even though Ali was not involved. In order to regain his Chevy, Ali was asked to prove his innocence.

If Ali sold the car but kept the title, what the heck did he "sell"? This sounds like a poor defense against civil forfeiture. It seems like he'd have to claim he cheated the person he sold the car to in order to assert ownership. I wouldn't want to touch that case either.

10
felipelemos 5 days ago 1 reply      
What surprises me is that - even with just 1 vote against - it was not a unanimous decision.
11
wu-ikkyu 5 days ago 1 reply      
Does this mean police can't legally steal your money anymore?
12
justinclift 5 days ago 1 reply      
Wonder if Kim Dotcom's legal team would be able to make use of this in their ongoing US legal saga?
11
Sorting Two Tons of Lego, the Software Side jacquesmattheij.com
507 points by jacquesm  2 days ago   91 comments top 21
1
jph00 2 days ago 8 replies      
I've really enjoyed talking to Jacques about his lego project over the last few days, and I hope that it will lead to some additional learning materials on course.fast.ai (which he kindly links to in his article) as a result. It's great to see such a thorough writeup of the whole process, which I think is much more interesting and useful than just showing the final result.

The key insight here, to me, is that deep learning saves a lot of time, as well as being more accurate. I hear very frequently people say "I'll just start with something simple - I'm not sure I even need deep learning"... then months later I see that they've built a complex and fragile feature engineering system and are having to maintain thousands of lines of code.

Every time I've heard from someone who has switched from a manual feature engineering approach to deep learning I've heard the same results as Jacques found in his lego sorter: dramatic improvements in accuracy, generally within a few days of work (sometimes even a few hours of work), with far less code to write and maintain. (This is in a fairly biased sample, since I've spent a lot of time with people in medical imaging over the past few years - but I've seen this in time series analysis, NLP, and other areas too.)

I know it's been trendy to hate on deep learning over the last year or so on HN, and I understand the reaction - we all react negatively to heavily hyped tech. And there's been a strong reaction from those who are heavily invested in SVMs/kernel methods, bayesian methods, etc to claim that deep learning isn't theoretically well grounded (which is not really that true any more, but is also beside the point for those that just want to get the best results for their project.)

I'd urge people that haven't really tried to built something with deep learning to have a go, and get your own experience before you come to conclusions.

2
marze 2 days ago 1 reply      
Suggestion: why not use three cameras simultaneously, each from a different angle, then classify the three images? Those cameras must be nearly free in cost.

Also, to get more training data, what about setting up a puffer to blow the part back onto the belt and tumble it? If you could configure the loader belt to load parts slowly and stop after one is seen, you could automatically re-image the first part an arbitrary number of times by blowing it backwards before letting it move along and restarting the first belt to get another.

And question: do you normalize out color at any stage? As in, classify a black and white image, with a separate classifier for the color?

3
ma2rten 2 days ago 1 reply      
It sounds like you went through a similar process as the computer vision community over the last couple of decades.

First people used to write classifiers by hand, but they found it's too tedious, unreliable, and has to be redone for each object you want to classify. Then they tried to learn to detect objects by using local feature detectors and training a machine learning model to classify objects based on those. This worked much better, but still made some mistakes. Convolutional Neural Networks were already used to classify small images of digits, but people were skeptical they would scale to larger images.

That was until AlexNet came along in 2012. Since then the performance of convolutional networks has improved each year. Now they can classify images with similar performance to humans.

4
11thEarlOfMar 2 days ago 1 reply      
This is true hacking. I mean, at its essential core. The purpose, the methods, the tools, the rationale. If there is an archetype for Hacker, it's jacquesm.

Bravo!

5
ziikutv 2 days ago 1 reply      
For anyone wondering, that is a USB Microscope camera which can be ordered via Amazon[1].

[1]: https://www.amazon.com/XCSOURCE-Microscope-Endoscope-Magnifi...

6
justforlego 2 days ago 1 reply      
Would it be possible to use existing 3D descriptions of the bricks to train the model? The LDraw library contains nearly every LEGO brick [1].

[1]: http://www.ldraw.org/

7
modeless 2 days ago 3 replies      
Awesome project!

> I simply dont think it is responsible to splurge on a 4 GPU machine in order to make this project go faster.

2 things: 1. You can rent 8-GPU machines on AWS, Azure or GCE. 2. The incredibly wide applicability of machine learning means that an investment in hardware might not be wasted. Even if you only use the machine for this one project, if it helps you learn more about the field it will probably still be a good investment career wise.

8
dpkonofa 2 days ago 2 replies      
I love projects like this because, while it doesn't necessarily have a direct application right away, it solves a piece of a problem that could go a long way to help something else. Reminds me of the skittles/M&M sorting machine that someone built a little while ago. As more projects like this develop, we're teaching computers more and more about visual recognition.

Can't wait for Skynet to go live! :-P

9
Saus 2 days ago 2 replies      
Nice work, I enjoyed the write-ups. You wrote that you wanted to sell off complete sets.

Would you be able to first make an inventory of all your available pieces, and then load a DB with (all?) complete sets and let the machine sort each set into one bucket (starting with the most expensive set first)? Or how are you going to get your sets together?

10
bootload 1 day ago 0 replies      
"then several things happened in a very short time: about two months ago HN user greenpizza13 pointed me at Keras, rather than going the long way around and using TensorFlow directly (and Anaconda actually does save you from having to build TensorFlow). And this in turn led me to Jeremy Howard and Rachel Thomas excellent starter course on machine learning."

This is why you read HN. Interesting, though: had Jacques not made the original attempts, I don't think the payoff above would have been as useful.

11
datenwolf 2 days ago 1 reply      
Cool project!

One question: wouldn't it have been easier to use a line scan camera and tether line acquisition to the belt's movement by attaching a rotary encoder whose output would trigger individual line scans? That's the standard solution in the industry.

12
otaviogood 1 day ago 1 reply      
It could be fun if you released your tagged data set of lego piece pictures so people in the ML community could try to write classifiers. Even untagged pics could be interesting.
13
RoboTeddy 2 days ago 0 replies      
> Right now training speed is the bottle-neck, and even though my Nvidia GPU is fast it is not nearly as fast as I would like it to be. It takes a few days to generate a new net from scratch but I simply don't think it is responsible to splurge on a 4 GPU machine in order to make this project go faster.

Easy cloud training: https://www.floydhub.com/

14
wolfgang42 2 days ago 1 reply      
> [The stitcher determines] how much the belt with the parts on it has moved since the previous frame (that's the function of that wavy line in the videos in part 1, that wavy line helps to keep track of the belt position even when there are no parts on the belt)

I'm curious about this wavy line--does it need to be specially encoded in any way or did you just squiggle the belt with a marker and let the software figure out how it lines up?

15
Jakob 1 day ago 0 replies      
Would a training set of 3D renderings of every angle of each LEGO piece work? That should be easy to produce and would make the manual labeling step obsolete.
16
unityByFreedom 2 days ago 0 replies      
Thank you for posting this follow-up!

I look forward to seeing if you can push it further by leveraging faster hardware in the cloud.

I suspect the training time could cause you to lose interest in iterating improvements. But, how cool would it be to make the project even better =)

17
geoffbrown2014 1 day ago 1 reply      
What a terrific project! So many levels of challenges. How many different images of each piece were needed before you could train the system?
18
Nexxxeh 2 days ago 2 replies      
Are there instances where multiple parts look the same from different angles?
19
tezza 2 days ago 0 replies      
Excellent writeup.

Has anyone applied this sort of thing to voice recognition? I see a lot of computer vision applications, but haven't found any audio classifiers amongst the CV articles.

20
wwarner 2 days ago 0 replies      
great write up, thank you for sharing it.
21
iDemonix 2 days ago 1 reply      
> Right now training speed is the bottle-neck, and even though my Nvidia GPU is fast it is not nearly as fast as I would like it to be. It takes a few days to generate a new net from scratch but I simply don't think it is responsible to splurge on a 4 GPU machine in order to make this project go faster.

You should stick up a donate button; if you keep writing interesting articles about how it all works, I'd happily throw a few dollars towards the process.

12
How Stripe teaches employees to code stripe.com
530 points by p4lindromica  4 days ago   116 comments top 17
1
rectang 4 days ago 2 replies      
When I worked for Eventful, I organized several study groups for beginners similar to this.

Like the Stripe team, participants in these study groups also found that cross-departmental collaboration improved. After learning the fundamentals of coding, people understood better how to work with engineering teams.

We also found a few people out on the edge of the bell curve who had strong engineering aptitude, including one fellow who eventually became a stellar engineer for us.

The main difficulty is that you need a lead who enjoys teaching. Personally, I find the challenge of explaining concepts at varying audience-appropriate levels fascinating and stimulating. We had one other fellow lead a group, and he had a good experience too. But as you can see from this discussion, not everyone wants to take this on.

2
barrkel 4 days ago 5 replies      
Flipping this the other way - for a company that works in a particular area, whether it's law, finance, retail, advertising, anything - it's important to educate engineers about the specifics of that industry. The alternative is to forgo bottom-up innovation in the company (with engineers that don't have enough business context) and risk embedding a command-and-control approach to feature development (coming in from sales, through product management, into engineering design, all the way down to the interchangeable coding monkeys).
3
jjcm 4 days ago 2 replies      
I'm currently running a very similar thing over at Atlassian. I was brought in to run their prototyping, and one of the things I immediately noticed is many of the designers didn't have the tools they needed to make proper prototypes. Rather than just teach them on invision/atomic/principal, I figured it'd be better to just teach them how to do front end dev. We're now holding lessons for a 2 hour block every week in 10 week cycles, and that seems to be the best for people schedule wise. Originally we did all day courses but too many people had conflicts. Course notes are here if anyone is interested: http://prototype.guide - also includes a small electron server to help dynamically compile stylus/pug.
4
bballer 4 days ago 0 replies      
This is great! I think everyone should have some fundamental knowledge of how coding works, especially those working at SaaS companies who aren't directly involved in code. It would definitely be a confidence booster for those in roles where the primary function is supporting software and the processes (and bugs, of course) around the primary product.

The problem we have is that we are strapped and everyone is pedal to the metal every hour we are at work. We just don't have the time to sit down and layout these kind of courses for all the employees and then follow through on properly teaching them. I sure wish we did. I can see how at large stable companies this can be a huge win, but the reality is for the smaller fish it's tough to pull this off.

5
korzun 4 days ago 7 replies      
A part of me does not understand how start-ups find so many different ways to burn their investors' money. Another part of me wonders how these people have so little workload that they can afford to sit in a 2.5-hour class every week and take projects home / do them at work.

I worked with companies that tried to do something similar, and besides a nice PR piece for the blog, these types of things are nothing more than a nice 2.5-hour break for people who can't schedule enough meetings to make their day go by faster.

Wait until individuals who came there to work get fed up with carrying the rest of their team and start looking elsewhere.

6
partycoder 4 days ago 3 replies      
There is substantial domain knowledge required to code safely.

Try explaining to someone who hasn't been exposed to how numbers are represented in memory that doing financial stuff with IEEE 754 floating point numbers (the default in JavaScript and others) can lead to precision errors, with possible financial consequences. Very hard to do without going all the way down to the basics.
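
The classic two-line illustration in any JS console:

  > 0.1 + 0.2
  0.30000000000000004
  > 0.1 + 0.2 === 0.3
  false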

Or maintainability, security, configuration, construction for verification, performance, scalability, concurrency, thread-safety... and all non-functional requirements.

You can save money with less skilled people, but that has a cost: I have fixed bugs implemented by self-taught guys. I remember profiling a service experiencing degradation (with terrible financial consequences) and finding an O(2^n) function that could be O(1). That's the kind of risk you expose yourself to.
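
Not the actual bug, obviously, but the classic shape of the problem in JS:

  // O(2^n): recomputes the same subproblems over and over
  function fibSlow(n) { return n < 2 ? n : fibSlow(n - 1) + fibSlow(n - 2); }

  // Same answers from a lookup table: O(n) to build, O(1) per call after
  var memo = [0, 1];
  function fibFast(n) {
    for (var i = memo.length; i <= n; i++) memo[i] = memo[i - 1] + memo[i - 2];
    return memo[n];
  }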

7
jorblumesea 4 days ago 2 replies      
Take recruiting, for example: would that really allow you to have a more technical conversation with a prospective employee? If they are any sort of engineer they'll run circles around you in every sense, so what's the benefit here? I'd love to see the data breakdown of how/why this benefits certain jobs or teams.
8
jaboutboul 4 days ago 1 reply      
How about opening up the course and sharing it on GitHub so others can benefit from it and help improve it as well?
9
barking 3 days ago 1 reply      
I would humbly suggest that they cease and desist from calling their workers stripes.
10
timlod 3 days ago 2 replies      
Every time there's a post about Stripe, it is a positive post. Programs they initiate, people they hire, how they act in their business. To me they seem like an example of a 'good' company with great culture, as opposed to what you read about Uber [edit: removed caps] and co. these days. Very refreshing!
11
throwaway2016a 4 days ago 6 replies      
> For example, == is used at Stripe to mean you're in agreement

Is this normal? I have never heard this before.

12
CodeSheikh 3 days ago 1 reply      
It is nice to see more and more tech companies taking the initiative to hold on-campus intro-to-programming classes (mostly it is JavaScript or Python). But before most of these companies decide to teach coding to every single employee, I would first focus on those god-awful "tech" project managers who get hired because of some fancy MBA degree and most of the time have no clue about the complexities of a software project in general.
13
imron 4 days ago 1 reply      
Next can they teach their web designers about contrast?
14
diek 4 days ago 5 replies      
> In seating, we mix engineering teams with non-engineering teams

In my experience this is a huge productivity killer for engineering teams. No, sitting us down next to the recruiting guy doesn't make the recruiter better at technology or the engineers better at understanding recruiting. It mostly just makes us hate the person who talks on the phone all day while we're trying to work.

15
sidchilling 3 days ago 0 replies      
Perhaps a follow-up blog post on which departments the people who attended the program were from, and how the program helped them in their day-to-day work.

Once we have success stories, more companies would be interested in implementing such programs.

Kudos to the initiative!

16
alexpetralia 2 days ago 0 replies      
I'm surprised by how controversial this article has been. I think it reflects the wide gamut of workplaces that continue to exist in tech, despite the external homogeneous appearance the industry maintains.
17
hasenj 3 days ago 1 reply      
It's all nice and everything, but I have a few "questions".

Is asking people about how they "felt" about the course really a good method of evaluating the course's effectiveness? It sounds like people would always make up a reason to say they felt good about the course, either as lip service, or because they enjoyed it even though it might have had zero effect on their work.

Maybe instead they should try to come up with specific things to measure before and after the course?

13
Thieves drain 2FA-protected bank accounts by abusing SS7 routing protocol arstechnica.com
373 points by Dowwie  4 days ago   209 comments top 24
1
kevin_b_er 4 days ago 9 replies      
SMS is not a secure 2nd factor. It is subject not only to technical attacks such as the one in the article, but also to a wide variety of social engineering attacks. Getting cell phone reps to compromise a cell phone account is apparently not hard, and this has been used many times to take over online accounts.
2
Latty 4 days ago 6 replies      
Banks here in the UK use your chip & pin based card as a second factor (or rather, as the two factors - the chip you have, the pin you know) - they give you a little card reader that can use the card and pin to provide a 2FA token for logging in or sign requests to send money.

It's a much better system. Of course, some banks don't use it to its full potential - many use it only for signing money transfers, but it's still pretty good. The readers are also cheap and standardised, so you can use any one of them for any account, which is useful.

3
danjoc 4 days ago 1 reply      
Last July, NIST called out SMS 2FA as insecure

https://www.schneier.com/blog/archives/2016/08/nist_is_no_lo...

Second comment: SMS should have been removed a long time ago considering the SS7 problems. Better to use a secure token.

Is the bank taking responsibility and covering the loss for their customers?

4
ismail 4 days ago 1 reply      
The problem with SS7 is that trust is assumed. Mobile carriers that have roaming agreements will have either a direct link or go via a hub. So what happened here is that the network of a foreign roaming partner was used to redirect the SMS traffic on the victims' carrier. I would not be surprised if it was an inside job.

With SS7 you can do fun things like query the last location update / logged-in base station for a mobile phone; due to roaming, carrier X can query for customers on carrier Y in another country. If you link up to one of the roaming hubs you can pretty much get the location of anyone with a mobile phone. Feature phones included.

5
kwhitefoot 4 days ago 1 reply      
The headline makes it sound as if abusing SS7 was all they needed to do, but in fact they had to have the other factor as well, so it really is not quite as scary as it at first appears. It also seems from the article that the thieves were able to log in to the accounts with just a password and only needed the SMS to sign transactions.

It's different here in Norway; the banks require two-factor authentication to log in as well as to sign transactions.

I don't claim it's perfect but at least no one can log in unless they control both factors.

6
ryanmarsh 4 days ago 2 replies      
Phreaking, the cutting edge way to commit computer fraud in 2017.

Who would have guessed?

7
matt_wulfeck 4 days ago 0 replies      
I'm still a little irked that Google constantly reminds me to add a phone number as a backup for my email account. I already have Google push login and OTP, as well as backup codes.

This proves that the phone can be more of a liability in the face of much better technology.

8
idlewords 4 days ago 4 replies      
Here's a guide for how to set up SMS-free two-factor authentication on your Gmail account. It will cost you $18; if that's a hardship, contact me.

https://techsolidarity.org/resources/security_key_gmail.htm

9
codewithcheese 4 days ago 1 reply      
Namecheap only supports SMS 2FA. They have been suggesting they will support Authenticator for years now: https://blog.namecheap.com/two-factor-authentication/

Pretty unacceptable considering how important domain control is.

10
hinkley 4 days ago 2 replies      
Isn't this the old "SMS is not 2FA, stop calling it that" argument?
11
zyx321 4 days ago 1 reply      
The Sueddeutsche article claims that German customers were affected too. Most German banks I know of support TAN generators [1], which are completely unhackable by any known method. Insert your card, scan the barcode on your screen, confirm the target IBAN and amount, and you get a unique TAN that is calculated from your transaction parameters.

[1] https://www.amazon.de/ReinerSCT-Tanjack-chipTAN-SmartTAN-Tan...
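
Not the actual chipTAN algorithm - just the general idea in node: the code is an HMAC over the transaction parameters, so tampering with the IBAN or amount invalidates it (names and scheme illustrative):

  var crypto = require("crypto");

  // cardSecret never leaves the chip; iban/amount come off the barcode
  function tan(cardSecret, iban, amountCents) {
    var mac = crypto.createHmac("sha256", cardSecret)
                    .update(iban + ":" + amountCents)
                    .digest();
    return ("00000" + (mac.readUInt32BE(0) % 1000000)).slice(-6);
  }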

12
mdekkers 4 days ago 2 replies      
My bank's 2FA literally comes on a piece of paper. A set of numbered codes, and the banking app/site tells me which code to use for any given transfer.
13
finnn 4 days ago 2 replies      
When I asked (via Twitter) if my credit union would provide a secure 2FA option, they told me:

> We're always on the lookout of how we can keep our members' accounts secure. Right now, the Mobile Texts are FFIEC compliant.

14
stcredzero 4 days ago 0 replies      
In August, Lieu called on the FCC to fix the SS7 flaws that make such attacks possible. It could take years to fully secure the system given the size of the global network and the number of telecoms that use it.

One of the newly discovered great sins of the early 21st century is to disseminate insecure code. Before the public became widely aware of chemical pollution, I'm sure many polluters thought themselves innocent and saw environmentalists as pernicious busybodies.

15
cyberferret 4 days ago 1 reply      
Another feather in the cap for a dedicated 2FA solution such as Google Authenticator etc. that doesn't use SMS?

Though having replaced two phones since adopting that solution, I can say it's a pain to have to set it up again with each provider every time. I can see that if someone is prone to losing their phone, it will become a major issue.

I think the problem is that all the companies I use 2FA with have totally different methodologies for re-setting it up on a new device. Whilst some have an automated way of verifying my identity and resetting the new device almost instantly, I have had a couple that needed talking to a human support rep (inconvenient, but understandable) and one company that needed another employee in the company to do a full 2FA verification themselves, and then talk to a company support rep on my behalf to verify my request to reset my 2FA settings! (WTF)

Thus, each time I replace my phone, I find myself culling the number of services where I use 2FA, purely because it was too much of a pain to go through the reset process and easier to drop 2FA with them altogether (or in one case to drop the service altogether).

16
riobard 4 days ago 0 replies      
So SS7 is like BGP where you can just announce your number/IP block?
17
DBNO 4 days ago 3 replies      
Edit: I had an idea for an improved SMS 2FA, but comments gave persuasive reasons why Google Authenticator is better. Thanks for the comments!

The idea is basically a 3FA system where the bank sends you a one-time 6-digit number. You then have to translate that number using a user-seeded cryptographic hash function. This secret function is your third factor: it translates the received SMS code into the value you'll input at login.

Analysis: security would increase, but ease of use would decrease, especially in regards to how a user would reset their password if they lost both their password and the program that calculates the cryptographic hash.

18
partycoder 4 days ago 0 replies      
Phreaking in 2017, interesting. The golden age of phreaking ended with SS7. SS5 was very insecure: people could just emit tones at certain frequencies and pull off tricks like calling for free. Maybe this is the beginning of a new era.

I think major websites should stop using SMS and ask for just an authenticator app or secure keys. SMS should be regarded as a bad security practice.
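
The nice property of authenticator apps is that the code is derived locally from a shared secret and the clock, so there is no SMS leg for SS7 to redirect. A minimal TOTP sketch in node (per RFC 6238: HMAC-SHA1, 30-second steps, 6 digits):

  var crypto = require("crypto");

  function totp(secret) {  // secret: Buffer shared at enrollment time
    var counter = Buffer.alloc(8);
    counter.writeUInt32BE(Math.floor(Date.now() / 1000 / 30), 4);
    var h = crypto.createHmac("sha1", secret).update(counter).digest();
    var off = h[h.length - 1] & 0xf;  // RFC 4226 dynamic truncation
    var code = (h.readUInt32BE(off) & 0x7fffffff) % 1000000;
    return ("00000" + code).slice(-6);
  }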

19
pm90 4 days ago 1 reply      
This is really scary... can banks please start using something like Google Authenticator? I was assuming that 2FA over SMS was the most secure thing ever... apparently that's not the case.
20
jdmichal 4 days ago 0 replies      
This sounds a lot like attacks on the CAN buses within car systems. We can no longer afford to have zero-authentication, zero-authorization networks anywhere.
21
danellis 4 days ago 1 reply      
Where and how do these people get access to the PSTN?
22
Techbrunch 4 days ago 1 reply      
It would be nice to have a WhatsApp API that could be used for 2FA; banks probably already have your number.
23
EGreg 4 days ago 0 replies      
These days what is a good way to authenticate people AND prevent them from making millions of accounts?
24
finnn 4 days ago 4 replies      
Is there a good technical explanation of how SS7 works, technical docs, etc?
14
How to Survive as a Solo Dev for Like a Decade or So sizefivegames.com
426 points by lj3  4 days ago   163 comments top 16
1
raamdev 4 days ago 16 replies      
> I find it astonishing that startup indie devs pay out for an office, with all the extra bills that entails. Work from home, keep your overheads as close to zero as possible.

I've been a solo dev for nearly a decade (8 years) and I found that sometimes it makes complete sense to pay for an office. Before my daughter was born, I was able to work from almost anywhere: noisy cafes, at home, the library, etc. But after she was born it was like my brain needed a space away from home where I could close a door and have a room all to myself, a place where I couldn't hear family sounds or be within walking distance of anyone I knew. I tried cafes and libraries but, oddly, they no longer worked for me - the noise and the people walking by suddenly became a huge distraction. I couldn't focus. I decided to rent a tiny artist studio for $350/mo that I found on Craigslist and it was the best $350 that I spent each month in terms of direct impact on my ability to get work done.

2
karmajunkie 4 days ago 3 replies      
This would be more accurately titled, "I survived 10 years as a solo developer and here are the choices I made". Here's my version:

1) Craftsmen pay good money for their tools. Invest in your office space, whether it's a dedicated space in your home or a coworking membership. But be honest about what tools you need. You probably don't need a $1000 Aeron chair. But you do need one that's comfortable. You probably don't need that super-cool triple-panel 17" laptop Razer pumped at CES for web development. You probably do need something with enough RAM to run a few VMs sometimes.

2) Outsource everything that doesn't make you money or that you aren't good at. Taxes and bookkeeping, for example. But know enough about it so you can tell if you've hired good help there.

3) Invest in yourself. Learn some new technology at least once a year - whether that's a framework, a language, or a skill like design. Go to at least one regional conference, and at least one national conference if you can afford it.

4) Work the shit out of your network. Set a limit on how many unpaid lunch meetings you'll take to hear about other people's problems, and always try to find a way to help them even if you don't wind up taking the job. And then try to hit your limit most of the time. Farm favors like they're a cash crop.

5) Find a way to keep yourself accountable, whether that's a mentor, a coach, or an accountability partner. We all need someone to keep us honest about our motivations and rationalizations from time to time.

6) Try to exercise some self control over how many self-indulgent HN comments you make in a given period of time. :)

3
enraged_camel 4 days ago 4 replies      
>>> Don't spend any money: Do as much as you can yourself. If you can't afford it, don't pay someone to make assets that you could do yourself. What's more, do you really need to hire a full time coder? Or can you just hire a freelancer for a month? If you don't have money, make the sound effects yourself.

Nah. As a solo dev you need to spend your time efficiently.

You can go to Fiverr and pay literally $5 for stuff like that. Sure, what you get won't be amazing, but it will be passable and it is pretty much guaranteed to be better than what you can create as a pure beginner.

That is far more preferable to spending hours or days (or maybe even weeks) learning to do it yourself.

4
throw9966 4 days ago 7 replies      
The best advice that I got was: don't quit your full-time job. My desktop sharewares don't sell that often nowadays, but it's enough to pay my rent and monthly expenses. The salary that I get from my full-time job goes straight into my bank account, untouched.

> "Working from a cafe"

100% agree. For me, no work gets done from a cafe. I wonder what work people do sitting at Starbucks. I can't write one line of code if I am being constantly distracted. Does anyone feel differently?

5
hartator 4 days ago 4 replies      
> Sit down and do some fucking work. Don't go for a coffee, that's not work and you know that's not work, no, you're not working from the cafe, stop lying to yourself. Get up, get on and do some fucking work.

I can relate so much to the coffee trap. Just got back from grabbing coffee. Just another way to delay working.

6
erikb 4 days ago 6 replies      
Now that I've experienced the start-up world twice I have to say: What's the big deal about working full time in big corp? Not all of them have cubicles and ask their devs to wear suits. T-shirt, free coffee, huge desk, free hardware, other smart people who are just like me, reliable income, some level of attractiveness to the other gender due to a stable life.

Honestly I don't know why I didn't do that from the start and work on my projects in my spare time.

7
slaunchwise 4 days ago 0 replies      
I was a solo for 10+ years. I rented an office when my kids were too little to simultaneously grasp the ideas that 1) I liked them and wanted them near me and 2) I could not actually have them near me right now. When they were old enough to understand I moved back home. It didn't make much of a difference in terms of productivity for me.

One thing that did help was a sense that my workspace was both mine and a place for work. I needed to know that, and that no one had the right to interrupt or try to shoo me away. Public spaces never worked for me because other people had a right to them, too, and they could bring their kids or ask me questions about the nearest chair or whatever. I could concentrate better knowing that was true. It was worth money to me.

8
redwyvern 4 days ago 0 replies      
Not really well written, but some of the points on not wasting money and time with BS activities and expenditures were important.
9
jacquesm 4 days ago 0 replies      
If you want to make it for much longer than just a decade, plan for those once-every-10-years dry spells and accidents, and SAVE. Sock money away as if your life and career depend on it; one day they will, and if you don't have savings you will end up in trouble. Save 20% of your gross at a minimum; then, once you reach 100K or so of rolling reserve, you can relax and start spending a bit, but really try to maintain that reserve.
10
vpresident 4 days ago 1 reply      
Being an indie developer for 10 years feels like too much to me.

I mean, once you do 2-3 games and you have a little success, I think you should use that notoriety to gather more talent and assemble a team. Forget starting a startup or a studio, or being an entrepreneur. I say you should continue doing what you do, but instead of doing all the development yourself and outsourcing the graphics or the sound effects, just bring those people in.

A team of 2-3 developers, 1-2 artists, and a composer should level up faster. Watch Fullbright[1]: they had amazing success together, while on their own they were... just OK.

Are there any former solo devs here who could share their story? What was the next step for them?

[1] https://en.wikipedia.org/wiki/Fullbright_(company)

11
jmspring 3 days ago 0 replies      
I've worked as part of a larger local remote team for a number of years. I'm dealing with customers, group meetings, or hackfests about 3-5 days a month. Outside of that, I'm fully remote.

I used to have a garage office, but for the last several months the coffee table has been the remote office. Soma.fm gets queued up when I start working. I take breaks for bike rides/lunch/exercise; if I feel the need to be "social", the local brewery (more like park + brewery) has outside seating and friends are there.

Solo - just gauge how much interaction you need and when you need it.

12
gcb0 4 days ago 1 reply      
Was the author abused by a 3D space shooting game? Why pick on that genre as the holy grail of a game that won't be finished? I am almost sure I'm missing some internal game dev joke.
13
kristianp 4 days ago 0 replies      
> Say Game1 brings in £40,000: with rent and beer and socks that's probably two years' salary,

Be prepared to not make a huge amount of money, but you get to write games for a living.

14
fapjacks 4 days ago 2 replies      
"Sit down and do some fucking work."

The other stuff is just optional.

15
ninjakeyboard 4 days ago 1 reply      
Nice article, and I almost bought your game, but I don't play games on my PC :( If I did, I would; it looks mint. Nice trade of experience for advertising.
16
return0 4 days ago 0 replies      
I find that the frequency of posts about solo entrepreneurs has increased. Are there any startups working in this niche?
15
Americans' Access to Strong Encryption Is at Risk, an Open Letter to Congress rietta.com
360 points by rietta  5 days ago   134 comments top 22
1
Nomentatus 5 days ago 9 replies      
The irony here is that simple one-time-pad (OTP) solutions will continue to be available to securely encrypt the sort of messaging that's of use to terrorists (relatively short, infrequent messages); instead, it's the general communications (including for banking) that the rest of us perform online that will be made vulnerable.

You don't even have to program or use a computer to create these OTP solutions; for limited messages you could just flip a coin to create the pad if necessary (although there are lots of more automated solutions available as well).
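To see how little machinery this takes, here's a minimal sketch in Python (illustrative only; secrets stands in for the coin flips, and all names are made up):

    import secrets

    def make_pad(n):
        # The pad must be truly random, as long as the message, and never reused.
        return secrets.token_bytes(n)

    def xor(data, pad):
        # XOR is its own inverse, so the same function encrypts and decrypts.
        return bytes(a ^ b for a, b in zip(data, pad))

    msg = b"attack at dawn"
    pad = make_pad(len(msg))
    ct = xor(msg, pad)          # ciphertext, safe to send in the open
    assert xor(ct, pad) == msg  # the receiver recovers it with the same pad

The entire scheme is that one XOR; all the difficulty is in generating, sharing, and never reusing the pad.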

Airgapped computers at both ends provide another way 'round restrictions for more sophisticated actors. Their backdoors won't be accessible (remotely).

So taking away secure encryption from the rest of us is just security theatre; a destructive, narcissistic legislative exercise designed to make it look like the pompous and powerful are doing something, when they're doing nothing of any real use while creating terrible risks.

This is why, I think, legislators have consistently ignored logic and math from professionals such as the OP - they don't care. They know perfectly well they're pissing into the wind, doing nothing useful; that it's all theatre; they just think the fallout is going to land on someone else's pants after they're out of office. But tech works (and fails) faster than that.

[Counterargument: if everything else is breakable, securely encrypted messages really stand out. One answer: But very short messages (in an unknown format) aren't generally breakable, anyway, and that's the likely case.]

2
xupybd 5 days ago 4 replies      
Just think of the outrage if the government required master keys to everyone's homes. I know there is a difference, but it's not a huge leap to compare the two. We don't want the government to have such easy access to our homes because we can't trust every government employee not to abuse it. I think the same goes here. No matter what safeguards you put in place, it's a scary thought that you simply can't keep the government out of your affairs. Sure, now you think you have nothing to hide. But what if your political views become criminal, what if your religious views become hate speech? We're not there yet, but times can change quickly.
3
1001101 5 days ago 2 replies      
Does anyone here remember the clipper chip? If you don't, I'd recommend boning up on this chapter of the crypto wars.

The 'because terrorism' excuse falls a bit flat with me.

Thought experiment: how hard would it be for a terrorist organization with access to 100's of millions of dollars (eg. ISIS) to come up with a secure communications scheme? One time pad. A reasonable cipher that hasn't had any 'help' during development. Even run an encrypted channel over a backdoored product. I'm sure many of us could come up with something in a day (with decryption over an airgap). How about a hostile government with multi-billion dollar budgets (and who have been using OTP already for decades).

Is this about terrorists, or is this about citizens? My bet is on the latter.

https://en.wikipedia.org/wiki/Clipper_chip
https://en.wikipedia.org/wiki/Crypto_Wars

4
rietta 5 days ago 0 replies      
I'm going to have to go back and listen to the entirety of the Senate hearing at some point. There is so much talk about Russian hacking and influence, and then they flip the switch and want backdoors into encryption, even though any mandated tool the government demands for so-called lawful intercept can be hacked by, or ordered open by judges in, Russia! There is a strange disconnect, and I think it hurts us that the public discourse is security vs. privacy rather than being about the personal security of all citizens.
5
libeclipse 4 days ago 1 reply      
I did my Extended Project Qualification (EPQ) [1] on this issue, and it actually surprised me how many people think that the governments are right in this debate.

When presenting the work, I had a chance to ask ordinary people, and they all pretty much agreed that the government should be able to "break" encryption with a warrant.

This is a scary prospect, and I feel that educating citizens as well as the government is important.

[1] https://github.com/libeclipse/EPQ/blob/master/paper.pdf

6
sandworm101 5 days ago 0 replies      
Access is at no risk whatsoever. Encryption is math. It is open source. It will always be there. What is at risk is the legal right to use it, the government's permission for the public to use that math. My point: people with good reason to fear the government will still access and use encryption. This therefore isn't about terrorists. It is about watching the everyday people who want to abide by the law.
7
_jal 5 days ago 0 replies      
The Four Horseman of the Infocalypse[1] ride again!

[1] https://en.wikipedia.org/wiki/Four_Horsemen_of_the_Infocalyp...

8
nom 5 days ago 0 replies      
The greatest problem right now is our hardware, not our software. We can always devise secure encryption schemes without backdoors. Nobody can do anything against it.

Our hardware on the other hand... is probably backdoored already.

9
WalterBright 4 days ago 1 reply      
It isn't just our privacy at issue. With more and more critical infrastructure on the internet, having unbreakable encryption is a major national economic and national security requirement.

It's unrealistic to think that if there is a means for access by the government, that foreign enemies and criminal organizations won't be able to access it, too, and cause havoc.

10
pinaceae 5 days ago 0 replies      
As if they'd give a shit.

Right now they want to un-insure 24mil people, re-introduce the whole pre-existing condition scam.

you really think a ruling class that has no qualms being "pro-life" while denying young mothers healthcare will care about your nerd bullshit?

11
notliketherest 5 days ago 3 replies      
This is not a battle they can win. Most Americans DGAF if their shit is encrypted, until the PSA campaign fighting against laws like these tells them the government is taking away their rights and able to snoop on their lives. Just like SOPA and others, this will be defeated.
12
natch 5 days ago 0 replies      
For congressional consumption, I suspect arguments like this need to be dumbed way, way, down.

Tim Cook's "software equivalent of cancer" is an example of an effective dumbed down take on it, but it need not be the last one. The more ways the point can be re-worded concisely so that lay people will understand it, the better.

13
shmerl 5 days ago 0 replies      
Some just never learn. How many times will they bring up this "let's make a backdoor but we don't really want a backdoor" stupidity?
14
paulddraper 5 days ago 2 replies      
Encryption will never be intentionally backdoored on a large scale.

I think someone from RSA argued this, basically: "Do you really think the government will want to review and approve everything on the app store?"

Forcing big players to divulge data, making accused people decrypt their devices -- those are things the government could do. Encryption per se isn't in any danger.

15
threepipeproblm 4 days ago 0 replies      
I read that Sen. Dianne Feinstein is supporting an anti-encryption bill. It's never been completely clear to me if she, and those like her, fall more on the stupid side or more on the evil side.

But the arguments against this aren't that difficult... so I have to guess it's the evil. Power corrupts.

16
spilk 4 days ago 1 reply      
The US Department of Defense arguably runs the most extensive key escrow system in the world. Every DoD employee and many contractors have Common Access Cards (CAC) that contain email encryption keys that are escrowed with DISA.
17
nickpsecurity 4 days ago 0 replies      
A better example of work that Congress might be interested in would be Schneier and Kerr's writeup on encryption workarounds showing government tools they have available with legal considerations of current or expanded ones. That's the kind of practical stuff that can influence powerful people's opinion as they're always looking at grey areas to balance many conflicting interests.

https://www.schneier.com/blog/archives/2017/03/new_paper_on_...

18
feld 5 days ago 0 replies      
Bernstein v. United States
19
deepnet 5 days ago 1 reply      
> ... "protected being being stolen."

repetition error.

20
I_am_neo 5 days ago 0 replies      
As a sovereign I demand my privacy!
21
microcolonel 5 days ago 1 reply      
Good sentiment, and better cause...

but please, for the love of god, proofread your writing!

22
azinman2 5 days ago 4 replies      
Wait does he have a Masters in Information Security from the College of Computing at the Georgia Institute of Technology???!

Joking aside, unfortunately it takes deep problems to motivate people/the US to change. It'll swing this way, and there will be dramatic consequences. Only then will things swing back the other way.

It's too bad there isn't any balance here -- it does make sense in many situations that the police/courts should be able to gain access to information. But encryption doesn't care about the situation. Encryption doesn't care who you are. Encryption has no contextual morals of its own.

If data had physical weight, where things that were important were really hard to steal, then it'd function like the real world. But data does not, and it's too easy to download gigs of data one should never have access to. It's very difficult to gain a middle ground as suggested by Pelosi. I don't know if she understands that.

16
Build yourself a Linux github.com
491 points by AlexeyBrin  4 days ago   88 comments top 25
1
Sir_Cmpwn 4 days ago 2 replies      
Building a Linux distro from scratch has been one of the most taxing projects I've attempted. There are two phases: the frustration phase, and the tedious phase. Bootstrapping it is an incredibly frustrating process - I restarted from scratch 5 times before any attempt ever got to the tedious phase. The tedious phase never ends. You have to make hundreds of packages to get to a usable desktop system. I've made 407 packages and I still don't have a desktop to show for it (I'm hoping to get sway working tomorrow, I think I have about 30 packages left).

Still, I've finally gotten to a somewhat practical self-hosting system. My laptop runs it on the metal and all of the infrastructure (website, bugzilla, git hosting, mirrors) is on a server running my distro too. It's taken almost a year of frustration, but it's very rewarding.

2
erikb 4 days ago 1 reply      
I really wondered what's so special about this project, which aims to achieve what probably more projects have aimed to achieve than there are lines of code in the kernel.

It's not the goal, it's not the OS. But the documentation is a sight to behold! Very clear, detailed, interesting writing style, and it puts together quite a few frustrating topics in a simple, structured manner. Wow and kudos! Keep on writing docs, please!

3
digi_owl 4 days ago 3 replies      
Both this and LFS remind me that Linux makes sense until you get the DEs involved. At that point shit just sprawls all over the place, as there is no longer any notion of layers.
4
thom_nic 4 days ago 0 replies      
A similar process is building a custom kernel and rootfs for an ARM device such as the beaglebone. Olimex actually has a good tutorial for their device: https://www.olimex.com/wiki/AM335x

This is only slightly more complicated due to the need to cross-build, but I found it fairly easy with qemu-arm-static and prebuilt cross-toolchain packages for Ubuntu/Debian.
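For anyone who wants to try it, the basic flow on a Debian/Ubuntu host looks roughly like this (package names from memory, so double-check them):

    # cross toolchain + user-mode emulation
    sudo apt-get install gcc-arm-linux-gnueabihf qemu-user-static

    # cross-compile a test program, then run it under qemu's user-mode emulator
    arm-linux-gnueabihf-gcc -static hello.c -o hello
    qemu-arm-static ./hello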

The benefit is that you can develop for a target device that is not your PC, so no worry about messing up the bootloader and leaving your PC in a state where you need a recovery CD to fix it and boot. Just get a USB-serial cable :)

You can also try buildroot or yocto, although I had no interest in building every package manually versus relying on Debian's repos.

5
asciimo 3 days ago 2 replies      
This is almost as complicated as building a Javascript web application.
6
fizixer 4 days ago 5 replies      
Have you even looked at the LFS project[1]? And what does your guide provide that LFS doesn't?

[1] http://www.linuxfromscratch.org/

7
lanna 4 days ago 1 reply      
If you are interested in building your own Linux, the LFS project has a lot of detailed information: http://linuxfromscratch.org
8
jonathanstrange 3 days ago 4 replies      
I have a question related to this article, though not directly. If building completely from scratch turns out to be too cumbersome and time-consuming, what would be the easiest way of building a minimal, fast-starting distro with a graphical user interface and networking, whose only purpose is to run one application on x86 hardware in kiosk mode?

Is there a distro builder for dummies?

9
marenkay 2 days ago 0 replies      
Huge fan of Linux From Scratch myself; after reading this I wonder if someone has tried the same with FreeBSD! Or the Darwin sources released for OS X (not talking about the dormant PureDarwin project).
10
throw2016 4 days ago 0 replies      
Following this guide will get you something very close to a base Alpine Linux with busybox. Alpine is fairly minimal out of the box and even eschews grub for syslinux.

The upside with Alpine is that if you need features and packages, they are an install away. But if the purpose is to learn about compiling the kernel and how the system initializes, this is a decent start.

11
thu 4 days ago 0 replies      
This sounds like Aboriginal Linux: http://landley.net/aboriginal/about.html
12
rijoja 3 days ago 0 replies      
I am following this guide to build a kernel, but it seems that instead of getting the headers from the kernel source, they are using a GitHub repository which only contains the headers, to save download time. All fine and dandy if the latest commit to that repo wasn't from 3 years ago!!
13
blanket_the_cat 4 days ago 0 replies      
This is awesome. I've been building almost the exact same project, along almost the same timeline (based on the commit history). Mostly an excuse to learn more advanced Bash, and Linux Internals/Features I've never had a good excuse to explore. Gonna release next week. Hope I get as warm a reception. Kudos on an awesome project!
14
peterwwillis 4 days ago 0 replies      
I remember when HOWTOs were actually maintained over time so their instructions were up to date. Blogs killed HOWTOs.
15
agumonkey 3 days ago 0 replies      
Let's branch to add: sysvinit/BSD, init, OpenRC, upstart, systemd, SMF, launchd, Epoch, finit...

https://wiki.gentoo.org/wiki/Comparison_of_init_systems

16
felixsanz 3 days ago 0 replies      
Awesome! Good job. This also helps understand what the distro installer does.
17
colemickens 4 days ago 0 replies      
Seems neat to learn, but for something maintainable, LinuxKit seems interesting.
18
ausjke 3 days ago 0 replies      
This is an awesome write-up. I did not know losetup can now do what kpartx does, via the -P option; I did similar things in the past, so this is a good update for me.
19
Siecje 3 days ago 0 replies      
Has anyone used Tinycore Linux? http://tinycorelinux.net/
20
Jaruzel 3 days ago 0 replies      
I've crashed and burned a couple of times trying to complete Linux From Scratch, so I may give this a go; it seems a bit clearer on the core steps.
21
apeacox 4 days ago 1 reply      
Well, I'd use http://linuxfromscratch.org/ for that matter...
22
akavel 4 days ago 0 replies      
Would be cool to port this to the Nix package manager machinery (i.e. make this kind of an alternative to NixOS).
23
Ericson2314 3 days ago 0 replies      
I feel like just reading Nixpkgs is probably just as edifying, tbh.
24
faragon 4 days ago 0 replies      
Beautiful.
25
airswimmer 4 days ago 0 replies      
I don't think this is very useful. You should check out linuxfromscratch.org.
17
Google accuses Uber of creating a fake company in order to steal its tech businessinsider.com
355 points by golfer  5 days ago   170 comments top 20
1
golfer 5 days ago 1 reply      
Some quality journalists are live tweeting the proceedings from inside the courtroom. Getting some great updates in real time:

https://twitter.com/CSaid

https://twitter.com/Priyasideas

https://twitter.com/inafried

https://twitter.com/MikeIsaac

2
TazeTSchnitzel 5 days ago 2 replies      
This made me wonder what HN thought of the acquisition at the time.

Well: https://news.ycombinator.com/item?id=12315205

Top comment noted how the company looked like a quick flip.

3
crench 5 days ago 3 replies      
"Here's the thing," [Judge William Alsup] said. "You didn't sue him. You sued Uber. So what if it turns out that Uber is totally innocent?"

This is going to be a very interesting case.

4
anigbrowl 5 days ago 3 replies      
What does it take to get a business license revoked these days? If an individual carried on the way Uber does, s/he'd be looking at a long stretch in prison. While I championed Uber's disruption of the taxi monopoly when it got started, and did a lot of free advocacy here on HN against taxi industry shills, the firm has turned out to be as corrupt as or worse than the market it set out to disrupt.

Should the various allegations made against the firm prove true, and it seems like there's a good chance of that, a good number of people need to face criminal charges, the company needs to be shut down and its assets auctioned off, and the investors need to end up with nothing because they abrogated their corporate governance responsibilities.

5
askvictor 5 days ago 3 replies      
Uber seems to be the logical extreme of the 'easier to beg forgiveness' mentality; there are no rules (explicit or implicit) or ethical boundaries that aren't subject to being broken in pursuit of their goals.
6
marcell 5 days ago 2 replies      
Based on these live tweets (https://twitter.com/CSaid) it doesn't sound like Waymo/Google is making much headway. They want to pin this on Uber, but haven't presented evidence of wrongdoing by Uber:

> Judge to Waymo: "U have no proof that shows a chain of Levandowski saying to anybody 'here's the trade secrets.'"
Unless Waymo presents something like this, I don't see how this trial benefits them. Sure, they can make a big fuss and get Levandowski kicked off self-driving cars / lidars, but that won't stop Uber from moving forward with their program.

7
Animats 5 days ago 10 replies      
Levandowski is probably going to come out of this really well. Uber, not so much. Google's LIDAR technology is obsolete spinning-scanner gear, mostly from Velodyne. It's something you'd use on an experimental vehicle, not a production one. The production LIDAR systems are coming, they're all solid state, and they come from big auto parts makers like Delphi and Continental. So by the time the Uber case gets to trial, it will be moot.

Levandowski already got his money. Waymo could sue him, but what are the damages? Google isn't selling anything, so they can't show impact on their sales volume. (Neither is Uber. Uber's venture into self-driving is probably more to pump up the valuation than to provide a real service, anyway.)

I'm beginning to think that self-driving will be a feature that comes from auto parts companies. You need sensors and actuators, which come from auto parts companies. You need dashboard units, which come from auto parts companies. You need a compute unit, which is just a ruggedized computer packaged for automotive conditions, something that comes from auto parts companies. You need software, which may come from a number of sources. This may not be all that disruptive a technology.

8
dmitrygr 5 days ago 1 reply      
Judge Alsup, to Waymo Lawyer: "You have one of the strongest records I've seen of somebody doing something bad. Good for you!"
9
kristianc 5 days ago 0 replies      
Original HN discussion from the time; several on here, including myself, thought the acquisition looked like a quick flip from the outset:

https://news.ycombinator.com/item?id=12315205

10
Namrog84 5 days ago 1 reply      
If this turns out to be true, it does not bode well for a bright Uber future. I feel like I only hear bad things about them lately.

Anyone have any speculation as to what might happen to Uber if this turns out to be true?

11
wand3r 5 days ago 1 reply      
This is Uber's fault for leaving themselves open, but this will play out like a hostile takeover, with Google upping their 6 percent stake to controlling or wholly owning Uber.
12
Touche 5 days ago 1 reply      
The entire point of Uber is to evade laws on technicalities, so the fact that they are doing this here should be a surprise to no one.

That is their core competency, in fact.

13
aaron695 5 days ago 0 replies      
"Google just accused Uber of creating a fake, shell company with its former engineer to steal its tech"

They don't seem to be backing that clickbait title, do they?

I think the backdated form makes sense; more sense than doing something obvious like creating a fake company the 'day' after someone leaves Google.

14
fujipadam 5 days ago 0 replies      
Considering Uber's past unethical behavior, I am sure they stole tech. The problem is that it is very difficult to prove it.

If it is proved, there should be actual consequences for the executives, including jail.

15
PascLeRasc 5 days ago 1 reply      
Is this Otto the same as www.ottomotors.com? That site seems slightly fake, like the kind of site Hooli in Silicon Valley would have.
16
huangc10 5 days ago 1 reply      
If this is true, it's kind of a brilliant scheme, you know, in an evil "I'm going to take over the world" sorta way.
17
tim333 4 days ago 0 replies      
So if Uber stole Google's designs but isn't using them in their present self-driving stuff, can Google do much?
18
asafira 5 days ago 0 replies      
Since parts of this are public information, when is the next bit of information expected to come out?
19
bunderbunder 4 days ago 0 replies      
This page loads a 2400x1800 *.jpg into a (on my screen) 372x279 image element.

I believe no further comment is necessary.

20
valuearb 5 days ago 2 replies      
18
The Rust Libs Blitz rust-lang.org
430 points by aturon  3 days ago   122 comments top 16
1
dmix 3 days ago 3 replies      
This is something Haskell could really benefit from. Largely just through writing documentation for common libraries.

A post was recently on the frontpage of HN about using Haskell in production [1] that divided the common documentation experience between "hard" and "soft" docs. Far too often with Haskell you only get the 'hard' docs, where you get descriptions of functionality and functions but not why (and, cohesively, how) you would want to use them.

This makes a strong assumption that you are already deeply familiar with the use case and implementation concept.

This may apply to Rust as well. Rust will likely attract experienced developers, much like Haskell, where in most cases a decent level of code quality would be anticipated. But one of the hardest things to get right as an OSS developer is documentation. You're often so busy with the burden of maintenance that the explanatory side gets sidelined. Especially as a library and the underlying language evolves. So I hope this is a priority focus during their reviews.

[1] https://news.ycombinator.com/item?id=14266462

2
z1mm32m4n 3 days ago 1 reply      
I really love seeing articles like this come out about Rust.

It's language design the way it should be: incorporating cutting-edge ideas from academia while still striving to cater to beginners; drawing on the strengths of other language communities to build out good libraries and solutions to package management; designing everything in the open, and constantly seeking feedback from their users.

It's a great blend of theoretical CS, HCI, computer systems, and application development, and it's always fun to hear about what they're up to.

3
kibwen 3 days ago 3 replies      
I'm going to use this opportunity to second the suggestion that people consider taking this year's Rust community survey: https://blog.rust-lang.org/2017/05/03/survey.html . I know it's mentioned in the post, but I figure the number of people reading the comments is much larger than the number who actually click through the link. :P And even if you don't or have never used Rust, we still value your feedback!
4
mintplant 3 days ago 1 reply      
May I suggest the bytes crate [0]? It's one of those small libraries providing a key building block (mutable and immutable byte buffers), and is a dependency of tokio-io and any other crate which implements tokio-io's Encoder/Decoder traits.

[0] https://crates.io/crates/bytes

5
ssdfe 3 days ago 2 replies      
I do wish cargo packages were namespaced à la GitHub. Squatting on usernames is one thing, but package and project names are often the only way you hear about something. "cargo react-svg" might be a terrible project or a good-quality one maintained by Facebook, but you wouldn't know from the name. Because of the name, it'll be at least somewhat downloaded if that's a common need. It makes grouping by org difficult too.
6
eriknstr 2 days ago 2 replies      
The article also links the state of rust survey. I visited the survey with intent to answer it but the first question "Do you use Rust?" only has the following three alternatives for an answer:

- "Yes"

- "No, I stopped using Rust"

- "No, I've never used Rust"

When making a survey the alternatives for the answers are very important. I feel that none of these alternatives apply to me. That's bad. Unfortunate because I would have liked to participate in the survey.

I've done a little bit of beginner programming in Rust in order to try and learn the language. However, I haven't yet used it to implement anything actually useful, so I wouldn't say "yes, I use Rust". It's been a while since I last did something in Rust, but I wouldn't say "no, I stopped using Rust", because to me that implies that I have decided Rust is not for me, which is not something that I feel. I want to use it; I just keep pushing it down on the list of things to do because other, more immediate desires and problems keep popping up.

7
stcredzero 3 days ago 3 replies      
> There's a countervailing mindset which, in its harshest terms, says the standard library is where code goes to die

This can be addressed in a language with sufficient annotation and good parser tools. In some future language, there should be a unification between the version control, the de-facto codesharing site, language/library versions, and syntax-driven tools to automatically rewrite code.

It should be possible to "publish" a language and its libraries such that any breaking changes will automatically be updated when you switch library versions. (This should also be applicable to Entity-Relation diagrams and Object-Relational mappings -- those can be treated as a versioned library.)

8
wyldfire 3 days ago 1 reply      
> The product of this process will be a mature core of libraries together with a set of API guidelines that Rust authors can follow to gain insight into the design of Rust and level up crates of their own interest.

Is there a plan to make these things statically checkable by rustc/rustfmt/rust-tidy or some such tool?

9
bpicolo 3 days ago 1 reply      
One thing I don't see here: it's difficult to integrate libs into e.g. parallel code when they don't derive all the various things (Copy). Will one goal be to set deriving standards for the libs being reviewed?
10
dasmoth 2 days ago 3 replies      
I realise this is being done with the best of intentions and will probably be a big net positive in practise, but something about the way it has been presented here rubs me up the wrong way.

In short, the blog post makes very little mention of the role of the primary author(s) of the libraries in question, beyond "Every two weeks, hold a library team meeting [...] with the author in attendance." While I imagine the reality will be quite different, this sounds an awful lot like "oi, you, code review in my office now!"

One of the attractions of working on open source is that it offers more scope for autonomy and individual recognition than the typical commercial software job. It's slightly alarming that this doesn't seem to be recognised here.

As I say, I'm sure the reality will be fine (and I remain very keen to give Rust a serious try when time permits), but the rather collectivist presentation here is a tiny bit off-putting.

11
bhickey 3 days ago 1 reply      
bstrie is coming by my place tomorrow, we're going to take a stab at overhauling `rand`. We must've been discussing this for two years.
12
Animats 3 days ago 9 replies      
Does the "Rust standard of quality" for these crucial crates include "no unsafe code"?

"Vec" currently needs unsafe code, because Rust doesn't have the expressive power to talk about a partially initialized array. Everything else with unsafe code is an optimization. Often a premature one. Maps should be built on "Vec", for example.

13
jhasse 3 days ago 1 reply      
My biggest gripe with the crate situation is that some of them require nightly, e.g. everything coroutine-related, AFAIK.
14
microcolonel 3 days ago 3 replies      
Rust sorta has a de-facto code style. It'd be interesting to add tooling to cargo to make it obvious how to comply with the evolved standard style for Rust.
15
modeless 3 days ago 3 replies      
Unfortunate name. I thought this was about libz aka zlib.
16
luck_fenovo 3 days ago 3 replies      
It is an unfortunate name. I don't know why, given how inclusive the Rust community usually is and how eager they were to remove master/slave terminology, that they would choose to announce this effort under the banner of Nazi war tactics.
19
I tried Haskell for 5 years metarabbit.wordpress.com
362 points by sndean  4 days ago   253 comments top 19
1
arnon 4 days ago 5 replies      
We have a code base of roughly 200,000 lines of Haskell code, dealing with high performance SQL query parsing, compilation and optimizations.

I only remember one situation over the past 5 years that we had a performance issue with Haskell, that was solved by using the profiling capabilities of GHC.

I disagree that performance is hard to figure out. It could be better, yes - but it's not that different than what you'd get with other programming languages.

2
choxi 4 days ago 8 replies      
If anyone's curious to try out functional programming, I would highly recommend Elm. I haven't been so excited about a language since I went from C to Ruby ten years ago, and Pragmatic Studios has a great course on it (I have no affiliation): https://pragmaticstudio.com/courses/elm
3
gjkood 4 days ago 15 replies      
Question to the Haskell experts here.

Is Haskell more academic in nature, or used heavily in production environments?

Is there an application sweet spot/domain (say Artificial Intelligence/Machine Learning, etc) where it shines over using other languages (I am not talking about software/language architectural issues like type systems or such)?

I have no experience with Haskell but do use functional languages such as Erlang/Elixir on and off.

4
hzhou321 4 days ago 7 replies      
> 1. There is a learning curve.

Time and experience can cover up anything. So this does not say much about Haskell other than it is all negative without time and experience.

> 2. Haskell has some very nice libraries

So does NodeJS and (on an abstract level) Microsoft Word. Libraries are infrastructures and investments that (like time and experience) can cover up any shortcomings.

> 3. Haskell libraries are sometimes hard to figure out

That is simply negative, right?

> 4. Haskell sometimes feels like C++

That is also negative, right?

> 5. Performance is hard to figure out

That is also negative, right?

> 6. The easy is hard, the hard is easy

That is a general description of specialty -- unless he means all hard are easy.

> 7. Stack changed the game

Another infrastructure investment.

> 8. Summary: Haskell is a great programming language.

... I am a bit lost ... But if I read it as an attitude, it explains a lot about the existence of infrastructure and investment. Will overcomes anything.

5
throwaway110034 4 days ago 5 replies      
I will never, ever use Haskell in production because of its default evaluation strategy, the wrongness of which was tacitly conceded not long ago with the addition of the strictness pragma (which only works per-module) to GHC.

I think it's especially telling that its community skews so heavily towards this blogger/monad-tutorial-writer dilettante demographic rather than the D. Richard Hipp/Walter Bright 'actually gets real work done' demographic. I know which of the two I'd rather be in. Haskellers are even worse than Lispers in this regard. For the amount of noise about Haskell, you'd expect to see high-quality operating system kernels, IDEs, or RDBMSes written in it by now. Instead its killer apps are a tiling window manager, a document converter, and a DVCS so slow and corruption-prone even they avoid it in favor of Git.

6
m-j-fox 4 days ago 8 replies      
> very hard to understand why a function could be useful in the first place

So true.

https://hackage.haskell.org/package/base-4.9.1.0/docs/Contro...

> mfix :: (a -> m a) -> m a

> The fixed point of a monadic computation. mfix f executes the action f only once, with the eventual output fed back as the input. Hence f should not be strict, for then mfix f would diverge.

But why tho?

7
jes5199 4 days ago 3 replies      
Haskell: where difficult problems are trivial, and where trivial problems are the subject of ongoing academic research
8
matt_wulfeck 4 days ago 5 replies      
> The same is not true of Haskell. If you have never looked at Haskell code, you may have difficulty following even simple functions.

Why is it that people talk about this almost as if it's a virtue of the language? As if the fact that it's so inscrutable proves that it's valuable, different, and on a higher plane of computing.

9
fizixer 4 days ago 0 replies      
Wow, really good writeup. And on point regarding '5 days' vs '5 years' approach.

And it really confirms my biases against the language.

10
dmix 4 days ago 0 replies      
> The easy is hard, the hard is easy

Probably the best single line description of Haskell.

The learning curve helps with the former (easy) part, the latter (hard) part contains some really brilliant ideas where you start to wonder why so many people still use other languages for everything, then you remember how hard the easy stuff can be...

11
nicolashahn 4 days ago 0 replies      
I don't think it takes 5 years to learn this stuff. Nothing stood out to me and I only used Haskell for 10 weeks for a single college course 2 years ago. It's all true though.
12
devrandomguy 4 days ago 1 reply      
> Types / Type-driven development Rating: Best in class

> Haskell definitely does not have the most advanced type system (not even close if you count research languages) but out of all languages that are actually used in production Haskell is probably at the top.

What are these other research languages, that have such incredible type systems? Do they usually have implemented compilers, or would they only be described in an abstract form? Can I explore them for fun and curiosity?

13
KirinDave 4 days ago 0 replies      
I strongly agree with most of this. I've been so pro-PureScript because I think we need a break from Haskell into something new but related. Haskell is great, but has so much baggage it's tough to enter into.
14
Maro 4 days ago 1 reply      
I have tried to use Haskell a number of times in the past ~5 years for small-scale projects and witnessed others try to use it, and most of these projects (actually I think all) have resulted in failure / rewrites in more simple/plain-vanilla languages like Python/Go. I keep thinking Haskell is interesting, but at some point I had to force myself to stop investing time in it, because I kept concluding it's not a good investment of time for a move-fast/practical person like me. I had to remind myself of this again when I read this post.

Some reasons I remember for the various failures, in no particular order:

- steep learning curve = experienced (in other languages) programmers having a tough time and not being productive for weeks/months in the new language, with no clear payoff for the kind of projects they're working on

- sometimes/often side-effects/states/global vars/hackyness is what I want, because I'm experimenting with something in the code; and if I'm not sure if this code will be around in 3 months, I want to leave the mess in and not refactor it

- in general, I think all-the-way pure, no-side-effects is too much; I think J. Carmack said something along the lines of: Haskell has good ideas which should be imported into more generally useful languages like C++ etc., e.g. the game state in an FPS should be write-once, which makes the engine/architecture easier to understand (but in general the language should support side effects)

- I found the type system to be cumbersome: I kept not being able to model things the way I wanted to and running into annoyances; I find classes/objects/templates etc from the C++/Java/Python/whatever world to be more useful for modeling applications

- when the spec of the system keeps changing (=the norm in lean/cont.delivery environments), it's cumbersome/not practical to keep updating the types and deal with the cascading effects

- weird "bugs" due to how the VM evaluates the program (usually around lazyness/lists) leading to memory leaks; when I was chasing these issues I always felt like I'm wasting my time trying to convince the ghc runtime to do X, which would be trivial in an imperative language where I just write the program to do X and I'm done

- cryptic ghc compile errors regarding types (granted, this is similar in C++ with templates and STL..)

- if it compiles it's good => fallacy we kept running into

- type system seemed not a good fit for certain common use-cases, like parsing dynamic/messy things like json

Working at Facebook for the last year and seeing the PHP/Hack codebase which powers this incredibly successful product/company has further eroded my interest in Haskell: Facebook's slow transition from PHP to Hack (=win) shows that some level of strictness/typing/etc is important, but it's pointless to overdo the purity. Just pick sth which is good enough, make sure it has outstanding all-around tooling, have good code-review, and then focus on the product you're building, not the language.

I'm not saying Haskell is shit, I just don't care about it anymore. I'm happy if people get good use out of it, clearly there are problem spaces that are compact, well-defined and correctness is super-important (like parsing).

15
nabla9 4 days ago 0 replies      
> Performance is hard to figure out

1000x changes in performance are not a problem if:

1. Performance of one module is not overly dependent on the code that uses it.

2. Performance never degrades by an order of magnitude with new compilers.

16
dlwdlw 4 days ago 1 reply      
The thing with enlightenment level ideas isn't that they aren't enlightening but that they attract hype. Individuals pretending to be enlightened if you will.

A scientific mindset as well as liberalism are also ideas where new proponents often want to draw a line in the sand to stratify people into superior and inferior. The original proponents were chasing a higher level of quality for all, but the need for social stratification weaponizes and gates ideas.

17
Safety1stClyde 4 days ago 2 replies      
> If you read an article from 10 years ago about the best way to do something in the language, that article is probably outdated by two generations.

Thank you, that is all I need to know about Haskell. I won't be learning Haskell then, in the same way that I won't have anything to do with C++. I don't have enough time to use these fashion-dominated and fad-obsessed programming languages.

18
davidrm 4 days ago 2 replies      
This has been the most disappointing blog post I have read in quite some time.
19
anorphirith 4 days ago 4 replies      
We should forbid clickbait titles on HN; it's an insult to the audience's intelligence.

Clickbait titles are so common we think they're normal titles... Here's how he did it: you create a craving for an answer, then you offer a solution for that craving. "Here's how it was" ==> that's the trick. Also, "here's how" should never be used in a title; we all know that the title's subject IS what you're going to talk about. A pre-clickbait-era title would have sounded like: "Learnings after using Haskell for 5 years".

20
Ngrok: Secure tunnels to localhost ngrok.com
461 points by pvsukale3  2 days ago   187 comments top 36
1
Lazare 2 days ago 5 replies      
A lot of people seem to be a bit confused about the point of ngrok, why it's useful, how much it costs, etc. Let me try and help out. :)

For me, the killer feature for ngrok is testing/developing webhooks. You install ngrok in your dev environment, start it up, then point the Stripe/Slack/whatever webhook you're working on at the generated URL.

ngrok will 1) proxy that request through to your dev environment 2) log the request 3) log the response 4) let you replay previous requests. It could not be more helpful for developing webhook handlers, and has literally saved me hours of work in the last couple of months alone.
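To give an idea of how little setup the local side needs, here's a hypothetical handler (a Flask sketch; the endpoint name and port are made up) that you'd expose with something like "ngrok http 5000":

    # run this, then in another terminal: ngrok http 5000
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/webhook", methods=["POST"])
    def webhook():
        event = request.get_json()  # payload from Stripe/Slack/etc.
        print(event)                # inspect it, or set a breakpoint here
        return "", 200              # ack so the sender doesn't retry

    if __name__ == "__main__":
        app.run(port=5000)

Point the third party's webhook setting at the generated ngrok URL plus /webhook and you can iterate on the handler without deploying anything.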

Finally, the free tier is all you need for that; it gives you a unique ngrok subdomain which changes every time you start the tunnel and some (generous) usage caps, both of which are fine for this usage.

People pointing out the potential security issues are correct, but that's an argument to be careful and think about what you're doing. Besides, what's your proposed alternative? Because most of the obvious ones have equally troubling issues.

2
tmp98112 2 days ago 0 replies      
Makes me sad to see all the negativity towards this service, which clearly works and serves a need some people have.

Yes, there are alternatives, but I hate when people jump to dismiss a service like this without fully considering what issues the proposed alternatives have. Obviously it is OK to mention the alternative options, but that can be done in a constructive way.

Let's celebrate the fact that somebody has built and released something and even seems to have a business model to support it. Instead of complaining about 5-20 bucks per month, try to figure out how you could channel some of your corporate multimillion IT budget to this fellow hacker. Wouldn't it be great if building and running this kind of small solutions would be actually a viable way of making living?

3
packetized 2 days ago 2 replies      
Literally the most terrifying service for any security-minded operations-focused person. Wonderful tool, interesting and useful in a dizzying array of aspects - but dear lord, I've had some real horrific moments when users told me that they installed it to allow access to their (private) repos for testing.
4
inconshreveable 2 days ago 14 replies      
Hiya there folks - I'm the creator of ngrok, happy to answer any questions
5
yeldarb 2 days ago 3 replies      
Happy paying ngrok user here.

Love it for developing anything using webhooks and also hybrid mobile apps (I have my app pull the JS from the dev box I'm working on via ngrok, without having to rebuild the app or deploy the code anywhere).

It significantly speeds up my workflow!

6
jeremejevs 2 days ago 5 replies      
Nice tool, but without committing to annual billing (which I don't intend to do, not for the first year of usage) it's $10 a month. My internet connection, my mobile plan, my Photoshop & Lightroom subscription, a huge collection of music (Spotify), 3K~5K movies and TV shows (Netflix), etc., all cost approximately the same. I mean, sure, $120 a year is pocket change for somebody using Ngrok professionally, but that's still super disproportionate compared to, say, a monster of a piece of software like Photoshop. I'd probably subscribe for $2, but otherwise, IMO, frp [0] on a $3 VPS [1] is better value, with the extra benefit of being FOSS and having zero limits.

[0] https://github.com/fatedier/frp

[1] https://www.scaleway.com/pricing

7
kyboren 2 days ago 1 reply      
PSA: if you want to provide remote access to a local service, but don't want the potentially-terrifying security implications, use Tor Authenticated Onion Services (AKA "HiddenServiceAuthorizeClient" [0]).

On a machine on the LAN, install Tor and set up an authenticated onion service, and point it to the desired endpoint. In order to access the service, clients need a manually-loaded encryption key (and Tor, of course). Without this key, nobody will be able even to discover your endpoint, let alone actually connect to it.
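As a sketch (directive names are from the v2 hidden-service docs; the paths and client name here are illustrative), the server-side torrc amounts to:

    HiddenServiceDir /var/lib/tor/my_service/
    HiddenServicePort 80 127.0.0.1:8080
    HiddenServiceAuthorizeClient stealth alice

Tor then writes the .onion hostname plus an auth cookie into the service directory, and each authorized client adds a matching HidServAuth line to its own torrc before it can even resolve the service.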

[0]: https://gitweb.torproject.org/torspec.git/tree/rend-spec.txt

8
yasn77 2 days ago 2 replies      
I prefer the implementation of http://localhost.run

To me it seems a lot cleaner: simply use SSH rather than downloading any app.

9
YPCrumble 2 days ago 2 replies      
A fantastic open-source javascript alternative is localtunnel (https://github.com/localtunnel/localtunnel). I've used this more often than ngrok after ngrok became a paid service.
10
saintfiends 2 days ago 0 replies      
Their 1.x is open source: https://github.com/inconshreveable/ngrok

This is a similar open source alternative:https://github.com/fatedier/frp

Both written in Go.

11
bespoke_engnr 2 days ago 0 replies      
I see some people arguing that "you should use a dev/staging environment with a public IP" instead of having ngrok tunneling traffic directly to your local dev box.

When you're editing HTML/CSS, you don't have to run a deploy script before checking how your markup renders. Ngrok gives people writing web services the same convenience when dealing with requests from a 3rd party on the Net.

It is the equivalent of saving your HTML/CSS source files and instantly seeing the changes when you reload your browser.

I just wrote a little proof-of-concept Alexa app that crawls HumbleBundle ('Bundled Goods', very much beta quality at the moment) and ngrok was invaluable for developing it quickly.

12
pfista 2 days ago 0 replies      
Ngrok is the coolest tool I use on a pretty consistent basis. Developing webhooks locally is usually what I use it for, and the web interface replay capability is amazing. The creator gave a great talk on why he built it and how it progressed over the years: https://www.youtube.com/watch?v=F_xNOVY96Ng
13
DAddYE 2 days ago 0 replies      
A lot of negativity here, but I found this tool super useful when developing for Alexa and testing out my scripts. Keep it going guys!
14
49531 2 days ago 0 replies      
I've used ngrok for a while now, and I love it. I used it just last night to test out some webrtc stuff I was doing. Was able to get friends from around the world on video chat served from localhost within seconds.

It's also super handy when building webhooks, you can use the unique URL to test out apis without having to deploy anything. I can't rave about it enough.

15
xg15 2 days ago 0 replies      
The title of this submission makes it sound like a selling-fridges-to-eskimos scam product - you need to read quite a bit to find out what's it actually about and that it solves (or simplifies) an actual use-case.

I think a better comparison is with DynDNS services: it sets up a public host connected to your own machine - but unlike DynDNS, the host doesn't point to your machine's IP directly. Instead, requests are routed through a proxy/tunnel, so your machine can be kept behind a firewall and is only available through the public host.

(I figure the proxy allows for some more neat tricks, such as restricting ports/URLs/etc., or holding requests open while your machine changes IPs.)

16
yannis 2 days ago 0 replies      
For a part-time-programmer mechanical engineer, it is such a gratifying experience to use ngrok. I first came across it a couple of years back. Great project, great development, open sourced and well written in Go. https://github.com/inconshreveable/ngrok/tree/master/src/ngr...

It takes two seconds to deploy an application in the evening from my kitchen table, check it on my mobile as well, and the next day access it from work and show it to co-workers.

Call it "usability" for Engineers!

17
leesalminen 2 days ago 0 replies      
I've been on the free tier for a while and have been meaning to upgrade to show support for such a great service. Seeing this post this morning reminded me. Upgraded for the year!
18
nbrempel 2 days ago 1 reply      
Ngrok is one of the most valuable development tools in my toolbox.
19
ausjke 2 days ago 1 reply      
Never used it, what's the difference between ngrok and using DMZ with port-forwarding? are they the same thing? What's the technical advantage other than it is easy to use? I can port-forwarding easily on my router to expose whatever port to the public, why do I need ngrok?

With a DDNS + Port-forwarding you can easily have what Ngrok provides? or am I missing something?

20
samcheng 2 days ago 1 reply      
It's possible to roll your own ngrok clone via SSH tunnels, a publicly-available server somewhere, and autossh. This is basically ssh-tunnel-as-a-service.
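For the curious, the core of such a clone is a single reverse forward (hostnames here are placeholders):

    # expose local port 3000 as port 8080 on the public server
    ssh -N -R 8080:localhost:3000 user@your-vps.example.com

    # same thing, but reconnects automatically when the tunnel drops
    autossh -M 0 -N -R 8080:localhost:3000 user@your-vps.example.com

To make the forwarded port listen on the server's public interface rather than just loopback, you also need GatewayPorts enabled in its sshd_config.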
21
JohnnyConatus 1 day ago 0 replies      
Love ngrok. The ngrok npm package makes it easy to put into a local dev startup, too.
22
stevemk14ebr 2 days ago 2 replies      
3 important questions:

1) My university blocks LogMeIn Hamachi, which is the main tool I've tried to get around hosting behind a NAT. Will this likely be blocked too, or is it not possible to tell without trying?

2) Are there any costs associated with this? Do I ever need to pay?

3) Does the person connecting to my server also require a special client, or does this appear to them as any standard connection would?

23
vhost- 2 days ago 1 reply      
I really like ngrok. I use it a lot. I just really dislike how the TCP tunnels work. With HTTP you get a unique subdomain which makes it harder for people to just scan and connect. With TCP, it's always 0.tcp.ngrok.io, so you can just scan that domain and connect to anything that's open.
24
joantune 2 days ago 0 replies      
I have been using this for quite a while and it's really useful. Even though I have VPSs available, where I could make an SSH tunnel, this is simply way more convenient, so I end up using ngrok a lot for development.
25
roylez 2 days ago 1 reply      
I used to use Ngrok, then I discovered ultrahook which gives me a persistent endpoint, for free.
26
sandGorgon 2 days ago 0 replies      
I'm a happy paid user. If I had one request, it would be different pricing and better management for groups of users.

People like me would like to buy 5-10 licenses and manage them centrally.

Define shared endpoints and individual endpoints, etc.

27
nejdetckenobi 2 days ago 0 replies      
Long-time ngrok user here. I've used it several times to show my prototypes to others. It's simpler than deploying on Heroku (or anywhere, actually), and is without restrictions, of course, because you use your own hardware.
28
adamson 2 days ago 0 replies      
I've been using this for web server testing since 2012. It's probably the paid tool that's given me the most bang for my buck in terms of hours saved
29
andkon 2 days ago 0 replies      
I love this product page so much! Great illustrations, a wonderful balance of levity and clarity about ngrok's purpose.
30
stefanhuber 2 days ago 0 replies      
Use a hidden service! I manage many intranet servers over Tor. You have no problems with NAT or firewalls, and it is free!

OK, it is slower, but for many things, like SSH, it's great...

31
andreiw 2 days ago 0 replies      
Cool. Where's the ARM64 Linux build? For ARM servers, that's more important than the legacy 32-bit ARM builds.
32
prodicus 2 days ago 0 replies      
Awesome! OT, but can anyone suggest some free tools I can use to make similar diagrams? They look pretty cool :)
33
jonthepirate 2 days ago 0 replies      
docker run -ti -e 'PROXY_TO_ADDR=http://www.example.com/' jonthepirate/ngrok:latest

^ that command will expose a local Docker-powered web server, assuming www.example.com is the local DNS name on your Docker network. Enjoy.

34
PacketPaul 2 days ago 2 replies      
How much is the paid service? You could roll your own for around $15/month using an Amazon EC2 T2 micro instance.
35
stummjr 2 days ago 0 replies      
Ngrok is just awesome! A huge shout out to the developers!
36
zAy0LfpBZLC8mAC 2 days ago 1 reply      
Yet another needless cost of not switching to IPv6 already ...
21
Recent version of Handbrake download infected with malware handbrake.fr
380 points by zalmoxes  2 days ago   209 comments top 22
1
rasmi 2 days ago 2 replies      
Something similar has happened with Transmission's download DMGs being replaced on their servers [1] (twice! [2]) in recent memory.

[1] https://news.ycombinator.com/item?id=11234589

[2] https://news.ycombinator.com/item?id=12403768

2
vomitcuddle 2 days ago 6 replies      
I'm going to take this opportunity to plug my favourite open source project - the Nix package manager[1].

It can work as a universal homebrew replacement (works on MacOS, Linux, WSL and can be easily ported to most BSD variants), comes with a huge collection of packages[2] and produces its own reproducible source builds. Like homebrew, it's a hybrid source and binary based package manager (if you haven't done anything to modify the build, it will likely be downloaded from a cache of pre-built binaries[3]). Unlike something like homebrew-cask, it will never download the pre-built .dmg file from the developer's website - with the obvious exception of proprietary software.

It can also work as a great AUR/ports replacement on Linux systems. Fedora doesn't provide FFmpeg or an up-to-date version of a package you need? No problem, just get it from Nix! All the advantages of a rolling release distro, without actually having to use one.

Due to its functional nature, it comes with a wealth of advantages over homebrew and other traditional package managers[4]. Once you get past the learning curve, creating your own packages or modifying existing ones is a breeze. It can create disposable development environments with dependencies of whatever project you're working on, without having to install them in your system or user profile! Check out the Nix manual[5] for more information.

It's so flexible that people have built a Linux distribution where your entire system configuration is a Nix derivation (package) - with atomic upgrades, rollbacks, reproducible configuration and much more! [6]

[1] https://nixos.org/nix/

[2] https://nixos.org/nixos/packages.html

[3] https://hydra.nixos.org/

[4] https://nixos.org/nix/about.html

[5] https://nixos.org/nix/manual/

[6] https://nixos.org/nixos/about.html

3
abalone 2 days ago 3 replies      
Did the author not sign the binary?[1] Why not?

Is it really just because of the $99/yr developer program fee? And if so.. is it starting to sound like a better value now?

[1] https://developer.apple.com/library/content/documentation/Se...

4
oceanghost 2 days ago 5 replies      
God dammit. I downloaded this a few days ago and sure enough, I'm infected. What are reasonable mitigation steps to prevent this in the future? I noticed handbrake said it must "install additional codecs" which is mighty odd, but I didn't think much of it at the time.

Is there a security product on OSX that would have prevented this?

5
asmosoinio 2 days ago 3 replies      
"Further Actions Required

Based on the information we have, you must also change all the passwords that may reside in your OSX KeyChain or any browser password stores."

That sounds like a very large exercise...

6
theunixbeard 2 days ago 0 replies      
Looks like the XProton malware is a RAT.

Full description here:

https://www.cybersixgill.com/wp-content/uploads/2017/02/0207...

7
joshua_wold 2 hours ago 0 replies      
Did this affect Handbrake installs that were checking for updates or only newly downloaded installs?
8
plg 2 days ago 1 reply      
I don't understand how I'm supposed to verify the checksum if I've already installed (and run) the HandBrake.app ... and long since deleted the .dmg installer file ????
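
For anyone who does still have the installer around, the check itself is quick; here is a minimal Node sketch that prints a file's SHA-256 for comparison against the published sum (the filename is a placeholder, and shasum -a 256 from the terminal does the same job):

  // Stream the file through Node's crypto module and print its SHA-256.
  const crypto = require('crypto');
  const fs = require('fs');

  const hash = crypto.createHash('sha256');
  fs.createReadStream('HandBrake-1.0.7.dmg') // placeholder path
    .on('data', chunk => hash.update(chunk))
    .on('end', () => console.log(hash.digest('hex')));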
9
soraminazuki 1 day ago 2 replies      
I think the main concern here is the state of GUI apps on macOS and Windows. Popular apps on these platforms are mostly closed-source, even for personal side projects. For the few open source GUI apps, no package manager provides support for building GUI apps from source. I wish package managers would make it easier to build GUI apps from source, or even provide their own binary packages for GUI apps. I really feel reluctant to install most GUI apps on macOS and Windows because I can't trust that the build/distribution platforms for these apps are properly secured.
11
leonroy 1 day ago 0 replies      
Yikes. Missed this by 1 day. I updated Handbrake to 1.0.7 on 1st May to compress a bunch of videos. Was a little surprised to see it wasn't signed but after scanning it with ClamXav I figured I was safe and installed it on every Mac in the house so I could crank through my project faster.

If I understand correctly, even if I had in fact downloaded the compromised version, ClamXav wouldn't have detected the malware?

This kind of stuff is extremely worrying and really strengthens Apple's case for signed application binaries across the board.

Are package managers like Homebrew and MacPorts not also susceptible to this kind of binary poisoning?

12
noobermin 2 days ago 3 replies      
Package managers on Linux distros, to use a point of comparison, usually check checksums of downloads for security purposes during installation. As for macOS users: I guess I understand they want to use software not blessed by Apple, but isn't Homebrew or whatever supposed to do the same thing?
13
JohnTHaller 2 days ago 1 reply      
There's a quick analysis of it here: https://objective-see.com/blog/blog_0x1D.html

Along with the fact that Apple updated the built-in sorta-antivirus in macOS to detect it. But it only detects the SHA-1 hash of the original DMG. If someone rebuilds the DMG, or bundles the malware with another app and builds a DMG, it'll bypass the macOS sorta-antivirus.

14
atmosx 2 days ago 2 replies      
I can't believe this. I literally downloaded Handbrake like 45 minutes ago! Luckily I got the proper version, but boy oh boy, it was a close call. I think I'll reinstall ClamXav on all my Macs.
15
nly 2 days ago 1 reply      
Aren't the dmgs digitally signed?
16
PhantomGremlin 2 days ago 0 replies      
What about creating different users on a MacOS system to do different things? Wouldn't this mitigate exploits like this?

Why shouldn't I create a "Tommy Transcoder" user on my system? That user would have the Handbrake app in his own Application folder. I assume that Handbrake will run correctly without needing to be installed in the system /Applications?

I already do this for a few items of software. Maybe it should be SOP to do this for most/all software?

Or what about installing most apps into virtual machines and using VMWare to run them?

I do recognize that such an approach couldn't be used universally. E.g. VMWare itself must run on the native machine, and with elevated privileges.

I'm interested in "defense in depth". No single technique can defend against all possible exploits.

17
riobard 1 day ago 0 replies      
The SHA hash of the dmg file is useless. Who still keeps the dmg file? I need a way to verify the app itself is compromised.
18
nnutter 2 days ago 2 replies      
Didn't this also happen somewhat recently? How can this be prevented? The window could be reduced by actively monitoring mirrors? Could BitTorrent help mitigate this, because the torrent file validates data and isn't under the control of the same parties?
19
HedleyLamar 2 days ago 1 reply      
How does this happen? Even if installed, doesn't Mac's secure operating system prevent user programs from accessing passwords?
20
Angostura 1 day ago 0 replies      
The most important bit of the advice - change all your passwords in keychain.

To coin a phrase - oh shit

21
mikewhy 2 days ago 4 replies      
> The Download Mirror Server is going to be completely rebuilt from scratch.

Am I alone in thinking that this is irresponsible? Why not move releases to github?

Why aren't you going to start signing macOS binaries? I find this offensive. Thanks for potentially compromising users because you couldn't be arsed to pay for a certificate.

22
kefka 2 days ago 1 reply      
Sigh... This could be somewhat repaired by making a beta release and distributing it to devs and testers. Once confirmed good, rename the file and release via IPFS. The key here is that if multiple devs did this, the hashsum would prove the file being shared.

Any one client that's been hacked or infected would show up as an improper hash and be easily spotted.

22
Build Yourself a Redux zapier.com
389 points by jdeal  3 days ago   154 comments top 18
1
sergiotapia 3 days ago 16 replies      
If you're looking for something easier to use to help you manage state in your React apps look no further than Mobx. It's pretty incredible how stupid easy it is to use, it kind of feels like cheating.

https://stackshare.io/mobx

I've tried to use Redux a couple of times but I just spent way too much time in plumbing code. Code I really don't care about. To be frank this code looks terrible (no fault of the author):

  const handlers = {
    [CREATE_NOTE]: (state, action) => { ... },
    // ... a thousand more of this
  }
Not to mention, I never once felt happy working with Redux. I'm all about developer UX and working with tools that feel nice to use.

With Mobx you just declare a variable as Observable, then mark your components as Observers, and voila: You have crispy, no-plumbing reactivity.

In a way it kind of feels like Meteor where you save data on the database and it replicates everywhere it's being used.
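
To make the observable/observer pattern concrete, here is a minimal sketch (assuming mobx and mobx-react as dependencies; the store and component are illustrative):

  const React = require('react');
  const { observable } = require('mobx');
  const { observer } = require('mobx-react');

  // Any component wrapped in observer() re-renders when the
  // observable state it reads changes; no plumbing code needed.
  const store = observable({ count: 0 });

  const Counter = observer(() =>
    React.createElement('button',
      { onClick: () => store.count++ },
      `Clicked ${store.count} times`));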

2
rasmi 3 days ago 3 replies      
If anyone is interested in learning this content through an in-depth video tutorial, I highly recommend Dan Abramov's two-hour "Getting Started with Redux" [1] and the excellent follow-up "Building React Applications with Idiomatic Redux" [2]. This is a great article, but learning Redux more thoroughly directly from the creator himself may be of interest to some!

[1] https://egghead.io/courses/getting-started-with-redux

[2] https://egghead.io/courses/building-react-applications-with-...

3
joshwcomeau 3 days ago 2 replies      
TFW you see a thread about Redux, and you just _know_ that the comments are going to consist of nitpick complaints.

Contrarian opinion (apparently): Redux is a lifesaver when it comes to complex applications. There's a little more ceremony, but a lot more organization and a lot fewer bugs.

4
dgregd 2 days ago 3 replies      
I've seen and understand Redux TO DO examples.

However, I develop an enterprise CRM app. In the DB there are 200k client records and 500k sales call records. It is implemented as a standard Ruby on Rails / Postgresql web app. It works quite well. It is also pretty straightforward to implement such an app in a Java/PHP MVC framework.

Let's say I would like to implement the UI using React/Redux. How should I start? For example, the app has a calendar month view; for each day there are 20 sales calls. So the month view has 400 sales calls and client data displayed (date, time, client name, target group).

Do I have to put 400 sales calls and 400 clients' data into a Redux store to display the calendar month view? What about client data search results and pagination? In just a few clicks a user can display hundreds of client records (thousands in the case of the results map view). Do they belong in a Redux store? If a user modifies one sales call record, how is it persisted to the central DB? What about edge cases where some uniqueness conditions have to be checked at the central DB level?

Rails covers all the things needed to implement my medium CRM app. When I read Redux TO DO tutorials I have a feeling that they cover just 10% of what is needed to implement a full CRM app. Could you please direct me to Redux examples / tutorials on how to implement a full enterprise database app (SugarCRM scale)?

PS. To downvoters: please write a few words about what is wrong with my questions so I can learn what is appropriate to post on HN.

5
tarr11 3 days ago 1 reply      
I like this article. It's probably a good idea to build your own simple todo app using Redux from scratch first, and then follow this guide. It would make a lot more sense.

Using this as a place to put some thoughts on Redux after having picked it up over the past few weeks.

I have been spending the last few weeks re-writing an "offline-first" mobx React app into Redux, after it started spinning out of control and becoming unmanageable. Mobx provided a lot less guidance on how to structure a larger app (read: more than a few pages with non-trivial remote interactions).

Like React itself, it took me a few weeks to grok the philosophy and architecture, and internalize the key components so that I wasn't getting lost every few lines of code.

I had evaluated Elm earlier in the year but passed on it, as there were some interop issues, and the component ecosystem wasn't as mature as react.

Redux has had the effect of organizing my code and helping me reason about the structure, as well as providing time travel for free.

I found Typescript to be very helpful when building with Redux, specifically when I did something wrong and had to refactor.

I've also been pleasantly surprised at the middleware ecosystem, and how useful and easy to configure it has been.

6
twfarland 3 days ago 1 reply      
You can replace redux with any FRP library. Your state is a signal/stream/whatever that folds over an initial state with a signal/stream/whatever of actions/messages. Your top level view component should listen and render based on that. Example: https://github.com/twfarland/sprezzatura-acto-mario
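
A sketch of that fold, with a tiny hand-rolled stream standing in for a real FRP library (all names here are illustrative):

  function createStream() {
    const listeners = [];
    return {
      push: value => listeners.forEach(fn => fn(value)),
      subscribe: fn => listeners.push(fn),
      // scan() folds a reducer over the stream, emitting each new state.
      scan(reducer, seed) {
        const out = createStream();
        let acc = seed;
        this.subscribe(value => out.push(acc = reducer(acc, value)));
        return out;
      },
    };
  }

  const actions = createStream();
  const states = actions.scan(
    (state, action) => action.type === 'INC' ? { count: state.count + 1 } : state,
    { count: 0 });

  states.subscribe(state => console.log(state)); // the "view" listens here
  actions.push({ type: 'INC' });                 // logs { count: 1 }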
7
msoad 3 days ago 1 reply      
I have to deal with Redux at work and I absolutely hate how much code I have to write just to flip a boolean in my React component!

I used MobX on side projects and I absolutely love it! I might be biased, but I think MobX is so much better for any size of project. Redux is just too good at marketing; their "Hello world" looks very, very interesting and reasonable, but it doesn't scale. When you have multiple people working on the same codebase it becomes a hot mess!

If you're starting a project, give MobX a shot and see how it goes.

8
arbesfeld 3 days ago 0 replies      
One advantage of Redux that people tend to miss is the serializable state object which is incredibly helpful for local logging and remote debugging. It's the reason we built LogRocket (though now we have a bunch of other features for general web apps).
9
antjanus 3 days ago 0 replies      
I wrote a similar article last month on the same topic but it's much more simplified with CodePens to detail the way:

https://antjanus.com/blog/web-development-tutorials/front-en...

It covers only Redux and not React which I think is a little more useful. It DOES cover Enhancers.

Anyways, I've seen this article circulate and I'm glad people are interested in the inner workings of Redux!

10
emehrkay 3 days ago 3 replies      
Am I missing something new with object literals or is this an error:

 window.state.notes[id] = { id, content: '' };
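
For reference, that line is valid ES2015 shorthand property notation rather than an error; a bare identifier expands to a key/value pair:

  // These two assignments are equivalent:
  window.state.notes[id] = { id, content: '' };
  window.state.notes[id] = { id: id, content: '' };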

11
acemarke 3 days ago 3 replies      
This is a great article! As I commented on the post itself when it was published, I keep a big list of links to high-quality tutorials and articles on React, Redux, and related topics, at [0]. That includes a section of "Redux implementation walkthroughs" at [1]. This is probably the best article of that type that I've seen. It not only covers the core of Redux, but also builds miniature versions of Redux middleware and the React-Redux `connect` function. I already added it to my list, and definitely recommend it.

Readers may also be interested in my Redux addons catalog at [2], which includes links to hundreds of Redux middleware, utilities, and other useful libraries. That includes multiple ways to batch dispatching of actions.

[0] https://github.com/markerikson/react-redux-links/blob/master...

[1] https://github.com/markerikson/react-redux-links

[2] https://github.com/markerikson/redux-ecosystem-links
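
For a taste of what that "core of Redux" amounts to, here is a minimal sketch in the article's spirit (it follows the shape of Redux's createStore API, but it is an illustration, not the article's or Redux's actual code):

  function createStore(reducer, initialState) {
    let state = initialState;
    const listeners = [];
    return {
      getState: () => state,
      subscribe: listener => listeners.push(listener), // no unsubscribe in this sketch
      dispatch(action) {
        state = reducer(state, action); // the reducer computes the next state
        listeners.forEach(listener => listener());
        return action;
      },
    };
  }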

12
dclowd9901 3 days ago 0 replies      
Wrote a similar piece last year if you like this kind of thing. I love learning stuff by implementing it myself:

https://medium.com/@davedrew/lets-write-redux-975609b0358f

13
neebz 3 days ago 0 replies      
Shameless Plug: I gave a talk last year explaining similar concepts https://github.com/neebz/react-redux-presentation
15
floatboth 3 days ago 10 replies      
"Redux is a simple library" woah woah stop right there. How is this:

  const handlers = {
    [CREATE_NOTE]: (state, action) => { ... },
    // ... a thousand more of this
  }
simple? This looks horrible. Every time you want to work on code that modifies data, you have to switch to the one file where you keep all the data-modifying functions? Seriously?!

Freezer https://github.com/arqex/freezer is a much, much better experience. You just shove immutable objects down the component tree, and they just come with methods that modify "them" (by actually changing the data tree). It has an event system as well for when you actually want to centralize actions. Works great with Polymer, by the way!

16
hippich 3 days ago 2 replies      
I don't fully get Redux yet, but from my understanding it is sorta like an app-wide message bus with message handlers. Is my understanding correct?
17
antouank 3 days ago 1 reply      
Or do yourself a favour and use Elm.
18
mal34 3 days ago 1 reply      
Making simple things complex!!
23
Long-dormant bacteria and viruses in ice are reviving as climate warms bbc.com
291 points by raulk  3 days ago   151 comments top 30
1
zbjornson 3 days ago 2 replies      
Soil-borne anthrax is very common; there's in fact "anthrax season" when (usually small) outbreaks happen among wild and farmed animals, from North America to southern Africa, Russia to China and India. (Search for anthrax on https://www.promedmail.org/, there are 37 reports in 2017 so far.) That a thawed carcass was infected is an interesting anecdote as far as the mode of the transmission, but it isn't surprising. That is, it's not a disease that we've eradicated that is coming back to haunt us.
2
secfirstmd 3 days ago 1 reply      
The comments on this article are fascinating, and are why I love reading Hacker News. Point-vs-point debates about interesting scientific theory, but in a way that an average person like me can understand. I used to read about this kind of conversation happening in the late 19th century in the bars of Royal Science institutions in Europe - it feels a little bit like that. :)
3
mickrussom 3 days ago 4 replies      
My wife always bugs me about the cold. I tell her operating rooms are cold. Heat = entropy, disease vector increase. Any thawing of permafrost will start to revive dormant diseases, viruses and flora. We might as well complete the trifecta and start looking for ancient DNA and revive long-gone species for the win. She always tells me cold and drafts = sick, but if you look at where the percentage of currently diseased live - it's never in the north - always in tropical places where diseases, worms, and parasites have a field day. There will be a day when she'll be begging for the cold :)
4
arctangent 3 days ago 1 reply      
I'm surprised that Fortitude [1] hasn't been mentioned yet.

It's a fairly good TV show on this topic.

[1] https://en.wikipedia.org/wiki/Fortitude_(TV_series)

5
enknamel 3 days ago 4 replies      
There are quite a few sci-fi novels and one show I saw that feature nightmare scenarios based off of thawing ice. I seriously doubt we will see something catastrophic though. It's been a while since I took bio but bacteria/viruses from thousands if not millions of years ago will most likely not be able to bind to our cells.
6
fhood 3 days ago 3 replies      
I'll just add this to my list of things that I probably should give some thought, but won't, because the top of the list includes the refugee crisis, income inequality, and all the less Crichtonesque consequences of climate change.
7
Houshalter 3 days ago 0 replies      
I'm not saying this isn't a threat. But it doesn't seem as scary as the title or comments are making it out to be. The article admits that most bacteria can't survive this long frozen. Only certain types that have adapted to surviving the cold long term by forming spores. It only mentions one bacterium that harms humans that can do that, botulinum. Which isn't contagious and is only a problem with improperly canned food. And anthrax, which is deadly but fortunately not very contagious.

Viruses are more of a concern, but the article doesn't make a great case there either. They mention that scientists found a smallpox victim but were unable to recover a complete smallpox virus, just fragments of its DNA. The scariest thing recovered was Spanish Flu, which fortunately many people have already been vaccinated against: http://www.reuters.com/article/us-flu-vaccine-idUSTRE65E65S2...

8
koolba 3 days ago 0 replies      
Reminds me of the anthrax outbreak in Russia:

http://www.npr.org/sections/goatsandsoda/2016/08/03/48840094...

9
btilly 3 days ago 0 replies      
Diseases from early humans are an interesting point of worry. What tends to make a deadly disease deadly is that it is able to infect us, but is poorly adapted to us. A disease that is adapted to a close relative of ours is likely to both infect us and not be well-adapted to modern humans.

Which could be really, really bad.

10
jondubois 2 days ago 2 replies      
I'm not a microbiologist, but when accounting for evolution, you'd think that a microbe which was locked away in ice for millions of years would be maladapted to modern animals - particularly in terms of transmission between hosts.

I would be more afraid of pathogens that were frozen more recently.

11
chiefalchemist 3 days ago 1 reply      
A legit fear, but it could be a positive.

When you consider, for example, the Zika virus and its effect on the human brain, perhaps - on the other hand - we have a virus to thank for making homo sapiens more intelligent than our then "competition"?

Bacteria could be a positive as well.

Of course it's a roll of the dice either way. C'est la evolution.

12
cmdrfred 3 days ago 0 replies      
Conversely modern bacteria and viruses are going to sleep in the Antarctic.

https://www.nasa.gov/feature/goddard/nasa-study-mass-gains-o...

13
whoisstan 2 days ago 0 replies      
Read the 'Drowned World' by J.G. Ballard, humans start having ancient dreams.

'Just as psychoanalysis reconstructs the original traumatic situation in order to release the repressed material, so we are now being plunged back into the archaeopsychic past, uncovering the ancient taboos and drives that have been dormant for epochs. Each one of us is as old as the entire biological kingdom, and our bloodstreams are tributaries of the great sea of its total memory.'

The Drowned World, J.G. Ballard, Millennium 1999, p. 41.

14
jsz0 2 days ago 0 replies      
This sounds like a very manageable threat. We already have systems in place to identify and control the spread of diseases, and they're already equipped to deal with new or rare diseases. This will be an added burden, but probably no more difficult than dealing with something like Ebola. Likely easier, due to the geography and population density involved.
15
raulk 3 days ago 3 replies      
Revel with me in the thought that we humans think we are the center of the world.

That Earth is made for us and we have the power to shape it in whichever way we wish. That we own the planet.

But, in reality, we don't. We are here only temporarily. There are powerful organisms hiding out there who are perennial.

And they act like guards. If we push it too far, we set off the right conditions for them to spring to life, and restore balance on Earth by annihilating the threat, i.e. us.

What a time to be alive!

16
DanBC 3 days ago 4 replies      
> From the bubonic plague to smallpox, we have evolved to resist them

We have antibiotics for plague, and vaccination / eradication for small pox. That doesn't feel like we evolved any resistance. A couple of thousand cases of plague are reported to WHO each year.

17
tomcam 2 days ago 0 replies      
This has been happening for at least a couple of hundred years. Mammoth bodies have been exposed in Siberia since at least the 18th century and probably much further back than that.
18
ccvannorman 3 days ago 0 replies      
I bet the CIA is sweating about that Winter Soldier that froze in the 60s.
19
stuffedBelly 3 days ago 0 replies      
Reminds me of this horror flick I watched a couple of years ago

The Thaw: http://www.imdb.com/title/tt1235448/

20
muninn_ 3 days ago 0 replies      
I guess I prefer that they stay there... but I can't help but say that it seems fascinating that there are these dormant antique lifeforms just waiting to be discovered. Hope they don't kill us.
21
minikites 3 days ago 5 replies      
Over the past 5-10 years I've gone from being mostly optimistic about our collective future to quite pessimistic. It's looking increasingly likely that we're unable to solve problems like climate change that require mass cooperation and that too many people are too selfish and short-sighted to allow for collective action. And in the (hopefully unlikely) event that industrial society collapses (from a pandemic, mass political instability, etc) any surviving humans won't be able to restart it because we've already used all of the "easy" fossil fuels. This is pretty much our only shot at making civilization work.
22
dangayle 3 days ago 1 reply      
Drilling in the frozen north and releasing an ancient monster is the subject of the first episode of the revamped Mystery Science Theater on Netflix.
23
franzwong 3 days ago 1 reply      
Whether the climate change is due to humans or nature, the fact is the temperature is getting higher, and that is the problem.
24
zoom6628 3 days ago 0 replies      
Nature has given us a whole CDC vault for free. Seems like a golden moment for science, akin to the corpse in the Alps.
25
yourthrowaway2 3 days ago 0 replies      
We're all gonna die!
26
rdxm 3 days ago 1 reply      
We've already been de-selected. Now it's just a matter of time....
27
rglover 3 days ago 0 replies      
What a time to be alive.
28
graycat 2 days ago 2 replies      
> "as climate warms"

How much warming, in degrees F, C, or K, since when, measured how, by whom, published where, compared with what other measurements?

Why ask these questions? For one, AFAIK "climate warms" essentially has not been happening to any significant extent for about 20 years and, really, since the coldest of the Little Ice Age -- apparently there was some cooling from 1940 to 1970, so some warming since then. During the Little Ice Age, there was ice skating on the Thames River in London.

Reference for temperature over the past 2000 years? Okay:

Committee on Surface Temperature Reconstructions for the Last 2,000 Years, National Research Council, Surface Temperature Reconstructions for the Last 2,000 Years, ISBN 0-309-66264-8, 196 pages, National Academies Press, 2006, available at

http://www.nap.edu/catalog/11676.html

In the Medieval Warm Period, did all the ice and permafrost melt and make everyone sick? Well, if it all melted, then what's in the ice and permafrost now is not so old and maybe safe. But I didn't hear that diseases released by the melting ice and permafrost made lots of people sick during the Medieval Warm Period.

My guess: The BBC is pushing made-up, cooked-up, stirred-up, gang-up, pile-on, continually reinforced fake, nonsense scare stories to get continuing eyeballs, ad revenue, and British government subsidies.

Not reading it.

For some simple evidence: The Little Ice Age really was significantly cooler, but there is no evidence that it was preceded closely by lower concentration of CO2 -- the lower temperatures had some cause other than lower CO2.

The Medieval Warm Period really was warmer, but there is no evidence that it was preceded closely by higher concentration of CO2 -- there must have been some cause other than higher CO2.

It appears from ice core samples and more that the temperature of the earth has varied significantly over at least the last 800,000 years. Maybe CO2 has had something to do with warming since the Little Ice Age, and otherwise it looks like the causes of warming/cooling had little or nothing to do with CO2.

It appears that people who talk about warming are blaming CO2, in particular from human activities, and from what I've seen in the temperature records for the past 800,000 years, the only time when CO2 might have caused significant warming was since the Little Ice Age -- even if we accept this, there's the problem of the cause of the cooling from 1940 -- 1970. Otherwise the temperature changes had other causes -- so, my guess is that the temperature change since the Little Ice Age also has some cause(s) other than CO2.

Is CO2 a greenhouse gas, that is, absorbs Planck radiation from the surface of the earth? Yup, absorbs in three bands in the infrared; since we can't see CO2, it does not absorb visible light. So, is there a warming effect from that CO2 absorption? Well, maybe, but water is also a greenhouse gas so that maybe the radiation would be absorbed by water instead of CO2. But even if CO2 is the only way that infrared radiation can get absorbed, it's still not clear how much warming, net, all things considered, it would cause. E.g., lighting a match will also warm the earth.

Is there more CO2 in the atmosphere now? Apparently the concentration in some places is 400 ppm (parts per million) -- IIRC that would be in Hawaii, right, near a volcano, and volcanoes are supposed to be one of the major sources of CO2. Also there's CO2 in the ocean, and warm water absorbs less CO2 than cold water, so maybe recently some of the ocean around Hawaii is warmer and the source of the Hawaii CO2.

I've seen no good presentations of CO2 levels over time with explanations of the causes.

I've seen no good data on CO2 sources, sinks, or flows.

E.g., first cut, how much CO2 is in the atmosphere now? Then, how much CO2 enters the atmosphere from human activity each year now? If the ocean warms a little, say, from an el Nino, how much CO2 is released into the atmosphere? At what rate do green plants take CO2 from the atmosphere? My guess is that CO2 from human activities is comparatively tiny, that the basic data would show this, and this is why we don't get the basic data.

I see lots of articles on CO2 and warming, but I don't see articles with even this basic, first cut data.

So, to me, the articles don't really have a case since if they did they would make their case. In the articles I see efforts to grab people emotionally but darned little data to convince people rationally.

BBC: "as climate warms" is where you lost me.

Or with this logic, we could write even more shocking articles:

As the next galactic gamma ray burst hits the earth, all the atmosphere will be blown off the earth, and everyone will die. Moreover, since the gamma rays will come at the speed of light, we will never be able to see them coming. Now, get scared. Get afraid. Be very afraid. Watch the BBC for hourly updates 24 x 7 for the rest of your life to keep up on just what will happen as the next gamma ray burst hits the earth. Same for marauding neutron stars, highly magnetic neutron stars, and black holes. Read BBC tomorrow for the results as the next black hole hits the earth. For more, the expansion of the universe is slowing down, and we may be in a big crunch and all compressed to a point -- see the BBC next week for the details when this happens. Back home, see what will happen when Yellowstone blows again -- last time it put ash 10 feet deep (it's rock, and enough to crush nearly any roof) 1000 miles downwind or some such. Remember, those bacteria are down there, fighting every second among themselves, evolving, just to come out and kill everything else, including YOU!!!

29
bcaulfield 3 days ago 1 reply      
Oh. Goody.
30
akartaka 8 hours ago 0 replies      
Pleistocene Park, the project to keep permafrost in Siberia, at Kickstarter: https://www.kickstarter.com/projects/907484977/pleistocene-p....
24
Open-source chip RISC-V to take on closed x86, ARM CPUs computerworld.com.au
351 points by jumpkickhit  2 days ago   146 comments top 16
1
wyldfire 1 day ago 3 replies      
I thought that microprocessor production and design was fraught with risk of infringing on other designers' patents (even for original ISAs). I can see that industry heavyweights have arrived to support RISC-V, so hopefully that comes with a team of professors/lawyers that could defend them. But, why now? Why couldn't this have happened sooner? Didn't Sun try to create an open SPARC processor design? What does RISC-V have that it didn't?

Is the intent for RISC-V to compete with modern high-end CPU designs, or do we just want to have royalty-free microprocessors for our embedded devices?

You might be surprised (at least I was) to learn that peripherals like hard drives and PCI add-in cards usually have their own CPU executing their own software. Those processors are often MIPS/ARM/etc. based, and the manufacturer has to shell out to someone to be able to use that, even if they designed the processor themselves. I can see how this particular market is ripe for something like RISC-V. But does anyone expect RISC-V to really go head-to-head with Xeon, Opteron, ThunderX, Centriq?

I sound incredulous because this seems surprising to me, but I have no evidence to suggest whether it's as unlikely as I think. I've certainly seen open source software designs far superior to closed source ones, so maybe hardware design is no different?

2
filereaper 2 days ago 3 replies      
Many people claim to want an open-source chip, but end up balking at its price tag, this is exactly why Raptor Engineering's Talos failed. [1]

I've pretty much given up hope on a non-x86 based chip hitting our desktops, the closest to reach will be ARM.

The economies of scale aren't there, I pretty much end up rolling my eyes at each of these articles.

[1] https://www.raptorengineering.com/TALOS/prerelease.php

3
faragon 1 day ago 2 replies      
AFAIK, the most advanced RISC-V design includes superscalar execution and OoOE (e.g. comparable to a MIPS R10000 (1995) or an Intel Pentium Pro (1995)), while the RISC-V for the IoT is comparable to a typical single-instruction-per-clock RISC CPU, e.g. the MIPS R3000 (1988).

Success in the IoT will depend not only on a cheaper price because of no royalties, but also on the "ecosystem": peripherals, buses, etc. Running Linux is a huge start, so I have no doubt it can be a success in this field.

Regarding use in mobile and desktop, it will have to wait until SIMD extensions are introduced and software is adapted (e.g. ffmpeg/libav including a RISC-V SIMD assembly implementation for the codecs).

Anyway, realistically, for RISC-V to get enough traction, some big player should bet on it, which is currently highly improbable, unless some Apple/Samsung/Huawei/Google gets crazy enough to do it.

4
DonbunEf7 2 days ago 8 replies      
I want to buy RISC-V, both to play with and to support the cause. What are my options like and should I buy something now or wait for the next generation?
5
pubby 1 day ago 0 replies      
As someone not knowledgeable about hardware, I really enjoyed reading Agner Fog's message board where he and others discuss creating a new open source instruction set: http://agner.org/optimize/blog/read.php?i=421

RISC-V is discussed some, and part of the discussion is how to improve it.

6
webaholic 1 day ago 2 replies      
RISC-V is at least 10 years away from competing with x86 and ARM. It is just now getting to the point where it can power Arduino-class hardware. Long way to go... but it looks promising.
7
blitmap 2 days ago 3 replies      
I look forward to a RISC-V proc powering my laptop with some ridiculous Nvidia GPU. We must escape x86 :(
8
restalis 9 hours ago 0 replies      
So many RISC options and almost none for CISC. If you want yet another low-power chip, then it makes sense. If, however, you want to get serious about efficient cache usage and a high work-per-instruction ratio, just do yourself a favor and stop ignoring the costly experimental results that the industry already paid for.
9
figers 1 day ago 1 reply      
Isn't ARM the RISC solution that is already here? It's already dominating mobile, and Microsoft is soon to be announcing x86 apps running on Windows on ARM.
10
alkoumpa 2 days ago 1 reply      
I wonder how many of those would fit on a ZedBoard. (Not going to ask how many hours of tool fighting that would require, though.)
11
sweden 1 day ago 1 reply      
I see a lot of people in this thread getting blindfolded by the "open-source" term attached to the headline of this article.

First of all, RISC-V can mean more than one thing: it can refer to the architecture, which is in fact open and free to use, or it can refer to an implementation of that architecture, which will not necessarily be free or open source. For example, check SiFive, the company that was promising free and open source implementations of RISC-V: http://www.eetimes.com/document.asp?doc_id=1331690

"A year ago there was quite a debate if people would license a core if there was a free version, [but now] we've seen significant demand for customers who don't want an open-source version but one better documented with a company behind it," said Jack Kang, vice president of product and business development at SiFive.

At the end of the day, they just decided to follow ARM's path by charging license fees for their CPUs.

Secondly, when people say that RISC-V is "free" and "open-source" and that it will allow companies to create cheaper and more open hardware, that is just an illusion. There are many more things on an SoC other than a CPU (like memories, communication buses, GPUs, power management processors, and so on). Cutting costs on the CPU will not make the cost of an SoC go down to zero; the CPU is just a small part of the puzzle. With RISC-V, you either need to implement the CPU yourself (which will be extremely expensive and time consuming) or you will have to find someone who provides CPU cores already implemented. And of course you need support and guarantees that the cores you bought will work on silicon. There will always be a huge cost associated with shipping CPUs; you can't escape from that.

You can already imagine that open-source hardware doesn't play by the same rules as open-source software, it's a completely different game with completely different rules.

And people speak of ARM's royalties as if they were a very bad thing. Truth be told, the royalties you pay ARM can be a very good deal, taking into account that you get access to silicon-proven CPU cores, support from the best engineers in the industry, and automatic coverage by the many CPU patents that ARM owns. And you can even choose how you want to pay for ARM's CPU licenses: you can either license an already-implemented CPU design from ARM, or you can buy an architectural license and implement your CPU completely from scratch (this is what Apple and Qualcomm are currently doing). You don't need to be completely tied to ARM. Even on the royalty fees you can choose: pay a big upfront license fee and then low royalties per device, or pay a low upfront license fee and compensate on the royalties per device.

There is a lot of misinformation going around about the possibilities of RISC-V, most of it coming from people involved in the development of the spec. Don't be fooled by the buzzwords "open-source hardware" and "free hardware".

12
posterboy 1 day ago 1 reply      
What is so special about reduced ISAs, and what are the differentiating factors between them?

I mean, they are so reduced that the ones I've seen are largely the same few logic ops. RAM access and interrupts might differ some, but a) memory access should follow the implementation, and b) essentially everything else is memory-mapped (AVR, C51, PIC).

13
psydk 1 day ago 1 reply      
After enjoying coding on the 680x0 in my youth and later being frustrated by x86, I welcome this new ISA. There is a design decision I'm curious about, but I could not find related information: how did they come up with the names "x0, x1, x2..." for general purpose registers, instead of the more conventional "r0, r1, r2..."?
14
jlebrech 19 hours ago 1 reply      
Would there be an advantage to creating not only a reduced instruction set but a minimal instruction set, and letting the compiler do the rest? Especially when you can add a lot more cores, so that mul becomes a CPU core with a counter and add, for example.
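
To illustrate the idea in software terms, here is a sketch of multiplication reduced to a counter plus repeated addition (assuming a non-negative integer multiplier):

  function mul(a, b) {
    let acc = 0;
    for (let count = 0; count < b; count++) {
      acc += a; // "a CPU core with a counter and add"
    }
    return acc;
  }
  console.log(mul(6, 7)); // 42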
16
roryisok 1 day ago 1 reply      
I always thought ARM chips were RISC. Shows what I know.
25
Soylent Closes $50M Series B Round Led by GV soylent.com
333 points by thejacenxpress  4 days ago   634 comments top 67
1
johnfn 4 days ago 19 replies      
Soylent gets a lot of hate (I'm looking forward to skipping out on this thread before the inevitable negative comments from people who assume that Soylent consumers will eat nothing but Soylent for the rest of their lives), but to me, it's solved a large problem in my life: what to eat when I'm hungry but I don't have enough time to prepare a full meal. It can happen every now and then when I'm rushing around, and soylent blows away whatever I'd eat before (nothing, some Mexican I bought in the Mission, clif bars, etc).

I've actually switched over to a product called Ample which is similar to Soylent but a bit more health conscious with ingredient choice. Still, I've got nothing against Soylent.

2
rubatuga 4 days ago 23 replies      
Although I was initially brought onto the hype train by great marketing and the promises of health and complete nutrition, I realize that my initial reliance on Soylent was actually part of a deeper problem, brought on by issues such as depression and poor time management. When in reality I should have had enough time to eat out, or possibly even cook a meal, I found myself relying on Soylent. I didn't leave my room, and had trouble doing anything. I soon began to lose my appetite and had to force myself to gulp it down, trying to make sure I wouldn't starve myself. Drinking Soylent was ruining my health.

I think many people approach Soylent as a way to solve some of their problems, but they should realize it won't and can't do that. Another person I knew bought a few boxes of Soylent attempting to lose weight. In reality, she did not change her weight appreciably, as all she did was consume the same number of calories she would have otherwise.

Lastly, I would like to add that there is some debate over the actual nutritional efficacy of the composition of Soylent. If you look at the bioavailability of their calcium supplement, calcium carbonate, it is significantly lower than that of the calcium found in milk, calcium phosphate. However, Soylent will still claim that it is possible to get 100% D.V. of calcium with 5 bottles.

Overall, Soylent is probably not a healthy solution to your problems.

3
dreammakr 4 days ago 2 replies      
Long-time lurker, first-time commenter. I've seen several people mention that there have never been meal replacement products like Soylent. The nutritional profile reminds me of EAS Myoplex plus additional vitamins. Myoplex has been around since the mid 90s and was always marketed as a meal replacement. I learned about it from a bodybuilder website when trying to gain weight and easily "eat" extra meals. They have never marketed it the way Soylent has, but it serves the same purpose. I took it to replace at least one meal a day for about two years. Blood work during that time period was normal, and it worked well IMO. Nothing against Soylent, but the product is not a new concept.
4
tommynicholas 4 days ago 3 replies      
The new cacao Soylent is so much tastier than I expected it would be. Despite all the mocking Soylent gets, the product is excellent. Maybe it's groundbreaking and maybe it's not, but it's definitely helpful to my life.
5
Afforess 4 days ago 4 replies      
The crowds here saying that Soylent is "just" rebranded Ensure/Slimfast/etc sound like the same voices that said Dropbox was rebranded rsync/scp/ftp. The arguments are cookie-cutter and wrong for the same reasons they were with Dropbox.
6
stevenwu 4 days ago 1 reply      
On the topic of raising/burning $:

I remember seeing that they had a position open for a software engineering role. If I remember the job description correctly, they've built out their own online store?

I recently saw that you can buy their products through Amazon - I wonder if it was money well spent to roll out their own web store versus using Amazon/Shopify. As a past customer I don't remember seeing any particularly unique feature that made hiring in-house staff for this aspect necessary.

7
tonydiv 4 days ago 5 replies      
It amazes me that VCs find this type of business interesting because it's not defensible in many of the ways a software company is. I am also not sure why the company wants to raise this much money -- they need to continue growing like crazy (outside of tech regions where engineers who are unwilling to cook live) or go bust.

Nonetheless, the drink is ok. I tried it for a few months. Instead of avoiding cooking, I have embraced it, and now cook incredible meals for $3-$4 using my Joule sous vide. Eating real food has changed my mood significantly.

If anyone in SF wants to buy a whole box of Soylent Original (white bottles), I will sell an extra I have for 40% off. Must pick up, located at Chavez/Bryant.

8
Karunamon 4 days ago 0 replies      
To hopefully short-circuit a lot of pointless debate, the product you're about to mention is not equivalent to Soylent unless all of the following conditions are true:

1. It contains enough calories for an average healthy adult to live on (2000/cal/day, give or take)

2. The sugars used are high glycemic index

3. You will incur no major nutritional deficiencies/toxicities by long term use

4. It costs no more than $14/day (or $8/day for the powder) while satisfying all above conditions

Most diet drinks fail on condition 1, most meal replacements you're aware of like Ensure fail on conditions 2 and 3, and the leftovers usually fail on condition 4.

9
zorbadgreek 4 days ago 5 replies      
I'm trying to understand what Soylent's business moat is. My inclination is economies of scale to drive cost of goods down, like many food companies.

To that end, I don't see their product as all that unique or difficult to replicate, and I also foresee headwinds for them if/when they try to market to a broader base of consumers who already have powders, shakes, bars, and hundreds of other meal substitutes to choose from.

10
NathanCH 4 days ago 1 reply      
Soylent as a company has been run terribly. They've had numerous recalls and the recent price increase for Canadian customers is highway robbery.

That said, Soylent has improved my diet significantly. My entire life I have struggled to consume enough calories. Adding one serving of Soylent per day has allowed me to do two things:

1. Increase the number of calories I consume so I am in a daily surplus (I've gained a healthy 13 pounds since 2015 thanks to Soylent).

2. More importantly Soylent has accustomed me to eating larger meals. I can actually go to a restaurant and eat a full meal. I'm sure you can understand how much that has improved my social life.

Prior to Soylent I was consuming Ensure daily for more than five years but it's not enough calories to make a difference plus it's way more expensive.

11
mks40 4 days ago 12 replies      
A question to Soylent customers:

Do you use it as the occasional convenient meal replacement, or how far do you go toward replacing all real food?

I suspect there is both, but what I am getting at is how much Soylent's long-term success depends on people seeing eating as a nuisance that should be optimized away versus something that should be savoured and enjoyed.

To me this is in the context of the larger question of personal utility maximisation. In the grand scheme of things, we have just started being able to really monitor and improve all aspects of our lives (in terms of time spent, convenience), and there is the question of how far we (most people/potential customers) ultimately want to go. It has become clear that there is the potential to optimise away friction/time spent in almost all human habits, but it is not yet clear if we really want to keep going down that route.

Will we keep optimizing things like meals just because we can until there are (conceivably) nutrient implants that make eating unnecessary, or will we sort of revert and see that maximising utility of every interaction does not lead to overall greater satisfaction?

In one world, Soylent could eventually dominate; in the other, it will remain a niche product because eating and food are too important to most, also culturally speaking.

12
tdees40 4 days ago 3 replies      
It's a food company, and its product is fully baked. Why aren't they bootstrapping? Are they not profitable? If so, how do they have a business? If I sold ketchup at a loss, would I be able to close a $50M financing round?
13
ckastner 4 days ago 0 replies      
I find it odd how popular Soylent has become. I always considered it to be the product that it was in the movie: the absolute minimum of nourishment given a lack of resources. A dystopian nightmare.

A popular argument seems to be "I don't have time to eat something proper". To me, that just replaces the lack of one resource (the ones in the movie) with another (time).

14
victorhooi 3 days ago 1 reply      
I just completed the Everest Base Camp trek with my wife - I used Soylent (version 1.4) for the hike. So I basically subsisted on Soylent for 2 weeks of hiking/climbing up to 5,500 metres above sea level.

I did 1 packet per day (2000 calories), supplemented with some snack food on the trail (dried nuts and fruit, Stinger-brand honey waffles, muesli bars etc.).

Motivation was firstly as an experiment (to see how I would cope), secondly because I wanted to accurately control/measure my caloric intake, and thirdly, because I was somewhat paranoid about getting food poisoning on the trek. (I used a MSR Guardian to provide clean filtered water for mixing up Soylent).

I didn't really notice any odd effects - and it went better than expected. Didn't get food poisoning (wife got diarrhoea, but she ate local food) - was a bit hungry on some days (in hindsight, 2,000 calories was a bit low - I upped it to 2,500 calories on the day I climbed Kala Pattar).

All I can say was, Soylent was great for this use-case, and I'm a firm believer now. At home, I only use Soylent for when I have no time to cook, and need a reasonably healthy/complete meal - if your alternative is going out for a late-night kebab, or 24-hour fast-food, it's not a hard choice for me =).

15
mastarubio 4 days ago 2 replies      
I have been using Soylent for breakfast for roughly 1.5 years and it has been great. I've lost weight and I feel healthier. I haven't gotten sick during this time, either with a flu or a serious cold. It's a great time saver when used as a breakfast food and will save you precious time in the morning before you go to work. Time that you could spend sleeping rather than stuck in a drive-through or making something. Works for my busy lifestyle. And I like the fact that I am getting all those nutrients. While I realize it's not perfect now, it's nice to know that they are continually trying to improve it with the different versions. If it's at least half as nutritious as Soylent claims it is, that's a win in my book. And if it actually gets to that holy grail where it's genuinely great for you, then that will be something special. Realistically, this is probably a slow march, but one that I am proud to be a part of. It mixes great with fruit, peanut butter, honey and other food items too, which I know is healthy for me.
16
matthewrudy 4 days ago 0 replies      
I've never tried Soylent, but I do have to drink Ensure many times a day.

Ensure has a massive market, $billions in annual sales, and medically proven results.

The medical market is not where Soylent is going right now, but it is massive and proven.

If they could market to younger, hipper folks who've been prescribed Ensure, but would spend their own money for something nicer... That'd be me.

(BTW: I finally ordered some huel just now... Really sick of Ensure, will give that a try)

17
VikingCoder 4 days ago 3 replies      
Soylent was fine. I maintained weight, and was eating better, and the food was easy.

But I'm in a medical weight loss program now, and I'm loving it. It's doctor supervised, I started out at 400 pounds, and I'm down to 360 after five weeks. It's medically supervised because I'm in ketosis, which can be very dangerous. So best to have blood drawn regularly, etc.

But the food is great. You can also get them as your own supplements / replacements.

http://www.robard.com/products/

Right now I'm eating 1,000 calories a day. I'll be going down to 800 calories per day. And I'm burning about 2,000 calories of my own fat every day (that's a pound). Again, this is a dangerous diet, but it's medically supervised.

I feel great, I don't feel "hungry" all the time. It's awesome.

18
md2be 4 days ago 0 replies      
This is a play on the large soft drink beverage companies, who are literally thirsty for companies to replace their soda portfolios. Bai was recently acquired by Dr Pepper for almost 2 billion. So a 10x return for this round is certainly possible.
19
shas3 4 days ago 0 replies      
I am one of those people who got horrible nausea after eating a couple of batches of the Soylent bar. I really liked the taste and the nutrition profile. But I simply cannot tolerate any Soylent products after the episodes of nausea. There is only one other food (an obscure Indian snack) that ever created a similar aversion in me. I don't know when, if ever, I'll get over the association of Soylent with nausea in my head. I wish it had never happened so that I could have continued using their products.
20
joshjkim 4 days ago 0 replies      
What people gloss over about soylent on their way to the oft-repeated "it's just like slimfast wtf" point is that it's not the specific product that's interesting/innovative/valuable (it is kinda like slimfast...), it's taking that product to an entirely new and arguably larger market - I have no idea how many 20-to-30-somethings working in tech were out there buying slimfast 10 years ago, but I'm guessing (and I could be wrong!) it's fewer than the number of folks who buy soylent today. Assuming it can maintain appeal to "hard-working tech folks", then it's arguable that they can also successfully appeal to other professionals in other markets who, again, I would guess were not buying slimfast or other meal/diet drinks. It's all about marketing and "telling a new story", and I don't mean that in a bad way at all - sure, any company can quickly copy the ingredients, but they will also have to sell the same story, which is totally possible but arguably harder to do as well. That all being said, marketing IMO is only so good of a moat, and there are only a few companies who really dominate relying mostly on it (Coke, Hermes, Nike to name a few), so I'm not sure the new market and their story is worth this large of an investment, but who knows, it very well could be!
21
Rudism 4 days ago 0 replies      
I've been trying out alternate-day fasting (alternating between 500-600 calorie "fasting" days and normal eating days). Primarily to help me lose a few pounds, but also in response to all of the recent publicized studies about how fasting and calorie restriction may have other overall health benefits.

Usually on my 500 calorie days I'll just drink a couple Boosts. I'd like to consider Soylent as an option for the fasting days as well, but from a cost perspective it can't really compete with the other alternatives out there (Boost, Ensure, and various no-name store brands available from places like Sam's Club and Costco). Liquid Soylent runs between $2.69-$3.09/400kcal, whereas high protein Boost (which is not even the cheapest option if you're willing to go with no-name brands) runs at $1.93/400kcal. Going with Soylent Powder can bring your costs down more in line at $1.54/400kcal, but the added hassle of having to mix it yourself doesn't make this a very attractive option in comparison.

I guess what I'm ultimately getting at is, assuming you are just using Soylent to supplement an otherwise normal diet, I don't understand its appeal over other less expensive nutritional meal supplements like Boost and Ensure.

22
wakkaflokka 4 days ago 0 replies      
I've been drinking Soylent every day for breakfast for the past year (the bottles). I find it really convenient, and unlike SlimFast and other shakes, it doesn't seem overly sweet to me. It gets rid of my hunger with no fuss, no mess, and no fanfare. Ideally, I could eat a healthy breakfast, but in reality I know that I'd just be pounding down some cereal bars or something, which would arguably be worse nutrition-wise than Soylent.
23
datashovel 4 days ago 0 replies      
Because of all the negativity in Soylent threads I always feel obliged to add my opinion. Soylent has been an extremely positive addition to my life.
24
kingkawn 4 days ago 1 reply      
The idea that nutrition can be boiled down in this way feels like it will inevitably be missing something severely important that the modeling doesn't account for. The modelers afterwards will excuse the oversight by saying, look, this thing was so small, who would've thought to measure in such and such a way? Nature.
25
Animats 4 days ago 1 reply      
Does this mean they're going to make their own product? Currently, Soylent, the company, is just a marketing operation. Everything else is outsourced.

That's not an unusual strategy for hype-based products. Skyy Vodka and WD-40 were completely outsourced. Skyy Vodka was originally made by Frank-Lin Distillers Products in San Jose, the company that makes most of the low-end booze on the West Coast. Frank-Lin buys bulk ethanol by the tank car load (they have their own railroad sidings), does a little post-processing on the ethanol, takes in tap water and runs it through a deionizing plant, mixes them, adds flavoring, and bottles. They have a really fancy automated bottling line which can handle about a thousand different bottles and can change bottle types automatically. This is called product differentiation.

26
JohnnyConatus 4 days ago 1 reply      
Did anyone else have this problem: you enjoyed the product at first but after X number of bottles (like case 2 for me) your body started to react poorly to it and the taste became repulsive?

Not trying to badmouth soylent, I had a similar experience with a brand of granola bars.

27
Balgair 4 days ago 0 replies      
Oh boy, another Soylent thread.... Here is how the comments are going to go: Half the commenters are going to say Soylent is not supposed to be for every meal, it's just for when you are too busy to eat. The other half cannot fathom ever using Soylent, because if they were too busy to eat they would quit their job first, no exceptions. Neither side can bridge the gap, as food culture is very unique to each person. Let the market sort it all out; props to Soylent for letting that happen and not wringing their hands over all of it.

(For the record, I would quit my job before skipping meals; they are actually that important to me staying sane)

28
hvmonk 4 days ago 1 reply      
Is there any long-term medical study on the effects of this product?

I think our body has organs which release various gastric juices to digest the food we eat. It is not only about how much calorie/protein one is consuming; there are also some useful by-products which help in the overall functioning of the body. A very simple analogy is only drinking fruit juice instead of eating the fruit raw. We are not taking in the fibers which help in digestion, slow decomposition and good bowel movements.

I am very skeptical about approaches like this where we measure our food just in terms of calories, vitamins, proteins and then consume them directly in that format.

29
costcopizza 4 days ago 0 replies      
Ensure wrapped up in a nice minimalist package.
30
rexreed 4 days ago 0 replies      
There's no saying more true than you are what you eat (or drink). Soylent drinkers are definitely Soylent people. I prefer a world of flavor, spice, and variety. I also wonder if it's a coincidence that the funders of the Juicero (solving a problem no one knew they had for people who clearly have too much disposable income) are the same as the funders of Soylent. I know there's a high overlap of HN readers and Soylent drinkers, but I think there's a big disconnect with the rest of the non-Silicon Valley populace.
31
gavanwoolery 4 days ago 0 replies      
As an anecdotal/tiny success story, I drank Soylent exclusively for a month and lost 10 pounds - of course, this was by dramatically reducing my calories and exercising, not by virtue of Soylent alone. That said, I did find Soylent to be great on two accounts - it is not something you will consume for fun/pleasure/killing boredom, and it is easy to measure your calorie intake if you are consuming it solely. Side note: I am not a doctor or nutritionist, do not take my story as scientifically-grounded advice.
32
bcaulfield 3 days ago 0 replies      
Coffiest easily exceeded my modest expectations. Quick, easy to digest when I'm nervous, and the combination of caffeine and theanine is... nice. My diet is trash, and it's vastly better than not eating or stopping for an Egg McMuffin at the drive through.

Soylent has gotten me off fast food entirely. I keep a few bottles at home and at work, so that I can get a meal during my commute that doesn't come from a drive through.

33
muratmutlu 4 days ago 0 replies      
Anyone know the difference between Soylent and a good quality weight gainer like Reflex Instant Mass Pro?

Nutrition Facts

Reflex Instant Mass Pro: https://www.reflexnutrition.com/instant-mass-pro/

Soylent: http://files.soylent.com/pdf/soylent-nutrition-facts-1-8-en....

34
Kattywumpus 4 days ago 1 reply      
I wonder how long it will be until some VC forces the inevitable rebranding of the Soylent name.

"We need to reach out to a larger demographic with a name that communicates the value proposition of the product. Liquid Lunch focus-groups well in the demographic of females 18-30, which is where we see our growth trending in future..."

I've always liked the cheekiness of the Soylent name and it's really the only thing that's made me pay the slightest bit of attention to the product.

35
akvadrako 4 days ago 0 replies      
I can't stand reading these posts about Soylent. There is so much irrational hate backed up by nothing more than incomplete and irrelevant arguments. People recommend virtually any other product even if it misses half the qualities of Soylent; things like nuts + fruits, whey protein, ensure, clif bars or even cooking.

I'm fairly certain it's due to an astroturfing campaign, but I don't know who would pay for such a thing.

36
mikro 4 days ago 0 replies      
I drink a lot of coffee, and for me the Coffiest product is cheaper and healthier than your average Starbucks drink. I also really like the taste of the Cacao drink, which satisfies my sweet tooth whenever I feel like reaching for a bar of chocolate. I don't necessarily view it as a meal replacement, but rather an upgrade to unhealthy things I already eat regularly.
37
aomix 4 days ago 0 replies      
I still don't understand the strong reactions Soylent gets on either side. There's plenty to criticize and to like, but reactions to it cluster around it being either the end of the world or the solution to all your problems.

I'm a fan of the breakfast Soylent (Coffiest). To me it's the best form of the Soylent idea.

38
ebbv 4 days ago 0 replies      
If Soylent were a good idea it wouldn't need a $50M funding round at this point. People know about it. Most people I know who've tried it stopped liking it after a while and abandoned it.

And now they've brought out flavors, basically turning it into an expensive meal replacement. It's ridiculous.

39
awl130 4 days ago 0 replies      
I'm reserving judgement until I know exactly how Soylent affects our microbiome. This study seems to just have gotten started: https://mycrobes.berkeley.edu/the-study/
40
venture_lol 4 days ago 0 replies      
Live a restrictive life - careful, watching, planning - and get a life expectancy of 85yrs? A bad turn of luck could mean the end is right around the corner.

Live a wild, debauched, taste-everything, free-for-all life with no care whatsoever and get a life expectancy of 80yrs? A somewhat lucky draw could see you beyond 90.

Hard choices :)

41
eddieone 4 days ago 0 replies      
As a person who has researched Soylent, I would not describe it as cheap or healthy. Most of the people with opposing opinions seem to think it's a magic weight loss supplement. In reality, the properties that cause weight loss seem to be the low quality ingredients.
42
epmaybe 4 days ago 0 replies      
One thing I'm curious about: do liquid diets like Soylent change the brush border in our intestines in an appreciable way? If I went completely Soylent for a few months, and then tried to eat something more... raw, would there be any changes in digestion?
43
sebringj 4 days ago 0 replies      
"Everything the body needs..." - the Matrix next we get implantable nutrient packs that last for a year. Hey, YCombinator idea? Go fund that. (this might classify as a troll post as its not particularly relevant but just had to, sorry)
44
sachinag 4 days ago 0 replies      
If Forerunner didn't participate in this, then Soylent doesn't have a chance. I'd trust Forerunner over GV (or anyone) on a new consumer brand - and that's what Soylent is, going up against everything from Hint water to Ensure.
45
zenkat 3 days ago 0 replies      
Does anyone know the valuation of the company? And more importantly, what justifies that valuation? I have trouble believing that the meal replacement market is all that lucrative.
46
arzt 4 days ago 4 replies      
I'm curious about the naming choice as the end of the movie with Heston reveals that eating "Soylent Green" is a form of cannibalism. Does the term "Soylent" signify something outside of the movie?
47
ceejay 4 days ago 0 replies      
I used to get annoyed by all the negativity Soylent received from people. Now I just get amused by it. Soylent has been nothing but a positive addition to my life.
48
intrasight 4 days ago 0 replies      
How is Soylent better than buying a quality blender and fresh veggies and protein powder and coconut oil? Or is it for folks who live in a food desert?
49
skdotdan 4 days ago 0 replies      
The future is food that tastes and feels like real food but with the nutrients that your genetics and physical condition say you should eat. One day...
50
grandalf 4 days ago 1 reply      
I've been wanting to try Soylent but have not done so b/c of the price. Is there an HN promo code or something? I'd try it for a full month.
51
theprop 4 days ago 0 replies      
The best food advice I've read: don't eat or drink anything that humans haven't been eating or drinking for at least 500 years.
52
vthallam 4 days ago 1 reply      
Off topic, but has anyone tried 'Soylent Coffiest'? How do you like it as a breakfast + coffee replacement for a few days a week?
53
b1gtuna 4 days ago 0 replies      
Congratulations.

I have been drinking 12 bottles of Soylent a month. This alone has freed me up from thinking about what to eat for lunch.

54
mtw 4 days ago 0 replies      
Does anyone know of any clinical trials showing health benefits of Soylent? or adverse health consequences?
55
rubyfan 4 days ago 0 replies      
Have they figured out the vile angry flatulence problem?
56
vernie 4 days ago 0 replies      
Maybe... you're not as busy as you believe you are?
57
zzzzzzzza 4 days ago 0 replies      
personally I love soylent. Curious what the hard numbers are on sales growth/units shipped per month.
58
aanet 4 days ago 0 replies      
Soylent and Juicero

Rather surprised that nobody, as yet, has made a connection between Soylent and Juicero.

* 1st: Juicing & Nutrition - There's very little evidence that liquid food / juicing has any benefits for most adults. Most nutritionists worth their salt will advise against juicing. Juicero (and other juice makers) take perfectly good, healthy, nutrient-rich fruits and vegetables and make them less healthy. Ditto for Soylent. Crushing natural foods (vegetables, fruits, any other in their natural form) together to seek out their nutrients, and reconstituting them in powder/liquid form is, by any other fancy name, a juice. The skin on an apple, the seeds in raspberries and the membranes that hold orange segments together - they are all good for you. That is where most of the fiber, as well as many of the antioxidants, phytonutrients, vitamins and minerals are hiding. Fiber is good for your gut; it fills you up and slows the absorption of the sugars you eat, resulting in smaller spikes in insulin. When your body can no longer keep up with your need for insulin, Type 2 diabetes can develop. [1]

- I wonder if people who see the benefits of Soylent/juicing have read Michael Pollan or Marion Nestle. See [1], [2], [3]

* 2nd: Silicon Valley and investments - Both Soylent and Juicero are funded by marquee investors. Here's a brief list for Juicero (total $118M raised) [4]: GV (nee Google Ventures), KPCB, Abstract, Campbell Soup, Thrive Capital.

Here's the list for Soylent (total $70M raised) [5]: GV (nee Google Ventures), A16Z, Tao Capital, Index Ventures, YC, Lerner Hippeau, Initialized Capital.

What do these have in common? Apart from being in the food business? It is the Food-as-a-Service business model. That is the essential ingredient (no pun) of the business, not the nutrients per se.

In effect, both Soylent and Juicero are products targeted towards high-disposable-income, busy professionals who want convenience, and perhaps the glow of "save the world from hunger" (whatever that means). Any health benefits are inconsequential at best in the grand scheme of things.

If you value your nutrition and health, you are far better off relying on the tried-and-tested advice from Michael Pollan: Eat Food, Not Too Much, Mostly Plants.

All other rationalization of the time/effort/nutritional benefit of Soylent/Juicero as "saving the world from hunger" is, well, just plain old rationalization by any other name.

[1] [People think juice is good for them. They're wrong. - The Washington Post](https://www.washingtonpost.com/posteverything/wp/2017/04/26/...)

[2] [Books | Michael Pollan](http://michaelpollan.com/books/)

[3] [Marion Nestle - Wikipedia](https://en.wikipedia.org/wiki/Marion_Nestle)

[4] https://www.crunchbase.com/organization/juicero/investors

[5] https://www.crunchbase.com/organization/soylent-corporation#...

59
catenthusiast 4 days ago 0 replies      
Nerds will buy anything if trendy Silicon Valley thought leaders endorse it.
60
accountyaccount 4 days ago 2 replies      
Aside from brand, how does soylent differ from ensure?
61
Questron 4 days ago 6 replies      
Weird. You need food during an eight hour shift?
62
CPLX 4 days ago 3 replies      
That's quite a chunk of change for rebranding SlimFast for the urban millennial set. I wonder how many other eight figure venture-fundable concepts could be exploited by lurking in the grocery store aisles with a label maker.
63
moat 4 days ago 6 replies      
I feel like I'm the only one who has ever read the nutrition facts. It's a garbage product full of ingredients I wouldn't aim to put in my body. Love the idea of it all, just not the science behind it.
64
pinaceae 4 days ago 0 replies      
Soylent - We've put SlimFast on the Internet.

And this after JuiceBro. Amazing.

65
maverick_iceman 4 days ago 0 replies      
Don't know why GV would invest in Soylent after so many fiascos. Seems like they earn money so easily that they are content to throw it away.
66
metaphorm 4 days ago 0 replies      
I feel like more people should read this book

In Defense of Food by Michael Pollan

http://www.goodreads.com/book/show/315425.In_Defense_of_Food

67
joering2 4 days ago 6 replies      
Serious question -- do they have an FDA approval?? Can they produce and sell it without it?

Recently learnt about some Amish selling homemade honey without FDA approval; they're now awaiting trial on a charge carrying 20 years in jail.

26
Coinbase adds support for Litecoin techcrunch.com
340 points by tmlee  5 days ago   241 comments top 18
1
lend000 4 days ago 4 replies      
It will take significant time and effort to overtake Bitcoin's name recognition and first mover advantage. However, there's also an advantage to being a second mover that can adapt quickly to a changing environment... I doubt Bitcoin will be the supreme crypto-currency in a few years. It's just too implausible that Bitcoin is perfect enough as is, and/or the community will be able to implement any needed changes before its "competitors" take market share.

Regardless, it's a good time to be a cryptocurrency investor. We're still in 1996 in Dot Com Bubble time.

2
pyabo 4 days ago 1 reply      
Litecoin is now back because of Segwit. It's Bitcoin's hope to push a final acceptance of Segwit on Bitcoin. But it doesn't have real value in my opinion. About cryptocurrencies in general, I can say that I use them to receive payments and to pay employees, and I find them very useful; the user experience is much better than normal banking (fast international transfers and complete tracking) with no need for KYC.
3
jerguismi 4 days ago 1 reply      
Segregated witness should activate in litecoin in about ~6 days.

http://litecoinblockhalf.com/segwit.php

4
Uptrenda 4 days ago 5 replies      
I never really got the appeal of Litecoin. It was basically just a copy and paste of Bitcoin with only minor alterations made to the source code (like changing the PoW to scrypt, which never really defeated ASICs, and a few other small changes). It made no major innovations that can be named, and even to this day it still lacks several of the major bug fixes that made it into Bitcoin (there are still patches missing from ~2014 that result in the software being even harder to use for devs than Bitcoin).

This is not coming from a Bitcoin Maximalist, by the way (not presently a fan of Bitcoin either). Just thought I'd point out how bizarre it is that Litecoin even has a price at all when the software is functionally similar to Dogecoin. Its only real claim to fame is that it copy-pasted the code earlier than most other cryptocurrencies and hence now survives as a zombie currency backed by the souls of all the bag-holders foolish enough to invest in it (much like how Yahoo continues to survive to this day).

Can we all just agree that Litecoin is a way over-hyped and silly excuse for a cryptocurrency? Silver to Bitcoin's gold? I can't think of a single problem that Litecoin actually solves compared to... well, anything. At best you could say it's a speculative instrument tied to the cancerous block size debate. But other than that - is good marketing really worth the price of $21 USD a coin? I can see a lot of people getting screwed by this when the currency inevitably crashes again...

5
NamTaf 4 days ago 3 replies      
It seems to me that this is still speculation based on the idea that increased exposure will increase investment in it, which will drive the price higher, so punters buy up LTC anticipating that it'll rise and thus it self-fulfils. I don't really see what fundamentals have changed to spur such a climb.
6
sxp 4 days ago 0 replies      
I wonder if the sudden increase in price over April is related to Coinbase buying up LTC to fill their reserves: http://coinmarketcap.com/currencies/litecoin/
7
eelkefolmer 4 days ago 1 reply      
Litecoin spiked up from $16 to $36 this morning. Trading on GDAX (also part of Coinbase) has far lower fees (0.25%) than Coinbase (1.5%).
8
tuxracer 4 days ago 7 replies      
Is there a reason a lot of these cryptocurrencies have suddenly started to skyrocket all at once?
9
hultner 4 days ago 1 reply      
Is Litecoin still gaining momentum at a significant pace?

I'm not quite up to current developments in the crypto coin communities, however my perception from the outside is that Ethereum has replaced Litecoin as the leading altcoin. Is this incorrect?

How well does Litecoin fare against the congestion problems we've heard about from the Bitcoin communities?

10
nnfy 5 days ago 5 replies      
Would a vendor like coinbase be subject to any legal repercussions if its employees purchased litecoin before the option to purchase went live, and the price spiked?
11
cableshaft 4 days ago 1 reply      
About time. I've been waiting for an easy way to get Litecoin for so long. Every time I checked in the past, it seemed to require working with some Russian bank to get things set up, and something would always error out or "not be supported" or some crap and I'd eventually give up, like in the early days of bitcoin (if I had figured out how to successfully buy it I could have bought btc at $7 a coin once, and probably would have over 100 of them now).

Would have loved to have grabbed some LTC when it was dirt cheap, but I'll settle for getting in early on Coinbase. Did that shortly after Coinbase added Ethereum at $11, and it's now trading at $89.

12
joshuaswaney 4 days ago 0 replies      
Cryptocurrency implementations have to make tradeoffs like any other type of software, and in this case we're looking at the tradeoff between transaction speed and security. Other cryptocurrencies favor anonymity over convenience. Is there a CAP theorem equivalent for these problems? There seems to be a healthy tension between scalability and security at the very least.
13
danielleheong 4 days ago 1 reply      
So little activity until 1st of April. What gives... https://www.coingecko.com/en/price_charts/litecoin/btc/90_da...
14
kzisme 4 days ago 7 replies      
I don't really get the draw towards crypto-currencies - aside from mining them to bring more into the pool of available currency - is there a point to purchasing or using them to make purchases?

Is there a reason I should start using these currencies? Aside from trading currencies to make a few bucks?

15
edpichler 4 days ago 2 replies      
Why should I use litecoin for transferring money when there is bitcoin with more liquidity? I really don't get this yet. Could anyone explain?

PS: I am a Bitcoin enthusiast, and I am not criticizing Litecoin, I just want to understand the possible advantages it could have.

16
ptenk 4 days ago 2 replies      
This rise is driven by Coinbase alone. The premium there was like 30-40% over other exchanges at one point.
17
quotha 4 days ago 3 replies      
This is all gonna end bad
18
dvdhsu 4 days ago 20 replies      
Is there interest in a cryptocurrency index fund? The idea is you could just buy an index fund composed of, say, 50% BTC, 20% ETH, 15% LTC and 15% ZEC. I'm fairly certain that one of these cryptocurrencies will 10 or 100x in value over the next 5 years, but buying each one individually is just such a pain. Would you invest?

In reality, we'd probably buy the top 10 coins weighted according to some measure, and rebalance once a week. We would send out investor updates and let you know what the weights are, along with performance over the past week.

We're YC and our previous idea didn't work out, and this is something we're considering pivoting to. If we get enough interest we'll start one!
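For illustration only, the cap-weighted rebalance described above is a few lines of arithmetic; the tickers and figures below are invented for this sketch:

  // Compute cap-proportional target weights for the fund.
  fn target_weights<'a>(caps: &[(&'a str, f64)]) -> Vec<(&'a str, f64)> {
      let total: f64 = caps.iter().map(|&(_, cap)| cap).sum();
      caps.iter().map(|&(name, cap)| (name, cap / total)).collect()
  }

  fn main() {
      // (ticker, market cap in $bn) -- made-up numbers
      let caps = [("BTC", 25.0), ("ETH", 8.0), ("LTC", 1.5), ("ZEC", 0.5)];
      let portfolio = 10_000.0; // USD under management
      for (name, w) in target_weights(&caps) {
          println!("{}: {:.1}% -> ${:.0}", name, w * 100.0, w * portfolio);
      }
  }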

27
I Broke Rust's Package Manager for Windows Users sasheldon.com
345 points by sasheldon  1 day ago   152 comments top 20
1
bluejekyll 1 day ago 5 replies      
This is a great example of something else about software. As software grows in usage and use cases, it starts bumping up against edge conditions which need to be handled for various reasons.

Cargo now is becoming stronger and more stable because of bugs like this being discovered. All software goes through this growth cycle. It's great to see these things worked out in the various projects that support Rust.

There is another point here though; anytime the question comes up to just rewrite a piece of software, throw out all the technical debt, it's not as straightforward as it seems. Remember, together with that technical debt lies a lot of valuable learnings written into the code. I haven't worked on Windows directly in years, but I never knew that NUL was a reserved word for a file name. I would, and probably still will, make this mistake in the future.

Which makes me wonder, has anyone written a file name validation crate that guarantees that you're not writing to any reserved words on the filesystem of the host OS? A quick search of crates.io doesn't turn anything up.
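For reference, here's a minimal sketch of what such a check could look like. The name list follows the MSDN page linked elsewhere in this thread, and the function name is an invented placeholder, not an existing crate:

  fn is_windows_reserved(name: &str) -> bool {
      // Reserved device names match case-insensitively and even with an
      // extension attached, so "nul.txt" is just as problematic as "NUL".
      let stem = name.split('.').next().unwrap_or("").to_ascii_uppercase();
      if stem == "CON" || stem == "PRN" || stem == "AUX" || stem == "NUL" {
          return true;
      }
      // COM1-COM9 and LPT1-LPT9
      let bytes = stem.as_bytes();
      stem.len() == 4
          && (stem.starts_with("COM") || stem.starts_with("LPT"))
          && bytes[3] >= b'1'
          && bytes[3] <= b'9'
  }

A real crate would also want to reject characters like <>:"|?*, trailing dots and spaces, and overlong paths, per the same MSDN page.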

2
garaetjjte 1 day ago 1 reply      
Other magic aliases include CON, PRN, AUX, COM1-9 and LPT1-9. They are aliased to the respective devices in the Win32 namespace "\\.\". COMs and LPTs above 9 don't have aliases in the global namespace and must be accessed explicitly in the Win32 namespace, eg. "\\.\COM10" (which itself is a symlink to the NT native "\Device\Serial9").

In fact, it is possible to create files named NUL, COM1, etc. using the \\?\ prefix (eg. "\\?\C:\NUL" is a valid path), which disables the arcane Win32 magic-file parsing. Unfortunately these files cause strange behaviour in applications that don't use that prefix, Explorer included.

source: https://msdn.microsoft.com/en-us/library/windows/desktop/aa3...
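As a rough illustration of that last point, the verbatim prefix is just an ordinary path string as far as Rust's standard library is concerned. This hypothetical snippet creates and removes a file literally named NUL (it assumes C:\temp exists):

  fn main() {
      // \\?\ turns off the Win32 reserved-name parsing described above.
      let path = r"\\?\C:\temp\NUL";
      std::fs::File::create(path).expect("create failed");
      std::fs::remove_file(path).expect("remove failed");
  }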

3
tatterdemalion 1 day ago 2 replies      
As the blog post mentioned, we solved the issue by deleting the crate from the package repository and reserving these problematic names. The incident lasted about 2 and a half hours.

Crate names have to be one or more valid idents connected by hyphens, so no other clever names like `/home` would be possible to upload. We already had some crate names reserved and we just needed to add these to the list.

4
slobotron 1 day ago 2 replies      
There was a bug in Windows 95 (98 too?) where if you tried to open 'nul\nul' or 'con\con' etc., it would BSOD instantly. Provided lots of drive-by fun in computer labs... (got really good at typing Win+R con\con)
5
protomyth 1 day ago 1 reply      
For those who don't use Windows and might need this info: https://msdn.microsoft.com/en-us/library/windows/desktop/aa3...
6
captn3m0 1 day ago 3 replies      
What I don't understand is why cargo fetches the entire crate list and creates a directory for every crate (even if you never install it). Why not just have a single file with the entire list? The issue mentions they use a trie, but why use the filesystem as the trie store?
7
Sir_Cmpwn 1 day ago 5 replies      
Sounds more like a problem with stupid Windows design choices than with anything you did.
8
roryisok 19 hours ago 1 reply      
I was working on a video project for a local comics convention, and named the project file "con.proj". That file hung around until I upgraded my hard drive because no file manager could delete it.
9
yrashk 1 day ago 3 replies      
Wouldn't it make sense for Cargo not to use crate names in file names, and use hexadecimally encoded hashes instead?
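Something along these lines, sketched with the sha2 and hex crates (the function name is made up); any fixed-output hash would sidestep reserved names entirely:

  extern crate hex;
  extern crate sha2;

  use sha2::{Digest, Sha256};

  // Derive the on-disk directory name from a hash of the crate name,
  // so a name like "nul" never reaches the filesystem literally.
  fn index_dir_for(crate_name: &str) -> String {
      hex::encode(Sha256::digest(crate_name.as_bytes()))
  }

The trade-off is that the index stops being human-browsable: you can no longer find a crate's entry by eyeballing directory names.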
10
ziikutv 1 day ago 2 replies      
What did you end up calling the new crate?

Edit: I suggest "terminated"

11
Strilanc 1 day ago 1 reply      
Urgh, this "nul" filename / reserved filename bug is probably in a lot of software.
12
alkonaut 1 day ago 4 replies      
It's very tricky to do cross-platform file handling, and only the most mature projects have ironed this out. Just look at your pet project and see if it handles:

- Windows and unix line breaks in text files

- Windows and unix path separators

- BOM and non-BOM marked files if parsing UTF

- Forbidden filenames such as in this article

By "handling" I mean it should accept or fail nicely on unexpected input - e.g. say that line breaks should be unix style, or paths should be backslashes etc. Very few projects actually do this well. Even fewer will do even more complex things like handling too long paths with nice error messages etc.

13
encryptThrow32 1 day ago 1 reply      
I recommend ':?', as it will work in POSIX, but not Windows.
14
msimpson 16 hours ago 1 reply      
While I know nothing of Rust, Diesel, or CrateDB, I do know that Windows uses a case-insensitive file system and this fix doesn't seem to take that into consideration. However, the author of the fix does note:

> I believe crates.io's namespace is case insensitive - let me know if that's wrong

Someone should probably validate that.

15
toabi 18 hours ago 2 replies      
I tried `npm install nul` on my win7 VM and it created a folder called nul which I can't get rid of.
16
hmottestad 1 day ago 2 replies      
Makes me wonder if I can make a crate called "../.." and have it overwrite some user files.
17
lsiebert 23 hours ago 0 replies      
Hmm... I feel like someone should stick the reserved names into a json somewhere for easy reference for the next package manager.
18
pvg 1 day ago 2 replies      
In the Mac System 7-ish days, people used to earnestly warn each other not to name a file '.Sony' (a special name reserved for the floppy driver) as it supposedly trashed your HD. Although I've never heard of anyone reproducing it.
19
HedleyLamar 1 day ago 1 reply      
Don't they have a continuous integration system where they run the unit tests on all platforms for all checkins to master?
20
joshu 1 day ago 0 replies      
Remember not to name anything pr#6 either...
28
Things to Use Instead of JSON Web Tokens inburke.com
352 points by LaSombra  19 hours ago   191 comments top 31
1
Freak_NL 18 hours ago 6 replies      
> The problem with JWT is the user gets to choose which algorithm to use.

Only if you completely bungle the implementation on the server side. The 'none' 'algorithm' isn't supported by up-to-date JWT libraries with a good track record, and you should always limit the algorithms you'll allow on the server. So if you sign your tokens with an RSA-2048 key pair, you would discard any token that isn't using that algorithm.
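A hedged sketch of that pinning in Rust, using the jsonwebtoken crate - the exact API here is assumed from recent versions of that crate, so check its docs before copying:

  use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
  use serde::Deserialize;

  #[derive(Deserialize)]
  struct Claims {
      sub: String,
      exp: usize,
  }

  fn verify(token: &str, pem: &[u8]) -> Result<Claims, jsonwebtoken::errors::Error> {
      // Validation::new pins the accepted algorithms to RS256 only, so a
      // token whose header claims "none" or HS256 is rejected up front.
      let validation = Validation::new(Algorithm::RS256);
      let data = decode::<Claims>(token, &DecodingKey::from_rsa_pem(pem)?, &validation)?;
      Ok(data.claims)
  }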

Of course, if you are building an API that blindly accepts whatever it receives from a user agent, you are bound to create a security gap - but that holds true for anything users send you, not just JWTs. JSON Web Token is not that hard to grok, and it isn't a 'foot-gun' technology (just practice trigger discipline, i.e., read the documentation).

I'm a Java guy, so I'll limit my experience to the libraries available there, but of the four Java libraries available, three provide strict validation of the signing algorithm out of the box, and explicitly document this in their examples and documentation (I think the fourth does too, but I haven't tried that one myself).

JSON Web Token is a neat standard that has a lot of good parts that can reliably be used to create and process authentication tokens. So if you are still worried about developers getting it wrong, then instead of saying 'don't use JWT', why not define a safe subset of the specification and promote that? Call it 'iron-jwt' or something. It beats rolling your own solution.

Or if you want to be particularly constructive and feel that developers are misusing this technology, write a sensible, short, to the point implementers guide for using JWT and spread the word.

2
jarym 17 hours ago 3 replies      
Never read such nonsense masquerading as an authoritative piece of information.

Author clearly doesn't appreciate that the ultimate truth of any authentication scheme is that you should not trust anything from the user. So what if you take away a client's ability to specify what algorithm a piece of data is signed or encrypted with - if you blindly accept whatever a client did and proceed, then you're always gonna find yourself vulnerable.

Intel didn't use JWT tokens or pass a client header - they just trusted what the client sent and landed themselves in the same mess. I make the point to illustrate that what real security pros do is make sure basic checks like validating user input is done regardless of what specification or algorithm is involved.

3
ksri 18 hours ago 4 replies      
I maintain a spreadsheet of the pros and cons of various authentication techniques - https://docs.google.com/spreadsheets/d/1tAX5ZJzluilhoYKjra-u...

JWT is extremely useful when you want a one-time use token to pass a claim to another system. For example, the employee portal generates a link for a user to check their available leave in the HR system. Since the user is logged in to the employee portal, they shouldn't have to log in to the HR system. JWT is great for this use case. But for general session management, there are better solutions.
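A sketch of minting such a hand-off token in Rust with the jsonwebtoken crate - the claim names, the 60-second lifetime, and the crate API here are illustrative assumptions:

  use jsonwebtoken::{encode, EncodingKey, Header};
  use serde::Serialize;

  #[derive(Serialize)]
  struct LeaveClaims {
      sub: String, // employee id
      aud: String, // consuming system, e.g. "hr-system"
      exp: usize,  // unix timestamp; keep it short, e.g. now + 60
  }

  fn mint(employee_id: &str, now: usize, secret: &[u8]) -> Result<String, jsonwebtoken::errors::Error> {
      let claims = LeaveClaims {
          sub: employee_id.to_owned(),
          aud: "hr-system".to_owned(),
          exp: now + 60,
      };
      encode(&Header::default(), &claims, &EncodingKey::from_secret(secret))
  }

The HR system then only needs to validate the signature, audience and expiry; the short lifetime limits how long a leaked link can be replayed.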

4
saganus 12 hours ago 2 replies      
There's a lot of conversation on when should I use JWTs and when not to.

But as alternatives go, has anyone tried using Macaroons?

They've been mentioned in HN a few times but I've never seen the tech catch up, even though it looks like a very cool way to implement authorization.

Is there any particular reason why?

Here are some resources I've found and I think Macaroons is a very interesting concept. Even the paper is accessible to someone like me, without any real security or cryptography expertise.

Research paper: https://research.google.com/pubs/pub41892.html

A new way to do authorization: http://hackingdistributed.com/2014/05/21/my-first-macaroon/

(This link has an invalid HTTPS certificate if that concerns you): https://evancordell.com/2015/09/27/macaroons-101-contextual-...

After I found this, I've always wanted to try it out but haven't had the chance. Does anyone has any experience or comments about it?

5
jondubois 17 hours ago 1 reply      
Almost all of the 'alternative' use cases mentioned by the OP are not what JWT was designed for. JWT was designed to authenticate users/entities by giving them a token which contains basic non-sensitive metadata about that user/entity. It's for authentication, not for authorization.

Sure, some implementations of JWT have had bugs in the past, but this hasn't been an issue for quite a while and it's definitely not an issue with the RFC itself. It's the same as if you blamed the TLS/SSL RFC for being responsible for the heartbleed bug in OpenSSL - It makes no sense.

>> You might have heard that you shouldn't be using JWT. That advice is correct - you really shouldn't use it.

This type of blanket thinking is dangerous. There are cases where JWTs are practically necessary and unavoidable. Whenever an extremist blanket idea like this catches on in this industry, it becomes a major pain to have to explain to people over and over why in THIS SPECIFIC CASE it is actually the best solution possible.

6
exabrial 13 hours ago 1 reply      
I disagree with most of his points, but what really bothers me is the underlying message of his article. I don't think we should fool ourselves into thinking we can design a protocol that isn't susceptible to implementation problems.

Did we get SSL/TLS right? No, we've failed a few times, but we're doing better. Why is that? People are paying attention more that ever to OpenSSL and now we have competition with LibreSSL and the like.

Side note: The irony of his coupler example; it's a flawed spec, not flawed implementations... the implementations followed the spec but the spec was dangerous from the start. JWT is the opposite: good spec, but a long time ago some early adopters didn't get things right.

7
luord 7 hours ago 0 replies      
> implementation errors should lower your opinion of a specification. An error in one implementation means other implementations are more likely to contain the same or different errors. It implies that it's more difficult to correctly implement the spec.

Discussion on cryptography and particular implementations aside, I think this is sound and I normally follow this when judging technologies.

8
wvh 17 hours ago 0 replies      
I've implemented some of OpenID Connect, "a simple identity layer on top of the OAuth 2.0 protocol". Combined with OAuth 2.0, it's one of those giant corporate specs (giant at least for security/crypto purposes) with way too many options so anybody can do anything with any algorithm and any kind of workflow.

I wanted to go with a simple, minimal NaCl-based system but in the end did implement a lot of OpenID Connect in the hope it would make interoperability easier with existing client libraries. I don't want to write a client for each and every programming language other people in any related projects would want to use. That in my opinion is the value of something like JWT: you can tell people up front that that's the way they're going to get the user's ID data, no matter how much server or client implementations will be switched around.

I feel that when faced with a spec like OpenID Connect and OAuth 2.0, it's not necessary to implement more than what is strictly needed. If you don't need all the flows and algorithms in your project, don't implement and don't accept them. The parts you do implement should comply - why base it on a spec if you throw any interoperability out of the window - but don't waste months trying to correctly implement all of a huge corporate spec if that doesn't make sense for the size of your project or organisation. Complete implementations might have value only if that's your main product and you need that line to sell it.

I use JWT only as ID, not as session, allow only one server-chosen algorithm for signing, and rely on TLS for encryption.

There's clearly a need for up-to-date "web-approved" standards to pass crypto-friendly data structures around - or maybe I'm just not familiar with any recent efforts. Normalising and serialising JSON is pretty error prone...

9
tptacek 12 hours ago 5 replies      
In cryptography, we have a concept of "misuse resistance". Misuse-resistant cryptography is designed to make implementation failures harder, in recognition of the fact that almost all cryptographic attacks, even the most sophisticated of them, are caused by implementation flaws and not fundamental breaks in crypto primitives. A good example of misuse-resistant cryptography is NMR, nonce-misuse resistance, such as SIV or AEZ. Misuse-resistant crypto is superior to crypto that isn't. For instance, a measure of misuse-resistance is a large part of why cryptographers generally prefer Curve25519 over NIST P-256.

So, as someone who does some work in crypto engineering, arguments about JWT being problematic only if implementations are "bungled" or developers are "incompetent" are sort of an obvious "tell" that the people behind those arguments aren't really crypto people. In crypto, this debate is over.

I know a lot of crypto people who do not like JWT. I don't know one who does. Here are some general JWT concerns:

* It's kitchen-sink complicated and designed without a single clear use case. The track record of cryptosystems with this property is very poor. Resilient cryptosystems tend to be simple and optimized for a specific use case.

* It's designed by a committee and, as far as anyone I know can tell, that committee doesn't include any serious cryptographers. I joked about this on Twitter after the last JWT disaster, saying that JWT's support for static-ephemeral P-curve ECDH was the cryptographic engineering equivalent of a "kick me" sign on the standard. You could look at JWT, see that it supported both RSA and P-curve ECDH, and immediately conclude that crypto experts hadn't had a guiding hand in the standard.

* Flaws in crypto protocols aren't exclusive to, but tend to occur mostly in, the joinery of the protocol. So crypto protocol designers are moving away from algorithm and "cipher suite" negotiation towards other mechanisms. Trevor Perrin's Noise framework is a great example: rather than negotiating, it defines a family of protocols and applications can adopt one or the other without committing themselves to supporting different ones dynamically. Not only does JWT do a form of negotiation, but it actually allows implementations to negotiate NO cryptography. That's a disqualifying own-goal.

* JWT's defaults are incoherent. For instance: non-replayability, one of the most basic questions to answer about a cryptographic token, is optional. Someone downthread made a weird comparison between JWT and Nacl (weird because Nacl is a library of primitives, not a protocol) based on forward-security. But for a token, replayability is a much more urgent concern.

* The protocol mixes metadata and application data in two different bag-of-attributes structures and generally does its best to maximize all the concerns you'd have doing cryptography with a format as malleable as JSON. Seemingly the only reason it does that is because it's "layered" on JOSE, leaving the impression that making a pretty lego diagram is more important to its designers than coming up with a simple, secure standard.

* It's 2017 and the standard still includes X.509, via JWK, which also includes indirected key lookups.

* The standard supports, and some implementations even default to, compressed plaintext. It feels like 2012 never happened for this project.

For almost every use I've seen in the real world, JWT is drastic overkill; often it's just a gussied-up means of expressing a trivial bearer token, the kind that could be expressed securely with virtually no risk of implementation flaws simply by hexifying 20 bytes of urandom. For the rare instances that actually benefit from public key cryptography, JWT makes a hard task even harder. I don't believe anyone is ever better off using JWT. Avoid it.
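That last suggestion as code - a bearer token from 20 random bytes, hexified. This sketch assumes the rand and hex crates:

  extern crate hex;
  extern crate rand;

  use rand::RngCore;

  fn bearer_token() -> String {
      let mut bytes = [0u8; 20];
      // thread_rng is a cryptographically secure, OS-seeded generator.
      rand::thread_rng().fill_bytes(&mut bytes);
      hex::encode(&bytes[..]) // 40 hex characters; look it up server-side
  }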

10
deathanatos 8 hours ago 1 reply      
> But the server and the client should support only a single algorithm, probably HMAC with SHA-256, and reject all of the others.

If you have a centralized system dealing w/ authentication, this doesn't work, as now everything that needs to verify JWTs needs the secret. The support for RSA, instead of HMAC, is there to meet a different set of requirement.

What people fail to remember is that JWTs and the libraries that work with them do not just wrap data to be authenticated. They also handle verifying the various claims on a token - is this token applicable here? is this token expired? - things relevant to an authentication token, not just a mere signed blob of data. The suggestion to leave those to be reimplemented by every single end user is bad advice.

11
komerdoor 15 hours ago 1 reply      
I am using JWT for my projects to keep stateless sessions between servers and for some other tokens (refresh, register, reset pass, etc.). Of course extra security measures are required (MitM protection [HTTPS etc.], XSS / CSRF prevention, etc.), but this has nothing to do with JWT. I use encryption with a frequently rotated private key to encrypt the part of the payload that only the server may read.

A good read at:https://stormpath.com/blog/where-to-store-your-jwts-cookies-...

12
10165 12 hours ago 0 replies      
"..."algorithm agility" that is a proud part of the specification" (italics mine)

No idea about "JWT" but maybe this is part of the psychology that keeps schemes like SSL/TLS in use. (Part.)

Do you think there are people who are actually proud of achieving complexity in a specification or implementation?

13
tscs37 12 hours ago 1 reply      
It doesn't seem like the author is actually proposing an alternative; rather, the only thing he says is "Don't use JWT, here is how you can do X with NaCl".

Even worse;

It is suggested that to authenticate a user one should use TLS. That might be true for a login form, but not beyond that. Once you have logged the user in, you need to continue authentication on every request. JWTs are one way to put this information on the client side without having to put any much trust into it.

The second example is a simple asymmetric encryption example which... for some reason JWT is not a solution for? I've used Ed25519 plenty of times with JWT (custom algo header in this case), so I see no problem there plus... I don't think this is what JWT is actually trying to solve.

The third example is encrypted data to the client, which is also something JWT isn't trying to solve, this is what JWE is for. JWT is purposefully unencrypted and I'm not sure how many developers would actually pretend a signature is encryption.

The last part is an actual example of JWT use cases, in which case the author blabs on about the (in)famous "alg=none" bug a lot of libraries had. I've specifically used the Go library mentioned, and strictly enforcing the algo is a no-brainer if you are using a custom one anyway. On the other hand, I still use HMAC for a separate short-term authentication token over endpoints (to make blacklisting logins easier).

So JWT simply gives me some flexibility in sharing common code for authentication, the same code can consume the long-term tokens and the short-term tokens and much more if needed in the future.

I'm not saying JWT is the end all for problems, but it's rather easy ready-to-use solution for some of my problems.

Why write a signature library when there is one ready to use that, with care, is safe to use?

14
ath0 15 hours ago 1 reply      
This is a case where the high-level point - "insecure options shouldn't be configurable" - conflicts with a reality of crypto protocols: you have to make tradeoffs based on the best known attacks and the platform you're running on, and those change over time. The best hashing algorithm, HMAC algorithm, and signing algorithm to use on a mobile device in 2007 isn't the same as the ones you'd pick today. Any protocol expected to last 10 years should allow for the selection of an underlying crypto algorithm. Maybe "none" should never have been an option - but tying the algorithm to the protocol has pretty severe drawbacks, too.

More broadly - JWT isn't just about the exchange between a single client and a server; the choice of "use cases" misses a very real constraint of multi-party protocols. Within the context of OpenID (or OAuth more broadly), it's about the relationship between an end-user, a resource owner, a client and an authorization server -- all of whom need to be able to interact with a token, and often offline.

15
ceejay 16 hours ago 0 replies      
I was just thinking: what is a good way to phrase the problem in order to understand whether there are more pros or cons to having the client vs the server decide which algorithms to use in a transaction?

I haven't thought this through fully, but as far as I can tell ecosystems on the web evolve. And so I think it's probably a good idea that we architect things for the web in such a way that we don't inhibit that evolution. When you put a decision like encryption algorithm in the client's hands does it feel to anyone else that the security will evolve more rapidly, and thus remain more robust? When the client is deciding, there's a larger pool of people "voting" for what is an acceptable level of security. Even though a lot of those "votes" will be based on the default settings of a library, that library will over time become less popular as more and more people consider it unsafe.

By the same token, if a particular service (server-side) does not keep up with that evolution, fewer and fewer people will use it as other (safer) services pop up.

16
ZoneGhost 12 hours ago 1 reply      
For the record, JWT is a payload and format spec for JWS/JOSE, which is really what you're complaining about. JWT is merely a claims set and an optional dot-notation serialization of a single signature.

JWT/JWS libraries that handle all the validation alone are treacherous. You should ALWAYS parse the JSON beforehand, perform your claims and header(s) validation, and then pass the payload/header/signature on for verification. If anything, it's computationally less risky than running the signature blindly through a signature validation function and then checking the header.

17
makomk 16 hours ago 0 replies      
This complains that JWT does not have forward secrecy, and then recommends NaCl's box primitive instead... which does not have forward secrecy either. (This isn't exactly drawn attention to in the NaCl docs for some reason.)
18
ceejay 17 hours ago 0 replies      
If the specification requires the server to decide which algorithm to use, a naive client - who doesn't know which algorithms are safe or not - is just as dangerous.

As far as I know there are no algorithms that exist today that we can guarantee will never be broken in the future. So algorithm choice inherently must be decoupled from the specification.

EDIT: Or a naive server implementation for that matter...

19
palmdeezy 13 hours ago 1 reply      
What makes this all worse is that companies like Auth0 and Stormpath have flooded search results with self promoting blog posts masked as tutorials. This just makes it harder for developers to learn the basics without getting product shoved in their face.
20
arrty88 13 hours ago 0 replies      
Are there any examples of big JWT hacks which took place due to a vulnerability?
21
pmarreck 9 hours ago 0 replies      
He should release "JWT-H2O" i.e. JWT with HMAC 256 Only. :) (left a similar comment on his blog)
22
unscaled 15 hours ago 0 replies      
Good writeup of what you should be careful with in regards to JWT. There are some inaccuracies there too. First of all, JWT doesn't support encryption at all - it's JWE that does that. It's an important distinction, since most JWT libraries I ran into don't feature encryption, so if you want encrypted tokens you'll need to use an extra JWE library, or a more full-featured JOSE library.

JWE also supports more than just RSA - it definitely does support Elliptic Curve Cryptography (although I would prefer if they chose Curve25519 instead of the NIST curves), and the curves are used with EC Diffie-Hellman in a certain construction that actually gives you more forward secrecy than NaCl box. NaCl box offers you no forward secrecy, whereas all the ECDH-ES algorithms in JWE offer you partial forward secrecy, which saves you when the sender key is leaked, but not when the receiver key is leaked. That's the best you can get: we can't have two-way perfect forward secrecy in a non-interactive protocol like JWT, since we can't perform a direct negotiation of ephemeral keys between sender and receiver. Of course, we're actually comparing apples to oranges here: you can get the same partial forward secrecy guarantee with libsodium's sealed box (not in the original NaCl): https://download.libsodium.org/doc/public-key_cryptography/s...

As for an alternative to JWT/JWE, I think it really depends on what people want, but I have slightly different suggestions:

1. For simple access tokens in low-medium load scenarios, access tokens stored in Redis are probably simpler to implement than JWT + revocation.

2. If you don't have any secret information inside the token, HMAC-SHA256.

3. If you have secret information, I'd actually go with libsodium's XChaCha20-Poly1305 AEAD: https://download.libsodium.org/doc/secret-key_cryptography/x... secretbox doesn't support AEAD, but you often have external data that you want to tie to the token. It's very easy to implement exactly the same construct with XSalsa20, so it would really be just secretbox with AEAD support, but that's non-standard and you won't find any native library support.

4. Public signature of public data with Ed25519 (this is NaCl's crypto_sign).

5. Authenticated asymmetrically encrypted tokens: to get partial forward secrecy, the easiest way would be using sealedbox and then signing the result with crypto_sign. It's not the most efficient way to do this, but NaCl/libsodium don't have a tailored operation for this use-case, so you would have to use primitives directly.

All in all, not quite clear cut, and nobody is making a library that does that for you, so it's easy to see where JOSE is coming from.
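For what it's worth, option 2 above is only a few lines with the hmac and sha2 crates in Rust; this is a sketch, so verify the exact method names against those crates' docs:

  use hmac::{Hmac, Mac};
  use sha2::Sha256;

  type HmacSha256 = Hmac<Sha256>;

  fn sign(payload: &[u8], key: &[u8]) -> Vec<u8> {
      let mut mac = HmacSha256::new_from_slice(key).expect("HMAC accepts any key length");
      mac.update(payload);
      mac.finalize().into_bytes().to_vec()
  }

  fn verify(payload: &[u8], tag: &[u8], key: &[u8]) -> bool {
      let mut mac = HmacSha256::new_from_slice(key).expect("HMAC accepts any key length");
      mac.update(payload);
      mac.verify_slice(tag).is_ok() // constant-time comparison
  }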

23
Gonzih 13 hours ago 0 replies      
One should always validate the algorithm before doing anything with a JWT on the server side.
24
retrogradeorbit 18 hours ago 2 replies      
This is acronym heavy. The author needs to specify in the opening which JWT he is talking about: Java Web Toolkit or JSON Web Token? It took me a fair amount of reading to work out which one he means. And even then I'm only pretty sure he means JSON Web Token, but I can't be certain, because the Java Web Toolkit also connects to servers, and uses encryption, and all the other stuff he talks about.
25
gant 18 hours ago 2 replies      
Oh hey, it's another "JWT libraries used to be terrible" article.

Idiots shouldn't write authentication code. Especially when credentials are involved.

Funny jwt-go was used as an example; it was never vulnerable to the alg-none attack: https://github.com/dgrijalva/jwt-go/commit/2b0327edf60cd8e04

26
sergior 18 hours ago 2 replies      
hey, the author is missing the bit where you can disallow the client from choosing the algorithm. No need to read the rest...
27
jlebrech 18 hours ago 0 replies      
Use a session store and give your client a cookie.
28
yellowapple 12 hours ago 0 replies      
So we should stop using cell phones because one implementation happened to spontaneously combust. Got it.

Or maybe - just maybe - the claim that specifications should be judged by their implementations is entirely nonsensical. The fact that Internet Explorer exists doesn't in and of itself mean that web browsing as a whole is a defective concept. The fact that single-ply toilet paper exists doesn't mean we should stop using toilet paper. Likewise, the fact that some JWT implementations are defective does not in and of itself say a whole lot about JWT itself.

29
qatanah 17 hours ago 1 reply      
One of the advantages of using JWT is that verification happens at the CPU level and not as a lookup via disk. This may sound strange, but if you don't have a quick lookup cache or a centralized cache, this is a great advantage.
30
cheerioty 18 hours ago 0 replies      
Huh? The alternatives listed make me wonder if the author actually knows what JWT is being used for.
31
jlebrech 18 hours ago 1 reply      
JWT is just storing more info than you would with a cookie but pretending it's secure by encrypting it with an algorithm the browser has access to.
29
My Struggles with Rust compileandrun.com
346 points by wkornewald  1 day ago   307 comments top 28
1
kstenerud 1 day ago 9 replies      
One thing I've found with rust is that you struggle struggle struggle trying to do a simple task, and then finally someone says "Oh, all you need to do is this".

Rust has already reached the point where it leaves the world behind. Only the people who have been there since the early days really understand it, and getting into rust gets harder and harder as time goes on.

Yes, there's some awesome documentation, and the error messaging has gotten a lot better. But the power and flexibility of Rust comes at the cost of it becoming harder and harder to figure out how all the thousands of little pieces are supposed to fit together. What's really needed is a kind of "cookbook" documentation that has things like "How to read a text file with proper error handling" and "what is the proper way to pass certain kinds of data around and why".

Right now there's a lot of "what" documentation going around, but little that discusses the "how to and why".
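As one example of the kind of entry such a cookbook could open with - "read a text file with proper error handling" - here's a hedged sketch:

  use std::fs::File;
  use std::io::{self, Read};

  fn read_text(path: &str) -> io::Result<String> {
      let mut contents = String::new();
      // `?` propagates both the open error and the read error to the caller.
      File::open(path)?.read_to_string(&mut contents)?;
      Ok(contents)
  }

  fn main() {
      match read_text("notes.txt") {
          Ok(text) => println!("read {} bytes", text.len()),
          Err(e) => eprintln!("could not read notes.txt: {}", e),
      }
  }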

2
erickt 1 day ago 6 replies      
Hello Justin Turpin! Sorry to hear about your struggles with Rust. It's always going to be a bit more verbose using Rust than Python due to type information, but I think there are some things we could do to simplify your code. Would you be comfortable posting the 20-line code for us to review? I didn't see a link in your post.

Anyway, so some things that could make your script easier:

* for simple scripts I tend to use the `.expect` method if I plan on killing the program if there is an error. It's just like unwrap, but it will print out a custom error message. So you could write something like this to get a file:

  let mut file = File::open("conf.json")
      .expect("could not open file");
(Aside: I never liked the method name `expect` for this, but it's too late to do anything about that now.)

* next, you don't have to create a struct for serde if you don't want to. serde_derive is definitely cool and magical, but it can be too magical for one-off scripts. Instead you could use serde_json::Value [0], which is roughly equivalent to what Python's json parser would produce.

* next, serde_json has a function called from_reader [1], which you can use to parse directly from a `Read` type. So combined with Value you would get:

  let config: Value = serde_json::from_reader(file)
      .expect("config has invalid json");
* Next you could get the config values out with some methods on Value:

  let jenkins_server = config.get("jenkins_server")
      .expect("jenkins_server key not in config")
      .as_str()
      .expect("jenkins_server key is not a string");
There might be some other things we could simplify. Just let us know how to help.

[0]: https://docs.serde.rs/serde_json/enum.Value.html

[1] https://docs.serde.rs/serde_json/de/fn.from_reader.html

3
dbattaglia 1 day ago 5 replies      
I was under the impression that the (somewhat) verbose syntax for error handling and memory management via the type system was a necessary side effect of Rust's entire point of existence: a compiler-guaranteed safe systems language. Neither Python nor C forces you in any way to pay attention to errors, making simple scripts much easier to write.

I guess I'm just surprised people think that Rust should be as simple to use as Python. Maybe I'm wrong.

4
tinco 1 day ago 4 replies      
Being able to port a 20-line Python script to 20 lines of Rust is the holy grail. Surely Rust has the ambition to one day achieve that, but it is by no means the main priority, nor was it the original design goal of the language.

Justin criticizes the file_double function for being complex, with nested maps and conditionals. All of this complexity is also in the Python code, just hidden away in abstractions, the library, and the virtual machine. Rust, right now, is still very explicit and revealing of inherent complexities. This code is exactly why you should use Python and not Rust for this kind of little script. One day the Rust developers hope Rust will be comfortable enough for you to consider using it in this situation, but it won't be soon.

The point gets softened a little by the remark that it probably would not be a picnic in C either, but I don't think even that is true. C still allows you to be very expressive; it would not encourage using those maps or even half of those conditionals. Rust is just that much more explicit about complexity.

That said I honestly believe Rust is the best thing that has happened to programming languages in general in 20 years. Rust is rocking the socks off all the non-web, non-sysadmin fields, soon its community will make good implementations of almost every hard problem in software and Rust will be absolutely everywhere.

5
f1b37cc0 1 day ago 1 reply      

  > import json
  > with open("config.json") as f:
  >     contents = f.read()
  > config = json.loads(contents)
translates to:

  extern crate serde_json as json;

  fn read_json() -> Result<json::Value, Box<std::error::Error>> {
      let file = std::fs::File::open("config.json")?;
      let config = json::from_reader(&file)?;
      Ok(config)
  }
And

  > import configparser
  > config = ConfigParser()
  > config.read("config.conf")
can be translated to:

  extern crate config;

  use config::{Config, File, FileFormat};

  fn read_config() -> Result<Config, Box<std::error::Error>> {
      let mut c = Config::new();
      c.merge(File::new("config", FileFormat::Json))?;
      Ok(c)
  }
Difficult stuff indeed.

6
pornel 1 day ago 4 replies      
These struggles are real. I don't see a way around them other than just learning them (and then they go away, because you know what code won't work, and don't fight it).

It's probably because Rust looks and operates mostly like a high-level language, but still satisfies low-level constraints.

e.g. the confusing difference between `&str` and `String` is the equivalent of C's `const char *str = ""` vs `char *string = malloc()`.

In C, if you had code that does:

  char *str = foo();
  free(str);
you'd know that in `foo()` you can't return `"error"`, since an attempt to free it would crash the program. And the other way, if the caller did not free it, you'd know you can't have a dynamic string, because it would be leaked. In Rust you don't see the `free()`, so the distinction between non-freed `&str` and freed `String` may seem arbitrary.
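(The same distinction, sketched in Rust; the function names are invented for illustration:)

  // A &'static str borrows a string baked into the binary: nothing to free.
  fn static_error() -> &'static str {
      "error"
  }

  // A String owns a heap allocation, dropped (i.e. freed) by whoever ends
  // up holding it; this is the invisible free() the analogy refers to.
  fn dynamic_error(code: i32) -> String {
      format!("error {}", code)
  }

  fn main() {
      println!("{} / {}", static_error(), dynamic_error(404));
  }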

7
Inufu 1 day ago 5 replies      
My main gripe with Rust so far has been the unnecessary profusion of Result<> types, making it hard to process and forward errors.

Case in point: the example in the article from the rust documentation that converts errors to strings just to forward them: https://doc.rust-lang.org/book/error-handling.html#the-limit...

In practice, I find a type like Google's util::StatusOr (https://github.com/google/lmctfy/blob/master/util/task/statu...) a lot easier to use (I've written >100kloc c++ using it). This uses a standardized set of error codes and a freeform string to indicate errors. I've yet to encounter a case where these ~15 codes were insufficient: https://github.com/google/lmctfy/blob/master/util/task/codes...

8
msangi 1 day ago 0 replies      
A common theme I find in posts criticizing Rust is that their authors take a problem they've already solved in another language, try to blindly convert it into non-idiomatic Rust, and then complain when things get awkward.

I think that it's important to pick the right tool for the job and to follow the patterns of the tool you're using.

Is Rust the right tool for the task described in the post? Probably not, but it could still be used, albeit with more work than Python would require.

What's really missing is a resource showing common problems and their idiomatic solutions.

9
thegeomaster 1 day ago 1 reply      
The `error-chain` crate [1] exists to get rid of precisely the error handling boilerplate the author has encountered. That's not ideal, though, as I believe the place for such functionality is the core language, not a separate library, but it gets the job done. As for the `let mut file` bit, that makes sense to me: a file in the standard library is an abstraction over a file descriptor in the operating system, and the descriptor has a file pointer which must be advanced when you read from it. I don't consider it internal state; the read operation will return new data every time, so it's not a pure function. It follows that in order to behave that way, it has to depend on some pretty explicit state.
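(A minimal sketch of the boilerplate error-chain removes, assuming its 0.x macro API with io and serde_json as the linked error types:)

  #[macro_use]
  extern crate error_chain;
  extern crate serde_json;

  error_chain! {
      foreign_links {
          Io(::std::io::Error);
          Json(::serde_json::Error);
      }
  }

  // `?` now converts both io::Error and serde_json::Error into the
  // generated Error type; no map_err-to-String chains needed.
  fn read_config() -> Result<serde_json::Value> {
      let file = ::std::fs::File::open("config.json")?;
      Ok(serde_json::from_reader(file)?)
  }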

As the other comment said, Rust needs to make some trade-offs, because you simply can't have an expressive and easy-to-use language that runs so close to the metal and is aimed at being C++-level fast. As such, Rust will never be as easy to write as Python, and for scripts like the author mentioned, I'd say that Python is a much better choice than Rust.

Rust is, by design, a systems programming language and it does have complexities and gotchas that arise from the need to have a lot of control of what actually happens at the machine code level. If we had a Sufficiently Smart Compiler(tm), of course, you wouldn't have to worry yourself about those low-level details and just write what your program needs to do and nothing more. However, in the absence of such an ideal, we must accept that a high-level abstraction must always leak in some way in order to let us control its operation more closely to get the performance we need. In my opinion, it's much better that necessary abstraction leakage is made a deliberate part of the API/language and carefully designed to minimize programmer error, and Rust, I think, does a good job of doing exactly that.

That's not to say that the language cannot be made more ergonomic. For one, I think that rules for lifetime elision are a bit too conservative and that the compiler can be made smart enough to deduce more than it currently does. I'm also excited about the ergonomics initiative, and I hope that the core team will deliver on their promises. In general, as someone who's written more lines in C/C++ in my life than any other language, I'm very excited about the language as a whole, as I think it provides the missing link between those languages that are expressive, high-level, and reasonably safe but slow, and those that are fast, low-level, a bit terse, and allow one to shoot oneself in the foot easily.

[1]: https://crates.io/crates/error-chain

10
dep_b 1 day ago 3 replies      
The big question is: would the Python script crash or handle the error when obvious problems like invalid JSON or a missing file occur?

My experience with Swift vs Objective-C is that clean Swift is crash free but more verbose when all other things are equal.

If you don't need that level of safety because it's just a small script, Python was the right choice.

11
cdunn2001 20 hours ago 0 replies      
https://nim-lang.org/docs/parsecfg.html

(Scroll to the examples.)

An exception would be thrown on error. That exception could be trapped in a simple try/catch block.

Nim is very similar to Python -- but statically typed and compiled (quickly) to machine code. There are many situations where Python is a better choice than Nim, but if you're looking to translate Python code for speed and type-safety, Nim is worth considering.

And if you want to translate Python to Nim gradually, look at this magic:

For calling Nim from Python: https://github.com/jboy/nim-pymod

For calling Python from Nim: https://github.com/nim-lang/python/tree/master/examples

12
cousin_it 1 day ago 5 replies      
Rust's aversion to exceptions is exactly like Go's aversion to generics - a strongly held position that doesn't actually make anyone's life easier.
13
djhworld 1 day ago 3 replies      
The author doesn't really justify why he needed to port the Python script to Rust in the first place.

Pulling down some JSON, doing a bit of transformation, and sending alerts seems like a perfect candidate for a high-level language. I don't see any reason why you would port it to Rust unless you had significant performance concerns.

14
ungzd 1 day ago 0 replies      
I don't think it should be as easy and concise as Python; it's a systems programming language without GC. It is not designed to be used in place of all programming languages, just in place of C and C++.
15
kibwen 1 day ago 0 replies      
16
gavanwoolery 1 day ago 1 reply      
Rust has stressed ergonomics of late, yet I sometimes struggle to read Rust code. Obviously there is value in elegant code, but my question is: would anybody find value in an extremely simple language that could compete with the likes of C/C++? Does something like this exist?
17
shadowmint 1 day ago 5 replies      
It makes me sad to see the example. This is why I maintain `.unwrap()` is one of the worst things in rust.

...because people use it; and then say, 'but don't use unwrap...'; and then use it anyway, and your 'safe' language happily crashes and burns every time something goes wrong.

Blogs and documentation are particularly prone to it.

Result and Option types are good; but if you're going to have unwrap, you basically have to have exceptions as well (or some kind of panic recovery), because people prefer it to the verbose match statement. :/
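(The trade-off in miniature; a small sketch:)

  fn main() {
      // What people reach for -- panics if parsing fails:
      let n: i32 = "42".parse().unwrap();
      println!("{}", n);

      // The verbose alternative they're avoiding:
      let m: i32 = match "41".parse() {
          Ok(v) => v,
          Err(e) => {
              eprintln!("not a number: {}", e);
              return;
          }
      };
      println!("{}", m);
  }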

18
liveoneggs 1 day ago 1 reply      
Isn't Nim built for Pythonistas to do just this sort of thing?
19
ricardobeat 1 day ago 1 reply      
If the goal is [fast, compiled, statically typed], a language like Crystal [1] would probably be a better fit for the author:

 require "yaml" config = YAML.parse(File.open("test.yaml"))
[1] http://crystal-lang.org

20
alkonaut 1 day ago 0 replies      
Would it be theoretically possible to get to this?

  fn gimme_config(some_filename: &str) -> MyConfiguration {
      toml::from_file(some_filename).unwrap()
  }
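(Something close is already within reach via toml + serde, minus the from_file convenience; a sketch, with the struct field invented and `read_to_string` assuming a reasonably recent standard library:)

  #[derive(serde::Deserialize)]
  struct MyConfiguration {
      jenkins_server: String, // invented field, for illustration
  }

  fn gimme_config(some_filename: &str) -> MyConfiguration {
      let text = std::fs::read_to_string(some_filename).unwrap();
      toml::from_str(&text).unwrap()
  }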

21
vultour 1 day ago 0 replies      
This is a staggeringly bad comparison, almost like comparing drawing a line in C# (couple lines) to trying the same thing in C++ (possibly 100+ lines).
22
AlphaWeaver 1 day ago 0 replies      
Why was the title of this post changed?
23
Dowwie 1 day ago 0 replies      
Fortunately, Rust gets so much love from so many that it will thrive without winning over everyone.
24
frik 1 day ago 1 reply      
Rust is important; it has a great chance of being the first modern native systems language with memory safety.

Though I wish it had a less exotic syntax. It's like C++ and Erlang had a baby. Look at modern languages with nice syntax like Go, Julia, and Swift and compare them to Rust. Someone coming from C, C++, C#, Java, PHP, or JavaScript has to learn a lot of new syntax twists that look uncommon and different for little reason. Sure, some overly complex early syntax ideas, like different sigils for different pointer types, vanished in newer Rust releases. Now it's probably too late to improve the syntax.

25
dep_b 1 day ago 2 replies      
26
vfclists 1 day ago 1 reply      
The author might consider Nim - https://nim-lang.org/. It is a statically-typed compiled language that roughly equals Rust in performance, but has a much cleaner, higher-level, Python-flavored syntax, and a very Pythonic parsecfg module in the stdlib.

PS. To my earlier downvoters can I have my hard won karma back, please??? This is the response I have been advised to proffer after consulting on the Nim forum, after my earlier terse comment.

27
p0nce 1 day ago 0 replies      
Hopefully some mechanism to handle exceptional cases at the language level will be invented soon.
28
vfclists 1 day ago 2 replies      
Use Nim
30
FBI director Comey backs new Feinstein push for decrypt bill techcrunch.com
291 points by pearlsteinj  5 days ago   212 comments top 31
1
grandalf 5 days ago 13 replies      
From his perspective as the head of the FBI whose job it is to achieve outcomes within the law, of course Comey advocates encryption backdoors. He would likely also advocate allowing the FBI to suspend the bill of rights for any suspect during the duration of an investigation, and he'd quite likely prefer that the FBI be legally allowed to torture suspects if extreme techniques were viewed as likely to result in useful information. To law enforcement, the rights of a suspect are a barrier to many convictions.

How did we get to this point? Nobody would reasonably argue that extreme surveillance measures, the Patriot Act, etc., are necessary to stop the vast majority of crimes from occurring, so why is it so easy for seemingly serious/intelligent people to think this nonsense is reasonable?

Members of our government are so indoctrinated about stopping "terrorism" that they have lost all sense of perspective. Terrorism is a political word to describe political enemies of the state, yet the patriot act and surveillance machinery has been used in enforcement of many other kinds of (less serious) crime.

I am surprised anyone can still use the word "terrorism" with a straight face anymore after it's become so clear that there is no large existential threat (merely the occasional zealot who acts out due to his/her own mental health issues). And in spite of a historically unprecedented global surveillance system there have been no attacks thwarted.

Comey is a symptom of the kind of cowardly, authority-respecting society we've become. I look forward to the day when our FBI director is not someone whose gaffes and judgment calls we read about in the newspaper on a regular basis.

2
dhfhduk 5 days ago 4 replies      
I'm confused about this. I'm hurried at the moment, but this seems to be a bill that orders tech companies to provide a solution to encryption without having a backdoor?

Isn't this like legislating a violation of mathematics or something?

3
FullMtlAlcoholc 5 days ago 2 replies      
So, the NSA and the CIA were recently hacked, yet these numbskulls think we can create a system that will only be accessed by "the good guys". How many hacks, leaks, etc. will it take for them to understand that if this passes, it will be the end of online security?

New Rule: If you want to propose cybersecurity legislation, you need to pass the fizz buzz test.

4
peterwwillis 5 days ago 0 replies      
> "What nobody wants to have happen is something terrible happen in the United States and it be connected to our inability to access information with lawful authority."

But they're not asking for that. They're asking for the ability to force companies to grant them access to information without something terrible happening.

The only way you could prevent something terrible happening, and have that prevention be "connected to [their] ability to access information with lawful authority", is to have the ability to inspect private data. And the only reasonable way they would do that is to do it surreptitiously.

They could try just asking the user to unlock their iPhone, or demand it with a court order (where I assume they can plead the 5th), but either would tip the suspect off. So they have to do it without the user's knowledge. And the only way to do that is if the company has a backdoor, or makes it so incredibly insecure as to no longer guarantee privacy at all.

The only logical way to give the FBI what it wants is to compromise user privacy.

> During the session, Comey also made repeat plays for expanding the scope of national security letters (NSL) arguing that these administrative subpoenas were always intended to be able to acquire information from internet companies, not just from telcos.

The FBI claims that they would always get permission from a judge for invading user privacy. In the next breath, they want to expand NSLs, which is invading user privacy without requiring a judge's approval.

Both Lavabit and Silent Circle had to close down their businesses after the government (via a gag-ordered search warrant) made the unreasonable demand that Lavabit give up its private TLS keys, exposing all its users' privacy. But no law enforcement agency gives a shit about privacy; only secrecy.

5
mgleason_3 5 days ago 2 replies      
Unbelievable. Just happened to see a clip today (https://goo.gl/F9XeQU) where Feinstein was "grilling" Comey about announcing the investigation into Clinton right before the election.

When Feinstein totally let him off the hook I was floored?!? He interfered worse than the Russians - how does he still have a job?

Ahh, she wants his support for the decrypt bill. I'll never understand why the Democrats have zero interest in protecting personal privacy.

6
feld 5 days ago 1 reply      
I don't think Congress intended that distinction, but what it does do is, in our most important investigations, require us, if we want to find out the subscriber info for a particular email, to go and get an order from a federal judge in Washington as part of the FISA court. An incredibly long and difficult process. And I'm worried about that slowing us down, and I'm also worried about it being a disincentive for our investigators to do it at all.

Hurdles to protect privacy are important. If it's not an arduous process we have a problem.

7
DarkKomunalec 4 days ago 0 replies      
Would it be okay to mandate spy microphones in all cars, spy cameras in all rooms, and make it illegal to remove or disable them, as long as only the 'good guys', with a warrant, could access the info?

What if doing this would save N people/year from terrorist attacks?

What other rights should we sacrifice for a 'safer' society? Surely we shouldn't let terrorists recruit people, so there goes free speech. We also shouldn't let them gather together to plot their wicked plots, so there goes freedom of association. And if we could bar people at risk of committing terrorist acts from vulnerable locations, such as subways, airports, and parks with a lot of people in them, well, I'm sure that would save a few lives too.

8
utternerd 5 days ago 1 reply      
> saying such legislation would be better from a public safety perspective

According to whom, we the people or a bunch of authoritarians who'd like to be able to access every nook and cranny of our personal lives?

9
nathan_long 13 hours ago 0 replies      
Your device has private data on it. Who has final say on whether someone can access it?

- Option 1: you
- Option 2: somebody else

Those are the only two options.

Option 1 protects people from criminals and tyrants, but impedes law enforcement.

Option 2 enables law enforcement but makes people vulnerable to criminals and tyrants.

Any suggestion that we can get the best of both worlds is confused or disingenuous. We have to choose.

Do you get final say on who can access your device's data, or does somebody else?

10
adrr 5 days ago 2 replies      
Putting in backdoors is a surefire way to kill US-based mobile phone producers. Criminals will just use foreign-produced phones, and the only way to counteract that is to outlaw those phones. Can't wait till they criminalize having certain firmware on your phone.
11
pgodzin 5 days ago 1 reply      
> We all love privacy, we all care about public safety, and none of us want backdoors; we don't want access to devices built in in some way. What we want to work with the manufacturers on is to figure out how we can accommodate both interests in a sensible way

How is this possibly reconcilable?

12
ardit33 5 days ago 7 replies      
Dianne Feinstein is old and needs to retire. She is completely out of touch with the needs of her constituency, and comes off more like an old-guard Republican than the Democrat she is supposed to be.
13
thegayngler 4 days ago 0 replies      
I don't know why California Democrats elected Dianne in the first place. Were there not any real liberals in California to choose from, preferably with some expertise in California's most valuable export?
14
rdxm 5 days ago 2 replies      
geeeez, how long is Cali going to foist Feinstein on the rest of the country? The level of idiocy is just beyond painful...

Edit to add: of course the same could be said about the remaining 49 states and their reps/sens as well...

15
rietta 5 days ago 0 replies      
I was watching the hearing during lunch, had to attend to work meetings, and then saw this article which is what spurred me to post my open letter to Congress tonight and share it here on HN at https://news.ycombinator.com/item?id=14261423. We have to get this information out there in a format that Congress and our non-techie friends and family understand.
16
RichardHeart 5 days ago 0 replies      
Law enforcement is tasked with putting people in jail, not so much with preventing future abuses of bad laws by governments. This is why checks and balances must be maintained, for when all you have is a hammer, everything looks like a nail.
17
bdamm 5 days ago 1 reply      
"The high profile court battle ultimately ended after the FBI paid a third party company to gain access to the device via an exploit in the security system."

Why isn't this an acceptable solution?

18
AJ007 5 days ago 1 reply      
Can someone call out these alleged encryption back doors for what they are? Junk science.

If Apple and Google aren't legally able to build devices & infrastructure that are as secure as possible, the DOJ, FBI, NSA, and CIA sure as hell won't be secure. Merry Christmas to Assange.

19
benevol 5 days ago 0 replies      
If you want to lose all of your tech monopolies, then go ahead with your backdoors (the ones whose existence will be publicly known, that is).
20
microcolonel 5 days ago 0 replies      
> We have to figure out a way to optimize those two things: privacy and public safety.

Given how safe the public is, you'd think that this would mean "we need to focus on privacy". That is the public's priority. The FBI, whose mandate is obviously not to protect the privacy of citizens, is obviously going to advocate for public safety, or more specifically for his organization's degree of visible success in ensuring it.

Obviously the director of the FBI is not who you should be asking for a balanced recommendation regarding safety and privacy.

21
JustSomeNobody 5 days ago 0 replies      
What are the tech companies he has been having a "growing consensus" with? I want to boycott them.
22
jacquesm 5 days ago 1 reply      
Nice bill. Maybe they should finally get around to declaring Pi to be 3 too, two birds with one bill.
23
Mendenhall 5 days ago 1 reply      
Is there any good information on what has been accomplished through such access etc ?

What have they stopped by using such methods? I think if they want to get anything like this moving forward, they need to show results. Not too many people trust the government these days.

I do not like the idea of "backdoors", but I can see a realistic need for such things. I think many are against such things "until" some massive WMD-type attack; then the tune will change.

24
scardine 5 days ago 0 replies      
There is another big problem with mandatory decryption laws.

If someone wants to incriminate you, they don't need to plant a file with child porn anymore: they just need to plant a file composed of random bytes and accuse you of having encrypted child porn there.

Now good luck providing the court an encryption key that does not exist.

25
cprayingmantis 4 days ago 0 replies      
If you're wondering how it got to this point I'd like to remind you that you (If you live in the US) don't own this country. The people in charge don't care about you. They care about money, power, and stability of their system. It's hopeless to resist because they own your home, your bank account, and all your money. The only way we'll ever change it is getting scientists, nerds, and engineers into congress. I don't know how we'll do it but we have to do it to ensure freedom for everyone in the USA.
26
unityByFreedom 5 days ago 0 replies      
Ridiculous. When will these numbskulls understand that you can't regulate people's use of encodings? It's right there in human language. You can't force everyone to use the same one.
27
jjawssd 5 days ago 2 replies      
Why do California Democrats vote this person in year after year?
28
cmdrfred 4 days ago 0 replies      
Why is someone who is 83 years old and likely has to call her grandson for help paying a bill online writing law about encryption?
29
phkahler 5 days ago 2 replies      
I still don't understand. They want to be able to have a court order a device maker to decrypt data, but today they can already get a court to order the device owner to decrypt it. The device owner actually has the password or key. The truth is that they want to do this without the device owner knowing it's being done.
30
bsder 5 days ago 0 replies      
Right after the Intel security disclosures.

Hmmmmmm.

31
Esau 5 days ago 0 replies      
Color me surprised.