Hacker News with inline top comments - 25 Feb 2017
1
Amazon Deforestation, Once Tamed, Comes Roaring Back nytimes.com
161 points by blondie9x  6 hours ago   51 comments top 13
1
Rapzid 7 minutes ago 0 replies      
Wow, we need to bring back the "Save The Rainforest" awareness full force. I see people talking about reforestation and that's good and all... But the tragedy of the rainforest isn't so much that "now there are no trees here". It's the loss of the biodiversity and ecosystem. You can replant a forest, but you can't replant The Rainforest.

Man, I love the rainforest. Why don't we hear about it as much? It's been a dream of mine to visit the Amazon. Now that I have the means I really need to get down there.

2
conesus 4 hours ago 7 replies      
I'm running a Kickstarter for a wood product right now, and so I've learned a thing or two about how to encourage reforestation. The answer is to get economics to drive reforestation.

Cats, cows, turkeys, dogs, and horses, among many other animals, will always be around as long as humans are, simply because we'll ensure that they always have a healthy habitat. The same goes for trees, especially tropical trees.

Reforestation is a huge opportunity if we invest in plantations that have managed cutting, allowing tropical trees to grow up to 40 years before being turned into beautiful products like furniture.

In order to produce high quality lumber that's useful for furniture making, like mahogany for example, loggers have to be selective about the trees they fell. Can't just chop down every one of them because not all will generate productive wood.

Maple is similar in that it's plantation-grown right here in the U.S. What that actually means in terms of wood quality is that it suffers a bit from uniformity.

The reason I know this is because I'm running a Kickstarter for a remote control[1]. It's made out of mahogany and maple, but its impact is offset a bit by the small size of the remote, which also makes small features much more distinguishable.

This is also one of the reasons why I'm a vegetarian, as much deforestation is caused by burning forests for raising livestock.

[1]: Turn Touch: https://www.kickstarter.com/projects/samuelclay/turn-touch-b...

3
goodroot 4 hours ago 1 reply      
Nyughh. Sad. This hits me right in the heart. I have been to the Amazon and it is a lush, beautiful paradise. It changed my perspective on how valuable natural life is on our planet.

It is not just the net loss of plant matter. The indigenous are under constant duress, and we risk losing a unique part of our world: medicinal plants and esoterica, wildlife, and access to limited resources, to boot.

There are groups working with those impacted by this. I am contributing to an NGO called the Alianza Arkana which is doing some work to aid the indigenous and the rainforest.

Contextual plug: http://alianzaarkana.org .

I left a grand-paying, comfortable tech job to try to make even a small difference. I hope this devastation is not a part of my generation's legacy.

4
rkachowski 1 hour ago 1 reply      
>A decade after the Save the Rainforest movement forced changes that dramatically slowed deforestation across the Amazon basin .... That resurgence, driven by the world's growing appetite for soy and other agricultural crops

The article doesn't get into it, but what is driving this growing appetite? Is it a natural effect of population growth?

5
lacker 3 hours ago 5 replies      
Who actually owns the rain forest land? It seems odd to me that this article does not really get into that. The simplest way to save the rain forest would be for some entity, private or public, to buy it up and turn it into a park. That seems way more effective than trying to convince companies to not use it for farming, since there is no limit to the supply of companies that are capable of operating a farm.
6
mirekrusin 4 hours ago 2 replies      
"(...) deforestation rose in 2015 for the first time in nearly a decade, to nearly two million acres from August 2015 to July 2016. That is a jump from about 1.5 million acres a year earlier and just over 1.2 million acres the year before that (...)"

...so really it did rise in 2014, from 1.2 to 1.5, so the first statement is false, isn't it?

7
hendler 5 hours ago 0 replies      
Where there are problems this massive, there is opportunity: reforestation, biodiversity, businesses that utilize existing ecosystems without devastating them. Forty years ago environmental protection was seen as being at odds with economic progress. This perception is changing because it must. Tesla/SolarCity, I hope, are only the beginning of aligning great technical achievement, economic viability, and environmental sustainability.
8
knowaveragejoe 5 hours ago 0 replies      
What a shame. Unfortunately, the incentives seem to be aligned such that I fear this will be a difficult trend to slow or ultimately reverse.
9
lyonlim 2 hours ago 1 reply      
This is sad. Anything that we can do?

There's this site, tree-nation.com, that helps with reforestation projects. Any others out there?

10
devoply 5 hours ago 2 replies      
Humans are the ultimate invasive species. We complain about fungus and other pests moving their range and causing havoc because of global warming, but is it not the case that we do the exact same thing all the time, and that it's considered a good thing? And it's not the locals doing it, it's global corporations, many with HQs in developed countries, so it's not as if they don't know any better. I doubt we will ever be controlled until we welcome our new AI masters.
11
EvagohP5 6 hours ago 2 replies      
Wow, Amazon has really expanded their operations to include every kind of business, haven't they? Now they've added deforestation to their repertoire.
12
lugus35 4 hours ago 0 replies      
Don't worry, they have collected enough little trees during the Olympic games opening ceremony.
13
taway_1212 2 hours ago 1 reply      
There is a bit of hypocrisy in the discussion about cutting down rain forests. Western nations paid no mind to destroying their local ecosystems when they were getting rich over the span of the last couple of centuries, and only started to care about the environment when they reached comfy living for their citizens. It's unfair to demand something else from developing countries now.
2
Staffjoy V1, aka Suite, open-sourced github.com
43 points by vquemener  3 hours ago   8 comments top 3
1
vs2 3 minutes ago 0 replies      
Very cool, followed the instructions and got it working! What's the license?
2
cyberferret 1 hour ago 2 replies      
Nice effort to open source your code - kudos to you.

I remember checking out Staffjoy last year sometime, out of curiosity. Seemed to be a nicely designed system that fitted a niche.

I too wonder why it never took off - having consulted in the small-business area for three decades now, I know that rostering and checking staff availability is a major pain point for business owners, especially those running cafes, security services, catering, etc.

Heck, even my wife is on an automated rostering SMS system for her part time job, and it works well and seems to alleviate a lot of stress for her boss. I am surprised that $9/mth wasn't an automatic "Shut up and take my money" for a lot of businesses. I know there is a lot of competition, but surely there is plenty of pie for a few players to be in the same market.

I hope to perhaps one day hear a more detailed post mortem about the business, rather than technical, challenges faced by the founders of Staffjoy.

3
BinaryIdiot 1 hour ago 2 replies      
That's cool that it's being open sourced versus simply dying. Though it's odd that, even though the repo says they're shutting down, their website shows zero sign of that at all and is still encouraging users to sign up.

Also, was this the Homejoy v2 attempt? Seems like contractor scheduling services don't provide enough value to the contractors and customers to keep them from dealing with each other directly once the contact is made. Curious if there are any successful companies that provide the same services, with actual revenue, and what they did differently. It's gotta be something to make the customer and contractor stick to the platform.

3
Postmodern Error Handling in Python 3.6 journalpanic.com
38 points by knowsuchagency  3 hours ago   18 comments top 3
1
rspeer 1 hour ago 1 reply      
There are cases where you could justify writing "except Exception as e" and having it return a value, like if you're the author of Flask or Raven or something that includes handling arbitrary errors in other people's code in its job description.

But within your own code, you would not want to represent "an unforeseen error occurred" with a value, no matter how much you like enums.

If you were parsing JSON and you got an unexpected error that isn't about parsing JSON, logging the error and continuing is not the right option. There is probably nothing reasonable your program can do. Raise the error so your broken code stops running.
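
A minimal sketch of that distinction (editor's illustration in plain Python; the function name is made up):

  import json

  def parse_config(text):
      try:
          return json.loads(text)
      except json.JSONDecodeError:
          return None  # an expected, representable parse failure
      # Anything else (TypeError, MemoryError, ...) signals broken code or a
      # broken environment: let it propagate and stop the program.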

2
raverbashing 1 hour ago 2 replies      
"Mypy allows you to add type annotations and enforce them prior to running your program"

Yes, and if I wanted type annotations to stop my program from working I wouldn't be using Python

Enforcing types is exactly what Python is NOT about, because of duck typing and everything else.

So this is not "postmodern error handling"; this is "let's code Java in something else and pat ourselves on the back".

Do you want to check for errors at "compile time"? Use Pylint. It does the right thing.

3
nicolaslem 1 hour ago 1 reply      
Type annotations are also helpful without mypy. PyCharm uses them to provide hints and warnings during development.

At first I was reluctant about the syntax; after a while I got used to it. Function definitions look a lot like Rust ones and no longer require endless docstrings just to document the types of the expected parameters.
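
As a small illustration (editor's sketch, Python 3.6 typing syntax; the function is made up), the signature documents the types that would otherwise need a docstring, and a tool like PyCharm or mypy can flag a bad call site without running the program:

  from typing import List

  def mean(values: List[float]) -> float:
      return sum(values) / len(values)

  mean([1.0, 2.0, 3.0])    # fine
  # mean("abc")            # a checker flags this call before it ever runs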

4
FCC weakens net neutrality rule in a prelude to larger rollbacks techcrunch.com
448 points by vivekmgeorge  16 hours ago   266 comments top 23
1
guelo 14 hours ago 3 replies      
I hate these types of articles that provide extensive quotes and even a screenshot of part of the pdf, but refuse to link to the actual documents. It's probably an advertising thing where they don't want people to leave the site.

The actual statements are available here https://www.fcc.gov/document/fcc-addresses-unnecessary-accou...

2
tomelders 9 hours ago 2 replies      
Understandably, the conversation in here revolves around the technicalities and semantics of net neutrality. But this isn't an issue of technology. It's a political issue, or worse, an ideological issue. It's not about the empirical truths of net neutrality, or the collective intent of those who created, and those who continue to develop, the technology that has woven itself into the fabric of humanity. It's about ideologues imposing their ideals on every facet of our lives, regardless of the facts.

The sad fact is, this is yet another grim attack on net neutrality by nefarious agents who see the web as something to be dominated and bent to their will exclusively for political and economic gain.

Like it or not, the work we do is going to become highly politicised. Are we ready for this? Do we have the moral fortitude to resist the influence that fuzzy, sloppy, and emotive politics seeks to have on our discussions?

I think back to how we handled the Brendan Eich debacle. I (regretfully) came down on the punitive side of that argument. And I participated in that debate with a level of anger and vitriol that embarrasses me now. But whichever side you took, there's no doubt that for a brief moment we were deeply divided. The Brendan Eich story was a flash in the pan compared to what is about to happen.

Should we engage in political debate, or should we avoid it? Can we buck the trend and participate in political debate in a way that doesn't tear us apart, or should we ignore it as it happens around us and impacts our lives and work? Or is there a path between the extremes, where we can be neither ignorant of our political leanings nor beholden to them?

I don't dare offer any advice on how we should prepare ourselves for what is about to come, I just hope we can all think about how we hope to respond before it happens.

One thing I will say though, being someone prone to highly emotional reactions in all aspects of my life; developing software in teams has taught me the value of "strong opinions, weakly held".

3
morgzilla 10 hours ago 3 replies      
I can see how a bit of outrage like this is how the NRA got to the place it is today. This by itself isn't that meaningful, but anything can be politicized, turn public opinion, and gain momentum. That's why the NRA's position is to say NO to any kind of gun regulation, because they know that's how you ensure guns stay available and gun culture is for sure secure.

In the tech community I see people rising up against any kind of movement against net neutrality. And I do not want to see it erode. But I worry that by becoming averse to any reversal, any compromise, the community's stance will eventually be so politicized that it is just another part of the unreasonable and ultra-biased political landscape that grinds progress to a halt.

4
jerkstate 15 hours ago 13 replies      
Does anyone with a strong understanding of internetworking, peering, and transit contract negotiation actually believe that "net neutrality" is possible? Traffic shaping of saturated links seems like a necessary outcome to avoid undermining the smaller users (i.e. low-bandwidth communications) that are impacted by heavy users (i.e. video streaming) if two peering parties can't come to terms on cost sharing for link upgrades.
5
Pica_soO 46 minutes ago 0 replies      
I wish we had a slow but high-bandwidth alternative to the web in public hands. The problem is the infrastructure... If there were a way to create a GNU ad-hoc wifi network between every home hotspot - at least within a city - net neutrality could be restored.
6
seibelj 14 hours ago 3 replies      
I know several people who are highly involved with the FCC, telecom industry, and telecom law that think that "network neutrality" is just 2 words. Until 1970, and only because of lawsuits, it was illegal to connect anything to your phone line. You could get any phone you wanted from Ma Bell as long as it was black.[0] If you wanted a different color you had to pay extra. It took force to make Ma Bell and the FCC allow you to plug in your own phone, your own computer, etc. The FCC supports monopolies, if you want competition you should applaud the deregulation of telecom.

[0] https://en.wikipedia.org/wiki/Model_500_telephone#Ownership_...

7
woah 14 hours ago 1 reply      
I asked this in another thread a few days ago, but why are edge servers and CDNs not a violation of "net neutrality"? If you've got an edge server on an ISP, and are paying extra for a leased line from your main data center to that server, you are effectively paying the ISP an additional fee for priority over other traffic on their hardware.
8
ryandrake 13 hours ago 0 replies      
Article didn't load for me:

ERROR: TechCrunch is not part of your Internet Service Basic Web pack. For an extra $29.99 a month you can upgrade to Internet Service Extreme, offering access to over 50 more web sites!

9
subverter 14 hours ago 8 replies      
This raises the limit on the number of subscribers a provider can have before regulation kicks in. In other words, a larger number of smaller providers have one less regulation to worry about.

Isn't more competition among providers what we want? Shouldn't we be doing everything we can (even if it's only saving 6.8 hours per year in regulatory compliance) to help these smaller guys be able to take on these horrible behemoths like AT&T and Comcast?

10
dopamean 15 hours ago 3 replies      
Why is the FCC against net neutrality?
11
VonGuard 15 hours ago 3 replies      
This is the end. If we think this guy's gonna listen to the people, we're completely wrong.
12
Crye 7 hours ago 1 reply      
Let me put my hat in the ring here.

Deregulation of access to consumers will result in cheaper internet and most likely faster internet speeds. However, it will concentrate power in those who already have it. Large ISPs will charge heavy-bandwidth companies, and only the largest heavy-bandwidth companies will be able to afford the fees.

Those heavy bandwidth companies paying the fees will recoup the money through advertising. Remember newspapers and large TV media companies make the majority of their money through advertising. When companies rely on advertising, the users are no longer the customers. They are the product.

Further protecting the companies which rely on advertising will allow those companies to focus less on the customers and more on the advertisers. Companies relying on the allegiance of advertisers will naturally shape their political standing to the views of those advertisers. Remember also that advertisers are not paying for just eyeballs; they are also paying for control. If a company starts moving away from their advertisers' political ideology, they will lose revenue. Losing net neutrality will ultimately give more control to companies that already hold power.

Just my two cents...

13
wav-part 14 hours ago 1 reply      
Isn't net neutrality better handled by IANA? If you are going to call your router "internet", you must treat all IP packets equally. Seems like reasonable terms to me. After all, this is the property that made the Internet what it is today.
14
fallingfrog 13 hours ago 1 reply      
I suppose one way to enforce net neutrality might be to route all traffic through Tor... That might mess up the caching for a service like Netflix though. (Could someone who knows more than I do comment on that?)
15
bobbington 6 hours ago 0 replies      
The Internet is plenty fast. Companies need to disclose what they are doing to customers, but the government shouldn't regulate it.
16
rocky1138 14 hours ago 2 replies      
Can't we just create our own local Intranets using Ethernet cables running around cul-de-sacs?

Mine connects to yours which connects to his which connects to hers. Eventually we'll have formed a network.

17
lacroix 12 hours ago 0 replies      
The FCC won't let me be
18
beatpanda 14 hours ago 3 replies      
How long until access to the open internet costs extra?
19
pasbesoin 14 hours ago 0 replies      
Google Fiber got to a couple of nearby communities before they put the brakes on.

I'm left hoping that's close enough to branch out wireless service in short order.

Otherwise, I'm left screwed, between an AT&T that refuses to upgrade its local network (and it's a dense, accessible, suburban neighborhood -- hardly the boonies), and a Comcast that has doubled its rates for basically the same service. Both with caps that will quickly look increasingly ridiculous in the face of the wider world of data transfer.

We'll be back to them insisting on big bucks for asymmetric streaming of big-brand content, with increasing pressure to make that their content (a la data-cap exemptions, etc.)

20
transfire 8 hours ago 0 replies      
This issue could well turn out to be Trump's Achilles heel. If they go too far, the engineers that actually make the Internet work can easily bring the whole shebang down in protest -- and the world is so addicted to the Internet at this point that the outrage would be deafening. And if Trump is too proud to back down...
21
nicnash08 14 hours ago 3 replies      
22
bobbington 6 hours ago 0 replies      
Leave it alone. Stop demonizing the companies that give Internet.
23
boona 11 hours ago 1 reply      
If Trump also continues with his plan to deregulate, I'm of the opinion that this is great news. This could make Google Fiber and other similar undertakings much more viable. It always gives me the heebie-jeebies when government takes strong control over an industry. This is especially true in the case of the FCC, whose original mandate went from regulating airwaves to regulating the content of said airwaves.
5
Cloudflare data still in Bing caches ycombinator.com
452 points by neonate  9 hours ago   172 comments top 28
1
Smerity 8 hours ago 6 replies      
From the parent thread:

 The caches other than Google were quick to clear and we've not been able to find active data on them any longer. ... I agree it's troubling that Google is taking so long.
That's really the core issue here - the Cloudflare CEO singled out Google as almost being complicit in making their problem worse whilst that exact issue is prevalent amongst other indexes too.

The leaked information is hard to pinpoint in general, let alone amongst indexes containing billions of pages.

I can understand the frustration - this is a major issue for Cloudflare, and it's in everyone's best interests for the cached data to disappear - but it's not easy, and they shouldn't claim otherwise (or incorrectly claim that "The leaked memory has been purged with the help of the search engines" on their blog post).

This is a burden that Cloudflare has placed on the internet community. Each of those indexes - Google, Microsoft Bing, Yahoo, DDG, Baidu, Yandex, ... - has to fix a complicated problem not of its creation. They don't really have a choice either, given that the leak contains personally identifiable information - it really is a special sort of hell they've unleashed.

Having previously been part of Common Crawl and knowing many people at Internet Archive, I'm personally slighted. I'm sure it's hellish for the commercial indexes above to properly handle this let alone for non-profits with limited resources.

Flushing everything from a domain isn't a solution - that'd mean deleting history. For Common Crawl or Internet Archive, that's directly against their fundamental purpose.

2
MichaelGG 8 hours ago 4 replies      
I've had a fairly high opinion of CF, apart from their Tor handling and bad defaults (Trump's website requires a captcha to view static content.) Yeah I'm uncomfortable with them having so much power, but they seemed like a decent company.

But their response here is embarrassingly bad. They're blaming Google? And totally downplaying the issue. I really didn't expect this from them. Zero self-awareness - or they believe they can just pretend it's not real and it'll go away.

3
kchoudhu 4 hours ago 2 replies      
It's been pretty entertaining watching taviso's attitude towards CF go from "we trust them" to "dude, you're a tool".

I kind of understand what CF is doing here: they've screwed up, there's no way for them to clean it up, so all they can do now is deflect attention from the magnitude of their screw up by blaming others for not working fast enough in the hope that their fake paper multibillion dollar valuation doesn't take too big a hit.

Still a dick move though. Maybe next time don't use a language without memory safety to parse untrusted input.

4
tonyztan 8 hours ago 3 replies      
Why is Cloudflare underplaying this issue? All data that transited through Cloudflare from 2016-09-22 to 2017-02-18 should be considered compromised and companies should act accordingly.
5
koolba 8 hours ago 0 replies      
Rule #1 of breaches: you can't unbreach

At this point if you don't consider all data that was sent or received by CloudFlare during the "weaponized" window compromised, you're lying to yourself.

6
uladzislau 8 hours ago 0 replies      
I briefly touched base with Cloudflare's Product Management and my impression was that they were overconfident and snobbish in every aspect, which is kind of the opposite of what I'd expect from a company like this. Being humble never hurts.
7
rdl 6 hours ago 0 replies      
I really hope people don't lose sight of how helpful Project Zero has been in finding ongoing vulnerabilities and making the Internet a better place.

There is a bit of tension between cloudflare and taviso over the timing of notification, but that is vanishingly insignificant overall.

8
spyder 20 minutes ago 0 replies      
And the "irony" is that some of the data may leaked only to "bad bots" and "IP has a poor reputation (i.e. it does not work for most visitors)."

From their blog: https://blog.cloudflare.com/incident-report-on-memory-leak-c...

9
mhils 7 hours ago 3 replies      
Does Cloudflare have complete logs to rule out that someone noticed this before taviso and used it to massively exfiltrate data by visiting one of the vulnerable sites repeatedly?

If they can't tell, someone may now be sitting on a lot of very juicy data, far beyond what may be left in these caches.

10
paulcole 8 hours ago 1 reply      
Just please tell me the people who found the issue got their free t-shirts.
11
someonenew2913 1 hour ago 0 replies      
Is it just me noticing that cloudflare.com's homepage image displays many girls (and I think only one male)?

Also, if you take a closer look at the video - each room artificially looks like it's gender-equal and diversity-equal (watch the video, it's fun to notice the artificiality of it) .

How fake can companies be these days ?

Or maybe they were always socially-fake, but it's just the current political state that they use the 'gender-equality' fakeness rather than 'we are all a big family' fakeness that i remember from 5-10 years ago.

My main complaint here is that it seems so obvious that they USE the fact that people want to see more gender equality and inclusion (some want that regardless of the quality of the employees (i.e. quotas), some want that only if it really reflects reality (i.e. gender distribution determined just by who passes the company's hiring process, regardless of gender - no "discounts" for anyone)).

If I were a girl, I would really be suspicious about a company that does that - I would prefer to go somewhere else where I could say "I got in because I was a good candidate, not because of a female quota that the company had to fill up so they could post a 'gender-cool' video to their website".

12
dorianm 8 hours ago 1 reply      
I'm compiling a list of affected domains (with data found in the wild): http://doma.io/2017/02/24/list-of-affected-cloudbleed-domain...

If you find some samples with domain names / unique identifiers of domains (e.g. X-Uber-...) you are welcome to contribute to the list: https://github.com/Dorian/doma/blob/master/_data/cloudbleed....

13
sneak 3 hours ago 0 replies      
Cloudflare's email to customers has been calling this a "memory leak", which means something entirely different than a "secret data disclosure".

One causes swapping. The other causes a month of extra work.

14
djhworld 27 minutes ago 0 replies      
Can someone explain why Cloudflare parses the HTML in the first place?

Is there some sort of information-extraction service or feature they offer? I don't get it.

15
Rapzid 1 hour ago 0 replies      
As much as CF would like people to believe otherwise (oh, and look at our awesome response time and automation!), this cat can't go back in the bag. They should step away from the mic and contact a PR firm that specializes in salvage jobs.

If I were Google I would hit back hard. They probably won't just stop, but I would not bother trying to even clean up the data unless under legal pressure. It's out there; it's too late.

16
foobarbecue 6 hours ago 4 replies      
After reading this, I'm considering switching from cloudflare for my DNS servers. Recommend a similar free service?
17
flylib 7 hours ago 0 replies      
I lost all respect for Cloudflare
18
skrebbel 4 hours ago 1 reply      
Folks, can we please stop downvoting the parent of the linked comment? It's of no use when it disappears from HN.
19
patcheudor 6 hours ago 0 replies      
Also still in Yahoo caches with the same leaks found in both Yahoo and Bing. I posted the URLs to the linked thread.
20
bitmapbrother 7 hours ago 1 reply      
eastdakota 19 hours ago [-] (Cloudflare CEO)

>Google, Microsoft Bing, Yahoo, DDG, Baidu, Yandex, and more. The caches other than Google were quick to clear and we've not been able to find active data on them any longer. We have a team that is continuing to search these and other potential caches online and our support team has been briefed to forward any reports immediately to this team.

>I agree it's troubling that Google is taking so long. We were working with them to coordinate disclosure after their caches were cleared. While I am thankful to the Project Zero team for their informing us of the issue quickly, I'm troubled that they went ahead with disclosure before Google crawl team could complete the refresh of their own cache. We have continued to escalate this within Google to get the crawl team to prioritize the clearing of their caches as that is the highest priority remaining remediation step. reply

taviso 6 hours ago [-] Tavis Ormandy

>Matthew, with all due respect, you don't know what you're talking about.

>[Bunch of Bing Links]

>Not as simple as you thought?

21
kfrzcode 6 hours ago 0 replies      
IANAL --- what, if any, legal precedent or structure is there for what will happen to CF if, say, 1.5 billion users are hacked and money shifts dramatically as a result, or some other reasonably thinkable "hypothetical" situation occurs? We, the Internet-at-large, at this point have no certain idea whether the incident in question has or has not happened... I'm saying, there have got to be negligence charges or something if there is money lost; that's how capitalism in America works... but this is a global problem.

If this is how 2017 is pacing, we've got a long year ahead. This is an insanely interesting time to be alive, let alone at the forefront of the INTERNET.

Fellow Hackers, I wish you all the best 2017 possible.

22
sersi 8 hours ago 3 replies      
I have a question which might be stupid.

What happens for sites using Full SSL (a certificate between Cloudflare and the user and a certificate between Cloudflare and the server)? Could any information from SSL pages have been leaked?

23
acqq 1 hour ago 1 reply      
It seems that, due to Cloudflare's confusing disclosure, it's still not clear what was leaked and how. What I personally observed, just by following the discussion and the links to some examples:

- there is a smaller number of sites that used some of the special features of Cloudflare that allowed leakage for some months, according to what Cloudflare said.

- it seems the number of sites was much bigger for some days, according to what Cloudflare said.

- the data leaked are the data passed through the Cloudflare TLS man-in-the-middle servers -- specifically not only the data from the companies, but the data from the users, and not only the data related to the sites through which the leak happened, but also other sites that just happened to pass through these servers. Again, the visitors' data in both directions is leaked: from the visitors, their location data, their login data, etc. As an example: if you imagine a bank which used Cloudflare TLS, the caches could contain both the reports of the money in the accounts (sent from the bank to the customers) and the login data of the customers (sent by the customers to the bank), even if the bank site didn't have the "special features" turned on. That's what I was able to see myself in the caches (not for any bank, at least, but the equivalent traffic).

24
kijin 8 hours ago 2 replies      
Millions of domains are on Cloudflare. We can't tell how many of them were affected.

Either we can search for obvious strings like X-Uber-* and try to scrub them one by one, or we can just nuke the caches for all the domains that turned on the problematic features (Scrape Shield, etc.) anytime between last September and last weekend. Cloudflare should supply the full list to all the known search engines including the Internet Archive. Anything less than that is gross negligence.

If Cloudflare doesn't want to (or cannot) supply the full list of affected domains, an alternative would be to nuke the caches for all the domains that resolved to a Cloudflare IP [1] anytime between last September and last weekend. I'm pretty sure that Google and Bing can compile this information from their records. They might also be able to tell, even without Cloudflare's cooperation, which of those websites used the problematic features.

[1] https://www.cloudflare.com/ips/
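
As an editor's sketch of that detection idea (Python 3 standard library only; the three CIDRs below are just a sample from the list at [1], not the full set):

  import socket, ipaddress

  CF_NETS = [ipaddress.ip_network(n) for n in
             ('104.16.0.0/12', '108.162.192.0/18', '172.64.0.0/13')]

  def resolves_to_cloudflare(domain):
      infos = socket.getaddrinfo(domain, 443, socket.AF_INET)
      addrs = {info[4][0] for info in infos}          # unique IPv4 addresses
      return any(ipaddress.ip_address(a) in net
                 for a in addrs for net in CF_NETS)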

25
pvg 8 hours ago 3 replies      
This is already a comment on the site, in the relevant thread. Seems a little meta as a post.
26
6 hours ago  1 reply      
27
pikzen 8 hours ago 1 reply      
Company engaging in practices that undermine internet security and MITM their users found to be doing stupid shit.

Not exactly breaking news. At some point, maybe people will realise that CF is actively making the internet worse and less secure, and that it should be treated as nothing more than a wart to be removed.

28
gear54rus 5 hours ago 0 replies      
I just wonder when we can stop beating this dead horse here...
6
How to Be a Stoic newyorker.com
61 points by Tomte  4 hours ago   23 comments top 5
1
s_kilk 27 minutes ago 0 replies      
[Shameless plug] You can read Marcus Aurelius' "Meditations" at http://directingmind.com
2
manmal 2 hours ago 10 replies      
One interesting thing I've noticed is that the ancient Stoics did not reject the concept of god (or "the universe"), a higher power that determines all the things that are not in our control (as a Stoic you very much need to distinguish between things in and outside of your control). I have found it difficult to really, deeply accept things as out of my control without resorting to some concept of god or "the universe as a well-meaning entity".

Is there someone among you HNers who has retained a positive outlook by believing that the universe is a bleak, chaotic place with no intrinsic meaning to the things happening in it?

3
Pamar 2 hours ago 0 replies      
Allow me to share my collection of links to Stoicism resources (which I will soon update with this)

http://www.pa-mar.net/Main/Lifestyle/Stoicism.html

4
factsaresacred 2 hours ago 1 reply      
"For such a small price, I buy tranquillity." Beautifully put.

The Penguin edition of fellow Stoic Marcus Aurelius' Meditations is free on Amazon Kindle: https://www.amazon.com/Meditations-Marcus-Aurelius-Wisehouse...

5
gfiorav 2 hours ago 1 reply      
Well, it turns out I've been a Stoic all this time, I'm finding out.

It blows me away how that part about taking every problem as a chance to learn and become a better "wrestler" fits right in with my natural conclusions. The rest of it describes me adequately also.

I'm reading Epictetus now, thanks for sharing.

7
Show HN: PDF to TXT but keeping the layout github.com
170 points by jlink  10 hours ago   35 comments top 11
1
nemild 6 hours ago 2 replies      
For those interested in converting PDF tables into CSV, there's also Tabula ( http://tabula.technology/ )

(Used by many journalists to analyze the data in PDFs)

2
rsync 7 hours ago 3 replies      
This is important for (al)pine users ... when reading email in a terminal it is very useful to be able to open a PDF attachment as text and view it in the (terminal) mailtool ...

Yes, (al)pine is my mailtool in 2017.

3
tyingq 8 hours ago 2 replies      
Curious if this works better than the pdftotext utility that comes in the Debian poppler-utils package.

That has a -layout option that works really well sometimes and really terribly other times. It doesn't seem to be related to document complexity either.

4
WalterGR 5 hours ago 3 replies      
Fairly frequently, OCR engines are posted here. But almost without exception, they lack layout analysis, which renders them largely useless.

Is this something that could be combined with those OCR engines? (e.g. TesseractOCR...)

5
curiousgal 1 hour ago 0 replies      
Although I haven't tested this yet, these utilities tend to fail when fed a table with empty cells.
6
jlink 10 hours ago 5 replies      
Who would be interested in an online website doing the job?
7
agumonkey 5 hours ago 0 replies      
Fun, I did the same thing as a Clojure REPL exploration to pipe PDF text to a bare Swing GUI (I know, a little absurd in a way).

The deja vu made me squint for a minute.

ps: pdfbox is nice

8
Animats 3 hours ago 0 replies      
Is there a PDF to HTML converter which can consistently get line breaks right?
9
robinhowlett 5 hours ago 0 replies      
Nice. I recently got very familiar with PDFBox and parsing complex layouts - it is a great library.
10
marak830 9 hours ago 0 replies      
Ahh, this will be useful for my kitchen receipts. Thanks. Now I just need to roll that together with an auto-translator too. (I guess I have my day-off project now :-) )
11
kzrdude 6 hours ago 0 replies      
But does it keep both the layout and the SHA-1 hash? Not sure it's HN-worthy otherwise.
8
SHA1 collider: Make your own colliding PDFs alf.nu
265 points by ascorbic  15 hours ago   89 comments top 16
1
xiphias 13 hours ago 1 reply      
I think the coolest proof was that the Bitcoin SHA1 collision bounty was claimed:

https://bitcointalk.org/index.php?topic=293382.0

The OP_2DUP OP_EQUAL OP_NOT OP_VERIFY OP_SHA1 OP_SWAP OP_SHA1 OP_EQUAL script was making sure that only a person who finds two distinct SHA1-colliding inputs and publishes them can claim the 2.5 BTC bounty.
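
Read off the stack, the script spends only if the claimant supplies two different inputs with the same SHA1 digest - in plain Python (editor's sketch):

  import hashlib

  def bounty_condition(x: bytes, y: bytes) -> bool:
      # OP_2DUP OP_EQUAL OP_NOT OP_VERIFY: the two inputs must differ ...
      # OP_SHA1 OP_SWAP OP_SHA1 OP_EQUAL: ... yet share a single SHA1 digest.
      return x != y and hashlib.sha1(x).digest() == hashlib.sha1(y).digest()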

2
Deregibus 13 hours ago 2 replies      
This was a good explanation of what's happening here from a previous thread: https://news.ycombinator.com/item?id=13715761

The key is that essentially all of the data for both images are in both PDFs, so the PDFs are almost identical except for a ~128 byte block that "selects" the image and provides the necessary bytes to cause a collision.

Here's a diff of the two PDFs from when I tried it earlier: https://imgur.com/a/8O58Q

Not to say that there isn't still something exploitable here, but I don't think it means that you can just create collisions from arbitrary PDFs.

Edit: Here's a diff of shattered-1.pdf released by Google vs. one of the PDFs from this tool. The first ~550 bytes are identical.

https://imgur.com/a/vVrrQ
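
A diff like that takes only a few lines to reproduce (editor's sketch; the file names are whatever the collider tool produced for you):

  a = open('collision-1.pdf', 'rb').read()
  b = open('collision-2.pdf', 'rb').read()

  # zip stops at the shorter file; the two outputs here have equal length
  diffs = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
  print(len(diffs), 'differing bytes, at offsets', diffs[0], 'through', diffs[-1])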

3
nneonneo 7 hours ago 1 reply      
Shameless plug: I built my own version of this to collide arbitrary PDFs: https://github.com/nneonneo/sha1collider

My version is similar to this, but removes the 64kb JPEG limit and allows for colliding multi-page PDFs.

4
TorKlingberg 13 hours ago 2 replies      
Wow, it works! I thought this was supposed to require Google-level resources and months of processing time. Did the initial collision somehow enable more similar ones?
5
michaf 10 hours ago 1 reply      
I just constructed a little POC for bittorrent: https://github.com/michaf/shattered-torrent-poc

Installer.py and Installer-evil.py are both valid seed data for Installer.torrent ...

6
lxe 12 hours ago 1 reply      
According to [the Shattered paper](http://shattered.io/static/shattered.pdf), the reason why the proof-of-concepts are PDFs is because we are looking at a

> identical-prefix collision attack, where a given prefix P is extended with two distinct near-collision block pairs such that they collide for any suffix S

They have already precomputed the prefix (the PDF header) and the blocks (which I'm guessing is the part that tells the PDF reader to show one image or the other), and all you have to do is to populate the rest of the suffix with identical data (both images)
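
That structure is easy to check for yourself (editor's sketch; assumes the two colliding files released at shattered.io are saved locally). Because SHA-1 hashes input block by block and the two files have equal length, appending any identical suffix S preserves the collision:

  import hashlib

  a = open('shattered-1.pdf', 'rb').read()
  b = open('shattered-2.pdf', 'rb').read()

  assert a != b
  assert hashlib.sha1(a).digest() == hashlib.sha1(b).digest()

  s = b'any common suffix S'   # the internal hash states already agree here
  assert hashlib.sha1(a + s).digest() == hashlib.sha1(b + s).digest()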

7
fivesigma 13 hours ago 4 replies      
Is this length extending [1] the already existing Google attack?

[1] https://en.wikipedia.org/wiki/Length_extension_attack

Edit: yes, looks like it is.

As sp332 and JoachimSchipper mentioned, the novelty here is that it contains specially crafted code in order to conditionally display either picture based on previous data (the diff). I can't grok PDF so I still can't find the condition though. Can PDFs reference byte offsets? This is really clever.

Edit #2: I misunderstood the original Google attack. This is just an extension of it.

8
neo2006 9 hours ago 0 replies      
I don't understand the whole collision thing. I mean, a SHA1 is 160 bits, so if you are hashing information longer than that, collisions are a fact; being able to forge a piece of information with constraints is the challenge, and even then, with enough power you end up being able to try all the combinations. What I understand from the reported collision is that they use the PDF format, which can have as much comment/junk data inserted into it as you want, so all you need is enough processing power to find the right padding/junk to insert to get the collision. Am I missing something here?
9
averagewall 2 hours ago 0 replies      
I just tested Backblaze and found that its deduplication is broken by this. If you let it back up the two files generated by this tool, then restore them, it gives you back two identical files instead of two different ones.
10
Grue3 2 hours ago 1 reply      
Doesn't work for me. One of the PDFs always says "Insufficient data for an image" (sometimes for the same image that worked before).
11
grandalf 13 hours ago 1 reply      
I would imagine that a lot of old data is secured by SHA1, which may be available for attack.

Does anyone have any idea about a broad risk-assessment of systems worldwide that might be vulnerable as SHA1 becomes easier and easier to beat?

12
odbol_ 9 hours ago 0 replies      
What's the smallest file you can make collide? Could you make two files collide that are actually smaller than their hashes?
13
reiichiroh 13 hours ago 1 reply      
Practical question: does this generate a "harmful" (harmful to a repo system like SVN) PDF if the flaw in the hashing is enough to crash/corrupt the system?
14
Globz 13 hours ago 4 replies      
Damn that didn't take long to go from $100K to carry out this attack to a single day to offer a website for SHA1 collision as a service...
15
b1gtuna 12 hours ago 0 replies      
Just tried it and it really got me the same sha1... damn...
16
ythn 13 hours ago 5 replies      
I wanted to try this tool, but the upload requirements were so stringent (must be jpeg, must be same aspect ratio, must be less than 64KB) that I gave up. Would be nice if sample jpegs were provided on the page.
9
List of Sites Affected by Cloudflare's HTTPS Traffic Leak github.com
710 points by emilong  1 day ago   176 comments top 39
1
r1ch 19 hours ago 5 replies      
Just got this classy spam from dyn.com. Wonder if they're going through this list emailing every domain contact.

> As you may be aware, Cloudflare incurred a security breach where user data from 3,400 websites was leaked and cached by search engines as a result of a bug. Sites affected included major ones like Uber, Fitbit, and OKCupid.

> Cloudflare has admitted that the breach occurred, but Ormandy and other security researchers believe the company is underplaying the severity of the incident

> This incident sheds light and underlines the vulnerability of Cloudflare's network. Right now you could be at continued risk for security and network problems. Here at Dyn, we would like to extend a helpful hand in the event that your network infrastructure has been impacted by today's security breach or if the latest news has you rethinking your relationship with Cloudflare.

> Let me know if you would be interested in having a conversation about Dyn's DNS & Internet performance solutions.

> I look forward to hearing back from you.

2
actuator 1 day ago 2 replies      
I wrote this(1) script to check for any affected sites from local Chrome history. It checks for the `cf-ray` header in the response headers from the domain. It is not an exhaustive list, but I was able to find a few important ones, like my bank's site.

1: https://gist.github.com/kamaljoshi/2cce5f6d35cd28de8f6dbb27d...
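
The core of that check is tiny; a single-domain version might look like this (editor's sketch, assuming the third-party requests library):

  import requests

  def served_via_cloudflare(domain):
      resp = requests.head('https://' + domain, timeout=10, allow_redirects=True)
      return 'cf-ray' in resp.headers   # header lookup is case-insensitive

  print(served_via_cloudflare('example.com'))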

3
crottypeter 1 day ago 2 replies      
Today I learned that Uber does not have a change-password option once you are logged in. You have to log out and pretend you forgot the password. Bad UX if you don't know.
4
koolba 23 hours ago 2 replies      
That's a wide impact. While any hijacked account is bad, some of these are really bad.

For example, https://coinbase.com is on that list! If they haven't immediately invalidated every single HTTP session after hearing this news this is going to be bad. Ditto for forcing password resets.

A hijacked account that can irrevocably send digital currency to an anonymous bad guy's account would be target number one for using data like this.

5
ig1 18 hours ago 7 replies      
Worth noting this statement by Cloudflare CTO:

"I am not changing any of my passwords. I think the probability that somebody saw something is so low it's not something I am concerned about."

http://www.bbc.co.uk/news/technology-39077611

6
nikisweeting 1 day ago 1 reply      
Aww man I submitted my list hours ago but I guess it never made it past the New page. https://github.com/pirate/sites-using-cloudflare

Original post: https://news.ycombinator.com/item?id=13720199

7
pulls 1 day ago 0 replies      
For what it's worth, as part of work on the effects of DNS on Tor's anonymity [1] we visited Alexa top-1M in April 2016, recording all DNS requests made by Tor Browser for each site. We found that 6.4% of primary domains (the sites on the Alexa list) were behind a Cloudflare IPv4-address. However, for 25.8% of all sites, at least one domain on the site used Cloudflare. That's a big chunk of the Internet.

[1]: https://nymity.ch/tor-dns/

8
Cyphase 1 day ago 1 reply      
You missed the "possibly" in the header.

And the disclaimer right at the top:

This list contains all domains that use cloudflare DNS, not just the cloudflare SSL proxy (the affected service that leaked data). It's a broad sweeping list that includes everything. Just because a domain is on the list does not mean the site is compromised.

9
nodesocket 6 hours ago 1 reply      
This is ridiculous and somewhat irresponsible. This is just a list of domains using CloudFlare. The leak was only active in a set of very specific cases (email obfuscation, server-side excludes, and automatic HTTPS rewrites).

I question Pirate's (https://github.com/pirate) motives for even doing this. Karma? Reputation?

10
vmarsy 18 hours ago 3 replies      
Something I have a hard time understanding is how Cloudflare's cache generator page had access to sensitive information.

Were the two things running in the same process? If they were not, there's no way that the buffer overrun could read another process's memory, right? It would have failed with a segfault-type error.

If so, shouldn't Cloudflare consider running the sensitive stuff in a different process, so that no matter how buggy their caching engine is, it would never inadvertently read sensitive information?

11
jitbit 19 hours ago 2 replies      
Webmasters and App-devs running on CloudFlare. You (at least) have to "force-logout" your users that have a "remember me" cookie set.

At least change the cookie name so the token stops working. For example, in ASP.NET - change the "forms-auth" name in the web.config file

12
Splines 1 day ago 1 reply      
If I have an account on an affected site, but did not interact with the site (via my browser or through some other site with an API call) during the time period when the vuln was live, am I still at risk?
13
AdmiralAsshat 19 hours ago 1 reply      
Authy is on the list. It would be really nice if they confirmed whether they are vulnerable or not, considering they hold all of my 2FA tokens. Otherwise I'll have to re-key the database.
14
jandy 1 day ago 2 replies      
I'm confused by the "not affected" remarks. I thought the issue was that any site which passes data through Cloudflare could have its data leaked by requests to a different site, due to that data being in memory. Have I misunderstood?
15
danjoc 17 hours ago 2 replies      
Is there a "standard" in the works for changing a password? Stuff like this is happening rather too frequently for my taste. I need a tool I can use to update all my passwords everywhere automatically and store the new ones in my password manager.
16
edaemon 1 day ago 0 replies      
This list doesn't appear to include sites that use a CNAME setup with CloudFlare -- i.e. sites on the Business or Enterprise plans that retain their authoritative DNS and use CNAMEs to point domains to a CloudFlare proxy.

There probably aren't many but with something this serious it could be important. I'm not sure how one would go about finding the sites that use the CNAME option. If it helps, they use a pattern like:

 www.example.com --> www.example.com.cdn.cloudflare.net
Hacker News is one such site, but it's listed in the "notable" section (it's not in the raw dump).

17
RidleyL 18 hours ago 0 replies      
I wrote a python script to help check your LastPass database for any potentially affected sites.

https://github.com/RidleyLarsen/cloudbleed_check_lastpass

18
JaggedJax 17 hours ago 2 replies      
In an email from Cloudflare sent out this morning they said:

> In our review of these third party caches, we discovered data that had been exposed from approximately 150 of Cloudflare's customers across our Free, Pro, Business, and Enterprise plans. We have reached out to these customers directly to provide them with a copy of the data that was exposed, help them understand its impact, and help them mitigate that impact.

Does this jibe at all with the Google or Cloudflare disclosures? They are claiming that across all caches they only found and wiped data from ~150 domains; can that be true?

19
jschpp 1 day ago 3 replies      
That list isn't that useful... First of all, there are a LOT of pages hosted by CloudFlare; @taviso acknowledged that in the original bug report (https://bugs.chromium.org/p/project-zero/issues/detail?id=11...).

Furthermore, you can't say which sites were hit by this bug, and simply listing all CloudFlare sites is more or less fearmongering. If you are a verified victim of this bug, CloudFlare will contact you.

Lastly, if you want to mitigate the effects of the attack, just do it... If you want to be absolutely sure that your session keys etc. will remain uncompromised, simply revoke all active session cookies.
20
dikaiosune 1 day ago 0 replies      
I've been tinkering with a Python notebook for a few minutes to try to quickly assess how much of my LastPass vault is affected:

https://gist.github.com/dikaiosune/0ca7829884b3b3f790418f0f1...

Improvements welcome.

One interesting thing: the raw dump that's linked from the list's README doesn't seem to include a couple of notable domains from the README itself, like news.ycombinator.com or reddit.com. I may be mangling the dump or incorrectly downloading it in some way.

EDIT: disclaimer, be responsible, audit how the dump is generated, etc etc etc

21
Wrhector 1 day ago 0 replies      
This list seems to be missing any sites that are using custom nameservers, which would be common on top sites using the enterprise plans. A better way to detect if the proxy is being used would be to resolve the IP and see if it lies in Cloudflare's subnets.
22
luckystartup 19 hours ago 1 reply      
Oh crap. I've entered my banking password into Transferwise quite a few times.

Welp, time to change all my passwords.

23
paradite 17 hours ago 1 reply      
Couldn't find a practical description of who is affected anywhere. Is it just customers of Cloudflare's HTTPS proxy service who are affected, or is anyone using Cloudflare DNS affected?
24
pbhjpbhj 23 hours ago 0 replies      
Do browsers still leak history info (e.g. http://zyan.scripts.mit.edu/sniffly/)? If so, is it possible to have a page show visitors whether they are likely to be affected?
25
pmontra 1 day ago 2 replies      
I have hundreds of passwords in my password manager. That's going to take a week, considering I also have to work.
26
arikrak 19 hours ago 2 replies      
It would be more useful if there was a way to see sites that actually were using the Cloudflare features that caused this bug. A large number of sites use Cloudflare, but few should have been affected by this bug:

> When the parser was used in combination with three Cloudflare features -- e-mail obfuscation, server-side excludes, and Automatic HTTPS Rewrites -- it caused Cloudflare edge servers to leak pseudo-random memory contents into certain HTTP responses.

https://arstechnica.com/security/2017/02/serious-cloudflare-...

27
janwillemb 1 day ago 0 replies      
Thanks for posting and curating this list.
28
base698 20 hours ago 1 reply      
Has Cloudflare fixed the issues? Should I update passwords now or wait?
29
tonyztan 15 hours ago 1 reply      
Just received an email from Glidera, a Bitcoin exchange. This is the first service to ask me to reset my password. I wonder why Uber, NameCheap, FitBit, and many others have yet to warn their users. Is Cloudflare downplaying this?

> Hi [Username],

> A bug was recently discovered with Cloudflare, which Glidera and many other websites use for DoS protection and other services. Due to the nature of the bug, we recommend as a precaution that you change your Glidera security credentials:

> Change your password
> Change your two-factor authentication

> You should similarly change your security credentials for other websites that use Cloudflare (see the link below for a list of possibly affected sites). If you are using the same password for multiple sites, you should change this immediately so that you have a unique password for each site. And you should enable two-factor authentication for every site that supports it.

> The Cloudflare bug has now been fixed, but it caused sensitive data like passwords to be leaked during a very small percentage of HTTP requests. The peak period of leakage is thought to have occurred between Feb 13 and Feb 18 when about 0.00003% of HTTP requests were affected. Although the rate of leakage was low, the information that might have been leaked could be very sensitive, so it's important that you take appropriate precautions to protect yourself.

> The actual leaks are thought to have only started about 6 months ago, so two-factor authentication generated before that time are probably safe, but we recommend changing them anyway because the vulnerability potentially existed for years.

> Please note that this bug does NOT mean that Glidera itself has been hacked or breached, but since individual security credentials may have been leaked some individual accounts could be vulnerable and everyone should change their credentials as a safeguard.

> Here are some links for further reading on the Cloudflare bug:

> TechCrunch article: https://techcrunch.com/2017/02/23/major-cloudflare-bug-leake...
> List of sites possibly affected by the bug: https://github.com/pirate/sites-using-cloudflare/blob/master...

> If you have any questions or concerns in response to this email, please contact support at: support@glidera.io

30
vasundhar 23 hours ago 1 reply      
Unfortunately this seems to include news.ycombinator.com
31
iKenshu 1 day ago 1 reply      
What if I sign in with Facebook or another provider? Should I change my Facebook password or what?
32
jasonlingx 23 hours ago 0 replies      
Do I need to change my cloudflare password?
33
yeukhon 21 hours ago 0 replies      
Would the Internet Archive be able to "cache" the leaks?
34
arca_vorago 19 hours ago 1 reply      
Apparently the root cause was:

  /* generated code */
  if ( ++p == pe )
      goto _test_eof;

"The root cause of the bug was that reaching the end of a buffer was checked using the equality operator and a pointer was able to step past the end of the buffer. This is known as a buffer overrun. Had the check been done using >= instead of == jumping over the buffer end would have been caught."

Detailed timeline:

"2017-02-18 0011 Tweet from Tavis Ormandy asking for Cloudflare contact information

2017-02-18 0032 Cloudflare receives details of bug from Google

2017-02-18 0040 Cross functional team assembles in San Francisco

2017-02-18 0119 Email Obfuscation disabled worldwide

2017-02-18 0122 London team joins

2017-02-18 0424 Automatic HTTPS Rewrites disabled worldwide

2017-02-18 0722 Patch implementing kill switch for cf-html parser deployed worldwide

2017-02-20 2159 SAFE_CHAR fix deployed globally

2017-02-21 1803 Automatic HTTPS Rewrites, Server-Side Excludes and Email Obfuscation re-enabled worldwide"

Seems like a pretty good response by Cloudflare to me.

35
beachstartup 20 hours ago 0 replies      
This is another data point that supports my personal, hare-brained theory that the expectation of privacy on the internet is simply naive, a fool's errand. It never existed, and never will.

This is despite (or maybe because of) my best efforts to secure systems as a major part of my job.

36
djph0826 21 hours ago 0 replies      
Volusion.com
37
amq 1 day ago 1 reply      
The title is misleading (for now). It is just a list of all sites using CF, compromised or not.
38
cromulent 1 day ago 3 replies      
"List of Sites possibly affected"

Sites using Cloudflare, really. However, Cloudflare says that only sites using three features were affected: Email Obfuscation, Server-Side Excludes and Automatic HTTPS Rewrites. [1]

Is this over-estimating the impact, perhaps?

[1] https://blog.cloudflare.com/incident-report-on-memory-leak-c...

39
StavrosK 1 day ago 5 replies      
I would like to point out that, if most sites used two-factor authentication, this leak would be at most a minor inconvenience. Maybe we should push for that more. Just days ago I talked to Namecheap about its horrible SMS-only 2FA and asked them to implement something actually secure; maybe contact your favorite site if they don't have 2FA yet.
10
Online-go.com is now open source github.com
55 points by artursapek  7 hours ago   3 comments top 3
1
Cyphase 5 hours ago 0 replies      
To be clear, it's just the frontend for now. Frontend as in the client-side stuff and the backend stuff required to directly support it. The core backend code that manages the games and what-not is not currently available, though it looks like that might happen eventually.[1]

All that said, I'm glad to see this happen!

[1] https://forums.online-go.com/t/the-online-go-com-user-interf...

2
rayalez 4 hours ago 0 replies      
Absolutely amazing website. Incredibly well done, has a great community, and I've had tons of fun playing on it.

If you want to play go - I highly recommend it!

3
partycoder 4 hours ago 0 replies      
OGS (online-go.com) is great. I am surprised that it hasn't taken over KGS by now, which is, to my knowledge, the largest Western server.

I have tried Pandanet IGS, KGS, Tygem, WBaduk and even obscure ones such as Fly or Die and Go Chat (Facebook messenger bot).

I think OGS offers by far the most frictionless way to start and a more modern UI. It even offers really nice features such as the ability to draw on a board during a review.

11
Bees can train each other to use tools arstechnica.com
182 points by tambourine_man  16 hours ago   39 comments top 8
1
anfractuosity 14 hours ago 1 reply      
Nice, it just goes to show how intelligent they are.

There are so many cool things about bees.

In the paper "Detection and Learning of Floral Electric Fields by Bumblebees" they mention how bees can detect if other bees have harvested pollen from flowers, based on an electric field.

Along with the waggle dance, their functioning in a colony.

Apparently they might let out a little vibration to express surprise when another bee bumps into them https://www.newscientist.com/article/2121275-honeybees-let-o...

Also they can apparently sense the earth's magnetic field - http://web.gps.caltech.edu/~jkirschvink/pdfs/Bees.pdf

2
toothbrush 7 hours ago 8 replies      
When "even" bees are intelligent enough to teach each other skills, i really marvel at the mental gymnastics/machinery that goes into convincing oneself that humans are somehow a class apart from animals, and it's totally okay to eat them after having kept them pent up in atrocious conditions. Makes mental note to be a better vegetarian

(And FWIW: I'm not being morally superior here; I believe that in reality, being vegan is probably the only morally defensible position, but the flesh being weak and all... And in actual fact, probably even that is a tricky position: by surviving, one is probably making the calculated decision (conscious or not) to put one's own survival above the cost of some other's demise.)

3
yawz 14 hours ago 1 reply      
Hobbyist beekeeper, here. One of the most fascinating things about bee communication (IMHO) is not necessarily the two distinct types of dances they do, nor the precision on the distance and direction of the target, but that they do all of that in the pitch darkness of their hive.
4
vcdimension 8 hours ago 1 reply      
There are many advantages to training animals to perform tasks rather than building machines to do them. Animals are far more intelligent than current machines, more robust, well adapted to our environment, don't pollute, require few/zero raw materials (with the associated environmental costs), have low/zero production costs, etc.

I would much prefer to live in an environment of plants and animals than one of concrete and metal.

The bottleneck is in the training, and this is where technology could play a part. Robotics and machine learning algorithms could be employed to improve the training process (for example see here: http://thecrowbox.com/).

5
intrasight 7 hours ago 0 replies      
I'm at a loss for words. First spiders that can strategize [http://thescienceexplorer.com/nature/jumping-spiders-smarter...]. Now bees that can teach tool use.
6
spraak 8 hours ago 0 replies      
I'm not very surprised, but glad that this has been validated. I've always felt that bees are very intelligent.
7
logicallee 14 hours ago 0 replies      
The researchers recorded the following transcript, translated from bees' dance, a few months after introducing git to the colony:

"No you stupid wasp, how many times have I told you, YOU DO NOT REBASE A SHARED BRANCH. Never, ever, ever. If I see you do that again I swear to God I am telling the queen."

8
exBarrelSpoiler 12 hours ago 1 reply      
If only more tech companies could learn from bees when they do on-boarding!
12
Ragic Editable forms with relational data ragic.com
62 points by refik  9 hours ago   27 comments top 13
1
hobofan 8 hours ago 3 replies      
I would really like to see a good tool in the "beyond Google Sheets"-space.

I tried Ragic just now for 10-15 minutes and it is far from simple, despite what their landing page claims. It is nothing at all like a database, and it also is nothing at all like a spreadsheet (and not in a good way).

Edit: Looking at previous discussions about it here on HN, the criticism in this 5 year old comment all still holds true: https://news.ycombinator.com/item?id=3960207

2
tluyben2 1 hour ago 0 replies      
We have been running Flexlists.com[0] for many years as a 'background side project'. Not to make money; it was to scratch an itch, and I still use it a lot. We launched somewhere before DabbleDB, I think, and some others; they all folded, so we never took it further. Ours is trivially simple to use, which I do not find to be the case with others. The interface is somewhat dated and the source code has only been updated for security in the last 5 years. I am going to continue with it soon, as I do believe there is something there, and Flexlists has enough users and fans. Good to see people are still working in this space.

[0] http://flexlists.com

3
collyw 19 minutes ago 0 replies      
Isn't this basically the same as MS Access?
4
chenster 3 hours ago 0 replies      
This particular kind of application has a name: database application builder (DAB). I think that's probably how DabbleDB got its name. Also check out Caspio (http://caspio.com) and ZenBase (http://getzenbase.com). They are similar products in this space.
5
macmac 1 hour ago 0 replies      
DabbleDB anyone? One of those products that really should have been more successful.
6
wmccullough 8 hours ago 2 replies      
> "Over 90% of enterprise IT projects are delivered late."

Due to piss-poor project managers and a lack of business requirements and/or realistic deadlines.

7
mtdewcmu 8 hours ago 2 replies      
This seems to serve the same purpose as, e.g., Access. What is different about this?
8
zellyn 8 hours ago 1 reply      
Is this like dabbledb?
9
fiatjaf 8 hours ago 0 replies      
Many updates, new features and a redesign? Seems very nice.
10
bikamonki 8 hours ago 0 replies      
Your pricing page is not mobile-friendly.
11
sebringj 7 hours ago 0 replies      
More like tRagic based on the comments here.
12
davidascher 8 hours ago 1 reply      
HTTP-only sign-up page? Really? In 2017?
13
etchalon 5 hours ago 0 replies      
How is this better than Filemaker Pro?
13
A curated list of Terminal frameworks, plugins and resources github.com
211 points by febin  16 hours ago   66 comments top 18
1
seanp2k2 15 hours ago 2 replies      
Long-time bash guy, I used to use just git-bash-prompt + a few dozen aliases and functions, but recently got some shell envy after seeing a coworker with powerline on fish. I checked out Oh My Fish and it was really hard to get it to do what I wanted; things like saving a function (since aliases are defined as functions with Oh My Fish) took a few hours to figure out because of how homebrew installs fish on Mac. It was also overly difficult to get ^r reverse search working, $(foo) didn't work, and the random one-liners I'd type didn't work half the time with fish.

I switched to zsh + prezto with powerlevel9k ( https://github.com/bhilburn/powerlevel9k ) and within an hour, I had something awesome. Here's a guide on installing it: http://www.codeblocq.com/2016/09/Pimp-up-iTerm-with-Zsh-and-... and I'd really really recommend powerlevel9k. Nothing against fish, but after a few weeks, it was just too hard to unlearn all my bashfulness, whereas in zsh it just works as it used to + had the extra functionality I got with fish + now is even more awesome and useful thanks to powerlevel9k.

2
gumby 15 hours ago 1 reply      
I spend most of my time in Terminal (well, in Emacs) and don't use much more than I could have had on an AAA (Ann Arbor Ambassador 42x80 terminal). What am I missing out on? This page doesn't say, just lists alternative terminals.
3
dedhed 15 hours ago 3 replies      
I have been using MobaXterm for a while on Windows. I don't see it mentioned very often in lists like this. It could probably fall under multiple headings in this list.

http://mobaxterm.mobatek.net/

4
WalterGR 11 hours ago 5 replies      
I'd love to find a shell + terminal that's aware of the semantics of shell output.

One nice thing about (good) GUI programs and websites is that 'results' are quickly navigable. In a terminal, I'd love to be able to drill down into results of ls; from grep output quickly open a file and jump to that match; etc.

(Shell output can be in any format, but even if it could grok the output of only specific programs/commands (and also their switches) that would be a starting point.)

Does anything like this exist, for any platform? It seems like PowerShell could be a good match, but I don't know anything about its ecosystem.

5
djsumdog 15 hours ago 6 replies      
So many things wrong with this list. Fish stuff in the ZSH section, the FreeBSD package manager in the Linux section, Cygwin is not a package manager (you still need apt-cyg or sage to install things in Cygwin without rerunning setup.exe) ...

Sometimes I see good stuff in these lists, but it looks like this author didn't even really try.

6
potomak 10 hours ago 0 replies      
That list is missing a section for terminal music players. I created one in Haskell some time ago: https://hackage.haskell.org/package/haskell-player
7
jevinskie 11 hours ago 0 replies      
CRT (cool-retro-term) is a nice, free alternative to Cathode and runs on Linux too.

https://github.com/Swordfish90/cool-retro-term

8
jwilk 14 hours ago 1 reply      
Does it mean that "awesome" is no longer cool and every curated list is going to be "sexy" instead?
9
bcg1 10 hours ago 1 reply      
Tilda should not be left off the list.

https://github.com/lanoxx/tilda

10
jasonmorton 15 hours ago 1 reply      
Anyone know if there is any terminal + server-side piece that, like Terminology or iTerm2, lets me plot inline in R or Python over ssh? Linux or Mac.
11
navs 12 hours ago 0 replies      
While technically not a terminal client, Blink is a mosh-supporting SSH application for iOS.

http://www.blink.sh/

Open source, so if you've got yourself an Apple Developer account you can build and run it yourself.

12
mikejmoffitt 15 hours ago 3 replies      
Is there still no way to make iTerm2 refresh faster? When doing quick scrolling or output, it's so noticeably choppy compared to Terminal.app, or even Xterm.
13
tbrake 14 hours ago 1 reply      
Interesting not to see csh/tcsh there. I'm out of the loop when it comes to these things; is it pretty much dead? Did bash 'win'?
14
mastazi 10 hours ago 1 reply      
For Windows, I think Git Bash/MSYS2 are worth mentioning as well.
15
kitd 14 hours ago 1 reply      
Good to see Cmder mentioned for Windows. Does everything I need and looks great.
16
TurboHaskal 13 hours ago 1 reply      
FreeBSD's pkg is listed under Linux.
17
e5an 8 hours ago 0 replies      
No mention of fizsh?
18
NeverTrump 15 hours ago 1 reply      
Nice list, except git is not a shell and PowerShell is not a terminal emulator :)
14
When Dumb Pipes Get Too Smart saurik.com
205 points by saurik  15 hours ago   45 comments top 5
1
z3t4 13 hours ago 6 replies      
I have a 10ms ping to news.ycombinator.com, and a 100ms ping to www.amazon.com. Yet time to first byte is 20% faster to www.amazon.com. What actually happens is my PC connects to Cloudflare, which in turn connects to HN. This is an unnecessary step, and it is highly overrated.
2
syncsynchalt 12 hours ago 0 replies      
Lovely bit of debugging in this article, I really enjoyed it! Somehow a task that would be grueling to do myself is so much more enjoyable when read about.
3
dpc_pw 13 hours ago 1 reply      
We need to switch dumb pipes to be dumb content-addressable p2p pipes with maidsafe, ipfs, dat, or anything like that, and most problems that CDNs are trying to solve would disappear.
4
fapjacks 7 hours ago 1 reply      
> Cloudflare likes to look triumphant.

Yeah, absolutely this. They spin everything they do as some kind of heroic "for the people!" decision even when it's just about cutting costs or not having to solve "hard" problems. One example is DNS "ANY" queries: Cloudflare just decided to toss the standards out because they aren't up to conforming to them. As far as I'm concerned, this Cloudbleed thing is karma, and nobody should believe anything Cloudflare says about itself.

5
lanius 10 hours ago 1 reply      
Has the WebCore scheduler improved since?
15
A Google a Day agoogleaday.com
154 points by bharatkhatri14  15 hours ago   60 comments top 18
1
BickNowstrom 12 hours ago 6 replies      
These days, you cannot have a pub quiz without at least a few teams cheating by using a phone to Google the answers. Knowing trivia off the top of your head seems to have become as useful a skill as calligraphy.

A friend of mine was about to fall for a hoax/scam. And even though they changed the words and meaning of quite a few elements, I was able to pinpoint the exact scam using carefully crafted Google queries.

With Google there is just no way to bullshit people anymore. In the 90s someone would tell a tall story at a birthday, and you'd have to go to the library the next day to verify or discard it. Not anymore.

Someone asked why those old modems made noise, and instead of giving an answer right away, it took all of 15 seconds to find the answer online, much better than I ever could answer it.

I remember my first job skill test. It was multiple choice and you were allowed to use the internet. I answered all questions by Googling keywords from the question, in combination with keywords from each answer, and looking at which combination gave the most results. Answering this way I got a near-perfect score. There were questions about programming languages I hadn't even written a "Hello World" example for.

With all this goodness comes, of course, the danger of relying on Google for all your answers: if it is not on the first page of the results, it is not true. Younger people especially believe a lot of the "facts" they find online. Another danger is using Google to confirm a bias: with so many pages online, there is bound to be a page in the results that agrees with your initial hunch, however incorrect it is.

I participated in the pilot for Google Answers. There were people there who, if the answer was to be found anywhere online, could answer it, no matter their expertise on the subject. Googling well is a valuable skill.

2
danso 14 hours ago 1 reply      
FWIW, if you enjoy this kind of stuff, you might like Daniel Russell's (Google research scientist, creator of A Google A Day) blog on using Google Search for research (and research on how people use search):

http://searchresearch1.blogspot.com/

He regularly posts quirky challenges (What kind of cow is in this picture? Can you see the Farallon Islands from San Francisco, and where should you stand at what time of year to best see them?). It also contains lots of useful information about the state of the query language and the engine, such as which search operators have been deprecated, or which obscure search operators no one seems to know about.

3
squeaky-clean 11 hours ago 2 replies      
Some of these were neat, some were really stupid.

"What do you learn the definition of on page 21 of the 2011/2012 Official Rules of the NBA" was "legal goal" really? It doesn't say that anywhere on the page. "legal field goal" didn't work, "Scoring and Timing" didn't work (the header of the page). None of the other definitions on Page 21 worked, there's a few. The only mention of "legal goal" is in the Index where it points to page 21.

4
Grue3 2 hours ago 0 replies      
The fact that it requires you to type in the exact answer (for its own definition of exact) is annoying. The first question I got was what Hemingway's protagonists are called. Let's say the answer is "A B". Well, the answer I had to type in was "The Hemingway A B". Both "The" and "Hemingway" are mandatory.

Later the question was what musical period the definitions of symphony, sonata, etc. were standardized in. I copy-pasted "C period"; it didn't work, so I tried some other ones. Well, apparently "C" was the correct answer all along.

5
jannyfer 14 hours ago 4 replies      
In a review of Google Home, I was amazed to learn that you can ask "Who is the guy that plays God in a lot of movies" and Google will answer back with Morgan Freeman.

I'd love to try asking the questions from this site to Google Home and see how it does.

6
mamurphy 14 hours ago 1 reply      
With this and https://quickdraw.withgoogle.com/, whether I want to play super-obscure-trivia or draw stick figures, google has me covered.
7
stonesam92 10 hours ago 2 replies      
This reminds me of "The Wikipedia Game" we'd play at school.

You'd all start on one random page, and race to get to some completely unrelated target page, only by clicking through links to other pages.

It was always surprising how few degrees of separation there were between wildly unrelated topics.

8
schoen 5 hours ago 0 replies      
In the early 1990s there was a very conceptually similar game called the Internet Hunt:

https://en.wikipedia.org/wiki/The_Internet_Hunt

An example of the very first one, from August 1992:

http://www.ibiblio.org/history/1sthunt.txt

Google has definitely made these a lot easier, so the questions have had to get a lot harder!

9
giarc 13 hours ago 3 replies      
This seems pretty fun; however, navigation seems a bit difficult. Some links from Google results wouldn't open, and I had to open them in a different tab, then go back to the main tab. Not sure if it's my browser or the game.
10
emodendroket 14 hours ago 1 reply      
Wish it gave me some sense of how my score stacked up to others.
11
gberger 6 hours ago 0 replies      
They should call you out for directly googling the question, as it defeats the point, since you would get the answer only because the exact question was posted on some forum.
12
cryptozeus 11 hours ago 0 replies      
Very hard to use on a mobile phone.
13
wnevets 14 hours ago 1 reply      
I had a trailing space and it said my answer was wrong
14
magic_beans 14 hours ago 2 replies      
What's the point of this? To learn how to use google search?

Who is the intended audience?

15
0003 11 hours ago 0 replies      
Anyone else's comp become a turbine?
16
notatoad 12 hours ago 0 replies      
This sounds like fun, but the first query I did threw up a Firefox "blocked by content security policy" error.
17
jeron 14 hours ago 1 reply      
Are the questions always the same?
18
sksixk 12 hours ago 0 replies      
I remember a game very similar to this (that I used to participate in, and do very poorly at) back around the mid-90s that people used to play online, except you'd use things like Archie and Gopher. I don't remember the details, but someone would come up with questions and you'd have to find the answers online using these tools (before Google).

Anyone remember this game? My recollection is hazy, but I think the questions were sent out periodically and teams would rush to get them all answered first.

16
Wordbank: An open database of children's vocabulary development stanford.edu
141 points by Jasamba  16 hours ago   12 comments top 6
1
aidos 11 hours ago 0 replies      
My wife is a speech and language therapist for under 5s and we have 2 children under 5 ourselves. There are so many techniques she uses without even thinking about it to encourage communication that, left on my own, I would never have known to use. For example, in the really early days, when a child said "blah blah blah blah" I would have been inclined to repeat it, but now I'll say "that's right, an aeroplane!" (Or whatever it is).

Parent-child interaction goes a really long way in child development and if you ever get the chance, it's worth sitting in on a session (whether your child needs extra help or not). A large part of the work my wife does is around enabling parents to assist kids that need more input (through no fault of the parents themselves).

2
minimaxir 14 hours ago 1 reply      
Looks like the interactivity is running on R Shiny, and I'm hitting "License Quota Reached" errors.
3
vsviridov 14 hours ago 1 reply      
Expected the walrus to be the logo due to an obscure meme. Was not disappointed.
4
euph0ria 14 hours ago 1 reply      
Comparing Swedish/Danish/English in the vocab:

http://wordbank.stanford.edu/analyses?name=vocab_norms

Seems like the data sample is too small to infer anything useful for Swedish, but comparing Danish and English is interesting. It seems like Danes outperform, or English kids underperform. It would be interesting to understand the major driver of the effect.

5
mistermann 8 hours ago 0 replies      
Semi-related question: if anyone knows of something similar for physics or chemistry (for a bit older kids), I would appreciate it!
6
isanganak 13 hours ago 0 replies      
Funny how it says English and English (British) in those coloured bubbles :)
18
Uber blocks employees at work from chatting on Blind App businessinsider.com
73 points by mayoralito  4 hours ago   22 comments top 9
1
urahara 3 hours ago 1 reply      
This looks very much like a sign that Uber is again taking its old path of silencing people and problems instead of fixing a toxic culture. Makes me think that they are also preparing to blackmail Fowler.
2
k_sh 3 hours ago 0 replies      
> "Our activity at Uber has gone up 3x since they blocked us on their WiFi," Shin says.

Streisand Effect in 3... 2... 1...

It's 2017. Have we really not learned this lesson yet?

3
retube 3 hours ago 1 reply      
Sounds like there's a pretty simple workaround: just use mobile Internet, not wifi.

(You shouldn't use corporate wifi for a personal phone anyway)

4
omar3550 2 hours ago 0 replies      
Can I somehow register as an Uber employee on the app without actually being one? Would love to read all the crap employees are going through, as lessons learned for when I start my own company (one day). Any ideas? The Blind app requires an uber.com email address. Any employees want to help out a fellow HN'er?
5
Flammy 2 hours ago 1 reply      
First time hearing of Blind, can anyone share their experiences?

(working at a startup so can't just sign up and see it myself...)

6
sfifs 2 hours ago 1 reply      
So employees are trying to chat "anonymously" on a tech company's wifi network? Seems remarkably dumb for tech employees.

It should be assumed as a given that any company or hotel wifi network is monitored, and HTTPS quite possibly MITMed.

7
thewhitetulip 2 hours ago 0 replies      
Even in 2017, we still don't understand that if you block something, it becomes more exciting just for the fact that it was banned. Ignore it and it'll die.
8
passivepinetree 2 hours ago 1 reply      
This is hopefully a prelude to a Streisand-effect-type situation, but my biggest reaction to this came from the ad below the piece: an "article" describing the reason behind the F and J bumps on any keyboard. Does there really need to be an article about that? It seems like common sense, or something you might learn in any typing exercise ever.
9
rhizome 3 hours ago 0 replies      
This will definitely help fix things.
19
How a Mistake Gave Us the Word 'Cherry' merriam-webster.com
103 points by ohaikbai  14 hours ago   61 comments top 13
1
Lerc 12 hours ago 5 replies      
Somewhat relatedly, when my daughter was younger she picked off one of the solidified tendrils that flow down the side of a candle and called it a wack. Obviously candles are made of wacks.
2
fred256 12 hours ago 3 replies      
Seems similar to how "a napron" turned into "an apron". More examples here: https://en.wikipedia.org/wiki/Rebracketing
3
aplusbi 9 hours ago 4 replies      
Kind of like how the word "pirogi" is plural (a single dumpling is a "pirog"), but in English multiple pirog are usually referred to as pirogies.
4
rudolf0 13 hours ago 1 reply      
Now I'd like to know how the French word for "shampoo" (the noun) came to be "le shampooing".
5
nivla 8 hours ago 2 replies      
Similarly, the word mango comes from the Malayalam word "māṅṅa" (pronounced "manga"). European traders mispronounced it into the now-familiar mango.

[1]https://en.wikipedia.org/wiki/Mango#Etymology

6
danieltillett 11 hours ago 1 reply      
I think I am going to start calling cherries cirisapples to annoy my kids.
7
Buge 4 hours ago 0 replies      
An article about a grammar mistake contains a grammar mistake in the first sentence:

>unless we start hacking away them.

8
jameshart 10 hours ago 1 reply      
The same mistake is in the process of giving us the word 'kudo'.
9
DonGuero 12 hours ago 2 replies      
This wouldn't have happened if William the Conqueror was known as William the Conquered.
10
acqq 10 hours ago 1 reply      
The article cites Old English: "ciris" as in "cirisbeam" (the ciris tree) and attributes the error to the "Old North French" variant "cherise", but the word is much older. E.g.

Latin, 1st century AD: Cerasus (AFAIK C is pronounced ch as in chain; Edit: thanks to danans for the correction: ch is the modern pronunciation and k as in king the traditional one, so it's even closer to the Greek)

https://en.wiktionary.org/wiki/cerasus

And even older, ancient Greek:

(pronounced probably like kera-sos):

" Of Anatolian origin. Compare Akkadian "karu""

Of course, Akkadian is the oldest Semitic language for which records exist, at least 4000 years old, i.e. around 2000 BC. Their empire was in a part of today's Iraq -- in the area to which the people who later wrote the Torah (which even later became part of the Old Testament) referred as "the garden of Eden."

The cherries are our direct connection to the mythical paradise.

(And, while I'm on the subject of Eden and fruits, the famous "forbidden fruit" wasn't an apple in the original text; that's a wrong, later interpretation: https://en.wikipedia.org/wiki/Forbidden_fruit#The_Apple )

11
iLemming 7 hours ago 0 replies      
For a moment I thought it was about git and cherry-picking.
12
mixmastamyk 8 hours ago 0 replies      
Abogado (lawyer) --> avocado.
13
kboukadoum 11 hours ago 2 replies      
Pretty neat to look at this Wikipedia link https://goo.gl/kAR1LW and see the descent of a ton of English words!
20
Show HN: JsonTree, a 3.53kb JavaScript tool for generating html trees from JSON maxleiter.github.io
59 points by MaxLeiter  12 hours ago   21 comments top 10
1
brockwhittaker 10 hours ago 0 replies      
You should consider removing all the other functions from the global scope, especially because they have relatively generic names like "generateTree", "toArray", "depth", and "toggleClass".

Consider using an actual class or a closure perhaps?

2
Cheezmeister 10 hours ago 1 reply      
Thanks for sharing! I see your 3.5k and raise (lower?) you 1.6k

https://github.com/cheezmeister/kapok

[I think it does most of what yours does](https://github.com/Cheezmeister/kapok/blob/master/tst/kapok....) (EDIT: Nope, missing URL loading and XSS cleaning!)

3
kc10 11 hours ago 2 replies      
I didn't realize it's a tree until I clicked it. Changing the buttons to + and - signs might help.
4
edko 10 hours ago 0 replies      
I noticed that your json2html function assigns the same id (top) to all non-leaf tree nodes. You might want to fix that.
5
foota 8 hours ago 0 replies      
I definitely expected this to be like a JSON serialization format for HTML.
6
ComputerGuru 9 hours ago 0 replies      
This doesn't work in Safari on iOS. Values don't show up, no tree.
7
Lorin 11 hours ago 1 reply      
Neat, but the generated tree isn't keyboard accessible.
8
kyriakos 9 hours ago 1 reply      
The menu on the linked page does not work on mobile because the "Fork me" banner is overlaid on top of it.
9
frik 3 hours ago 0 replies      
Doesn't work on mobile browser.
10
WhitneyLand 10 hours ago 0 replies      
Doesn't work on mobile? (iOS)
22
Amazon guts affiliate program, cuts fees for electronics in half amazon.com
306 points by Domenic_S  16 hours ago   208 comments top 31
1
codingdave 15 hours ago 1 reply      
I make some money with their affiliate program. Not a lot, just enough to buy ourselves dinner out a few times a month. But I always consider it to be a nice bonus. I know the program can change any time. I know my web traffic can die off. Someone else could build a new site that is better than mine. Or something completely unanticipated could shut down this income. These are all risks that I accept with open eyes.

The danger comes if you are not aware of the risks inherent in your own income. Sometimes it does make sense to let your income have some instability in it, and let someone else control it -- maybe it is a case like mine where it is small enough to not matter. Or maybe it is large enough that it is worth the risk. Just don't let yourself get in a situation where it is large enough that you are living on it, but not so large that the risks are acceptable. Because that is when changes like this will bite you.

2
Domenic_S 15 hours ago 1 reply      
Amazon's deleting volume tiers and adjusting commission ("fee") percents.

It's a massive loss (~50%) for affiliates like Wirecutter that do mostly tech/electronics, and a huge boost for the luxury beauty category.

Current fees: https://web.archive.org/web/20170106214444im_/https://images...

3
oliwarner 14 hours ago 2 replies      
People get a bit bratty when Amazon drops the rates on certain categories. They need to be clear on what the Affiliate program's purpose is. The goal isn't to kick back money to people who link to Amazon, it's there to make Amazon dominant in multiple markets.

They're there now. They have critical mass. They're the first place people search organically for new stuff.

There's no sense in throwing money after sales they'd already get. They're better off using it as a discount to get sales they wouldn't.

4
usaphp 15 hours ago 3 replies      
It will be a big loss for PC tech YouTube reviewers; I know it's a big chunk of their revenue. Amazon already removed a lot of them from the affiliate program because YouTubers used to just tell viewers to bookmark their affiliate Amazon link to support the channel. So a lot of them had to open a new affiliate account to comply with the new rules, and all previous videos were demonetized because of the account suspensions.
5
robbrown451 15 hours ago 1 reply      
I'm not sure I understand where you're getting the "half" figure. I'm trying to compare the linked chart to this https://web.archive.org/web/20170106214444im_/https://images...

and not seeing what is cut in half.

(Also I notice that the new chart says musical instruments are 6%. For electronic musical instruments -- digital keyboards, for instance -- does this mean the fee has gone up from 4%?)

6
startupdiscuss 15 hours ago 3 replies      
Confused. Which is the previous structure?

This: https://web.archive.org/web/20170106214433im_/https://images...

or this: https://affiliate-program.amazon.com/welcome/compensation

If they are going from a volume-based approach to a margin-based approach, that is rational and good for everyone.

(i.e., why pay out more for 1,000 rubber bands, which makes them uncompetitive to sell, and why you should pay out more for that high-end TV).

7
sharkweek 15 hours ago 2 replies      
As an Amazon affiliate who has done quite well with it, this is definitely a gutting.

But... if I'm being honest with myself, it also seems kind of reasonable. I think their original plan was pretty generous. I was kind of expecting this to happen at some point.

8
encoderer 16 hours ago 3 replies      
My heart breaks for the Wirecutters of the world affected by this.
9
TazeTSchnitzel 16 hours ago 2 replies      
The title sounds editorialised (guts), yet there's no article.
10
iopuy 16 hours ago 1 reply      
Would it be possible to post the commissions before the change?
11
TekMol 15 hours ago 0 replies      
Does anybody here know what the profit margins of electronics are?

I have run online shops before (not electronics though), and we happily spent all of the profit margin of an order on trackable advertising, because of a) the lifetime value of the customer, b) the word-of-mouth value of a customer, and c) the untracked sales generated by the advertising.

2.5% of revenue sounds unbelievably cheap to generate an actual trackable order.

12
grandalf 14 hours ago 1 reply      
Interestingly, electronics is the area where I'd recently noticed Amazon was least competitive.

I stopped by the local Microcenter (which, incidentally, has a nice assortment of hobby-oriented electronics items for sale) and they beat Amazon's price on a Samsung EVO SSD by over $20. Since they price match, I got a $3 discount on one of the other pieces of hardware I bought that day.

All in all, the time I spent driving there likely made the savings irrelevant, but I was surprised that they were so much more aggressive on the SSD pricing.

13
14
dkrich 14 hours ago 0 replies      
What I'm curious about is how this will affect sites that rely on affiliate clicks but more for the 24-hour cookie (ie, where somebody buys anything within a day of clicking the link).

I could see a site like the WireCutter getting lots of clicks to Amazon and then the person not buying that product but remembering later, "hey, I forgot that I need dog food." Well, dog food happens to be a 10% commission now, so maybe it isn't as bad as it would seem.

Also, the WireCutter's sister site is the Sweet Home, and I think home goods are now up to a flat 8% rate, so they may not be any worse off.

15
jrs235 11 hours ago 0 replies      
If you think about industry/categorical margins, the new rates more closely reflect them. As others have pointed out, the more appealing prior rates were there to encourage people to "push" and advertise products in categories where Amazon wasn't a dominant player. Amazon is now doing better, so they have let off the gas on those products; they don't need to advertise them as much.
16
MicroBerto 13 hours ago 1 reply      
My business (PricePlow) is obviously affected. However, due to being a niche site, Amazon is not a majority of our revenue, nor is it our top-trafficked store.

My strategy is this: at these commissions in the Health niche, Amazon will no longer be in our "preferred" tier of stores. On March 1, their products will no longer show up on our blog (unless they are the only store with it in stock) -- and the blog gets the vast majority of our traffic.

They will still show up in our main site (where I need to decide whether or not to keep their exclusive buttons), and they'll still be involved in our hot deals and price drop alerts.

Stores need to earn our best visitors, and Amazon is no longer deserving. Surprisingly, they're most often not the best deal on our site anyway, so I don't think anyone will be too upset.

I may try to negotiate my own rates, but I don't think we're big enough for that (not yet, at least). Everything is negotiable when you have legit traffic and other options.

Meanwhile, we've been diversifying our revenue with various industry SAAS services that can be scaled globally. This has been a big focus of mine, knowing that these kinds of things can happen at the drop of a hat.

But at the end of the day, this is still a paycut, and it still hurts. Amazon will ultimately lose more of our traffic for it, and I really don't think they'll even notice this on their bottom line compared to the explosive profits they get from AWS.

Seems like bad PR more than anything.

17
eb0la 14 hours ago 1 reply      
Does anyone know what percentage of all electronics sales goes to Amazon?

Affiliate programs are a good way to gain market share. If they're #1 in sales, then there's no need to spend marketing bucks on it.

18
dawnerd 15 hours ago 1 reply      
Just in time for LTT to get their account back...
19
twodayslate 16 hours ago 5 replies      
It is very hard to get an affiliate account right now. I can't seem to get accepted.
20
johnnypalps 6 hours ago 0 replies      
The real tragedy is that everyone you are likely to hear from on this subject isn't earning real money. The big earners are all on custom rates already and won't discuss terms or earnings.

Reading the Associates discussion forum is the definition of depression. People running sites for many years talking about earning $200 in a month. Please, enlighten us as to your thoughts on the new rate structure!

21
diminish 15 hours ago 7 replies      
is anyone making 4 or more digits monthly revenue from Amazon associates or any other affiliate program?
22
vram22 13 hours ago 3 replies      
Has anyone made more than, say, a few (tens of) dollars a month, via affiliate schemes? I mean in the last year or two. Things might have been different earlier, that is why the condition.

And a related question: is there an affiliate scheme for Amazon India? I had checked a few times earlier for the US-based Amazon affiliate scheme, and IIRC, each time it said that it was only for the US, or not for India.

23
the_watcher 14 hours ago 0 replies      
This is a huge loss for independent reviewers. Sites like The Wirecutter almost certainly would not have existed like the did without it.
24
skyisblue 14 hours ago 0 replies      
This is another hit to publishers who are already struggling with low ad revenue and an evergrowing number of adblock users.
25
eggie5 15 hours ago 0 replies      
my Pinterest spam bot revenues will take a dip :(
26
nnash 15 hours ago 1 reply      
I would imagine that anyone in the space has multiple channels for affiliate revenue and not just Amazon. There are literally dozens of companies in a single niche that you could pick from for most of the affected categories.
27
pasbesoin 16 hours ago 3 replies      
Yet another variation on: Don't be a sharecropper.
28
Animats 15 hours ago 1 reply      
What's the difference between a "video game" (1% fee) and a "digital video game" (10% fee)?
29
FT_intern 13 hours ago 1 reply      
How is the Amazon affiliate commission compared to other online affiliate ecommerce websites?
30
robertcorey 14 hours ago 0 replies      
welp had a web app idea for amazon affiliate kicking around for past year, this is what I get for not implementing.
31
ceyhunkazel 11 hours ago 1 reply      
What is annoying is Amazon pays non-US affiliates only by gift card or by check. Direct deposit is only for US affiliates, which is nonsense. I earn commissions from my web application http://www.jeviz.com .
23
Announcing the first SHA-1 collision googleblog.com
2873 points by pfg  1 day ago   482 comments top 73
1
nneonneo 1 day ago 5 replies      
The visual description of the colliding files, at http://shattered.io/static/pdf_format.png, is not very helpful in understanding how they produced the PDFs, so I took apart the PDFs and worked it out.

Basically, each PDF contains a single large (421,385-byte) JPG image, followed by a few PDF commands to display the JPG. The collision lives entirely in the JPG data - the PDF format is merely incidental here. Extracting out the two images shows two JPG files with different contents (and, unlike the full PDFs, different SHA-1 hashes, since the necessary prefix is missing). Each PDF consists of a common prefix (which contains the PDF header, JPG stream descriptor and some JPG headers) and a common suffix (containing image data and PDF display commands).

The header of each JPG contains a comment field, aligned such that the 16-bit length value of the field lies in the collision zone. Thus, when the collision is generated, one of the PDFs will have a longer comment field than the other. After that, they concatenate two complete JPG image streams with different image content - File 1 sees the first image stream and File 2 sees the second image stream. This is achieved by using misalignment of the comment fields to cause the first image stream to appear as a comment in File 2 (more specifically, as a sequence of comments, in order to avoid overflowing the 16-bit comment length field). Since JPGs terminate at the end-of-file (FFD9) marker, the second image stream isn't even examined in File 1 (whereas that marker is just inside a comment in File 2).

tl;dr: the two "PDFs" are just wrappers around JPGs, which each contain two independent image streams, switched by way of a variable-length comment field.
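
For anyone who wants to poke at the mechanism, here is a rough, self-contained C sketch (my own illustration, not the shattered tooling) that walks a JPG's header segments and prints each marker with its payload length. Run over the extracted images, it shows the COM (0xFFFE) comment whose 16-bit length decides which embedded stream a viewer sees; it stops at start-of-scan, where entropy-coded data begins.

  /* Sketch: list JPG header segments (marker + payload length) up to SOS.
     Assumes a well-formed file. Usage: ./a.out image.jpg */
  #include <stdio.h>

  int main(int argc, char **argv) {
      if (argc != 2) { fprintf(stderr, "usage: %s image.jpg\n", argv[0]); return 1; }
      FILE *f = fopen(argv[1], "rb");
      if (!f) { perror("fopen"); return 1; }
      int c;
      while ((c = fgetc(f)) != EOF) {
          if (c != 0xFF) continue;               /* find the next marker byte */
          int m = fgetc(f);
          if (m == EOF) break;
          if (m == 0x00 || m == 0xFF) continue;  /* stuffing / fill bytes */
          if (m == 0xD8) { puts("FFD8 start of image"); continue; }
          if (m == 0xDA) { puts("FFDA start of scan - stopping"); break; }
          int hi = fgetc(f), lo = fgetc(f);
          if (hi == EOF || lo == EOF) break;
          long len = ((long)hi << 8) | lo;       /* length includes these 2 bytes */
          printf("FF%02X payload %5ld bytes%s\n", m, len - 2,
                 m == 0xFE ? "  <-- comment, skipped by decoders" : "");
          fseek(f, len - 2, SEEK_CUR);           /* skip the payload */
      }
      fclose(f);
      return 0;
  }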

2
m3ta 1 day ago 4 replies      
To put things into perspective, let the Bitcoin network hashrate (double SHA256 per second) = B and the number of SHA1 hashes calculated in shattered = G.

B = 3,116,899,000,000,000,000

G = 9,223,372,036,854,775,808

Every three seconds the Bitcoin mining network brute-forces the same number of hashes as Google did to perform this attack. Of course, the brute-force approach will always take longer than a strategic approach; this comment is only meant to put into perspective the sheer number of hashes calculated.
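
(As a quick check of that figure using the numbers above: G / B = 9,223,372,036,854,775,808 / 3,116,899,000,000,000,000 ≈ 2.96, i.e. just under three seconds of network time per SHAttered-sized effort.)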

3
mabbo 1 day ago 15 replies      
One practical attack using this: create a torrent of some highly desirable content - the latest hot TV show in high def or whatever. Make two copies: one that is malware-free, another that isn't.

Release the clean one and let it spread for a day or two. Then join the torrent, but spread the malware-hosting version. Checksums would all check out, other users would be reporting that it's the real thing, but now you've got 1000 people purposely downloading ransomware from you - and sharing it with others.

Apparently it costs around $100,000 to compute the collisions, but so what? If I've got 10,000 people installing my 1BTC-to-unlock ransomware, I'll get a return on investment.

This will mess up torrent sharing websites in a hurry.

Edit: some people have pointed out some totally legitimate potential flaws in this idea. And they're probably right, those may sink the entire scheme. But keep in mind that this is one idea off the top of my head, and I'm not any security expert. There's plenty of actors out there who have more reasons and time to think up scarier ideas.

The reality is, we need to very quickly stop trusting SHA1 for anything. And a lot of software is not ready to make that change overnight.

4
cesarb 1 day ago 3 replies      
On a quick scroll of the comments, I haven't seen this posted so far: http://valerieaurora.org/hash.html

We're at the "First collision found" stage, where the programmer reaction is "Gather around a co-worker's computer, comparing the colliding inputs and running the hash function on them", and the non-expert reaction is "Explain why a simple collision attack is still useless, it's really the second pre-image attack that counts".

5
lisper 1 day ago 4 replies      
This point seems to be getting re-hashed (no pun intended) a lot, so here's a quick summary: there are three kinds of attacks on cryptographic hashes: collision attacks, second-preimage attacks, and first-preimage attacks.

Collision attack: find two documents with the same hash. That's what was done here.

Second-preimage attack: given a document, find a second document with the same hash.

First-preimage attack: given an arbitrary hash, find a document with that hash.

These are in order of increasing severity. A collision attack is the least severe, but it's still very serious. You can't use a collision to compromise existing certificates, but you can use them to compromise future certificates because you can get a signature on one document that is also valid for a different document. Collision attacks are also stepping stones to pre-image attacks.

UPDATE: some people are raising the possibility of hashes where some values have 1 or 0 preimages, which makes second and first preimage attacks formally impossible. Yes, such hashes are possible (in fact trivial) to construct, but they are not cryptographically secure. One of the requirements for a cryptographically secure hash is that all possible hash values are (more or less) equally likely.

6
necessity 1 day ago 2 replies      
> If you use Chrome, you will be automatically protected from insecure TLS/SSL certificates, and Firefox has this feature planned for early 2017.

No need to wait. The option to reject SHA-1 certificates on Firefox is `security.pki.sha1_enforcement_level` with value `1`.

https://blog.mozilla.org/security/2016/01/06/man-in-the-midd...

Other configs worth doing:

`security.ssl.treat_unsafe_negotiation_as_broken` to `true` and `security.ssl.require_safe_negotiation` to `true` also. Refusing insecure algorithms (`security.ssl3.<alg>`) might also be smart.

7
mate_soos 1 day ago 0 replies      
I am a bit saddened that Vegard Nossum's work, which they used for encoding SHA-1 to SAT, is only mentioned as a footnote. The github code is at

https://github.com/vegard/sha1-sat

and his Master's thesis, whose quality approaches that of a PhD thesis, is here:

https://www.duo.uio.no/bitstream/handle/10852/34912/thesis-o...

Note that they also only mention MiniSat as a footnote, which is pretty bad. The relevant paper is at

http://minisat.se/downloads/MiniSat.pdf

All of these are great reads. Highly recommended.

8
amichal 1 day ago 3 replies      
The linked http://shattered.io/ has two PDFs that render differently as examples. They indeed have the same SHA-1 and are even the same size.

  $ ls -l sha*.pdf
  -rw-r--r--@ 1 amichal staff 422435 Feb 23 10:01 shattered-1.pdf
  -rw-r--r--@ 1 amichal staff 422435 Feb 23 10:14 shattered-2.pdf
  $ shasum -a 1 sha*.pdf
  38762cf7f55934b34d179ae6a4c80cadccbb7f0a shattered-1.pdf
  38762cf7f55934b34d179ae6a4c80cadccbb7f0a shattered-2.pdf
Of course other hashes are different:

  $ shasum -a 256 sha*.pdf
  2bb787a73e37352f92383abe7e2902936d1059ad9f1ba6daaa9c1e58ee6970d0 shattered-1.pdf
  d4488775d29bdef7993367d541064dbdda50d383f89f0aa13a6ff2e0894ba5ff shattered-2.pdf
  $ md5 sha*.pdf
  MD5 (shattered-1.pdf) = ee4aa52b139d925f8d8884402b0a750c
  MD5 (shattered-2.pdf) = 5bd9d8cabc46041579a311230539b8d1

9
anilgulecha 1 day ago 5 replies      
Big things affected:

* DHT/torrent hashes - A group of malicious peers could serve malware for a given hash.

* Git - A commit may be replaced by another without affecting the following commits.

* PGP/GPG -- Any old keys still in use. (New keys do not use SHA1.)

* Software distribution checksums. SHA1 is the most common digest provided (even MD5 for many).

Edit: Yes, I understand this is a collision attack. But yes, it's still an attack vector, as two colliding blocks can be generated now, with one published and widely deployed (torrent/git) and then replaced at a later date.

11
Aissen 1 day ago 0 replies      
I love the fact that there is a tool for detecting any collision using this algorithm: https://github.com/cr-marcstevens/sha1collisiondetection

and it's super effective: The possibility of false positives can be neglected as the probability is smaller than 2^-90.

It's also interesting that this attack is from the same author who detected that Flame (the nation-state virus) was signed using an unknown collision algorithm on MD5 (cited in the shattered paper introduction).

12
korm 1 day ago 1 reply      
[2012] Schneier - When Will We See Collisions for SHA-1?

https://www.schneier.com/blog/archives/2012/10/when_will_we_...

Pretty close in his estimation.

13
0x0 1 day ago 4 replies      
I'm trying to play with this in git. Added the first file, committed, and then overwrote the file with the second file and committed again. But even when cloning this repository into another directory, I'm still getting different files between commit 1 and 2. What does it take to trick git into thinking the files are the same? I half expected "git status" to say "no changes" after overwriting the first (committed) pdf with the second pdf?
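
For what it's worth, git does not hash raw file bytes: a blob id is the SHA-1 of a "blob <decimal size>\0" header followed by the contents, and since the shattered collision depends on an exact prefix, the two PDFs end up with different blob ids and git correctly sees two different files. A minimal sketch reproducing what `git hash-object` computes (assumes OpenSSL's legacy SHA1_* API; link with -lcrypto):

  /* Sketch: compute a git blob id -- SHA-1 over "blob <size>\0" + contents. */
  #include <openssl/sha.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv) {
      if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
      FILE *f = fopen(argv[1], "rb");
      if (!f) { perror("fopen"); return 1; }
      fseek(f, 0, SEEK_END);
      long size = ftell(f);
      rewind(f);
      unsigned char *buf = malloc(size > 0 ? (size_t)size : 1);
      if (!buf || (size > 0 && fread(buf, 1, (size_t)size, f) != (size_t)size)) {
          fclose(f); return 1;
      }
      fclose(f);
      char header[64];
      int hlen = snprintf(header, sizeof header, "blob %ld", size) + 1; /* keep the NUL */
      SHA_CTX ctx;
      SHA1_Init(&ctx);
      SHA1_Update(&ctx, header, (size_t)hlen);   /* the prefix git adds */
      SHA1_Update(&ctx, buf, (size_t)size);
      unsigned char md[SHA_DIGEST_LENGTH];
      SHA1_Final(md, &ctx);
      for (int i = 0; i < SHA_DIGEST_LENGTH; i++) printf("%02x", md[i]);
      putchar('\n');
      free(buf);
      return 0;
  }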
14
SamBam 1 day ago 3 replies      
I'm confused by the "File Tester" at https://shattered.it/

It says "Upload any file to test if they are part of a collision attack."

When I upload either of their two sample collision documents, it says they are "Safe."

15
mikeash 1 day ago 3 replies      
For those of us who are totally clueless about the construction of these hash functions, what is the fundamental flaw in SHA-1 that allows this attack? How do newer hash functions avoid it?
16
mckoss 1 day ago 2 replies      
Computing a collision today costs about $100K from my reading of the paper. So most uses of SHA1 are protecting documents of far lower value, and would not be likely attack targets (today).
17
jasode 1 day ago 5 replies      
>Nine quintillion computations; 6,500 years of CPU; 110 years of GPU

Is there a rough calculation in terms of today's $$$ cost to implement the attack?

18
jeffdavis 1 day ago 1 reply      
Does git have any path away from SHA1?

I know the attack isn't practical today, but the writing is on the wall.

19
jgrahamc 1 day ago 7 replies      
How am I going to explain this to my wife?

Actually a serious question. How do we communicate something like this to the general public?

20
matt_wulfeck 1 day ago 0 replies      
> We then leveraged Googles technical expertise and cloud infrastructure to compute the collision which is one of the largest computations ever completed.

And this, my friends, is why the big players (Google, Amazon, etc.) will win at the cloud offering game. When the instances are not purchased, they can be used extensively internally.

22
koolba 1 day ago 8 replies      
What's the impact on something like git that makes extensive use of SHA-1?

In their example they've created two PDFs with the same SHA-1. Could I replace the blob in a git repo with the "bad" version of a file if it matches the SHA-1?

23
korethr 1 day ago 2 replies      
So, since Git uses SHA-1, does this mean we're going to see a new major version number of Git that uses SHA-2 or SHA-3 in a few years?

I don't expect one overnight. For one, as noted, this is a collision attack, one which took a large amount of computing power to achieve. In light of that, I don't think the integrity of git repos is in immediate danger. So I don't think it'd be an immediate concern of the Git devs.

Secondly, wouldn't moving to SHA-2 or SHA-3 be a compatibility-breaking change? I'd think that would be painful to deal with, especially the larger the code base or the more activity it sees. Linux itself would be a worst-case scenario in that regard. But if it can be pulled off for Linux, then I'd think any other code base should be achievable.

24
userbinator 1 day ago 0 replies      
It's interesting to note that when the first MD5 collisions were discovered a bit over a decade ago, they were computed by hand calculation. Next came the collision generators like HashClash/fastcoll (remember these?) which could generate colliding MD5 blocks within a few seconds on hardware of the time. I wonder how long it will be before the same can be done for SHA-1, because it seems here that they "simply" spent a large amount of computing power to generate the collision, but I'm hopeful that will be reduced very soon.

As for what I think in general about it: I'm not concerned, worried, or even scared about the effects. If anything, inelegance of brute-force aside, I think there's something very beautiful and awe-inspiring in this discovery, like solving a puzzle or maths conjecture that has remained unsolved for many years.

I remember when I first heard about MD5 and hash functions in general, and thinking "it's completely deterministic. The operations don't look like they would be irreversible. There's just so many of them. It's only a matter of time before someone figures it out." Then, years later, it happened. It's an interesting feeling, especially since I used to crack softwares' registration key schemes which often resembled hash functions, and "reversing" the algorithms (basically a preimage attack) was simply a matter of time and careful thought.

There's still no practical preimage for MD5, but given enough time and interest... although I will vaguely guess that finding SHA-256 collisions probably has a higher priority to those interested.

25
rnhmjoj 1 day ago 2 replies      
About tor: if an attacker produces a public key that collides with the SHA-1 hash of someone else's hidden service, then he would still need to generate the corresponding RSA-1024 private key, which is infeasible as of today.

Is this correct?

26
orasis 1 day ago 2 replies      
"Today, 10 years after of SHA-1 was first introduced, we are announcing the first practical technique for generating a collision."

Huh? It's been around a lot longer than 10 years.

27
manwithaplan 1 day ago 1 reply      
Seeing how they ridicule MD5, I think they should have spent a bit more time on the proof PDFs, and have their MD5 digests collide also.
28
Asdfbla 1 day ago 0 replies      
Maybe just writing 2^63 would have been easier to interpret than that huge number in the context of cryptography. (Unless you assume this targets a non-technical audience, which I doubt.)

Pretty impressive, though. And worrying, because if Google can do it, you know that state-level actors have been probably doing it for some time now (if only by throwing even more computing power at the problem).

29
sah2ed 1 day ago 2 replies      
> "Today, 10 years after of SHA-1 was first introduced, ..."

That part from the original article seems to be missing something?

30
jwilk 1 day ago 3 replies      
> How did you leverage the PDF format for this attack?

> A picture is worth a thousand words, so here it is.

> http://shattered.io/static/pdf_format.png

This picture is meaningless to me. Can someone explain what's going on?

31
SadWebDeveloper 1 day ago 0 replies      
It's still quite impractical. I'm sure with some quantum computer or a custom ASIC built by those "super nerds" at the NSA it's possible, but for your general adversary, aka "hackers" (skiddies IMHO), it will be infeasible.

What this means for all of you [developers] is: start new projects without SHA-1 and plan on migrating old ones (if it's totally necessary; normally don't bother unless you use SHA-1 for passwords).

A great resource for those who still don't know which hash to use, or how, is Paragonie: https://paragonie.com/blog/2016/02/how-safely-store-password...

32
mrb 1 day ago 0 replies      
Related: someone claimed a 2.48 BTC (~2800 USD) bounty by using this SHA1 collision data: https://news.ycombinator.com/item?id=13714987
33
ktta 1 day ago 4 replies      
Here's a good blog about how SHA-1 works:

http://www.metamorphosite.com/one-way-hash-encryption-sha1-d....

The biggest risk I see with this is how torrents are affected:

https://en.wikipedia.org/wiki/Torrent_poisoning

There's also a problem with git, but I don't see it being as susceptible as torrents:

http://stackoverflow.com/a/34599081/6448137

34
zurn 1 day ago 1 reply      
Anyone have back of the envelope calculations for the cost of the CPU and GPU time?
35
yeukhon 1 day ago 0 replies      
How do you actually create a collision? The paper is beyond my level of comprehension. Are we going to see someone write up an open source tool that allows one to generate another file with the same hash?
36
divbit 1 day ago 1 reply      
So good timing to have just started working on a sha3 version of git I guess...
37
wfunction 1 day ago 1 reply      
It's kind of odd that over 9 months ago it was known that Microsoft would stop honoring SHA-1 certificates starting from 1 week ago. Anyone know if this is just a pure coincidence? See https://blogs.windows.com/msedgedev/2016/04/29/sha1-deprecat...
38
dingo_bat 1 day ago 1 reply      
> Hash functions compress large amounts of data into a small message digest.

My understanding of crypto concepts is very limited, but isn't this inaccurate? Hash functions do not compress anything.

They have an image too which says "<big number> SHA-1 compressions performed".

Seems weird to see basic mistakes in a research disclosure.

39
mrybczyn 1 day ago 0 replies      
SHA-1 isn't broken until someone makes a multi step quine that hashes to the same value at every stage!

BTW quine relay is impressive: https://github.com/mame/quine-relay

40
tjbiddle 1 day ago 0 replies      
It should also be noted that their example files also have the same file size, in this case 422,435 bytes, after creating the collision - which I find fascinating!
41
mysterydip 1 day ago 2 replies      
Forgive my ignorance, but it seems a solution to collision worries is to just use two hashing algorithms instead of one. We have two factor authentication for logins, why not the equivalent for hashed things?

Give me the sha1 and md5, rather than one or the other. Am I wrong in thinking even if one or both are broken individually, having both broken for the same data is an order of magnitude more complex?

42
polynomial 1 day ago 0 replies      
It appears they are using a 2^63 hash operation attack that has been well known for nearly a decade. (A brute-force collision search on SHA-1 would be about 2^80.)

I wonder why they did not use the 2^52 operation attack that Schneier noted in 2009?

https://www.schneier.com/blog/archives/2009/06/ever_better_c...

43
ratstew 1 day ago 1 reply      
I got a chuckle out of the binary diff. :)

http://i.imgur.com/OmFHELl.png

44
Aissen 1 day ago 2 replies      
Can anyone good with AWS pricing reproduce the $100k figure for one collision? Using EC2 g2.xlarge instances I'm more at $2.8M.
45
bch 1 day ago 2 replies      
I wish there were sample documents, but if one had two computed hashes, would this mitigate this SHA1-shattered flaw? e.g. good_doc.pdf sha1=da39a3ee5e6b4b0d3255bfef95601890afd80709, md5=d41d8cd98f00b204e9800998ecf8427e ? With the sample project I'm looking at (GraphicsMagick) on Sourceforge for example, it provides both SHA-1 and MD5 hashes...
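
A sketch of that dual-digest check (the expected values below are just the example digests from the comment, which happen to be the hashes of empty input, so treat them as placeholders):

    import hashlib

    def digests(path):
        """One pass over the file, feeding both hash states."""
        sha1, md5 = hashlib.sha1(), hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                sha1.update(chunk)
                md5.update(chunk)
        return sha1.hexdigest(), md5.hexdigest()

    expected = ("da39a3ee5e6b4b0d3255bfef95601890afd80709",   # SHA-1
                "d41d8cd98f00b204e9800998ecf8427e")           # MD5

    if digests("good_doc.pdf") == expected:
        print("both digests match")
    else:
        print("MISMATCH: do not trust this file")

Worth noting: for iterated hashes like MD5 and SHA-1, Joux's 2004 multicollision result shows the concatenation is only about as strong as the stronger of the two, not their product; still, no simultaneous MD5+SHA-1 collision is public, so this does mitigate the known attack.
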
46
bobbyyankou 1 day ago 2 replies      
Can someone help me understand what the major distinction is between this accomplishment (SHAttered) and the same team's The SHAppening (2015)?

It looks like they did the same thing or something similar in 2^57.5 SHA1 calculations back then versus 2^63 SHA1 calculations this time.

47
jqueryin 1 day ago 0 replies      
What's funny is Google still promotes SHA-1 in some of their APIs: https://developers.google.com/admin-sdk/directory/v1/guides/...
48
imron 1 day ago 2 replies      
I wonder if there are any 2 single commits on Github from different repositories that have the same SHA1 hash.
49
RyanZAG 1 day ago 4 replies      
Is a 30 day disclosure period really enough for something like this? It's obviously not possible to 'fix' big systems that rely on SHA-1, such as git or GitHub, in only 30 days. What about hardware devices that use SHA-1 as a base for authenticating firmware updates?
50
ianaphysicist 1 day ago 0 replies      
This is one of the reasons it is important to have multiple hash algorithms in use. Even when a collision can be triggered in two systems, it becomes markedly harder to trigger a simultaneous collision in other systems at that same point (payload).
51
kyleblarson 1 day ago 0 replies      
What's the over/under on how long ago the NSA accomplished this?
52
goncalomb 1 day ago 1 reply      
https://security.googleblog.com : "This website uses a weak security configuration (SHA-1 signatures), so your connection may not be private."
53
icedchai 1 day ago 2 replies      
> "10 years after of SHA-1 was first introduced"

wasn't SHA-1 introduced in the 90's?

54
wickedlogic 1 day ago 0 replies      
Would providing multiple SHA-1's from both the whole and N subsections (or defined regions) of the byte stream make this impractical... or at this point is the cost just going to drop and make this not relevant?

Like a NURBS based sudoku multi-hash...

55
RealNeatoDude 17 hours ago 0 replies      
> Google has advocated the deprecation of SHA-1 for many years, particularly when it comes to signing TLS certificates.

Why? Was it in anticipation of this attack specifically?

56
rodionos 1 day ago 0 replies      
57
wapz 1 day ago 0 replies      
I don't know too much about hashing/encryption, but if you salt the sha-1 will this still be able to find a collision?
58
macawfish 1 day ago 0 replies      
So is this why Google asked me to type in my password this afternoon? Cause I was kinda cautious about that, but still did it...
59
e0m 1 day ago 0 replies      
10 million GPUs is not insane when you have a billion dollar security cracking infrastructure budget. Especially when you compare it to the rest of the cyber warfare budget.
60
0xcde4c3db 1 day ago 2 replies      
If you trust a signer, does this attack do anything to invalidate their SHA-1-based signatures? Or is the scenario strictly an attacker generating both versions of the message?
61
pavfarb 1 day ago 1 reply      
Now I really wonder what will happen to the Git we all know and love.
62
jmartinpetersen 1 day ago 0 replies      
Is it coincidental that GPUs on Compute Engine were announced recently? This seems like a nice burn-in test, and its completion should free up resources.
63
Siecje 1 day ago 1 reply      
Is Mercurial impacted?
64
nurettin 1 day ago 0 replies      
"as google, we spent two years to research a way of generating sha-1 collisions and made quintillions of computations to generate an example" <- not very convincing or practical. It's like those japanese animes where the nerdy kid boasts about having computed your every move.
65
donatj 1 day ago 2 replies      
So from a security standpoint if my hash was a sha-1 concatenated to an MD5, how long would it be before they found a collision?
66
pix64 1 day ago 0 replies      
Is there any merit to using two hashing algorithms simultaneously particularly if the algorithms are very different in nature?
67
Lord_Yoda 1 day ago 1 reply      
As a consumer, how do I identify sites which are vulnerable? What should I do to protect my data?
68
madhorse 1 day ago 1 reply      
"But MD5 is still okay right?" -Laravel Developer
69
kazinator 1 day ago 1 reply      
> In practice, collisions should never occur for secure hash functions.

That is mathematically impossible when reducing an N bit string to an M bit string, where N > M.

All hashes have collisions; it's just a question of how hard they are to find.
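
A toy demonstration of that point: truncate SHA-1 to 32 bits and random inputs collide after roughly 2^16 tries, per the birthday bound (for the full 160 bits the same math gives ~2^80, which is what "hard to find" means here):

    import hashlib
    import itertools
    import os

    seen = {}
    for tries in itertools.count(1):
        msg = os.urandom(8)
        d = hashlib.sha1(msg).digest()[:4]  # keep only 32 of the 160 bits
        if d in seen and seen[d] != msg:
            print(f"collision after {tries} tries: {seen[d].hex()} vs {msg.hex()}")
            break
        seen[d] = msg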

70
yuhong 1 day ago 0 replies      
identical prefix, not chosen prefix. I was more interested in an SHA-1 collision ASIC.
71
Zhenya 1 day ago 0 replies      
It's telling/ironic that the google security blog will not display the text without javascript enabled.
72
bekimdisha 1 day ago 0 replies      
but ... but .... half of the web is no longer secure? ... :D
73
mtgx 1 day ago 1 reply      
Never forget: when Facebook, Twitter, and Cloudflare tried to slow down SHA-1 deprecation:

https://www.facebook.com/notes/alex-stamos/the-sha-1-sunset/...

https://blog.twitter.com/2015/sunsetting-sha-1

https://blog.cloudflare.com/sha-1-deprecation-no-browser-lef...

I think Microsoft tried to do it too early on, but eventually agreed to a more aggressive timeline.

24
Google Cloud Platform is the first cloud provider to offer Intel Skylake googleblog.com
277 points by rey12rey  19 hours ago   183 comments top 24
1
timdorr 18 hours ago 2 replies      
Just for reference, since you don't choose your processor explicitly on GCP, but instead choose your zone with homogenous processors, here is their current processor/zone layout: https://cloud.google.com/compute/docs/regions-zones/regions-...

Since some GCP engineers are watching: Presumably we'll see some new zones to provide these processors, or will it be a limited release within existing zones? And if so, will you be moving away from homogenous zones in the future?

2
zbjornson 18 hours ago 2 replies      
I've been benchmarking these against Haswell and Broadwells. Despite being 300 MHz slower, we're getting between 5 and 45% faster benchmarks on linear algebra functions that we run a lot, even without doing much work to tailor to AVX512 instructions yet.

The cache is also a whopping 56 MB.

3
boulos 18 hours ago 2 replies      
It's been awesome to see our Skylakes rolling in over the past several weeks. I personally have been waiting nearly 10 years for AVX-512 ever since playing with LRBni.

Disclosure: I work on Google Cloud (and helped a bit in our Skylake work).

4
boulos 15 hours ago 1 reply      
There are a few threads asking about various features (SGX, TSX, etc.) so I want to make a top-level comment: we're not ready to share more today (sorry).

Disclosure: I work on Google Cloud.

5
jscipione 17 hours ago 3 replies      
Why is it important to offer Intel Skylake on cloud platforms? Is there some specific processor extensions present in Skylake that make them particularly compelling in a cloud environment for a particular industry or a particular set of needs?
6
Johnny555 14 hours ago 0 replies      
Amazon's c5 (Skylake) series shouldn't be far behind...

https://aws.amazon.com/about-aws/whats-new/2016/11/coming-so...

7
jhgg 16 hours ago 0 replies      
I wonder if Skylake would offer a material improvement for our workload. We don't necessarily use AVX-512, but we do use a heck of a lot of CPU resources on the current architecture. We are a python/elixir shop.

Great job GCP team!

8
lightedman 18 hours ago 4 replies      
I'd rather wait for Ryzen. You won't know which Skylake processor you're getting - the gimped one or the non-gimped one. AMD tends to keep their features consistent across the line.
9
qhwudbebd 14 hours ago 1 reply      
Any sign of IPv6 support on the horizon? Slightly embarrassing in 2017...
10
youdontknowtho 15 hours ago 1 reply      
Did Intel actually enable the TSX extensions in Skylake? If I'm not mistaken, they shipped it in the last couple of generations but disabled it after release. (Something like that?)

It's something that I've wanted to play with for some time. It's cool that GCE has them available as a service.

11
nodesocket 4 hours ago 0 replies      
Would a typical load balancer/web server running NGINX doing SSL termination see an improvement switching to Skylake?
12
tambourine_man 9 hours ago 1 reply      
Constructive criticism:

Your calculator page is unusable on mobile due to fancy "material" form filling.

https://cloud.google.com/products/calculator/

13
gcp 15 hours ago 2 replies      
Whenever I want to try GCP, during signup I get stuck at "Account type Business" and the need to enter a VAT number.

It hints that there are Individual Accounts, but I see no way to set it to that?

14
chillydawg 17 hours ago 1 reply      
Side question: are these extensions available in the desktop (i7) parts? Wanting to test out some optimisations for some code I have.
15
ctz 17 hours ago 1 reply      
No mention of SGX; a major Skylake feature for cloud computing. Is it enabled? Is it accessible?
16
johansch 18 hours ago 1 reply      
It would be quite nice if they actually advertised the particular type of CPU core you rent, rather than some abstract unit of computation. Or at least gave some kind of performance baseline.
17
chaosfox 18 hours ago 1 reply      
>In our own internal tests, it improved application performance by up to 30%.

this post would have been interesting if they had included those tests.

18
kierank 18 hours ago 2 replies      
Are these E5-skylakes?
19
mtgx 18 hours ago 2 replies      
Hopefully it will be the first to offer the much cheaper AMD Ryzen/Naples, too.
20
Cyph0n 17 hours ago 2 replies      
Oh, so now I can't voice my opinion until I've been "in the business" for x amount of years? Yeah, nice try.

Another commenter already brought that issue up, but thanks for pointing it out again. I still think that it's quite silly to claim that Ryzen Rev. A may end up being a paperweight based on a mistake that took place a decade ago. Whatever floats your boat, I guess.

And from what I read, it seems like it was an extreme edge case, so the TLB error was triggered only during specific workloads. Sucks to be AMD back then.

21
stuckagain 16 hours ago 3 replies      
I have worked both for AMD and for AMD's one-time largest customer. I know exactly what I am talking about. You, on the other hand, are talking out of your butthole.
22
fabrigm 13 hours ago 0 replies      
A Google day is not responsive
23
mikecb 18 hours ago 0 replies      
I think it's funny that big Xeon upgrades seem to always occur approximately 8 months after big Power architecture blog posts.
24
Mo3 16 hours ago 6 replies      
Hahaha. Meanwhile I've been running 2 machines with Skylake and a combined 24 cores/48 threads, 256GB DDR4, 6x512G SSDs, unmetered 1Gbit/s public and 2.5Gbit/s internal for over half a year for a combined $150 total, in complete privacy and in full control of my hosts... go dedicated, people.
25
A Queer Taste for Macaroni publicdomainreview.org
30 points by pepys  11 hours ago   1 comment top
1
donretag 9 hours ago 0 replies      
A different take on macaroni: The Macaroni in Yankee Doodle Is Not What You Think https://news.ycombinator.com/item?id=12356070
26
Cloudflare and FastMail: Your info is safe fastmail.com
148 points by jsingleton  12 hours ago   79 comments top 14
1
nawtacawp 10 hours ago 2 replies      
Love fastmail -- I have six domains. I can send/receive from whatever address I want.
2
joe_developer 2 hours ago 2 replies      
FastMail scores pretty well on this comparison of email providers based on privacy:

https://thatoneprivacysite.net/simple-email-comparison-chart...

3
hackuser 11 hours ago 2 replies      
Would one of the apparently many happy FM subscribers share details of what kind of security they provide?

(It bears repeating: No email can be very secure.)

4
mxuribe 11 hours ago 1 reply      
Now that is how a company proactively communicates with its userbase and the public at large! Honest information that still inspires confidence in their platform as well as their support. Well done FastMail! Kudos!!
5
uladzislau 11 hours ago 3 replies      
I'm wondering if FastMail provides better deliverability of incoming emails (I have issues with lost emails in my free Gmail account)?

Also, if something goes wrong, is there a mitigation plan in place to recover data and restore the user's access?

That's another major concern - losing access to the account and not being able to recover it - because Gmail has no support.

6
kevindqc 11 hours ago 1 reply      
Is it better to use two different domains for API and static content then, in order for cookies to only be sent to the API domain/subdomain? That way, if there are requests to the static content that's served by a CDN (i.e. Cloudflare), they won't contain sensitive cookie information?
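
That's the usual reasoning behind a "cookieless" asset domain, yes. The key is scoping the cookie when it's set; a stdlib sketch (the domain names are hypothetical):

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["session"] = "opaque-token"
    cookie["session"]["domain"] = "api.example.com"  # not .example.com
    cookie["session"]["secure"] = True
    cookie["session"]["httponly"] = True

    # Sent only with requests to api.example.com; asset requests to
    # cdn.example.com then carry no session cookie at all.
    print(cookie.output())
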
7
agentgt 3 hours ago 0 replies      
I have been exploring other email services. I was checking out ProtonMail recently, as speed isn't my top priority. Anyone like ProtonMail? I'll have to give FastMail a spin soon.
8
temp246810 8 hours ago 1 reply      
I had high hopes for fastmail and it's an awesome product, just not quite ready for me yet.

I'm not yet at a point where I need to lug my reading glasses everywhere - provided I can adjust font sizes on my phone.

The fastmail app is just a wrapper over the mobile site and you can't adjust the font size. The font as it is is tiny and unusable for me.

I contacted support and while they were courteous and prompt, they basically said too bad and refused to give me a refund, even though I had purchased the year subscription just a few days before.

Still have good things to say about them, just wish the app were more accessible.

9
ivm 8 hours ago 3 replies      
What amazes me about FastMail is how fast their web UI is, both on desktop and mobile. Especially compared to Gmail, but it's also more responsive than Slack or Trello.
10
rb666 1 hour ago 0 replies      
Moved to Fastmail from Google last month, love it!
11
_RPM 11 hours ago 2 replies      
Fuck. I use cloud flare for MX records to ensure I don't completely rely on FastMail for everything.
12
toomuchtodo 11 hours ago 1 reply      
Thanks FastMail folks. I still use Gmail primarily (multiple labels per message feature), but I pay for an account so that you're still around when I need you to be.
13
qrbLPHiKpiux 11 hours ago 0 replies      
Thank you FM. A happy subscriber
14
ocdtrekkie 12 hours ago 6 replies      
For when a FastMail guy shows up here: Thanks for the note on this. And hugs, because I love you.

- Happy customer since November.

27
VR social productivity app Bigscreen raises $3M with Andreessen Horowitz techcrunch.com
67 points by febin  14 hours ago   21 comments top 10
1
ChicagoBoy11 7 hours ago 1 reply      
I have never used Bigscreen (just had access to a Rift for a couple weeks) but I'm not surprised by this investment. When I played around with it, despite the several gaming options available, the most compelling experience I had in VR was just a pretty mundane casino game where I played poker. That above all else sold me on the potential of VR. As long as people think about VR just as immersive gaming, the potential is going to be limited. But what the casino VR experience taught me was that even something completely mundane like playing a game of cards could be far more immersive and more entertaining in a VR setting; if I had 5 friends over at home with me, I would've much rather plugged into this beautiful virtual casino with them than dealt with shuffling cards at home in real life. Can't wait to get my hands on my own VR device to see everything that Bigscreen has to offer!
2
jacquesc 10 hours ago 1 reply      
After wasting money on 2 other VR desktop apps, I was shocked that Bigscreen was free and 10x better than the other ones.

Hope they can put the investment to good use. I'm definitely not opposed to paying them when the final version is released.

3
netinstructions 12 hours ago 2 replies      
I can't wait for VR resolution to increase so things like this work really well.

From my experience using the Oculus dev kit 2 there was too much of a "screen door" effect and it was hard to read text on a virtual monitor vs reading text on a real life monitor. It wasn't practical to use a virtual monitor to say, write code or surf the web.

But resolution will only get better! At that point things will get fun, and I can see people eschewing monitors for VR "monitors".

4
venti 7 hours ago 0 replies      
This reminded me of a project by Sun Microsystems called Project Wonderland. They created a virtual world in which you could attend meetings and look at slides together with other avatars although this was not in VR.

Here is a demo video: https://youtu.be/-CFOGDBFKrk

5
return0 10 hours ago 0 replies      
They're doing what Second Life should have done long ago. I wonder if they have a platform for games.
6
theaustinseven 12 hours ago 1 reply      
I've used bigscreen before, and while it might be hard to read small text, it is still a really nice app. I don't use the "multiplayer" feature, but what is really nice is to switch up my work environment at home from time to time. It also eliminates a lot of distractions.
7
pazimzadeh 11 hours ago 2 replies      
In a virtual environment wouldn't you want a program like Blender to have a 3D interface?

And are couches and living-room environments a skeuomorphic ornament to help the transition to VR?

8
callumprentice 11 hours ago 2 replies      
Does Bigscreen let you view modern web content in VR? If so, what are they using under the hood to do that?
9
pmcpinto 10 hours ago 0 replies      
Interesting to see a VR company working fully remote
10
gfody 8 hours ago 0 replies      
I hope they're working on the street protocol for the metaverse!
28
Code.mil An experiment in open source at the Department of Defense github.com
350 points by us0r  20 hours ago   144 comments top 21
1
engi_nerd 19 hours ago 14 replies      
This is a huge battle I am in the middle of fighting right now. I am working on a project that is extremely late and we are having all kinds of political pressure put on us by very senior people. Meanwhile their damn IA staff won't approve any of the tools or hardware that I need to help us get the job done.

One huge obstacle to open-source anything in DoD is the attitudes of their information assurance professionals. I have been told by numerous DoD IA people that "Open Source is bad because anyone can put anything in it" and "We'd rather have someone to call." I understand the second point -- we honestly don't have the time to run every last issue to ground and it's probably better if we do have some professional support for some of our most important tools. But the first just boggles my mind.

But the IA pros are, as a group, schizophrenic, because somehow people are getting things past them anyway. The system I'm working on has Python as a build dependency. The devs are creating reports using Jupyter notebooks.

Basically the DoD needs to stop being so damn obstinate about open source.

2
dkhenry 20 hours ago 2 replies      
I love seeing this kind of work done. Not because it's going to radically change the underlying technology, but because the air cover a project like this provides can enable so many government coders who get shut down by a first-tier manager telling them they can't use open source components or can't open source their code. It might seem silly, but just getting the projects out in the open improves their hygiene more than any other single factor.
3
austincheney 18 hours ago 2 replies      
Speaking as a long time US soldier here is how the military perceives code:

* There is no copyright and plagiarism doesn't exist. Internally to the military everything is libre to the most maximum extreme. While people do get credit for their work they have no control over that work and anybody else in the military can use their work without permission.

* Service members and employees of the military are not allowed to sue the military. As a result software written by the military has no need to disclaim a warranty or protect itself from other civil actions.

* Information Assurance protections are draconian. This is halfway valid in that there are good monitoring capabilities, and military information operations are constantly under attack like you couldn't imagine. The military gets criminal and script-kiddie attacks just like everybody else, but it also gets sophisticated multi-paradigm attacks from nation states. Everything is always locked down all the time. This makes using any open source software really hard unless you wrote it yourself or you work for some advanced cyber security organization.

4
lloydde 19 hours ago 2 replies      
No one wants yet another license.

Is there an explanation about why Unlicense is not appropriate? Or what it would take for an Unlicense derivative to meet the legal requirements? Could the laws be changed in small ways to allow US Government employees to more fully participate in open source?

"The Unlicense is a template for disclaiming copyright monopoly interest in software you've written; in other words, it is a template for dedicating your software to the public domain. It combines a copyright waiver patterned after the very successful public domain SQLite project with the no-warranty statement from the widely-used MIT/X11 license." http://unlicense.org/

I like how other commenters have included other successful US.gov and specifically DoD open source projects such as BRL-CAD and NSA's Apache Accumulo. And the DoD Open Source FAQ is interesting and something I haven't seen before: http://dodcio.defense.gov/Open-Source-Software-FAQ/

Open source and US.gov participation reminds me of what happened with NASA's Nova. It was pretty sad that when OpenStack became relevant in the industry, it seemed to cause a panic at NASA and they pulled completely out of OpenStack development. Instead of NASA being able to help the project stay focused on being opinionated enough to be generally useful (out of the box), NASA was too afraid about the perception of competing with proprietary commercial interests. (It was nice to see last year, all these years later, that NASA's Jet Propulsion Laboratory is now a user again, having purchased Red Hat OpenStack.)

5
rectang 20 hours ago 2 replies      
The NSA open sourced what became Apache Accumulo years ago, so that government org has made peace with the copyright issue.

The DoD, though, is still trying to feel its way around. There seem to be some lawyers there who are very hard to convince. For years, they've been asking to have various licenses and CLAs modified and we've been telling them no.

Here's their latest request for the Apache License 2.1:

http://markmail.org/message/eueu4rzlbpe2ugcj

6
zo7 18 hours ago 0 replies      
My only bit of experience working on a DoD-related project was a huge turn-off from doing any more work in that space in the future, because they were resistant to approving any open source software. The development mindset on the project was to re-implement everything (including some tricky algorithms we were using) because it was unreasonable to expect any timely approval, even if it's a feature from the current version of a library that was already approved for an older version. I don't see the reasoning with it, since if anything open source is more secure because you know exactly what is going on inside of it, compared to closed source which may be from a trusted source but you have no idea what it's really doing under the hood.

Hopefully this helps push things in the right direction, although I'm not optimistic.

7
brilliantcode 18 hours ago 6 replies      
Not only is helping the defense industry downright immoral, it's a waste of talent.

Just think back to why you studied computer science or coding. I hope it wasn't to help build tools to spy on your friends & families. I hope it wasn't to help engineer destructive weapons that are dropped on innocent civilians.

Fuck code.mil, fuck lockheed martin.

edit: I turned down VC money a while ago because I discovered they had previously sold a company to a Lockheed Martin affiliate. Downvote all you want, but I'm not some spineless piece of shit that will throw out principles and morals for it. I love making money but it's not worth losing your compass or soul over.

8
brudgers 19 hours ago 2 replies      
BRL-CAD has been an open source US Department of Defense project for many years. It is architected with the *NIX philosophy of chaining small single-purpose tools... The exception that proves the rule? Its own version of Emacs.

It highlights a unique aspect of Federal Government developed software: it's public domain rather than licensed based on copyright law. This facilitates reuse but complicates contribution by outside developers.

https://brlcad.org/

https://brlcad.org/d/about

9
imroot 20 hours ago 1 reply      
It'll be interesting to see the intersection of this and forge.mil (which was/is the DoD's implementation of SourceForge and associated services). About 5 years ago, there was a fair amount of Open Source Software being run in DISA to support the branches and the software that they wrote, but there was little open-sourcing of that software, even amongst the individual branches of service (the Marines might write something that the Army could use, but there were political or other factors that precluded that from happening).
10
wyldfire 18 hours ago 0 replies      
> This can make it hard to attach an open source license to our code.

It's not clear to me why this is necessary/desired. Is it because of contribution to existing works protected by copyright or something else?

From the OSI's FAQ [1]:

> What about software in the "public domain"? Is that Open Source?

> There are certain circumstances, such as with U.S. government works ... we think it is accurate to say that such software is effectively open source, or open source for most practical purposes

What problem does this license aim to solve?

[1] https://opensource.org/faq#public-domain

EDIT: ok this comment [2] clears things up a bit. AFAICT It's specifically regarding a mechanism to permit foreign contributors while allowing them to disclaim liability.

[2] https://github.com/deptofdefense/code.mil/issues/14#issuecom...

11
lewiscollard 17 hours ago 0 replies      
> Usually when someone attaches an open source license to their work, theyre licensing their copyright in that work to others. U.S. Federal government employees generally dont have copyright under U.S. and some international law for work they create as part of their jobs. In those places, we base our open source license in contractrather than copyrightlaw.

> ...

> When You copy, contribute to, or use this Work, You are agreeing to the terms and conditions in this Agreement and the License.

I do not see how this is enforceable, or that it even makes sense, any more than it would make sense for me to take, say, a NASA photo and slap my own terms on it. If it's in the public domain, there's no ownership and no 'or else' to back a contract setting licensing terms.

The alternative is that I'm misunderstanding this license, of course. Where am I going wrong?

12
xemdetia 20 hours ago 2 replies      
Am I missing something here or is there nothing associated with this initiative other than 'please check our LICENSE agreement?'
13
ryanmaynard 20 hours ago 1 reply      
It appears some of the 18F crew are behind this. I'm interested to see what unfolds in this repo.
14
magicmu 19 hours ago 2 replies      
On one hand it's always cool to see increased adoption of open source, but it strikes me as more than a little subversive for the DoD to adopt an open source methodology. I can't help but see the appropriation of an inherently equitable and socialist means of sharing innovation (FOSS) by a violent, exclusionary, and globally oppressive regime to be a step in a very wrong direction.
15
_lex 19 hours ago 1 reply      
It sounds like there's a space for a company that simply validates these issues and supports opensource software, for customers like DOD. I'd expect that such a company could charge each customer quite a bit, and that each customer will want pretty much the same verification of the same libraries, with additional work only needed as new stuff gets requested. Thoughts?
16
kogus 16 hours ago 4 replies      
I have never worked on code intended for military use. From my layman's point of view, it seems like DoD code would either be "the most boring legacy CMS you can imagine" or "top secret missile guidance AI systems". The former isn't interesting. The latter should probably stay closed-source.

Is there any DoD code that is both interesting and suitable for public consumption?

17
noobermin 11 hours ago 0 replies      
It makes a lot of sense for Gov't funded IP to not have a copyright attached to it. I feel similarly for gov't funded research. Of course, this doesn't include things that should be export controlled for national security reasons.
18
cosinetau 16 hours ago 0 replies      
I did a senior research project with a DoD contractor at my university in my last semester. It was a lot of fun, and we got exposed to a handful of tools and practices these parties use. I'm very excited at the prospect that maybe some of them will become free. Kudos DoD!
19
rmc 18 hours ago 0 replies      
Wonder if they will have a code of conduct.... :P
20
rkeene2 19 hours ago 1 reply      
There's also forge.mil, which has existed for a while but requires a TLS client certificate to access.
21
clarkenheim 20 hours ago 1 reply      
Thinly veiled publicity stunt by the Department of Defence here.
29
120K distributed consistent writes per second with Calvin fauna.com
76 points by zenithm  14 hours ago   45 comments top 9
1
g0del_was_wr0ng 2 hours ago 0 replies      
Including your 9x write amplification in the number of "consistent writes" doesn't count -- like at all. I'm amazed nobody called you out on this yet.

You're doing 3k batches per second with 4 logical writes each, right? So that is at most 3-12k writes per second using the way that every other distributed database benchmark and paper counts.

Or otherwise - if you continue counting writes in this special/misleading way - you'd have to multiply every other distributed db benchmark's performance numbers by a factor of 3-15x to get an apples-to-apples comparison.

The 12k batched writes/sec through what I assume is a Paxos variant is still pretty impressive though! Good to get more competition/alternatives for ZooKeeper & friends!
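
For what it's worth, the parent's numbers do reconcile with the headline if the 9x comes from replicating each logical write three ways in both the log and the storage layer (an assumption; the post doesn't break it down):

    transactions_per_sec = 3_330   # committed batches in the benchmark
    logical_writes_per_txn = 4
    amplification = 9              # assumed: 3 log replicas x 3 data replicas

    print(transactions_per_sec * logical_writes_per_txn * amplification)
    # -> 119880, i.e. the ~120k/sec headline from ~13k logical writes/sec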

2
itp 13 hours ago 3 replies      
This seems cool, and I sincerely wish them nothing but success. That said, I had a major sense of déjà vu while reading this post -- I worked at FoundationDB prior to the Apple acquisition, when we published a blog post with a very similar feel:

http://web.archive.org/web/20150325003241/http://blog.founda...

I'm not trying to make a comparison between a system I used to work on and one that I frankly know little to nothing about; rather, I'd suggest that building a system like this just isn't enough to be compelling on its own.

3
web007 12 hours ago 1 reply      
This description is very misleading.

120,000 writes per second is accurate, talking about actual durable storage (disk) writes. But it's only 3,330 transactions per second, which should be the number that a user cares about.

I don't have proper data and I'm a bit rusty, but I feel like Cassandra could blow that away if you set similar consistency requirements on the client side (QUORUM on read, same for write?). Am I understanding this correctly, or does Fauna/Calvin give you something functionally better than what C* can do?
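
For reference, the C* setup described would look roughly like this with the DataStax Python driver (a sketch; the address and schema are placeholders):

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    session = Cluster(["127.0.0.1"]).connect("demo")

    # QUORUM on both read and write gives R + W > N, so a read sees the
    # latest acknowledged write: strong consistency per key, at some
    # latency cost.
    write = SimpleStatement(
        "INSERT INTO kv (k, v) VALUES (%s, %s)",
        consistency_level=ConsistencyLevel.QUORUM,
    )
    session.execute(write, ("key1", "value1"))

    read = SimpleStatement(
        "SELECT v FROM kv WHERE k = %s",
        consistency_level=ConsistencyLevel.QUORUM,
    )
    row = session.execute(read, ("key1",)).one()

The functional gap is that quorum reads/writes give per-key consistency, not multi-key ACID transactions; the latter is what Calvin's global transaction log buys.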

4
qaq 6 hours ago 3 replies      
Maybe I am missing some special point, but a decent PG box will do 1,000,000+ TPS vs the 3,000+ TPS here. When pgXact lands it will do close to 2,000,000 TPS. So reading all the posts about the amazing new db "X" that can do about N times less than PG on a multi-node cluster, I get confused about why the numbers are being presented as some sort of achievement.
5
zenithm 14 hours ago 1 reply      
This is a new one to me...the referenced paper is here: http://cs.yale.edu/homes/thomson/publications/calvin-sigmod1...

How does this algorithm compare to whatever Google Spanner does?

6
imownbey 13 hours ago 2 replies      
"Calvin's primary trade-off is that it doesn't support session transactions, so it's not well suited for SQL. Instead, transactions must be submitted atomically. Session transactions in SQL were designed for analytics, specifically human beings sitting at a workstation. They are pure overhead in a high-throughput operational context."

Is this specifically for distributed SQL only? I think there are some scalable SQL systems that don't support sessions either.

7
olegkikin 9 hours ago 0 replies      
2011: Benchmarking Cassandra Scalability on AWS - Over a million writes per second

http://techblog.netflix.com/2011/11/benchmarking-cassandra-s...

Also a single SSD from 2015 is rated at 120K writes per second:

PM1725: http://www.samsung.com/semiconductor/global/file/insight/201...

8
rystsov 11 hours ago 1 reply      
Is it possible to download fauna to play with it on my own?
9
lngnmn 12 hours ago 1 reply      
Consistent writes to permanent storage, or it didn't happen.
30
Securing Browsers Through Isolation versus Mitigation medium.com
6 points by PretzelFisch  45 minutes ago   2 comments top
1
lightedman 39 minutes ago 1 reply      
All that work when I just block all ad networks in my router with a site whitelist, block Javascript, and block Flash.

Even my Windows 2000 laptop is essentially bullet-proof. I don't need all that nonsense just to read my typical news sites, and as an additional bonus the router whitelist puts a stop to Windows Update ignoring the utterly useless core Windows HOSTS file and stops it from doing anything further to my Windows 7 install.
