hacker news with inline top comments - 30 Jul 2016
1
America uses stealthy submarines to hack other countries' systems washingtonpost.com
31 points by Jerry2  30 minutes ago   4 comments top 4
1
nstj 10 minutes ago 0 replies      
Did you know that the subsea fiber optic cables which the submarines hack are only 17 mm thick? [0] There's a great companion story at Ars from a while back that's worth a read.

[0]: http://arstechnica.com/information-technology/2016/05/how-th...

2
gormo2 7 minutes ago 0 replies      
>But despite the rising prominence of Russian hackers in this news cycle and Chinese hackers before that it's worth pointing out that the United States has grown fairly proficient in cyberspace, too.

How am I not surprised that, in an article about US espionage, the American state media makes sure to remind us yet again how bad, bad, bad the Russians are.

3
benevol 6 minutes ago 0 replies      
Edward Snowden revealed a long time ago that the NSA also taps the undersea communications cables for mass surveillance (which obviously includes US citizens).
4
chinathrow 5 minutes ago 0 replies      
Assume all oceanic cables rigged. The CIA has been doing this for decades.

Added: https://www.amazon.com/Blind-Mans-Bluff-Submarine-Espionage/...

2
Habits of highly mathematical people medium.com
131 points by CarolineW  4 hours ago   39 comments top 8
1
maroonblazer 2 hours ago 5 replies      
Couldn't this also be titled "Habits of highly rational people", with mathematics simply being one application?

I'm not a mathematician and was miserable at math in school, but I apply these habits in the business world every day. They help me cut through a lot of crap that comes from other people's sloppy/lazy thinking.

> Anyone who has gone through an undergraduate math education has known a person (or been that person) to regularly point out that X statement is not precisely true in the very special case of Y that nobody intended to include as part of the discussion in the first place. It takes a lot of social maturity beyond the bare mathematical discourse to understand when this is appropriate and when it's just annoying.

I don't disagree, but would argue that the far more common problem is people - not just mathematicians, mind you - not considering definitions enough, which ultimately leads to confusion and/or misunderstandings, and consequently additional, unnecessary cycles spent in discussion about "what do you really mean?"

2
louprado 8 minutes ago 0 replies      
> The most common question students have about mathematics is "when will I ever use this?"

The author then goes on to give 6 semi-abstract reasons as to why abstract reasoning matters. Since this question is most likely to be asked by a child, it is code for "how can math skills make me money or get me a job?" Upon hearing the author's answers, 13-year-old me would conclude, "it doesn't, and math is useless, much like poetry (which I now value, BTW)". A child who isn't thinking in this practical way is already academically minded and doesn't need an answer. "When will I ever use this?" It's obvious! "In next year's math class".

It is sad that a student can leave high school having taken almost no classes that ever make math less abstract. The most beautiful moment in my life was my freshman college physics class. It first filled me with joy, followed by anger, because I almost didn't go to college, and there was no reason I wouldn't have understood this course at 13. Same goes for finance.

So answers should focus on applications, like modeling the simple Newtonian physics of a satellite in orbit, or predicting the RPM at which a 4-stroke engine red-lines based on max-stress specs. Or show them useful shortcuts like the Rule of 72 and, if they are curious, show a derivation. Show examples of ratios/percentages, how fractals can simulate a landscape, or examples of the Fibonacci series in nature, etc.
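
For the curious, the Rule of 72 is just the doubling-time formula rounded to a divisor-friendly number. Solving for the number of periods n it takes money to double at rate r per period:

    (1+r)^n = 2 \;\Rightarrow\; n = \frac{\ln 2}{\ln(1+r)} \approx \frac{0.693}{r}

so 72 is used instead of 69.3 because it is close, it slightly compensates for ln(1+r) being a bit below r, and it divides evenly by 2, 3, 4, 6, 8, 9 and 12. At 6% interest: 72/6 = 12 years to double (the exact answer is about 11.9).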

3
auggierose 45 minutes ago 1 reply      
Very good article. I might add that a further pitfall in being "highly mathematical" is when you assume that the person you are communicating with is also "highly mathematical" and therefore has similar habits. That's not true MOST OF THE TIME and can therefore lead to severe misunderstandings.
4
mrcactu5 1 hour ago 1 reply      
#6 scaling the ladder of abstraction

This is one I have a tough time explaining to students who are so focused on the task at hand they don't feel there's time to sit back and reason in generality.

5
javier2 2 hours ago 3 replies      
7. Being precise.

The precision you hold yourself to through a rigorous approach to a problem is also very valuable and is the reason I think calculus is very important.

6
haasn 1 hour ago 0 replies      
Just reading those 6 bullet points reminds me very, very strongly of the kind of discussions I have with my friends (all of whom are CS students).
7
geertj 3 hours ago 5 replies      
Only six habits, I would have hoped for a seventh one :)
8
fibo 28 minutes ago 0 replies      
mathematician here, so true
3
Example Driven Development wilfred.me.uk
6 points by jsnell  24 minutes ago   discuss
4
Israel Proves the Desalination Era Is Here scientificamerican.com
449 points by doener  15 hours ago   225 comments top 28
1
CydeWeys 12 hours ago 17 replies      
I've never been as afraid of a freshwater shortage as a lot of other people seem to be, precisely because it is exactly the kind of shortage that is solved by market forces. Freshwater getting scarce? OK, so the price goes up. Now there's more incentive to conserve water by replacing fixtures with low-flow variants, letting your lawn die (why do you really need one in a desert anyway), and using more efficient irrigation methods in agriculture. The price of water still keeps going up? Then the price of meat goes up because growing crops to feed animals becomes more expensive, and people eat less meat. Meanwhile, the increased price makes desalination more profitable, and more plants come online.

I don't see it as anything close to a civilization-ending threat. Seawater is abundant. Freshwater can be made cheaply enough. We are extravagantly wasteful with water as it is, and that can be fixed easily enough as it becomes more expensive.

2
dingleberry 11 hours ago 5 replies      
I've used a reverse osmosis unit for years. A new unit is less than $100 USD; it paid for itself in months. Maintenance is cheap.

My RO has kinda low efficiency: 4 units of water in, I get 1 unit of clean, drinkable water; the other 3 units are "waste" (don't believe what socmed says, it's not dirty). The waste water is used for laundry, the aquarium and gardening. My fishes and vegs are thriving.

To test purity, just plunge the ohm meter probes into a cup of RO water. If the resistance is around 1 MOhm, it's pretty pure. Mountain/mineral water bought from the store is about 150 kOhm. After drinking RO water for a while, store-bought water tastes slightly salty.

My house pumps water from the ground (well pump). For a family of 6, I need about 1 m3 of water a day; that's about half an hour of a 500 W pump. An RO unit (40 W) runs about 3 hours for a 5 gal bottle. So, give or take, 1/2 kWh a day, or about 15 kWh per month. That's about $1 or $2 for my water needs per month.
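
To sanity-check that arithmetic, here it is spelled out in a few lines of Python (using the figures above, which are rough estimates rather than measurements):

    # Back-of-the-envelope check of the daily/monthly energy use described above.
    pump_kwh_per_day = 0.500 * 0.5   # 500 W well pump running ~30 min for 1 m^3
    ro_kwh_per_day = 0.040 * 3.0     # 40 W RO unit running ~3 h per 5 gal bottle

    daily_kwh = pump_kwh_per_day + ro_kwh_per_day
    print(f"{daily_kwh:.2f} kWh/day, ~{daily_kwh * 30:.0f} kWh/month")
    # -> 0.37 kWh/day, ~11 kWh/month: the same ballpark as the "1/2 kWh a day,
    #    about 15 kWh per month" estimate once losses and rounding are allowed for.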

If I must buy raw water (such as when living in an apartment), it's about $1 per m3, so about $30 per month; however, I get drinkable water too. RO units are portable. I saved $28/mo in water expenses after moving out of the apartment.

Flashing back from memory: I used to lug three 5 gal water-cooler bottles from stores to my apartment every 3 days (I had 3 bottles). It was not pleasant and wasted time (go to the store, queue, lug heavy bottles back).

The RO unit has a relay, so you don't have to sit and wait for it to fill. It automatically cuts power to the pump when its internal clean-water tank is full.

3
azernik 14 hours ago 3 replies      
In Israel, it's mostly a triumph of politics over economics - Jordan, Israel, and the PA have cooperated in developing desalination plants [1] in part in order to defuse conflicts over water. Water allocations have historically been a zero-sum competition in the region, so when there's a chance (however expensive) to just throw money at the problem to make it go away, everyone's willing to play along.

[1] http://www.worldbank.org/en/news/press-release/2013/12/09/se...

4
abalone 14 hours ago 3 replies      
> Desalination used to be an expensive energy hog, but the kind of advanced technologies being employed at Sorek have been a game changer. Water produced by desalination costs just a third of what it did in the 1990s.

How much of that price drop is due to lower energy prices? How sustainable is the power source?

Even with significant efficiency improvements (3X), shifting from local freshwater sources to desalination would have massive energy requirements. [1] For such a bold claim ("the Desalination Era is Here"), this article has a rather lightweight examination of the environmental impacts.

[1] https://en.wikipedia.org/wiki/Desalination#Energy_consumptio...

5
pauldprice 31 minutes ago 0 replies      
This is a fantastic development. Tying climate change and water shortages to civil unrest, regional war, immigration disasters, and world security problems is something the US presidential candidates are failing to understand, or at least to communicate effectively.

This article was a hard read due to lack of editing. I hope they update it with fewer typos and cut-and-paste errors. Peer review, people, at least.

6
bluejekyll 4 hours ago 1 reply      
I don't see a lot of discussion around the brine. From previous reading, brine will eventually create areas in seas where fish and other life start dying off because of the extreme salt content.

This article mentions the brine "is just pumped back into the Mediterranean", but don't they need to be concerned about killing fisheries?

7
aphextron 14 hours ago 6 replies      
Where is the power coming from? Desalination can only be as efficient and sustainable as the power source. It's not surprising if Israel can build massive desalination plants dependent on cheap fossil fuel power.

This is just trading water now for more CO2 added to the atmosphere, worsening the underlying problem.

8
adventureartist 13 hours ago 2 replies      
For some perspective: LA County uses roughly 50 million cubic meters of water per year. The largest Israeli desalination plant produces roughly 225 million cubic meters per year. The entire state of California uses roughly 50 billion cubic meters of water per year. Very rough numbers.
9
mabbo 14 hours ago 2 replies      
I've heard that just as big as their desalination efforts are their water reuse efforts. All that sewage we work so hard to get rid of? Turns out that's still fresh water. Sort of. It's just got some other stuff in it is all.
10
dvirsky 3 hours ago 0 replies      
> One of the driest countries on earth

Average rainfall in Tel Aviv is 528 mm/year; that's about 50 mm less than Jerusalem - and London. Now, a lot of the country is a desert, but even Beersheba gets an average of 400 mm. Only the far south is really dry.

Not that Israel didn't need its desalination system, it might be the only good and/or competent thing our government has done in recent years, but it's not a very dry country in its inhabited areas.

11
lyonlim 13 hours ago 0 replies      
I wanted to understand how Singapore's desalination capacity (known here as our fourth tap for water sources) compares with this. The numbers are quite mind-blowing at the scale Israel is doing desalination.

Our current desalination capacity, with 2 plants, amounts to only 100 million gallons per day, which currently meets 25% of our needs [1].

[1] https://www.pub.gov.sg/watersupply/fournationaltaps/desalina...

12
jmspring 13 hours ago 1 reply      
The article states --

"Desalination used to be an expensive energy hog, but the kind of advanced technologies being employed at Sorek have been a game changer. Water produced by desalination costs just a third of what it did in the 1990s. Sorek can produce a thousand liters of drinking water for 58 cents. Israeli households pay about US$30 a month for their water similar to households in most U.S. cities, and far less than Las Vegas (US$47) or Los Angeles (US$58)."

But I saw nothing about where the energy comes from, the type of source, the scale. I'd like to compare this with local proposals, etc.

One of the biggest desal issues is the cost of conversion, and things like the source and its proximity play a huge factor.

13
manachar 14 hours ago 2 replies      
"Water shoots into the cylinders at a pressure of 70 atmospheres and is pushed through the membranes, while the remaining brine is returned to the sea."

I've heard this brine can have a pretty negative ecological impact, I wonder how that could be mitigated.

14
Reason077 7 hours ago 2 replies      
"Their counterparts in Syria fared much worse ... the wells ran dry and Syria's farmland collapsed in an epic dust storm. More than a million farmers joined massive shantytowns on the outskirts of Aleppo, Homs, Damascus and other cities in a futile attempt to find work and purpose."

To what extent is the current situation in Syria influenced by environmental changes, I wonder?

Unemployed farmers -> Social instability -> Civil Unrest -> War?

15
yiyus 5 hours ago 0 replies      
Another example worth checking is Isla de Hierro, in the Canary Islands (Spain). Not only are they using desalination to produce most of the fresh water they need, but the desalination plants and the whole island run 100% on renewable energy.
16
ksec 11 hours ago 0 replies      
On the subject of water:

> Sorek can produce a thousand litres of drinking water for 58 cents.

I am guessing this is the cost to produce and not the final cost to homeowners. How much further down can we reduce this cost? I am guessing this is close to the limit of efficiency; any more reduction will have to come from a cheaper energy source. Since nuclear power requires lots of water, would the two make sense working together?

Most countries lose water through leaky pipes. The UK loses around 20%; the US is a little above 10%. Is there any innovation that can shrink this number? I don't see why this can't be in the 98%+ region. What is stopping it?

17
vmateixeira 6 hours ago 0 replies      
What about climate change and any other possible drawbacks?

Shouldn't this be considered, bearing in mind that we're bringing water to.. where it's not supposed to exist, hence influencing a microclimate and causing climate change?

18
SixSigma 3 hours ago 0 replies      
But ...

The Tampa Bay Desalination Plant produces 25 million gallons per day of drinking water to provide 10% of the region's needs. [1]

America's largest seawater desalination plant provides 50 million gallons (189 million liters) of drinking water for the San Diego area each day - 8% of needs [2]

[1] http://www.tampabaywater.org/tampa-bay-seawater-desalination...

[2] http://www.japantimes.co.jp/news/2015/11/02/world/florida-fl...

19
ZoeZoeBee 13 hours ago 0 replies      
Cape Coral, Florida has been using an RO desalination plant to provide drinking water since the 1970s; currently they produce 18 million gallons a day.

http://fwrj.com/techarticles/0909%20FWRJ_tech2.pdf

20
sathishvj 9 hours ago 0 replies      
"water can be a bridge" - :-) Nicely worded.
21
HillaryBriss 14 hours ago 3 replies      
> Sorek can produce a thousand liters of drinking water for 58 cents.

In Los Angeles, the DWP tier 1 retail price for a cubic foot is very, very roughly 5 cents, or roughly $1.75 per 1000 liters. And DWP's prices are going up (for a host of reasons).
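
The unit conversion behind that figure, spelled out for anyone who wants to check it (the ~5 cents per cubic foot input is the rough number above):

    # Convert a per-cubic-foot water price to a per-1000-liter price.
    LITERS_PER_CUBIC_FOOT = 28.317
    price_per_cubic_foot = 0.05  # USD, DWP tier 1, "very, very roughly"

    price_per_1000_l = price_per_cubic_foot / LITERS_PER_CUBIC_FOOT * 1000
    print(f"${price_per_1000_l:.2f} per 1000 L")  # -> $1.77, close to the $1.75 quoted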

22
ronjitg 11 hours ago 1 reply      
What about the environmental costs of desalination though? The article doesn't mention them, but they are significant.
23
andrewclunn 11 hours ago 0 replies      
And here Sim City had me thinking this was commonplace when I was a kid.
24
orionblastar 12 hours ago 1 reply      
As we know, when a water source dries up in a Middle Eastern nation, the people who need it move to a different location, and some get desperate. This leads to more refugees for other nations to take on, and a small fraction join a terrorist organization to survive. If this technology can bring fresh water to Middle Eastern nations, it can help fight terrorism and end the refugee problem, as people would have no reason to flee if plenty of water and food, as well as jobs, are available.
25
duncan_bayne 9 hours ago 0 replies      
I've said this for years here in Australia, when people talk about a "water shortage" - there's no such thing. What there is, here at least, is a cheap power shortage and a water infrastructure shortage.
26
known 5 hours ago 0 replies      
Thank you, Israel; Please also amend https://en.m.wikipedia.org/wiki/Marriage_in_Israel
27
sandworm101 13 hours ago 1 reply      
Honestly, I had always assumed Israel was much more advanced than this article points out. In the 90s I was a kid in the Middle East. We had a "sweet water" tap in the kitchen for drinking. Everything else was recycled. We didn't flush our toilets with drinkable water. And low-flow appliances were the norm. I find it shocking that Israel hadn't started such measures until 2007.
28
srameshc 14 hours ago 4 replies      
This technology could be a boon to a country like India, where a huge population relies on rain and some of the reservoirs are at about 40% despite decent rainfall this year. On another note, will desalination and human consumption of seawater balance out the rising sea level?
9
How to design parabolic, hyperbolic, elliptical reflectors for 3D printing stratnel.com
15 points by krisraghavan  3 hours ago   2 comments top 2
1
chris_va 8 minutes ago 0 replies      
If you had a specific design in mind, it might actually be more satisfying to generate the STL triangles directly. You can fit your parabola in 2D (by, say, computing the reflection angle to a fixed focal point), then just rotate and dump out STL; a minimal sketch of this follows below.

The wiki has a nice intro to the subject:

https://en.wikipedia.org/wiki/Parabolic_reflector
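
Here is that sketch in Python (the focal length, radius and mesh resolution are made-up illustration values): sample the profile z = r^2/(4f) along the radius, sweep it around the axis, and emit the resulting triangles as ASCII STL. Normals are written as zero since most slicers recompute them from the vertex winding.

    import math

    def parabola_z(r, f):
        # Height of the profile z = r^2 / (4f) for focal length f
        return r * r / (4.0 * f)

    def write_parabolic_dish(path, focal_len=50.0, radius=100.0,
                             n_rings=40, n_segs=120):
        # Write a parabolic dish as a surface of revolution in ASCII STL.
        def vert(i, j):
            r = radius * i / n_rings
            a = 2.0 * math.pi * j / n_segs
            return (r * math.cos(a), r * math.sin(a), parabola_z(r, focal_len))

        tris = []
        for i in range(n_rings):
            for j in range(n_segs):
                j2 = (j + 1) % n_segs
                v00, v01 = vert(i, j), vert(i, j2)
                v10, v11 = vert(i + 1, j), vert(i + 1, j2)
                if i > 0:  # the innermost ring collapses to the apex point
                    tris.append((v00, v10, v01))
                tris.append((v01, v10, v11))

        with open(path, "w") as fh:
            fh.write("solid dish\n")
            for a, b, c in tris:
                fh.write("  facet normal 0 0 0\n    outer loop\n")
                for x, y, z in (a, b, c):
                    fh.write(f"      vertex {x:.6f} {y:.6f} {z:.6f}\n")
                fh.write("    endloop\n  endfacet\n")
            fh.write("endsolid dish\n")

    write_parabolic_dish("dish.stl")

Note this writes only the reflective surface, not a printable solid; for printing you would still offset the surface to give it wall thickness and close the rim.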

2
duke360 12 minutes ago 0 replies      
OK for the 3D printing of the "structure", but then how do you make it reflective? Do you use a regular aluminization chamber or some other technique? And what precision can you achieve to validate the prototype?
10
China's Cheating Husbands Fuel an Industry of Mistress Dispellers nytimes.com
82 points by credo  3 hours ago   61 comments top 9
1
ripitrust 1 hour ago 5 replies      
I doubt that this will ever work in countries other than China, because women (and marriage) in other countries are mostly protected by law, while in China women are not so well protected. Also, culturally speaking, it is very hard for a divorced woman to get married in China, while it is easy for a divorced man. People think that "divorce" somehow depreciates the value of a woman, while "divorce" means nothing (sometimes even a positive thing) for a man. Also, after marriage, Chinese women will not focus mostly on work or career promotion but rather on family, so in the long term they are 1) financially attached to the husband and 2) emotionally attached to the husband. This makes them want to "fix" the marriage rather than abandon it, even if the fix is superficial and (maybe) temporary.

But as more and more young women (born in the 80s and 90s) get married, this kind of issue may be mitigated, because young women tend to be much better educated and wealthier.

2
rdslw 2 hours ago 9 replies      
Hard to believe this is something more than 'wannabe business PR' plus movie PR.

China's female/male ratio is heavily unbalanced. There are a LOT of men on the market for women to choose from. Also, almost no foreign women marry Chinese men, in contrast to Chinese women, who marry a lot of foreigners.

1+2 means China is a market where men are in a bad position and women have plenty of fish to choose from - the opposite of a situation that would create such services as described in the article.

Source: https://en.wikipedia.org/wiki/Missing_women_of_China

According to 2012 figures from the National Bureau of Statistics, China's sex ratio at birth (the number of boys born for every 100 girls) was as high as 118.

3
sanxiyn 2 hours ago 2 replies      
The key paragraph, in my opinion:

"One response to marital infidelity is divorce. But divorce can be costly, especially for women. Aside from the social stigma that falls more heavily on women, family property and finances in China tend to be registered in the husband's name. A divorced woman can find herself homeless, adding to the pressure of taking measures to save the marriage."

4
shubhamjain 50 minutes ago 0 replies      
Another point that is missing here: in many cultures, women are reluctant to consider any fault in their husbands. Cheating? Fault of the mistress. Domestic violence? Her own fault. Coming from India, I have seen this happen countless times, where women would never see anything wrong with their spouse, even in the worst cases.

This may seem horrifying, but it is often the result when women are brought up to be good, obedient wives, to be great mothers, and to religiously do household duties.

5
kazinator 11 minutes ago 0 replies      
Laugh; "mistress disspelling" is a time-consuming and expensive remedy which focuses on the symptom of the infidelity. After weeks and months of manipulating the mistress, assuming it works, the husband may just find another one. Back to square one.

What if he already has several? Do you hire three counsellors with three strategies to dispel all of them? That's going to get really expensive, fast.

6
Illniyar 2 hours ago 4 replies      
What's the point though? Wouldn't the husband just end up with a different mistress?

I mean, they do not resolve the problems in the marriage that led to the situation. Do these women believe it's a one-time thing? Or that that specific mistress is special?

7
matt_wulfeck 53 minutes ago 0 replies      
> Weiqing eventually ended the affair, she said, by persuading the other woman to take a higher-paying job in another city. "I don't care how that woman is living now," Ms. Wang said. "I just feel relieved that my husband is back."

This makes me sad because the issue is not "fixed", it's just lost one of its symptoms.

If you speak to enough people who have been married for a "long time" you realize they have been through some really hard stuff. It takes an extreme amount of work and forgiveness from both sides. I can't picture success with just one party being interested in staying together.

8
sverige 2 hours ago 1 reply      
But how will this work in, say, LA or New York? A friend of mine in NYC had a lovely wife who divorced him after discovering his serial philandering. He said there were young women throwing themselves at him all the time. (He's wealthy.)

The stigma isn't as strong here as it is in China, either. And he ended up paying for the divorce.

On the other hand, she's not as well off financially as she had been. It was emotionally difficult for her as well, but the root of that problem was deeper than the symptom.

Not sure that this service would work here.

9
jogofogo 2 hours ago 1 reply      
.
11
GIMP 2.9.4 and our vision for the future girinstud.io
112 points by ashitlerferad  10 hours ago   50 comments top 7
1
phkahler 57 minutes ago 0 replies      
Still waiting. They need to prioritize 3 things IMHO:

1) GEGL integration - this was claimed to be 80 percent done in 6 weeks (several years ago). Updating core code and libraries should be done quickly, not spread out over years concurrently with other development.

2) Update to GTK3. How can a flagship OSS program still be using a many-years-old GUI toolkit?

3) Wayland support. This will be easier with GTK3 and is still slightly future-looking, but I'm writing this on a Wayland desktop, so it won't be long before GIMP on the whole is completely built on outdated technology.

I know all of these are in the works, but it seems like a nice sprint could get each one done in a month or two at this point. Yet here we are, seeing another blog post about anything but these...
2
imurray 4 hours ago 5 replies      
Gimp 2.8 frustrated me: I could no longer open a .png, edit it and easily resave it back to .png. The Gimp developers knew better and made the UI strongly favor saving as .xcf, which makes sense in some use-cases, but not mine.

I found this fix, which made quick uses of GIMP less painful for me: http://shallowsky.com/software/gimp-save/

I'm assuming the new save behavior persists in Gimp 2.9.x, but I don't know.

3
Kjeldahl 1 hour ago 0 replies      
Be aware: if you recommend GIMP to people on Macs with retina screens, they are going to be very disappointed. TL;DR: retina screens on Macs simply aren't supported properly (everything you do will be at a minimum of 2x scale). See https://medium.com/@kjeldahl/gimp-and-inkscape-on-retina-mac... for more details.
4
pritambaral 6 hours ago 1 reply      
The action search feature is nice. I like that more software is getting search across all menu entries.

I'd have really liked Ubuntu's HUD menu search to have become a standard on the Linux desktop; adding search across menus would then be much easier for software - most wouldn't need any additional code.

5
lnanek2 1 hour ago 1 reply      
Really not a fan of this. In a link he says he took the code for saving/exporting and split it apart: http://girinstud.io/news/2015/09/improving-the-export-proces...

The only way I ever use GIMP is through forked builds that let you save as whatever you want without the annoying save/export distinction, so they seem to be making it tougher for the people who make good forks of their bad software to do their job.

6
qwertyuiop924 4 hours ago 1 reply      
GIMP is a handy tool: I use it for a lot of my image editing needs, for when very basic tools won't do. I rarely need the power of PS, and can't afford it anyway (although I do miss that magnetic lasso: GIMP's magic scissors don't quite have a good enough algorithm). All of this talk about the future is exciting.

However, yes, GIMP is quite rarely an acceptable replacement for PS, and I find this unlikely to change any time soon.

Also, GAP is really, REALLY, REALLY awkward and unwieldy to use. I would recommend Synfig Studio for animation instead.

7
oggedintocom 8 hours ago 7 replies      
GIMP is an icon of open-source software, but do any professional designers use it full-time over Photoshop?

Compare with Sketch, which has taken scores of users from Illustrator.

12
The Limits of Satire (2015) nybooks.com
14 points by Tomte  3 hours ago   6 comments top 3
1
markonthewall 14 minutes ago 0 replies      
>Now that the whole world is my neighbor, my immediate Internet neighbor, do I make any concessions at all, or do I uphold the ancient tradition of satire at all costs? And again, is a culture that takes mortal offense when an image it holds sacred is mocked a second-rate culture that needs to be dragged kicking and screaming into the twenty-first century, my twenty-first-century that is?

The answer to this question is quite simple: France is a nation-state; if you wish to be part of the French nation, you need to fulfill its two most fundamental clauses:

- the desire to adopt a common history, in its glory and its shame

- the desire to live and thrive together: "to have achieved great things and achieve more, together"

France is an assimilationist country, and the French state takes its sovereignty from the Nation, i.e. the People. If you wish to come here, if you wish to be part of it, then you need to uphold its principles and traditions.

And of course, it should be needless to say that no one is forced to come here and that, on the contrary, they are urged to leave as soon as possible if they find out that they have a burning desire to murder journalists and cartoonists when those draw their prophet the wrong way. That is non-negotiable.

I find it remarkable that the "multicultural crowd" fails to adjust their views to stay consistent with their moral principles when switching their frame of reference from a single country to the whole world. If you sincerely believe that cultural minorities have a right to govern themselves and that moral monism is "oppressive", then how come you haven't figured out that the French are a small minority at the world scale, and that their conception of the good life (eudaimonia) not only differs from yours but includes protecting their public space from religious interference.

So, to reply to your question while playing by your rules, dear author:

>And again, is a culture that takes mortal offense when an image it holds sacred is mocked a second-rate culture that needs to be dragged kicking and screaming into the twenty-first century, my twenty-first-century that is?

Mutatis mutandis, this applies exactly to your disdain for the French tradition of satire, which you hold in perspective with your globalised view of the world.

2
madink 35 minutes ago 1 reply      
There is no limit to satire; there is only a limit in the heads of uneducated people. The debate needs to be shut down before it's even opened. You should not have to censor yourself (when doing satire) because you are scared of offending people. Otherwise, any group can claim to be offended by anything, and you give them power over you. What if vegetarians decide it's offensive to eat meat? What if I decide that eggplants are offensive? What if I decide it's offensive to be offended? There is no end to this. Let's limit freedom of speech at the hate-speech mark and allow the rest, legally and in practice.
3
mcguire 1 hour ago 0 replies      
The article has an interesting point about satire, although it loses the plot towards the end.

Allegedly, Voltaire said to Rousseau, "I disagree with everything you say, but I will fight to the death for your right to say it."

You cannot disagree with many of the Hebdo comics, or defend them; there is nothing there to disagree with or defend. The comics are just mocking attacks from one group on another.

13
Using Code Snippets in Chrome Developer Tools alexkras.com
136 points by akras14  12 hours ago   38 comments top 12
1
dccoolgai 10 minutes ago 0 replies      
I have always loved Snippets and the whole Chrome Sources ecosystem in general. In fact, I use Sources as my editor of choice over Sublime, Notepad++, etc. I find the "map to local" feature outdoes anything I use from the other editors. My only gripe is that I wish snippets were stored as text files (or at least that that was an option)... a couple of weeks ago my Chrome profile got corrupted and I lost like a year of snippets.
2
kyriakos 10 hours ago 5 replies      
In the 80s, home computers shipped with a programming language as their OS (BASIC, to be precise) - today the browser dev tools are the closest equivalent. You can use them on any PC with no additional software - I wish this were made more obvious, so everyone could experiment with programming the way those of us who got our first computer experience 25-30 years ago could.
3
acangiano 9 hours ago 0 replies      
Speaking of text editors within browser dev tools, I like Scratchpad quite a bit. https://developer.mozilla.org/en-US/docs/Tools/Scratchpad

It's built-in in Firefox, but can be installed on Chrome as well.

4
twic 5 hours ago 1 reply      
Pretty cool. But surely not a patch on Firefox's built-in JavaScript editor, which has autocomplete and inline documentation:

https://developer.mozilla.org/en-US/docs/Tools/Scratchpad

That said, something missing from both is an easy way to import and export the code.

5
stephenr 2 hours ago 2 replies      
I know a lot of developers basically treat Google as Jeebus returned, and therefore infallible, but this just seems like unnecessary bloat.

What's next, a JS minifier built into Chrome, so 1% of users can use it as an IDE while everyone else just gets more shit they don't need or want?

6
stefanwlb 1 hour ago 0 replies      
Are the snippets saved to the google account as well, so that I can access them anywhere I sync my chrome dev browser?
7
4684499 9 hours ago 1 reply      
Is it possible to run these snippets automatically once the page is loaded? I guess this would raise some security issues if it were possible, but I don't want to use things like Tampermonkey anymore.
8
adamnemecek 8 hours ago 0 replies      
I discovered it back when this came out and pentesting has never been the same.
9
foota 8 hours ago 0 replies      
I was showing this to my coworkers the other day, they thought it was great.
10
mxuribe 2 hours ago 0 replies      
I never knew this; quite handy!!
11
Nickersf 8 hours ago 0 replies      
Love it. Been using for a while now.
12
ben174 10 hours ago 4 replies      
Not sure this is "hidden". Anyone who has spent ten minutes poking around chrome dev tools is likely to have discovered this.
14
A founder's perspective on 4 years with Haskell baatz.io
284 points by mightybyte  17 hours ago   131 comments top 9
1
radicality 15 hours ago 7 replies      
Maybe someone can chime in here - I would love to be working full-time in Haskell, but I'm having trouble figuring out just how much knowledge I need upfront. I know I would be reasonably good at it after 2-3 months of working full-time with some Haskell expert to keep bugging. I try to learn as much Haskell as possible after my normal work, but of course my rate of learning is much slower than if it were my full-time job. Looks to me like a chicken-and-egg problem. Anybody have any tips/knowledge about getting a Haskell job?
2
facorreia 15 hours ago 5 replies      
Interesting, but so many caveats (e.g. writing rough libraries for things like sending emails). I find that Scala offers a better balance by enabling functional programming while still taking advantage of a huge, mature ecosystem including great libraries and tools.
3
akurilin 15 hours ago 1 reply      
Thanks for posting that! We've also been a Haskell-at-scale-in-production company for a few years at Front Row; great to see others pulling off the same successfully. It's a small community, so I'm happy to pool our resources together to keep making this a better ecosystem to build businesses on.
4
thoradam 2 hours ago 2 replies      
> You might later find that something that took you 5 lines is a standard abstraction ready to be re-used.

Such a great point and perhaps one of the biggest challenges as languages allow increasingly reusable and powerful abstractions. I would love to have GHC tell me something like "this piece of code here has a signature that is familiar, you could probably make this fit into {list of abstractions}".

5
Nemant 15 hours ago 2 replies      
> The Better platform gets around half a million learning actions every week and it has been running for well over a year with no downtime or maintenance. We still don't know of any bugs.

I'm guessing people use your learning software during weekdays and working hours (5 days and 8 hours/day). That's 3.4 QPS. How many machines are you running on?

Also, I'm really shocked to hear you've not encountered any bugs. How many humans are using your systems? A rough scale is good enough (100s, 1000s, 10,000s?).

6
rkrzr 16 hours ago 5 replies      
Is there something like a "standard" HTTP client library and also a SQL abstraction library in Haskell nowadays that everyone is using? Similar to what 'requests' and 'SQLAlchemy' are in Python?

I feel those would be the two libraries that I would probably miss the most when switching to Haskell for a project.

7
nxc18 16 hours ago 6 replies      
This is a great article - functional languages like Haskell don't get enough credit in a world where JavaScript's shenanigans are the accepted norm.

Also check out F#, especially if you already have a code base involving .NET in any way. The CLR makes it super easy to write some parts in a functional language and other parts in more traditional OO - after all, the right approach often varies even within projects.

As a personal anecdote, taking the time to learn Haskell or any other functional language makes you a better programmer. The concepts often apply to less 'pure' languages and certainly stretch you to think in new ways.

8
hkjgkjy 16 hours ago 3 replies      
How I wish for a lisp like Clojure with a type system like Haskell...

Hope core.typed will be that!

9
Confusion 7 hours ago 1 reply      
I was really disappointed by Haskell when I wrote the simple dynamic programming solution to the knapsack problem in it. Getting good performance out of it took a lot of time and help from people at #haskell to deal with space leaks.

Ultimately, the functional solution was verbose, harder to understand, and still slower than the more imperative solutions in Clojure (also very hard to get good performance in, BTW, but at least you can easily implement performance-critical stuff in Java) and Ruby.

That experience really turned me off Haskell.
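
For reference, the "simple dynamic programming solution" in question looks something like this in an imperative style (Python here; the item values and weights are invented for the demo) - the in-place array updates in a loop are exactly the kind of thing that, per the comment above, needs careful strictness handling in Haskell to avoid space leaks:

    # Classic 0/1 knapsack DP: best[c] is the max value achievable at capacity c.
    def knapsack(values, weights, capacity):
        best = [0] * (capacity + 1)
        for v, w in zip(values, weights):
            # Walk capacities downward so each item is used at most once.
            for c in range(capacity, w - 1, -1):
                best[c] = max(best[c], best[c - w] + v)
        return best[capacity]

    print(knapsack([60, 100, 120], [10, 20, 30], 50))  # -> 220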

15
Shared Space: Balancing Astronomical Ads, Artificial Lights and Starry Nights 99percentinvisible.org
10 points by misnamed  2 hours ago   2 comments top 2
1
nerdponx 1 hour ago 0 replies      
http://darksky.org

I'm glad the International Dark-Sky Association exists. Someone has to fight for this stuff, and having an organization with a website makes it easier for individuals to help.

2
dingaling 1 hour ago 0 replies      
One of the disincentives against smarter control of street lighting is the near- or even below-cost electricity rates offered by power companies. Since it's so cheap, councils and other authorities leave lights blazing at full intensity and coverage all through the night instead of reducing intensity or shutting-down sections of lighting.

We astronomers in Northern Ireland were excited a couple of years ago when there was the prospect of lights being turned-off for five hours in the wee hours due to budget cuts, but unfortunately a new cheaper power contract was agreed with the suppliers.

16
Uber Can't Force Arbitration Over Pricing Antitrust Claim bloomberg.com
48 points by lnguyen  9 hours ago   10 comments top 3
1
ipsi 3 hours ago 3 replies      
Second paragraph:

> U.S. District Judge Jed Rakoff in Manhattan said Friday that Uber's online user agreement didn't provide Spencer Meyer with sufficient notice of its arbitration policy for it to be binding.

I assumed this meant that if they'd told him a bit further in advance (e.g. 30 days, etc.), it'd be fine, but no. Further down the page:

>In the New York case, Rakoff said that Meyer registered with Uber in October 2014 using a Samsung smartphone. The registration form included the words "By creating an Uber account, you agree to the Terms of Service & Privacy Policy," according to the judge.

> The form didn't require users to click on the "Terms of Service & Privacy Policy" to register. Users who clicked on the link were taken to nine pages of highly legalistic language that no ordinary consumer could be expected to understand, Rakoff said. The arbitration clause, which includes a waiver of the right to sue in court, was at the bottom of Page 7, he said.

> Rakoff ruled the registration process didn't provide sufficient notice to Meyer that he was waiving his right to have his claim heard by a jury in court.

Which seems a lot more interesting.

2
carbocation 2 hours ago 1 reply      
When I enter into a business contract, I have a lawyer review the documents before I sign them. When I download an app or sign up for a web service, I don't - but the legal documents that I'm signing are no less complex in those cases. It's unreasonable to expect a serious understanding of these contracts on the part of the user, and it's unreasonable to expect them to hire a lawyer to be sure they're OK with what they're agreeing to. The costs, multiplied across all customers, would be prohibitive. For mass market services like this, there should be a better way.
3
home_boi 28 minutes ago 0 replies      
I don't know about this one. No one reads the terms of service for simple web services or apps. But I feel that drivers should have read the TOS, because Uber is paying them, and it is common sense to read any and all papers when doing business.
17
Paw - Advanced API tool for Mac paw.cloud
311 points by andyfleming  18 hours ago   108 comments top 45
1
flyosity 15 hours ago 9 replies      
I can't believe some of the comments in this thread. Paw is an advanced developer tool with serious functionality for testing APIs, and many of the most upvoted comments are basically just posting links to similar tools, with substantially less functionality, that are free.

How many of these commenters make thousands of dollars every month as professional software engineers? How many make thousands each week? Paw is an extremely high quality, native Mac app, made by a small team trying to provide for their families by building software for other software engineers like all of us. It's a paid tool but it's probably the most sophisticated API testing tool that exists.

I've been a completely satisfied Paw user for over a year. It's been invaluable to me and I use it constantly. It is worth the money. It is more sophisticated, with a far better user interface, than anything else out there. Use the trial, buy it if you like it, but this pattern of comments saying "check out Product Z, it's like Product Y but free (and less good)" makes me hate coming to HN.

2
aavotins 3 hours ago 1 reply      
Software engineers are an interesting bunch... Reading all the bashing comments about Paw not being free, I can't help but feel a vibe that some people expect to be paid thousands of dollars and not pay a single cent themselves. Where does this entitlement come from?

Good job guys! You made a fantastic product. I will always favor native apps over Electron hacks. I can't count how many times I've CMD+Tabbed to Chrome and hit CMD+W to close a tab, only to see Postman disappear. It disrupts my flow.

One thing though. I'm a developer, I know what I'm doing. Please make JSON Text the default option. Formatted JSON doesn't hurt or threaten me :) I expect to see my responses exactly as they are.

Edit: Unfortunately, I have to edit this comment to give additional feedback about Paw. I'm using the 30-day trial, i.e. I am evaluating the product. Paw is running in the background, and I'm not interacting with it in any way, but it jumps to the foreground just to show me a pop-up window informing me that I have 29 days left of the trial and should upgrade. No, thank you, I installed the app just an hour ago. I'm aware of the 30-day trial. No need to nag about it every 20 minutes; that will not prompt me to upgrade any sooner. If anything, it does quite the opposite.

3
josefdlange 16 hours ago 3 replies      
I love using Paw. I'm no longer in the building phase of my service, but when I was defining my API and testing endpoints, Paw was critical in the process.

My main feature request (and I'm not sure if any competitor does this, so please inform me if it exists somewhere):

While Paw allows me to export code, which is cool and all, it would be very interesting to allow me to compose workflows, like, say, Automator in macOS, including assertions, so that I could essentially compose and export integration tests with Paw. It'd be neat to see some more general API-definition features hit Paw, or maybe a companion app that plays nicely with Paw to do the definition half of things, compatible with Swagger and whatnot.

Clearly not a well-thought-out idea, but I think there's space in Paw's domain for some form of what I'm talking about.

4
jamestnz 14 hours ago 1 reply      
Paw is a great tool, but I can also attest to the excellent support of the developer. I made a mistake when uninstalling it off my old Mac, and found myself unable to activate the serial on my new Mac. Micha replied personally, took me at my word regarding the issue I was having, and immediately added a second seat to my serial for free as the simplest way to guarantee I'd never have the problem again. I appreciated that.

Secondly, there are indeed many similar tools, but I've found that Paw in particular has well-thought-out implementations of a number of useful features (cookies, JSON parsing, auth methods, history) as well as a neat way of managing requests across environments e.g. dev machine vs test server vs production API.

Paw isn't the only tool I use when testing/developing API endpoints, but I find it to be the most featureful while also playing nice as a Mac app, with a decent UI and the expected things like remembering my previous window positions/states the next time I launch it.

5
jameslk 17 hours ago 4 replies      
For those who want something that works really well, with a very nice UI and is free, I would highly recommend Insomnia for Google Chrome (it functions as a standalone app outside of Chrome).

https://chrome.google.com/webstore/detail/insomnia-rest-clie...

6
eddieroger 18 hours ago 1 reply      
I've used Paw for a few years since I started getting inconsistent Postman results thanks to Chrome cookies, and I haven't looked back. It's really a fantastic tool that is part of my daily life as an API guy. The dev, Micha, is super helpful via email and has even incorporated feedback when the app didn't work the way I expected (like sending request body on GET). I would honestly buy it again if updates weren't free.
7
jadengore 17 hours ago 1 reply      
I've owned Paw for a while, and I find it much more user-friendly than Postman. It's not for everyone, sure, but the developer has put in good work and you won't regret a purchase.
8
orliesaurus 1 hour ago 0 replies      
I haven't used Paw in a while, but from what I remember Paw has an excellent UI and UX - it goes above and beyond API developers' expectations, and it also used to integrate with Mashape (not sure if this is still a feature?). Either way, spectacular product!

I also got familiar with Stoplight.io - which I have found to be a little bit harder to use but has a large array of features that I think could inspire a lot of developers to build and grow the API dev community

aaand I don't work for either company!

9
eliangcs 6 hours ago 1 reply      
I'm the creator of HTTP Prompt (https://github.com/eliangcs/http-prompt). HTTP Prompt is HTTPie enhanced with autocomplete and syntax highlighting. This is the first time I hear about Paw, and I wonder how it compares to HTTP Prompt. Would any Paw users like to share?
10
dchuk 18 hours ago 0 replies      
Interesting, this is the first time I've seen a domain with the .cloud extension. I wonder how many other companies are going to follow suit?

Regarding the app, Paw is friggin amazing. I was lucky to get it back when it was much cheaper, but it's definitely still worth the $49 even now. The team stuff is interesting too, as me and my guys share our exports all the time.

11
greggman 11 hours ago 3 replies      
This is an honest question, I'm not trying to be negative but I'm really confused.

It would seem like best practice is for all of this to be part of your automated tests - tests that are checked in to your repo, written in code, and run as part of your CI. How does a native app that is OSX-only fit into that scenario? Or maybe I'm not understanding it?

12
jen20 18 hours ago 1 reply      
Paw is a really great product, I've been using it for a long time now. I _really_ wish they would combine this with traffic capture - similar to Fiddler on Windows. Charles does the trick, but it's a mess to set up for proxying HTTPS traffic.
13
ecaroth 18 hours ago 0 replies      
I purchased Paw about 6 months ago and used it heavily, until recently, when I ran into multiple situations where it didn't properly include custom headers I specified in the request, causing odd errors that I assumed were the fault of the code I was testing (this happened in multiple different languages/projects). I have since started using Postman, but would love to go back to Paw since I appreciated some of its features (such as being able to save an API definition into the GitHub repo for sharing with other devs).

EDIT - didn't realize this post was for Paw 3, which just became available. Installing now and excited to try it!

14
mightykan 14 hours ago 1 reply      
Why do I need to create an account with an email and password to purchase this if it's not a subscription service? To add insult to injury, I have to agree to some Terms of Service that probably indemnifies this company against liability for doing whatever they want with my data. Who comes up with these brilliant ideas?

The Mac App Store is absolutely horrible (lots of bugs, slow, inconsistent, almost unusable, absurd certificate expiration issues that are completely embarrassing for a company like Apple, etc.) and Apple has completely ignored all developers who use it... but this is exactly why I always prefer buying from it. I don't want some nobody developer harvesting my information and selling/renting it off to who knows who, either now or some years down the line when the company folds.

Please have respect for your potential paying customers and drop these kinds of practices. Other than recovering a potentially lost license key, there is absolutely nothing I need from you, including any and all news or updates, after I have purchased the app. Therefore you should only really need some unique value (like the hash of an email address) for that.

15
throwaway745234 17 hours ago 2 replies      
Bought Paw on the Mac App Store a while back. I wonder if this update will come to me as well, or whether I need to repurchase. For now I can't find Paw 3 there at all; perhaps they stopped using the MAS?
16
kodisha 12 hours ago 0 replies      
Best feature of paw (idk if it was intentional)

- fetch huge json

- collapse first level fields

- leave node/subnode of interest open

- change backend

- refresh paw, and see changes way faster, coz it will show only open subtree

really useful for large json responses! (Postman dies on those)

17
sinzone 9 hours ago 0 replies      
Paw is engineered by a top notch team in France. They care about quality and details. We at Mashape have partnered with them in the past and they are super responsive and always focused on the best experience for the developer.
18
developer2 9 hours ago 2 replies      
As someone who just spent 30 minutes setting Paw up for my needs, here is my review. Unfortunately, it is not worth $50. It's not the price point itself; it's the fact that the app is too unpolished to pay any amount. After the first 30 minutes, I won't be continuing - and wouldn't even if it were free.

The application hijacks mouse events for custom widgets that don't function as expected. It takes far too much pointer precision to manage the request list and the groups. Half the time the drag-and-drop glitches so that you are highlighting rows without actually having the item under the cursor. You also cannot drop a group at the end of the list. Similar problem in the environments config window: add a second environment for a variable; the column widths are too short to see the variable's value, so you try to resize the columns and it doesn't work, even though the resize icon appears on hover.

The "JSON" response format is hideous and for some reason the default. The "JSON Text" format is what I want and switch to, but this fact is not remembered and every single new response resets back to the ugly xml-tree-like format.

By the way, trying to click the help icon for dynamic values opens the documentation to a page[1] that doesn't load due to an encoded '#' symbol (%23). Again, a sign of a final product with very little QA, being sold at a fairly premium price for which one expects quality.

The UI and interactions are far from seamless. The constant harassment of a popup trying to get me to upgrade is the last straw. When a user is on trial, you don't interrupt their workflow every few minutes.

[1] https://paw.cloud/docs/dynamic-values/response%23Response_Bo...

19
ianunruh 18 hours ago 3 replies      
How is this different from Postman? It seems very similar, but Postman is free and has other really awesome features (like being able to export tests and run them in the command line).
20
avitzurel 17 hours ago 2 replies      
I've been using Postman for a couple of years now, if I recall correctly. It's an absolute timesaver for me when developing and using an API.

Paw looks really good. I'll definitely give that a try, especially if the team sharing features are good.

21
hugocbp 18 hours ago 0 replies      
Great news! I use Paw all the time while exploring new APIs. It is a great piece of software.

EDIT: Wow! Just saw that the upgrade to v3 is free for v2 users! Nice surprise!

22
harrychenca 9 hours ago 0 replies      
If you already write specs for your API, you will seldom use a GUI to test the API.
23
cwisecarver 17 hours ago 0 replies      
Paw is excellent. I've used it for years. The one feature I hope for in v3 is the ability to import cookies from Safari/Chrome/etc.
24
karlshea 18 hours ago 0 replies      
I've been using Paw 2 for a project in the last couple of months and it's totally amazing. Very excited to give the new version a try!
25
pimlottc 18 hours ago 0 replies      
Looks cool! Probably should specify Web API tool, though; there are plenty of APIs that have nothing to do with HTTP.
26
liveify 18 hours ago 0 replies      
It's not cheap but it is one of my most used dev tools that is not an editor/IDE. Highly recommended.
27
davidhariri 16 hours ago 0 replies      
I've been using Paw for two months now. Much better than Postman IMO. Best to the developers
28
discordance 15 hours ago 1 reply      
Just a heads up to the product owner - the site needs some work for mobile users: https://imgur.com/a/akNqP
29
andyfleming 18 hours ago 0 replies      
The big difference with Paw 3 that is allowing my team to make the switch is team syncing.
30
morgante 12 hours ago 0 replies      
I've used Postman for a while, but I keep running into a strange issue where it somehow shares cookies with Chrome. It's proven very annoying for testing certain APIs (because the cookies override whichever auth headers I'm sending), so I've been looking for an alternative.

Paw definitely looks promising. The pricing seems a little steep though.

31
hkjgkjy 16 hours ago 0 replies      
Looks cool :-). If there are any Emacs hackers looking for a way to make and document API calls, I recommend Restclient-mode[0]. See the Emacs Rocks! episode on Restclient-mode[1].

[0] https://github.com/pashky/restclient.el

[1] http://emacsrocks.com/e15.html

32
aphextron 17 hours ago 0 replies      
This looks amazing and desperately needed. Postman is ok, but to finally have a proper native app with a modern UI is fantastic.
33
andyfleming 18 hours ago 0 replies      
I'm having a conversation with the CEO of Postman on twitter now about ways Postman could improve: https://twitter.com/andyfleming/status/759134340185862144

For now though, I'm super excited about the new Paw app.

34
Walkman 17 hours ago 0 replies      
It even supports SSL client certificates!
35
xenadu02 12 hours ago 0 replies      
I had the pleasure of emailing with Paw's creator when he was thinking of applying to YC and introducing him to the founders of PlanGrid.

I can vouch for Paw; it's an awesome tool.

36
RantyDave 4 hours ago 0 replies      
I have this and it's awesome.
37
enahs-sf 16 hours ago 0 replies      
Looks much better than the curls I am currently pasting to my teammates.
38
tommynicholas 18 hours ago 0 replies      
Paw is amazing
39
CiPHPerCoder 11 hours ago 0 replies      
EDIT: I found more info. They're using RNCryptor.

Hopefully not the C++ bindings! https://github.com/RNCryptor/RNCryptor-cpp/issues/2

40
KuiN 16 hours ago 1 reply      
As a (somewhat stubborn) adherent to httpie | jq, what do I gain from my $50 going to Paw?
41
batbomb 14 hours ago 0 replies      
I was worried for a second somebody brought the Physics Analysis Workstation back to life:

https://en.wikipedia.org/wiki/Physics_Analysis_Workstation

42
tuxracer 16 hours ago 1 reply      
Would love to see built-in HAR import/export
43
a-b 16 hours ago 1 reply      
I'm curious if there is anything in Paw that curl or siege can't do.
44
bjoernw 15 hours ago 0 replies      
JMeter is free and has more features: https://jmeter.apache.org/. Though I agree it might be overkill for simple use cases.
45
bmarkovic 7 hours ago 0 replies      
I understand that ISVs pick Mac because that's where most of the paying customers are, but the OS X monoculture oozing from the Bay Area today is even more tiresome than the Wintel monoculture of yesteryear.

Ask the authors of Macaw what good it did them, or whether Webflow actually ate their lunch.

As far as I am concerned, if a dev tool isn't cross-platform across all three platforms, it didn't need to exist at all. Don't be surprised if it's another "yeah, I remember them" niche thing in a couple of years. The existence of Postman and Insomnia is already a burden for this product; being OSX-only will likely limit its reach to Starbucks-dwelling hipsters from SoCa.

18
The Blog That Disappeared nytimes.com
69 points by jakevoytko  11 hours ago   27 comments top 9
1
bko 2 hours ago 1 reply      
I don't think it's in Google's interest to just indiscriminately delete someone's blog. Google rarely makes decisions on a human level and prefers not to intervene. They are getting a lot of bad press from this that the management would probably prefer to go away. I think the answer lies in this:

> We are aware of this matter, but the specific Terms of Service violations are ones we cannot discuss further due to legal considerations

I think the most likely scenario has to do with cp. IANAL, but I think in many jurisdictions cp has strict liability, meaning that if you are in possession, knowingly or not, you are criminally liable. If the author was hosting or somehow tied in to cp, the authorities would likely force Google to take it down and hand it over for investigation. The case is probably still being investigated so no action has been taken, but if Google handed over the archive, it would likely be liable for dissemination of cp.

This seems a far more reasonable explanation for the secrecy than the standard "Google is evil" argument. The main thing that makes me doubt this explanation is the author's vocal outcry. If he were involved in some nefarious activity, I would imagine he would just disappear into the shadows once he suspected his acts were close to being uncovered.

2
Udo 7 hours ago 4 replies      
It doesn't really matter whether Google was "in the right" or what the backstory of this case is. It's also not a question of bringing this to the attention of the "right" people at Google. This is an issue inherent to the entire system. The problem is that people entrust their work and often their entire public persona to entities that are guaranteed not to act in their interest.

I don't even think this is about access to decentralized alternatives; after all, if you devote years of content to a platform, you can probably manage to go through some technical setup pain. Gmail, Blogger, Youtube, Twitter, Facebook - there are self-hosted alternatives out there to all of them, but the big trade-off is that content creators have to shoot themselves in the foot on discoverability if they opt out of the big platforms.

Either at some point we will collectively wise up and re-decentralize, or we'll go further down the path we're already on, our lives ruled arbitrarily by DRM and ToS. I do recognize that I'm personally part of the problem; it's really hard to opt out.

3
natecavanaugh 2 hours ago 0 replies      
I tend to take it for granted that third party services can and will delete everything you post whenever it suits them, or even just because whoever is running the data management is inept or uninterested in a sane backup policy. But that might just be because I "grew up", digitally speaking, in the era of self-hosted websites and blogs, of FTP and local development pushed remotely. It does seem that the entire system is now designed around always trusting some other service for managing our data, so I wonder if this will continually influence users to trust an illusion, and a far harder to understand mental model of what is "mine", "ours" and "yours". Maybe it'll all course correct itself, with services baking into their models personalized data recovery and management, or at least a more sane model of data ownership.

But I do think it will take more issues like this, instead of just the "death by a thousand cuts" that is happening now, when your spouse wipes out all of your playlists because they didn't know they affected every device, or not knowing which photos have been synced and which haven't because your cloud storage is full.

Sometimes all the magic starts feeling a lot less like Harry Potter and more like Pee Wee's Big Adventure.

4
_Codemonkeyism 7 hours ago 0 replies      
Again, we're in a sad state when people with tools at their fingertips can't see them, and think that if Google doesn't offer a blog service, there is nothing they can do.
5
mxuribe 1 hour ago 0 replies      
Maybe, to sandworm101's point, there was automation occurring here. Google certainly dives into AI... maybe they have an experimental AI back-end service to try and catch things which might be considered "offensive"... and in this case perhaps it failed... but Google does not want to publicize any failure of its AI, to avoid jeopardizing whatever future promotional plans it has in store for such tech. But that of course is kind of a conspiracy theory. I guess/hope the real reason comes out into daylight.
6
paulpauper 3 hours ago 0 replies      
It bears repeating, always have back-ups of everything. Store a version on the 'cloud', on an external hd, and on the hard drive of your computer.

Maybe one of those 'ads' was underage or something.

Google is notoriously Orwellian and Kafkaesque in their TOS, so I think he's out of luck. It would be a nice gesture if they just gave him back his data, though. I don't think that's too much to ask.

7
d--b 7 hours ago 1 reply      
Mmmmh, as long as it's not clear what the violations were, it's useless to discuss whether or not Google was within their rights to shut down the account.

This is just a reminder that public publishing follows rules, and if you want to bend the rules one way or another, this is what may happen to your account...

8
jake-low 7 hours ago 0 replies      
Some discussion here [0] from when the story appeared on Fusion [1] a couple weeks ago.

[0]: https://news.ycombinator.com/item?id=12099757

[1]: http://fusion.net/story/325231/google-deletes-dennis-cooper-...

9
blowski 3 hours ago 1 reply      
Why don't they just give him the content, so he can host it somewhere else?
19
Elsevier - My part in its downfall (2012) gowers.wordpress.com
90 points by LolWolf  12 hours ago   26 comments top 12
1
philjohn 5 hours ago 0 replies      
I experienced the other side of this, as someone working for a library technology company. Getting licenses to index just the metadata of non-open access journals can be extremely costly, even when the journal publishers derive a net benefit from the arrangement (if discovery software surfaces a result from your journal, and the academic clicks through, the library link resolver records it and the figures are used to justify a continued subscription).

It's a massive racket: the source material (academic papers) is free, the editorial staff are not paid, and e-journals have obviated the need for printing and distributing paper copies... Elsevier and their ilk are parasites, feeding on the spoils of academic research that is often publicly funded.

But then, in a "publish or die" mentality that researchers are forced into, publishing in journals with higher impact ratings help them keep their job; unless every single researcher agrees to stop publishing in the paid-for journals and move en-masse to open access this sorry situation will continue unabated.

2
abdullahkhalids 7 hours ago 2 replies      
A better bottom-up strategy is one based on assurance contracts. Some scientists might not be willing to immediately part with Elsevier because of other constraints. But they can publicly declare a promise: if scientists X, Y, Z (perhaps their direct competitors, or the editorial board of a journal) promise not to work with Elsevier, neither will I. Scientists X, Y, Z might then have their own promises. If these promises are recorded publicly, that can start a growing movement where today no one has to do anything, but in a few months or years a lot of people can cut ties with Elsevier at no cost to themselves.
3
paulsutter 30 minutes ago 0 replies      
> Returning to the subject of morality, I dont think it is helpful to accuse Elsevier of immoral behaviour: they are a big business and they want to maximize their profits, as businesses do.

Exactly - don't blame Elsevier the company; blame the people who can end this nonsense:

- The editorial boards of all high-cost-closed-access journals are to blame. Each board should immediately resign as a group, and start a new journal, as did the board of Topology[1].

- All elite universities are completely in the wrong to allow researchers to submit articles to high cost closed access journals. Elite universities have the greatest power to force the editorial boards to act.

- The employees of Elsevier are to blame. Every employee of Elsevier should immediately begin searching for a better job, and resign as soon as possible.

The demise of shitheaded publishers is inevitable. The companies themselves will go kicking and screaming, doing as much damage as they can, as a harvest strategy. Please, people, put an end to this misery.

[1] https://en.wikipedia.org/wiki/Topology_(journal)

4
seanhunter 5 hours ago 0 replies      
Tim Gowers is an amazing mathematician and a person of great integrity, but this is not exactly news given that the boycott has gone on for so long now (see the date on the article). More recently, in March 2016 he founded a new ultra-low-cost journal (Discrete Analysis) which publishes all articles on arXiv. https://gowers.wordpress.com/2016/03/01/discrete-analysis-la...
5
infinite8s 4 hours ago 0 replies      
Note - this is from 2012. An update is published here - https://gowers.wordpress.com/2015/11/10/interesting-times-in...
6
murphysbooks 1 hour ago 1 reply      
The US Government is trying to address access to publicly funded research by requiring that most government agencies have open access plans.

https://duckduckgo.com/?q=public+access+plan&ia=web

This was started government-wide from a memo by John Holdren.

https://www.whitehouse.gov/sites/default/files/microsites/os...

Each agency comes up with their own plan, but they have to make access to the research possible without charge.

Many agencies provide access to submitted manuscripts.

Another good avenue to address access would be to get states to adopt similar policies based on research by state employees.

It is often easier to get states to pass laws than it is to deal at the federal level.

Is anyone interested in writing up some legislation?

7
jpatokal 2 hours ago 0 replies      
This is from 2012, and his then idea of a website listing academics who publicly boycott Elsevier was long ago implemented here: http://thecostofknowledge.com/
8
freyr 4 hours ago 0 replies      
He mentions that the editorial board of Topology resigned to found Journal of Topology. This is counted as a success story, but when I try to read a paper there I get this message:

"You may access this article for 1 day for US$40.00."

Great deal.

9
freyr 4 hours ago 1 reply      
It's tough because academic disciplines are so provincial; each has its own short list of (typically closed) journals that garner respect.

To get people to switch to an upstart open access journal takes momentum out of the gate. You need to convince qualified volunteers to do peer review for a journal they've never heard of and that might vanish tomorrow. You need to convince influential insiders to submit quality papers. And you need to repeat this process for each insular sub-discipline.

10
mavhc 9 hours ago 1 reply      
From 2012
11
1_listerine_pls 1 hour ago 0 replies      
I won't eat at McDonald's.
12
T0T0R0 7 hours ago 3 replies      
It's funny, the whole complaint about the tactic of bundling.

This is something that punk and hard core music has done, pretty much since the beginning. Most punk rock discographies are filled with repetitive overlaps of the same songs, and re-recorded versions, or versions of songs performed at live shows.

For a while, I had just assumed it was part of the whole small-budget, DIY, low-fidelity aspect of the genre. Bands might have tracks on EPs and 7-inches that go out of print and become rarities, or the band might just be disorganized, or have extra space on a disc and pile in a few more tracks, because what the hell?

But after a while, collections become bloated with all kinds of pointless cruft, and you look through some catalogs, and it's pretty obvious that some bands would just get desperate to make sure they have something in the new releases section every six months, to stay visible, and they'd obviously record one new track, bundle it with four or five other tracks, and dump it into their catalog to indicate signs of life.

It's not unlike baseball cards, where doubles are a known factor, and concentrations of rare desirable items are controlled and designed to fuel collectors' habits.

I guess the difference with scientific journals is that the fanbase doesn't have the same sort of emotional investment in their collectables.

20
Obtaining Wildcard SSL Certificates from Comodo via Dangling Markup Injection thehackerblog.com
241 points by pfg  20 hours ago   53 comments top 11
1
0xmohit 20 hours ago 3 replies      
The timeline looks awesome:

 June 4th, 2016 - Emailed security@comodo.com and reached out on Twitter to @Comodo_SSL.
 ...
 July 25th, 2016 - Robin from Comodo confirms a fix has been put in place.
50+ days! Funny that the entity in question is the biggest CA.

That said, one should never click on links and buttons in emails. It's more comforting to be able to copy a link instead.

2
AndyMcConachie 20 hours ago 1 reply      
What's particularly terrible about problems like this is an organization with absolutely no relationship with Comodo is vulnerable. The CA system is so horribly broken.
3
deathanatos 15 hours ago 1 reply      
> Peeking at the above raw email we notice that the HTML is not properly being escaped.

While I don't believe the author is mistaken that he successfully injected HTML into the email, the email snippet quoted in the article is properly escaped, because none is needed, because the email is not HTML:

 Content-Type: text/plain; charset=UTF-8
 Content-Transfer-Encoding: 8bit
 [ snip ]
 Subject: <h1>Injection Test</h1>

 <pre>This order was placed by</pre>
This is correct for plain text as <h1> has no special meaning. That said, this is a multipart/alternative email:

 Content-Type: multipart/alternative; boundary="(AlternativeBoundary)"
"multipart" meaning the email contains multiple parts, "alternative" indicating that they're the same semantic things, but alternate representations of it. We're looking at the plain text representation, when we should be looking at the HTML representation.

(These exist so that email clients that don't support HTML, or are configured to ignore it, can fall back on something.)
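
To make the structure concrete, here's a minimal sketch of building such a message with Python's standard library (the subject and body strings are placeholders of my own, not from the actual Comodo mail); note how only the HTML part needs escaping:

  # Build a two-part multipart/alternative email: a text/plain part
  # (angle brackets are inert there) plus a text/html part (escaping required).
  import html
  from email.mime.multipart import MIMEMultipart
  from email.mime.text import MIMEText

  injected = "<h1>Injection Test</h1>"  # attacker-controlled form value

  msg = MIMEMultipart("alternative")
  msg["Subject"] = "Order confirmation"

  # Plain-text alternative: safe to embed the raw value.
  msg.attach(MIMEText("This order was placed by " + injected, "plain"))

  # HTML alternative: the value must be escaped, or clients will
  # render it as markup.
  msg.attach(MIMEText("<pre>This order was placed by " +
                      html.escape(injected) + "</pre>", "html"))

  print(msg.as_string())  # both parts, separated by a boundary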

4
greglindahl 20 hours ago 2 replies      
To somewhat avoid email-based attacks like this, I asked my email client to show me the text instead of the HTML when both are provided. It's pretty clear to me that not that many people do this... many major websites send me emails where the text version is empty, truncated, contains completely different content, or is a message like "Your email client is misconfigured, it should be showing you the html version".
5
retox 20 hours ago 1 reply      
Incompetence, from top to bottom. If you are in a position to deny them money, you owe it to the internet to divest ASAP.
6
martinald 20 hours ago 1 reply      
This is so bad. A while back I used to think that SSL pinning was over the top. It looks like we as an industry need to move to SSL pinning wholesale, ASAP.
7
hannob 18 hours ago 0 replies      
A very obvious mitigation to this and similar issues seems to be to forbid HTML in domain validation emails. I don't see a reason not to do this, and I totally expect that Comodo is not the only CA having such issues.
8
Fej 17 hours ago 0 replies      
Another day, another CA problem.

I'm almost desensitized to it at this point. Despite the fact that these problems are potentially Web-breaking.

9
strommen 18 hours ago 3 replies      
Wow, this attack was XSS 101.

And it could have been mitigated with anti-XSS 101: rejecting all form posts containing angle-brackets.
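
In case anyone wants it spelled out, here's a minimal sketch of both mitigations in Python (the field names are invented for illustration; the bracket rejection above is the blunt fix, escaping on output is the robust one):

  import html

  def contains_markup(values):
      # Blunt mitigation: refuse any submitted value with angle brackets.
      return any("<" in v or ">" in v for v in values)

  def escape_for_html(value):
      # Robust mitigation: escape on output, so <h1> becomes &lt;h1&gt;.
      return html.escape(value)

  form = {"domain": "example.com", "company": "<h1>Injection Test</h1>"}
  if contains_markup(form.values()):
      print("400 Bad Request: markup characters not allowed")
  else:
      print(escape_for_html(form["company"]))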

10
fred256 15 hours ago 1 reply      
Why does the title specifically mention wildcard SSL certificates?
11
cesis 20 hours ago 2 replies      
Not free. But an attempt to counter LetsEncrypt [0].

This [1] is what Comodo attempted to do a while back.

[0] https://letsencrypt.org/

[1] https://letsencrypt.org/2016/06/23/defending-our-brand.html

21
Building a browser-based test automation server on the Google Cloud Platform github.com
74 points by seleniumbase  12 hours ago   10 comments top 4
1
boulos 10 hours ago 1 reply      
Cool! Why'd you roll your own MySQL instead of using Cloud SQL though? With Cloud SQL 2nd Generation you get a managed database even on an f1-micro for less than $10.
2
manojlds 11 hours ago 2 replies      
I've started using docker and docker-compose for browser automation.

Edit - I blogged about the basics of my setup - https://stacktoheap.com/blog/2016/01/04/running-webdriverio-...

3
loadfocus 5 hours ago 0 replies      
Or you can use https://loadfocus.com for only 10.95 per month, even cheaper and without having to do all the extra work.
4
ridruejo 9 hours ago 1 reply      
Glad to see them using Bitnami images :)
22
What It's Like to Live in the World's Most Polluted City nationalgeographic.com
120 points by kamaal  16 hours ago   60 comments top 12
1
nagarjun 6 hours ago 2 replies      
Bangalore is no different. I hate to sound stereotypical and complain about the utter lack of infrastructure here, but I have no choice. I grew up in Bangalore, and only a decade ago things were a lot better.

The city drew millions of people from across the country in the last few years and simply hasn't kept up. Add to that an incredibly unstable state government and you have a city that can no longer service its residents. There's garbage everywhere, rude people, broken roads (which, when left that way, attract more garbage because no one really cares anymore) and a complete lack of civic responsibility. The monsoons have made it worse! People get stuck in traffic jams for hours. Even in nicer neighborhoods, we lose power for half a day because of the rains.

I'm a proud Indian but I lose complete hope when I observe the current state of Indian cities.

2
yyyuuu 8 hours ago 1 reply      
I have been living in Delhi for around one year now, and as one of the other comments pointed out, this city is filthy and extremely polluted even by Indian standards (Note: I am an Indian and have traveled to and lived in many Indian Cities).

Here are some things that in my opinion make this one of the worst places to live:

- Filth, garbage, lack of sanitation. Most of Delhi is filled with garbage. There is garbage lying everywhere. People throw it out of their cars on roads while driving. People dump plastic bags filled with garbage on nearby streets on a daily basis.

- Pollution. It's monsoon season here in Delhi right now, and for the first time in a year I was able to spot a few stars in the sky last night. The rest of the year, the air is filled with a haze of noxious gases. I have seen vehicles emitting large plumes of white smoke, and no traffic policeman cares to stop them and ask for their pollution certificate. The garbage dumps mentioned by the OP are literally like mountains, with smoke emanating from various parts all the time.

- Lack of space. OK, there is no space in Delhi for one to walk or jog.

- Angry culture. I am sad to say that I have never seen more road rage incidents anywhere else. People have very little patience, and respect for fellow individuals seems to be missing more often than not.

3
sandGorgon 6 hours ago 1 reply      
I'm Indian and I live in Delhi. Yes, it's pretty bad, and it's a consequence of the massive population that exists in Delhi. Having said that... we are trying.

Delhi has much stricter vehicular pollution norms than other Indian cities... and it adopts them first. All public transportation in Delhi runs on compressed natural gas. It was the first city in India to introduce Singapore-style partitioning of vehicular traffic (the odd-even scheme was accepted enthusiastically by the common man at the cost of inconvenience). The Delhi Metro rapid rail is a feat of engineering, having covered the entire city in record time under existing buildings that are hundreds of years old. There is general acceptance of traveling in the metro (where seeing people working on laptops is common).

In general, as opposed to places like Bangalore, the drive to create sustainable civic infrastructure is a commonly held belief and translates into local politics.

It will take time...but we are getting there.

4
refurb 15 hours ago 4 replies      
The article mentions waste water treatment plants, but no sewer system to transport the waste.

Anyone know why?

I've heard people say "too many people", but too many people means a huge labor force that is low cost.

I would think that building a sewer system would be pretty straightforward. Massive countries like China have pretty good wastewater treatment (relatively speaking) in the cities. So do other developing countries.

Why is it such an issue in India?

5
manish_gill 9 hours ago 3 replies      
Having lived in Delhi all my life, I can confidently say that it's really as bad as it looks. There's a huge divide, though. Like, the middle class (to which I belong) and the rich live in relatively cleaner areas, and go to places where there are no piles of trash lying around (and if there are, ignore them). I don't think the city can fix itself as long as it keeps ignoring the problem and making the poor suffer. There is no sense of community here. "It's not my problem" is the prevailing attitude.
6
fareesh 7 hours ago 0 replies      
I've had a lot of difficulty talking to people here about climate change - specifically people from business communities.

There seems to be a prevailing attitude of "We are now in a phase of development as an economy. First world countries had an opportunity to grow without regulating pollution. Regulating us will inhibit our ability to catch up".

For some reason the literal end of the world isn't a strong enough argument for them. When I mention it, I am frequently told that it doesn't matter because no matter what is done, the outcome in terms of climate change is going to be the same at this stage.

7
weeksie 15 hours ago 0 replies      
The pollution there is a nightmare. I spent maybe four days in Delhi and had a cough that followed me for almost a week afterward. Even by Indian standards the place is filthy.
8
Fej 10 hours ago 0 replies      
Puts our problems in perspective.

Not that our problems aren't worth fixing; it's just a stark reminder that some places in the world remain uncivilized.

9
negamax 5 hours ago 1 reply      
All the pictures are from the dumping ground! Guys, it's the official dumping ground for the whole city. Many years ago it was far away, but Delhi kept on growing. The current state is the result of swallowing many villages/farms and neighboring cities for decades. This article takes one part of Delhi and paints the entire city like it.

I am surprised by the other commenters as well.

10
hackaflocka 15 hours ago 1 reply      
Also: limping, injured cows and dogs (with broken legs and bashed-in skulls) hobble along the middle of the roads, red lights usually have no meaning, and there is literally no concept of a stop sign. Life is cheap. The population is excessive. There's a basic breakdown of order. At night, cars with wealthy "gangsters" roam to pick up street children to rape them and then drop them off at some random place. The children often don't know what happened (think Slumdog Millionaire).

Source: from there originally. Been there once in the last 10 years. Saw little had changed. Vowed never to go back.

11
omegaworks 15 hours ago 7 replies      
> "There have been times I've had garbage in my hands and I've had to carry it with me all day, because there are no bins anywhere," he remembered.

Boo hoo. Tokyo has no public trash cans and it is immaculate. The people there just care about the place they live in and don't burn shit to heat their houses.

12
bogomipz 6 hours ago 0 replies      
It's not even just the industrial waste and litter in Delhi, though; the city is streaked with red spit from paan (betel nut) chewers. It is everywhere, and it's disgusting. Then there is also the fact that around half the population doesn't have access to a bathroom on the subcontinent, so people urinate and defecate outside everywhere. You are never far from bodily fluids in Delhi. More people have access to cell phones than bathrooms in India. All this aside, though, it is truly an amazing and fascinating place to visit. It's unlike anywhere else.
23
Looking back on Swift 3 and ahead to Swift 4 swift.org
138 points by dalbin  17 hours ago   47 comments top 12
1
dak1 13 hours ago 1 reply      
> - First class concurrency: Actors, async/await, atomicity, memory model, and related topics. This area is highly desired by everyone, as it will open the door for all sorts of new things on the client, server and more. We plan to start formal discussions about this in Phase 2, but it is unfortunately crystal clear that a new concurrency model won't be done in time for the Swift 4 release. This is simply because it will take more than 12 months to design and build, and we want to make sure to take time to do it right. It also makes sense for the memory ownership model to be better understood before taking this on.

It's great to hear planning and discussion on this is beginning, and that they're being honest upfront about the timeline to see it implemented.

2
jernfrost 5 hours ago 1 reply      
Awesome things are in store for Swift. With a Rust-like memory model as an alternative, plus async/await, I think it will massively broaden the appeal of Swift. I can start to see Swift taking over as the main mainstream language in the future. Java and C# have just accumulated too much cruft; they will be the new C++ languages. You can do anything, but you have to accept a lot of complexity and awkward syntax to do it.
3
DAddYE 11 hours ago 1 reply      
I'm very happy that Apple open sourced Swift.

Chris and his crew are amazing, and the vibrant community definitely helps/challenges them and will bring us things like:

 - concurrency (at least planned)
 - Cyclone/Rust memory model!
 - scripting
 - syntactic sugars
Congrats!

4
spotman 14 hours ago 0 replies      
Wow, quite intriguing and exciting that Swift 4 may have a Rust-inspired memory model!

I can definitely see the use cases for Swift expanding a bit with this in its wings.

5
ksec 12 hours ago 1 reply      
Wouldn't first-class concurrency in a way break the ABI and stdlib as well? (Otherwise it would be second class.)
6
coldcode 3 hours ago 1 reply      
The question I still have is: when will the Swift 3.0 release be available in Xcode and for Linux use? The comment about a Swift 3.x release in spring 2017 can't mean 3.0, or am I missing something?
7
kibwen 14 hours ago 2 replies      

 > For Swift 4, the primary goals are to deliver on the promise of source stability from 3.0 on, and to provide ABI stability for the standard library.
Hm, does this imply that only the stdlib will have the benefit of a stable ABI? IOW, that it won't be possible to distribute compiled artifacts built with different versions of the compiler and expect them to link properly?

8
mark_l_watson 11 hours ago 1 reply      
I just run Ubuntu now but I like the features of Swift.

Does anyone who uses Swift on Linux have any comments, suggestions, use cases, etc.?

9
drivebyops 10 hours ago 2 replies      
Is official Windows support planned?
10
dschiptsov 12 hours ago 2 replies      
Perhaps one should borrow message passing as a standard language idiom - actors and pattern matching on receive from Erlang, the way Akka did - instead of copying those ugly async/await hacks.

Message passing as a core concept of a language is fundamental and necessary (given that interop with ObjC is important), and having that and macros gives the related control structures for free.

11
legulere 7 hours ago 3 replies      
Why does mailing list archival software not include basic CSS? Long lines make it hard to read the text.

You don't even need much CSS: http://bettermotherfuckingwebsite.com

12
alikhan82 14 hours ago 5 replies      
Kind of an incomplete article as there is no mention of the environmental cost. Desalination is not a magic solution. Everything is a trade off. Desalination means pumping salt back into the ocean which just compounds the problem.
24
FTLAB FSG-001 A $30 Geiger Counter for Android and iOS cnx-software.com
42 points by zxv  12 hours ago   17 comments top 6
1
Animats 8 hours ago 1 reply      
Checking food with a gamma radiation detector is not too useful. [1][2] Some "preppers" are into this, and get excited if they count for an hour and get a reading 20% higher than usual. They're probably seeing ordinary variation in background radiation, which varies over the course of a day.

A tester called LANFOS has been developed in Japan to deal with possibly-contaminated food from Fukushima. It's a round, pot-like device with shielding and plastic scintillation detectors, into which a sample can be inserted. This has been tested against other methods and the results agree with standard laboratory tests.[3] That's a practical solution in an area where you really do have to test.

If you're worried about suddenly encountering a big gamma emitter or X-ray beam, get one of these.[4]

[1] http://www.bloomberg.com/news/articles/2011-04-12/geiger-cou...

[2] https://www.princeton.edu/~ota/disk3/1979/7907/790720.PDF

[3] http://www.foodqualitynews.com/R-D/Radioactive-compounds-det...

[4] https://www.amazon.com/NukAlertTM-radiation-detector-keychai...

2
kwikiel 10 hours ago 0 replies      
There is a need to verify how good this device is. What is the measurement error?
3
gravypod 2 hours ago 1 reply      
I'd love to use this as an RNG.
4
wycx 2 hours ago 1 reply      
Any guesses on what the sensor is in this device?

The marketing material says semiconductor sensor, which is a little vague.

5
sandworm101 10 hours ago 1 reply      
This thing can never be "accurate". Proper instantaneous measurements of radiation require more than a good detector. The operator is part of the equation.

Say you have two pieces of fish in front of you. Answering the question "which one is more radioactive" requires more than holding a device over each for a couple of minutes. You have to think of the flux, the surface area of the fish visible to the detector, the orientation of the detector, any background sources, the mass of each piece and, most importantly, the distance between the fish and the detector. That cannot be built into a hand-held consumer product. Absent that, these devices will only scare people.
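
To make the distance point concrete, here's a toy calculation in Python (all numbers invented, and it assumes an ideal isotropic point source, which a piece of fish is not):

  import math

  def observed_cps(source_cps, distance_cm, detector_area_cm2, background_cps):
      # Fraction of emissions hitting the detector falls off with the
      # square of the distance (solid-angle approximation).
      fraction = detector_area_cm2 / (4 * math.pi * distance_cm ** 2)
      return source_cps * fraction + background_cps

  # The same fish measured at 2 cm vs 5 cm, with a little background:
  print(observed_cps(10_000, 2, 1.0, 0.5))  # ~199 counts/sec
  print(observed_cps(10_000, 5, 1.0, 0.5))  # ~32 counts/sec

Hold the detector at slightly different distances and you "measure" a six-fold difference from identical sources.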

Note the pic in the OP showing the detector plugged in via an extension cable. I'd bet that they moved it around until magically its measurement lined up with the other device.

Also, low levels of radiation are nothing to be concerned with. The linear no-threshold model (the direct relationship between radiation and cancer) is no longer considered appropriate when discussing very low levels.

6
techdragon 8 hours ago 2 replies      
I'm actually reverse engineering this and similar cheap sensors as part of creating an iOS app... I'm significantly motivated by how utterly pathetic their companion apps are.
25
We Should Not Accept Scientific Results That Have Not Been Repeated nautil.us
815 points by dnetesn  1 day ago   258 comments top 63
1
misnome 1 day ago 16 replies      
Define repetition.

It's not as simple as that, for all sciences - once again an article on repeatability seems to have focused on medicinal drug research (it's usually that or psychology), and labelled the entire "Scientific community" as 'rampant' with "statistical, technical, and psychological biases".

How about, Physics?

The LHC has only been built once - it is the only accelerator we have that has seen the Higgs boson. The confirmation between ATLAS and CMS could be interpreted as merely internal cross-referencing - it is still using the same acceleration source. But everyone believes the results, and believes that they represent the Higgs. This isn't observed once in the experiment, it is observed many, many times, and very large amounts of scientists time are spent imagining, looking for, and measuring, any possible effect that could cause a distortion or bias to the data. When it costs billions to construct your experiment, sometimes reproducing the exact same thing can be hard.

The same lengths are gone to in order to find alternate explanations or interpretations of the result data. If they don't, they know that some very hard questions are going to be asked - and there will be hard questions asked anyway, especially for extraordinary claims - look at e.g. DAMA/LIBRA which for years has observed what looks like indirect evidence for dark matter, but very few people actually believe it - the results remain unexplained whilst other experiments probe the same regions in different ways.

Repetition is good, of course, but isn't a replacement for good science in the first place.

2
mmierz 1 day ago 6 replies      
I see a lot of people commenting here that there's no incentive to repeat previous research because it's not useful for getting grants, etc. This is kind of true, but I think it misses something important.

At least in life sciences (can't comment on other fields), it's not that scientists don't repeat each other's results. After all, if you're going to invest a significant fraction of your tiny lab budget on a research project, you need to make sure that the basic premise is sound, so it's not uncommon that the first step is to confirm the previous published result before continuing. And if the replication fails, it's obviously not a wise idea to proceed with a project that relies on the prior result. But that work never makes it into a paper.

If the replication succeeds, great! Proceed with the project. But it's time-consuming and expensive to make the reproduction publication worthy, so it will probably get buried in a data supplement if it's published at all.

If the replication fails, it's even more time-consuming and expensive to convincingly demonstrate the negative result. Moreover, the work is being done by an ambitious student or postdoc who is staring down a horrible job market and needs novel results and interesting publications in order to have a future in science. Why would someone like that spend a year attacking the work of an established scientist over an uninteresting and possibly wrong negative result, and getting a crappy paper and an enemy out of it in the end, instead of planning for their own future?

If enough people fail to replicate a result, it becomes "common knowledge" in the field that the result is wrong, and it kind of fades away. But it's not really in anyone's interest to write an explicit rebuttal, so it never happens.

3
undergroundOps 1 day ago 3 replies      
I'm a physician and I've been suggesting this to my colleagues for a few years, only to be met with alienated stares and labeled cynical.

The doctors and doctors-in-training I work with have altruistic motives, but place too much stock in major medical studies. They also frequently apply single-study findings to patient care, even to patients that would've been excluded from that study (saw this a lot with the recent SPRINT blood pressure trial).

And don't even get me started on the pulmonary embolism treatment studies. What a clinical mess that is.

It's frustrating.

4
ythl 1 day ago 1 reply      
> Nowadays there's a certain danger of the same thing happening (not repeating experiments), even in the famous field of physics. I was shocked to hear of an experiment done at the big accelerator at the National Accelerator Laboratory, where a person used deuterium. In order to compare his heavy hydrogen results to what might happen with light hydrogen, he had to use data from someone else's experiment on light hydrogen, which was done on different apparatus. When asked why, he said it was because he couldn't get time on the program (because there's so little time and it's such expensive apparatus) to do the experiment with light hydrogen on this apparatus because there wouldn't be any new result. And so the men in charge of programs at NAL are so anxious for new results, in order to get more money to keep the thing going for public relations purposes, they are destroying--possibly--the value of the experiments themselves, which is the whole purpose of the thing. It is often hard for the experimenters there to complete their work as their scientific integrity demands.

-- Richard Feynman, "Surely You're Joking, Mr. Feynman", pp. 225-226

5
cs702 1 day ago 5 replies      
My first reaction to this headline was "duh." Of course we should hold off on accepting scientific claims (i.e., predictions about the natural world) that to date have been verified only by the same person making those claims!

My next reaction was, "wow, it's a sad state of affairs when a postdoctoral research fellow at Harvard Medical School feels he has to spell this out in a blog post." It implies that even at the prestigious institution in which he works, he is coming across people who treat science like religion.

6
jbb555 1 day ago 3 replies      
We shouldn't "accept" or "reject" results at all.

It's not a binary option. One poor experiment might give us some evidence something is true. A single well reviewed experiment gives us more confidence. Repeating the results similarly does. As does the reputation of the person conducting the experiment and the way in which it was conducted.

It's not a binary thing where we decide something is accepted or rejected, we gather evidence and treat it accordingly.

7
mydpy 1 day ago 1 reply      
I agree in principle. There are a few concerns:

1. How should we receive costly research that took special equipment and lots of time to develop and cultivate? E.g., CERN?

2. A lot of research is published, ignored, and then rediscovered. In this case, we may want to accept the research until it cannot be repeated (i.e., in another journal publication).

3. Reviewers of academic publications probably are not qualified, nor do they have the time, to recreate all scientific research.

4. Isn't the academic system at its core kinda... broken?

8
gnuvince 1 day ago 3 replies      
Maybe conferences should have a "reproducibility" track for that purpose? Also, I don't know about other fields, but I'm pretty sure that in CS, if you just took a paper and tried to reproduce the results, you'd get rejected on the grounds that you offer no original contribution; no original contribution => no publication => no funding.
9
bontoJR 1 day ago 4 replies      
The big issue right now is funding of replicated research: who wants to fund a study to prove someone else was right? Most of these funds are granted based on the potential outcome of the new discovery: potential business, patents, licenses, etc... Not being the first one would probably wipe out most of these benefits, cutting the chances of getting funded down to a small probability...

Now, straight to the point: who's going to pay for the repeated research that verifies the first one?

10
jernfrost 5 hours ago 0 replies      
I don't get why this is not at the top of the agenda for the scientific community and the government. Huge amounts of research money are lost repeating stuff that doesn't work. Huge amounts of money are lost chasing broken science.

I blame this on the neo-liberal ideology: this intense focus on getting money's worth, on tying grants to specific goals, on counting publications, etc. Driving research exclusively on a very narrowly defined money incentive has driven us further into this sort of mess, as have the money-grabbing journals, which have prevented any significant innovation in how science is shared.

I think what science needs is a model closer to that of open source: open projects anybody can contribute to, but where verification happens through personally forged relationships. The Linux kernel's code quality is verified by a hierarchy of people trusting each other and knowing something about each other's quality of work. Work should be shared like Linux source code, in a transparent fashion, and not behind some antiquated paywall.

I don't think the grant system can go away entirely, but perhaps it should be deemphasized, paying instead a higher minimum amount of money to scientists for doing what they want. Fundamental science breakthroughs don't happen because people had a clear money incentive. Neither Einstein, Niels Bohr, Isaac Newton nor Darwin pursued their scientific breakthroughs with an aim of getting rich. Few people become scientists to get rich. Why not try to tap into people's natural desire to discover?

11
guaka 1 day ago 2 replies      
Totally agree. I'd go even further and make free licenses on scientific source and datasets mandatory. Research that is funded by public money should lead to public code and data.
12
nonbel 22 hours ago 1 reply      
Cue all the people justifying their pseudoscientific behavior. If it is too expensive to fund twice, it shouldn't be funded once. If that means the LHC and LIGO wouldn't get done, then we should have only funded one of them. We need to remain skeptical of those results until replicated by a new team. Even one replication is pretty weak...

Independent replications of experiment (and the corresponding independent reports of observations) are a crucial part of the scientific method, no matter how much you wish it wasn't. Nature doesn't care if it is inconvenient for you to discover her secrets, or that it is more difficult for you to hype up your findings to the unsuspecting public.

13
fhood 1 day ago 0 replies      
Many people have mentioned that replicating an experiment can be expensive, but I don't think anybody has really brought up just how expensive this can be.

Not all science is done in a lab. Replicating an experiment is obviously feasible for a short-term psychology experiment, but in earth sciences (oceanography, for instance) it is far less often possible to reproduce an experiment, for the following reasons. N.B. This is all from my personal experience of one field of science.

1.) Cost. If you got funding to take an ice-breaker to Antarctica to "do science", it took several million dollars. It is difficult enough to secure funding for anything these days, never mind prohibitively expensive attempts to reproduce results. (Honestly, any serious research vessel will run costs into the millions, regardless of destination.)

2.) Time. Say you are on a research vessel taking measurements of the Amazon river basin. This is a trip that takes months to years to plan and execute. If you return to duplicate your experiment 2 years later, the ecology of the area you were taking measurements of may have changed completely.

3.) Politics. Earth sciences often require cooperation from foreign entities, many of which are not particularly stable, or which may be engaging in political machinations that run counter to your nationality's presence in the country, or both. Iran and China are two good examples. Both are home to some excellent oceanographers, and both can be very difficult to do science in when your team includes non-Iranian/Chinese nationals.

14
Gatsky 1 day ago 0 replies      
Lots of scientific results are repeated but not published. If it doesn't work then people just move on. The problem is journals. There is no way to publish your attempts to repeat an experiment, unless you put it into another paper.

The other issue, especially in the life sciences, is inadequate statistical input. If someone performs an underpowered, confounded experiment and gets a positive result, and then someone else performs the same underpowered, confounded experiment and gets a negative result, what have we learned except that the experiment is underpowered?

15
unabst 22 hours ago 0 replies      
With science, the profession and the product are distinctly different, and we are failing to hold the profession to the standards of the product. Science, the profession, is political, incentive driven, and circumstantial. Scientists need to get paid. Science, the product, is apolitical, existential, and universal. So those who love and believe in the products of science may wish upon themselves to be these things also. I know I do. Except sometimes it just isn't practical, or even possible.

But repeatability actually matters more professionally. Scientifically speaking, if the science is bad it just won't work when others try to make use of it. All bad science will be identified or corrected as we try and make use of it and convert it into new technology. Technology mandates repeatability. So those scientists who fail to produce repeatable science, regardless of how professionally successful they may be, will inevitably fail to produce any new technology or medicine, and vice versa.

16
cube00 1 day ago 1 reply      
Having wasted time trying to replicate someone else's results who 'lost' their code, I agree! Maybe repeating the experiment should be part of the peer review.
17
doug1001 21 hours ago 0 replies      
"repeated" in this context is not incorrect, but i think "replicated" is perhaps a better choice.

That aside, i think repeatability is a much more useful goal (rather than "has been repeated"). For one thing, meaningful replication must be done by someone else; for another, it's difficult and time consuming; the original investigator has no control over whether and when another in the community chooses to attempt replication of his result. What is within their control is an explanation of the methodology they relied on to produce their scientific result, in sufficient detail to enable efficient repetition by the relevant community. To me that satisfies the competence threshold; good science isn't infallible science, and attempts to replicate it might fail, but some baseline frequency of failure ought to be acceptable.

18
kayhi 1 day ago 0 replies      
In the world of chemistry, biochemistry and microbiology a huge step forward would be for journals to require a complete list of products used. The publication should also include the certification of analysis for each item as they vary over time.

For example, here are two product specifications for a dye called Sirius Red, the first by Sigma-Aldrich[1] and the second by Chem-Impex[2]. The Sigma-Aldrich product contains 25% dye while the Chem-Impex product contains 21% or greater. These two dyes could be quickly assessed with a spectrophotometer in order to determine an equivalency; however, you need both dyes on hand, which doesn't seem like a good use of funding. This also touches on another problem in replication, which is: what is in the other 75%+ of the bottle?

[1] http://www.sigmaaldrich.com/Graphics/COfAInfo/SigmaSAPQM/SPE...

[2] http://www.chemimpex.com/MSDSDoc/22913.pdf

19
Mendenhall 1 day ago 0 replies      
Look at research done on many political hot-button topics. They love results that have not been repeated. I see all sorts of posts, even on HN, that reference such "science" as well. The root problem: people who are pushing an agenda.
20
jmilloy 1 day ago 0 replies      
Obviously I agree that scientific results must be reproducible. But I also realize that it's simply infeasible to repeat the entirety of every study, and much less to also go to the effort to write and peer-review those repeated results.

What I think is overlooked in this discussion is that a lot of confirmation work already happens. Most (all?) scientific results are incremental progress built on a heap of previous work. In the course of normal research, you reproduce existing results as necessary before altering conditions for your own study. If you can't confirm the results, well then perhaps you have a paper (though it can be politically challenging to get it published, and that's a separate problem). But if you do, then you don't waste time publishing that, you get on with the new stuff.

Ultimately, I don't think scientists do accept results in their field that they have not repeated.

21
hudathun 1 day ago 0 replies      
Looking good is, sadly, better rewarded than doing good in many areas of life. It's doubly sad that this affects our body of scientific knowledge. Even claims that are reproduced can suffer from funding bias and confirmation bias. The truth hopefully comes out in the end, but I'm sad for the harm that's caused in the interim.
22
habitue 17 hours ago 0 replies      
A big distinction here is that different fields have different levels of dependence on prior results. In fields like psychology etc, you don't need the previous results to work in order to run your own experiment. In other words, if you cite a well-known paper saying "people seem to work faster near the color red" and your paper runs an experiment to see if they work faster near the color yellow, if the red paper is later unreplicable, it doesn't change the outcome of your experiment in any way.

In contrast, if you are in machine learning and you are extending an existing architecture you are very directly dependent on that original technique being useful. If it doesn't "replicate" the effectiveness of the original paper, you're going to find out quickly. Same for algorithms research. Some other comments here have mentioned life sciences being the same.

So I think there's a qualitative difference between sciences where we understand things in a mostly statistical way (sociology, psychology, medical studies) where the mechanism is unknown (because it's very very complicated), but we use the process of science mechanistically to convince ourselves of effectiveness. e.g. I don't know why this color makes people work faster/ this drug increases rat longevity / complex human interactions adhere to this simple equation, but the p value is right, so we think it's true. Versus sciences where we have a good grasp of the underlying model and that model is backed up by many papers with evidence behind it, and we can make very specific predictions from that model and be confident of correctness.

23
webosdude 22 hours ago 0 replies      
Didn't John Oliver say the same thing a few months ago in his episode on scientific studies? https://www.youtube.com/watch?v=0Rnq1NpHdmw
24
lutorm 12 hours ago 0 replies      
Define reproduced? Do we mean "conduct the same experiment multiple times so we can assess the variance on the outcome"? Or do we mean "conduct the same experiment multiple times to figure out if the first result is a screw-up"?

Those two aren't the same, and I think far too many think that the point is the latter when, imho, it's actually the former. Pure screwups will likely get found out, just like glaring bugs are usually found. It's when your result actually has a huge variance but you're looking at only one (or a few) samples and draw conclusions from it that's insidious, like the fact that it's the bugs that just change the output by a tiny bit that are the hardest to notice.

25
dalke 1 day ago 2 replies      
I disagree.

One part of science is observation. Including observations which cannot be, or at least have not been, repeated. For example, consider a rare event in astronomy which has only been detected once. Is that science? I say it is. But it's surely not repeatable. (Even if something like it is detected in the future, is it really a "repeat"?)

Some experiments are immoral to repeat. For example, in a drug trial you may find that 95% survive with a given treatment, while only 5% survive with the placebo. (Think to the first uses of penicillin as as real-world example.)

Who among you is going to argue that someone else needs to repeat that experiment before we regard it as a proper scientific result?

26
leecarraher 1 day ago 0 replies      
It's not just money that prevents people from repeating experiments, it's recognition.

The general idea is that for research to be accepted it must make some novel, albeit small, impact on the field, acceptable for publication in a peer-reviewed journal or proceedings. Repeating someone else's experiments won't get you that, so in general it won't help you graduate or move you toward a higher position at a university or in your profession, meaning there is very little motivation for researchers to pursue such endeavors.

So instead of just throwing money at the problem, we may need to entirely revamp how we recognize the pursuits of researchers.

27
ramblenode 21 hours ago 1 reply      
I have an alternative proposal: do a study right the first time.

That means:

A) Pre-registering the study design, including the statistical analysis. Otherwise, attaching a big label "Exploratory! Additional confirmation needed!"

B) Properly powering the study. That means gathering a sample large enough that the chances of a false negative aren't just a coin flip.

C) Making the data and analysis (scripts, etc.) publicly available where possible. It's truly astounding that this is not a best practice everywhere.

D) Making the analysis reproducible without black magic. That includes C) as well as a more complete methods section and more automation of the analysis (one can call it automation but I see it more as reproducibility).

Replication of the entire study is great, but it's also inefficient in the case of a perfect replication (the goal). Two identical and independent experiments will have both a higher false negative and false positive rate than a single experiment with twice the sample size. Additionally, it's unclear how to evaluate them in the case of conflicting results (unless one does a proper meta-analysis--but then why not just have a bigger single experiment?).
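
A quick simulation illustrates the false-negative half of that comparison (the effect size, n, and trial count here are made up for illustration):

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  effect, n, trials = 0.3, 50, 10_000

  def significant(sample_size):
      sample = rng.normal(effect, 1.0, size=sample_size)
      return stats.ttest_1samp(sample, 0.0).pvalue < 0.05

  # One study with 2n subjects vs two independent n-subject studies
  # that must both reach significance.
  single = np.mean([significant(2 * n) for _ in range(trials)])
  both = np.mean([significant(n) and significant(n) for _ in range(trials)])
  print(f"one 2n-subject study:            power ~ {single:.2f}")  # ~0.85
  print(f"two n-subject studies, both sig: power ~ {both:.2f}")    # ~0.3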

28
framebit 1 day ago 0 replies      
This problem, like many in modern-day science, can in large part be traced back to unstable funding. On a Maslow-style hierarchy of research lab needs, the need for funding is a lot lower on the scale than the aspiration for scientific purity, just as a human's need for food is lower on the scale than their desire for self-actualization.

If competition for research dollars ceases to be so cutthroat, it will go a long way towards solving this and many other seemingly entrenched cultural problems.

29
yiyus 22 hours ago 0 replies      
> The inconvenient truth is that scientists can achieve fame and advance their careers through accomplishments that do not prioritize the quality of their work

An even more inconvenient truth is that scientists cannot even keep their jobs if they prioritize the quality of their work. The pressure to publish novel results is too strong and it is almost impossible to get any support for confirming previous ones.

30
bshanks 22 hours ago 0 replies      
I agree with the main point of this article but in terms of its analysis and prescriptions I think it gets two things backwards. (1) Most scientists seek fame as a means to the end of getting tenure and funding, not the other way around; if you gave them tenure (and the ability to move their tenure to somewhere else if they wanted to move) and perpetual funding and told them they could choose to be anonymous, I think many would choose that option. (2) Replication is not done/published enough because the incentive to do so (measured in: increase in probability of getting tenure per hour spent) is not high enough, not because people are overly willing to accept unreplicated work.

In order for a lot more replication to get published, what would be needed would be for people who spent their careers replicating others' results (at the expense of not producing any important novel results of their own) to get tenure at top institutions (outcompeting others who had important novel results but not enough published replications).

31
ehnto 1 day ago 1 reply      
We learned the importance of this in high school science and it baffles me that it's not already the case.
32
munificent 23 hours ago 0 replies      
Most disciplines where correctness is important seem to end up having some adversarial component. It is explicitly how the justice system in the US works [1]. Many software companies have separate QA departments that are deliberately kept at a remove from the engineers to encourage some rivalry between them. Security issues are almost always managed in an adversarial way (though here, you could argue that's because it reflects how the system itself is [mis-]used). Markets are intended to allow fair competition between producers to find an optimal price and product for consumers.

Peer review is supposed to do this, but the fact that peer reviewers are often colleagues leads to collusion, whether intended or not.

Maybe we need a separate body of scientists whose sole job, and whose entire prestige, derives from taking down and retracting bad science.

[1]: https://en.wikipedia.org/wiki/Adversarial_system

33
csydas 1 day ago 2 replies      
I think that with the increased visibility of scientific research to the general public, it's less that science needs to stop accepting unrepeated results than that the paper process needs to be updated to reflect the new level of availability, and journal databases need better relationship views between papers and repeated tests.

As an outsider looking in on the scientific process, I'm not sure how applicable my opinions are, but I see these as useful changes.

Basically, in reverse order, my suggestions for science to adopt are as follows:

Papers in databases need fields relating them to reproduction studies, and reproduction needs to become a point of pride in the scientific process; just as there is a lot of pride (and money) in publication, researchers should start to thump their chests about the reproducibility of their work, actively seeking out contemporaries and requesting a reproduction study as part of the publishing process, then updating the record as results come in.

The papers themselves should take a moment (perhaps no more than a paragraph) to include a "for media" section that outlines the dos and don'ts of reporting on the research. For example, cancer research should clearly state acceptable lay-person summaries: do not write "cure for cancer found" or "effective treatment"; instead write "progress made", etc. Basically, put a sucker punch to outlandish headlines and reporting right in the paper itself, and let journalists who want to be sensationalist embarrass themselves.

These seem like two very simple acts that could raise the bar for science a bit.

34
pc2g4d 22 hours ago 0 replies      
Replication isn't enough. It's also necessary to know how many non-replications have occurred but got swept under the rug. It's not the existence of replications that matters; it's the rate of successful replication relative to the number of replication attempts.

So I agree with the title "We Should Not Accept Scientific Results That Have Not Been Repeated". But I would add to it "We Should Not Accept Scientific Results from Studies That Weren't Preregistered". Registration of studies forces negative results to be made public, allowing for the positive result rate / replication rate to be calculated.

Otherwise the existence of a "positive" result is more a function of the trendiness of a research area than it is of the properties of the underlying system being studied.

35
dahart 1 day ago 1 reply      
It's unfortunate that the suggestions at the end don't seem to offer a realistic attack vector.

> First, scientists would need to be incentivized to perform replication studies, through recognition and career advancement. Second, a database of replication studies would need to be curated by the scientific community. Third, mathematical derivations of replication-based metrics would need to be developed and tested. Fourth, the new metrics would need to be integrated into the scientific process without disrupting its flow.

Yes, absolutely those things need to happen, but the problem is how to get this funded, how to get people to not see reproducing results as career suicide, right? Items 2-4 will fall out as soon as item #1 happens.

How do we make item #1 happen? What things could be done to make reproducing results actually an attractive activity to scientists?

36
typhonic 1 day ago 0 replies      
I've always been amazed by how widely the Stanford Prison Experiment results are accepted when a) the experiment has not been repeated and b) the experiment didn't even get completed. It was stopped when the researchers had made up their minds about the results.
37
triangleman 22 hours ago 0 replies      
Ironically, one of the reasons Semmelweis's colleagues rejected his "hand-washing" hypothesis was that it did not have a good enough empirical/statistical basis.

http://www.methodquarterly.com/2014/11/handwashing/
https://en.wikipedia.org/wiki/Contemporary_reaction_to_Ignaz...

38
middleman90 1 day ago 1 reply      
I thought this was in the definition of "scientific".
39
macspoofing 22 hours ago 0 replies      
Or at least, the media shouldn't report on results until they have been repeated. This would cut down on the daily "X causes cancer / X doesn't cause cancer" media spam.
40
aficionado 23 hours ago 0 replies      
The solution is easy and it applies to most sciences: all research articles should include a pointer to download the dataset that was used and an annex with the details on how it was collected.
41
VlijmenFileer 1 day ago 0 replies      
Like climate science, right? Let's set up a statistically meaningful set of equivalent Earths and start doing some serious peer review.
42
collyw 1 day ago 0 replies      
Just to play devil's advocate: won't there be a self-correcting mechanism?

If results are genuinely useful, then people will want to build on that work and will have to repeat the science. On the other hand, if it can't be repeated, it will not get further work done on it and will fade into obscurity. Curious what other people's opinions on this are.

43
tudorw 1 day ago 0 replies      
This increasingly includes code that needs to run in the future, and citations within code; see this group working in that field: https://www.force11.org/sites/default/files/shared-documents...
44
twoslide 21 hours ago 0 replies      
One problem is that there is no incentive to replicate. From the PhD onwards, academia creates incentives for original research. Replications, particularly those that confirm existing research, would not benefit the researcher much.
45
chucky_z 23 hours ago 0 replies      
In, for instance, a bioscience lab, I don't believe results should even be accepted unless they're repeated with similar reagents. Some reagents are so specific that they only prove something for that one single thing, which could be unique on this planet.
46
vonnik 23 hours ago 1 reply      
In that case, macro-economics is simply disqualified from being scientific. It's almost impossible to repeat large-scale events, controlling for all variables. Have to say I'm not particularly impressed with the quality of Nautilus's analysis.
47
grashalm 1 day ago 2 replies      
Sometimes in CS, if your research is embedded in a huge ecosystem, it can become quite expensive to reproduce results. I mean proper reproduction, not just rerunning the benchmarks. If you are dealing with complicated stuff, the reproducer might also simply not be able to do the same thing technically.
48
colinprince 21 hours ago 0 replies      
I searched for the word "tenure" in the article, but didn't find it.

The drive to get tenure is a big reason that scientists publish so much; funny that it was not mentioned.

49
mankash666 22 hours ago 0 replies      
What we need is accountable statistics - something that cannot be manipulated.

One idea is to enforce storing or indemnifying a time-stamped database of the raw data on the blockchain.
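
For what it's worth, the fingerprinting step such a scheme would need is simple; anchoring the digest on a chain or timestamping service is the hard part and is left out of this sketch (the filename is hypothetical):

    import hashlib

    def dataset_digest(path: str) -> str:
        # stream the file so arbitrarily large raw datasets fit in memory
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # print(dataset_digest("raw_measurements.csv"))  # hypothetical file

Publishing that digest at collection time would at least make later manipulation of the raw data detectable, whatever the anchoring mechanism.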

50
Zenst 23 hours ago 0 replies      
Independently verified and repeated, I would add.

After all, any scientific test that fails when somebody else repeats it is not science but the domain of magic and religion.

51
jheriko 23 hours ago 0 replies      
wait! we should use the scientific method for science?

it's a radical suggestion. sad that it is, but true... ;)

seriously though... you can't have falsifiable results if you don't constantly try to falsify them. then it just becomes a result, which means the conclusion you can draw is close to nothing.... not quite nothing, but exceptionally close. :)

52
vorotato 22 hours ago 0 replies      
There is no science without repetition.
53
bane 1 day ago 0 replies      
I think a better way of thinking about what we want than "repetition" is "independent corroboration".
54
cm2187 1 day ago 0 replies      
How do you do that in the medical field? Studies are often based on a small number of patients affected by a particular condition.
55
sevenless 22 hours ago 0 replies      
We also should not accept historical claims that have not been repeated :)
56
aminorex 1 day ago 0 replies      
More pragmatically, we should not accept scientific publications and conferences which do not publish negative results and disconfirmations.
57
return0 1 day ago 1 reply      
We now have the tools to do it, and we should be doing it. The fate of scientific findings is not to end up as published papers; they belong to open and continuous scrutiny. And someone should build a GitHub of scientific facts.
58
Eerie 1 day ago 0 replies      
UGH. Scientists are not getting funds to repeat results.
59
VikingCoder 1 day ago 0 replies      
This is wrong-headed in the extreme.

What we should demand is scientific results that have FAILED.

When we see a p=0.05, but we don't know that this SAME EXACT EXPERIMENT has been run 20 times before, we're really screwing ourselves over.

Relevant: https://xkcd.com/882/
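
The scenario behind that xkcd is easy to reproduce (a sketch under assumed numbers: 20 null runs, n = 30 per arm, alpha = 0.05):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    runs, n, trials = 20, 30, 2000

    hits = 0
    for _ in range(trials):
        # all runs are null: both arms drawn from the same distribution
        pvals = [stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
                 for _ in range(runs)]
        hits += min(pvals) < 0.05
    print(f"P(at least one p < 0.05 in {runs} null runs): {hits / trials:.2f}")
    # analytically 1 - 0.95**20, roughly 0.64

So with 20 hidden repeats of a true-null experiment, a "significant" result turns up about two times out of three, which is exactly why the unpublished runs matter.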

60
michaelbuddy 22 hours ago 0 replies      
Agreed, which means at least 50% of social science is disqualified and should not be making it into future publications or becoming part of curricula.
61
tedks 1 day ago 0 replies      
From the authors of "Why Science Needs Metaphysics" this rings a little hollow.

Nautilus is just a slightly less vitriolic version of the Aeon-class anti-science postmodernist blog. Like Aeon, it's garbage.

62
dschiptsov 1 day ago 0 replies      
According to old-school philosophy of science, truth could be discovered only by removing all the nonsense, as a remainder, not by piling up nonsense on top of nonsense out of math and probabilities.

Probabilities, for example, are not applicable to partially observed, guessed and modeled phenomena. It should be a type-error.

As for math - existence of a concept as a mathematical abstraction does not imply its existence outside the realms of so-called collective consciousness. Projecting mathematical concepts onto physical phenomena which could not be observed is a way to create chimeras and to get lost in them.

Read some Hegel to see how it works.)

63
known 1 day ago 0 replies      
Lottery < Statistics < Science
26
The Security of Our Election Systems schneier.com
75 points by yarapavan  6 hours ago   58 comments top 11
1
specialist 1 minute ago 0 replies      
"This means voting machines with voter-verified paper audit trails..."

I've attended "audits" of VVPATs. They merely verify that the printer still worked as expected. Nothing more.

This turf has been hashed and rehashed. The Election Verification Network (academics, administrators, activists) has covered this many times. Auditing electronically mediated elections is impractical and adds little certainty to the results.

No, crypto voting doesn't help.

2
Dowwie 5 hours ago 8 replies      
There's a strong moral argument in favor of the DNC leaks: the American people ought to know that their political process was corrupted by powerful actors within their own institutions.

Schneier said, "This kind of cyberattack targets the very core of our democratic process." In this case, though, the attack targeted actors who had subverted that democratic process.

Were we exposed to fictitious narratives intended to advance pro-Russian agendas? Was truth revealed, without manipulation of content? It seems to be the latter.

Attacking the polls would be unacceptable and deserve retaliation, but it hasn't happened yet. Attacking a whistle blower who has helped to reveal a corrupt political process isn't something I'd agree with. Schneier speculates that foreign influence will continue into the polls -- I guess we had better strengthen the election process and ensure transparency, then.

3
eternalban 3 hours ago 2 replies      
The OP is a shameful blot on Bruce Schneier's record [imo].

For "evidence" we are directed to The New York Times -- a political organization. This sort of evidence certainly suffices for the non-technical set but that HN is accepting this without subjecting the assertion to the rigor that we apply to topics that are not conflated with emotional and psychological triggers is disconcerting.

I would like to pose the question here to my fellow geeks: Do you really think Russians are so incompetent that they would not avail themselves of e.g. Tor to cover their tracks?

[edit: take courage & answer the question instead of downvoting.]

4
brudgers 1 hour ago 0 replies      
A political party's computers are not part of the "critical election infrastructure" unless the party [or parties] has become the state.

Exceptionalism is political crack. States seek to influence elections in other states. Always have, always will. Having a candidate explicitly aligned with the interests of a foreign state is quite common in the Americas. As is having a foreign state explicitly align itself with a candidate.

5
chvid 4 hours ago 2 replies      
Why is this not flag killed like all the other (politically biased) articles on the DNC hack?
6
troiter 3 hours ago 0 replies      
What influence? It's called exposing lies and corruption within the DNC. If anything, I praise Putin for it. Let's not lose perspective of reality thinking about tech security.
7
revelation 5 hours ago 2 replies      
I think you would be quite silly to just outright accept intelligence agency declarations of "it was Russia". As history shows, not only are these people frequently ignorant of technical realities, but political motives at every single layer of these organizations obscure and distort the truth.
8
lr 2 hours ago 0 replies      
For once in my life, I am glad we have the Electoral College.
9
youngButEager 3 hours ago 1 reply      
This is a WHISTLE BLOWER situation.

First identify who is trying to persecute the whistleblower.

There, you've found the party that has committed untoward acts and is now trying to SILENCE THE WHISTLEBLOWER / CHANGE THE SUBJECT.

I was a Bernie supporter. A LOT of people were/are. Not at all happy with the DNC.

Having a whistleblower confirm our idea that the DNC was trying to hurt Bernie --

-- now I know how a parent feels when they finally solve the tragedy of a missing family member.

REALLY CATHARTIC.

And really depressing.

10
DanielBMarkham 5 hours ago 1 reply      
Timely, well-reasoned, and excellent article by Schneier.

But there's a problem.

Elections are managed by state governments by design. This is to prevent centralized political corruption. Having the feds "take the lead" is a little too nebulous to be practical.

What could be done is a certification system for electronic voting that requires a paper audit trail and individualized printed receipts for each voter (encrypted so that others cannot determine which votes were cast).

The big leap is that electronic-only systems are never going to work. For various reasons, I don't think most folks are ready to go there. That is the major problem that must be solved. After that's fixed, the other stuff will at least be easier to address.

11
fncndhdhc 1 hour ago 1 reply      
And yet a Democratic Party IT administrator was shot and killed in DC two weeks ago. The media are trying to attribute it to a mugging, but nothing was found to have been taken from his body.

http://www.nbcwashington.com/news/local/Man-Shot-Killed-in-N...

27
More Wrong Things I Said in Papers scottaaronson.com
89 points by ikeboy  17 hours ago   30 comments top 6
1
startupdiscuss 17 hours ago 4 replies      
You know, these days, we shouldn't have the concept of a blog or paper being published. It should be a collaborative effort.

The initial post should open a comment period where you ask for feedback, and no matter how large your mistake, it should be considered okay.

And slowly, as it gets more eyes on it, it should solidify into a more accepted paper.

2
dharmon 14 hours ago 0 replies      
In my PhD thesis I claimed a function was convex that was not. The embarrassing part was that in the next section I proved that an almost identical function was non-convex.

Phew, I feel better getting that off my chest. Good thing nobody reads dissertations.

3
clifanatic 17 hours ago 1 reply      
I wish I was smart enough to be that wrong.
4
erdevs 15 hours ago 1 reply      
Aaronson is awesome. Intellectual honesty like this is to be admired in any field.
5
schoen 16 hours ago 5 replies      
I wrote a paper a few years ago that was published in a journal and which I felt was subsequently refuted by another paper. (I argued that, for various reasons, a server can never tell whether a client has been virtualized or modified in the absence of hardware-based anti-virtualization features, but https://www.cs.cmu.edu/~jfrankli/hotos07/vmm_detection_hotos... and some recent work on obfuscators make me think I was mostly to entirely wrong.) Particularly since I don't have a blog right now, I keep thinking that I have no way to correct my assertions in public. It feels like it would be useful to be able to do that, just so that if anyone somehow comes across the old paper they could quickly find out why its arguments aren't valid.
6
alecco 16 hours ago 0 replies      
http://retractionwatch.com/ is fun to visit once in a while
29
Bringing HSTS to www.google.com googleblog.com
102 points by ejcx  21 hours ago   62 comments top 4
1
the_mitsuhiko 19 hours ago 1 reply      
Does this mean that "nosslsearch" is now no longer supported?
2
huula 6 hours ago 1 reply      
Just curious: this looks like something nginx already supports, except via redirection. Is this a redirect too, or a protocol change? How will it be reflected in the address bar?
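
For context: HSTS is not a redirect but a response header. Once a browser has seen it over HTTPS, it rewrites future http:// URLs for that host to https:// locally, before any network round trip, and the address bar simply shows https. A quick way to inspect the header (the value shown in the comment is illustrative; the actual max-age may differ):

    import urllib.request

    resp = urllib.request.urlopen("https://www.google.com")
    print(resp.headers.get("Strict-Transport-Security"))
    # prints something like "max-age=86400"; browsers cache this and
    # upgrade later plain-http requests before they touch the network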
3
chinathrow 19 hours ago 2 replies      
Doesn't that mean that wifi captive portals using www.google.com won't be able to take over the connection and redirect to the captive portal?
4
peterwwillis 17 hours ago 1 reply      
HSTS will one day be remembered as the HTTPS version of SMS second-factor auth: a bad hack with good intentions. Sure, it can have some positive effect in the short term, but there are so many ways to subvert it that as its popularity grows, so will the attacks.
30
Intel Programmable Systems Group takes step towards FPGA based system in package newelectronics.co.uk
37 points by zxv  13 hours ago   12 comments top 3
1
Qantourisc 8 hours ago 2 replies      
Hmm, this made me wonder: what would happen if you made a system that supported multiple architectures (ARC, PPC, x86, FPGA, ...) at the same time?
2
unwind 6 hours ago 1 reply      
Site seems down. I tried to find an alternative source but didn't come up with much.
3
ianderf 9 hours ago 1 reply      
This can be a real breakthrough in computing technology. Just in time, as the improvement of desktop and server CPUs has stalled almost completely.