hacker news with inline top comments, 1 May 2016
1
Why now is the best time to study quantum computing (2014) arxiv.org
59 points by Phithagoras  1 hour ago   17 comments top 6
1
onion2k 12 minutes ago 1 reply      
If we know now is the best time to study quantum computing then we can't know where the best place to study it is.
2
dheera 8 minutes ago 0 replies      
Although I have studied the basics of quantum computing, I don't think it will become necessary for computer scientists to know the gory physics details of quantum computers. All they will need to know is that a certain function that is normally O(n^2) now has a magical implementation that is O(n), and another function that is normally exponential is now polynomial.

Quantum computers will likely manifest themselves as co-processors, and you'll have a nice well-abstracted API to access those implementations within traditional languages, e.g.

 #!/usr/bin/env python4
 from quantum import qc
 qc.init(device="/dev/quantum0")
 factors = qc.factor(15)

3
d0m 38 minutes ago 2 replies      
If I were still a student, I would definitely go into quantum computing. So many opportunities for innovation on a very cool subject.

I often wonder what will happen at the 0-day quantum machine where it's not just a few qubits but the real deal. I think anyone in possession of such technology will be able to crack any SSL certificate, and thus gain access to almost anything online. I wonder if criminal organisations aren't secretly investing in such a thing? And, not to be paranoid, but since we're almost certain it will be possible to build them, wouldn't it be prudent to start investing in the defense against such things? What kind of security could we have to counter a quantum computer? Would it only be possible to use quantum computing to defend against quantum computing?

4
ThePhysicist 53 minutes ago 0 replies      
I gave an introductory talk about experimental quantum computing at the 31C3 where I explained how a basic two-qubit quantum processor works and how we can run a very simple quantum algorithm (Grover search) on it. Here's the video link:

https://youtu.be/1PcseLsYZ9Y

5
carterschonwald 53 minutes ago 0 replies      
Another really fascinating connection is with linear / resource logic and quantum computation. A lot of really neat resource logic connectives have interpretations as setting up quantum experiments.
6
mrdrozdov 1 hour ago 3 replies      
How does this compare to Justin Trudeau's explanation? https://m.youtube.com/watch?v=rRmv4uD2RQ4
2
Little-Known Committee Proposes to Grant New Hacking Powers to the Government eff.org
130 points by DiabloD3  4 hours ago   42 comments top 9
1
tptacek 44 minutes ago 4 replies      
EFF's misleading summary aside (EFF's gonna EFF), I have a question about the substantive issue here. Specifically:

How could the FRCP work otherwise? They're in effect saying: if the evidence pertinent to a crime is online, and is either (a) on Tor or some other service where we don't know precisely where it is, or (b) on a botnet or some other environment where it's spread across 100 different jurisdictions, a judge can issue a warrant to obtain that evidence.

Judges can already issue warrants to obtain electronic evidence in, I think, exactly the fashion EFF describes here. The limitation they have today is procedural: they can only issue those warrants in their own court district.

But if you don't know the right court district, or a search would effectively require you to get warrants in every district, procedural rules make it hard to get a warrant today. That seems... stupid. The fact that evidence pertinent to a criminal case is on a Tor hidden service shouldn't make it inaccessible to the courts.

2
___ab___ 3 hours ago 6 replies      
The Judicial Conference of the United States is neither a "little-known committee" nor in any way secretive or shady, unless one is totally ignorant of how the judicial system works. The EFF certainly is not.

The conference is composed of: "the Chief Justice of the United States, the chief judge of each federal court of appeals, a district court judge from each regional judicial circuit, and the chief judge of the United States Court of International Trade." [0]

You can disagree with their decisions, but don't try and imply that they are duplicitous. I expect better of the EFF.

[0] https://en.wikipedia.org/wiki/Judicial_Conference_of_the_Uni...

3
tomku 2 hours ago 1 reply      
As I posted on the other thread about this news, the interpretation that this gives the government "new hacking powers" is just flat-out wrong, and shame on the EFF for using it to spread FUD.

The ONLY thing changed by this proposed rule is the venue in which the government can apply for warrants, expanding it to include any jurisdiction involved in the crime under those two specific circumstances that the EFF blog post mentions.

It does NOT change any of the rules of probable cause involved in getting a warrant. It does NOT grant any kind of "new hacking powers". It does NOT criminalize Tor or allow law enforcement to get a warrant simply because someone used Tor.

There are reasons to not like this rule change based on what it actually means. Misrepresenting things that you don't agree with ultimately hurts your own side because it makes it trivial for people on the other side to dismiss your complaints as ignorant and wrong.

4
maxerickson 3 hours ago 0 replies      
What a great headline to attach to an action signed by the Chief Justice of the Supreme Court and sent to the leaders of both houses of Congress.

Recent discussion of the rule change:

https://news.ycombinator.com/item?id=11594597

A smaller one:

https://news.ycombinator.com/item?id=11604112

5
benevol 3 hours ago 0 replies      
Official version: "grant new hacking powers"

Reality: "Make legal what has been going on illegally for years"

Ok, land of the free.

6
pappyo 2 hours ago 0 replies      
Every US politician will come down on the side of security. If they don't and some awful terrorist plot happens on US soil under their watch, their political career is over. If the argument is between fear and an abstract notion of freedom, fear will always win out.

The only way it changes is if the US does away with career politicians, or fear of the government becomes > fear of terrorists.

7
csoghoian 14 minutes ago 0 replies      
The FBI has been using malware since at least 2003 [1], probably a few years before that. Today, the FBI has a dedicated team, the Remote Operations Unit, based out of Quantico, which does nothing but hack into the computers and mobile phones of targets. According to one former top FBI official, among the team's many technical capabilities is the ability to remotely enable a webcam without the indicator light turning on [2].

Although DOJ has been using malware for nearly fifteen years, it never sought a formal expansion of legal authority from Congress. There has never been a Congressional hearing, nor do DOJ/FBI officials ever talk explicitly about this capability.

The Rule 41 proposal before this advisory committee was the first ever opportunity for civil society groups, including my employer, the ACLU, to weigh in. We, along with several other groups, submitted comments and testified in person.

Our comments can be seen here [3,4]. Incidentally, it was while doing the research for our second comment that I discovered that the FBI had impersonated the Associated Press as part of a malware operation in 2007 [5].

Ultimately, the committee voted to approve the change to the rules requested by DOJ. In doing so, the committee dismissed the criticism from the civil society groups, by saying that we misunderstood the role of the committee, that the committee was not being asked to weigh in on the legality of the use of hacking by law enforcement, and that "[m]uch of the opposition [to the proposed rule change] reflected a misunderstanding of the scope of the proposal...The proposal addresses venue; it does not itself create authority for electronic searches or alter applicable statutory or constitutional requirements."

[1] http://www.nytimes.com/2016/04/14/technology/fbi-tried-to-de...

[2] https://www.washingtonpost.com/business/technology/2013/12/0...

[3] https://www.aclu.org/sites/default/files/assets/aclu_comment...

[4] https://www.aclu.org/files/assets/aclu_comment_on_remote_acc...

[5] http://bigstory.ap.org/article/23f882720e564b918d83abb18cd5d...

8
JustSomeNobody 3 hours ago 0 replies      
December, huh? This being an election year we all know this won't happen.
9
MichaelBurge 2 hours ago 0 replies      
The rules mention that the police must notify the owners of the computers or information, so police wouldn't be secretly hacking into your computers without telling you. That actually would be pretty bad.

The malware one seems entirely reasonable to me. If you have malware, chances are you're aiding criminals by providing them with hardware to commit their crimes with. Why shouldn't a judge issue a search warrant or have your computer seized? The computer is literally part of the crime scene. If you don't like it, don't install malware.

The first one I'm not really sure where it would be used. Is it just, say, "police are allowed to use TOR vulnerabilities to gain access to the servers serving .onion links in the course of their investigation"?

I guess their point is that the changes should've been initiated by Congress, since it's more than procedural. I can buy that, even if the changes themselves seem innocent enough.

3
A Haskell Reading List stephendiehl.com
35 points by rspivak  1 hour ago   2 comments top
1
MelmanGI 41 minutes ago 1 reply      
> Here is a list of papers and writings of what I consider are essential Haskell reading

Most of those papers are really not "essential" for getting things done in Haskell.

If you want to dig deeper into Haskell's type system or Category Theory in general, then yes, there are a lot of good papers in that list.

If you just want to write safe, concise and understandable code, then you are much better off reading the excellent "Haskell Programming from first principles" [1] or the slightly outdated "Real World Haskell" [2].

[1] http://haskellbook.com/
[2] http://book.realworldhaskell.org/

4
To become a good C programmer (2011) fabiensanglard.net
32 points by __john  1 hour ago   2 comments top 2
1
scarmig 2 minutes ago 0 replies      
K&R is often recommended, and it's certainly fun to read and accessible. But I've also heard it's outdated, and doesn't really focus much on modern C software design, mostly because the world knew little about it when K&R was written.

Thoughts?

2
dsfuoi 14 minutes ago 0 replies      
This is the first time I have seen sizeof used like this:

 sizeof( &array[0] )
This looks equal to:

 sizeof( array )
at first glance, which would give the size of the entire array in bytes, but of course the &array[0] expression is really:

 &*( array + 0 )
which simplifies to:

 array + 0 
which is a pointer. And using sizeof on it gives the size of a pointer to int.

---

This is just a really convoluted way to write 2:

 &array[2] - &array[0]
 &*(array+2) - &*(array+0)
 (array+2) - (array+0)
 2 - 0
Again, I have never seen this written in such a fashion.
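
To check both points, here's a minimal, self-contained C sketch (my own illustration; the printed sizes assume a typical 64-bit platform where an int is 4 bytes and a pointer is 8):

 #include <stdio.h>

 int main(void) {
     int array[10];

     /* &array[0] is an int*, so sizeof yields the size of a pointer,
        while sizeof(array) yields the size of the whole array. */
     printf("sizeof(array)     = %zu\n", sizeof(array));     /* 10 * sizeof(int), e.g. 40 */
     printf("sizeof(&array[0]) = %zu\n", sizeof(&array[0])); /* size of an int*, e.g. 8 */

     /* Pointer subtraction counts elements, not bytes, so this prints 2. */
     printf("&array[2] - &array[0] = %td\n", &array[2] - &array[0]);
     return 0;
 }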

5
Linux-insides: Inline assembly github.com
12 points by 0xAX  35 minutes ago   discuss
6
Humble Book Bundle: Hacking Presented by No Starch Press humblebundle.com
38 points by p4bl0  1 hour ago   8 comments top 4
1
blfr 1 hour ago 1 reply      
This is great. I was going to pick up Automate the Boring Stuff with Python after reading http://www.alexkras.com/review-automate-the-boring-stuff-wit...

Any reviews of the other ones from the HN crowd?

2
StavrosK 1 hour ago 1 reply      
A bit offtopic, but does anyone know why Humble Bundle makes the "Bitcoin" button look disabled? I don't understand why they would want to discourage Bitcoin payment, since it's not subject to chargebacks.
3
virmundi 41 minutes ago 1 reply      
Make sure you disable your ad blocker. I temporarily allowed the site, but the checkout system didn't work. After I globally allowed all sites, the modal popped up.
4
p4bl0 1 hour ago 0 replies      
I particularly appreciate the fact that the books come DRM-free :).
7
A cache miss is not a cache miss larshagencpp.github.io
43 points by ingve  2 hours ago   18 comments top 7
1
alain94040 27 minutes ago 1 reply      
If you want to measure cache miss effects, you really should rewrite this code in C, so you know exactly what gets allocated in memory with what layout.

Without more details on data layout, you could be seeing the side-effect of aggressive pointer prefetching, which may only be possible with one layout but not another. Or there could be a fixed stride for some of the data allowing prefetchers to kick in. It's hard to tell and deserves some more experiments to isolate what is going on.

2
halomru 54 minutes ago 2 replies      
That's a nice demonstration. I would even go a bit further with the conclusion/lesson: Cache misses are not evil.

When data isn't in a register and isn't in the L1 cache, it takes a lot of time to fetch it from other caches or about 200 clock cycles if it comes all the way from main memory. We measure that event as a cache miss. But modern x86 processors will go to great lengths to execute the rest of the program while waiting for the data to arrive. A cache miss only really slows the program down if there aren't enough nearby instructions that can be executed while waiting for the missing value to arrive.

You could likely write a program that triggers a cache miss every 30 clock cycles but runs at the same speed as a program without cache misses. In a different program, a cache miss every 30 clock cycles can mean a slowdown by two orders of magnitude. Cache misses are only a useful metric to give us an idea where to look, not to show actual problems.
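
To make that concrete, here's a minimal C sketch of the two extremes (my own illustration, not from the article; the array size and constants are arbitrary). The first loop chases a random cycle, so each miss must complete before the next address is even known; the second loop's addresses are computable up front, so the out-of-order core can keep many misses in flight:

 #include <stdio.h>
 #include <stdlib.h>

 #define N (1 << 24) /* 16M ints, far larger than any cache */

 int main(void) {
     int *next = malloc(N * sizeof *next);
     if (!next) return 1;

     /* Sattolo's algorithm: a single random cycle, so the chase below
        walks the whole array in cache-hostile order. */
     for (int i = 0; i < N; i++) next[i] = i;
     for (int i = N - 1; i > 0; i--) {
         int j = rand() % i;
         int t = next[i]; next[i] = next[j]; next[j] = t;
     }

     /* Dependent chain: misses serialize. */
     long sum1 = 0;
     for (int i = 0, idx = 0; i < N; i++) { idx = next[idx]; sum1 += idx; }

     /* Independent loads: misses overlap. Time the two loops (e.g. with
        clock() or perf) to see the gap despite similar miss counts. */
     long sum2 = 0;
     for (int i = 0; i < N; i++)
         sum2 += next[((unsigned)i * 7919u) & (N - 1u)];

     printf("%ld %ld\n", sum1, sum2); /* keep the work observable */
     free(next);
     return 0;
 }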

3
netheril96 11 minutes ago 0 replies      
Is the comparison really fair? On a 64-bit system, the `std::unique_ptr<int>` takes 8 bytes, while a linked list node takes at least 24 bytes (as it must have a previous and next pointer). The larger node size means fewer values fit in the cache at any time, which may be the real reason for the slowdown.
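
A quick C sketch of that size arithmetic (the node layout is an assumption about how a doubly-linked list of ints looks on an LP64 system):

 #include <stdio.h>

 /* Two 8-byte pointers plus a 4-byte int, padded to pointer alignment. */
 struct node { struct node *prev, *next; int value; };

 int main(void) {
     printf("int*: %zu bytes\n", sizeof(int *));        /* 8 on LP64 */
     printf("node: %zu bytes\n", sizeof(struct node));  /* typically 24 */
     return 0;
 }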
4
ryandrake 40 minutes ago 0 replies      
Sets a good example to MEASURE things rather than just relying on "old wives' tales" of programming that everyone believes because everyone believes them. So many times, I've asked someone why they used some weird algorithm, why they wrote their own linked list, or why they unrolled that loop, and it's always "But it's faster!" Well, did you measure it to actually see if it's faster?

One nit: Please, please, stop perpetuating the use of the non-word "performant." I cringe every time I hear it (now even in person at work!). Using it just makes you sound dumb, and clearly the author is not dumb.

5
onetimePete 24 minutes ago 0 replies      
The real trouble is the inability of chips to identify what Haskell could completely avoid: doing the same algorithm for the same input repeatedly. Hashing the input, skipping the work, and saving the output, that would be an optimization. Also, tools that give programmers a DAO visualization of the OO clutter they impart on their work.

A flag that guarantees side-effect freedom for a set of operations and sub-operations; for the processor, that would be great.

6
userbinator 54 minutes ago 0 replies      
It reminds me of this article, where adding prefetching to attempt to tell the CPU where to go next for a linked-list operation actually had a slightly detrimental effect on performance: https://lwn.net/Articles/444336/
7
webaholic 1 hour ago 3 replies      
I think this is the reasoning behind preferring SOA vs AOS.
8
Introducing the Infinit file system infinit.one
11 points by urza  57 minutes ago   discuss
9
UNSW takes lead in race for non-toxic, thin-film solar cells unsw.edu.au
13 points by ytz  1 hour ago   1 comment top
1
Aelinsaar 32 minutes ago 0 replies      
It's looking like the next decade is going to be absolutely insane for solar power, and it's no wonder that everyone wants a piece of that.
10
10nm versus 7nm semiengineering.com
6 points by jsnell  47 minutes ago   1 comment top
1
aexaey 7 minutes ago 0 replies      
Wait, did I read this right - Intel is going to be the last one to get 10nm? That's quite a reversal of what we've seen previously. Even on 14nm Intel had about a year of head-start, measuring by first device shipped.

> Samsung, for one, plans to ship its 10nm finFET technology by year's end.

> TSMC will move into 10nm production in early 2017

> Intel will move into 10nm production by mid-2017

11
Automatic Image Colorization of Greyscale Images Using CNN waseda.ac.jp
91 points by timlod  6 hours ago   12 comments top 6
1
argonaut 3 hours ago 0 replies      
For context, a different colorization model with about the same results: http://richzhang.github.io/colorization/

Another model previously posted on HN, with (IMO) worse results than these two models: http://tinyclouds.org/colorize/

2
strictnein 1 hour ago 0 replies      
Not sure if the title would be too long, but I'll be honest and say that I thought this was about the news organization for a minute.

CNN = Convolutional Neural Networks in this context.

3
neom 2 hours ago 1 reply      
This is amazing to me. My major was Digital Imaging Technology in 2005, and I remember doing this by hand in Photoshop, wondering if one day there would be a button for it.
4
MelmanGI 1 hour ago 2 replies      
I would really like to see approaches like these applied to movie scenes. Especially how differences in single colorized frames depicting the same scene could be handled.
5
hartator 51 minutes ago 0 replies      
I wonder how companies doing this for money will be impacted. It's amazing what a time saver this will be.
6
astrosi 3 hours ago 2 replies      
Impressive stuff, I especially like the style transfer that can be done by using the global features of one image and the local features of another (Fig. 7)

What I find somewhat annoying is that whilst they show some examples from their validation set, and a couple of examples of the model's failures, they don't appear to show a random selection of cases from their validation set.

12
How France sank Japan's $40B Australian submarine dream reuters.com
72 points by altstar  5 hours ago   35 comments top 10
1
lubos 3 hours ago 0 replies      
The only one who kept the deal with Japan alive was Tony Abbott, because of his personal friendship with the Japanese PM.

Mitsubishi Heavy Industries doesn't have the experience to build submarines overseas, and there were significant risks in leaving someone to "learn on the job".

The USA has been (correctly) pressuring Australia to select a vendor on its merits and not to take geopolitics into account. Geopolitically, Japan is a lot more important to Australia than France. This is the reason the Japanese believed for so long they had the deal in the bag.

With Abbott losing his job, it became easier for France or Germany to hop in, especially when the new prime minister had an agenda of damaging the previous PM. I think Abbott must feel terrible about Japan not getting the contract, but it only shows deals like these should be done at arm's length. Abbott even wrote a personal letter to the Japanese PM to apologize.

2
luch 4 hours ago 3 replies      
Interestingly enough, France used to make the same naïve mistakes the Japanese did: follow the bid "rules" too closely, not lobbying enough, not taking the political landscape into consideration, not attending "social events", etc.

Having worked in the aerospace industry, I've seen France lose some "in the bag" public markets by letting other countries lobby and spin the bid's technical specifications to their advantage.

It seemed they learned the lesson, at least for this bid.

3
tomhoward 4 hours ago 4 replies      
Here's one commentator's take on the matter, from the Australian Financial Review:

Can there possibly be an upside to [Prime Minister] Malcolm Turnbull's decision to squander billions of taxpayers' dollars building 12 French submarines in [the state of] South Australia?

It's hard to think of one.

Of course, there are potentially critical South Australian seats at stake in the coming election and Turnbull no doubt believes it's worth every penny to ensure that the Australian people are not deprived of his greatness.

But surely there were cheaper ways to buy off the South Australians.

With a 30 to 40 per cent local cost premium as a starting point and the history of the Collins class submarine to go by, the federal government could have hired all the Australian Submarine Corporation (ASC) workers to do nothing and the taxpayer would have been billions of dollars better off because at least they wouldn't have been making grossly overpriced submarines.

http://www.afr.com/opinion/columnists/alan-mitchell/turnbull...

4
fma 2 hours ago 0 replies      
Doesn't look like France sank Japan's bid... Japan sank themselves by thinking they had won when clearly the bidding process had just started.
5
antr 4 hours ago 0 replies      
General Douglas MacArthur once said: "The history of failure in war, or in any other human endeavor, can be summed up in two words: 'too late.'" This quote fits perfectly with the story.
6
GunboatDiplomat 2 hours ago 0 replies      
Aren't submarine props usually classified? Meaning you don't just let people take pictures of them?
7
danieltillett 4 hours ago 1 reply      
It is not too hard to understand. If you are inexperienced fighting with the big boys you will lose. As an Australian I hope we get something useful out of the tens of billions we are about to spend.
8
panini_tech 3 hours ago 0 replies      
Sounds like Abbott and Costello Meet Frankenstein :)
9
brooklyndude 1 hour ago 0 replies      
I'm confused, we're supposed to support an arms deal while the world is having a hell of a time keeping it all together? Are we not past this boys-and-testosterone thing? It's not the 12th century.

Let's grow up, kids. Just get to Mars already.

10
democracy 2 hours ago 1 reply      
If you want to be strong in Asia you should work with the country in the region, i.e. Japan. France is a powerful country in this regard, but also not a very reliable one (cancelled Israel and Russia deals, for example).

I am sure there is more to it than just one businessman being smarter than the other one. Or AU leadership being strategically smart.

I believe it is a "divide and conquer" type of thing, one of "the great games" at play.

13
High-Speed Ad Traders Profit by Arbitraging Your Eyeballs bloomberg.com
41 points by r0h1n  5 hours ago   17 comments top 5
1
raimundjoss 1 hour ago 1 reply      
I think the analogy here with Wall St is apples and oranges. On Wall St, the security traded is fungible. Here in the ad world, an ad view could be high quality or low quality. It could be a bot or a human looking to buy something.

I am an engineer who spent 12 years in the ad tech industry. This problem is especially acute on the online video ad side, where dollars are exchanged based on views and CPMs rather than someone buying something online (the latter, being more accountable, is less open to abuse). In my old job, we tested what % of traffic was fraudulent. This traffic is daisy-chained from one ad buyer/seller to the next. Inevitably it will hit someone unscrupulous. In aggregate we found anywhere from 20% to 95% of the traffic we saw for video ads to be fraudulent.

There are some very sophisticated bot farms out there that get around detection, mostly operated out of Eastern Europe and Asia. If you look at the Comscore top 100 video sites, you can always tell who's gaming the system when, from one month to the next, an unknown brand suddenly jumps high in the top 100.

This is the reason Facebook shut down LiveRail, which they spent $450M on. Super high percentage of ad fraud.

2
very_difficult 2 hours ago 0 replies      
I have experience doing the exact thing described in this article, but not at the level of sophistication described. I still made a decent amount of money from it.

All of the exchanges (on the buy and sell side) know it's going on, and they generally don't try to stop it provided that it doesn't become news. A large number of well-known, venture-backed ad tech companies make money from this and they're not incentivized to stop it.

It's rare that the companies actively encourage arbitrage, but some do.

3
ikeboy 2 hours ago 0 replies      
(2014)
4
SixSigma 4 hours ago 3 replies      
If these were concert or sports tickets the people doing it would be called "scalpers".

The world is a curious place, full of contradiction and wonder. We only like the free market for some activities.

5
yummyfajitas 2 hours ago 1 reply      
This article uses very odd language.

"Traders buying and reselling at a higher rate could be distorting the markets and removing the efficiency that we're supposed to see through real-time bidding," he said.

By definition successful arbitrage makes the market more efficient - it brings prices closer together (making the cheaper exchange more expensive and the more expensive one cheaper).

Furthermore, arbitragers in ad exchanges are causing more ads to be sold, providing a valuable service to buyers and sellers.

Suppose there is a sell order on exchange X, a buy order on exchange Y, and these orders are compatible. Unlike public equities markets (which have RegNMS) this order may NOT be routed from X to Y. The result is inventory is wasted or put to a lower value use.

If an arbitrageur notices this he can cause the transaction to occur which would not otherwise occur.
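
A toy numeric version of that last scenario (prices invented purely for illustration):

 #include <stdio.h>

 int main(void) {
     double ask_x = 1.20; /* impression offered on exchange X, $ CPM */
     double bid_y = 1.50; /* compatible buy order on exchange Y, $ CPM */

     /* If the bid on Y exceeds the ask on X, buying on X and reselling
        on Y fills both orders and pockets the spread. */
     if (bid_y > ask_x)
         printf("arbitrage: $%.2f per thousand impressions\n", bid_y - ask_x);
     return 0;
 }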

14
Handcuffed to Uber techcrunch.com
15 points by felarof  1 hour ago   9 comments top 3
1
mkagenius 5 minutes ago 2 replies      
Why can't they take a loan from someone and give 10% interest in a few weeks?
2
adam-a 20 minutes ago 2 replies      
I've not been in the position of buying options before, but is that really how the tax system works? I understand you have to pay tax on income from shares, but if you're buying shares you haven't had any income from them at that point, right? I would have thought you'd just pay tax on any money you received when you sold the shares. Curious to know if that's how it works in the UK as well as the US, anyone have any pointers?
3
zzalpha 51 minutes ago 2 replies      
I'm far more curious about what will happen when these companies start seeing significant portions of their workforce facing expiring option plans...
15
Design of the RISC-V Instruction Set Architecture [pdf] berkeley.edu
102 points by ingve  7 hours ago   30 comments top 4
1
ndesaulniers 6 hours ago 1 reply      
This should be a good read. I aspired to do stuff like this in my undergrad: https://github.com/nickdesaulniers/Omicron/blob/master/Proje...

People like the author, Andrew Waterman, and others like Bunnie Huang who work towards making more of computing open are inspiring. I feel like the last piece of the puzzle is open FPGAs. I'm quite sure FPGAs are critical to the open hardware movement.

I should quit Google and solve this...

2
rwmj 6 hours ago 0 replies      
The best part of this paper is the thorough analysis of why existing ISAs suck (chapter 2).
3
bluetomcat 5 hours ago 5 replies      
I have often wondered, do we really need any general-purpose registers to be visible in the ISA? Aren't they an implementation detail, just like caching?

IMHO, a memory-to-memory architecture would make for a much simpler ISA and allow much easier code generation (no register allocation needed).

4
jkeler 4 hours ago 2 replies      
It defines extensions but no way to detect them. Am I missing something?
16
The Increasing Problem with the Misinformed baekdal.com
54 points by r721  6 hours ago   27 comments top 11
1
whack 1 hour ago 8 replies      
I agree with the article's premise that a misinformed public is dangerous to democracy, but I disagree with the premise that this is a new development. The public has always been misinformed for one simple reason: everyone loves to form opinions on how things ought to be, but few people feel the need to do the research and homework needed to actually form credible opinions on the topic.

This is not a popular thing to say, but democracy would work a lot better if misinformed people were simply not allowed to vote: https://outlookzen.wordpress.com/2014/01/21/democracy-by-jur...

2
coldtea 1 hour ago 1 reply      
>But there is a problem with this graph. By ranking the data like this, we don't take into account the severity of the lies a person makes.

That's the problem? The fact that you take a site at face value regarding its fact checking score, which could be totally BS, partisan, inaccurate, etc., in real life, isn't a problem?

>But look at the above graphs. If PolitiFact was clearly biased, they wouldn't be as wide ranging as this.

That still leaves two ways it can be partisan (or off) wide open:

1) It can be partial to the shared assumptions/ideology of both parties (who are more alike than different compared to how parties are in other western countries).

2) It can be partisan to one party, and still mark its people as "bad liars" -- only for that party it does so for second-tier players, whereas for the other party it does so for the leadership and first-rate players.

This still gives a total like "here, we have reported on 100 lies from Democrats and 100 from Republicans" thus we're impartial, while still hurting one or the other party far more.

3
CM30 2 hours ago 1 reply      
Of course, there are also a few more reasons this sort of 'misinformation' is becoming an issue. Like all the sites online that actively make up news and stories to get clicks on social media, while only saying it's 'satire' somewhere deep in a disclaimer page linked from the footer. It's pretty hard not to be misinformed if the internet is filled with deliberately misleading fake news sites that try and capitalise on people's fears and political opinions to get ad clicks.

Just look how many debunked stories on Snopes come from sites whose sole purpose is to trick people on Facebook.

In addition to this, a lot of journalists need to ask themselves whether people's distrust of them comes from them being treated as 'idiots' by the press that's supposed to represent them.

There's a point here where the article says:

"Only about 20% feel positive towards newspapers today, again following the decline in trust in our politicians."

But how about another reason? It's not just the fact that politicians are seen as almost completely untrustworthy, but the fact that the press is seen as completely out of touch and more interested in supporting the 'status quo' than the population. It's the fact that having an opinion to the left or right of the media gets you labelled as 'crazy' or 'bigoted' or 'horrible'. That supporting Sanders gets you called a 'Bernie Bro'. Etc. The level of contempt a lot of journalists show towards their audience leads to people hating them, which leads them to find people that exploit that hatred for less noble ends (usually extremist groups and publications).

That's something else the press needs to fix, and fast.

4
jccalhoun 1 hour ago 0 replies      
I don't know if it is increasing or not. The paper doesn't really present evidence that misinformation is actually increasing. Is it worse than during the lead-up to the Spanish-American War? https://en.wikipedia.org/wiki/Propaganda_of_the_Spanish%E2%8...
5
dforrestwilson1 48 minutes ago 0 replies      
https://en.wikipedia.org/wiki/Yellow_journalism

Setting the clock back to the 1890s.

6
pdonis 27 minutes ago 0 replies      
If the public is misinformed, who is misinforming them? The press.
7
known 1 hour ago 0 replies      
1. "Media does not spread free opinion; it generates opinion." --Oswald Spengler https://en.wikipedia.org/wiki/The_Decline_of_the_West#Democr...

2. https://en.wikipedia.org/wiki/Information_asymmetry

8
appleflaxen 2 hours ago 1 reply      
Weird that the proposed solution is to make newspapers write better news. What prevents this from being copied over and over again by web-based news outlets, thus destroying their ability to generate revenue?
9
0xabababab 2 hours ago 0 replies      
Does anyone actually read that many statistics outside of an academic paper?
10
miguelrochefort 1 hour ago 0 replies      
If you think the problem with democracy is the public being misinformed, you're part of that group.
11
andrewclunn 1 hour ago 0 replies      
I'm offended, or at least I assume I should be, since I only read the article's headline.
17
If You Can't Smell Him, Can You Love Him? nautil.us
40 points by pmcpinto  4 hours ago   20 comments top 6
1
nkrisc 2 hours ago 4 replies      
I can't find the source, but I read about a study done with women where they smelled sweaty shirts from unrelated men or their brothers or fathers. Across the board they rated the shirts from their blood relatives as smelling worse than the others. The theory was that people with different genetics have different makeups of microbes living on their skin, making them smell different than you.

I like the way my wife smells, and we certainly share no relatives for many, many generations.

2
Geekette 2 hours ago 2 replies      
I think the concept of emotional contagion is a stretch. I simply do not believe that empirically[1], people who smell sweat excreted by frightened people will also feel fear. Disgust as general reaction to all sweat samples is more believable because sweat is generally viewed as an unpleasant smell.

The notion that you are lacking an entire layer of communication when you do not have a sense of smell is also subjective, especially when it comes to those born without it; you can't miss what you don't know. Also, other senses tend to compensate, e.g. how some blind people report a heightened sense of hearing.

[1] Lack of discussion of sample size makes me think it wasn't representative in size or demographics.

3
sixhobbits 2 hours ago 0 replies      
"In experiments, volunteers who sniffed sweat samples from people experiencing certain feelings, such as fear or disgust, felt those emotions themselves. When volunteers smelled the odor of disgusted people, their lips curled just as they do when smelling something disgusting, says Dalton"

Interesting article. I'm curious as to how much reliable research has been done into contagious emotions. It sounds a bit farfetched the way they describe it in the quote above. The idea is talked about in a Live Science article too [0], but also not much detail is given. I haven't read much about it, but it sounds like one of those single paper non-reproduced ideas that's interesting enough on its own for people to keep talking about it. Does anyone who knows more about this than me have other opinions?

[0] http://m.livescience.com/24578-humans-smell-fear.html

4
ondeodiff 2 hours ago 0 replies      
There's been plenty of research on the role the olfactory sense plays in sexual attraction. It's totally not surprising. If you look at the animal kingdom, scent is a key detector for ovulation cycles, etc. Here's an old one from NIH: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1885393/
5
DanBC 3 hours ago 0 replies      
Here's a BBC Radio Four programme about various forms of anosmia. It includes a woman who can't smell her partner, and she misses that, and realises that smell is often mentioned by bereaved people as being a powerful trigger for memories.

http://www.bbc.co.uk/programmes/b076cg3n

(I have no idea what bits of the BBC are geo-blocked. Sorry if this isn't available.)

6
aaron695 43 minutes ago 2 replies      
Another reason why the "working remotely is great" bullshit is not that simple.

Smell is an important part of the workplace.

So are touch and 3D vision.

18
What Happened to Google Maps? justinobeirne.com
359 points by doff  15 hours ago   163 comments top 46
1
panic 9 hours ago 3 replies      
I'm confused by the replies saying that more dense labels would somehow harm usability for driving or navigation. When you're driving, you know where you're going, the map knows where you're going, and it's easy to see the route you need to take. When you're not on a route, denser labels help you to orient yourself with the map and to know when to zoom into a particular location.

That said, I also think the focus on paper maps is misplaced. Old-style road maps had to answer the question, "how do I get there from here?" New-style digital maps don't need to answer that question any more! Questions new-style maps need to answer include:

* I know the name of a place or street; where should I zoom in to see more things around that place or street?

* I need to go to a (gas station / rest stop / hospital); where's the closest one?

* How would I get home from where I am?

* I'm in an unfamiliar place and would like to go "downtown" (where there are restaurants and things to do); where is "downtown"?

* Where is my car right now?

Roads help you to orient yourself with the map, but they aren't as fundamentally important to digital maps as they were to old-style road maps. The visual space of the map might be better spent helping answer questions like these.

2
googleisking 3 hours ago 1 reply      
Google Maps, like several other Google products (like Groups, even Search), has seen a constant degradation in user experience since 2005 or so.

Maybe people forgot, but google maps was /blazing fast/ in the beginning.

Nowadays, it brings my browser down to a crawl even before the images are shown. Maybe people are just stuck with this "google is the best" mentality, but this stopped being universally true years ago.

Use OpenStreetMap. Its data is way superior. It's _your_ data. Cannot stress this enough.

Want a fancy browser? Nokia maps have always been incredibly sleek to use:

 https://maps.here.com/
Heck, even bing maps are /so much faster/. The imagery is also higher quality in several regions.

Google still has the lead with Street View, but for the actual maps I really encourage you to look for alternatives. They've destroyed their interface as far as I'm concerned.

3
x0054 9 hours ago 5 replies      
So I am not crazy after all! The other day I was driving in Palm Springs, and I was using Google Maps. I literally had to zoom in until the road I was on almost completely filled the screen before it would show me the name of the road or the roads around it. They did something to their display algorithm where you now have to zoom almost entirely into an area to see anything about it, which is very inconvenient.
4
elmerland 11 hours ago 8 replies      
I'm surprised the author didn't mention the main difference between paper maps and google maps. That is, google maps is interactive. With a paper map what you see is what you get. You had to cram as much information as it would allow. But this is not the case with google maps. You can zoom in, out, and anywhere in between. You can't compare the two based on the level of information displayed at one fixed zoom level because google maps is 3D whereas traditional paper maps are 2D.
5
mbrock 10 hours ago 2 replies      
There's a broad trend of tech companies disrupting traditional industries and then making rookie mistakes and generally not living up to the standards of the tradition.

Google, hire cartographers. Amazon, hire librarians and typesetters. Spotify, hire musicologists.

6
rrockstar 8 hours ago 0 replies      
The way I use Google Maps a lot is for discovery. I look at the map of New York to see cities around it. When I look at a more zoomed-in level of the city I want to see different boroughs and major roads. More zoomed in, you want to discover shops and businesses. The bareness of the current Google Maps makes it very unsuitable for these functions. For example, even at full zoom level it only shows a few (<5%) of the shops and bars at the city center where I live. If I want to get a feel for a city (where are the most restaurants, where are the shopping centres, etc.) the maps are really bad for that unless I go searching for specific terms in the search bar. But that is the downside of the search bar: you never find something you didn't know you were looking for.
7
a3_nm 4 hours ago 3 replies      
I think that OpenStreetMap has a bit more information that Google Maps at the same zoom level. To compare on New York:

https://www.openstreetmap.org/#map=9/40.5263/-73.8556

https://www.google.fr/maps/@40.5487361,-73.9399427,9z

(Edit: as the child comments correctly point out, this is only the default OSM rendering style.)

8
DanHulton 1 hour ago 2 replies      
Is it just me, or is that site unreadable? Chrome 50, Windows 10, and the font is about the thinnest I've ever seen.
9
rspeer 10 hours ago 5 replies      
Gripe: The font used on this blog is so thin it's almost invisible. I dislike this trend.
10
lucb1e 2 hours ago 0 replies      
I've been noticing this but could never quite put my finger on what it was. This is exactly it.

On google maps I can never find what I want. I thought it was because I've been using OpenStreetMap, and had gotten used to a different display style. Seeing a place once in OSM anywhere in the world, and zooming out from it to continent view, I can almost always find it again later. On Google Maps I always got lost. Now I finally get what the problem is.

11
altitudinous 11 hours ago 5 replies      
Google know more than the author about how Google Maps is used by end users. The author is grading Google Maps based on the number of cities and roads displayed, not how the users use it.

Google provide an alternative mapping product, Google Earth, for satisfying curiosity about the planet. Google maps is primarily a navigation tool. They have very distinct use cases.

12
jccalhoun 1 hour ago 0 replies      
I've had the frustrating experience of looking on google maps and seeing a town I was looking for not be labeled at a zoom level where I could see where it was in relationship to other towns. But I wonder if part of this is about google trying to funnel users into using maps in a particular way. Is this their attempt to get users to search more and scrutinize maps less? Are they trying to make online interactive maps a different experience than paper maps?
13
flyinghamster 4 hours ago 0 replies      
I've noticed the disappearing detail from Google Maps as well, and I find it really annoying - especially when perusing a rural area and having to zoom in to ridiculous levels to see town names.

Another thing I'd like to see is making the "avoid tolls" setting easier to get to. Northern Illinois is toll road central, and I-355 in particular is a huge ripoff when you pay cash. Since I don't need any of the tollways for commuting, I can't justify getting an I-Pass.

14
tomfaulhaber 11 hours ago 0 replies      
Google is a data company and the data available has changed over the last few years. Maps are both a way to present data and a way to collect data.

The new data that Google has comes from Android handsets and from users using Google maps and Waze on Android and non-Android handsets.

This data is all about users in motion. At the scale shown in this article, it's almost exclusively people driving. As a result, it makes a lot more sense to focus on the connections over the places they connect. This becomes clearer when you view the roads as more active entities by including congestion and other real-time data.

This may not be the best presentation for everyone, but it seems to be the presentation that fits best with Google's current mission and capabilities.

15
jasonkester 9 hours ago 0 replies      
The difference between today's Google maps and the authors 1960 paper map is easy to understand when you stop to think about how those maps are used.

The paper map was used to navigate from place to place. That never happens with a google map. Sure, you navigate with them, but by telling Google where you want to go and letting them draw a line on your map. You don't need all that extra information if your phone is navigating you from place to place. You just need something clean that you can glance at to get a sense of where you are.

So that's what they've designed their maps to give you.

16
adwf 4 hours ago 0 replies      
It's the speed/lag that really annoys me nowadays. If I pan across the map or zoom in, I know I'm waiting a good 5-6 seconds before the new tiles will be loaded. And that's on desktop, not mobile. It's so irritating I've just about abandoned google.
17
notatoad 10 hours ago 0 replies      
This criticism reminds me a bit of when people criticize Google search results for a query like "insurance" or "shoes" having too many ads. Searching for vague terms is useless, so they just display ads instead.

At the zoom level the screenshots are taken at, maps are essentially useless. The most important information they can convey is "there's lots of roads here" or "this region is densely populated". The maps aren't optimized for accuracy, they're displaying a summary. The Long Island example really struck me - the old map displayed the primary route only, the new map conveys the fact that there are multiple options. If you're stuck in traffic and you pull up the map, you can see there's another decent route and ask the app to provide you with directions on an alternate route. If you're using the old maps, you'd just see the single primary route highlighted and assume you should stick with the route you're on.

18
agumonkey 5 hours ago 0 replies      
The Google part happened in Maps. It was a mapping product, and now it's a geographical frontend to search. The v2 was all about services on a cute (and sluggish) rendering substrate. It's now usable these days, still way slower than OSM or Bing. I miss the old presentation, but alas...

ps: I recently discovered the 'my timeline' feature. Surprising to say the least.

19
ulkesh 9 hours ago 0 replies      
"What Happened to Google Maps?" or "How to say 'more roads and less labels' in 20 different ways".
20
SeanDav 6 hours ago 0 replies      
Google really won't care about any comments here, they are all about the data. So simply stop using Google maps. Tell your family to stop using Google maps. Tell all your friends to stop using Google maps. Blog about not using Google Maps.

Find alternatives, there are several mentioned in these comments for starters.

When/if Google start seeing a reduction in their map use, only then will they start paying attention.

21
Terretta 38 minutes ago 0 replies      
Last dozen images in the article are mis-distributed. They don't go with the local text.
22
ekr 8 hours ago 1 reply      
I've only ever been using Google Maps for the satellite imagery, street view, and routing. For those things, Google Maps is still great. OpenStreetMap is what I use for orienting, finding cycling paths, trails etc.

So I'm not too bothered by this change. What I don't understand is why hasn't anybody taken the Google Maps routing and used it in OSM apps? Might not be legal, but for similar non-commercial projects it should be fine.

23
iamflimflam1 9 hours ago 3 replies      
I suspect that these changes are due to the switch from bitmap tiles to vectors.
24
cwmma 4 hours ago 0 replies      
I suspect part of the idea is that you will use the search to find cities instead of finding them visually based on their names on the map. The issue of large cities near other large cities being omitted from maps is known as 'the Baltimore Phenomenon':

https://en.wikipedia.org/wiki/Cartographic_generalization#Th...

25
tuukkah 9 hours ago 0 replies      
With open-source tools and services for OSM data from Mapzen and Mapbox, you can make your own map styles: light or heavy in detail, highlighting cities, highways or footpaths.
26
lemiffe 6 hours ago 1 reply      
"Less is just less. And that's certainly the case here."

Not sure I agree, in my opinion most of us use search & destinations nowadays, even in offline mode. The only reason I look at a map is to gauge distance between me and my destination.

27
jrbapna 10 hours ago 2 replies      
While the analysis is fine, the conclusion is almost certainly false. At Google's scale, nearly any iteration made on core products is backed by an immense understanding of their end users and a near unlimited supply of user data.
28
darkhorn 2 hours ago 1 reply      
Satellite imagery for close zoom levels isn't loaded for Turkey! https://productforums.google.com/forum/m/#!topic/maps/Ixm4C6...
29
sztanko 2 hours ago 1 reply      
Optimal placement of labels is an NP-hard problem [1]. Since Google Maps switched to vector rendering and label placement is now done on the (mobile) client, I am not surprised by what happened. [1] https://en.m.wikipedia.org/wiki/Automatic_label_placement
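
For a sense of what a client-side heuristic might look like, here's a minimal greedy sketch (my own illustration, not Google's actual algorithm): sort candidate labels by priority and keep each one only if it doesn't overlap anything already placed.

 #include <stdlib.h>

 /* A candidate label's screen-space box plus a ranking score. */
 typedef struct { float x, y, w, h; int priority; } Label;

 static int overlaps(const Label *a, const Label *b) {
     return a->x < b->x + b->w && b->x < a->x + a->w &&
            a->y < b->y + b->h && b->y < a->y + a->h;
 }

 static int by_priority_desc(const void *a, const void *b) {
     return ((const Label *)b)->priority - ((const Label *)a)->priority;
 }

 /* Greedy placement: O(n^2) worst case, no backtracking, which is why
    lower-priority labels simply vanish instead of being nudged aside. */
 int place_labels(Label *cand, int n, const Label **placed) {
     qsort(cand, n, sizeof *cand, by_priority_desc);
     int kept = 0;
     for (int i = 0; i < n; i++) {
         int ok = 1;
         for (int j = 0; j < kept && ok; j++)
             ok = !overlaps(&cand[i], placed[j]);
         if (ok) placed[kept++] = &cand[i];
     }
     return kept;
 }

 int main(void) {
     Label cand[3] = { {0, 0, 10, 4, 5}, {5, 2, 10, 4, 3}, {20, 0, 8, 4, 4} };
     const Label *placed[3];
     return place_labels(cand, 3, placed) == 2 ? 0 : 1; /* middle label dropped */
 }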
30
AJRF 4 hours ago 0 replies      
I think what Google is doing makes sense given smartphone adoption and the fact that maps are used for fundamentally different functions now.

You don't need a hulking great map with loads of detail at a high level to get from point A to point B; you now just use your smartphone for that.

I assume Google spotted a trend of people searching place names as opposed to picking points between two separate locations.

So how does that change the function of the map?

Well, we no longer need the zoomed-out overload of detail; if we need more information about a place we are visiting, we type in the city name or address, then zoom in close to see the detail we need.

The article kind of glosses over the point that we can interact with those maps now.

31
donretag 9 hours ago 0 replies      
"The primary route across Long Island Interstate 495"

Off-topic, but as a native NYCer, we would never call it that. It's the LIE. I once had a woman ask me in the parking lot of a Walgreens how to get on the 278. I was puzzled for a second, then I realized she was talking about the BQE. Living out in California now, I miss the days of calling highways by name.

32
ZeroGravitas 8 hours ago 0 replies      
The conclusion seems to undermine the whole piece. If the changes were made to help mobile uses then great, I almost always use it on mobile, and apparently so do the majority of users.

He poses the question of which map you'd want when lost. A mobile phone with Google Maps is clearly the right answer.

33
xgbi 2 hours ago 0 replies      
What if they simply don't want people to print the map, and would rather have them use Google Maps to actually perform the guidance?
34
wtbob 6 hours ago 0 replies      
Great article, but man it reads oddly with JavaScript turned off. Huge masses of text repeated, scrolling weirdly broken. It's almost a proof of its own point.
35
hueving 9 hours ago 1 reply      
The comparison to a paper map is stupid. A paper map is severely limited in that you can't zoom in on it so it has to be packed with enough information to hopefully be useful. An interactive map only needs to give you enough context to know what to zoom in on. If I'm using a touch interface that overloads me with information in one screen, it's a bad interface.

It's like claiming that the new york times should display the entire full front page of the newspaper on a mobile device so you can read several articles without scrolling or loading more content because that's what you used to be able to do with the real paper.

36
jalami 7 hours ago 0 replies      
I noticed the same thing a few months ago. I live in a larger city, largest in our county, but we don't exist on the map unless you really zoom. A small town right next-door that's a lot more affluent shows up even when zoomed out to the tristate region. My guess was Google is stepping up their advertisement business for cities.

That or people just find whitespace aesthetically pleasing and Google designers went kind of crazy with it.

37
sz4kerto 9 hours ago 2 replies      
Look at Here maps. It is a much better designed map, from a cartography perspective.
38
praestigiare 2 hours ago 0 replies      
When using a map for navigation at a glance, with a route overlaid on the map, I do not need to know the names of the streets, but I do need to see that there are three cross streets before my turn.
39
PhantomGremlin 9 hours ago 0 replies      
There's much more wrong with Google maps on the desktop than what the article mentions.

My biggest gripe is contrast, rather the lack thereof. Zooming in and out doesn't help. There's a lack of contrast at all levels!

And the algorithm for displaying place names sucks. You'll see certain names at one level, zoom in and they disappear, zoom in some more and they finally reappear.

Paper maps are unquestionably more ergonomic (but much less convenient) than Google maps. But it's not just Google. I find other online maps equally bad. It's quite sad that a paper Rand McNally map is so much better at actually presenting the geography of an area.

Perhaps other posters here are right, it seems like Google maps is designed for point-to-point navigation, nothing more.

40
sabujp 8 hours ago 0 replies      
Comparing google maps to here maps at the same zoom level (on desktop, not on mobile) here maps has more info. It's like google maps has completely given up on the desktop.
41
Pyxl101 12 hours ago 3 replies      
The new maps look clearer and less cluttered and more useful to me. I would guess that the maps were simply designed to follow their primary utility function which is navigation.

Old style printed maps had cities on them, because the map didn't know where you were going! You had to find your city or your location on the map. Now the map knows where you're going, so it can show that place extra-clearly while hiding a lot of detail that's not relevant. Roads are relatively more relevant than cities, since you travel along them to get from one place to another: displaying a road shows the user that they have a primary thoroughfare between locations. You might not care about the name of a city if you're just passing through; and the city that is your destination will be specifically shown.

My guess is that they display only as many cities as needed to help people orient themselves while looking at the map, to understand what they're seeing. More than that is irrelevant to the primary use-case of navigation.

> Google Maps of 2016 has a surplus of roads but not enough cities. It's also out of balance. So what is the ideal? Balance.

The ideal is utility, and the key use-case for Google Maps at that zoom level is driving navigation. The user's going to input their own destination into maps anyway, most of the time, and they'll expect it to appear, so it's no surprise when it does.

Google would have data on this: how many users use Google Maps while driving regularly, multiple times on a trip (at that zoom level), while not having a destination entered (and with no destination, obviously no turn-by-turn directions)? Probably not many. Now imagine overlaying your route with current position and destination on the maps - it's going to be easier to scan the new ones. Edit: Navigation is the primary use-case for a map, and I'd guess usage motivated by that purpose dwarfs the rest by an order of magnitude, and so it's a good default.

42
london888 5 hours ago 1 reply      
Anyone done a comparison with Apple Maps?
43
ableal 8 hours ago 1 reply      
The "Google, where did I leave my keys?" joke is getting closer to feature status, and that's probably what's going on with Google Maps.

Anecdatum: last month I was driving out for a weekend in some rural bungalows a few miles outside a small city (Elvas, Portugal). The address was a bit vague, the place name too common, so much so that I had GPS coords stowed away in a message pic (don't ask ;-).

So, when I pop out Google Maps in the old faithful iPad2 (which happens to be the 3G version, and therefore GPS chipped, good for navigation), and zoom in the area ... amidst thin local roads and lots of blank space, there's the place name and the days we're staying there.

Turns out the Gmail app in that iPad was also used to send or review the emails with the reservations.

Even in my 'desktop' I've been noticing Google Maps marking out city places which seem small compared to other landmarks, but where I often go or mention in emails.

(Thanks, I do know where I left my keys today, I'm good.)

44
powera 9 hours ago 0 replies      
It's probably already been said, but comparing Google Maps to a paper map is stupid.

The paper map has to have lots of cities on it, because there's no other way to find where exactly the specific suburb is if it's not on the map. In Google Maps, you can zoom in, you can search, you can have a link for direction sent to you, ...

The author may claim "less is just less", but apart from "printing a map before knowing where I'm going", I can't think of a situation where the "improved" map would be at all useful.

45
drumttocs8 10 hours ago 0 replies      
Really, just zoom in if you're unhappy with the decluttering setting...
46
london888 5 hours ago 0 replies      
In most use cases the user will have their location and destination marked with a route - so they don't need all the detail at that zoom level.
19
McLaren needs Compaq laptops with bespoke CA cards to maintain the remaining F1s jalopnik.com
191 points by mstats  11 hours ago   112 comments top 21
1
hannob 8 hours ago 3 replies      
This is a glimpse into what we'll see much more of in the IoT future. And while for a super-expensive car someone will take care of it, the same is not necessarily true for your home control system, washing machine, (insert other technology with lifetimes far beyond normal IT cycles), which may rely on an ancient app that doesn't run on any modern system any more.
2
sandworm101 10 hours ago 4 replies      
Don't single out McLaren. Countless much more important systems can only be accessed through ancient tech. The stealth bomber (the B-2) is a product of the same technological era as the F1. Would anyone here be surprised if the news ran a story about them being grounded because that one last laptop capable of talking to their systems finally gave up the DOS ghost?
3
asimuvPR 3 hours ago 2 replies      
I have a friend who owns a 2007 BMW 745. The car is an electronics nightmare and needs BMW's service software to keep it running properly. For example, installing a new battery requires a system reset through the service software. Problem is that the software runs on an old laptop that dual-boots into a custom image of Windows XP or some Linux-based system. The machine needs an RS-232 port, which is not yet rare in itself, but you also need a special OBD-II-to-Ethernet-to-RS-232 adapter. The maintenance software is a nightmare to work with and requires constant restarts to get the network connection to close in order to run some other test. You can get the computer on eBay with the software and additional hardware, but they are getting rare and old.
4
jpalomaki 9 hours ago 2 replies      
From an IT perspective it's nice to see that McLaren still continues to provide support for this old consumer product with only a hundred running installations. They could have just issued an end-of-life statement and asked customers to upgrade to a newer version.
5
koenigdavidmj 10 hours ago 3 replies      
Whodathunkit? "CA card" isn't exactly a Google-friendly phrase, even with the quotes. Which means this article doesn't say much.
6
kirrent 10 hours ago 2 replies      
If it ain't broke, don't fix it. Especially when the computers are relatively cheap compared to the bespoke hardware. My favourite example of this was the school district which used an Amiga to control the district's HVAC systems. Replacing parts occasionally on a 30-year-old computer is far cheaper than an entirely new and unproven system.

http://hackaday.com/2015/07/23/this-little-amiga-still-runs-...

7
pjc50 8 hours ago 1 reply      
A while ago I met someone with a side business in maintaining the Aston Martin Lagonda's high-tech dashboard. It's a set of CRTs driven by proprietary '70s 6V logic, and he claimed to be the only person still in business who knew how it worked.
8
stefano 7 hours ago 1 reply      
> but MSO's team understands that they can only remain the most desirable modern supercars ever made if they work on keeping them functional, drivable and just as fast as they were back in 1992.

JavaScript frameworks could learn something from this. In the frontend web development world, backward compatibility and stability are severely underrated.

9
colordrops 7 hours ago 3 replies      
Sorry for being picky, but is there any context where the word "bespoke" provides more information than "custom"?
10
dredmorbius 6 hours ago 1 reply      
What does "CA" mean in this context?
11
Grazester 2 hours ago 0 replies      
Just about any aftermarket engine control unit would be able to take the place of the system in the F1.
12
hanief 10 hours ago 2 replies      
A lot of banks still run mainly on FORTRAN or COBOL. I guess it's the "if it ain't broke, don't fix it" attitude.
13
throwaway20161 2 hours ago 1 reply      
Our control system at work needs a DOS program to perform diagnostics on the modems controlling communication to our subsea wells. (System made in the mid-90s.)

Now when we service these modems the OEM vendor comes with DOS running in a VM on a normal PC. When you know what we rent this PC for (a few $K per month), I just can't help but laugh. This PC was also not possible to purchase from the OEM.

#oil

14
cisstrd 5 hours ago 0 replies      
Sorry since this is not exactly "on-topic", but I wanted to share my favourite video about the F1, "Ferrari Enzo versus Mclaren F1 - Fifth Gear": https://www.youtube.com/watch?v=2kLlmxUAB5A

I absolutely love this car. Definitely built for a purpose, not much hand-holding, just the sight of this manual switchboard, the position of the seat, the minimalistic dashboard, ... <3

15
manigandham 8 hours ago 2 replies      
Serious question - why isn't this stuff, including today's manufacturing, using easily interchangeable modules with standard interfaces? Why doesn't everything just fit into a single box that can then be replaced with box v2 in a few years, adding more features and performance and removing this maintenance nightmare?
16
EvanAnderson 10 hours ago 1 reply      
It sounds like McLaren didn't get very good documentation from whoever designed the interface and management software (and now the information may be lost). Spending more money on that documentation up front would have likely left them in a lot better situation today.
17
fsaneq2 9 hours ago 0 replies      
So what? I'm sure they could replace them if they felt like it was worth it. Clearly they don't.
18
hoodoof 9 hours ago 0 replies      
The price of failing to keep your technology up to date.
19
marban 6 hours ago 0 replies      
Meanwhile, millions of people receive their paycheque compiled on a 70s Tandem.
20
milesf 9 hours ago 2 replies      
Why can't the system be emulated? Am I missing something?
21
Theodores 10 hours ago 1 reply      
I think we have expectations that hi-tech cars from the likes of McLaren are 'hi-tech' through and through - they are not!!!

The McLaren F1 is a car for the track, the few examples that exist do go out and race. Over a race weekend I imagine the car is taken apart and put back together again in a multitude of ways, e.g. wheels taken off and different ones put on. Note how those wheels are held on with just the one big bolt that has to be tightened massively. That is not 'hi-tech', that is using the appropriate race-grade technology for the job.

I have only stared into the bowels of a McLaren F1 once, but I bet that beyond the gold there are lots of things held together with nuts, bolts and clips that look crude compared to bicycle technology, with bearings that really are cruder than on a bicycle. Yet these parts can be swapped in and out and adjusted easily.

My point being that high-end race cars are not entirely high tech, under the hood there is stuff that is 'bits of bent tin'.

20
Show HN: Writing Streak write fiction every day writingstreak.io
5 points by rayalez  1 hour ago   2 comments top
1
fiatjaf 51 minutes ago 1 reply      
I think this makes sense for a lot of people here, and that people will be able to sell books on the Kindle store writing like this, but it is surely sad that this is what writing has become nowadays.
21
Emacs, Google this github.com
69 points by sndean  9 hours ago   17 comments top 9
1
sjm 4 hours ago 0 replies      
Similarly, I use engine-mode (https://github.com/hrs/engine-mode) with Emacs, which allows you to set up basically any search engine. I use Google (on C-c / g) and GitHub (on C-c / h), but the engine-mode readme explains how to set it up with DuckDuckGo, Stack Overflow, etc.

It's configurable; results can even be displayed in eww, Emacs' built-in text-based web browser.

2
ktamura 7 hours ago 1 reply      
Neat idea. I've been using something like this with Acme from plan9port.

Acme's plumber lets the user customize behavior on a piece of text based on its context or a program that takes it as an argument.

A previous discussion about Acme on HN: https://news.ycombinator.com/item?id=4533156

3
_asummers 1 hour ago 1 reply      
In a similar vein of helpfulness (though the Google one could replace this in certain cases), helm-dash is an excellent plugin. You can bring up the helm-dash menu and search your Dash docs for functions by name, or search for the symbol at point. I have mine configured to open the docs in my browser, pointing to the local docs.

If you use OS X, something is broken in browse-url (I think) which causes anchors not to get opened correctly (annoying), so I added this[0] snippet to allow it. Not robust at all, but you get the idea. The "open -a" gives window focus to Chrome, mimicking the built-in behavior.

4
sooheon 4 hours ago 1 reply      
I use Launchbar (https://www.obdev.at/products/launchbar/index.html) for this. It works system wide, I just select the word or phrase and tap the Option key once. This brings up Launchbar with that text inputted, and then I can send that off to Google, Wikipedia, the system dictionary, YouTube, or anything else. I love it because instead of having different convoluted keybindings for every different type of search (C-c / g, C-c / h, etc.) I only have one, and then dispatch off a fuzzy frecency search that takes <2 letters for the result I want. So tap Alt, Tab, g, Enter searches Google, while Alt, Tab, w, Enter searches Wikipedia, etc. It does tons more with this same paradigm, I use it more than the command line, and can't recommend it enough to people who like to optimize their computer use.
5
toothbrush 5 hours ago 1 reply      
Haha, interesting -- I've had similar functionality that I hacked into my window manager (XMonad) forever ago. I have an arbitrary keystroke that grabs the highlighted (X clipboard) text, raises a running Firefox instance, and plugs it into a StartPage search :). Especially useful when a terminal command barfs an error.
6
merb 1 hour ago 0 replies      
Emacs is a great operating system; it lacks a good editor, though.
7
agumonkey 5 hours ago 0 replies      
Funny, I just installed it two days ago after reading the http://melpa.org/#/selected page, which hints at it.
8
raverbashing 3 hours ago 2 replies      
Do Google's terms of use allow this?
9
_ph_ 5 hours ago 0 replies      
Great! Thanks for this. I had considered writing something like that for ages. Sometimes error messages seem to be created only as a key for a search on Stack Overflow.
22
Prometheus: Monitoring for the next generation of cluster infrastructure coreos.com
66 points by Artemis2  9 hours ago   8 comments top 3
1
jamescun 2 hours ago 0 replies      
I'm still not convinced of "Google Infrastructure for Everyone Else". Google built Google's infrastructure because that is what Google needed. Your CRUD app probably doesn't need that.
2
Dowwie 1 hour ago 0 replies      
Do the comparisons from the Prometheus web site still apply? Examples on the following page pertain to projects with a lot of ongoing activity, such as InfluxDB: https://prometheus.io/docs/introduction/comparison/

The storage comparison really should be updated.

3
ymse 5 hours ago 2 replies      
This is more a job ad than an article. I've wanted to try out Prometheus for a while, but can't figure out how to:

1. make it highly available

2. play nice with firewalls

If I deploy Prometheus outside a NAT, and want to monitor 100 physical machines on the inside with node_exporter, as well as a dozen different services, how do I make these metrics available?

What if I have four identical NATed sites and want them all monitored by the same outside Prometheus instance(s)?

23
The Collatz Conjecture in PostgreSQL github.com
34 points by ktamura  6 hours ago   8 comments top 3
1
ThePhysicist 3 hours ago 0 replies      
CTEs are awesome indeed :)

As another interesting use case, here's a solution to the "eight queens problem" in Postgres:

https://gist.github.com/adewes/5e5397b693eb50e67f07

And here's the accompanying article:

http://andreas-dewes.de/articles/solving-the-eight-queens-pr...

2
programLyrique 3 hours ago 2 replies      
Also (better?) known as the Syracuse problem: I had never heard of the "Collatz conjecture"; in French-speaking countries it's exclusively called the Syracuse problem.

I wonder how names for this conjecture have diverged like that...

24
Letters of Note: 1984 V. Brave New World lettersofnote.com
48 points by js2  8 hours ago   18 comments top 7
1
mattmaroon 2 hours ago 3 replies      
Both ended up far wrong because they overestimated governments' cohesiveness. Most first-world governments are far too inept to accomplish anything as difficult as either book prognosticates. We've seen nothing but a slide toward increasingly ineffective democracies since their era.
2
vmorgulis 57 minutes ago 1 reply      
Zamyatin is the lesser-known precursor of Huxley and Orwell:

https://en.wikipedia.org/wiki/Yevgeny_Zamyatin

3
redsummer 2 hours ago 0 replies      
North Korea, Belarus and perhaps a few others are Orwell. The rest are Huxley. I've noticed that former Orwells, like East Germany and Poland, are thought to be warmer in character, have closer families, and are less atomised than the western consumerist equivalents.
4
justsaysmthng 4 hours ago 1 reply      
"The change will be brought about as a result of a felt need for increased efficiency. " - Huxley

vs. today:

"Work less, enjoy more!"

---

Of course it is much more efficient to keep the subjects well fed and well hypnotized, rather than waste a lot of energy on keeping them subdued through violence. It is harder, but the returns are well worth it.

And that's not only what the "elites" realize (they only gain a slightly higher standard of living), but also what the mythical "AI singularity", which should one day spontaneously arrive, would realize.

Our fears of "the machine" which kills all humans because we are pests are unfounded - a hyper intelligent entity would quickly understand that violence is a very weak tool of control. Pleasure is much more powerful and achieves much more plus a thankful smile.

It is "efficiency", "pleasure", "less work", "more fun" that we should be wary about ... but not me , I like to enjoy myself. And I welcome whatever overlord (AI, aliens) that promises more pleasure and less pain any day.

5
uptownJimmy 4 hours ago 0 replies      
Orwell was the better writer, Huxley the better prognosticator.
6
nomoch 5 hours ago 1 reply      
Orwell talked about this in one of his essays. The man who does not have fascism beating in his breast can't understand the pleasure of smashing someone's face with a boot for all eternity. Sex, money and control have nothing to do with the pleasure of destroying others and their work.
7
devnonymous 7 hours ago 1 reply      
For those who found that interesting - there's also a comic made using the foreword to 'Amusing Ourselves to Death' which also contrasts the two:

http://highexistence.com/amusing-ourselves-to-death-huxley-v...

It's a pity the comic had to be removed from its original site for copyright reasons. IMHO, it could have been considered fair use. Sad.

25
Scouting the Firmware jcjc-dev.com
12 points by paulw0  4 hours ago   1 comment top
1
stevetrewick 1 hour ago 0 replies      
Nice. I have an HG533 that I've just replaced. Exploratory hacking of its config file[0] got me to the ATP CLI via Telnet but I've been meaning to get into some UART pestering for a while now and I've got a few old routers lying around that I could practice on.

Unlikely, but is anyone able to recommend a way of connecting to the pins without having to drill or solder? My visual acuity isn't up to making a solid job of this, and my soldering is frankly dire.

26
US surveillance court rejected ZERO spying requests in 2015 theverge.com
28 points by obi1kenobi  1 hour ago   4 comments top 3
1
awinter-py 51 minutes ago 0 replies      
Right, because the FISA court judges who say no get fired. This is survivorship bias in action.

The problem at hand isn't 'how do you catch criminals on Tor', it's 'how do you engineer the social contract to prevent rebellion'. Secret legal proceedings don't serve the latter goal.

2
morsch 16 minutes ago 0 replies      
Also: "The FBI also reportedly sent 48,642 national security letters in 2015."
3
dsacco 41 minutes ago 1 reply      
Could we change the title to de-capitalize "ZERO"? It sounds like the sky is falling as presently written.
27
Cisco Finds Backdoor Installed on 12M PCs securityweek.com
51 points by based2  3 hours ago   6 comments top 6
1
kefka 20 minutes ago 0 replies      
I'm not sure how to take this.

On a simple review, this seems like garbageware and a nice exploit. But the name PUP gives it away: potentially unwanted programs. We can't say for sure that the user didn't want them.

Now, if the program resists removal at the behest of the user, then yes, it's malware. But I've done computer work back in the day with Bonzi Buddy, and there were real users who wanted that pile-o-crap on their computer. It was very much wanted, and they went out of their way to get it.

2
ae_keji 1 hour ago 0 replies      
This is fascinating: malware is being distributed in broad daylight by companies registered and operating in developed countries. Attacks from single users in, or pretending to be in, Eastern Europe or some other area with weaker unauthorized-access laws seem to be getting less popular, as hackers realize that the technology and methods of unauthorized access are developing a lot faster than laws can keep up with them.
3
petetnt 2 hours ago 0 replies      
The statement provided by the company in question is kinda hilarious, basically "Our software totally behaves like malware, but don't you dare say it or you'll be hearing from our team of rabid lawyers!"
4
YeGoblynQueenne 1 hour ago 0 replies      
>> Contrary to Talos' wrongful allegations, our business has been approved by French regulators and we have never been indicted or sued for any malware distribution!!!!

Oh dear. Four exclamation marks? Someone's about to start wearing their underpants on their head, methinks. [1]

_________

[1] http://wiki.lspace.org/mediawiki/Multiple_exclamation_marks

5
based2 2 hours ago 0 replies      
http://blog.talosintel.com/2016/04/the-wizzards-of-adware.ht...

"With a network display 11.7 million PCs installed worldwide, Tuto4pc.COM GROUP achieved a turnover of 12 million during the year 2014."

6
siculars 1 hour ago 0 replies      
The doublespeak is real.
28
The Burglar with His Very Own Mac Attack thedailybeast.com
100 points by kposehn  11 hours ago   23 comments top 10
1
andkon 8 hours ago 3 replies      
This reminds me a lot of BLDGBLOG's post about Die Hard's architecture[1], especially this line:

 his dream house included a maze of trap-doors and what Sergeant Scheimreif called "escape holes." It was everything he seemed to want a building to be, with near-infinite ways of getting from one room to another and no upper limit on the places he could hide.
This stuff is a weird obsession of mine, probably stemming from being a skateboarder for my whole life. Eventually, you realize that all space is something you can rearrange.

[1] http://www.bldgblog.com/2010/01/nakatomi-space/

2
HoopleHead 8 hours ago 1 reply      
I know it was just a taster, trying to interest us in a forthcoming book. But, nevertheless, I thought that story was very long on hyperbole and very short on detail.
3
y7 5 hours ago 3 replies      
> But there was more to it than that. Hidden inside the repetitive floor plans and the daily schedules of these franchised businesses, Roofman had found the parameters of a kind of criminal Groundhog Day: a burglary that could be performed over and over in different towns, cities, and states -- probably even different countries, if he'd tried -- and his skills would only get better with each outing. In a very real sense, he was breaking into the same building again and again, endlessly duplicating the original crime.

> For Roofman, it was as if each McDonald's with its streamlined timetable and centrally controlled managerial regime was an identical crystal world: a corporate mandala of polished countertops, cash registers, supply closets, money boxes, and safes into which he could drop from above as if teleported there. Everything would be in similar locations, down to the actions taking place within each restaurant. At more or less the same time of day -- whether it was a branch in California or in rural North Carolina -- employees would be following a mandated sequence of events, a prescribed routine, and it must have felt as if he had found some sort of crack in space-time, a quantum filmloop stuttering without cease, an endless present moment always waiting to be robbed. It was the perfect crime -- and he could do it over and over again.

> For Roofman, it must have looked as if the rest of the world were locked in a trance, doing the exact same things at the exact same times of day -- in the same kinds of buildings, no less -- and not just in one state, but everywhere. It's no real surprise, then, that he would become greedy, ambitious, overconfident, stepping up to larger and larger businesses -- but still targeting franchises and big-box stores. They would all have their own spatial formulas and repeating events, he knew; they would all be run according to predictable loops inside identical layouts all over the country.

Personally, I got a bit bored with the author's style of continuously seeking bold metaphors for the same thing, but I'm curious: do people consider this interesting writing, or does this style detract from the content?

4
BWStearns 9 hours ago 0 replies      
It's always interesting to see the systems that get engineered (engineer themselves? abiogenerate?) as externalities. It's like a scaled up pathological version of https://xkcd.com/1172/

I don't think this is common enough to justify having pentesters think about your corporate procedures' exposure to such behavior, but it's definitely an interesting thing to think about when designing sensitive human or mixed human/computer systems.

Edit: word building fix

5
svantana 3 hours ago 0 replies      
This is reminiscent of the dichotomy of well-structured software versus copy protection/cracking -- software development thrives on clear organization and separation of concerns, but so does cracking (e.g. having a single function take care of all license management will make it crackable in minutes). So software devs who want good DRM are forced to employ arcane obfuscation and back-handed tactics, sometimes to the detriment of the functionality and repairability of the software. I guess that's part of why we can't have good things...
6
beejiu 7 hours ago 3 replies      
This page has downloaded 20MB of content in about 20 seconds. Despite having a top-of-the-range iMac, I can barely scroll the page without lag.
7
cm3 5 hours ago 1 reply      
Who else thought this was about a Mac OS exploit?
8
SFJulie 7 hours ago 0 replies      
So by organizing people by routine, franchises open the door to a new type of crime.

Imagine if an organized mob decided to mutualize the analysis and then struck 100 shops at the same time?

Imagine if banks did the same as McDonald's does?

9
nxzero 1 hour ago 0 replies      
>> "without fail described as politein one oft-repeated example, even insisting that his victims put on their winter coats so that they could stay warm after he locked them all in a walk-in freezer."

Sorry, but if someone tried to lock me in a freezer, my last thought would be how polite they were.

10
marincounty 7 hours ago 2 replies      
When I was in college, I had a security job. I literally watched a huge building on the weekends. It was in a bad part of town, so walking around that big, dark building was nerve wracking. I had a time clock, and had to punch 15 keys per hour. For two years, I made my rounds on the hour. Ten minutes of running around that place. Then back to the janitorial closet, and my homework.

After two years, my boss asked me to mix up my rounds. When I mixed up my rounds, I saw a lot of people doing things in that building they shouldn't have. Did I care? No! It was nothing life-or-death, and I wasn't going to die over $7.49/hr. Nor would I ruin some guy's life. No -- I wasn't a good security guard, but the patrons always got their lost purse/wallet back if I found their items.

People don't like to hear this, but so much theft is internal. Entities like to blame professional criminals, drug addicts, etc., but so much theft is internal.

The people higher up in the organization stole the most. Then it was middle management. And then cops stole -- wow, it was staggering, but they were pretty slick. And the thefts were always blamed on gangs, the homeless, or that new janitor.

I stayed quiet, and watched their behavior. I can usually walk through any store and spot which employee is stealing. I have found they are usually overly enthusiastic, care too much about following exact procedure, and are usually the last person in the organization you would expect to have a dark side, and they never complain. In other words, the person who gets the management promotion.

(I don't want to argue. I won't be back. If you do have a problem with stealing, really try to stop. If you can't stop, be smart. Don't steal enough to rack up a felony. I believe it's over $500? Don't ever walk into an establishment with no money. I forget what it's called, but it racks up big charges. If you are stealing because of the thrill it brings, take up intense exercise, or see a therapist. And try to take on Robin Hood morality: never take from the poor. Don't let the innocent guy take the fall. Be a stand-up guy?)

29
Moving Past the Scaling Myth silvrback.com
86 points by tmorton  14 hours ago   18 comments top 7
1
ktRolster 10 hours ago 1 reply      
Fred Brooks pointed out that with a small team of competent programmers, any organizational methodology will work.

If you pay attention, the best Agile systems focus on improving the skills of developers instead of forcing people to follow the 'steps' or a formula.

2
exabrial 11 hours ago 1 reply      
I really wish the scaling myth would die. Everyone pretends they have a scalability problem because it's sexy, and, as such, travesties have been committed (you know, in case we scale). How about instead designing software where the architecture and code are clear, consistent, and readable by the next guy?
3
tango12 52 minutes ago 0 replies      
Regarding the architecture question: people generally start carving out microservices once they hit some scale.

What about starting off with microservices (because tools like k8s make the otherwise insane management overhead more tractable)? Could that be a possible solution?

Databases need to handle the problem of scaling state anyway. Ideally the same database just keeps on working as you go up. Or at least the protocol is the same, and you switch out the implementation from a single instance to a clustered version.

4
nickpsecurity 12 hours ago 0 replies      
Hmmm. Maybe. What I've found is that IT often re-invents past designs and methods to solve modern problems. Matter of fact, the way the cloud providers scaled was to re-implement mainframes on x86 boxes. The next level is applying lessons from older research in modern ways to improve efficiency of various parts of HW and SW. If anything, quite a few people came up with timeless lessons that teach you how to do efficient, reliable computation. They keep getting reused.

So, I don't think we can take it as far as the author is suggesting, where it's like classical vs. quantum physics. Maybe at the ASIC-vs-software level. Even those were partly joined with synthesis & coprocessing tools. I just don't see it, as every high-level description I read about these things is based on similar principles at each layer of the stack. Given similar constraints or goals, you would use a similar strategy. There's certainly divergence, and there are outliers, but more repetition of patterns than anything.

The only myths are that technologies/fads X, Y, and Z should've been widely adopted over the ones (or enhancements of them) that consistently worked. The author is seeing the results of people building on stuff that came with assumptions that don't match the new problem. Or of people straight-up ignoring the root cause in their solutions or methods. Common problems.

5
cateye 7 hours ago 0 replies      
One of the underlying problems is that there isn't a formula (= process) for innovation. Most software development tries to create a competitive advantage by creating something unique that isn't easy to replicate.

So, every process that promises to deliver that just by being applied creates a false hope. Or it is not meant for this purpose, but the "buyers" aren't aware of it and have other expectations.

6
DanielBMarkham 1 hour ago 0 replies      
As somebody who loves startups and small teams, and who has a job working with both small and big organizations, I have been living this scaling thing for a long time.

I'd like to give you a simplistic answer, like "All you need, kid, is a small team! For anything!"

Slogans like that are true, but they are terribly misleading, because 1) many organizations are already terribly overstaffed, and 2) it doesn't really help teams answer "What do I do right now?"

So here's as simple as I can make it:

Good organizations will do whatever is necessary to make things that people want, even if that means instead of programming, the programmers sit on the phone and do some manual chore for people as they call. Before you code anything, you have to be hip-to-hip with some real person that you're providing value to.

But as soon as you have those five folks sitting on the phones doing something useful? You gotta immediately automate. Everything. This means you're going to have all freaking kinds of problems as you move from helping ten people a day to helping a million. You have to automate solutions, access, coordination, resource allocation, failovers, and so on -- the list is almost endless (but not quite)

As they grow, poor organizations take a scaling problem and assign it to a person. Somebody does it once, then they're stuck with it for life. Good organizations continue to do it manually, then immediately automate. Somebody does it once, then the team immediately starts "plugging all the holes" and fixing the edge cases so that nobody ever has to manually be responsible for that again.

Growing "sloppy" means you end up helping those million people a day -- but you have hundreds of people on staff. Meetings take time. Coordination takes time. There are a ton of ways to screw up. People tend to blame one another. Growing good means you can be like WhatsApp, servicing a billion users with a tiny team of folks.

If you're already an existing BigCorp and have been around for a while -- odds are that you are living with this sloppy growth pattern. That means you need to start, right now, with identifying all the feedback loops, like release management, and automating them, such as putting in a CI/CD pipeline. Not only that, but there are scores of things just like that. You have a lot of work to do. It might be easier just to start over. In fact, in most cases the wisest thing to do is start over.

Now picture this: you're an Agile team at BigCorp and you've got the fever. Woohoo! Let's release often, make things people want, and help make the world a better place. But looking around, all you see is ten thousand other developers in a huge, creaky machine that's leaking like a sieve. You go to a conference with a thousand other BigCorps, just like yours. Are you going to want to hear about how it's better just to trash things and start over, about the 40 things you need to have automated right now but don't, or how to make your section of 150 programmers work together; how to "scale agile"?

Scaling Agile is an issue because the market says it is an issue.

7
wpietri 12 hours ago 1 reply      
This makes a lot of sense to me. At the very least, I think that most American organizations undergo a substantial cultural transition as they scale, so it makes sense we'd also need a process transition.

The problem I see is that people aren't really willing to be honest about the cultural transition that happens, so we also can't be honest about the process transition.

I think Agile-ish approaches work very well in startups, because the structure is pretty flat, and the goals are shared. But as companies grow, they tend to become what I think of business feudalism: hierarchical, control-oriented, territorial. For that, it makes sense you need different processes. And I think large company Agile is in effect Waterfall with a faster cadence, so you get that different process. But nobody will admit it. "We're doing Agile," they say, with too-bright eyes and gritted teeth.

What I wonder is: what if instead of killing the peer culture and the human-centered process as we scaled, we kept them?

30
Lavabit code open sourced github.com
417 points by jayfk  18 hours ago   50 comments top 9
1
matt_wulfeck 15 hours ago 1 reply      
I had never read the closing letter, and it is quite unnerving.

"If my experience serves any purpose, it is to illustrate what most already know: our courts must not be allowed to consider matters of great importance in secret, lest we find ourselves summarily deprived of meaningful due process. If we allow our government to continue operating in secret, it is only a matter of time before you or a loved one find yourself in a position like I was standing in a secret courtroom, alone, and without any of the unalienable rights that are supposed to protect us from an abuse of the states authority."

2
kkl 16 hours ago 2 replies      
A number of comments in this thread appear to suggest that Lavabit was end-to-end encrypted. It was not.

https://moxie.org/blog/lavabit-critique/

3
hartator 18 hours ago 3 replies      
It's not the original Lavabit, but Dark Mail.
4
colejohnson66 17 hours ago 3 replies      
Curious: what would happen if a bunch of these popped up all over the place and used end-to-end encryption between each other, making email truly secure? Would such a thing be possible? Adopt Mega's model, where the server stores the private key but encrypts it with the user's password, and only the browser has the decrypted copy.
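
Roughly, that model could look like the following -- a minimal Python sketch, assuming the third-party "cryptography" package; all names and parameters here are illustrative, not Mega's or Lavabit's actual scheme:

    import base64
    import hashlib
    import os
    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    def wrap_private_key(private_key_pem, password):
        """Encrypt a private key under a key derived from the user's password.
        The server stores only (salt, wrapped); the password stays client-side."""
        salt = os.urandom(16)
        kdf_key = hashlib.pbkdf2_hmac("sha256", password, salt, 200000)
        wrapped = Fernet(base64.urlsafe_b64encode(kdf_key)).encrypt(private_key_pem)
        return salt, wrapped

    def unwrap_private_key(salt, wrapped, password):
        """Client-side (e.g. in the browser): re-derive the key and decrypt."""
        kdf_key = hashlib.pbkdf2_hmac("sha256", password, salt, 200000)
        return Fernet(base64.urlsafe_b64encode(kdf_key)).decrypt(wrapped)

The server only ever sees the wrapped blob, so it can't read mail on its own -- though everything still hinges on the password having enough entropy to resist offline guessing.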
5
tacojuan 17 hours ago 0 replies      
I've seen some open-source implementations of ProtonMail's stuff; any comparisons?
6
sig_chld_mike 15 hours ago 0 replies      
So what happens if you run this on Amazon (or any other cloud provider that would cooperate with government intrusion)? Do you need your own servers to make it work as intended?
7
justifier 12 hours ago 1 reply      
Can anyone speak to the value of the DIME spec?

https://darkmail.info/downloads/dark-internet-mail-environme...

8
psiconaut 17 hours ago 0 replies      
The repo wasn't public before?
9
deepnet 17 hours ago 2 replies      
So Ladar Levison closed his company because he refused to provide a backdoor to his customers' email encryption?

This seems similar to the Apple case; was Tim Cook just too big to bully?

Snowden had his own encryption or used GPG, so a Lavabit backdoor encryption key would not lower the entropy of Snowden's encrypted emails.

Was Levison's gag order lifted? Why did Lavabit have to close but Apple didn't?

[EDIT] Levison was not jailed.

       cached 1 May 2016 16:02:01 GMT