hacker news with inline top comments    15 Feb 2017    Best
A US-born NASA scientist was detained at the border until he unlocked his phone theverge.com
943 points by smb06  2 days ago   433 comments top 62
Jonnax 2 days ago 18 replies      
Their point about how other countries will take the US's stance as a cue is somewhat scary.

If you try to cross any border, it will mean relinquishing access to all your accounts.

I'm assuming email also comes along with 'social media', since communication is by definition social.

So how do you protect yourself? I don't think "just don't have any social media" is a good answer, because the relationship that children growing up today have with the internet is almost completely different from that of people even 10-15 years older than them.

Someone having carte blanche access to a person's phone will find something if they want to.

Imagine you're in a few group chats, someone mentions doing some drug. And you've just entered a country where that's an instant prison sentence.

Maybe some off-colour jokes about politicians? Proof enough to kick you out, or at least detain you.

I imagine we're at the cusp of something much more unsettling. The technology to reverse image search a face is available today. It's pretty easy to make you appear associated with anything, anyone, etc.

krab 2 days ago 1 reply      
As an EU citizen, I see these events from a bit of a different angle. I have visited the US several times, and the atmosphere and behaviour of both the customs officers and the TSA personnel get more and more overbearing.

I have never gone through such an extended search, but going through a US airport feels really uncomfortable, to an extent I haven't seen in any other country (the UK comes close, though). The thing is, Trump only added a little bit. This is a process that has been evolving for some time already.

I wonder if anything would change if all US travellers to Europe were given a leaflet explaining:

"As a reciprocal measure for the ESTA/visa process, you are obliged to pay a $14 entry fee. Moreover, we will perform an extended search on every fifth American passport holder. During the search, we may seize your devices and ask for your passwords. Not complying may result in detention of up to 24 hours and/or denied entry."

caminante 2 days ago 5 replies      
Here's the Customs and Border Protection policy in question [0] (see page 31).

The EFF has a nice write-up on this topic [1]. It sounds like there's a "border search exception" that bypasses the Fourth Amendment. The rationale was to ensure duties were paid and to screen for "bad guys," drugs, weapons, diseased fruit, etc.

[0] https://www.dhs.gov/sites/default/files/publications/privacy...

[1] https://www.eff.org/deeplinks/2016/12/law-enforcement-uses-b...

suprgeek 2 days ago 1 reply      
The crux of the matter is here:

> More importantly, travelers are not legally required to unlock their devices, although agents can detain them for significant periods of time if they do not...

and here: "The document given to Bikkannavar listed a series of consequences for failure to offer information that would allow CBP to copy the contents of the device. 'I didn't really want to explore all those consequences,' he says. 'It mentioned detention and seizure.'"

It sounds like CBP is trying to get around the need to legally compel PIN disclosure by simply detaining citizens until they comply, which is illegal.

This is grounds for a habeas corpus lawsuit, should a citizen really dig in their heels.

TomMarius 2 days ago 2 replies      
When I was a child (in central Europe), the USA was seen as a heaven where everyone wanted to live, and I wanted that too. Nowadays I'm very happy I live in a republic in the middle of the old continent that is "poor" by the numbers but much more free.
mindslight 2 days ago 1 reply      
Especially with a USG-owned device, this seems like it would have been a ripe time to assert one's citizenship for entry and just let them steal the device.

The last time I traveled internationally, I purposely brought only an old laptop. Before returning, I zeroed the hard drive and physically removed it from the machine, so the scum would have less of my property to steal under whatever pretext.

For my preparation I was rewarded with absolutely no thuggery, which is how the vast majority of border crossings actually go. That's the insidious thing about the inverted-totalitarian threat model: these specific situations are inherently rare. If they were common, change would easily happen through democratic means. It is only because the majority of people believe it cannot happen to them that the injustices are allowed to persist.

We really need a reboot toward a modern OS model that puts cryptographic access control front and center, with support for secret splitting and the appropriate bottom-up foundation that allows for steganographically secure machines. I can actually see this plausibly happening for proper personal computers, eventually. Unfortunately the average person's computing device has become a "cell phone" which, even ignoring the inherent pwntivity of Qualcomm integrated chips, is a software ecosystem funded primarily through commercial surveillance.
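The secret-splitting idea above can be illustrated with a trivial 2-of-2 XOR scheme (a sketch only; a real system would use something like Shamir's scheme for k-of-n thresholds):

```python
import os

def split_secret(secret: bytes) -> tuple[bytes, bytes]:
    """2-of-2 XOR split: either share alone is indistinguishable from noise."""
    share_a = os.urandom(len(secret))
    share_b = bytes(x ^ y for x, y in zip(secret, share_a))
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    """Recover the secret; both shares are required."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))

key = b"disk-unlock-key"
a, b = split_secret(key)
assert combine(a, b) == key
```

Carry one share yourself and leave the other with someone at home; neither a border agent nor the friend alone learns anything about the key.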

makecheck 2 days ago 2 replies      
There are at least four facts that should cause practices like this to be discontinued immediately:

It is not only possible to acquire electronic data after crossing a checkpoint; there are many ways of doing so.

There is no possible way for the contents of a phone to be a threat to TRANSPORTATION security, which is theoretically the only reason anyone should care when you're crossing a border, boarding a plane, etc.

Even if it were possible for data itself to be a threat (and it's not), there are many ways to carry data. Someone could hide the data in encrypted form, or even hide it in plain sight by being clever. Also, the information crossing a border doesn't have to be electronic at all; it could be a page in a book.

Even if something suspicious is found, that is not guilt, and no charges can be laid, so what is the point!?

It's long past time to shut down all of these ridiculous things. There should be a very tiny list of things that border security needs to do, and it should all fit on one hand.

swalsh 2 days ago 1 reply      
Another terrifying part of this is that, due to the nature of networks, when one of your "friends" becomes compromised, your private messages do too. This goes way past national security and 7 countries.

I have former coworkers from Syria, Iran, and Iraq. They're great people, and great programmers. I friended them on Facebook many years ago, and now when one of them is caught at a border, it's not just their private messages being raided; it's my own anti-Trump messages too.

This needs to stop here.

maaaats 2 days ago 1 reply      
The former prime minister of Norway was recently detained at the border for previously having visited Iran. Think about that: the prime minister of a NATO-allied country. The border rules (even before Trump) are whack.
safeaim 2 days ago 2 replies      
For all of you recommending fake accounts, do remember that right before Christmas the Obama administration signed new rules [1], giving the NSA leeway to share its collected data with 16 other agencies, including DHS, which CBP falls under. So you may get caught if you try to pull these shenanigans off. US agencies are no strangers to mission creep when it comes to sharing data, as seen recently in this article from The Intercept on how the FBI is building a national watchlist for companies that want real-time updates on whether their employees have committed any crimes while employed. [2]

Two quotes from the NYT article that I feel are important to keep in the back of your head when you plan your fake accounts:

Now, other intelligence agencies will be able to search directly through raw repositories of communications intercepted by the N.S.A. and then apply such rules for minimizing privacy intrusions.

But Patrick Toomey, a lawyer for the American Civil Liberties Union, called the move an erosion of rules intended to protect the privacy of Americans when their messages are caught by the N.S.A.'s powerful global collection methods. He noted that domestic internet data was often routed or stored abroad, where it may get vacuumed up without court oversight.

Let's say CBP gets a tool in a couple of months that lets its border agents search any passenger through the NSA's raw data. That search may then turn up your real accounts. Let's say they do this before questioning you, and you then provide them with your fake accounts; that will not look good.


EDIT: Removed the part about felony, as that was blatantly wrong.

joshuaheard 2 days ago 3 replies      
I don't see what this has to do with Trump's travel restrictions, other than it coincidentally happened at the same time. If the author is trying to imply this correlation is causation, there is no evidence in the article. That being said, no American should have to have his phone searched at the border, even with the stated border exceptions to the Fourth Amendment.
coldcode 2 days ago 0 replies      
No, the US should not be allowed to request or demand access to a US citizen's electronic devices for any reason whatsoever, no matter what Homeland Security says. The whole point of customs was to ensure that goods were not brought into the country without paying duties. The contents of an electronic device cannot be charged a duty. Anything else is beyond their authority. Of course none of this will stop a guy in a fancy uniform from demanding an illegal search anyway and making your life hell. Given the current government, this will only get worse.
ianderf 2 days ago 0 replies      
> just over a week into the Trump Administration.

It actually started a long time before Trump. http://travel.stackexchange.com/questions/3363/laptop-search...

bogomipz 2 days ago 0 replies      
>"Not only is he a natural-born US citizen, but he's also enrolled in Global Entry, a program through CBP that allows individuals who have undergone background checks to have expedited entry into the country."

Incredible. The TSA and DHS are basically "theater of the absurd". Every other disturbing detail aside, this individual actually paid good money to enroll in the Global Entry program, only to be detained and humiliated by this agency.

bmc7505 2 days ago 1 reply      
This is why we need plausibly deniable encryption. Does anyone know of an Android ROM or jailbreak app that presents a lock screen visually indistinguishable from the real one, but can be unlocked to an innocuous home screen?
Havoc 2 days ago 1 reply      
Note to self - bring burner phone for dodgy countries...like the US.
throwaway2439 2 days ago 1 reply      
I work at a military base. I received a background check and everything, not for anything classified, but for a scientific research group stationed on the base. Now I think I'm afraid to leave the country because I'm not white.

Here I was planning to get Global Entry, but it's clear it doesn't matter a lick.

rl3 2 days ago 1 reply      
Using detention as a tool to extort access from people is underhanded at best. Rifling through people's digital devices should not be acceptable.

Why don't Apple, Google, Microsoft, Facebook, et al. coordinate with organizations like the EFF or ACLU and throw their weight behind a campaign to stop this bullshit?

If not the companies, there are still plenty of extremely wealthy individuals in SV who one would think might care.

fny 2 days ago 1 reply      
All the people suggesting a "duress mode" would solve this issue need to wake up.

As long as it's not illegal to force people to open their phones at the border, you are not under duress. In fact, the government could even warp the situation to the point where you'd be committing perjury by showing a fake screen.

Unfortunately, we can't solve this problem through technology: we need to convert the broader public and fight to make the representatives we elect work for the people.

blintz 2 days ago 1 reply      
> More importantly, travelers are not legally required to unlock their devices, although agents can detain them for significant periods of time if they do not.

What kind of ridiculous technicality is that? Detainment isn't supposed to be a tool to coerce cooperation.

BJanecke 2 days ago 2 replies      
Obviously NASA has some sensitive data, but just for a second try to appreciate how this would play out for someone who works in the financial industry, where unauthorised sharing of sensitive data is not only a breach of contract with your employer/clients but a crime (insider trading) in almost all countries.

Or imagine how the people who enforce these new regulations can exploit this.

Ask the traveller at the border what their job is. If they're in the financial industry, request that they relinquish their email, find something that could tip you off, and go buy stock. If they don't comply, you lose nothing: you deport them, and you've done your job.

I understand that you might want to tear this apart, but keep in mind that the person who requests your data will often not be the person viewing it, so you are in no position to just "take some names" and then ensure that your data remains confidential.

This is terrifying.

kentbrew 1 day ago 0 replies      
Some tweets from @Pinboard feel relevant here:

"This is a small point but important: don't specify people are US-born; either you're a US citizen or you're not."

"Emphasizing that someone was born in the US as a kind of super-citizenship plays into the hands of people you don't want to be helping."

"The proper term for someone born abroad who doesn't speak English and has a brand-new US passport is: American."

biafra 2 days ago 3 replies      
What is the worst that can happen to a non-US citizen who does not produce the passwords they demand? Being sent back immediately and having all belongings seized? Or only the devices they don't get the decryption passwords for? Detainment for how long? Being charged with what?
bitwizzle 2 days ago 1 reply      
Now is the time for vendors to consider implementing a duress password. Upon entering your duress password, the user is presented with a fake profile, or perhaps everything is just wiped. I'm not sure how well this would play out in the real world, but it's one of the best protections I can imagine if you want to carry sensitive data across borders.
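The duress-password dispatch could look something like this. Everything here is hypothetical vendor logic invented for illustration, with a fast hash standing in for the slow, hardware-backed KDF a real device would use:

```python
import hashlib, hmac

# Hypothetical stored verifiers (a real device would use a slow KDF
# and hardware-backed storage, not bare SHA-256).
REAL = hashlib.sha256(b"1234").hexdigest()
DURESS = hashlib.sha256(b"9999").hexdigest()

def unlock(pin: str) -> str:
    """Route a PIN entry to the real profile, a decoy, or nothing."""
    digest = hashlib.sha256(pin.encode()).hexdigest()
    if hmac.compare_digest(digest, REAL):      # timing-safe comparison
        return "real profile"
    if hmac.compare_digest(digest, DURESS):
        return "decoy profile"                 # could instead trigger a wipe
    return "locked"
```

The key property is that an observer watching you type cannot tell which verifier matched.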
Mikeb85 2 days ago 3 replies      
This happens in Canada too, unfortunately (was in the news quite a bit not too long ago, and can be seen in action on that reality TV show about border security).

Best to wipe your phone, and not bring any sensitive documents across borders period.

xexers 2 days ago 0 replies      
"You will not share your password (or in the case of developers, your secret key), let anyone else access your account, or do anything else that might jeopardize the security of your account."


It would be a violation of my Facebook terms and conditions to share my Facebook password!

Esau 2 days ago 0 replies      
This is bullshit. The U.S. should be required to treat its citizens in a constitutional manner regardless of where they are located.
virmundi 2 days ago 3 replies      
As a white man, I'm a bit concerned about coming back into my own country. The only social media accounts I have are here and Reddit. Will the guards at the gate accept that I don't have a Facebook or Twitter account?
heckubadu 2 days ago 0 replies      
Here's gov data on device searches, from the ACLU https://www.aclu.org/government-data-about-searches-internat...
clamprecht 2 days ago 1 reply      
> "I asked a question: 'Why was I chosen?' And [the CBP agent] wouldn't tell me," he says.

He should file a Freedom of Information / Privacy Act request to get the reason they chose him.

helpfulanon 2 days ago 0 replies      
So, for a casual traveler whose phone may have conversations peppered with unfavorable political views throughout: what is the best security hygiene in this situation?

Anyone have tips or tricks that average people can use, things that maybe don't involve having a separate phone etc?

ianderf 2 days ago 1 reply      
Has nobody mentioned the "Scroogled" short story yet? I really hope this is not the future that awaits us. http://www.crimeflare.com/doctorow.html
droithomme 2 days ago 4 replies      
US citizens cannot be denied re-entry into the US.
tn13 2 days ago 0 replies      
What if you are judged by the password you have chosen? You have a 24-character password? Clearly you want to hide something more sinister.

Why is your password "ResidentEvil3040"? Do you intend not to return after visiting the USA?

otaviokz 2 days ago 0 replies      
Welcome to the USSR as we read about in school...


MR4D 1 day ago 0 replies      
Interesting thought: since this appears to be the law of the land, and the border is controlled by the Executive Branch, what happens if somebody who works in one of the other two branches is stopped? It seems they would have a good claim that it's unconstitutional, given the Separation of Powers the constitution provides.

I would think that once a lawyer frames this possibility in front of a judge, the practice will be struck down.

Abishek_Muthian 1 day ago 0 replies      
Supposedly the reason being suggested is his South Indian name, 'Sidd Bikkannavar'; being a South Indian myself, I'm curious to know the story behind his name. The name part 'annavar' is generally found in interior villages, associated with village gods, but I have never come across anyone named this way; perhaps it was 'Siddarth' which got shortened to Sidd.
pedalpete 2 days ago 0 replies      
The Customs agent insisted he had the authority to search the device.

I was thinking about this the other day, as a non-American, and I don't see how that is possible.

I work for the Australian gov't and of course cannot give out passwords or access to any of the devices I carry that belong to the Australian gov't. How do US border controls deal with that situation? They definitely do not have the authority to search a device owned by a foreign gov't, though it also seems they don't have the authority to search the device of an American either.


bayesian_horse 2 days ago 0 replies      
I feel discouraged from visiting the US for at least the next few years.
sneak 2 days ago 0 replies      
Use a password manager. Use long random passwords for every site.

Set your phone PIN to something 20 characters long and random, and text it to your friend. Write your friend's number on a slip of paper, but add 1 to each non-area-code digit.

Disable biometrics. Power off phone.

You now no longer have the ability to provide the information they seek at the border.

Call your friend when you get through (from someone else's phone) and change your PIN back.
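The digit trick in the parent comment can be done mechanically. A sketch, assuming a 3-digit area code is left untouched and 9 wraps around to 0:

```python
def shift_digits(number: str, delta: int, keep: int = 3) -> str:
    """Shift each digit after the first `keep` characters by `delta`, mod 10."""
    head, tail = number[:keep], number[keep:]
    return head + "".join(
        str((int(c) + delta) % 10) if c.isdigit() else c for c in tail
    )

obscured = shift_digits("4155551234", +1)   # write this on the slip of paper
assert obscured == "4156662345"
assert shift_digits(obscured, -1) == "4155551234"  # recover the real number
```

This is obfuscation, not encryption: it only has to defeat a casual glance at the slip of paper, not an adversary who knows the scheme.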

sova 2 days ago 0 replies      
Telecommunications devices are so strongly regulated and the laws regarding privacy so systemically ignored that it's a wonder we even petition our rhetorical "ownership" of said stuff. Jokes aside, just because an airport is an "effective border" and also a big police station at the same time, that does not mean you waive your rights.
Frogolocalypse 2 days ago 0 replies      
While I have visited the states many times over the years, and enjoyed the time I spent there, I will never go there again.
donquichotte 1 day ago 1 reply      
This reminds me of the guards on the border between Kazakhstan and Uzbekistan that wanted to check the photos on my phone, presumably because the import of pornographic material into Uzbekistan is strictly outlawed.
bradneuberg 2 days ago 0 replies      
If this becomes commonplace across borders, phones should come with a "fake" unlock code: entering it drops you into a plain-vanilla setup, perhaps with some fake contacts. It might make sense to create fake email and social media accounts too, then...
6nf 2 days ago 1 reply      
This guy is a US citizen. What are they going to do if he refuses? He has the right to enter the USA. At worst they can confiscate the phone but without a court order I don't see how they can detain him indefinitely. Can they?
JammyDodger 1 day ago 2 replies      
Why is this even a story? I've had the same thing happen to me multiple times coming into the UK. I'm also white and British, if that makes any difference.
JustSomeNobody 2 days ago 1 reply      
So, if it was NASA's phone, why not call their legal department before turning over the phone? I carry a work phone and would definitely seek legal counsel before turning it over to anyone.
ommunist 2 days ago 0 replies      
Looks like the US gov is a self-eating snake. Reminds me of the grave incident that happened to Dr Stephen Mann, who was brutally deprived of his reality-augmentation devices at the US border.
nnd 2 days ago 0 replies      
Looks like one will need to wipe their phone before traveling to the US from now on. What about laptops, though? :/
99_00 1 day ago 0 replies      
This happens every day in western countries. It happened before Trump and it will happen after.
whalesalad 2 days ago 0 replies      
This happens in other countries too. It's happened to friends of mine entering Canada from the United States.
SN76477 2 days ago 0 replies      
I guess I will start packing my phone away and using my iPad when I travel, since it has less personal data attached.
Pica_soO 2 days ago 0 replies      
Could I have a dual-boot John Doe OS on my phone, presenting the most boring person ever?
tn13 2 days ago 1 reply      
How does this apply when my phone belongs to my employer and has sensitive data on it?
yarou 2 days ago 3 replies      
Let it happen to Thiel or Musk, and see how quickly the procedure gets reversed.
br_smartass 2 days ago 0 replies      
Great freedom, huh?
ArenaSource 2 days ago 0 replies      
Where did I put my old Nokia 8210...
jjawssd 2 days ago 4 replies      
kareldonk 2 days ago 0 replies      
This is what happens in statism. Time to wake up, slaves. Google my name and statism. Read.
kevin_thibedeau 2 days ago 2 replies      
> Since the phone was issued by NASA...

So it was already government property. I don't see the issue here.

tn13 2 days ago 3 replies      
This is really sad. As an immigrant I always thought this was coming. I never fly on a Muslim airline like Etihad or Turkish even though they are more convenient, and I do not accept friend requests from Muslim people on LinkedIn and Facebook.
batbomb 2 days ago 4 replies      
It's quite possible he had information subject to ITAR regulations, including data about sensors and mirrors. At the very least, he probably had sensor vendor specifications, which are trade secrets and often covered under NDAs.
Python moved to GitHub github.com
808 points by c8g  2 days ago   298 comments top 25
payne92 2 days ago 13 replies      
Part of GitHub's secret sauce: web source-tree browsing that's front and center, relatively decent, and with OK search (versus making the log/history the central part of the web UI, as other tools seem to do).

There are SO many times I need a short peek at something, and am glad I don't have to clone/download, etc.

laurentdc 2 days ago 6 replies      

I quite like the idea of "centralizing" development on GitHub or similar services. It makes it much easier for everyone to fork, test, make a pull request, merge, etc.

For example, one reason I gave up contributing to OpenWrt was their thoroughly legacy contribution system [1], which required devs to submit code diff patches via email (good luck not messing up the formatting with a modern client) on a mailing list. It took me an hour to submit a patch for three lines of code. It seems Python wasn't much different. [2]

[1] https://dev.openwrt.org/wiki/SubmittingPatches#a1.Creatingap...

[2] https://docs.python.org/devguide/patch.html
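For the curious, the "code diff patches" in question are plain unified diffs. Python's stdlib can generate one (this is just an illustration of the format, not the actual OpenWrt or CPython tooling; file names are made up):

```python
import difflib

old = ["def greet():\n", "    print('hello')\n"]
new = ["def greet():\n", "    print('hello, world')\n"]

# Produce the unified-diff text you would paste into a mailing-list post.
patch = "".join(difflib.unified_diff(old, new,
                                     fromfile="a/greet.py",
                                     tofile="b/greet.py"))
print(patch)
```

Mail clients that rewrap lines or convert tabs to spaces corrupt exactly this kind of whitespace-sensitive text, which is the failure mode the parent describes.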

di 2 days ago 1 reply      
agentgt 2 days ago 4 replies      
This is a little disappointing for several reasons. I understand the merits of GitHub, but I really wish Python had at least stuck with a Mercurial repository and some decentralization.

It's especially sad because Mercurial is just now starting to be incredibly powerful with evolutions.

I guess I'm an old fart but all the centralization has made me paranoid and I still absolutely prefer Mercurial (albeit with plugins) over git.

misnome 2 days ago 8 replies      
Why on earth have they done:

> Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017

Rather than just

> Copyright (c) 2001-2017
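Collapsing the enumerated years into ranges is a mechanical transformation; a quick sketch:

```python
def collapse_years(years):
    """Render a sorted list of years as comma-separated ranges."""
    runs = []
    start = prev = years[0]
    for y in years[1:]:
        if y != prev + 1:            # run of consecutive years broken; flush it
            runs.append((start, prev))
            start = y
        prev = y
    runs.append((start, prev))
    return ", ".join(f"{a}-{b}" if a != b else str(a) for a, b in runs)

print(collapse_years(list(range(2001, 2018))))  # -> 2001-2017
```

It also handles gaps, e.g. `[1999, 2001, 2002]` becomes `1999, 2001-2002`.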

Fice 1 day ago 3 replies      
This is scary. For increasingly many potential contributors, a project effectively does not exist if it is not on GitHub. And, being a huge centralized service, GitHub is very susceptible to censorship (e.g. repos being taken down via DMCA, or Russia blocking GitHub until it started to cooperate with the censors). I see this dependence as very bad and dangerous for the free software movement. Should we even consider the convenience of a service that has serious ethical issues?
tbrock 1 day ago 0 replies      
Wow, if only we could get GNOME and Linux on there and give up this mailing-list-for-patches nonsense, we'd be golden.
anamoulous 2 days ago 1 reply      
Wow the black bar settled it for them, huh?
EvgeniyZh 2 days ago 1 reply      
I'd really like more big and important projects to move their development to GitHub-style services. Maybe I'm just not hardcore enough, but I feel they make life easier for maintainers, contributors, and newcomers alike.

But it's probably too hard to switch, and core developers don't see the point (since they're totally OK with working their way). Maybe when a new generation of developers takes core positions...

hueving 1 day ago 1 reply      
Sad day. Don't forget that GitHub is a closed-source tool. This is equivalent to them announcing they are switching to Jira.
imode 1 day ago 1 reply      
I'd question why it wasn't GitLab, but after the recent outage that would be in somewhat bad taste. :P

What exactly was Python using before, and where was it hosted? All I can find are the source archives on python.org. I'm assuming this wasn't a hard transition, but I'm genuinely curious about their development strategy regarding distribution of source.

jaimebuelta 1 day ago 0 replies      
Background info about why this migration happened and the alternatives considered:


dbalan 2 days ago 3 replies      
So Python eventually moved from Mercurial to git.
gigatexal 2 days ago 1 reply      
Python 3.7 is already in the works. The effort to push Python 3.x is picking up steam, it seems.
echelon 1 day ago 1 reply      
I can't help but think Gitlab would have been in contention for this move had they not had the recent outage. Can anyone from the Python org comment on what other choices were considered?
meneses 1 day ago 0 replies      
FYI, Python was on GitHub before, but as a read-only mirror. You still had to go to Mercurial to push updates.
rectangletangle 2 days ago 1 reply      
Where is the "issues" section? I wanna read about some gnarly low-level CPython bugs!

Other than that, this is a welcome change.

lucidguppy 1 day ago 0 replies      
I wish PRs on GitHub could be checked off per commit, like in Bitbucket.
faraggi 1 day ago 0 replies      
Anybody know what happened to Guido's contributions between 2008 and 2012?

Maybe he had kids. :P

napolux 1 day ago 0 replies      
Still waiting for WordPress to move. ;)
hayd 2 days ago 0 replies      
and nearly 100k commits... time to start making some PRs!
dustinmoris 1 day ago 1 reply      
gkya 2 days ago 4 replies      
This is FUD. There are sensible responses to this comment, so I won't write a new one, but I'll say one thing: you don't know how to use conventional tools, and you go on to blindly rant about them.

> Basically, e-mail is the death of any sort of low-effort contribution. If you're starting a new project, and chose a mailing list, you're probably excluding a huge quantity of potential contributors.

If those contributors are so incompetent that they cannot mail a patch, I'd be rather happy to have excluded them.

hexa- 2 days ago 0 replies      
I was recently hit by an IPv4 routing outage and had only IPv6 available to connect to the internet.

I was therefore unable to connect to github.com, as there is no IPv6 support available:

% host github.com
github.com has address ...
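The same check can be done from Python with the stdlib; an empty result means no AAAA records, i.e. no way to reach the host over IPv6:

```python
import socket

def ipv6_addresses(host: str) -> set[str]:
    """Return the IPv6 addresses DNS publishes for `host` (empty if none)."""
    try:
        infos = socket.getaddrinfo(host, None, socket.AF_INET6)
    except socket.gaierror:        # no AAAA records, or resolution failed
        return set()
    return {info[4][0] for info in infos}

# At the time of the parent comment, ipv6_addresses("github.com") came back
# empty, so an IPv6-only client could not reach it at all.
```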

Introducing Cloud Spanner, a Global Database Service googleblog.com
819 points by wwilson  10 hours ago   332 comments top 50
ChuckMcM 9 hours ago 9 replies      
Congratulations to the Spanner team for becoming part of the Google public cloud!

And for those wondering, this is why Oracle wants billions of dollars from Google for "Java copyright infringement": the only growth market for Oracle right now is their hosted database service, and whoops, Google has a better one now.

It will be interesting to see if Amazon and Microsoft choose to compete with Google on this service. If we get to the point where you have database, compute, storage, and connectivity services from those three at equal scale, well, that would be a lot of choice for developers!

suprgeek 9 hours ago 2 replies      
Really a CP system, but with availability of five 9s or better (less than one failure in 10^5).

How: 1) Hardware: gobs and gobs of hardware and SRE experience.

"Spanner is not running over the public Internet; in fact, every Spanner packet flows only over Google-controlled routers and links (excluding any edge links to remote clients). Furthermore, each data center typically has at least three independent fibers connecting it to the private global network, thus ensuring path diversity for every pair of data centers. Similarly, there is redundancy of equipment and paths within a datacenter. Thus normally catastrophic events, such as cut fiber lines, do not lead to partitions or to outages."

2) Ninja 2PC

"Spanner uses two-phase commit (2PC) and strict two-phase locking to ensure isolation and strong consistency. 2PC has been called the anti-availability protocol [Hel16] because all members must be up for it to work. Spanner mitigates this by having each member be a Paxos group, thus ensuring each 2PC member is highly available even if some of its Paxos participants are down."
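The quoted design can be caricatured in a few lines: plain 2PC aborts if any member is down, but making each member a replicated group that only needs a quorum of its replicas keeps the protocol available. This is a toy model for intuition only, nothing like the real protocol:

```python
class ReplicaGroup:
    """Toy Paxos-group stand-in: usable while a majority of replicas are up."""
    def __init__(self, name: str, total: int = 3):
        self.name, self.total, self.up = name, total, total

    def has_quorum(self) -> bool:
        return 2 * self.up > self.total

    def prepare(self, txn: str) -> bool:
        # Phase 1 vote: the group answers as long as it can reach quorum.
        return self.has_quorum()

def two_phase_commit(txn: str, members: list) -> str:
    # Every member must vote yes: the "anti-availability" property of 2PC.
    return "commit" if all(m.prepare(txn) for m in members) else "abort"

regions = [ReplicaGroup(r) for r in ("us", "eu", "asia")]
regions[0].up = 2                       # lose one replica: quorum survives
assert two_phase_commit("t1", regions) == "commit"
regions[1].up = 1                       # EU group loses its quorum
assert two_phase_commit("t2", regions) == "abort"
```

The point of the mitigation is that losing any single replica (the common failure) no longer takes down a 2PC member; only losing a whole group's quorum does.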

tedd4u 9 hours ago 8 replies      
The team here at Quizlet did a lot of performance testing on Spanner with one of our MySQL workloads to see if it's an option for us. Here are the test results: https://quizlet.com/blog/quizlet-cloud-spanner
richdougherty 5 hours ago 1 reply      
Some interesting stuff in https://cloud.google.com/spanner/docs/whitepapers/SpannerAnd... about the social aspects of high availability.

1. Defining high availability in terms of how a system is used: "In turn, the real litmus test is whether or not users (that want their own service to be highly available) write the code to handle outage exceptions: if they haven't written that code, then they are assuming high availability. Based on a large number of internal users of Spanner, we know that they assume Spanner is highly available."

2. Ensuring that people don't become too dependent on high availability: "Starting in 2009, due to excess availability, Chubby's Site Reliability Engineers (SREs) started forcing periodic outages to ensure we continue to understand dependencies and the impact of Chubby failures."

I think 2 is really interesting. Netflix has Chaos Monkey to help address this (https://github.com/Netflix/SimianArmy/wiki/Chaos-Monkey). There's also a book called Foolproof (https://www.theguardian.com/books/2015/oct/12/foolproof-greg...) which talks about how perceived safety can lead to bigger disasters in lots of different areas: finance, driving, natural disasters, etc.
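The forced-outage idea (like Chaos Monkey) can be approximated with a wrapper that injects synthetic failures at a configured rate, so callers can never quietly drop their error handling. A hypothetical sketch, with the RNG injectable so the behavior is testable:

```python
import random

class ChaosClient:
    """Wrap a backend call, failing a configurable fraction of requests."""
    def __init__(self, backend, outage_rate=0.01, rng=random.random):
        self.backend = backend
        self.outage_rate = outage_rate
        self.rng = rng                  # injectable for deterministic tests

    def call(self, *args, **kwargs):
        if self.rng() < self.outage_rate:
            raise TimeoutError("synthetic outage (chaos injection)")
        return self.backend(*args, **kwargs)

# Deterministic demonstration: pin the rng above the outage rate.
ok = ChaosClient(lambda x: x + 1, outage_rate=0.5, rng=lambda: 0.9)
assert ok.call(1) == 2
```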

jedberg 8 hours ago 7 replies      
This release shows the different philosophies of Google vs Amazon in an interesting way.

Google prefers building advanced systems that let you do things "the old way" but making them horizontally scalable.

Amazon prefers to acknowledge that network partitions exist and try to get you to do things "the new way" that deals with that failure case in the software instead of trying to hide it.

I'm not saying either system is better than the other, but doing it Google's way is certainly easier for enterprises that want to make the move, which is why Amazon is starting to break with tradition and release products that let you do things "the old way" while hiding the details in an abstraction.

I've always said that Google is technically better than AWS, but no one will ever know because they don't have a strong sales team to go and show people.

This release only solidifies that point.

karlding 9 hours ago 7 replies      
I wonder how this will affect adoption of CockroachDB [1], which was inspired by Spanner and is supposedly an open source equivalent. I'd imagine that Spanner is a rather compelling choice, since users don't have to host it themselves. As far as I know, Cockroach Labs does not yet offer CockroachDB as a hosted service (but it is on their roadmap) [2].

[1] https://www.cockroachlabs.com/docs/frequently-asked-question...

[2] https://www.cockroachlabs.com/docs/frequently-asked-question...

sudhirj 8 hours ago 2 replies      
For those trying to compare this with AWS Aurora, Aurora is more a regular database (MySQL / Postgres) engine with a custom data storage plugin that's AWS/ELB/SSD/EFS-aware. Because of this the database engine can make AWS specific decisions and optimizations that greatly boost performance. It supports master-master replication in the same region, master-slave across regions.

Global Spanner looks like a different beast, though. It looks like Google has configured a database for master-master(-master?) replication, across regions and even continents. They seem to be pulling it off by running only their own fiber, each master being a Paxos cluster itself, GPS, atomic clocks and a lot of other whiz-bangery.

cipherzero 9 hours ago 1 reply      
The white paper is available here, for anyone interested: http://static.googleusercontent.com/media/research.google.co...

runeks 9 hours ago 10 replies      
I wonder why they charge a minimum of $0.90 per node-hour when they offer VMs for as little as $0.008/hr. This is hugely useful even for single-person startups, so why charge a minimum of ~$8,000 per year?
xapata 9 hours ago 1 reply      
Amazon likes to respond to Google with its own price drops and product launches. It's telling that their announcements are orthogonal to Spanner instead of in direct competition with it.

When Google announced Spanner back in 2012, I'm sure Amazon and Microsoft started teams to reproduce their own versions.

Spanner is not just software. The private network reduces partitions. GPS and atomic clocks for every machine help synchronize time globally. There won't be a Hadoop equivalent for Spanner, unless it includes the hardware spec.

emersonrsantos 8 hours ago 1 reply      
Thomas Watson in 1943 and his famous quote: "I think there is a world market for about five computers."

If he were alive today, he might say those five computers are Google, Apple, Microsoft, Amazon and Facebook.

tabeth 7 hours ago 2 replies      
Forgive my ignorance, but could someone explain in layman's terms in which situations this would be helpful? E.g. if I have 1TB of data, would I use this? If I have 1GB with a growth rate of 25GB/day, would I use this?
gonyea 41 minutes ago 0 replies      
I can vouch for Spanner: it's a badass piece of Google's infrastructure.
jdcarr 9 hours ago 0 replies      
Link to the actual OSDI paper (not the simpler whitepaper) https://static.googleusercontent.com/media/research.google.c...
abalone 8 hours ago 1 reply      
How does this compare to AWS Aurora in terms of pricing and performance?

With Aurora the basic instance is $48/month and they recommend at least two in separate zones for availability, so it's about $96/month minimum. Storage is $.10/GB and IO is $.20 per million requests. Data transfer starts at $.09/GB and the first GB is free.[1]

Spanner is a minimum of $650/mo (6X the Aurora minimum), storage is $.30/GB (3X), and data transfer starts at $.12/GB (1.3X).

Of course with Aurora you have to pick your instance size and bigger faster instances will cost more. Also there's the matter of multi-region replication, although it appears that aspect of Spanner is not priced out yet. So maybe as you scale the gap narrows, but it's interesting to price out the entry point for startups.

[1] https://aws.amazon.com/rds/aurora/

Ajedi32 9 hours ago 0 replies      
> Today, we're excited to announce the public beta for Cloud Spanner, a globally distributed relational database service that lets customers have their cake and eat it too: ACID transactions and SQL semantics, without giving up horizontal scaling and high availability.

This sounds too good to be true. But it's Google, so maybe not. Time to start reading whitepapers...

aladine 28 minutes ago 1 reply      
Great product from Google. I wonder what the difference is between Cloud Spanner and Google Cloud SQL.
elvinyung 8 hours ago 1 reply      
While everyone is puzzling over how Spanner seems to be claiming to be CA, I would like to take this opportunity to bring up PACELC[1].

The idea is that the A-or-C choice in CAP only applies during network partitions, so it's not sufficient to describe a distributed system as either CP or AP. When the network is fine, the choice is between low latency and consistency.

In the case of Spanner, it chooses consistency over availability during network partitions, and consistency over low latency in the absence of partitions.

1: http://cs-www.cs.yale.edu/homes/dna/papers/abadi-pacelc.pdf

andy_ppp 9 hours ago 3 replies      
> clients can do globally consistent reads across the entire database without locking

How is this possible across data centres? Does it send data everywhere at once?

Seems too good to be true of course but if it works and scales it might be worthwhile just not having to worry about your database scaling? Still I don't believe it ;-)

EDIT: further info...

> Spanner mitigates this by having each member be a Paxos group, thus ensuring each 2PC member is highly available even if some of its Paxos participants are down. Data is divided into groups that form the basic unit of placement and replication.

So it's SQL with Paxos that presumably never gets confused, but during a partition will presumably not be consistent.

tamalsaha001 7 hours ago 0 replies      
One thing to note is Spanner's transactions are different compared to what you get with a traditional RDBMS. See https://cloud.google.com/spanner/docs/transactions#ro_transa...

An example is that the rows you get back from a query like "select * from T where x=a" can't be part of a RW transaction. I believe this is because they don't have a timestamp associated with them. So you have to re-read those rows via primary key inside a RW transaction to update them. This can be a surprise if you are coming from a traditional RDBMS background. If you are thinking about porting your app from MySQL/PostgreSQL to Spanner, it will be more than just updating query syntax.

Disclaimer: I used F1 (built on top of Spanner, https://research.google.com/pubs/pub41344.html) a few years ago.
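The re-read-by-primary-key pattern described above might look something like this sketch. The `Database` and transaction classes here are hypothetical stand-ins for illustration, not the real Cloud Spanner client API:

```python
class Database:
    """Hypothetical stand-in for a Spanner-like store."""
    def __init__(self):
        self.rows = {1: {"id": 1, "x": "a"}, 2: {"id": 2, "x": "b"}}

    def snapshot_query(self, predicate):
        # Read-only query: results carry no RW-transaction context,
        # so they can't be updated directly.
        return [r["id"] for r in self.rows.values() if predicate(r)]

    def run_in_rw_transaction(self, fn):
        fn(self.rows)  # trivially "transactional" for this sketch

db = Database()

# Step 1: find candidate keys with a cheap, lock-free snapshot read.
ids = db.snapshot_query(lambda r: r["x"] == "a")

# Step 2: re-read those rows by primary key *inside* a RW transaction
# before mutating them, as the comment says ported code must do.
def update(rows):
    for pk in ids:
        row = rows[pk]            # re-read inside the transaction
        row["x"] = row["x"].upper()

db.run_in_rw_transaction(update)
assert db.rows[1]["x"] == "A" and db.rows[2]["x"] == "b"
```

The surprise for MySQL/PostgreSQL users is step 2: in a traditional RDBMS an `UPDATE ... WHERE x = 'a'` does both steps in one statement.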

nodesocket 6 hours ago 0 replies      
Related, I wrote a blog post on the network latency between Google Compute Engine zones and regions. I'm assuming Cloud Spanner will still have these latencies once multi-region is deployed. Cross-zone latency on GCE is very good though.


dhd415 9 hours ago 1 reply      
Given that CockroachDB is based on Spanner and F1, this DBaaS sounds like it will compete directly with them.
wcdolphin 6 hours ago 1 reply      
Is JSON data type support in the works? Seems to be a very commonly requested feature these days.
kennethmac2000 8 hours ago 1 reply      
Looks cool, but the pricing seems a bit non-cloud-native (or at least non-GCP-native).

"You are charged each hour for the maximum number of nodes that exist during that hour."

We've been educated by Google to consider per-minute, per-instance/node billing normal - and presumably all the arguments about why this is the right, pro-customer way to price GCE apply equally to Cloud Spanner.

danols 1 hour ago 0 replies      
Interesting but without INSERT and UPDATE it just isn't worth it for me. When can we expect it to handle data manipulation language (DML) statements?
mankash666 7 hours ago 0 replies      
While the product is compelling (an ACID-compliant, horizontally scaling DB), it does seem expensive.

If you use 2 nodes, Cost = (2 × $0.90) × 24 × 31 ≈ $1,340/month, not accounting for storage and network charges.
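Sanity-checking that arithmetic (compute cost only; storage and network are billed separately at the rates on the pricing page):

```python
def spanner_compute_cost(nodes, hours, rate_per_node_hour=0.90):
    """Compute-only cost at the announced $0.90/node/hour rate."""
    return nodes * hours * rate_per_node_hour

# Two nodes running continuously for a 31-day month:
monthly = spanner_compute_cost(nodes=2, hours=24 * 31)
assert round(monthly) == 1339   # ~ $1,340/month, close to the figure above
```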

pagade 9 hours ago 3 replies      
What is TrueTime really? Are their distributed systems 'sharing a global clock'?
executive 9 hours ago 1 reply      
Does it support spatial objects / can it replace PostGIS?
BinaryIdiot 8 hours ago 1 reply      
Oh this looks really compelling! Though I'm guessing this is targeted to companies? I'd love to use this for some personal projects but the pricing seems really high. Am I reading it right that a single node being used at least a tiny bit every hour is about $670 a month?

Maybe I'm misunderstanding how the pricing works here. Any clarification would be highly welcomed :)

nodesocket 6 hours ago 2 replies      
So does Cloud Spanner replace the existing Google Cloud SQL offering [1]? What are the pros/cons of each?

[1] https://cloud.google.com/sql/

rdtsc 9 hours ago 2 replies      
> This leads to three kinds of systems: CA, CP and AP,

What is a distributed system that is CA? Can you build a distributed system which will never have a partition?

sandGorgon 7 hours ago 2 replies      
> If you have a MySQL or PostgreSQL system that's bursting at the seams

PostgreSQL? How does this work for people migrating from traditional SQL databases, who typically use an ORM? How would this fit in with, say, Rails or SQLAlchemy?

tdrd 9 hours ago 2 replies      
Doesn't seem possible to use this yet. No client libraries and no samples: https://cloud.google.com/spanner/docs/tutorials

Have they documented the wire protocol? I couldn't find it.

rattray 9 hours ago 2 replies      
Very interesting. How does this pricing compare to AWS Aurora? https://aws.amazon.com/rds/aurora/pricing/
jallriddle 10 hours ago 1 reply      
Is this similar to AWS Aurora, or something completely different?
esseti 4 hours ago 0 replies      
Does anyone know if it works with Django, or a way to make it work? It should be a matter of a connector, no?
DivineTraube 7 hours ago 2 replies      
"What if you could have a fully managed database service that's consistent, scales horizontally across data centers and speaks SQL?"

Looks like Google forgot to mention one central requirement: latency.

This is a hosted version of Spanner and F1. Since both systems are published, we know a lot about their trade-offs:

Spanner (see OSDI'12 and TODS'13 papers) evolved from the observation that Megastore's guarantees, though useful, come at a performance penalty that is prohibitive for some applications. Spanner is a multi-version database system that, unlike Megastore (the system behind the Google Cloud Datastore), provides general-purpose transactions. The authors argue: "We believe it is better to have application programmers deal with performance problems due to overuse of transactions as bottlenecks arise, rather than always coding around the lack of transactions." Spanner automatically groups data into partitions (tablets) that are synchronously replicated across sites via Paxos and stored in Colossus, the successor of the Google File System (GFS). Transactions in Spanner are based on two-phase locking (2PL) and two-phase commits (2PC) executed over the leaders for each partition involved in the transaction. In order for transactions to be serialized according to their global commit times, Spanner introduces TrueTime, an API for high-precision timestamps with uncertainty bounds based on atomic clocks and GPS. Each transaction is assigned a commit timestamp from TrueTime and, using the uncertainty bounds, the leader can wait until the transaction is guaranteed to be visible at all sites before releasing locks. This also enables efficient read-only transactions that can read a consistent snapshot for a certain timestamp across all data centers without any locking.
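That commit-wait rule can be sketched in a few lines. This is a toy model with invented numbers, not the real TrueTime API: `now()` returns an uncertainty interval, the commit timestamp is taken from its upper bound, and the leader holds locks until the lower bound has passed it.

```python
class TrueTime:
    """Toy clock with a bounded uncertainty of +/- epsilon milliseconds."""
    def __init__(self, epsilon_ms=4.0):
        self.epsilon = epsilon_ms
        self.clock = 1000.0  # fake wall clock, in ms

    def now(self):
        # Returns the interval [earliest, latest] containing true time.
        return (self.clock - self.epsilon, self.clock + self.epsilon)

    def advance(self, ms):
        self.clock += ms

tt = TrueTime()

# Assign the commit timestamp at the top of the uncertainty interval.
_, s_commit = tt.now()

# Commit wait: hold locks until even the *earliest* possible current
# time exceeds s_commit, so every node's clock has certainly passed it.
while tt.now()[0] <= s_commit:
    tt.advance(0.5)

earliest, _ = tt.now()
assert earliest > s_commit           # safe to release locks
# The wait is bounded by roughly 2 * epsilon of simulated time.
assert tt.clock - 1000.0 <= 2 * tt.epsilon + 0.5
```

The design choice this illustrates: the smaller the clock uncertainty epsilon, the shorter the commit wait, which is why Spanner invests in GPS and atomic clocks rather than relying on NTP.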

F1 (see VLDB'13 paper) builds on Spanner to support SQL-based access for Google's advertising business. To this end, F1 introduces a hierarchical schema based on Protobuf, a rich data encoding format similar to Avro and Thrift. To support both OLTP and OLAP queries, it uses Spanner's abstractions to provide consistent indexing. A lazy protocol for schema changes allows non-blocking schema evolution. Besides pessimistic Spanner transactions, F1 supports optimistic transactions. Each row bears a version timestamp that is used at commit time to perform a short-lived pessimistic transaction to validate the transaction's read set. Optimistic transactions in F1 suffer from the abort rate problem of optimistic concurrency control, as the read phase is latency-bound and the commit requires slow, distributed Spanner transactions, increasing the vulnerability window for potential conflicts.
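The validation step of those optimistic transactions might be sketched like this. This is an illustrative model of optimistic concurrency control with version timestamps, not F1's actual implementation:

```python
# A single-row "store" where each row carries a version timestamp.
store = {"k1": {"value": 10, "version": 1}}

def optimistic_txn(read_set, write_fn):
    # Read phase: remember the version of everything we read.
    observed = {k: store[k]["version"] for k in read_set}
    new_values = write_fn({k: store[k]["value"] for k in read_set})
    # Validation phase: abort if any read row changed underneath us.
    for k, v in observed.items():
        if store[k]["version"] != v:
            return "abort"
    # Write phase: install new values with bumped versions.
    for k, value in new_values.items():
        store[k] = {"value": value, "version": observed[k] + 1}
    return "commit"

# An uncontended transaction commits.
assert optimistic_txn(["k1"], lambda r: {"k1": r["k1"] + 1}) == "commit"
assert store["k1"]["value"] == 11

# A concurrent writer bumps the version mid-flight; validation fails.
def racy(reads):
    store["k1"]["version"] += 1  # simulated interleaved writer
    return {"k1": 0}
assert optimistic_txn(["k1"], racy) == "abort"
```

The abort-rate problem the comment mentions falls out of this structure: the longer the gap between the read phase and validation (and the slower the commit), the more likely some version changes in between.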

While Spanner and F1 are highly influential system designs, they do come at a cost Google does not mention in its marketing: high latency. Consistent geo-replication is expensive even for single operations. Both optimistic and pessimistic transactions increase these latencies even further.

It will be very interesting to see first benchmarks. My guess is that operation latencies will be in the order of 80-120ms and therefore much slower than what can be achieved on database clusters distributed only over local replicas.

crooked-v 2 hours ago 0 replies      

How long until it gets shut down with a month's notice?

darkerside 10 hours ago 3 replies      
> Unlike most wide-area networks, and especially the public internet, Google controls the entire network and thus can ensure redundancy of hardware and paths, and can also control upgrades and operations in general

I know this is a single system, but I'll still say it. This seems like another step in a scary trend for our internet.

gigatexal 9 hours ago 1 reply      
The SQL syntax reference looks similar to Postgres'.
kozikow 8 hours ago 0 replies      
SqlAlchemy engine please :) ?
tajen 5 hours ago 0 replies      
> $.90 per node per hour

That makes ~$700 per month. Is this the minimum, or can we have 0 nodes when the lambda is idle?

koolba 10 hours ago 14 replies      
> Today, we're excited to announce the public beta for Cloud Spanner, a globally distributed relational database service that lets customers have their cake and eat it too: ACID transactions and SQL semantics, without giving up horizontal scaling and high availability.

This is a bold claim. What do they know about the CAP theorem that I don't?

Separately, (emphasis mine):

> If you have a MySQL or PostgreSQL system that's bursting at the seams, or are struggling with hand-rolled transactions on top of an eventually-consistent database, Cloud Spanner could be the solution you're looking for. Visit the Cloud Spanner page to learn more and get started building applications on our next-generation database service.

From the rest of the article it seems like the wire protocol for accessing it is MySQL. I wonder if they mean to add a PostgreSQL compatibility layer at some point.

api 9 hours ago 1 reply      
Given the CAP theorem I wonder what trade-offs they make and how much visibility they give you into these trade-offs.

In any case this is much better than Amazon's offerings... when they actually ship it. :)

Avloss 4 hours ago 0 replies      
Amazing! But why does this feel like such a déjà vu all over again.. (surely I'm missing something).. They've spent 5 years telling us that we just CAN'T scale SQL.. Now they'll tell us that actually.. they've figured it out! :)
kozak 10 hours ago 0 replies      
I wonder how many people will get a seizure from that red-blue blinking rectangle in the video :(

Upd: Downvoting this warning will only increase that number.

williamle8300 8 hours ago 0 replies      
I see there's "data layer encryption" but the data is still readable by Google. Why would anyone want to keep feeding the Google beast with more data?

Software is about separating concerns, and decentralizing authority. Responsible engineers shouldn't be using this service.

theptip 7 hours ago 1 reply      
> Does this mean that Spanner is a CA system as defined by CAP? The short answer is no technically, but yes in effect and its users can and do assume CA.

It's somewhat ironic that Brewer, the original author of the CAP theorem, is making this sort of marketing-led bending of the CAP theorem terminology. I think what he really should be saying is something in more nuanced language like this: https://martin.kleppmann.com/2015/05/11/please-stop-calling-...

But perhaps Google's marketing department needed something in the more popular "CP or AP?" terminology. I don't see what would be wrong with "CP with extremely high availability" though.

It's certainly wacky to be claiming that a system is "CA", since as the post admits it's technically false; to me this makes it clear that CP vs. AP (vs. CA now?) does not convey enough information. I'd prefer "a linearizably-consistent data store, with ACID semantics, with a 99.999% uptime SLA". Not as snappy as "CA" (I will never have a career in marketing I suppose), but it makes the technical claims more clear.
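For intuition on what such an SLA claims, "five nines" allows only a few minutes of downtime per year:

```python
def max_downtime_minutes_per_year(availability):
    """Allowed downtime per year for e.g. availability=0.99999."""
    return (1 - availability) * 365 * 24 * 60

five_nines = max_downtime_minutes_per_year(0.99999)
assert 5.2 < five_nines < 5.3   # roughly 5.26 minutes per year
```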

Man jailed 16 months, and counting, for refusing to decrypt hard drives arstechnica.com
660 points by doener  2 days ago   476 comments top 46
realo 2 days ago 9 replies      
I have a question...

Suppose the suspect Alice only has a portion of the key. Someone else (Bob...) has the remaining key bits.

Alice is busted, and 'compelled to give the key', and DOES provide her portion of the key.

Bob is never found.

Then Alice would be indefinitely imprisoned, even if she would have actually complied with the court order.

It seems unethical, to me.

Bonus question: Alice pretends that Bob exists, but actually he does not, but police cannot prove that. What then?

A possible answer to the first question: Alice is not compelled to provide the key. She is compelled to decrypt the drive. Obviously she can't do that without Bob. Alice is screwed and will spend the rest of her life in prison.

Seems harsh.
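The key split described above is a real construction: XOR secret sharing. A minimal sketch (each share alone is statistically independent of the key, so Alice genuinely cannot decrypt without Bob's half):

```python
import secrets

def split_key(key: bytes):
    """Split a key into two shares; either share alone reveals nothing."""
    alice = secrets.token_bytes(len(key))
    bob = bytes(a ^ k for a, k in zip(alice, key))
    return alice, bob

def combine(alice: bytes, bob: bytes) -> bytes:
    """XOR the shares back together to recover the key."""
    return bytes(a ^ b for a, b in zip(alice, bob))

key = b"correct horse battery staple"
alice_share, bob_share = split_key(key)

assert combine(alice_share, bob_share) == key
# Each share is uniformly random on its own (equality with the key is
# overwhelmingly unlikely, probability 2^-224 for a 28-byte key).
assert alice_share != key and bob_share != key
```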

externalreality 2 days ago 5 replies      
Not sure what the man's crime is here. Does he even remember his keys after sixteen months in the slammer? I don't even remember my Gmail password after 16 days of vacation. Basically, like the article says, it's like not opening a safe for an inquisitor: you are damned if you do, you are damned if you don't. Encryption is nothing new, people; you are just putting your data in a safe.

We have a tendency to misconstrue, willfully misinterpret, or altogether ignore the law when it comes to prosecuting individuals who we believe to be standing on much lower moral ground. We do so because we want so badly to punish the accused that we are willing to reduce or eliminate the greater good that some privacy laws aim to provide (e.g. Trump's silly travel ban, which is based on his hatred of Muslims built upon imaginary news stories and personal exaggerations of particular recent events: all laws out the window).

AckSyn 2 days ago 7 replies      
He shouldn't have to decrypt his hard drives, and I support his decisions.

The problems with this are numerous.

First of all, no one has any duty to provide the police with evidence; that is a Fifth Amendment protection. It's not a "right" the police have at all.

Imprisoning someone for refusing to waive their constitutional rights is absurd.

They have no evidence to hold him period.

spaceboy 2 days ago 2 replies      
I thought the U.S. had better key disclosure law[1] than other countries? Personally I would rather not incriminate myself by revealing a key, no matter how draconian and lengthy the sentencing was. Why, you ask? Well, I consider all my own personal data akin to an extension of my own mind, and revealing a key is like slicing off a thin part of my brain and picking through its contents. Never a gentlemanly thing to do in any circumstance.

In terms of being stopped and searched when traveling, I just carry a TailsOS bootable live USB. My laptop doesn't have a hard-drive and boots entirely from my TailsOS USB stick. I did not enable any persistent storage and any bookmarks I need to remember, I simply remember them by rote, like in that movie The Book of Eli[2]. My threat model is such that I don't want anybody knowing my business when traveling. The intrusiveness should only go so far as one question, like "Business or Pleasure?" and that's all.

[1] https://en.wikipedia.org/wiki/Key_disclosure_law#United_Stat...

[2] https://en.wikipedia.org/wiki/The_Book_of_Eli

godelski 2 days ago 4 replies      
From what I understand our legal system was designed to fail "open". Or rather that we are willing to let a guilty person go free rather than an innocent person go to jail.

I know everyone wants to have a perfect justice system but we have to ALSO decide which direction we would like it to fail until that time comes (never). In essence cases like this are more about this question. When the system fails, which direction do we want it to fail in?

INTPenis 1 day ago 0 replies      
Speaking from the experience of a one-month vacation, I'm not sure I'd be able to decrypt after 16 months of not touching a keyboard.

My most important passwords (passphrases for gpg used by password managers and luks) are in my head and muscle memory.

When I update passwords I tend to have them written down until I've typed them enough times.

So after a month's vacation I often struggle to remember my work password, for example. While using phrases makes all this easier these days, 16 months is a long time to presumably spend without your keyboard.

ust 2 days ago 1 reply      
Professor Orin Kerr has written about this exact case extensively, and provides good insight into all the legal aspects. I think it is well worth a read, especially the part about the 'foregone conclusion'.


payne92 2 days ago 1 reply      
If he were being forced to divulge the physical coordinates of a hidden thumb drive, it's highly likely he'd get Fifth Amendment protection.

But being forced to divulge the virtual coordinates of his hidden data is somehow different...

keithnz 2 days ago 1 reply      
Seems to me the 5th should protect him. Question is, is that a good thing?

Should law enforcement have a right to search through court orders? In a world of unbreakable locks it seems very hard to get justice unless the law can do proper searches. If we end up in a world of unbreakable encryption everywhere, it seems to me criminal activity will have huge benefits. If we can't control crime, we can't have a just society. We can't protect an individual's rights if they are undermined by criminals. Of course, it's also hard to protect an individual's rights if the state has too much power. But somewhere we need pragmatic compromises.

stillsut 1 day ago 1 reply      
I really hate to be writing this but it seems like the only solution...

Doesn't this whole situation and the threat thereof go away for 99.9% of the population if we decriminalize possession / "viewing" of child pornography? [Note: you could still be severely prosecuted for making it]

There doesn't appear to be anything else in the digital realm that can get you in such legal trouble. The only other things I could think of are national defense espionage or rogue WMD plans. And on those counts, it would be very difficult to put together a plausible frame job against 99.9% of people. Sure, the 0.1% with security clearance could be framed here, but as far as I understand, that's a personal decision and risk each person gets to make for themselves.

If you deny the prosecution the ability to use reasonable suspicion of CP to search, or compel a decryption of your digital files, it's going to be a long time before another case like this.

danbruc 2 days ago 3 replies      
This is probably a lot less black and white than it might seem at first. If there is sufficient evidence, one can obtain a search warrant, and this forces you to possibly act against your own best interest by allowing the police to search your home. On the other hand, you can usually not be forced to testify against yourself.

So this becomes the question where decrypting a hard drive lies on this spectrum. Is it more like testifying against yourself or is it more like allowing the police to search your home? Assuming one agrees with the way testifying against yourself and searching your home is currently handled by the law.

hysan 1 day ago 1 reply      
What happens if the drives develop bit rot over those 16 months preventing them from ever being decrypted? Based on the wording of what he is in contempt of, it sounds like he would sit in jail until death. To me, it sounds like the prosecution is trying to play a word game to get around 5th amendment protections.
downandout 1 day ago 1 reply      
I'm curious why he doesn't just claim that he forgot the key. There's no way to prove conclusively that he's lying, and a judge cannot jail someone for disobeying a court order that they have no way of obeying. If he's told them he knows it and just won't give it to them, then this bridge is probably burned, but it's probably his only shot. Absent a ruling from the Supreme Court on this issue, they can and will hold him until he complies, dies, or has already served the maximum possible sentence for the crimes he is suspected of committing.
krick 1 day ago 0 replies      
Just yesterday [0] someone (multiple people, actually) was claiming fingerprint locking on phones is unsafe, based on the fact that the 5th Amendment doesn't protect your fingerprints, but does protect your right to not reveal your password.

[0] - https://news.ycombinator.com/item?id=13622684

dwaltrip 2 days ago 1 reply      
I have a related, seemingly analogous situation that I'm curious about.

If a suspect might have physically buried important evidence hundreds of miles away in the middle of nowhere, such that it is effectively impossible to find, can the courts "compel" the suspect to give up any knowledge of this, in a similar fashion as the man in the article?

planetjones 2 days ago 9 replies      
As expected on HN I am not surprised to see people defending one's right to privacy and encryption. However, what's the solution then ? If all the "bad guys" who distribute illegal material do so encrypted volumes and refuse to give up the decryption key then what do we do ? It's a different world now; the police can't just take a drill out and open the safe.
Zelmor 1 day ago 0 replies      
Here we have an example of a forceful, totalitarian state. Where they cannot keep up their laws due to technological progress, they threaten you with violence and boredom (boredom as in sitting in a cage as long as you refuse to comply). In communist era Eastern Europe, secret police used to send in beautiful prostitutes and/or agents to deal with people held in captivity. Get a man to erect his penis. When that happened, strong men stormed the room, grabbed the victim, and inserted a glass rod into his penis. Then smashed it. I would not like to go into details on what they did to women. You can look that up.

Land of the Free - as long as you do not encrypt your shit, that is

EGreg 2 days ago 1 reply      
This is a good reason to have TrueCrypt hidden volumes and other forms of steganography. You can decrypt and still have more stuff that looks like random junk. No one can be sure you decrypted everything.

The best idea however is to have sensitive stuff stored encrypted on freenet, and log in using incognito browser sessions.

sologoub 2 days ago 1 reply      
Modern encryption isn't that much different in basic concept from a cipher: it takes data in readable form and makes it unreadable.

In Apple/Gov dispute on the San Bernardino iPhone case, Gov brought up the Burr case from 1807, arguing that a 3rd party could be compelled to decipher the contents, provided there was no self-incrimination (Apple argued Burr did not apply): http://www.macworld.com/article/3046095/legal/burrs-cipher-s...

Does anyone know if there has been an attempt to equate decryption to deciphering in US courts?

jwatte 2 days ago 1 reply      
"My hard drive contains only the finest random bytes. There is nothing to decrypt. Please prosecute me for lying if you can prove otherwise."
orbitingpluto 2 days ago 0 replies      
I have problems remembering a password if I don't use it for a month or two. After 16 months, I'd probably have no idea.
ommunist 2 days ago 0 replies      
Humanity is really close to producing tech that quite literally reads minds. Since minds are not easily encrypted, what prosecution awaits from judges so obsessed with illicit topics, like the one discussed in this article? Either Orwellian "thoughtcrime" becomes reality, or we change the legal system. For the moment, it is obvious that the American legal system and basic information technology are quite different worlds.
geuis 2 days ago 0 replies      
Laws protecting the worst of us protect the rest and best of us.
data4science 12 hours ago 0 replies      
So much for my plan to:

1. Reformat a hard drive with DBAN

2. Name the new drive: the_art_of_the_deal_2_from_future

3. Start a LLC named Marty McFly

4. Mail said drive from said entity to the White House

GTP 1 day ago 0 replies      
I don't reside in the USA and I don't know US law. After reading a certain number of comments, it seems that the heart of the discussion is the Fifth Amendment. I agree that if the Fifth Amendment protects you from giving up the combination of a safe, it logically follows that the same applies to encryption passwords. But I don't agree with the root of this argument: if a court has a valid reason to think that a safe contains evidence that is relevant to the lawsuit, why should it make any difference whether the safe is locked with a physical key or with a combination? Could somebody who has a different opinion please explain their point to me?
victor9000 1 day ago 1 reply      
It seems like the state is holding the accused captive for failure to produce unencrypted hard drives. But how can the state prove that the accused has the ability to fulfill this request?
upofadown 2 days ago 0 replies      
>"...Instead, the order requires only that [Rawls] produce his computer and hard drives in an unencrypted state."

This language makes it sound like the government is specifically asking the defendant to take an affirmative action to produce the evidence required to incriminate himself. That would be the same thing as issuing an order with the intent to compel an accused murderer to tell the police where the body is. I really don't understand how a court could issue an order based on such an argument.

Zikes 2 days ago 0 replies      
Is the ACLU not at all involved in this case? Shouldn't they be?
ioquatix 2 days ago 0 replies      
Good to know that File Vault actually works.
ohstopitu 1 day ago 1 reply      
We seriously need self destructing time based keys.

Once a device has not had its password entered within a time window defined by the user, all of its data is wiped.

It goes without saying that this should be off by default but definitely a good feature.
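A toy software sketch of such a self-destructing key (hypothetical class names; a real device would need to enforce this in tamper-resistant hardware, since software alone can be bypassed by imaging the disk first):

```python
import time

class SelfDestructingKey:
    """Wipe the key material if it isn't unlocked within the window."""
    def __init__(self, key: bytes, window_seconds: float):
        self._key = key
        self.window = window_seconds
        self.last_unlock = time.monotonic()

    def unlock(self, password_ok: bool):
        # Check the dead-man timer before anything else.
        if time.monotonic() - self.last_unlock > self.window:
            self._key = None  # window expired: wipe
        if self._key is None or not password_ok:
            return None
        self.last_unlock = time.monotonic()
        return self._key

k = SelfDestructingKey(b"\x01" * 32, window_seconds=0.05)
assert k.unlock(True) is not None   # used promptly: key survives
time.sleep(0.1)                     # user absent past the window
assert k.unlock(True) is None       # key has been wiped
```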

Esau 2 days ago 0 replies      
To me, this is incredibly perverse: no one should be forced to assist the government in their own prosecution.
TheBobinator 2 days ago 2 replies      
There's a burden of proof here the court needs to meet before holding the defendant in contempt.

For starters, it's reasonable to assume the defendant owns the hard drives in question if they're in their possession, regardless of their testimony otherwise.

Given that piece of information:

1: The court has to prove the disks are actually encrypted. It is not merely enough for the cops to pick up the disks, see some garbled data, and determine it's encrypted. Now, if you're using a file-level encryption protocol that leaves enough unencrypted stuff on disk that you can identify the filesystem and the file encryption, then you've met the requirement. If you are using full disk encryption, especially something designed to hide the data and filesystem from anything but a forensics package, and even a forensics package sees garble, then there's effectively no way to tell the disk is actually encrypted, or with what.


2: They have to prove the defendant, at some point, had the encryption key. That requires proving the method of encryption and key generation. With a door lock, you know there's a key. With a safe combo, you know the combo could be 12 digits and broken up between a dozen people. With an encryption system, any combination of things you know, are, or have could be part of the key. Compelling the defendant to reveal all of that is absolutely a violation of their 5th amendment rights.


Let's assume we're using Windows EFS. Let's further assume analysis of EFS indicates the user named "YOU" owns the account. Furthermore, let's assume there are files within the end user's folder whose modified dates indicate they had logged in the day prior to the search warrant being served.

You give the court the "I don't remember" line.

In that case, forgetting the password is destruction of evidence, not contempt. If a key escrow is used and they can prove it, same deal: destruction of evidence.
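The claim in (1), that well-done full-disk encryption is indistinguishable from random garble, can be illustrated with a naive entropy check (a toy sketch only; real forensics tooling does far more than this):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; ~8.0 looks like random data."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random_blob = os.urandom(1 << 16)        # stand-in for FDE ciphertext
text_blob = b"plaintext " * (1 << 12)    # ordinary unencrypted data

print(round(shannon_entropy(random_blob), 2))  # close to 8.0
print(round(shannon_entropy(text_blob), 2))    # much lower
```

High entropy alone proves nothing: wiped disks, compressed archives, and ciphertext all score near 8.0, which is exactly the evidentiary problem described above.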

Elijen 2 days ago 4 replies      
"I forgot the password"

Problem solved.

megous 2 days ago 0 replies      
Why don't they prosecute him based on what they already have? They must have something strong enough to justify holding him for more than a year already, so wtf?
smokedoutraider 1 day ago 1 reply      
Are we just ignoring that he is a pedo, and likely has incriminating garbage on his laptop?
mtgx 1 day ago 1 reply      
Remember when Obama passed the law that allows for indefinite detention without charge? And how his supporters said "Yeah, but it's not like he would use it"?

I think it's already used all the time across the country now. A law that "is not supposed to be used" should not exist. If it exists, it will be used. I'm sure this is some kind of Murphy's law or something.

yellowapple 1 day ago 0 replies      
"The authorities say it's a 'foregone conclusion' that illicit porn is on those drives. But they cannot know for sure unless Rawls hands them the alleged evidence that is encrypted with Apple's standard FileVault software."

Then it ain't a "foregone conclusion". If it was, they wouldn't need him to unlock the drives; they could prosecute him with the evidence they used to arrive upon that "foregone conclusion".

cdevs 2 days ago 0 replies      
We should jail OJ until he tells us he did it?
pbhjpbhj 1 day ago 1 reply      
guard-of-terra 2 days ago 3 replies      
The people who do this now will become old and helpless too, and it is our purpose to forget nothing and make sure their final years are thoroughly ruined.
BigChiefSmokem 2 days ago 4 replies      
cdevs 2 days ago 0 replies      
We need Jon Hamm in that Black Mirror Christmas episode..
intrasight 2 days ago 1 reply      
Why don't they let him out on bail until the court is able to hear the case?
BigChiefSmokem 2 days ago 2 replies      
Let him go then.
dleslie 2 days ago 3 replies      
Doesn't America have some rule against being compelled to provide evidence of your guilt?
m-j-fox 1 day ago 0 replies      
My idea is to just hand over the password, which is 'asdf'. If the password fails, it's because whoever is in possession of the computer has already logged on and changed the password -- no longer my responsibility.
Show HN: A guide to all HTML5 elements and attributes htmlreference.io
609 points by bbx  16 hours ago   85 comments top 39
shpx 7 hours ago 0 replies      

Same thing but guaranteed to be up to date and (more) complete.

For example, htmlreference.io's page for <input> doesn't mention the autocomplete attribute. MDN lists all its possible values.



callahad 12 hours ago 5 replies      
I believe your kbd example is incorrect. You suggest

> To save, press <kbd>Ctrl + S</kbd>.

But the spec (both W3C and WHATWG) suggests that individual keys should be nested inside an outer <kbd> tag: "When the kbd element is nested inside another kbd element, it represents an actual key or other single unit of input as appropriate for the input mechanism."

Thus, the example should be:

> To save, press <kbd><kbd>Ctrl</kbd> + <kbd>S</kbd></kbd>.

Cite: https://w3c.github.io/html/textlevel-semantics.html#the-kbd-...

On the face of it, this seems ridiculous. It's too verbose, the tag name is misleading, and if you actually use the correct markup on GitHub or StackOverflow, it will render incorrectly because both sites assume the standalone <kbd> element represents physical keyboard buttons.

On the other hand, what's the value in semantic markup if we don't adhere to its semantics?

Practically speaking, I would be a happier person today if I hadn't read that part of the spec, and instead persisted on in blissful ignorance of the element's intended semantics. Thanks, specs.

blauditore 10 hours ago 2 replies      
How is this updated (manually or automatically from official specs)?

Call me paranoid, but I see this diverging from actual specs, then people googling for "html reference" finding it and thinking it is something official. The result would be another W3Schools disaster[1].

In my opinion, the official W3 specification pages are not that bad, and alternatively there's the simpler MDN with strong community support (thus lower risk of deprecation).

[1]: http://www.w3fools.com/

dagw 15 hours ago 1 reply      
Really cool. One feature request: when a description of a tag refers to another tag, those tags should be links.

For example the description for <li> is:

 Defines a list item within an ordered list <ol> or unordered list <ul>.

Here I should be able to click on <ol> and <ul> to get information about those tags.

falsedan 14 hours ago 2 replies      
It's missing the links to context & to the spec, which was the killer feature of the old WDG HTML Reference[0].

[0]: http://www.htmlhelp.com/reference/html40/alist.html

adontz 15 hours ago 0 replies      
Links to http://caniuse.com/#search= would be useful. Also, dt, li, option, td, th, tr are visible regardless of the finder checkboxes.
edent 15 hours ago 1 reply      
That's really well laid out. Less detail than MDN, but that site can be overkill.

Two small pieces of feedback. HTTPS support would be good. Also, when I scroll the list of elements and then click on one, I'm taken back to the top of the list - I'd like to stay where I am.

darekkay 10 hours ago 1 reply      
A more general/comprehensive API documentation tool (which can also be used offline) is DevDocs[1]

[1] http://devdocs.io

codingdave 12 hours ago 0 replies      
The checkbox filters at the top are somewhat un-intuitive. For example, If I uncheck everything but experimental, I want to see all experimental... but what I get is all experimental that are not block, inline, etc. I mean, as a developer, I see how those filters work... but as an end user, that isn't how I want them to work.
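The mismatch described here comes down to AND-vs-OR filter semantics, and can be sketched in a few lines (tag data and function names invented for illustration; this is a guess at the site's logic, not its actual code):

```python
# Hypothetical element-to-tags mapping, loosely mirroring the site.
elements = {
    "picture": {"experimental"},
    "div": {"block"},
    "span": {"inline"},
    "br": {"inline", "self-closing"},
}

def filter_and(selected: set) -> set:
    """Apparent current behaviour: element must carry every selected tag."""
    return {e for e, tags in elements.items() if selected <= tags}

def filter_or(selected: set) -> set:
    """What users expect: element matches any selected tag."""
    return {e for e, tags in elements.items() if selected & tags}

print(sorted(filter_and({"inline", "self-closing"})))  # only 'br'
print(sorted(filter_or({"inline", "self-closing"})))   # 'br' and 'span'
```

With AND semantics, unchecking everything but "experimental" hides any element that also carries an unchecked category, which matches the behaviour complained about.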
ahstro 14 hours ago 2 replies      
Nice, but there seems to be something wrong with the tagging. I unchecked all tags except `experimental` and ended up with seven results, only one of which is actually experimental (`picture`), the rest being _pretty_ cemented (`dt`, `li`, `option`, `td`, `th`, and `tr`). It also seems to leave out some other experimental tags, like `wbr` and `slot`, that are on the site.
etimberg 12 hours ago 0 replies      
The canvas page [1] has incorrect defaults listed for the width. It states that the default is 100 but it is actually 300 [2].

1. http://htmlreference.io/element/canvas/
2. https://developer.mozilla.org/en-US/docs/Web/HTML/Element/ca...

mouzogu 10 hours ago 1 reply      
from a UX perspective i really like this. like the way it's laid out and presented - found it easy to scan.

I know a lot of people big up MDN vs W3Schools and all their arguments are basically correct but i find it (mdn) really ugly and hard to read. i often find myself going to w3schools to just copy a snippet or get a 1 line description of what i need.

mdn often feels more like a technical spec as opposed to a guide, which it kind of is i suppose.

great work on the layout!

alexgrcs 13 hours ago 0 replies      
The <address> example is wrong. <address> defines contact information, but it's not appropriate for all postal and e-mail addresses. http://html5doctor.com/the-address-element/
tomkin 4 hours ago 0 replies      
I love things like this, but I find I have about 30 of these sorts of things bookmarked, and I never think to pull them up when I'm in need; I immediately go to Google and find something like the Mozilla ref @shpx mentioned. It's a real problem I wish there was a solution for, other than of course having a better memory.
badthingfactory 2 hours ago 0 replies      
Every time I look at lists of HTML5 elements, I think about how I really need to get better at writing semantic HTML. Tomorrow I will go to work and create 500 more div tags.
shdon 10 hours ago 0 replies      
You've got the description for the "ins" element the same as for the (admittedly related) "del" element.
xamuel 13 hours ago 1 reply      
This is really great. It would be even greater if it could be used offline, instead of hitting the server every time I choose one of the elements to view.
karol 4 hours ago 1 reply      
Can you help with community-driven website content as well? http://docs.webplatform.org/wiki/html
jean- 12 hours ago 4 replies      
Good stuff. Would be even nicer if it showed information about optionally self-closing tags, which is one of the main reasons I occasionally need to look at the spec. For instance:

> A p element's end tag may be omitted if the p element is immediately followed by an address, article, aside, blockquote, div, dl, fieldset, footer, form, h1, h2, h3, h4, h5, h6, header, hgroup, hr, main, nav, ol, p, pre, section, table, or ul, element, or if there is no more content in the parent element and the parent element is not an a element.

Links to the relevant parts of the official spec would be nice too, e.g. https://www.w3.org/TR/html5/grouping-content.html#the-p-elem...

tenaciousDaniel 14 hours ago 1 reply      
I wouldn't say hgroup is experimental. It was on the standards track and now it's being deprecated.
HNaTTY 9 hours ago 0 replies      
The filter checkboxes are a bit weird, for example elements (such as br and img) which are both self-closing and inline only show up if both self-closing and inline are selected, instead of one or the other. Also there are several elements such as li which are not categorized and show up even if no checkboxes are selected.
teh_klev 14 hours ago 0 replies      
Nice...it would be useful to have a summary of each tag's purpose in the main page list of tags. For tags of three letters or fewer, it's not always obvious which one I'm looking for to achieve a specific task.
xophishox 11 hours ago 0 replies      
Is there something similar to this for javascript?

I really find this format super easy to read, even if there may be slight inconsistencies and nuances to reading it.

0x1d 10 hours ago 0 replies      
This is the cleanest HTML5 reference site I've seen. I like that it has links back to MDN as well.

Is the design completely custom or did you use a template or theme? I'm really struggling with the design side of my side projects.

Also, if you don't mind me asking, how difficult was it to get approved for Carbon Ads?

KingMob 4 hours ago 0 replies      
Nice. Can you add support for <svg>? I know it's not HTML...but it would be handy.
cabalamat 14 hours ago 1 reply      
In general this is pretty good. One mistake that it doesn't make is to use low contrast for everything.

However, the code examples don't look very nice on my box due to the font which comes out as Nimbus Mono L. How about you link to an actual font? (I like Ubuntu Mono, though there are lots of other good programming fonts).

Also with respect to code samples, would black text on a white background be a possibility?

neogodless 14 hours ago 1 reply      
For some reason, if I have self-closing and inline checked, and uncheck inline, meta items appear.
darkhorn 5 hours ago 0 replies      
Keygen is missing https://www.w3schools.com/TAgs/tryit.asp?filename=tryhtml5_k... which is available in Firefox 51.
projektir 12 hours ago 0 replies      
The design is really cute!

I appreciate that the website works well even with all scripts blocked.

criswell 10 hours ago 0 replies      
If you like this, you'll lOoOoOoOve this: http://cssreference.io/
aiaf 12 hours ago 0 replies      
I've only checked the anchor tag, and it is lacking.

It should link to the URI RFC to illustrate all possibly valid syntax for the href attribute. Also doesn't mention 'javascript:'

jeshan25 7 hours ago 0 replies      
really really cool. I've shared it in a facebook group for front end devs that I'm part of.

One minor issue: Your search bar doesn't look like one :) I didn't notice it until my second visit.

ulucs 15 hours ago 0 replies      
quite cool! but even after deselecting all the options, I'm still being shown some tags (select, tr, th, td, dt, li). I think it's because they haven't been tagged with anything
javajosh 2 hours ago 0 replies      
Without compatibility information this is more dangerous than useful. For example, a naive HTML author would use the semantic tags, only to fail in most browsers. There is mention neither of compatibility nor polyfill. Better off with MDN and caniuse.com.
cyborgx7 12 hours ago 1 reply      
Why does it want me to whitelist it in my adblocker when it claims to be free and to always be free?
k__ 13 hours ago 0 replies      
Is there a good list of the most used meta tags out there?
jkochis 12 hours ago 0 replies      
Can you add the 'range' input type?
lisper 14 hours ago 1 reply      
You left out <center> ;-)
SFJulie 3 hours ago 0 replies      
Reading the comments:

- nice brogramming;
- not up to date;
- not referring to the spec (sorry, this is important when you are in trouble);
- well, read @shpx's comment, he is right

Verdict: the more-signal-less-noise policy says don't click the bait.

normally I put a goatse.cx link here or a rickroll

Encrypted email is still a pain incoherency.co.uk
535 points by jstanley  1 day ago   426 comments top 53
tptacek 1 day ago 32 replies      
Encrypted email is pretty much over in 2017.

The emerging consensus among experts is that it's not worth the trouble, or, worse, incapable of doing much more than generating a false sense of security. That's for a bunch of reasons:

* An enormous installed base of clients that won't do encryption, meaning that at best you're attempting to tunnel encrypted messaging over an unencrypted transport.

* A protocol that leaks metadata, including some message content, at the envelope layer.

* Hundreds of millions of users that primarily access messages through browser clients that can't meaningfully implement crypto.

* An archive-always UX that ensures that huge amounts of plaintext are scattered around the Internet by both senders and receivers.

* An unencrypted installed base that ensures encryption will be opt-in for the foreseeable future, meaning that users will routinely reveal plaintext accidentally by, for instance, quoting messages and forgetting to encrypt.

* End user demands for things like search that can only be delivered efficiently at scale by databases of plaintext (most likely at centralized servers).

All these problems are probably surmountable (with enormous, concerted effort). But: why bother? Email is just one of dozens of messaging systems available to Internet users. Better to move sensitive conversations to things like Signal, WhatsApp, or Wire --- the double ratchet construction is designed specifically to make IM-like protocols secure even when conversations are sporadic and last months.

MayeulC 1 day ago 2 replies      
For what it's worth, I used to use encrypted mail some time ago as much as possible, before realising it was fundamentally flawed:

- The key retention is the biggest issue. You need to keep your key around for a long time, probably storing copies of it. This increases the probability of a leak.

- There is no method to revoke a key with 100% assurance that nobody will use or trust it afterwards.

- If a key is broken or leaked, every encrypted message you ever received can be deciphered. And I hope you realise it, or future messages are also at risk.

Those are the three major flaws I can remember right now. The way people usually use it makes them feel safe while they are not necessarily. It's a bit like reusing a complex password on different websites (although far less critical, since the key is asymmetric).

The way I use it now (and I never actually needed it, to be fair), is that if some third party want to send me some confidential information, they request my public key via an open channel, and I then generate a key, unique for this conversation (ideally, it would be done on a per-email basis). Of course, the complex part here is ensuring they receive the proper key (ie, no man in the middle). This can be done by using a side (preferably secure) channel.

Of course, in today's world, encrypted emails are not the best way to communicate privately, in my opinion, but that's another story.

And there are a couple of other issues regarding encrypted email, such as its adoption, its complexity, etc. But none as fundamental.

legulere 1 day ago 5 replies      
I do not get why everyone thinks that encrypted email is GPG.

S/MIME is supported by almost all email clients.

S/MIME is far less of a pain (but still some pain and could be improved).

It has a model of how to verify that keys belong to the right person, that actually works in practice in contrast to GPG where you basically have to verify keys by hand (adversarial CAs are a problem, but probably only for a tiny amount of people).

lmm 1 day ago 1 reply      
A secring is a "secret keyring" that contains a number of private keys. I agree that consistency of "secret" versus "private" would be helpful, but the concepts of a key and a keyring are distinct (and a reasonably effective metaphor IMO - you have a keyring which your keys are on). The extent to which the tools should push you towards having a single key versus rotating is arguable; people criticise GPG for being too hard to use but people (sometimes even the same people!) also attack it for not encouraging key rotation enough. A gitflow-esque helper for setting up a best-practice rotation would be a valuable thing for someone to make (and fundamentally there just isn't enough money in making these tools more usable, unfortunately, which is why I fear it will never happen).

GPG shouldn't need to tell you (prominently) where the files are because you should never need to know that - tools that integrate with GPG should be able to ask GPG where the keys are. The main thing that needs to happen here is for the Enigmail key generation bug to be fixed.

If you're looking for a good GUI mail experience with GPG integration I found KMail much nicer than Thunderbird/Enigmail, for what it's worth.

Gaelan 1 day ago 3 replies      
Ooh, I know this one! I think. Doesn't Apple Mail have this built in? I go to Keychain Access, choose the option to generate a key. Two clicks. Head to Mail, encryption options are there. Now, to import his key. Do some googling on that.

Wait, what? Apple Mail supports S/MIME, not GPG. Competing standards strike again.

If the other person has S/MIME, Apple Mail does have a very easy experience. I can't speak for the merits of either security-wise.

Also, I think this is the sort of thing Keybase is good for. There's a level of indirection pasting into Keybase, but it's pretty easy to set up and (for non-Snowden levels of paranoia) makes it very easy to start sending encrypted mail to somebody else. The new Keybase chat is also an option.

Raed667 1 day ago 1 reply      
I have "taught" encrypted email for many years to different types of people.

I'm surprised to see that most non-technical people that I assisted had a better understanding of these same tools than the author.

Generating the key using Enigmail has always worked for both Windows and Debian/Ubuntu (those are the top OSs people brought), finding a person's key was a pain a few years ago, I love the new UI.

My conclusion: The author probably had his mind set before even starting.

EDIT : Enigmail now allows you to enable encryption/signing by default, this prevents users from sending accidental clear-text emails.

parent5446 1 day ago 1 reply      
This article: what a shitfest.

But seriously, I was expecting some actual discussion about how GPG still isn't easy (or possible for that matter) in modern webmail clients, or even something relating to the usability of common GPG GUIs, but instead just got a guy complaining about how he was pressing enter too fast and missed a dialog box, among other nonsense complaints.

Personally, the GPG CLI acts exactly as I expect it to, being a CLI and all, and I don't expect non-advanced users to use it.

mark_l_watson 1 day ago 1 reply      
Years ago, working at a friend's security company everyone used Apple Mail with GPG. That is the only time anyone insisted on using encrypted email.

Fast forward to the present: I support and like ProtonMail, but I can't talk anyone else into using it. I don't understand why more small companies, wanting to protect their intellectual property, don't use ProtonMail (or something like it).

cpursley 1 day ago 1 reply      
I'm not sure I buy this. Protonmail has a very polished UI that's dead simple enough for technical people and non-technical people alike:


zamalek 1 day ago 0 replies      
This is one of the reasons that disrupting email is one of the most popular bad ventures. It's great 1980's tech. Email is a false vacuum in the field of solutions for communication, a local minimum. We can't improve on it any further - every direction along the gradient of solutions is a worse one.

IM is doing a good job at locating a new local minimum right now.

vasilakisfil 1 day ago 1 reply      
The site does not use HTTPS (TLS) so that public key is completely useless.
sigjuice 1 day ago 1 reply      
Most prominent mail clients have built-in S/MIME support (Outlook, iOS, Thunderbird, Mail.app on macOS). The problem is that there is no easy and free/cheap way to get an S/MIME certificate. My hope is that Let's Encrypt or Keybase or someone will make this easier someday.
tom-jh 1 day ago 1 reply      
It is indeed a pain. That's why I made https://cryptup.org/

For some of my users, using Gmail itself is already challenging.

Yet they are sending around PGP encrypted messages, attachments, etc.

Even my mom is using this. For encryption software, moms count double.

jecxjo 1 day ago 1 reply      
The more this topic comes up, the more I start to wonder if the "difficulty" in email encryption is actually people just being lazy.

We have IM and texting apps like Signal. You install, and if your friends install then you're secure. Most people skip verifying fingerprints, not doing IRL face to face verification. Yes the install process is simple and requires no real work to start encrypting things, but that still doesn't make the process secure. If anything, having these types of security models where you aren't forced to verify the other end continues to breed this lazy mentality. Security is hard and requires all end users to actually put time and thought into what they are doing. We can always make a better mouse trap when it comes to security but that will not change people's minds on how they interact with security when online.
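Fingerprint verification is at least mechanically simple: an OpenPGP v4 fingerprint is SHA-1 over a 0x99 byte, a two-byte length, and the public-key packet body (per RFC 4880). A sketch of that computation using stand-in bytes rather than a real key packet:

```python
import hashlib

def v4_fingerprint(packet_body: bytes) -> str:
    """OpenPGP v4 fingerprint: SHA-1 over 0x99 || 2-byte length || body."""
    prefix = b"\x99" + len(packet_body).to_bytes(2, "big")
    return hashlib.sha1(prefix + packet_body).hexdigest().upper()

# Stand-in bytes only; a real packet holds version, timestamp,
# algorithm ID, and the key's MPIs.
fp = v4_fingerprint(b"\x04" + b"\x00" * 50)
print(" ".join(fp[i:i + 4] for i in range(0, 40, 4)))  # grouped like GPG output
```

The hard part is not computing the fingerprint but getting users to actually compare it over a trustworthy channel, which is the laziness being described here.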

cottsak 1 day ago 0 replies      
Well now there's Signal https://itunes.apple.com/au/app/signal-private-messenger/id8... and Keybase Chat https://keybase.io/blog/keybase-chat

Why do we still need PGP email?

dmbaggett 1 day ago 0 replies      
It's only a pain if you're still trying to use PGP. You can download Inky (http://inky.com) for any platform and use any email account to exchange S/MIME-encrypted email simply by checking a button. We're focused on large enterprises now (because we've found consumers and small business don't care about encrypting their email, or think TLS=encryption) but there's nothing stopping individuals from trying it out. It makes e2e encryption of email using any email account basically invisible. You can send encrypted mail to non-Inky users as well.
guenp 1 day ago 2 replies      
https://gpgtools.org/ works great for Mac.
whyagaindavid 22 hours ago 0 replies      
I want email to stay, at least for work. Imagine your employer shifts to WhatsApp or Signal and asks you to code shady stuff. Even if you resign, there is no proof your manager knew about it if your official mobile phone was wiped. Think about harassment from senior managers. All of this goes undocumented. This is why I prefer my email from and to everyone in the office to be plain text. No PGP. I am German; I do not care about Hillary or Trump - but if Hillary had used Signal, maybe things would be different!
matt_wulfeck 1 day ago 0 replies      
It's not hard though, it's just hard to do it without any third-party in a way that provides an easy user experience. iMessage is a good example of encrypted E2E messaging that "just works", but it is setup and maintained by a third party.

I would say one of the biggest hurdles to increased security is the debilitating definition of "secure" from the security people themselves. Everyone wants a perfect solution and postulates endlessly about every edge case. So much so that the community won't accept any solution that has even a little bit of hair on it, so nothing gets done except everyone's individual homebrew mechanisms. I swear there's not a pragmatic bone in their bodies, and if you doubt me then join some popular crypto mailing lists and see the tinfoil hattery for yourselves.

cygned 1 day ago 1 reply      
A client's admin sends credentials via email - including logins and passwords for mission critical infrastructure. When asking him about security concerns he said: "Our mail server's using HTTPS, everything is secure"

I haven't responded to that.

cookie0 1 day ago 2 replies      
Your key is not importable :D

 $ gpg --import stanley.asc
 gpg: CRC error; C68D2A - 29357C
 gpg: read_block: read error: Invalid keyring
 gpg: import from `stanley.asc' failed: Invalid keyring
 gpg: Total number processed: 0
Edit: Your key is way too short.

makecheck 1 day ago 0 replies      
If you want more-secure methods adopted, they have to be easy to use.

I find that encrypted PDFs are a reasonable option, even when reading on mobile.

If I want to send myself something sensitive, I attach a password-protected PDF; then even if I open the E-mail on my iPhone, I am prompted for a password before I am allowed to view it.

And frankly, I wish that web sites would stop trying to implement their own clunky mail-like secure messaging systems and just use password-protected PDF. I hate receiving these "you have a secure message on oursite.com!" messages that are impossible to deal with on mobile (not to mention they appear very spam-like at first).

spodkowinski 1 day ago 0 replies      
What I find most frustrating when it comes to GPG encryption and email is the lack of support for public-key encryption for generated mails. I've seen very few sites supporting GPG, and where they did, I found it always just worked, and I imagine it's not much of a big deal to set up. So why do even the biggest shops not offer this? I would really like to be able to upload my public key to e.g. Amazon. It's great to make the checkout process and everything super secure, only to then send every purchase you made and your personal data in an unencrypted mail across the web afterwards.
phyushin 1 day ago 0 replies      
I wrote a blog post on using mailvelope to send/receive encrypted emails via existing mail providers such as Google mail ... Linked into my post about getting a private/public key pair using keybase.io I'd like to think they explain it fairly well but I hadn't considered the implications of people believing they're more secure than they actually are ... Until I read the comments here - disclaimer: not claiming encrypted email is super secure or even necessarily that using mailvelope is super secure but just wanted to make it look less scary to the uninitiated
Tepix 1 day ago 1 reply      
gpg2 broke when I updated from Ubuntu 14.04 to Ubuntu 16.04. I had to export the keys using gpg and import them using gpg2.

Before the upgrade gpg2 was able to read the keys just fine.

Now, that's not the only problem after the upgrade, Enigmail is having some other issues...

It's a mess.

zanethomas 1 day ago 0 replies      
Protonmail works fine for me. I prefer to keep e-mails within family and with other close associates private. So I just insist everyone uses protonmail. It's as easy to use as gmail and they all put up with my demands.
iofiiiiiiiii 16 hours ago 0 replies      
The main challenge for me seems to be that users do not have a concept of maintaining a digital identity and the key management that goes along with it. They just expect their real world identity to somehow translate automatically.

This just does not happen. Though I wonder if government-issued digital ID might be of benefit here to encourage good practices. Estonian ID-cards carry key pairs, so if email clients supported GPG email with smartcard-hosted keys (uhh maybe some do but I never heard of one) then this might be an approach worth undertaking.

nkoren 1 day ago 1 reply      
...Or you could just use https://www.mailpile.is/, which is a genuinely less painful way to do encrypted email.
aarontyree 1 day ago 1 reply      
Keybase, Nylas mail plugin. Done.
Or GPG Tools beta, Mail app, done.
Or even just the Keybase built-in encrypt/decrypt.
jheriko 1 day ago 0 replies      
to me, this accurately describes most user experiences of most open source software.

its not just gpg and mail encryption, this sort of experience describes a lot of tools - and not too long ago just installing linux was a considerably worse nightmare of usability fails than this (it has greatly improved, i'm happy to say)

alkonaut 1 day ago 0 replies      
Using encrypted email is only a solved problem (simple enough) when it's as easy as using https for all end users.

Right now it's on par with serving https, which is still way way too painful even for tech people.

duracel 1 day ago 1 reply      
Why would spy agencies want to read people's messages from the server, when they can get plain text on each device? Nonsense. Current encryption only secures the storage and transmission states. What secures the kernel, decryption, caching and read states?
brynjolf 1 day ago 0 replies      
After the second shitfest he lost me. Colourful language sure but this just felt out of place.
aruggirello 1 day ago 0 replies      
I don't know what kind of Linux the author is using but, on Debian/Ubuntu/Mint/etc., you should really be doing:

 sudo apt-get install enigmail
rather than going hunting for installers.

patrickread 1 day ago 1 reply      
If you're OK with using a third-party and would rather stick to GUI's, Virtru is a very easy solution for email encryption: https://www.virtru.com/
faragon 1 day ago 0 replies      
You cannot prevent replies from quoting your encrypted emails in cleartext, so the mechanism is broken by design: it should be a point-to-point encrypted mailbox.
throw7 1 day ago 0 replies      
Any business that approaches usefulness in terms of encrypted communications gets shut down by the US.
nickpsecurity 1 day ago 1 reply      
Let me contrast the author's experience with my own. Note that I had a brain injury during this process that made me forget scripting and GPG, and made them hard to relearn. I'm a nice test case for how hard things are. :)

So, I looked into GPG. Holy shit, there's a ton of options and complexity. High-assurance security says to subset to the minimal thing that works, for increased trustworthiness. I noticed it could encrypt files with others' public keys. The person that contacted me was able to receive attached files. Text editors only have so many 0-days left in them & are easy to sandbox. So, I decided on this protocol:

1. Type the message into a text file.

2. Type a cryptic command to encrypt it with that public key.

3. Attach the file to a message to that person, optionally doing this on a different box passed through a data diode if I was worried about the GPG box being compromised. Just using Linux for now.

4. Receiving works the other way: I download an encrypted file and run a decryption command on it.

So, that's simple. How to get started, and what commands to use? I installed GPG first. One look at the man page made me hit Google instead. I found a cheat sheet, identified the minimal commands necessary, and compared them against the man page. I saw what seemed to be the right stuff, but that man page was horrible. I looked at a bunch of other sources online, of varying trustworthiness, to see if they had the same commands. It seemed like I had the right ones. I was also warned the key-gen phase could take a long time, so I just ignored the usability problem that stumped the OP so much. I was warned, after all.

The key generated. Messages sent and received well. Only thing left was tediously typing my new buddy's email into the box with every encryption. As others came up, I was having to remember more email addresses. Time to automate that shit with a front end that worked on any system I needed. Also, without remembering how to program.

I recalled Python was easy. So, I'd need a data structure for a list of (alias, email) pairs, basic operations on text for maybe substitution, input, conditionals, the ability to print them, and a spawn function of some kind. The Python reference gave me all I needed; I tested each piece to be sure, then composed them with tests. The end result was a Python script with the list of alias/email pairs in it, like a config file, where I could just add people to the script itself. Then I run the script on the text message and it asks which of a listed group of people to send it to. I type in a number or name and it runs the command automatically. Then I verify by eye that the new file looks like gibberish, and attach it to the email.

End result was that I had GPG for friends using it, I figured out how to use it with fair degree of trustworthiness, and I automated the annoying part with less than beginner's knowledge of scripting. This shows GPG is way less hard than it appears to be. Although I sure could use a great front-end to smooth over all this. I'm sure any half-ass programmer could create one given what I did in that state. :)
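The commenter's script isn't shown, but the idea translates to a very short sketch. Everything below is an illustrative assumption (the contact list, aliases, and file names are invented, not the commenter's actual code); only the `gpg --encrypt --recipient` invocation itself is standard GnuPG usage:

```python
# Sketch of an alias-to-email front end for "gpg --encrypt", in the spirit
# of the comment above. CONTACTS and all names here are illustrative
# assumptions, not the commenter's real configuration.
import subprocess

CONTACTS = {
    "alice": "alice@example.org",
    "bob": "bob@example.net",
}

def build_gpg_command(alias, infile, contacts=CONTACTS):
    """Map a short alias to a recipient email and return the gpg invocation."""
    recipient = contacts[alias]
    # Produces <infile>.gpg next to the input file.
    return ["gpg", "--encrypt", "--recipient", recipient, infile]

def encrypt_for(alias, infile):
    """Actually run gpg (requires gpg on PATH and the recipient's key imported)."""
    subprocess.run(build_gpg_command(alias, infile), check=True)
```

Usage would be `encrypt_for("alice", "message.txt")`; the aliases live in the script itself, exactly the "config file inside the script" approach described above.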

isaac_is_goat 1 day ago 0 replies      
ProtonMail is "good enough" for me.
Vkkan2016 1 day ago 0 replies      
I am using ProtonMail. It's clean and secure, and it works with non-Proton users via a PIN.
upofadown 1 day ago 0 replies      
TLDR: Enigmail can be entirely broken due to interaction with GPG.

I've never used Enigmail. Is this sort of thing common?

owly 1 day ago 0 replies      
I love protonmail. It's simple and just works.
conmarap 1 day ago 0 replies      
Or, you could just use Protonmail and be done with it.
omouse 1 day ago 0 replies      
ProtonMail and TutaNota are pretty good.
pymai 1 day ago 0 replies      
protonmail is a whole lot easier to use now that you only need 1 password instead of 2
logfromblammo 1 day ago 0 replies      
I think the real problem is real-person identity management. Encrypted email is still a pain because we're still trying to shoehorn secure communication into a system that is fundamentally open and public, and filled with authorities that have to know something about you in order to route your messages to somewhere closer to your mailbox.

Email is still the text equivalent to the publicly switched telephone network. Domain names have to be registered. Mail routing has to be set up via DNS MX records and mail server software. It is like writing a letter on a postcard. Any attacker will be able to read the origin postmark, the destination address, and the ciphertext, no matter what code you use for the message.

All the problems with key exchange and trust chains are a result of trying to send secure messages from one known person to another known person. It's approaching the problem from the wrong direction. People meet in person to grow their web of trust before exchanging secure messages online. But most of the people I communicate with online I have never met in person, nor do I even know what they look like.

If I use secure pseudonym-to-pseudonym communication to set up an in-person meeting, and prove to the other that the human controls the pseudonymous identity, it should be impossible to determine how those two real people happened to show up in the same place at the same time. It should be indistinguishable from a coincidental encounter. You just can't do that with encrypted email.

I envision an uncountable number of large rocks, and masked ninjas running around hiding encrypted notes under them. The ninjas hang out in their darkened ninja bars, letting each other know which dead drop to use if they want to start a correspondence, and how to signal that it should be checked. But in the network, nothing exists unless someone runs the service, so all those rocks would have to be created and maintained by volunteers, and there is always the possibility that someone is preferentially using one of their own rocks, or setting up a fake rock to spy on anyone that lifts it.

I don't even know enough cryptography or spy tradecraft to even make a helpful suggestion on how to communicate with anyone on the network in such a way that all known attacks by state-level actors may be mitigated. All I can think of is the Black Hand missions of Oblivion (Elder Scrolls 4)--which used dead drops--and I ended up spoilering all the spoilers, because they had been spoilered. Never mind the philosophical implications of questioning how you even know whether a person you are talking to face-to-face, whom you have known for years, is really the person they appear to be. Do we even need to attach an identity to an actual face most of the time?

exabrial 1 day ago 0 replies      
Keybase.io brah. Simple!
parennoob 1 day ago 6 replies      
Do any of these keyservers perform email verification? It would go a long way towards verifying that a user's GPG key corresponds to their email. Otherwise, anyone can generate a key with any email address and push it up to the servers. The standard way of verifying it (key-signing parties) is somewhat difficult.
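The reason anyone can push a key up is that classic keyservers speak HKP, a thin HTTP protocol with no identity checks on upload. As a sketch of the lookup side (the keyserver hostname below is an illustrative assumption, not a recommendation):

```python
# Sketch: build an HKP (HTTP Keyserver Protocol) lookup URL. The server
# performs no email verification on uploads, which is the concern raised
# above. The hostname here is an illustrative assumption.
from urllib.parse import urlencode

def hkp_lookup_url(query, host="keyserver.example.org"):
    """Build the HKP 'get' URL for a key search (email, key ID, etc.)."""
    params = urlencode({"op": "get", "search": query, "options": "mr"})
    return "https://%s/pks/lookup?%s" % (host, params)
```

Fetching that URL returns an ASCII-armored key block for whatever matched the search, regardless of who uploaded it.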
innocentoldguy 1 day ago 0 replies      
I used to encrypt my email with GnuPG, but it was a huge hassle for me, and an even bigger pain for the family and friends I wanted to communicate with. In the end, I realized that the things I tend to say over email just aren't that important anyway, so I stopped doing it. If someone wants to spy on my dad's fart jokes, more power to them.

Nowadays, I hardly ever check my email. There are other ways to communicate with the people who matter to me, which are automatically encrypted and aren't as easily exploited by marketers and spammers. Email just isn't something I really need or use much anymore.

boomboxy 1 day ago 3 replies      
qplex 1 day ago 0 replies      
If you find 'gpg --gen-key' too hard to use, I don't know what to tell you.

It presents (at least on my system) a very clear prompt to type a passphrase. Maybe you should blame your distribution instead of gpg?

peterposter 1 day ago 0 replies      
Couldn't the blockchain be employed to do proof-of-key-ownership? I mean it's decentralized and trusted and all
Amazon Chime chime.aws
539 points by runesoerensen  22 hours ago   312 comments top 64
niyazpk 11 hours ago 7 replies      
The first (and personally the only) requirement I have with any chat system is that it should _not_ modify the text I enter in any way - especially if I am pasting something.

Sometimes I have to paste a line or two of code, or a few lines of a stack trace. Sometimes I have to paste a string which contains some particular set of characters. Microsoft Lync absolutely destroys the pasted text. It subtly converts the double quotes into some unicode nonsense. Then it converts some common character sequences into smileys. When you copy text from Lync, it is almost always guaranteed to be different from what was entered originally. God, I hate Lync with a passion.
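The "unicode nonsense" is typically smart-punctuation substitution: straight quotes become curly quotes, dashes and ellipses get typographic replacements. A minimal sketch of undoing the common cases on the receiving end (the mapping is illustrative, not exhaustive, and says nothing about Lync's actual internals):

```python
# Sketch: reverse common "smart" punctuation substitutions so pasted code
# survives a round trip. Covers frequent cases only; illustrative, not
# a complete inventory of what any given chat client rewrites.
SMART_TO_ASCII = {
    "\u201c": '"',   # left double quotation mark
    "\u201d": '"',   # right double quotation mark
    "\u2018": "'",   # left single quotation mark
    "\u2019": "'",   # right single quotation mark
    "\u2013": "-",   # en dash
    "\u2014": "--",  # em dash
    "\u2026": "...", # horizontal ellipsis
}

def unsmarten(text):
    """Replace typographic punctuation with its plain-ASCII equivalent."""
    for smart, plain in SMART_TO_ASCII.items():
        text = text.replace(smart, plain)
    return text
```

Of course this only repairs the characters the mapping knows about; it can't undo emoticon substitution, which is why clients mangling input at all is the real problem.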

ultimoo 21 hours ago 8 replies      
I liked the part about Chime calling you at the scheduled start of a meeting. So simple, yet I had never thought of it, since my org uses WebEx.

With a smartphone that would work pretty well via a push notification or an actual call, but I'm not sure how it would work when you want to join a meeting from a physical meeting room with its own AV system. I'm sure there is a way to get that set up.

krashidov 21 hours ago 18 replies      
Enterprise conferencing software is so bad and so expensive I'm astonished it took this long for a decent competitor to come in. I'm really surprised Google didn't go all in with making Hangouts a decent competitor. I have a feeling this will make a lot of money.
greyskull 21 hours ago 0 replies      
Amazon acquired Biba[0], this is that product with the backend swapped out. It's currently being beta'd internally and they haven't yet added anything over Biba the product. There are some great features planned, from what I've heard.

[0] http://www.biba.com/

therealmarv 21 hours ago 4 replies      
I really do not like conference systems which do not work on Linux. Not everybody is using a Mac or Windows. Microsoft is also ignoring Skype and Skype for Business on Linux. This is all crap.
Taek 21 hours ago 2 replies      
They tout security but I don't think it's open source and it looks like everything is stored on Amazon servers.

Minimally it is centralized, and you can't verify that there's no backdoor. In this day and age, that means we're both trusting their core intentions, and also trusting that some government won't step in and silently force their hand. I don't personally feel that is good enough to be considered secure anymore.

biot 21 hours ago 1 reply      
"With Amazon Chime, you can feel confident youre communicating securely."

This wording has always struck me as being awful. People felt confident investing with Bernie Madoff as well. I'd rather have confidence from proven security instead of just feeling confident.

bobmagoo 7 hours ago 0 replies      
Not sure what the long-term play for Amazon is with a WebEx competitor; likely something to do with winning enterprise business. Hope it works out.

In case you hadn't seen it, this is basically the anti-marketing video for how conference calls actually work: https://www.youtube.com/watch?v=DYu_bGbZiiQ

planetjones 21 hours ago 1 reply      
Great product idea. At work we use Skype for business and it's a disaster - especially bad is it seemingly randomly says 'your device is causing poor audio quality' and mutes you. The only way to recover is to dial out and in. Before that it was the AT&T solution - such an ugly application with poor usability. If Amazon really polish this product and provide a great user experience and quality they could pick up a lot of business.

Edit: there's a problem here. Skype for business allows up to 250 participants. The AT&T solution (webex maybe) allows, I think, an unlimited number. Amazon Chime has a limit of 250 people. This wouldn't cut it for presentations in large companies e.g. announcement of annual results or divisional virtual 'town hall'

kupiakos 20 hours ago 2 replies      
> Amazon Chime works seamlessly across your devices.

> No Linux support

Corrado 12 hours ago 1 reply      
This is really cool but I wish they had more details on the Chat part of the solution. What does it look like? Can you theme it? Does it have any integrations (ala Slack)? Can you have inline pictures? Does it have a rich message API?
lars_francke 20 hours ago 3 replies      
I'm definitely going to try this (even though unusable for us because of missing Linux support). We have currently settled on Zoom and it's okay, they do have Linux support.

One problem I have with all video conferencing solutions we've tried (same for my colleagues, all Mac or Linux users, sadly no Windows users to compare) is high CPU usage. I have a 2015 MacBook Pro and when I share my screen CPU usage skyrockets to 150-200% basically pegging the whole CPU. Without sharing my screen CPU usage is at 80-100%.

I have similar problems with certain videos on the web (e.g. Ted.com and others).

Is this something everyone else here sees as well? I always assumed they must because we see it across devices and products.

benevol 19 hours ago 3 replies      
> Amazon Chime is a communications service that transforms online meetings with a secure, easy-to-use application that you can trust.

- Amazon, PRISM partner

Narkov 20 hours ago 3 replies      
Their claim of "a third of the cost of traditional solutions" is an apples and oranges comparison.

The basic and plus pricing options, while cheap, are practically useless with only 2 maximum attendees and the $15/user/month pro plan is hardly "a third of the cost...".

Looks like a great product with an average price point.

cyberferret 19 hours ago 2 replies      
Everything else aside, I am surprised/impressed to see that Amazon has the '.aws' top level domain! Does that mean they will be now branding all their AWS infrastructure under this domain?
jamiesonbecker 11 hours ago 0 replies      
We use Zoom at Userify and love it. Fantastic Linux client, too. However, it automatically calling me (and saving me time auto-dialing auth codes) would be a pretty nice feature.
hrayr 21 hours ago 3 replies      
Are they competing with WebEx, Skype, or Slack? Looks like a compelling B2B offering from Amazon. I bet they'll have an accompanying hardware to go with this in the coming months.
zeta0134 8 hours ago 2 replies      
I clicked through, and was accosted with a gigantic video. I wanted to close the tab right there, but I've seen this before, so I scrolled down to make the giant video go away. No dice, every single page element dances and animates and moves, and there don't seem to be any static images on the whole page. I can't scroll to a single position to read the actual text without some large part of my monitor animating in a suitably distracting fashion.

Why. Just... why? Why is this necessary?

algesten 20 hours ago 1 reply      
So, no video conferencing in the basic/plus plans (1:1 doesn't count). It's funny how many attempts there are at making conferencing software that has just audio and some basic chat.

Entry-level needs video, since you can get it for free elsewhere (i.e. Hangouts).

avip 19 hours ago 0 replies      
I absolutely love the startup-ish way AWS launches new services. They have the whole landing page and class-A marketing pitch, but the product is alpha, if we're being nice.

There's a long, long way to go for this thing to compete with Hangouts, Zoom, or anything else out there.

Source: I've just tried it out, chatting with myself on the native app + 2 browsers.

krackers 21 hours ago 0 replies      
Amazon's horizontal expansion is pretty fascinating. From an online store and cloud service provider to consumer products and now B2B apps.
manishsharan 21 hours ago 1 reply      
...application available for Android, iOS, Mac, and Windows

Why not Linux?

legohead 21 hours ago 0 replies      
No screenshots of the product?
dorfsmay 12 hours ago 0 replies      
No Linux client? What's the advantage over WebEx then?

Both hangout and zoom can do Linux, but they aren't seen as corporate as WebEx.

JumpCrisscross 21 hours ago 1 reply      
Does this feature end-to-end encryption?
nathancahill 21 hours ago 2 replies      
Speaking of.. can Slack hurry up and buy Zoom? Aren't they pretty flush with cash? It seems crazy that they are letting this huge market (where they have a foothold) slip away to new competitors.
dfrey 5 hours ago 0 replies      
I'm so sick of proprietary walled garden messaging systems. So now I need slack, chime, skype, hangouts, imessage, allo, facebook, etc depending on who I want to talk to.
pimlottc 12 hours ago 1 reply      
How do they authenticate who they are when they call you? I've gotten bogus calls before from "the credit card company", so there needs to be a way to be certain you're talking to the right people.
andy_ppp 19 hours ago 1 reply      
This looks awesome; I'm regularly told that Amazon is a horrendous place to work yet they seem to be producing great software and interesting startup type concepts all the while. Not sure how they do it? AWS is a bit of a mess, maybe it's just there that is problematic...
webwanderings 11 hours ago 0 replies      
I don't know how any of these could compete with Zoom in terms of their offerings. Perhaps Zoom just doesn't have a big enough brand push; otherwise, it is hands down the product one should use over any other. I am a free user of Zoom and I have explored many others out there; there's just no one who comes close to Zoom's offerings.
tea-flow 9 hours ago 0 replies      
My Amazon login doesn't work. Is anyone else experiencing this issue? I just logged in on Amazon.com just fine using the same credentials (I use a PW manager). Thanks in advance.
fizixer 5 hours ago 0 replies      
> ... transforms online meetings ...

(has no mention of collaborative white-boarding)

po 19 hours ago 0 replies      
This is the first time that I've heard Amazon really call out AWS as a name brand in a non aws-dashboard oriented product (maybe they have already in the past?) Are Chime user identities AWS IAM users under the hood?

AWS as a more consumer-facing platform probably has a long climb ahead of it but it could be quite helpful for Amazon to differentiate from their many product misses released under the Amazon name.

Grimes23 12 hours ago 0 replies      
Only amazon would reveal a product without including any screenshots or details.
m_mueller 20 hours ago 0 replies      
For those who tried screensharing, does it have a pointing-feature (i.e. viewer can point to something)? There's so many products out there like Skype and Hangout that don't support it and I don't understand why not, it seems pretty basic to me (just only show the arrow on platforms that support it, i.e. OSX, Windows and Linux).
woodylondon 16 hours ago 0 replies      
Wondering: if you have a Plus account and set up a call, do you then get remote access, group chat, etc.? It's a little unclear what happens between Plus and Basic users. If all users need to be Plus, then I can see this being a problem.
codingdave 14 hours ago 0 replies      
Screen sharing not being available for free is going to make us skip the free trial. We have plenty of options for voice and chat. And video just isn't that important to my teams. Screen sharing, however, is vital. And we are willing to pay, but as long as hangouts works for free, why pay?

I know everyone says hangouts is dead and Google isn't putting much work into it. But it does work. And unless they actually shut it down, it gives us what we need. Free. We don't use it for large webinars or anything, and it has its flaws, but... free. That is a really hard point to beat.

vinay_ys 19 hours ago 1 reply      
Super expensive dial-in rates. $0.214 per minute in India is basically twice the ISD calling rate.
vegabook 19 hours ago 1 reply      
No Linux. Buh-Bye
cdnsteve 14 hours ago 2 replies      
Plus plan: $2.50 per user, maximum 2 attendees - seriously, $125/mo for 50 users? I think they missed the mark.

Join.me: 50 meeting participants, $22/mo.

Roritharr 20 hours ago 1 reply      
While we're on the topic of conferencing software, is there a list of software to use when you want 4K/30p or 1080p/60?

Skype seems to not be up to the task. Our Gbit connection is.

slyall 19 hours ago 1 reply      
Interesting that you pay per user per month. I wonder how it works for occasional and one-off users.

Eg if you want a vendor to join your team's chat or you use it to talk to clients.

blintz 21 hours ago 0 replies      
I'm really curious how this all-in-one concept will compare to the Slack approach of chat as the central functionality augmented by a bunch of integrations with external services.
bikamonki 14 hours ago 0 replies      
I see Amazon making a successful social network faster than Facebook making a successful market place.
jcoffland 19 hours ago 0 replies      
The Chime app for Android is very invasive. Instead of asking for permissions as they are needed it asks you to give up everything immediately.
malloryerik 14 hours ago 2 replies      
Is Chime based on WebRTC in any way?
avodonosov 17 hours ago 0 replies      
No linux support?
kr0 15 hours ago 0 replies      
I hope they don't get in the habit of releasing "-ime" products. Wow, that's old already.
alexandercrohde 20 hours ago 0 replies      
One thing I'd like to hear the official policy on is message privacy (i.e. is management reading your stuff?).

That's a personal concern I have with slack.

xroche 19 hours ago 1 reply      
So this is basically what lifesize.com has been providing for ages, sans the Linux support. Truly revolutionary I guess.
draw_down 21 hours ago 0 replies      
"Meetings call you" is a good idea, as is the reconnecting stuff. Who knows if anyone will use this, but even with all the supply in this market, there is still space for something that actually works well. As someone who works remotely I can't wait until this gets figured out.
sthomas1618 21 hours ago 0 replies      
Zoom competitor?
hkmurakami 17 hours ago 1 reply      
I noticed that this is from AWS. Is this the first SaaS application coming from AWS?
chime 20 hours ago 3 replies      
As the guy who has owned chime.tv for well over a decade, this is a bit concerning IP-wise.
euyyn 21 hours ago 1 reply      
A new player in the area! Is this the first Amazon enterprise service not for developers?
evantahler 19 hours ago 1 reply      
Appear.in can't be beat.
jerianasmith 14 hours ago 0 replies      
For simple and secure meetings, we should give Amazon Chime a try.
thomasfl 17 hours ago 0 replies      
No screenshots?
dbg31415 17 hours ago 0 replies      
Someone please do what you say and make one that's clearly better than all the other shitty dwarves out there today so the industry can standardize.

"Should we call, or Go to Meeting, or Google Hangout, or Skype, or Lifesize, or Slack, or Adobe Connect, or Zoom, or WebEx, or Chime, or..." It's getting ridiculous.

For new services: Please don't be based in the US or willing to cooperate with the US Government. Remember, "We don't snitch!" is an excellent marketing line -- I'll give you money for that. I don't trust Amazon or any of these at present.

cobookman 21 hours ago 1 reply      
Not sure if more impressed with the product or use of a TLD of .aws
_ao789 18 hours ago 0 replies      
I didn't know there was a .aws tld..
nodesocket 19 hours ago 1 reply      
How did they get that TLD .aws?
all_usernames 20 hours ago 0 replies      
In soviet Russia...
happy-go-lucky 19 hours ago 0 replies      
No wonder Amazon is the most innovative company of 2017.


Operating Systems: From 0 to 1 tuhdo.github.io
594 points by tuhdo  19 hours ago   65 comments top 13
radisb 15 hours ago 5 replies      
From the book:

"If a programmer writes software for a living, he should better be specialized in one or two problem domains outside of software if he does not want his job taken by domain experts who learn programming in their spare time."

Seems a bizarre sentiment, but after reading this sentence, I feel like I really want to donate some money to the guy. If he provides a way, I surely will.

bogomipz 10 hours ago 1 reply      
This looks great!

I would also like to recommend another free resource that might be a good complement(theory vs implementation) to this:

"Operating Systems: Three Easy Pieces"

available online at:


itsmemattchung 10 hours ago 1 reply      
Every time an OS book is posted on HN (about weekly), I immediately want to jump all over it, but I gently remind myself to finish CMU's Computer Systems: A Programmer's Perspective and The Elements of Computing Systems (nand2tetris).
js8 16 hours ago 0 replies      
Looks very nice so far. One minor nitpick - there should definitely be a chapter on inter-process communication, it's an important part of operating systems.
koolba 14 hours ago 0 replies      
For those of you that have read, or at least skimmed, this, how does it compare to the Minix book? (which was a joy to read!)
lorenzfx 14 hours ago 0 replies      
How much time would it approximately take to work through this?
kriro 16 hours ago 1 reply      
Seems to be for a x86 operating system. I'd have preferred some other architecture because so much of OSdev for x86 that I remember was working around quirks of the architecture (A20 gate etc.). I guess it's a valuable lesson but I'd really enjoy a fork of the book for the hardware you find in a Beagle Bone or Pi3 or something. Maybe this could be crowdfunded if the x86 version is popular?
emiliobumachar 18 hours ago 2 replies      
It's a from-first-principles guide to building an OS.

From the title, I had mistakenly assumed it was about the first OS ever.

gigatexal 12 hours ago 1 reply      
Well this has got to be one of the more ambitious things I've seen on HN. I wonder if the guy behind SkyOS is still doing operating systems. http://www.skyos.org/
laxentasken 18 hours ago 7 replies      
I have on my list to read something along the lines like this, but this seems incomplete. Does anyone have anything similar?
jerianasmith 14 hours ago 1 reply      
The hallmark of a good book is that it should leave you wanting more. That's what this book is all about.
seewhat 14 hours ago 1 reply      
On pg 38: Field Gate Programmable Array (FPGA)

Surely: Field Programmable Gate Array (FPGA) ?

lanbanger 14 hours ago 2 replies      
The grammar makes that book fairly painful to read :-|
India has banned disposable plastic in Delhi globalcitizen.org
347 points by SimplyUseless  6 hours ago   127 comments top 23
Maarten88 5 hours ago 4 replies      
Rwanda also has this policy, since 2008, and they enforce it. The country is very clean, it looks different than other African countries (and countries like the US), just because there is no plastic rubbish everywhere. I think this is a very good policy, and would welcome it at home.


wallace_f 1 hour ago 0 replies      
The economist inside me says it would be better to tax disposable plastic.

The tax could cover the cost to clean up the litter. That would create jobs in three ways: 1) plastic clean-up jobs, 2) businesses and economic activity that desperately need disposable plastic can still possibly survive, and 3) jobs making disposable plastic.

Anyways, it's a lot better than taxing things we all agree we want more of. Like jobs.

hive_mind 42 minutes ago 2 replies      
This is India. Starving, thirsty, injured (hit by cars and trucks) cows and dogs roam the streets, hobbling along, limping along. There are people starving on the street-side. The poorest children are openly prostituted on the streets.

How are they going to enforce a rule regarding plastic bags?

The rich will continue to do whatever they want.

The middle-classes will continue to do whatever they can get away with.

The poor will continue to be shit on and abused.


About 20 years ago they banned smoking in public in Delhi (I was there when they did it).

All that this ordinance did was to give the police yet another angle to harass people. More corruption. More bribes.

throwaway420 5 hours ago 5 replies      
Not a huge fan of outright bans, and think this is probably the wrong priority for India to focus on. (I understand this is just Delhi)

Air pollution is huge right now. And sad to say, people pooping in streets and rivers is still a major problem.

To me, plastic remnants are a very minor issue in comparison.

geodel 5 hours ago 2 replies      
I do not see any coverage in the Indian media. Could it be one of those official notifications which the public at large hardly follows, but which breeds corruption by enforcement officials? Though, looking at Indian papers, I came across this rather frightening news:


bogomipz 5 hours ago 3 replies      
This is great news! I have a question maybe someone from Delhi could answer, the article states:

"The ban took effect on the first day of 2017."

What are the vendors doing? Is water being sold in glass bottles with a deposit scheme for redemption now?

rjurney 3 hours ago 0 replies      
This is awesome. When I was in India ten years ago, everywhere we went in rural India the trees along creeks, rivers and streams were littered with plastic bags from where floods had deposited them. They were like leaves, and the trees were dead. They had already banned plastic bags locally in that province, and it is good to see a global ban on plastic in general.

In Himachal Pradesh, the plastic-bag ban had resulted in a cottage industry forming where discarded newspapers were folded/glued into shopping bags. I'd like to see the same thing happen in the US. A friend imported a pallet of these bags to Florida, and he was able to sell them to vendors and make a small profit. This tells me they might be viable here commercially.

As they say, reduce > re-use > recycle.

bendermon 5 hours ago 2 replies      
Note: Thank you for down voting for pointing out a glaring 'fake statistics' and poor journalism, on a #1 trending post on HN.

The first and the second quote do not mean the same thing, not even close.

"A massive 60%t of the plastic waste in the oceans is said to have come from India, according to the Times of India."

The TOI reads - "Banning disposable plastic is a huge step for the capital and the country because India is among the top four biggest plastic polluters in the world, responsible for around 60% of the 8.8 million tons of plastic that is dumped into the world's oceans every year."

As an Indian, I see a lot of journalists stuck in a colonial era. They go out of their way to tarnish and stereotype the great unwashed. They manage to turn even positive news to mock and heckle the less developed world.

But this article has taken it to great heights. The TOI isn't exactly known for journalistic integrity and often conveniently pulls statistics from their backside. But to misquote the devil, this article has certainly hit the lowest level.

SteveNuts 5 hours ago 2 replies      
Banning it is one thing, we'll see if they can actually enforce it.
walrus01 4 hours ago 0 replies      
I wish that they could find a way to do this in the Islamabad/Rawalpindi area. Due to a lack of budget and government coordination for large scale trash pick up, nearly every stream and ravine in the area is the designated trash dumping grounds. It's thoroughly littered with plastic shopping bags and plastic bottles. It won't solve the problem of people throwing trash on nearly any available piece of unclaimed or unusable land, but at least it'll be paper based or biodegradable.
noahmbarr 3 hours ago 0 replies      
All too often it seems US restaurants do the quick calculus of going 100% disposable cutlery/etc, making a decision we'll still be dealing with 50 years from now....

I am very supportive of these types of programs, even if they're hard to enforce.

theparanoid 2 hours ago 0 replies      
Here in California single-use plastic bags are banned. Now it's a pain: at checkout you either have to tell them how many multi-use bags to charge you for, or bring bags from home.

Not so many years ago paper bags were common and what I used. Paper is great: it's biodegradable, renewable, and convenient.

dirkg 3 hours ago 0 replies      
Good in principle, will cause hardships for many in practice. There are probably millions of street vendors who rely on plastic to pack food to go, as well as many other shops of course.

There are many other priorities to focus on which can have a far bigger impact.

upofadown 4 hours ago 1 reply      
>The ban includes bags, cups and cutlery.

What about all the plastic containers the food originally came in?

kumarski 2 hours ago 0 replies      
Hopefully it is enforced.
xyzzy4 5 hours ago 2 replies      
Plastic is the least of Delhi's problems. First they should be banning vehicles that don't meet emission standards. I visited there twice and I'll be lucky if I don't come down with some lung problem.

Edit: They should ban the burning of plastic, not plastic itself. And enforce it.

edblarney 2 hours ago 1 reply      
It's great. But the 'trash problem' in India runs deeper.

They need to have anti-litter regulation, awareness campaigns, and enforcement.

awqrre 3 hours ago 0 replies      
no more plastic trash bags... and of course no more using grocery bags as trash bags...
toephu2 5 hours ago 2 replies      
I don't understand how we can build self-driving cars, send people to the moon, and create advanced facial recognition software, but can't build technology that can separate plastic bags?
CaiGengYang 5 hours ago 4 replies      
What are they going to replace plastic with? Can you invent a kind of material that is as capable as plastic but has zero negative effects on the environment?
X86BSD 4 hours ago 0 replies      
Why only Delhi?! That country is drowning in filth not just Delhi.
vanattab 5 hours ago 4 replies      
Can we get the title changed to make it clear that India banned plastic in Delhi, not all of India?

Also, the title says "literally all disposable plastic", but the article says it applies to cups, bags, and cutlery.

libso 5 hours ago 2 replies      

"Delhi has banned disposable plastic"

Not all of India. Just Delhi.

The Google Analytics Setup I Use on Every Site I Build philipwalton.com
473 points by uptown  1 day ago   93 comments top 12
jameslk 1 day ago 2 replies      
It's pretty annoying that I have to create spam filters for Google Analytics to be useful. Every site I've installed GA on has required me to filter out spam. I don't understand why something isn't done about it at an engineering level. If site owners can set up filters against spammers, is it really that hard for Google to do it? Especially since they can see it across their accounts. Seems like it's the same type of issue that plagues email, yet Google seems to have that under control.
kristianc 1 day ago 4 replies      
Echoing what others are saying, I much prefer Google Tag Manager. Many clients use a CMS which makes injecting dynamic variables into a page a bit of a pain if it's not done via rules at runtime.

The Next Web has open-sourced its Google Tag Manager setup (https://github.com/thenextweb/gtm), which has things like Scroll Tracking, Engagement Tracking (riveted.js), Outbound Link Tracking and lots of other things that are not in the default GA setup. They have recently added support for AMP.

In my experience it allows clients to get up and running with a useful GA setup in a couple of hours and means that you as a developer don't get bothered to make trivial changes.

caleblloyd 1 day ago 7 replies      
With the surge in Ad Blocking recently, part of me wonders how accurate the Google Analytics JavaScript tracker is today, and how accurate it will be in 5 years. I wonder if we'll see a trend back to server-side analytics soon.
Sir_Cmpwn 1 day ago 2 replies      
Please don't contribute to Google's tracking dominance over the web. How insane is it that one company runs their javascript on 90% of the web?
tombrossman 1 day ago 1 reply      
Remember that it's mandatory to disclose to visitors that your site uses Google Analytics in their T&C's https://www.google.com/analytics/terms/us.html (section 7, 'Privacy'). I don't see a privacy policy on this Google employee's page but perhaps they have a special exemption?

Anyhow, for many websites you'll get more accurate traffic data with GoAccess parsing your logs and showing you page views and basic demographic data. Use it alongside Google Analytics if you must, to see the exact difference between what Google tells you your page views were versus what your server tells you.

largehotcoffee 1 day ago 1 reply      
Not many people know about this feature of GA, but add the following to anonymize your users' IP addresses before sending the information to Google.

> ga('set', 'anonymizeIp', true);
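For context, a sketch of what that flag does for IPv4 (per Google's documentation, the last octet is zeroed before the address is stored; GA does this on its end, the function below only illustrates the effect):

```javascript
// Illustration only: GA applies this server-side once the flag is set.
// Per Google's docs, anonymizeIp zeroes the last octet of IPv4 addresses
// (and the last 80 bits of IPv6) before the address is stored.
function anonymizeIPv4(ip) {
  const parts = ip.split('.');
  if (parts.length !== 4) return ip; // not a dotted-quad IPv4 address
  parts[3] = '0';
  return parts.join('.');
}
```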

Roger_Jones 1 day ago 1 reply      
Filtering out GA sessions with the language of "C" (versus actual languages like en-us, fr, etc.) goes a long way toward eliminating GA spam.

This language code is associated with bots 99% of the time. I had one site where 20% of all the sessions in a given month were such fake traffic!
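The filter itself is configured in the GA admin UI, but the rule is simple enough to sketch against exported session rows (the row shape here is invented for illustration):

```javascript
// Hypothetical sketch: drop sessions whose reported language is "C" or
// missing, the pattern flagged above as near-certain bot traffic. The
// session-row shape is invented for illustration, not a real GA export.
function dropBotLanguageSessions(sessions) {
  return sessions.filter(s => s.language && s.language.toLowerCase() !== 'c');
}
```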

sync 1 day ago 1 reply      
It looks like navigator.sendBeacon is not very well supported across browsers. [1]

Is this really a good idea?

1: https://developer.mozilla.org/en-US/docs/Web/API/Navigator/s...
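For what it's worth, the usual mitigation is feature detection with a fallback transport; a minimal sketch, with the navigator object passed in so the logic can be exercised outside a browser:

```javascript
// Sketch: prefer navigator.sendBeacon (queued reliably even during page
// unload) and fall back to a caller-supplied transport where it's missing.
// The navigator object is a parameter so the logic is testable anywhere.
function sendHit(nav, url, payload, fallback) {
  if (nav && typeof nav.sendBeacon === 'function') {
    return nav.sendBeacon(url, payload);
  }
  fallback(url, payload); // e.g. an XHR or a fetch with keepalive
  return false;
}
```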

thomasthomas 1 day ago 3 replies      
Tag Manager is definitely preferable in my experience if you want to empower non technical people such as marketing to make their own changes on the fly without having to bother developers.
cyborgx7 1 day ago 0 replies      
Alternative title: The Spyware I Use on Every Site I Build
ns8sl 1 day ago 0 replies      
What's the deal with stats delayed over 24 hours? Man, I hate that.
shostack 1 day ago 0 replies      
Beyond this info, I'd add my own suggestions from having spent a good portion of my career digging around in GA...

- If you have multiple domains, sub domains, etc. make sure to spend plenty of time reviewing the cross-domain setup documentation and test it thoroughly.

- If you have high volume, frequently do deep segmentations, use lots of custom dimensions, etc., make sure you have a clear understanding of how sampling in GA works, how to tell if you are being sampled, and find ways to avoid it by pulling reports in different ways. Otherwise you can end up in a situation where you are making decisions off of .3% of your traffic and while Google's sampling algorithm thinks it is fine, comparison against other data sources often shows it is not.

- Make sure any reporting you do across things like GA vs. AdWords is done with a clear understanding of how they each report on paid search. GA reports on it by default on a last non-direct click basis. AdWords just counts everything AdWords touches. This means that AdWords can give you a good sense of where you are gaining traction, whereas GA can help you understand how it works in conjunction with other touch points, and perhaps how you might change the way you weight things and measure success.

- GTM is powerful and free, but with great power comes great responsibility. Also, it can be a real PITA sometimes.

- Annotations are a highly underutilized tool in GA and can save you a lot of headaches. I just wish there was a way to bulk import/export them via spreadsheet or API.

- You can't currently create goal funnels from event-based conversions (please Google add this!), but the workaround for the time being is to push virtual page views at the same time as the event fires, and then create funnels off of those.

- User stitching sounds awesome, but is actually much more limited than you'd think from reading the overview. You need a separate view (which means the main GA view you use can't be segmented for the stitched sessions for comparison--just the new view, which only contains the stitched users). And there's a 90 day rolling data retention window, so you need some sort of export process if you care about that data. Unfortunately, this is pretty important data if you have lots of cross-device tracking issues.

- Depending on your volume, you can reach the hit limits of the free tier pretty quickly if you start tracking a ton of events (since they all count as hits). Here's a good overview [1] of what these limits are, how they work, and what they mean for you. When I got the scary notification, Google was exceptionally unhelpful in working with me to resolve the problem, despite considerable ad spend. After reducing them to what we thought would be fine, they were unable to assure me that our data would not be nuked, and basically couldn't give me any real info beyond "this is the policy." Super frustrating.

- If you have good logging of events that tracks both server and client-side, it is healthy to compare for variances monthly or quarterly. You'd expect client-side tracking to break more often than server-side, but it is important to see how much that can alter your numbers.

[1] https://www.e-nor.com/blog/general/hit-count-in-google-analy...
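The virtual-pageview workaround above can be sketched as a small helper; `gaFn` stands in for the global ga() command queue, and the category and path names are made up for illustration:

```javascript
// Sketch of the virtual-pageview workaround: fire a virtual pageview
// alongside each conversion event so goal funnels can be built from
// pageview paths. `gaFn` stands in for the global ga() queue.
function trackConversion(gaFn, category, action, virtualPath) {
  gaFn('send', 'event', category, action);
  gaFn('send', 'pageview', virtualPath);
}
```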

Effectively Using Android Without Google Play Services with Gplayweb in Docker fxaguessy.fr
316 points by fxaguessy  2 days ago   73 comments top 10
JulienSchmidt 2 days ago 3 replies      
Time to mention https://microg.org/ / https://news.ycombinator.com/item?id=12864429 again. It's a FLOSS re-implementation of Google Play Services. It offers e.g.:

* Optional completely offline geo location service via an on-phone database which often preserves battery and even works when no internet access is available. Online backends using e.g. Mozilla Location Service are also available

* The often unavoidable Push notifications via Google Cloud Messaging while only sending minimal identifying data

* The Analytics (tracking) and Ad parts are simple stubs which avoid app crashes but do nothing else

deep_attention 2 days ago 3 replies      
Or you can just install Yalp Store from F-Droid (https://f-droid.org/repository/browse/?fdid=com.github.yerio...) and download the apps from the Google Play Store directly to your device. You can also easily update your apps with Yalp Store.

Another possibility is to use the Java software Raccoon on your Desktop, available here: http://raccoon.onyxbits.de/

necessity 2 days ago 5 replies      
Uber, Tinder, and other apps I use require Google Play Services in their latest version. They simply refuse to work without it and there is no web alternative. It's either downgrade to a bug-prone version or use Google Play Services. There is really no privacy on a fully functional Android Phone that isn't just email and phone services - though I guess if you're using Uber and Tinder there is no privacy one way or the other.
Animats 2 days ago 1 reply      
I use an Android phone without any Google services or a Google account. When I got the phone, it brought up a demand to sign in with a Google account. But there's a "Later" option to bypass that temporarily. Disabling "Google One Time Setup" made that go away.

Mail is the built-in IMAP client. Browsing is with Firefox. Apps come from F-droid. Maps come from ZNavi. It's OK.

whyagaindavid 1 day ago 0 replies      
Remember that using Play Services unofficially, e.g. via microG, can get your account locked. It is OK to create a throwaway account for downloading apps, but continued usage _may_ lock your account. I use LineageOS on a Nexus 5 without Play Services and the battery lasts 2 days with light usage.
Midiv0k 2 days ago 0 replies      
Instead of downloading an app from Google Play and transferring it to the open source phone you could use Aptoide. https://www.aptoide.com/?lang=en
hoschicz 2 days ago 4 replies      
What is the worst thing about G Play Services you can't turn off?
tehwalrus 2 days ago 0 replies      
I tried to use a Cyanogenmod phone over the summer without google play services (I couldn't get it to install).

I used the Amazon android store, which has many of the same applications as the play store. However, most apps I installed crashed on first launch due to (I assumed) lack of play services.

It was a very crappy experience which lasted only about a week before I switched back to a handset where the gapps flash worked correctly.

known 2 days ago 2 replies      
G play services sucks phone battery
swiley 2 days ago 0 replies      
There really isn't that much on google play store worth installing (except games I guess? even that's not so great and most cost money) that doesn't have some open source counterpart. The only exceptions are things like kik (and those don't run well, if at all, without google play services.)
Government-grade spyware hits Mexican advocates of soda tax bendbulletin.com
296 points by srameshc  1 day ago   137 comments top 8
colemannugent 1 day ago 6 replies      
>[NSO Group] claims to sell its spyware only to law enforcement agencies to track terrorists, criminals and drug lords

>An NSO spokesman reiterated those restrictions in a statement Thursday, and said the company had no knowledge of the tracking of health researchers and advocates inside Mexico

>NSO executives point to technical safeguards that prevent clients from sharing its spy tools.

Of the above only two can be true. My bet is they sell to anyone who can pay.

Reason077 1 day ago 2 replies      
Seems like "Big Sugar" is increasingly behaving like the new "Big Tobacco" as they gear up to fight against regulation and taxation.
canistr 1 day ago 0 replies      
Here is the original report from Citizen Lab:


thinkcontext 1 day ago 3 replies      
Using the software in a way that reveals its use seems ham-handed; perhaps it's not a sophisticated actor using it in this instance.

But that is in some ways even more disturbing. It shows the capability is so available that even the unsophisticated are able to use it.

mi100hael 1 day ago 11 replies      
> The soda industry has poured over $67 million into defeating state and local efforts to regulate soft drink sales in the United States since 2009, according to the Center for Science in the Public Interest.

Why don't these companies just pivot harder towards diet sodas? Seems like a win-win.

ryandrake 1 day ago 2 replies      
What the heck is "government-grade" spyware? 3X the cost but doesn't work?
ourmandave 1 day ago 0 replies      
"Government-grade spyware" reminded me of the deleted scene from Napoleon Dynamite.

"Do you want to die Napoleon?"

"Yeah right. Who's the only one here who knows illegal ninja moves from the government."


cantankerous 1 day ago 1 reply      
I wonder if somebody with a bone to pick with a soda tax works in a bureau that has access to this weaponized malware. Seems like the most plausible case to me.
A friendly web development tutorial for complete beginners internetingishard.com
392 points by interneting  1 day ago   52 comments top 20
WA 1 day ago 4 replies      
The graphics are nice, but I think you can polish the writing.

1. It took me probably 30 seconds to understand that the text blocks on the start page are summaries of what's in every chapter. Try to read this without knowing that this website is an online book:

> The purpose of HTML, CSS, and JavaScript, the difference between frameworks and languages, and finding your way around a basic website project with Atom.

This doesn't make any sense if you don't know that it describes a chapter in a book.

2. Simplify sentences. Cut adjectives. Shorten. It reads academic. Examples:

> Our goal is to make it as easy as possible for complete beginners to become professional web developers, so if you've never written a line of HTML or CSS, but you're contemplating a career shift, grab a cup of coffee, take a seat, and let's get to work.

This is a single sentence. It's hard to understand. Your readers aren't all native English speakers. This could be turned into:

-> This guide helps you to become a professional web developer, even if you've never written a single line of HTML or CSS.

> They're very closely related, but they're also designed for very specific tasks. Understanding how they interact will go a long way towards becoming a web developer.

Ctrl + F "very": Too often.

A good editor might reduce the text by 25-50%. My secret tip is the Material Design Writing guide [1]. It shows how to write for apps. With apps, the user needs to get a task done as fast and efficient as possible. Write that way.

[1] https://material.io/guidelines/style/writing.html

onion2k 1 day ago 1 reply      
I think it's really good. I often wonder how people learn to make websites these days - I made my first 20 years ago when it was all a lot more straightforward. Even a basic site needs a huge amount of groundwork now.

The only comments I have are that it's technically "web design" rather than "web development", and that there's nothing about taking the work the user has done and putting it on a server somewhere so other people can see it. It'd be a shame for someone to work through the whole thing and not have a working website at the end.

Awesome work.

jordanlev 1 day ago 0 replies      
This is absolutely fantastic!

I recently taught some "intro to web design / development" classes to college students (mostly designers, so coming to it more from a creative perspective as opposed to "coders at heart"), and this is the "textbook" I wish I had had then! The order that concepts are introduced is perfect, and I think the decisions made about what to explain vs. what to ignore are spot on.

When I got to the part with hr and br tags, I was thinking to myself "tsk tsk, the closing slashes are unnecessary"... then the next paragraph explains how they're unnecessary but the rationale for using them is a good idea (in the context of learning).

I also very much appreciate that it's strictly HTML and CSS and doesn't start off with all the extra mental overhead of build tools, compiled languages, server setups, etc.

Really well done!

heymijo 1 day ago 0 replies      
Thank you for this. No specific critiques but I thought I'd give you some context about why this tutorial is a help to me.

The last time I built a web page was probably 2001 and the fanciest it got was frames in Frontpage. Prior to that it was all html in notepad. I do not program.

I am however, trying to get a website up for a new education venture I'm building. I'm using a Bootstrap framework. There's enough old stuff in there that I can figure out how to edit it, but also enough new stuff to throw me for some loops. A lot of the tutorials I've found to explain elements are either too simple or too complex.

Your tutorial seems to be the sweet spot. I flipped to the part about responsive images and it answered some previously unanswered questions I harbored.

When this post hit[0] HN awhile back I got excited by the template but I couldn't figure out how to edit important parts of it because it used SVG images. I didn't know what SVG was or if/how to edit/replace them. Your tutorial is the first simple explanation I've seen regarding SVGs and their place/purpose in the responsive image landscape.

Much appreciated as skimming your site has already been helpful.

[0] https://news.ycombinator.com/item?id=13126228

myfonj 1 day ago 0 replies      
I can hardly judge, but that "Edit Here" "Reload Here" concept, while necessary to understand at first, seems unnecessary to keep after the reader grasps the fact "this is the source file and that is its rendered appearance". A live preview like the complex Thimble [1] or the simple Scratchpad [2] or Dabblet [3] seems far more convenient for exploring the fundamentals of HTML / CSS, especially for beginners.

In fact I have one such "scratchpad" crammed into a data: URI and bookmarked, and keep regularly using it to quickly test almost anything HTML / CSS / XML / SVG / JS related. (I am a coder.)

[1] https://thimble.mozilla.org/ [2] http://scratchpad.io/ [3] http://dabblet.com/

drops 1 day ago 0 replies      
I think it would be more fair to call this "HTML & CSS" tutorial than a "web development" one.
josephjrobison 1 day ago 0 replies      
Very awesome, I look forward to digging in. As a digital marketer primarily, I've taught myself enough to be fairly proficient at HTML and CSS.

The thorn in my side has always been javascript, which I've started and stopped many times. Look forward to seeing your lessons on javascript!

sddfd 1 day ago 0 replies      
This is awesome, thanks! I updated my personal homepage just a few days ago, and it was difficult to google the simplest way to make the layout responsive.

This was because answers often involved adding some overbloated framework to my homepage that was supposed to solve all problems.

After two days of googling around I arrived at very simple HTML + CSS, like you are describing, and did not need kilobytes of "CSS frameworks".

The same applies to Javascript, and it is even worse there. People are using big, big frameworks everywhere, and I wonder if it would not be much easier to do it in plain Javascript. On stackoverflow it is really difficult to find an answer that does not require at least jquery. Are people aware of how to do things in plain javascript, and choosing the frameworks deliberately?

nthot 1 day ago 2 replies      
The site appears to be broken for internet explorer 11. I am not able to follow any links.
throwaway2016a 1 day ago 1 reply      
How do we know this does not fall into the trap of not actually being friendly, but only seeming friendly as viewed through the eyes of an expert / advanced user?

Whenever I see these I wonder if they remembered to take someone with no subject matter knowledge and have them attempt to go through it.

For all I know this site did. I'm just musing out loud.

z3t4 1 day ago 0 replies      
It's refreshing to see a web-dev guide that doesn't focus on a framework.
exprA 1 day ago 2 replies      
HTML, CSS and JavaScript are not hard.

What's hard is being able to teach them to any single person; that requires many sorts of people skills. The irony is that for people with the right mindset, learning the 3 of them, or programming in general, is one of the easiest things they could do.

tangue 1 day ago 0 replies      
It's pretty cool; even on a tricky subject like responsive images the content stays accessible. I hope he'll continue with JS, filling the gap between Codecademy and "first let's npm install webpack, babel and gulp".
morinted 1 day ago 0 replies      
Grammar aside: wouldn't it be internetting?
nik736 1 day ago 1 reply      
Looks good but the scrolling is really weird and laggy for me.
catpolice 1 day ago 0 replies      


untilHellbanned 1 day ago 0 replies      
Complete beginners don't need flexbox.
Yan_Coutinho 1 day ago 0 replies      
Good! I also really like this one about webdev in HTML5: https://www.liveedu.tv/elijahwass99/videos/REABM-html5-codin...
xiaoma 1 day ago 0 replies      
Friendly is a mistake. It should be hostile, unforgiving, difficult and hold just enough promise for one to justify continuing.

Prepare people for what's ahead.

throw0213 1 day ago 0 replies      
They do ignore accessibility in both content and the website design itself. It's frustrating to see all those websites that are trying to teach about design, or even offer design services, but with their own websites containing illegible texts, broken layouts, invalid markup, etc. Heck, even those that claim to aim accessibility and validity often fail to achieve any.

Just a few examples to demonstrate what I mean:

- Grey-on-white color scheme: high brightness, leading to eye strain.

- Font sizes and div widths are set in pixels.

- Fails validator.w3.org validation.

- Light-grey-on-white navigation links: low contrast, leading to barely legible text.

Why does e to pi i equal -1? (2015) [video] youtube.com
352 points by espeed  2 days ago   133 comments top 27
cousin_it 2 days ago 8 replies      
1) e^x is a function whose derivative is equal to its value.

2) e^ix is a function whose derivative is equal to its value rotated by 90 degrees (ie^ix).

3) As x goes from 0 to pi, the trajectory of e^ix always has a velocity vector perpendicular to its current position. For example, when x = 0, the current position is 1 and the velocity vector is i.

4) So the trajectory is a circular arc of length pi, which ends at -1.
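The argument above can be checked numerically (a sketch, not from the video): integrate z' = i*z with naive Euler steps and the point should land near -1 after arc length pi.

```javascript
// Sketch: integrate z' = i*z from z(0) = 1 with naive Euler steps.
// The velocity i*z is the position rotated 90 degrees, so the point
// travels along the unit circle; after arc length pi it lands near -1.
function integrateToPi(steps) {
  const h = Math.PI / steps;
  let re = 1, im = 0;
  for (let k = 0; k < steps; k++) {
    // i * (re + i*im) = -im + i*re, scaled by the step size h
    const dre = -im * h;
    const dim = re * h;
    re += dre;
    im += dim;
  }
  return [re, im]; // approximately [-1, 0]
}
```

With 100,000 steps the result is within about 5e-5 of -1; the small outward drift is the usual Euler-method error, since each step moves along the tangent rather than along the arc itself.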

unholiness 1 day ago 4 replies      
This explanation strikes me as a little too aggressive in throwing out the notation with the bathwater, only reaching its result by redefining the terms we already have intuition for into space-stretching operations that don't work like arithmetic does in my head.

In that sense I think I have the same problem with this proof that I do with the standard one, where you add the Maclaurin series of cos(θ) and i * sin(θ) and match term-by-term with the series for e^(iθ). The problem is, at the point you can actually show equality, the things on one side aren't obviously a rotation and the things on the other side aren't obviously an exponential.

I'm not just hear to yell at clouds. I was given a proof that I truly love by a professor I adore, which I think really does give insight into what all these operators are doing. The best video I can find with it is here:

https://www.youtube.com/watch?v=-dhHrg-KbJ0 (Skip to 7:30 if you're already comfortable with the limit definition of e^x)

The basic summary is:

1) e^(iθ) is equal to (1 + iθ/n)^n for large n

2) That base, (1 + iθ/n), plotted as a complex number, has length approaching 1 and angle approaching θ/n

3) The base squared, (1 + iθ/n)^2, by de Moivre's theorem, forms another point as if the transformation from (1, 0) were repeated twice; that is, the length stays one, and another tiny angle is added for a total of 2θ/n

4) The full result is therefore n such transformations, taking the path along the unit circle and arriving at cos(θ) + i * sin(θ)

phkahler 2 days ago 4 replies      
It's a nice video but it goes so fast I don't think people who don't already understand this stuff will get it.
obastani 2 days ago 0 replies      
The most elementary explanation I ever got for why e^(i pi) = -1 is from the definition

e = lim_{n -> infinity} (1 + 1/n)^n

First, we can generalize this to

e^z = lim_{n -> infinity} (1 + 1/n)^(z n) = lim_{m -> infinity} (1 + z/m)^m

using the substitution m = zn. Therefore,

e^(i pi) = lim_{m -> infinity} (1 + i pi/m)^m

Now, converting the complex number (1 + i pi/m) in terms of polar coordinates (r, theta) yields

r = sqrt(1 + pi^2/m^2) ~ 1

theta = sin^{-1}(pi/m) ~ pi/m

Since the product of two complex numbers is

(r, theta) (r', theta') = (r r', theta + theta'),

we have

(1 + i pi/m)^m ~ (1^m, m * pi/m) = (1, pi) = -1.
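A quick numerical sanity check of this limit (a sketch, not part of the original comment): raise (1 + i*pi/m) to the m-th power by repeated complex multiplication and it approaches -1.

```javascript
// Sketch: compute (1 + i*pi/m)^m by repeated complex multiplication.
// As m grows this should approach e^(i*pi) = -1.
function limitApprox(m) {
  const br = 1, bi = Math.PI / m; // base: 1 + i*pi/m
  let re = 1, im = 0;             // accumulator starts at 1
  for (let k = 0; k < m; k++) {
    const nr = re * br - im * bi;
    const ni = re * bi + im * br;
    re = nr;
    im = ni;
  }
  return [re, im]; // approximately [-1, 0]
}
```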

timlod 1 day ago 1 reply      
I love the Khan Academy video on this:


Sal's enthusiasm at the end is contagious!

nilkn 1 day ago 1 reply      
In Lie theory, given a Lie group, there's a general notion of an "exponential function", which maps elements in the tangent space to the identity to "full" elements of the group.

In this case, our group is the unit circle in the complex plane. This is not circular logic, by the way. If a, b are complex numbers on this circle, then |a| = |b| = 1 and so |ab| = |a||b| = 1.

The identity of the group is the complex number 1 (with plane coordinates (1,0)). So the tangent space to the identity is the vertical line 1 + it, for t a real-valued parameter. In two-dimensional coordinates, that expression looks like (1,0) + t * (0,1).

(To be completely precise, the tangent space is a vector space, not a line displaced from the origin. In particular, the tangent space must contain a zero vector.)

If v is an element of this tangent space, then in Lie theory the exponential of v, exp(v), is defined to be g(1), where g is the unique geodesic on the circle (passing through the identity element 1) whose velocity at time 0 is v.

Visually, this is sort of like taking the tangent vector v = ti, placing its base at 1, then wrapping it around the circle and marking where its endpoint ends up. If the vector is v = pi*i, then it has length pi, so it will end up demarcating an arc length of pi on the circle. Since we're working with the unit circle, this takes us straight to (-1,0).

I'm still leaving a lot out, of course -- most importantly why this notion of exponential has anything to do with the ordinary one.

datainplace 2 days ago 2 replies      
Is it pure chance that this was posted only two or three days after I watched it along with a few other e^(i pi) = -1 videos?

Randomness aside. 3blue1brown makes some wonderful math videos that I find really explain the intuitiveness of some of the ideas. I was unfortunately cursed with a math teacher who for whatever reason required us to memorize until we passed the test. Imaginary numbers were taught as "something that will help you in college"

mitchtbaum 1 day ago 1 reply      
e^(iτ) = 1.

In words, this equation makes the following fundamental observation:

The complex exponential of the circle constant is unity.

Geometrically, multiplying by e^(iθ) corresponds to rotating a complex number by an angle θ in the complex plane, which suggests a second interpretation of Euler's identity:

A rotation by one turn is 1.

The Tau Manifesto http://tauday.com/tau-manifesto#sec-euler_s_identity

skybrian 1 day ago 1 reply      
If you prefer slides over a video, here is a slideshow I wrote:https://docs.google.com/presentation/d/1oMNjkDp-LieSGnZEwNpc...

Or you might prefer Better Explained's article:https://betterexplained.com/articles/intuitive-understanding...

hprotagonist 2 days ago 1 reply      
This seemed like voodoo until I took signals and systems. Then it's just a normal observation about vector sums on the complex unit circle.
lowpro 1 day ago 1 reply      
I've been watching 3Blue1Brown a lot recently, and he's been linked to many times here too. His work is absolutely amazing, and he even wrote a python library to make the animations!
0xbadf00d 1 day ago 0 replies      
Very Nice Video - One of my favourite sub 5 minute explanations is Oliver Humpage's @ Ignite Bristol circa 2011:


Hope you enjoy as much as I did.

1001101 1 day ago 0 replies      
Gauss said that if the reason weren't immediately obvious to someone, they would never be a first rate mathematician. For the rest of us, there's YouTube.
cmollis 1 day ago 2 replies      
I had never understood how imaginary numbers had any bearing on physics (probably because I'd never been taught). But lately I've thinking about how all mathematics must have some natural physical equivalent or relation. E.g. how does multiplication happen in nature.. what does it mean really to multiply something?...how does this happen in nature? For the most part, we take these things as facts and learn the mechanics, but in physics, it seems to me, you really need to understand how these algebraic interpretations translate into physical realities (also, perhaps sort of obvious for everyone reads HN). For someone who enjoys discovering these perhaps obvious things later in life, it's clear how mathematicians and physicists could clearly come to the same mathematical conclusions from completely opposite vectors.. I guess this probably happens all the time.
somewhat_slow 1 day ago 1 reply      
At 2:37, right after finishing explaining adders and multipliers on the line, it starts going way too quickly: where did e^x and some of these infinite sums come from?
dvt 1 day ago 2 replies      
I saw this a while ago, and while it's pretty instructive, it's also pretty confusing. The magic happens in a seemingly innocuous throwaway sentence at around 4:20 (after the 2D plane is introduced):

> ... This can now include rotating along with some stretching and shrinking ...

It's entirely non-obvious WHY we should be okay with rotating all of a sudden. The real answer is not super complicated, but it deals with a couple of amazing relationships between exponential functions and trigonometric identities[1]. So really, we don't have to accept "rotating" as some weird new action we can do when moving into the complex plane, we just have to accept that trigonometry is weirdly related to analysis due to some very cool properties.

[1] https://www.phy.duke.edu/~rgb/Class/phy51/phy51/node15.html

neximo64 2 days ago 2 replies      
For anyone who finds it beautiful, isn't there a bit of humanisation and definition involved (for example, the sine function used in the derivation uses 'pi' instead of 90 degrees)? Not to mention that sine is a human-created function. You could have e to the pi (90 * -1) too, or a different method of defining angles instead of 360 degrees (base 60).
jjaredsimpson 2 days ago 1 reply      
e^x = an infinite series

if you substitute x = iz

you can split the even and odd terms of the infinite series into cos and sin

e^iz = cos z + i sin z

evaluating at z = pi yields

cos pi + i sin pi

-1 + 0i = -1

the exponential function maps the imaginary axis to the unit circle. pi gets mapped to -1.

This shows it's true, but the "why" and real understanding require calculus and the first week of complex analysis. Otherwise you are just parroting a set of facts.
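The endpoint of that derivation is easy to check numerically; here's a quick sketch using only Python's standard cmath and math modules:

```python
import cmath
import math

# e^(i*pi), computed directly from the exponential function
z = cmath.exp(1j * math.pi)

# Euler's formula at z = pi: cos(pi) + i*sin(pi)
euler = complex(math.cos(math.pi), math.sin(math.pi))

# both land at -1, up to floating-point rounding in the imaginary part
print(z, euler)

# the exponential maps the imaginary axis onto the unit circle:
# |e^(i*t)| = 1 for every real t
for t in (0.5, 1.0, 2.0, math.pi):
    assert abs(abs(cmath.exp(1j * t)) - 1.0) < 1e-12
```

This doesn't supply the "why", of course; it just confirms that the facts being parroted are at least the right facts.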

gizmo686 1 day ago 0 replies      
As always with this author, I followed a link to a math explanation video expecting to point out how it is only a superficial treatment that breaks down under rigorous analysis, only to be disappointed in my inability to find any such flaws. Seriously, this guy is awesome. If you need to learn math, see if he has a video about it.

I do have 1 gripe with this video though, and that is his handwaving around "natural".

More specifically, as far as this particular case goes, there is nothing natural about the choice of e^x, or the significance of pi.

As he identifies, we are interested in some function that maps adders into multipliers with the property f(x+y) = f(x)f(y).

To uniquely identify such a function, we need to add an additional constraint. He chose to add f(i * pi) = -1. He justifies this by arguing that pi is the length you would travel along the unit circle to arrive at -1. This is true (and the underlying reason why pi and e end up being natural), but using this argument seems to break the abstraction for me.

Under this construction, in the equation f(i * pi)=-1, "i * pi" is an object in the set of multipliers, and "-1" is an object in the set of adders. Specifically, "i * pi" is a function which takes a plane (or perhaps a point) and rotates it, while "-1" is a function which takes a plane (or point) and slides it.

He then invokes an unstated mapping, g, to convert the multiplier [0] "i * pi" into the real number "pi". He then insists that g(x) gives the distance a point would travel along the unit circle when the multiplier x is applied to it.

At this point, because arriving at the point "-1" [1] from the point "1" through rotation requires traveling pi distance, it makes sense that f(i * pi) = f(g^-1(pi)) = -1

I am still not convinced that (within this construction alone) this is a more natural choice than saying that f(i) = 1, but I would agree that it is one of the two natural choices. To get to f(x) = e^x being the natural choice requires showing that it comes up in all sorts of unrelated parts of math, so it is probably more natural.

[0] It might be better to speak of "rotational multipliers" here, as I am not sure how to naturally define g(x) for multipliers that stretch the plane, instead of or in addition to simply rotating it.

[1] This "1" is again distinct from the adder "1" and multiplier "1", but plays a central role in defining them, so I do not object to its usage.

etatoby 1 day ago 0 replies      
I think Mathologer's take is much more intuitive:


jejones3141 1 day ago 0 replies      
Look at the Taylor series for e^x, and see what you get for e^(i * theta). The terms with i raised to an even power are real, those with i raised to an odd power are imaginary. Group the odds together and the evens together, and compare with what the Taylor series for sin x and cos x give you for sin theta and cos theta. (I'm being a bad person here, ignoring convergence issues, but I want to say that all three are absolutely convergent.)
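That even/odd regrouping can be sketched numerically in plain Python (a rough check only; as noted, all three series converge absolutely, so the regrouping is safe):

```python
import math

def exp_i_theta(theta, n_terms=30):
    """Sum the Taylor series of e^(i*theta), grouping even-power
    terms (real part -> cos) and odd-power terms (imaginary -> sin)."""
    even_sum = 0.0  # terms where i^k is real: +1, -1, +1, ...
    odd_sum = 0.0   # terms where i^k is imaginary: +i, -i, ...
    for k in range(n_terms):
        term = theta ** k / math.factorial(k)
        sign = -1.0 if (k // 2) % 2 else 1.0  # sign pattern of i^k
        if k % 2 == 0:
            even_sum += sign * term
        else:
            odd_sum += sign * term
    return even_sum, odd_sum

re_part, im_part = exp_i_theta(1.0)
# re_part matches cos(1) and im_part matches sin(1) to within
# floating-point precision, illustrating e^(i*theta) = cos + i*sin
```

At theta = pi the real part sums to -1 and the imaginary part to 0, which is the identity in the video.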
agumonkey 1 day ago 0 replies      
I deeply thought the iterative nature was advanced ..

I see how intuitive the scale / rotate mindset is; but I'm a bit sad that iteration is a mishap in their view.

ttoinou 1 day ago 0 replies      
BTW, for those who like complex analysis, here's how to visualize them as a 2D deformation: https://www.youtube.com/watch?v=CMMrEDIFPZY
hgdsraj 2 days ago 0 replies      
Honestly the calculus explanation is the most clear to me (Taylor series expansion of e^x)
hawkice 1 day ago 1 reply      
> (2015)

I'd prefer if admins could change the link to a description of this mathematical fact that's more up to date, and fits the needs of me and my family in 2017.

ageofwant 1 day ago 0 replies      
Why not just write it as e^i = 1, which conceptually and pedagogically makes so much more sense?
faragon 1 day ago 0 replies      
Pure beauty.
Is PostgreSQL good enough? renesd.blogspot.com
370 points by richardboegli  21 hours ago   132 comments top 22
brightball 13 hours ago 3 replies      
I've been attempting to preach the PostgreSQL Gospel (http://www.brightball.com/articles/why-should-you-learn-post...) for a few years now about this exact same thing.

When you look at your database as a dumb datastore, you're really selling short all of the capabilities that are in your database. PG is basically a stack in a box.

When I started getting into Elixir and Phoenix and realized that the entire Elixir/Erlang stack was also basically a full stack on its own... and that by default Phoenix wants to use PG as its database... I may have gone a little overboard with excitement.

If you build with Elixir and PostgreSQL you've addressed almost every need that most projects can have with minimal complexity.

yorhel 20 hours ago 2 replies      
There was a similar talk[1] at FOSDEM, where the speaker describes how, as an experiment, he replaces a full ELK stack plus other monitoring tools with PostgreSQL. He even goes as far as implementing a minimal logstash equivalent (i.e. log parsing) into the database itself.

It wasn't a "we do this at scale" talk, but I'd love to see more experiments like it.

For the impatient: Skip to 17 minutes into the video, where he describes the previous architecture and what parts are replaced with Postgres.

1. https://fosdem.org/2017/schedule/event/postgresql_infrastruc...

cel1ne 19 hours ago 5 replies      
I use PostgreSQL wherever possible. Add http://postgrest.com/ and nginx as a URL-rewriting proxy and you have a performant, highly adaptable REST server.
einhverfr 18 hours ago 0 replies      
Nice writeup though I would add a few things.

Listen/Notify work great for short-term job queues. For longer term ones, you have some serious difficulties on PostgreSQL which require care and attention to detail to solve. In those cases, of course, you can solve them, but they take people who know what they are doing.

Also in terms of storing images in the database, this is something that really depends on what you are doing, what your database load is, and what your memory constraints are. At least when working with Perl on the middleware, decoding and presenting the image takes several times the RAM that loading it off the filesystem does. That may not be the end of the world, but it is something to think about.

Also TOAST overhead in retrieved columns doesn't show up in EXPLAIN ANALYZE because the items never get untoasted. Again by no means a deal breaker, but something to think about.

In general, PostgreSQL can be good enough, but having people who know it inside and out is important as you scale. That's probably true with any technology, however.

Aeyris 21 hours ago 7 replies      
Is anyone actually utilising a recent version of PostgreSQL for full-text searching beyond a hobby project? How do you find the speed and accuracy versus Elasticsearch?
mattferderer 12 hours ago 0 replies      
Very nice article for those of us who have never used PostgreSQL much. I've been starting to use it with Elixir, and this gives me a good understanding of why someone would use it, especially when starting a new app.

Out of curiosity, does anyone have a favorite article saved that does a great comparison of when to use certain databases?

seibelj 11 hours ago 0 replies      
Yes, for all projects and small businesses I start, Postgres and Redis is what I use from the beginning. Then if it ever gets to the point where I need a different DB for something, I replace components with the new tool. People get fascinated with these fly-by-night data stores and put their operations at serious risk. Start with the tried and tested technologies, then carefully augment your stack as needed.
jaequery 20 hours ago 2 replies      
Is it good enough? Yes. In fact, it's probably overkill for most. I think the question of "good enough" would've been perfect for SQLite.
mamcx 9 hours ago 1 reply      
Now imagine if we understood that the relational model is not just for data storage but can be used for everything.

The closest thing (1) was dBase/FoxPro. You could actually build a full app with it. Send email from the database? Yes. Isn't that wrong? It's only "wrong" because RDBMSs (2) made it wrong, not because it is inherently wrong. Why is splitting an app into separate languages/runtimes/models better than one integrated whole?

(1): Taking into consideration that neither Fox nor any "modern" RDBMS has taken the relational model to its full extent.

(2): An RDBMS is a full package with a defined role and limited capabilities. A relational-style language would not be an exact replica of that; it isn't even required to implement a full storage solution.

The biggest mistake the relational guys have committed is to always think in terms of full databases instead of micro-databases. Ironically, kdb+ (or Lisp? or REBOL?) could be the closest thing to the idea (where data and code are not enemies but friends).

mooneater 10 hours ago 0 replies      
Postgres' awesome extensible type system means it will continue to increase in functionality much more easily than most comparable DBs.
https://www.postgresql.org/docs/9.6/static/extend-how.html
agentgt 12 hours ago 1 reply      
We had to learn this the hard way. We use many of the data tools/services the article mentions, and while we still use them, whenever it gets massive we actually go back to PostgreSQL.

For example, for our internal analytics/logs/metrics we use ELK and Druid, but believe it or not, these tools, despite their purported scaling abilities, are actually damn expensive. This new cloud "elastic" stuff cheats and uses lots and lots of memory. For a bootstrapped, solvent, self-funded startup like us, we do care about memory usage.

For customer analytics we use... yes Postgresql.

For counters and stream like things we don't use Redis we use Pipelinedb (Postgresql fork). For Cassandra like stuff we use Citus (Postgresql extension).

Some of our external search uses SOLR (for small fields) but Postgresql text search is used for big fields.

The only part of our platform where we don't really leverage PostgreSQL is the message queue, and that's because RabbitMQ has done a damn good job so far (that, and the JDBC driver isn't asynchronous, so LISTEN/NOTIFY isn't really useful).

rosser 21 hours ago 7 replies      
Using an RDBMS as a work queue is an anti-pattern, but if you're going to do it, you probably can't do much better than LISTEN/NOTIFY.
anko 17 hours ago 4 replies      
I love Postgres, but the one thing I think sucks is its COUNT() performance.

I've read all sorts of hacks but I would love for someone to solve this for me!

TheAceOfHearts 16 hours ago 1 reply      
It's really amazing how far you can get with a relational database. If you have very minimal constraints, keeping everything in a single place can make life so much easier. Configuration hell is real. I hadn't considered using PG for storing binary data, but I've hacked together a few toy projects where I used Mongo and just shamelessly shoved everything in there.

I have some slightly tangential questions, which I'd love to hear people's thoughts on: How do you decide where to draw the line between what's kept and defined in the application and in the database? For example, how strict would you make your type definitions and constraints? Do you just accept that you'll end up duplicating some of it in both places? Also, how do you track and manage changes when you have to deal with multiple environments?

crudbug 12 hours ago 1 reply      
Has anyone played with the threading model within Postgres?

I was reading the documents; it looks like for every client request Postgres forks a new process and uses a shared-memory model.

Using multiprocessor threads/coroutines might be useful for scaling it further.

mooneater 11 hours ago 0 replies      
How do you get a "hotstandby replica for $5/month"?
mooneater 10 hours ago 0 replies      
Another big plus for postgres: PL/Python, PL/R, etc
ckdarby 10 hours ago 0 replies      
Why is the font so small on this site?
ausjke 11 hours ago 0 replies      
Still using PHP+MySQL here, as I can easily find so many documents about their various uses.
hartator 10 hours ago 2 replies      
I am more on the MongoDB bandwagon. Schemaless makes prototyping so much easier. And no migrations!
api 11 hours ago 1 reply      
From my experience with both PostgreSQL and RethinkDB (and other NoSQL stores):

For SQL, complex queries, and data warehousing: yes. It's an excellent database and I'm not sure why you'd pick another SQL DB unless it were a lot better on point two.

For high availability and scaling: no, absolutely not.

The problem with the latter is an arcane deployment process and arcane error messages that provide constant worry that you're doing something wrong. It's a many-week engineering project to deploy HA Postgres, while HA RethinkDB takes hours -- followed by some testing for prudence... our testing revealed that it does "just work", at least at our scale. We were overjoyed.

The docs for Postgres HA and clustering are also horrible. There are like five different ways to do it and they're all in an unknown state of completion or readiness.

Of course if/when we do want complex queries and more compact storage, we will probably offload data from the RethinkDB cluster to... drum roll... a PostgreSQL database. Of course that will probably be for analytics done "offline" in the sense that if the DB goes down for a bit we are fine. HA is not needed there.

TL;DR: everything has its limitations.

treve 11 hours ago 6 replies      
Or just MySQL. Popular choice, unpopular opinion. I trust it more because it gives me a tried and tested path when I need replication (which tends to happen rather early). My understanding is that Postgres replication is not nearly as battle-tested.
Public Must Fight against Prism and Tempora Surveillance (2013) spiegel.de
237 points by fergeson  2 days ago   23 comments top 8
jmcdiesel 2 days ago 2 replies      
It will always be true.

We're not entitled to anything. Nothing. We're not entitled to life. We're not entitled to breathe. We're not entitled to a damned thing.

People feel entitled to free speech, to living itself, and there is no such entitlement. You have to ensure, yourself, that the things you feel you should have, you have. If you don't they will be taken away over time.

Taek 2 days ago 2 replies      
> Where is the outrage?

Seems to me that most people either believe that their government is the good guy and always will be or they are comfortable enough in their lives and current security that they don't feel motivated to make a fuss.

normalperson123 2 days ago 0 replies      
Whenever I see an article like this, I am reminded that these authors are forced to take into consideration the timing and appearance of their articles. For instance, they would be unwise to release a scoop before or during the Super Bowl. In other words, in the fight to preserve liberty in the world, they are reduced to the same tactics as sleazy advertising and PR companies. It is so sad that such an important, fundamental and altruistic cause is so fragile.
okreallywtf 2 days ago 0 replies      
While I agree with concerns over privacy, I still think that overall, our current obsession with it will actually have detrimental long-term results because of the cynicism it is creating. Because surveillance has continued or increased over time regardless of which party controls the government, it creates a false equivalency that has convinced a lot of people that "it's all the same" and that only some kind of dramatic (and likely violent and disruptive) revolution will solve the problem, while most of us have done little or nothing to actually change things short of complaining about them.

How many people on these forums have contacted their representatives on this issue? How many have considered running for office themselves? My guess is we've all spent more time griping to each other and on online forums that are not monitored by government (for opinions at least, they unfortunately might be monitored for other reasons) than we have on any kind of productive effort.

Privacy is a big concern, but is it as big as nuclear proliferation? Is it as big of an issue as climate change (no, I would say). Many of us who live in the US currently can afford to be worried about our privacy, we're mostly comfortable, safe, and well fed and can be concerned with our privacy despite the fact that it doesn't impact almost any of us. On top of that, the cynicism created by surveillance has convinced many people to waive their right to vote because they assume, correctly in many cases, that surveillance will continue in some form under any candidate, despite the fact that many other important (in some cases more important) issues will be treated very differently.

I am not debating that it is an issue but the truth is, I am more concerned for the many people in the richest nation on earth that don't even have enough access to computers and the internet to even be worried about surveillance and its hard to be surprised that this issue doesn't resonate with many people. We need a public discussion on this issue and we need to debate what privacy we're willing to give up for our safety as law enforcement has less and less ability to monitor criminals.

The sad thing is that if governments had been up front with their citizens and acknowledged the challenges in combating crime in an era where wiretaps and other previously available tools were becoming obsolete people might have been willing to accept intrusions into their privacy with acceptable civilian oversight; instead it was done without our knowledge and consent and now I'm afraid we'll collectively chop off our own nose to spite our face.

graedus 2 days ago 1 reply      
Good and relevant article, title should include (2013) though
tps5 2 days ago 1 reply      
> An appropriate real-world metaphor for the program might be something like this: In every room of every house and every apartment, cameras and microphones are installed, every letter is opened and copied, every telephone tapped. Everything that happens is recorded and can be accessed as needed.

This is why I hate the media.

I don't understand how making this analogy helps to explain the spying program discussed in the article. Why do we need "an appropriate real-world metaphor" in order to describe a real world spying program that actually exists? I strongly feel that this is a misleading way to relay information, and I'm sure it confused a lot of readers.

seycombi 2 days ago 2 replies      
a good place to start https://www.privacytools.io/
15thEye 2 days ago 0 replies      
This program is mammoth. Three years on, what's stopped them?
Show HN: Rumuki, a prenup for sex tapes rumuki.com
400 points by nathankot  2 days ago   198 comments top 61
always_good 2 days ago 10 replies      
I think "a prenup for home videos" might be the worst marketing angle possible. It sounds like a legal contract.

But, worse, it makes it sound like it's only something you should use if you don't trust the other person.

Instead, they should market it as "2FA for sex tapes" rather than framing it as a trust issue, and point to things like the "Fappening" iCloud social-engineering hack.

I can't imagine many people using this otherwise.

Also, don't forget that your target audience is mostly women who bear the majority of the shaming for a leaked video instead of high-fives. "Don't be the next Jennifer Lawrence" is going to be more effective marketing.

nmat 2 days ago 4 replies      
It's an interesting idea, but I don't think it will work in practice. These moments happen naturally because the couple trust each other. Convincing my girlfriend to download a specific app so that we can film something is a bit odd.

People don't care that much about how things are encrypted or about complex security mechanisms; they want something that is easy to use. Snapchat is easy to use, for example. Everyone knows that a snap can be saved, just like everybody knows that I can film a phone playing a video. Given that the practical security of both apps is the same, people will go for ease of use, and Snapchat wins there.

JackC 2 days ago 0 replies      
Two thoughts:

- I think I would sell this as a private camera app ("protect you and your partner from prying eyes"), rather than by emphasizing the two-party crypto angle ("protect you from your partner"). Like, make the front-line features be: "it's a camera where each photo album is protected by a secret PIN, and if someone takes your phone but doesn't know the PIN, they can't tell the album exists! Oh, and if you want to you can share the album with someone else who has the app, but you can always delete something from the album and it'll be deleted from the shared version as well."

This way you're selling it as something that's better than the built in camera app, with some bonus safer-sharing features that will just happen to reduce privacy violations in practice, instead of emphasizing the distrust-of-your-partner-solved-by-easily-hacked-crypto thing. When someone asks their partner to install, it's not "because I don't trust you" but "because it's more private for us."

- As a lawyer, I think the legal-prenup-built-into-app approach would be pretty interesting. For example, right now the way US law works, it's much, much easier to get revenge porn taken down if you happen to have been the one holding the camera, than if it was your partner holding the camera. If you were holding the camera you own the copyright, and we have robust legal-technical tools for copyright takedowns, whereas we only have patchy state-based laws around invasion of privacy.

So could we have camera apps that actually reallocate the rights between the photographer and subject? Like imagine a shutter button with a bunch of fine print like, "by pressing this button I express an intent to share authorship of the resulting work with all human subjects portrayed, and agree that consent of all authors must be obtained to authorize any copy."

I'm not an expert and not sure what would be possible, but it would be interesting to talk to legal advocates in the revenge porn area and ask what legal agreements people could have entered beforehand that would have best protected them, and see if any of them could cleanly be engineered into the UX of a private camera app -- or even into Snapchat et al.

ChicagoBoy11 2 days ago 1 reply      
I'm incredibly impressed by the design work on this. As someone who has studied design the past few years, the landing page alone is filled with little details and choices that taken together just succinctly and beautifully communicate what this app does. Definitely inspiring to know Nathan just hacked on this on his own -- I hope it is a terrific success!
dsacco 2 days ago 1 reply      
This is a cool idea.

I expressed an upfront concern about reverse engineering in another comment directly to the OP (no DRM is foolproof, etc). After skimming through the whitepaper I'd like ask you a few implementation questions about the feasibility of client trust:

Can you tell me how the device token/keys are stored locally and accessed by the application? I understand the crypto itself (e.g. libsodium), but I'd like to know how you're protecting data on the client insofar as you can.

Can you tell me what your methodology is for determining if an application has been manipulated or altered?

How are you specifically obfuscating sensitive data or otherwise making the DRM bypass difficult (e.g. obfuscating data in .so files, etc).

I'm not trying to grief you here, I just want to talk about technical protection mechanisms in place. To your credit, you explicitly admitted that DRM is fundamentally not a foolproof guarantee (though that's different from saying it's not effective...). I think your app would mitigate most scenarios where an ex would try and expose the other party.

mosselman 2 days ago 3 replies      
I find it odd how people are hating on the idea here, while this could obviously be a lot better than the alternative... you know unencrypted video files that someone can access whenever they want.

The concerns I'd have with this myself would be that I'd have to trust the website. As it is I trust my spouse a lot more than some cloud service and I don't expect that to change.

hellofunk 2 days ago 5 replies      
This app would have saved my career a couple years ago. Now, when I visit the old office to say hi to my ex-colleagues, I still can't get past the nickname they gave me because of what happened.
shiado 2 days ago 4 replies      
This is a really cool concept. But ultimately you cannot get past the analog hole:
https://en.wikipedia.org/wiki/Analog_hole
mrcactu5 2 days ago 1 reply      
pateldeependra 2 days ago 3 replies      
This is not foolproof. What if someone records the video from another phone while it's playing? Similar to Snapchat.
nickpsecurity 2 days ago 0 replies      
It's a neat project. Others have fleshed out most details. It could even get uptake if spread on social media. Let me focus instead on the more devious possibilities.

"They are encrypted and saved on your devices. Recordings are never sent across the internet and never touch our servers. "

" it is impossible for third party attackers to gain access to your videos without local access to the network your devices are on (that includes us!)"

This claim is made by every developer of security/privacy apps when content stays on the device. It's actually false. They could embed a backdoor in the current or a future release that shares the files. The app already requires networking permission when managing videos. Actually, a service like this getting extremely popular could lead to one of the largest leaks of nude pics in history. One person hacking the box containing the source/credentials, getting on the development team, or being the original author with trolling intent could subvert it into a giant store of pics/videos. Get it to send the data back when on WiFi to avoid high data bills. Thumbnails of videos sent first to filter out uninteresting parties.

I'm not accusing the author of this at all. I'm just assessing security risk from side I'm good at: subversion. The subversion risk here is spectacularly above average as a network effects developing around this app lead to many eggs in one basket that's probably easy to grab. Or was until the author read my comment and beefed up security in a panic. ;)

stefs 2 days ago 1 reply      
I just read the page again and realize this is for a single view and a maximum of 7 days, for sex tapes recorded while both partners are present.

In my opinion this is one way to do it, but it prevents me from having a kind of "library" of videos to share with my partner.

How about a shared library that is watchable until access is revoked by one party?

There's also the question of sharing videos with a person who's not present (i.e. a long-distance relationship).

jt2190 2 days ago 2 replies      
Where's the legal contract part? If my video is leaked by the other party, I'd like to at least have a clear contract so I can issue takedown notices, sue for copyright damages, etc.

(Edit: Not a new idea, certainly, but well executed so far.)

kazinator 2 days ago 1 reply      
Only two keys?

The lack of threesome (and beyond) support shows somewhat of a lack of vision.

WheelsAtLarge 2 days ago 0 replies      
Great idea. I know it's hard to enforce 100% due to the analog hole, but I think it has a future. Even a little enforcement is better than none. My suggestion is to give each video a time to expire. You should also get someone with a good security reputation to do a security audit on the encryption; that way you can stand behind your encryption claims.
secfirstmd 2 days ago 3 replies      
This is a clever idea that I feel has lots of uses, not just sex tapes.
xkxx 2 days ago 0 replies      
Not everybody is in a monogamous relationship. Is it possible to use the app for more than two devices (i.e. more than two people)?
nkkollaw 2 days ago 0 replies      
Even if the payoff didn't make me think this was a template contract for sex videos, I don't get it.

The problem with these kinds of videos isn't trusting the other party in the present, but in the future.

What stops one from doing a screencast of the video, and then publishing it months later when you break up?

kristerv 2 days ago 2 replies      
haha, great. Awesome landing btw. I'd love to be updated on how your project goes. Blog?
xrd 2 days ago 1 reply      
I invented this four years ago, minus the crypto. http://web.archive.org/web/20110210234301/http://nakedescrow...
jessaustin 2 days ago 1 reply      
I didn't notice this issue addressed anywhere: what if the blue adversary/sex partner simply steals the red phone for a few minutes in order to grant access to the blue phone? Will red be informed of this? Is there some sort of passphrase that only red knows, to prevent this?
spaceboy 2 days ago 0 replies      
Even though web beacons/trackers like Google Analytics are fairly harmless, I still wouldn't trust them in this app, because they often sit close to the main app's code and can be MITM'd to do bad things like send back snippets of a recording, or metadata about a recording like the name of the video file. That is, of course, if this app has such beacons. I haven't sat between the traffic of this app (ab)using Burp Suite[1] or Fiddler[2] to give a proper opinion.

[1] https://portswigger.net/burp/

[2] http://www.telerik.com/fiddler

ivanhoe 2 days ago 0 replies      
If it's shown on the screen it can be copied, so in the end it's again all about trusting that person. Still, this type of protection is cool, as it makes it harder for accidents to happen, or if the phone gets stolen it gives you some level of protection.
tambourine_man 2 days ago 0 replies      
I agree with the consensus here that the marketing angle is delicate and may need work, but this is genius and the landing page is very nicely done.

The image of the two phones on top of each other is illustrative and just a little bit suggestive, which is clever and tasteful IMO.

Good job.

JepZ 2 days ago 0 replies      
Reminds me of MC Frontalot - Secrets from the future:
https://frontalot.bandcamp.com/album/secrets-from-the-future
rdl 2 days ago 0 replies      
This is an interesting idea and a real problem.

Not sure how best to market it. Maybe "keep access within a couple" -- emphasize protection from outsiders, people who gain access to one of the couple's devices temporarily, etc.

Then, as an aside, make it so either party can irrevocably end access at any time.

Don't mention "break up" so prominently. "Pre-nup" has lots of bad connotations.

Would be cool if you could cover some other files, too (text, etc.). A way for people to collaborate on something and then delete drafts. Video and pictures are obviously a lot of it, though.

cocktailpeanuts 2 days ago 0 replies      
This is a great idea. Now I can tell girls we're all safe because we're using this app, and record as much as I want.

Then come home and use my iPad to record playback. Perfect plan! Win-Win!

dbg31415 2 days ago 2 replies      
What happens when one person dies? Is the other cut off, or is the system smart enough to have a dead man's switch option that lets you allow access if no response is given in X hours / days?
EGreg 2 days ago 0 replies      
Why not make this work for files of any kind?

By default, a message could disappear shortly after playback ended a la Snapchat.

But you could store the encrypted version and bring it back anytime with the consent of both parties.

RRRA 2 days ago 0 replies      
Would be nice if it were an n-of-m key scheme. Could be useful to film at a protest but only be able to unlock the footage together with more people from the newsroom, while still syncing it to them.
tedmiston 2 days ago 0 replies      
A multi-person key is interesting.

In practice, however, any system like this, Snapchat, etc is easily defeated with a USB cable and QuickTime's record device screen feature. I suppose it could be useful to others depending on your "threat model", but generally it offers no protection from a savvy computer user after you've unlocked it once.

Edit: Not sure why the downvotes. This is completely correct and you can test it yourself with any iOS app. I've added more explanation to clarify.

codeisawesome 1 day ago 0 replies      
This doesn't solve the problem it's marketing to: people can record the screen as soon as the 7 day access is granted. It makes that problem worse as well, if someone doesn't consider this possibility and trusts blindly.

I agree with the other comments that this should be marketed as 2FA instead.

euphetar 2 days ago 2 replies      
It seems like a good idea, but won't work.

I can't imagine two horny teens going:

M: Show me something hot baby

F: Sure! But please install that app to take all the necessary security precautions before we proceed with our sexting...

This will gain traction among camgirls and other people who produce private porn (I just invented that term because I don't know what one would call porn distributed on an individual basis). So that's like, pervware?

maruhan2 2 days ago 0 replies      
Can't you just screen-record it when you first get the access?
socrates1998 2 days ago 1 reply      
I like it, but I don't see people paying for it. Your market is mainly women in relationships. I don't know how you would get them to download it and actually use it.

The best marketing angle would be to get a high profile celeb to get behind it, maybe one that has had a sex tape leak.

Honestly, even then, I don't know how much they would use it.

It sounds sexist, but women (on a large scale) just aren't into this type of security thing.

Women would rely on the relationship and the trust built up into it to make sure their sex tapes don't leak.

Honestly, I really like this type of thing for sensitive business stuff or other security oriented material.

You make the boss/owner/manager the guy with one key and then he can sort of decide who has the other key on a need to know basis.

I really like the idea, but the application use is just off in my opinion.

eriknstr 2 days ago 0 replies      
Very nice work on the website. It looks great and it's highly informative.
jt2190 2 days ago 1 reply      
Where's the legal contract part? If my video is leaked by the other party, I'd like to at least have a clear contract so I can issue takedown notices, sue for copyright damages, etc.
espeed 1 day ago 0 replies      
Initially I thought this was going to be a mechanism for mutually assured destruction, not mutually assured deletion.
juskrey 2 days ago 0 replies      
Too bad QuickTime lets anyone do iPhone screen video capture.
org3432 2 days ago 0 replies      
Although a bit flawed for its current use case, it's a great idea; there are likely many other use cases for mutual authorization.
terhechte 2 days ago 0 replies      
Great idea, fantastic execution, I hope you will succeed with this. I'd extend the usage scenario to picture sharing though.
drdaeman 2 days ago 0 replies      
Just curious. Why encrypt one key with another instead of using some sort of secret sharing scheme (e.g. Shamir's)?
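For the curious, a k-of-n Shamir split is only a few lines. This is a toy sketch over a prime field; the modulus, function names, and parameters are illustrative choices, not anything this app actually uses:

```python
import random

# A large prime modulus for the field; 2**127 - 1 is a Mersenne prime.
P = 2**127 - 1

def split(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [
        (x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
        for x in range(1, n + 1)
    ]

def combine(shares):
    """Recover the secret by Lagrange interpolation at x = 0."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, -1, P) is the modular inverse (Python 3.8+).
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

With k = n = 2 you get the "both phones must consent" behavior, and unlike encrypting one key with another, it generalizes to k-of-n.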
jasonlingx 2 days ago 0 replies      
Unless you're a porn star you're likely worse off using this if it makes you feel safer making a sex tape...
usgroup 2 days ago 0 replies      
Hmm, thinking about it, better still would be some kind of cooperative playback whereby it's not possible to play or record the video without direct cooperation from the other device. One device could store a one-time pad for the video and the other could store the XOR with the one-time pad. Both have to cooperate to play back. Could be extended to N devices using Shamir's algorithm.
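The one-time-pad construction is trivial to sketch (hypothetical helper names; a real implementation would do this per frame and never persist the recombined output):

```python
import os

def split_frame(frame: bytes):
    """Device A keeps `pad`, device B keeps `masked`; neither alone reveals the frame."""
    pad = os.urandom(len(frame))
    masked = bytes(a ^ b for a, b in zip(frame, pad))
    return pad, masked

def recombine(pad: bytes, masked: bytes) -> bytes:
    # XOR is its own inverse: (frame ^ pad) ^ pad == frame.
    return bytes(a ^ b for a, b in zip(masked, pad))
```

The obvious cost is storage: each party holds as many bytes as the video itself.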
voidabhi 2 days ago 1 reply      
"Your content is never stored on, or sent to our servers."

Are you kidding me? How will you get funded?

anjc 2 days ago 0 replies      
This is a good idea. Ignore the people saying it doesn't have a use.
Numberwang 2 days ago 1 reply      
Do an external recording while you have access and this solution fails.
jwatte 2 days ago 0 replies      
Because cell phones don't support screen capture?

(No, wait, they do!)

napolux 2 days ago 0 replies      
This is so f++king cool. But it will never work, as wonderfully explained by https://news.ycombinator.com/item?id=13629076
ommunist 2 days ago 0 replies      
Delete the key and you face indefinite jail in the US.
bussiere 2 days ago 0 replies      
And it's still possible to film the phone, no?
romanpoet 2 days ago 0 replies      
You know, this sounds absurd, but I like it.
epynonymous 2 days ago 0 replies      
nathan, what did you use for the animation on the "how it works" page? That's pretty nifty.
sametmax 2 days ago 0 replies      
If you film something you want to keep for yourself with a phone, you already lost. Film with a camera with no network communication, and keep the only physical copy in a safe.
mirimir 2 days ago 7 replies      
Ummm, so shoot a video of the playing video. Or hack into the display circuitry. With that, you could stream a copy somewhere, even if both parties were present.
RegulatoryRoger 2 days ago 2 replies      
In many places, paying for sex is illegal. Paying to make a pornography video isn't. Could this be used to build the Uber for sex?
raldi 2 days ago 0 replies      
Did you patent this?
bbcbasic 1 day ago 0 replies      
I wonder how this plays with the 5th amendment issues mentioned here https://news.ycombinator.com/item?id=13629728.
epynonymous 2 days ago 0 replies      
pretty awesome, and simple.
SeriousM 2 days ago 1 reply      
... Or just don't trust everybody...
Big Picture of Calculus (2010) [video] youtube.com
381 points by espeed  1 day ago   57 comments top 11
datainplace 1 day ago 3 replies      
Now I know someone is browsing my Youtube history and posting to HN. Youtube really is the math teacher I never had.

Probability Models and Axioms https://www.youtube.com/watch?v=j9WZyLZCBzs

The Exponential Function https://www.youtube.com/watch?v=oo1ZZlvT2LQ&t=100s

Vector Space https://www.youtube.com/watch?v=ozwodzD5bJM&t=36s

Tree cutting fails: https://www.youtube.com/watch?v=JHZkR6UVegY

seycombi 1 day ago 4 replies      
If you want to get into the details of Calculus

Professor Leonard https://www.youtube.com/user/professorleonard57

Herbert Gross MIT Calculus Revisited: Multivariable Calculus https://www.youtube.com/playlist?list=PL1C22D4DED943EF7B

Herbert Gross MIT Calculus Revisited: Calculus of Complex Variables https://www.youtube.com/playlist?list=PLD971E94905A70448

Herbert Gross MIT Calculus Revisited: Single Variable Calculus https://www.youtube.com/playlist?list=PL3B08AE665AB9002A

MIT 18.02 Multivariable Calculus, Fall 2007 https://www.youtube.com/playlist?list=PL4C4C8A7D06566F38

MIT 18.02SC: Homework Help for Multivariable Calculus https://www.youtube.com/playlist?list=PLF07555F3CC669D01

MIT 18.01 Single Variable Calculus, Fall 2006 https://www.youtube.com/playlist?list=PL590CCC2BC5AF3BC1

MIT 18.01SC: Homework Help for Single Variable Calculus https://www.youtube.com/playlist?list=PL21BCE50ABFF029F1

shaftway 3 hours ago 0 replies      
Honestly I couldn't slog through the video. It started off with "let's use letters" and immediately got to things like "the right letters for this case are 'df' and 'dt'". This just feels like "here's a bunch of magic that you'll just have to memorize". Reminds me of my favorite quote:

> And then Satan said "put the alphabet in math".

When I learned the basics of calculus it was via simple problems that we wanted to answer. And for a while we estimated the answers by brute forcing approximations. I still drop back to this when I can't remember how to do something and need to re-discover parts of calculus.

mrcactu5 1 day ago 0 replies      
I was a teaching assistant at UC Santa Barbara for 3 years, and I can tell you, students might slog through the calculations, but none -- or precious few -- really get the big picture.

Mostly, it is that calculus pedagogy is a disaster. And it hasn't been updated to include recent advances in technology (such as Data Science)

Strang is an amazing lecturer. I can tell you, that far into your Masters and PhD these same basic issues resurface. There's at least one project I can think of where the entire problem hinges on taking one derivative.

billyzs 1 day ago 0 replies      
Gilbert Strang also has an excellent series on Linear Algebra on MIT OpenCourseWare [https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra...]
pknerd 1 day ago 1 reply      
Thanks for sharing such an amazing thing. I am done with my studies, but it should help me understand this properly so I can teach my kids in the future, so that they don't just fill up pages with dy/dx without getting the idea behind it.
huangc10 1 day ago 1 reply      
I find that most people understand the general concepts of basic calculus. What's hard to wrap your head around is when you start getting into proofs (i.e. delta-epsilon, trig, e... and those are just some general ones).

I suppose that's how they differentiate the good students from the just-okay students. Personally, I don't think YouTube videos will help that much when it comes down to the nitty-gritty. You really have to sit down, think, and understand.

stephengillie 1 day ago 1 reply      
Having had a non-traditional route through learning various math topics, I'd like to advocate the teaching of calculus (the understanding of the accumulation of infinitesimally-small changes) before the teaching of trigonometry (the relationships between different geometric shapes, and the resulting association with waves).

In my (humbly biased) opinion, it's easier to learn how trigonometric concepts operate with a calculus background, than with merely an algebra-and-geometry background. Trigonometry complicates calculus education - knowing trigonometry may facilitate better calculus understanding to some, but not knowing trigonometry doesn't necessarily complicate learning calculus concepts. The trigonometry-first tradition in math education reflects a "number-phile" bias, not the most optimal route for humans to ingest and retain concepts.

Edit: The "number-phile" bias is clear - we want our teachers to be people with passion on the topic, as we're more likely to get a better understanding of the topic. Understanding sine waves as the result of an integral was much easier for me to understand than as the result of a complicated calculation.

peetle 1 day ago 0 replies      
I love Gilbert Strang, but _by far_ my favorite book on calculus is "Calculus Made Easy" by Silvanus Thompson. It supplies a very elegant and powerful set of tools to derive many of the "fundamentals" of calculus.
brightball 1 day ago 10 replies      
What would be some practical "daily life" applications of calculus? I've been discussing a focus on teaching with applicable examples in a lot of subjects, but this is one where I've been struggling a bit.
alfonsodev 1 day ago 3 replies      
Thank you for this video. Is there a platform where you could aggregate all these resources, mapping them to a knowledge tree? So that you could have the big picture and then zoom in to videos on specific topics. Something like Khan Academy's knowledge map[1], which you could customise to add your own content (this video, for example).

[1] https://www.khanacademy.org/exercisedashboard

DigitalOcean introduces Load Balancers digitalocean.com
282 points by AYBABTME  11 hours ago   119 comments top 24
tachion 10 hours ago 8 replies      
"No ops needed". Every time I see such piece of PR I cry a little inside. Sure, spread such things even more, so that everyone around, devs, managers, business take it even more seriously and build more horrible, unsustainable, overpriced, insecure and failing infrastructures and services. No ops needed in the serverless lambda cloud! ;)
nailer 9 hours ago 1 reply      
DigitalOcean peeps: a lot of cloud provider load balancers (specifically: AWS ELBs and Heroku's various HTTPS products) are currently really slow due to:

- No ECC cert support (slowing down initial connection time)

- No HTTP/2 (so no multiplexing, and text based protocol, slowing down fetching the actual page)

Do DO Load Balancers support these?

simplehuman 10 hours ago 11 replies      
I am curious to hear what people use DigitalOcean for. It's great for running one-off servers and for WordPress, Ghost, or a LAMP stack, but I can hardly imagine people using load balancers like on AWS. Does anyone have a use case? Thanks.
activatedgeek 9 hours ago 3 replies      
It is very interesting to see how DO is expanding its portfolio slowly and steadily. Does anybody have relatively large-scale (>50k users), mildly mission-critical applications running on DO? Can you share your experience with existing services?
zalmoxes 10 hours ago 2 replies      
I don't know if DO plans to provide a managed Kubernetes offering similar to GKE, but if they did I would use it.

Having a load balancer is necessary to integrate with k8s, similarly to GCLB and AWS, so this is a step in the right direction.

plainOldText 9 hours ago 3 replies      
Could someone more knowledgeable please provide some insight into why using their load balancers would be preferred over, say, having regular Droplet instances act as load balancers by leveraging nginx?
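For context, the DIY route is only a few lines of nginx config on an ordinary Droplet (a sketch; the upstream addresses are placeholders). What a managed product presumably adds is exactly what this snippet doesn't cover: active health checks, TLS termination with managed certificates, and the balancer not being a single point of failure itself:

```nginx
http {
    upstream app_backend {
        least_conn;                                         # send to the least-busy backend
        server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;  # private-network droplet IPs
        server 10.0.0.3:8080 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```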
SparkyMcUnicorn 10 hours ago 1 reply      
Linode's load balancers have clearly defined support for 10,000 concurrent connections. Am I missing where the limits are on DO's?
chrissnell 10 hours ago 0 replies      
Does anybody know what they're powering this service with? I presume that it's software-based: nginx, haproxy, traefik, or similar.
tryrobbo 9 hours ago 0 replies      
Huh, I'm impressed DO has managed this long without them. I can see a lot of use cases for this, especially failover.
CrLf 10 hours ago 4 replies      
Load balancers are nice, but per-client private networking would be better. Just a thought.
iDemonix 10 hours ago 1 reply      
I was looking forward to this, but the price point is too high; I could run VMs that use keepalived or similar for cheaper.
cma 6 hours ago 0 replies      
Does anyone know why egress bandwidth on Google's cloud is so expensive compared to Digital Ocean?
rc_bhg 8 hours ago 0 replies      
And linode introduces high memory instances. Today is a good day for all of us!
edpichler 2 hours ago 0 replies      
I like DigitalOcean's solutions, but I am currently using nginx on a DigitalOcean Droplet for USD 5/month, and it's working very well.
autotune 9 hours ago 0 replies      
Curious how these handle high traffic. If no pre-warming is needed and it can handle higher traffic spikes, I can see this being a potential alternative to AWS ELB.
borplk 9 hours ago 0 replies      
Since the topic is relevant, I was disappointed with how basic Amazon ELB is.

Many features such as weighted or IP-based routing are missing.

I know it's possible to achieve that with other options like Route53 or running your own load balancers behind ELB but for my basic needs and projects that's too much cost and complexity.

I just want a "load balancer as a service" that has a decent feature-set.

lykron 10 hours ago 1 reply      
DigitalOcean seems to be going after Linode. Will be interesting to see how Linode responds.
bogomipz 10 hours ago 3 replies      
I was under the impression that DigitalOcean was mostly lower end, i.e. the $5.00 or $10.00 a month instances. Can someone say what the use case is for a load balancer in front of small instances?

Is DigitalOcean now targeting larger customers?

novaleaf 8 hours ago 0 replies      
I don't know if it's a lot harder, but it seems the usefulness of this is severely diminished without the ability to autoscale your backend.
owenwil 8 hours ago 0 replies      
Honestly, this is awesome. It looks a lot easier to use than Amazon's EC2 load balancer, and fits right into their whole philosophy.
ak2196 4 hours ago 0 replies      
Congratulations guys! Welcome to 1995.
nik736 9 hours ago 0 replies      
How many concurrent visitors does it support?
solibra 7 hours ago 1 reply      
No mention of HTTP/2 or WebSockets support, though.
baccredited 9 hours ago 0 replies      
No ipv6?
New Macbook Pro power efficiency and time remaining macdaddy.io
260 points by feelix  1 day ago   200 comments top 22
dperfect 22 hours ago 20 replies      
> Reducing mA (power draw) rather than increasing mAh (power storage) is the most effective way to increase battery life, which would go a long way to explain manufacturers obsession with thinness (which, in electrical terms often directly translates efficiency (i.e., smallness)) in their devices rather than just giving us something that endures.

I think it goes the other way; the manufacturers' obsession with thinness (for mostly aesthetic reasons) goes a long way to explain why they're focusing on reducing power draw.

It's great that they are reducing power draw, but by making everything thinner (usually translating into decreased power storage), those gains in efficiency are largely cancelled out when it comes to battery life in real-world usage.

When creating these things, the designers, engineers, and marketing people obviously have some kind of target battery life in mind, and it seems as though the target is always "about the same battery life as the last generation," whereas there's no end to the number of compromises and engineering efforts that go toward cutting 1 mm off one dimension.

And of course, the tech bloggers all go nuts about how thin the newest generation is compared to the last, and people buy computers based on their recommendations.

Aren't these things thin enough already, at least for professionals? Why not go for some truly earth-shattering battery life on one of these laptops? Do people really care more about that 1 mm than they do about potentially hours of additional battery life?

Alex3917 22 hours ago 10 replies      
I love the new 2016 MBP, and after using it I think most of the purported issues are BS.

That said, there is one very real issue that I haven't seen mentioned anywhere: because of how thin they are, many of the minor issues that could previously be fixed in the Apple Store now require sending it out.

I had an intermittently working key on my keyboard and took it in for something I assumed would take half an hour to fix. They told me they needed to mail it in and it would be 4 - 6 days, probably 4. It's now going on day 13.

Fortunately my last one was old enough that it had zero resale value, but for anyone thinking of upgrading I'd highly recommend keeping a backup.

camdenlock 21 hours ago 3 replies      
When I bring my 2016 MBP into the office, it runs out of battery near the end of the day, and I have to plug it in for those final 2 or 3 hours. Very satisfied.

My colleague with a new Surface Book has been shaking his fist at me, wishing he hadn't listened to the FUD and left the Applesphere. Apparently he gets frustratingly bad battery life on his SB. (To be fair, though, the SB's display ejection hardware is just plain cool...)

We're programmers, BTW, writing server-side software mainly using Go, running several ancillary services locally on our dev machines using Docker to ease the development process. I'm thoroughly pleased with my 2016 MBP's battery life considering these workloads.

patryn20 23 hours ago 6 replies      
Based on my (admittedly unusual) workloads, the effective charge time of the 2016 is awful. I already only get 4-5 hours out of a 2014 MBP. I bought a 2016 and only managed 3 1/2 hours. Maybe they've improved that with some software fixes, but it's still a non starter. So I returned it.

I know I'm not a typical user, but I get much better battery life (with lower productivity) out of my windows laptop. Which was not the case when I switched to Macs years back.

camillomiller 22 hours ago 1 reply      
I think a lot of people are talking about the new MBP as if they had one. It's 2017 and too many power users still base their opinion simply on specs divination -- that is, theoretical musings about how this and that component will perform. Especially with Apple, it should be clear that the whole thing is much more complicated than a 1+1 based on a spec list.
40acres 23 hours ago 1 reply      
Amazing news for someone who's about to pull the trigger on the non-touch bar 2016 MBP.

I considered many other options but couldn't find a company with similar build quality and reputation for high quality in the mobile workstation space.

My only concern was the battery, it seems to perform at a high level.

010a 23 hours ago 5 replies      
Everyone talks about the downsides of the new MacBooks. No one talks about about how they reduced the size of the internal battery by 30%, effectively added a second screen, improved overall performance, and still kept the same battery life.
mconzen 22 hours ago 0 replies      
Have had one since November and this mirrors my experience. YMMV, but I find that the 2016 MBP charges quite a bit faster than the 2015. For me, I prefer the faster charging over the slight increase in overall battery life, with the added bonuses of the '16 using a standard charging cable. Anyone with a newer Android phone (and soon, airport shops and convenience stores nationwide) can hook you up with a spare MBP charger in a pinch. External USB-C batteries are also easily available and add as much portable charging capacity as you want to buy. Anyone serious about needing to travel with their laptop will probably find it easier to keep a '16 MBP charged than before.
berryg 19 hours ago 1 reply      
I have an older MBP. Hunting down processes and especially tabs in Safari or Chrome that cause the CPU to spike and to drain the batteries becomes a daily routine. A routine I don't like.

Is there a program that watches for excessive CPU usage and battery drain over time, warns me, points me directly to the likely culprit, and advises me to take action? For example, to close a specific tab in Safari. I would even let the program automatically close Safari tabs for certain websites.

The goal would be to optimise battery usage in the most user friendly way.

amluto 18 hours ago 0 replies      
Why is it reasonable to make claims about the current draw going down without discussing voltage?

There are only two useful figures here: power drawn from the battery (probably mW) and time the battery would last at idle. The current is meaningless unless you know that the voltage (current voltage, not nominal voltage) of the batteries on the systems being compared are the same. Given that battery voltage changes during a discharge cycle, this is far from a given even the batteries are completely identical.

If Apple really wanted to halve the battery current, they could just change the cell configuration to double the voltage. Bingo! Except the laptop won't actually run any longer on a charge.

baobrain 23 hours ago 0 replies      
I will attribute this to Intel's advances and focus in low power processors, not much credit goes to Apple in my mind.

The Macbook pro 2014 used Haswell processors which are notably more power hungry on idle than Broadwell and its successor Skylake, as Intel really focused on power draw with Broadwell.

konstruktors 18 hours ago 1 reply      
It's 76Wh in 2016 vs 99Wh in 2015 -- a roughly 23% smaller battery. To have the same battery life, it must draw about 23% less power than the 2015 model did.

Secondly, we should always compare watts instead of amps unless we know the batteries are at exactly the same voltage.

For example, for a battery life of 18h it would have to draw 4.2W on average (76Wh/18h). I don't think that's possible, because only the 12-inch MacBook uses that little power -- 4.1W (41Wh/10h).
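As a quick check of the arithmetic, using only the figures quoted above:

```python
wh_2015, wh_2016 = 99.0, 76.0

# How much smaller the 2016 battery is:
shrink = (wh_2015 - wh_2016) / wh_2015
print(f"{shrink:.1%}")          # 23.2%

# Average power for an 18-hour runtime on the 2016 pack:
print(f"{wh_2016 / 18:.2f} W")  # 4.22 W

# The 12-inch MacBook's figure (41 Wh over a rated 10 h):
print(f"{41 / 10:.2f} W")       # 4.10 W
```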

konceptz 23 hours ago 2 replies      
Very happy to hear that idle is better. Very unhappy to hear that usage is worse.

I would prefer not to think about my usage patterns and just sit back knowing my new laptop (2016 mbp) is always better than my old (2015 mbp) one in all cases.

losvedir 13 hours ago 0 replies      
Interesting, and nice to hear a little good news for once about the new MBP's, heh.

I actually am thinking about upgrading my old MBA, and have considered the non-touchbar MBP. At this point, though, I'm not sure if I should be waiting for Kaby Lake. Does anyone have stats on how much that should improve performance or battery life? It's likely the next revision of MBP, maybe later this year (?), will support the newer processors, right?

holografix 19 hours ago 0 replies      
Can't wait for the new round of MBPs. This one just seems like a catastrophe... I can deal with the butterfly keyboard, anemic CPUs, USB-C, and even the outrageous price. Just not all as a package!
tim333 12 hours ago 0 replies      
The suggested Battery Guru app is cool - running it now on my Air. Just found that charging the phone increases the draw from 450 mA to roughly 1200 mA. https://macdaddy.io/mac-battery-guru/
lucaspiller 15 hours ago 0 replies      
Why is the power draw measured in milliamps? Wouldn't watts be more appropriate? The voltage of the battery drops as it discharges, so 500mA when it's fully charged isn't the same as 500mA when it's nearly empty. Unless the reading is taken after the voltage has been regulated, but in that case, again, why are we measuring capacity in amp-hours instead of watt-hours?
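The point is easy to illustrate with numbers (the per-cell voltages below are typical Li-ion values assumed for the example, not measured ones):

```python
def power_watts(voltage_v, current_ma):
    """Instantaneous power drawn from the pack, in watts."""
    return voltage_v * current_ma / 1000.0

# The same 500 mA reading at different states of charge of a 3-cell pack:
full = power_watts(3 * 4.2, 500)          # ~4.2 V/cell when full  -> 6.3 W
nearly_empty = power_watts(3 * 3.0, 500)  # ~3.0 V/cell near empty -> 4.5 W
```

So a fixed mA figure can hide a ~40% swing in actual power draw over a discharge cycle.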
herghost 17 hours ago 0 replies      
With mine, on the plus side I am getting about 14 hours a day of work - that's connected to WiFi, using a Citrix VDE for about 9 hours, along with local office suite, email, browsing, iTunes, etc.

On the negative side, if I'm playing Civ 5 whilst plugged in the battery is still discharging - albeit very slowly. I'm not sure what would happen if it actually got to zero. I assume it would switch off, despite being plugged in?

graiz 22 hours ago 1 reply      
I suspect that 'BatteryTimeRemaining.c' is used in both iOS and OSX builds. It seems unlikely that a MacBook would have an extra battery pack.
dbg31415 15 hours ago 1 reply      
Until you stream a video or play a game, like Civilization or XCOM (from 2012). Then the battery just dies at nearly 1% per minute. "Great battery! ... until you actually go to do something that uses power." I did a side-by-side test and the battery on my new MBP lasts the same as my 2013 MBP for streaming conference calls -- under 2 hours for both. (I had them call each other with cameras aimed at each other while both streaming Netflix.)
rasz_pl 23 hours ago 1 reply      
It's very efficient _at doing NOTHING_. Start using the CPU and it's 1 hour max.
var_chris 21 hours ago 1 reply      
I ordered the 2016 MBP w/touchbar and had it completely die within 2 weeks. Just came back from lunch and it wouldn't power on. I don't think this has been as widespread of an issue as low battery performance. It was a bummer because I have been using macs for quite a while and really like to develop on them but now I am having doubts on getting another one. Would probably prefer a MacPro if they ever updated them.
List of falsehoods programmers believe in github.com
252 points by edward  1 day ago   116 comments top 18
andrewstuart2 1 day ago 3 replies      
If we really aspire to be an information-driven culture that makes informed decisions about important topics, we really ought to curate lists like these down to actual informed discussions of the items in the list.

Printing out a list of "falsehoods" littered with personal opinions and calling them "demonstrable" [1] without a shred of evidence is not going to contribute to general knowledge. There may be some utility in getting me thinking, but for the most part I just find it self-aggrandizing. For what it's worth, I do find this [2] format much more helpful. I'd very much rather see valid counterexamples proving a sweeping statement false than yet another sweeping statement that happens to cover a few sweeping statements.

Of course, the above is my personal opinion and may not be shared by the rest of the community, but please can we do better than curating a list of lists we agree with?

[1] https://chiselapp.com/user/ttmrichter/repository/gng/doc/tru...

[2] https://www.mjt.me.uk/posts/falsehoods-programmers-believe-a...

bshlgrs 1 day ago 2 replies      
This list is an awkward mix of posts containing easily-verifiable but surprising claims about various technical specifications, and posts which just make a variety of contentious claims with no particular evidence provided (I think the economics one is possibly the worst).
vorpalhex 1 day ago 2 replies      
There's a danger in mixing technical spec information (a phone number can contain non-numeric characters) and non-technical spec information (women...). Some falsehoods are provably correct/incorrect, statements about people don't fit into that mold.
elcapitan 1 day ago 1 reply      
Ok, that list is awful, with all the ideological nonsense.

But besides that, for stuff that is actually empirically testable and relevant, it would be nice to put it into a unit-testing library to automatically check certain functionality you build (for example, functions that deal with times, dates, or names).

song 1 day ago 2 replies      
Was reading the post about names... My girlfriend has a family name with two letters. In Asia, it's common enough so it's not an issue but in Europe there are some systems that refuse my girlfriend's name because it's too short.

It's kind of frustrating.

stinos 1 day ago 1 reply      
Most of these things are more about "not knowing" or "forgetting to take into account" than "not believing"? (Not a native English speaker, but surely those are not the same semantically, right?)
patmcguire 1 day ago 2 replies      
"tax - A PHP 5.4+ tax management library"


z3t4 1 day ago 1 reply      
If we are to remember the truth, it's better to list the truth rather than the falsehoods.

Example: Partner says "Don't buy the red one", then a few days later you go and buy the red one, while you should have bought the blue one. It would be better if your partner had said "Buy the blue one".

komali2 1 day ago 4 replies      
>Time passes at the same speed on top of a mountain and at the bottom of a valley

Woah, what? Like, they're talking about effects of relativity because someone is traveling "faster" as the earth spins at the top of a mountain?
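The dominant effect is actually gravitational time dilation rather than rotational speed: clocks higher in Earth's gravitational potential tick slightly faster. A back-of-the-envelope estimate in Python using the weak-field approximation g*h/c^2 (treating Everest's full height as an illustrative altitude difference):

```python
g = 9.81          # m/s^2, surface gravity
h = 8_848         # m, Everest's summit height, used as an illustrative altitude
c = 299_792_458   # m/s, speed of light

frac = g * h / c**2                  # fractional rate difference, ~1e-12
seconds_per_year = 365.25 * 24 * 3600
gain_us = frac * seconds_per_year * 1e6

print(f"fractional rate difference: {frac:.2e}")
print(f"summit clock gains ~{gain_us:.0f} microseconds per year")
```

That works out to roughly 30 microseconds per year: tiny, but the same class of effect that GPS satellite clocks must correct for to stay accurate.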

waynecolvin 1 day ago 0 replies      
Wait a moment, they have a Falsehoods series on all sorts of subjects. The Fake News fact checkers are going to try glomming on to every niche they can...
pavel_lishin 1 day ago 2 replies      
One interesting question from http://haacked.com/archive/2007/08/21/i-knew-how-to-validate...: is Fred Bloggs@example.com actually a valid email address, given that the plain ascii double-quotes seem to have been converted to "fancy" ones by the blogging software?
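For reference: RFC 5322 allows a quoted local part, so with plain ASCII double quotes `"Fred Bloggs"@example.com` is syntactically valid, while the unquoted form with a bare space is not. A quick sketch of how a typical naive pattern (an illustrative regex, not a recommendation) treats the variants:

```python
import re

# A deliberately naive pattern of the kind many apps use for "validation".
NAIVE = re.compile(r'^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')

candidates = [
    'fred.bloggs@example.com',      # plain address: everything accepts this
    '"Fred Bloggs"@example.com',    # quoted local part: valid per RFC 5322
    'Fred Bloggs@example.com',      # unquoted space: NOT valid per RFC 5322
]

for addr in candidates:
    print(addr, '->', bool(NAIVE.match(addr)))
```

The naive regex accepts only the first, rejecting the RFC-valid quoted form along with the genuinely invalid one, which is why "just validate with a regex" is itself on the falsehoods lists.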
laurentdc 1 day ago 3 replies      
> The shortest path between two points is a straight line


camus2 1 day ago 2 replies      
There is some obvious "activism" behind such a list, mixing valid technical specifications, politics, opinions and gender theory. But I guess it is a political correctness guide for people who believe in a specific political line, and also a way to publicly shame those who do not, by using the list as an argument from authority.
DashRattlesnake 1 day ago 0 replies      
When it comes to "curated" lists like this, who the curator is matters more than what they curate. Who is Kevin Deldycke, and why should I value his judgement?
ZenoArrow 1 day ago 1 reply      
>"Falsehoods Programmers Believe - A brief list of common falsehoods. A great overview and quick introduction into the world of falsehoods."


>"Falsehoods programmers believe about names">"People have names"

Is this a joke?

Cyphase 1 day ago 1 reply      
I assume OP didn't mean to link to the #postal-addresses anchor?
willis77 1 day ago 1 reply      
koolba 1 day ago 2 replies      
Under "Falsehoods Programmers Believe About "Women In Tech"":

> We're only in tech to find a husband, boyfriend or generally to get laid.

If you flip the genders around I'm pretty sure that would be true for quite a few men (at least the last part).

Intel Shows 2.5D FPGA at ISSCC eetimes.com
252 points by mrb  22 hours ago   114 comments top 13
oliwarner 17 hours ago 11 replies      
Getting a bit bored of leaks, press releases and even specs. The proof of a CPU is in how it benchmarks across real workloads.

We're 2-3 weeks away from global release. People buying components NOW are exactly the sort of people AMD should be working their socks off to stop from buying Intel parts. But instead of any firm performance figures, we have rumour.

The only reason I can imagine AMD is still keeping the press under embargo is that these Zen chips still can't compete where it matters, and if AMD lets that be known, everybody will go back to Plan A and buy an i7 6900K.

mrb 22 hours ago 2 replies      
I should have linked to the second page: http://www.eetimes.com/document.asp?doc_id=1331317&page_numb...

Relevant quote:

"AMD said its upcoming Zen x86 core fits into a 10 percent smaller die area than Intel's currently shipping second-generation 14nm processor. Analysts and even Intel engineers in the session said the Zen core is clearly competitive though many confidential variables will determine whether the die advantage translates into lower cost for AMD."

chx 21 hours ago 2 replies      
There's a lot that goes into CPU performance. Go to http://wccftech.com/ryzen-smaller-die-intel-zen-architecture... and search for "Table courtesy of the Linley Group" without quotes to see an extremely interesting table: http://cdn.wccftech.com/wp-content/uploads/2017/02/Zen-Doubl...

> Intel has a double precision IPC of 16 FLOPs per clock with Skylake as well as 2x 256-bit FMA, whereas Zen only has 8 FLOPs per clock and 2x 128-bit FMA.

FMA=Fused multiply-add. It remains to be seen whether dollar for dollar AMD matches Intel or not -- it's likely to be very application dependent.
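Those per-clock figures translate to theoretical peak throughput as cores x clock x FLOPs/clock. A sketch with illustrative, assumed core counts and clock speeds (not announced specs):

```python
def peak_dp_gflops(cores: int, ghz: float, flops_per_clock: int) -> float:
    """Theoretical peak double-precision GFLOPS: cores * GHz * FLOPs/clock."""
    return cores * ghz * flops_per_clock

# Hypothetical 8-core parts at 3.0 GHz, using the per-clock figures quoted above.
skylake_style = peak_dp_gflops(cores=8, ghz=3.0, flops_per_clock=16)  # 2x 256-bit FMA
zen_style     = peak_dp_gflops(cores=8, ghz=3.0, flops_per_clock=8)   # 2x 128-bit FMA

print(f"Skylake-style peak: {skylake_style:.0f} GFLOPS")
print(f"Zen-style peak:     {zen_style:.0f} GFLOPS")
```

Peak numbers like these rarely survive contact with real workloads, which is why the dollar-for-dollar question stays application-dependent.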

pjc50 11 hours ago 3 replies      
I'm slightly confused why everyone's talking about CPUs and ignoring the FPGA!

Anyway, it's an interesting technique. An extension of chip-on-module packaging where instead of having a circuit board made of FR4 you have a tiny PCB made of multilayer silicon. This allows fast connections between chips made with different processes (CPU/DRAM/Flash are somewhat incompatible), and joining small chips together into large ones to improve yield.

gm-conspiracy 12 hours ago 2 replies      
Was this headline changed?

I thought it referenced AMD originally?

iand675 20 hours ago 4 replies      
So the latest AMD cores seem like they might be more competitive... does anyone know which AMD processors are likely to support ECC memory? My one big gripe with Intel CPUs is that they currently only support ECC memory for non-consumer chips. I run a personal ZFS cluster and am more concerned about data integrity / cost than I am about pure CPU performance.
Groxx 22 hours ago 1 reply      
Link seems to go to "Intel Shows 2.5D FPGA at ISSCC" instead?

edit: oh, derp. nevermind. there's a second page: http://www.eetimes.com/document.asp?doc_id=1331317&page_numb...

edit edit: so apparently page 2 can't be linked to. whatever. bottom of the article content has a "next page" link.

robotjosh 12 hours ago 1 reply      
Maybe a big L2 cache is nice, but overall performance is all that really matters. I suspect the figures we have seen are from running the chip hot with a fancy cooler. Gamers won't be able to overclock much and regular people will have heat and noise to deal with. Just a hunch, we'll see in a few weeks. I'm going to buy a ryzen setup if they are not awful.
webaholic 20 hours ago 1 reply      
bdconsulting 22 hours ago 1 reply      
Link points at something else. Even the edited link below..
vegabook 16 hours ago 0 replies      
personally hoping for ECC support on the AMD consumer chips, because Xeon pricing right now is just a total and utter ripoff. For the growing in-memory database world this would be huge.
akerro 19 hours ago 0 replies      
And consumes half as many watts!
qeternity 21 hours ago 5 replies      
AMD really needs to start competing on price. Match Intel and Nvidia performance, at 75% TCO.
Replacing butter with vegetable oils does not cut heart disease risk (2016) theatlantic.com
192 points by upen  2 days ago   176 comments top 20
lngnmn 1 day ago 4 replies      
It obviously does not, because there are whole populations, such as Tibetans, who consume butter on a daily basis and are still alive and well, without any cardiovascular epidemic.

What is a risk, by the way? How is it defined, apart from the personal lifestyle, diet, habits, and current set of disorders and chronic illnesses of a particular person? Is it a likelihood? An average over some imagined population, of which some non-representative sample is treated according to some abstract model, disconnected from reality, of a few selected unproven factors in a complex, multi-causal individual phenomenon? How is the value of that number related to anything meaningful? It passes peer review because it conforms to a socially constructed consensus (the current set of memes), but no one reviews the logic and causality.

nmerouze 1 day ago 4 replies      
Cholesterol isn't bad, it's in fact very important for the production of testosterone among other things. The problem comes from the inability to use it because the body isn't healthy. Polyunsaturated fats will produce bad byproducts when it breaks down and over time makes the body sick.

We crucially lack magnesium and potassium in our diet. There are tons of studies showing the benefits of magnesium against heart disease. And it's not just the heart, cholesterol can obstruct the liver and a sick liver will cause a whole lot of problems.

pombrand 6 hours ago 0 replies      
What you really need is a study comparing saturated fats, ideally from butter AND from coconut oil, with monounsaturated fat, which is the plant fat recommended as healthiest (olive oil, avocados, almonds), not polyunsaturates, which no one claimed were particularly healthy to begin with. It also needs to look at all cardiac events, not just death (cardiac events can reduce quality of life).

If you read the wiki about saturated fats https://en.wikipedia.org/wiki/Saturated_fat_and_cardiovascul... it's clear that there's no benefit from saturated fats, but potential downfalls.

A more useful headline: "Replacing butter with vegetable oils high in monounsaturated fatty acid reduces risk of cardiac events and neurological disorders" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3705810

So much butter confirmation bias here.

firasd 2 days ago 2 replies      
In fact the trend to replace butter with vegetable oils has led people to ingest more trans fats which are provably more dangerous than saturated fat.
teslabox 2 days ago 2 replies      
I try to avoid eating vegetable oil. I understand that the polyunsaturated oils are preferentially stored to protect the body from these unstable oils.

The Center for Science in the Public Interest needs to eat some crow.

jelliclesfarm 2 days ago 2 replies      
There is nothing wrong with butter. Oils can become rancid. Clarified butter or ghee is even better as they have longer shelf life.
tluyben2 2 days ago 3 replies      
How about olive oil? The oils they name there already got a bad rap in the press here for not being very good for you. But olive oil persists.

Also; you have to wonder about these tests... Anecdotal, but too many times I see people take cola light, a light sugar substitute for their tea, and veg-oil-based butter with their 3000 kcal burger & fries & chocolate sundae.

Also, the more I read up about it, the more I think stress is far more involved than food in a lot of cases. And if you feel you have a lot of stress (some people can handle tons and feel nothing, others get burnt out with comparatively little, so it is personal) then I do not think food matters a lot: exercise probably does. Just looking at food is not enough there; weight, stress, genetic factors and exercise have to be equal for all individuals.

mark_l_watson 1 day ago 1 reply      
I think that generalizations on fats in diets don't work for analyzing different diets because of other factors like how much processed foods, sugar, and meat are in a person's diet.

My wife does well eating lots of butter and more meat. I do well by eating lots of vegetables. The only thing our diets really have in common is the avoidance of packaged/processed foods. It is some work, but people need to pay attention to how eating different foods make them feel, and over a long period of time.

trillf0rd 1 day ago 2 replies      
Like almost all nutritional studies, the evidence here is circumstantial and should be read with significant skepticism. Stephen Pauker, a professor of medicine at Tufts University and a pioneer in the field of clinical decision making, says, "Epidemiologic studies, like diagnostic tests, are probabilistic statements. They don't tell us what the truth is," he says, "but they allow both physicians and patients to estimate the truth so they can make informed decisions." (Excerpt from "Do We Really Know What Makes Us Healthy?" by Gary Taubes: http://www.nytimes.com/2007/09/16/magazine/16epidemiology-t....)

If you look at the larger body of evidence beyond this study, there are major reasons why institutional wisdom continues to advocate for the consumption of mono and polyunsaturated fats over saturated fat. For example, a larger 2016 cohort study of 115,000+ participants concluded high dietary intakes of saturated fat are associated with an increased risk of coronary heart disease (http://www.bmj.com/content/355/bmj.i5796).

Eating healthier is all about what categories of food replace current calories. If similar studies continue to show vegetable oil consumption is not protective against heart disease, it will probably make more sense to advocate replacing vegetable oil calories with fatty nuts and avocados that are much more nutritionally dense than oils (my preference for where to get fats). To jump to the conclusion that we should all eat more butter based on this one study of n=9,423, however, is bad logic.

douche 2 days ago 2 replies      
That whole myth will inevitably be revealed as a coup by the soybean industry, right? Right up there with the hogwash of the food pyramid.
arethuza 1 day ago 2 replies      
One of the most remarkable snacks I've ever had was served on a trawler in the North Sea - a large number of butteries (pastries from the North East of Scotland which, as the name suggests, are made with a large amount of butter) layered in a deep baking tray and covered with a couple of pounds of salted butter and then baked until nice and hot.

Possibly the most delicious thing I have ever consumed. NB: it was shared among the crew of 6 or so.

cel1ne 1 day ago 0 replies      
What would certainly help is adhering to the WHO recommendation of having a maximum of 25g (0.9 ounces) of sugar daily.
Xdafjgneoiwe 1 day ago 0 replies      
Isn't the Omega-6 to Omega-3 ratio one important measure of how heart-healthy a diet is? This could at least in part explain increasing heart disease risk.

I assume using lots of Omega-6 oil would push the ratio to even more unhealthy levels than what a standard american diet has.

veritas213 1 day ago 0 replies      
what about replacement with coconut oils?

Would love to see a comparison.

SimeVidas 2 days ago 1 reply      
What about coconut oil? :)
SixSigma 1 day ago 0 replies      
It does reduce cow deaths though
eruditely 1 day ago 0 replies      
But what about reducing the risk of what we DO NOT KNOW is happening to us? Perhaps a disease or effect that is difficult to measure immediately, but is affecting us ultimately.
DrScump 2 days ago 1 reply      
blogspam of:


(from April 2016)

ADDENDUM: It was also covered in The Atlantic at the time, for a more general audience:


pinnbert 2 days ago 3 replies      
I'm a software engineer with a huge ego and thus an opinion on this.
Happy 15th Birthday .NET microsoft.com
260 points by vyrotek  21 hours ago   93 comments top 21
josteink 18 hours ago 5 replies      
I've been using .NET professionally and for hobby projects since version 1.1.

While initially sceptical of this new MS platform, I was quickly sold on C# compared to Java: The language just seemed so much better. The base class library was easier to use. The tooling was rock solid. In many (but not all) ways it was just a better Java.

Since then, I must admit I think Microsoft has handled the platform really well. It has made the Java-platform seem quite literally stagnant. First class functions and lambdas first now? Really?

Clearly there have been ups and downs. There were times when Microsoft seemed dormant and wasn't paying attention to the needs and requests of its developers, happy to have conquered the market for in-house enterprise applications. Some APIs were thrown out as soon as they were delivered to market. There were times when I was about to throw it all out, due to Microsoft's best option at the time seeming inferior to pretty much everything else out there (ASP.NET WebForms with AJAX being a particularly frustrating story).

But all in all, it seems they've managed to pull through, to react in time. They've gotten better at listening to the community. Now they've even open sourced it "all" and let us in, allowed us to participate and shape the future of .NET, if we want to.

Yes. I can list a bunch of imperfections which remain with us today (like the .NET Core launch and migration-strategy not being all that convincing), but I'll leave that for another time.

Today, on .NET's 15 birthday, I'll say that I'm happy I invested in .NET those 15 years ago. It's literally paid my house, and I've enjoyed working with it while it did.

Keep up the good work, Microsoft. There's a world of developers out there who appreciate what you do.

LyalinDotCom 19 hours ago 0 replies      
Wow 15 years already, I still remember learning how to code C# for the first time in .NET 1.1 days, it seems like a lifetime ago :).

It's also worth pointing out that Visual Studio is having its 20th anniversary event and shipping Visual Studio 2017 RTM at the same time, on March 7th and 8th.


Join Us: Visual Studio 2017 Launch Event and 20th Anniversary: https://blogs.msdn.microsoft.com/visualstudio/2017/02/09/vis...

kabdib 13 hours ago 1 reply      
I started using .NET in late 2001. I'd done a few years at startups that were using Java and doing all kinds of architecture astronautics involving JavaBeans and great big complicated ways to access config files while ignoring any really hard customer-facing problems. The tools sucked really hard, and I remember it was difficult to find an IDE that didn't simply crash when you fed it 50K lines of Java to debug. Browser plugins OMFG. Working in Java wasn't a whole lot of fun.

Anyway, I joined a new company, started using C#, and one sunny afternoon about three months into my first real project I remember thinking to myself, quite clearly, "Oh thank God, the nightmare is over."

I did some Android work a few years ago, and all the bad stuff came flooding back: The fallen towers of abstraction degenerated into a sea of flag bits. The hideously complicated build and packaging systems. I haven't written any C# in five years or so, so maybe it's just as horrible now (and I've glossed over bad things like DLL hell and the global compilation pre-caching nonsense), but I sure don't enjoy anything based on Java.

rodionos 17 hours ago 0 replies      
List of Microsoft tags among the top-100 tags on Stackoverflow:

 .net, c#, asp.net, sql-server, asp.net-mvc, excel, vb.net, windows, vba, winforms, visual-studio, linq, excel-vba, sql-server-2008, wcf, visual-studio-2010

Stackoverflow counters for the above tags over time:


cyberferret 19 hours ago 3 replies      
1.5 decades. Very impressive achievement.

I am wondering though, and I will ask the question of all the .Net developers out there - Did Microsoft achieve their end goal of reducing "DLL Hell" via the .Net framework?

I didn't do a lot of development in this platform, but whenever I dabbled in it, I used to come across the same sort of issues - "Oh, your PC doesn't have the correct version of the .NET framework installed, I'll have to go and download version x from Microsoft". There were a lot of applications tied to a particular (older) version of .NET, and I remember that installing older versions over newer ones used to cause issues on quite a few network systems.

Also, got really tired of finding nice 'little' apps on the ancient web which looked like useful utilities in a 200Kb file, only to find out after downloading that you needed to then download a 100MB+ behemoth framework version x just to run it.

I know in a lot of cases it made development life easier, but on the deployment and implementation side of things, I feel we still suffered the same "DLL Hell", though not to the same level.

grovegames 13 hours ago 1 reply      
I remember starting a project for a Telco in C# before .NET was officially released. It was riddled with issues, but it made it easy to use COM+ as a means to hot-swap 'plugins' to the core of the application. This allowed us to distribute one application and have a configuration dynamically load which modules a user would need based on their role. The web wasn't very capable 15+ years ago, and this allowed for a single base to be maintained and yet shared modules to cut down on repetitive coding. You could do this with COM+ already, but manually working with QueryInterface over simple reflection added more complexity than it was worth. And when you needed to, you could easily make calls to the Windows API, so you could still do code injection to get past the parts the language didn't support yet. And DLL hell was no longer so prevalent. I was told somewhat recently that it's still running in some capacity, which I'm glad of, because I remember stressing over how irresponsible it was to use an untested language and platform for such a project.
jug 17 hours ago 3 replies      
There's so much going on that it's almost making me hesitant about taking on the new strategies like .NET Core, or about where to go with user interfaces. For the long term, is it better to lock myself into UWP and the Windows Store, or reap the benefits of Xamarin?

Much would be easier if UWP was open and DirectX-accelerated on Windows and OpenGL-accelerated on Linux or macOS, and if uploading these apps on Windows _could_ be to the Windows Store but wasn't so architecturally tied to it. I know there are ways to dodge their store even with UWP apps, but it feels like you're hacking their intended model and on thin ice for whatever will come or change in the future.

I never really liked platforms where application development is tied to distribution methods, especially not when .NET has it in its history to not be that (other than maybe a very slight nudge towards ClickOnce). I felt like a part of the .NET philosophy died with UWP.

I applaud Microsoft for where they have come and their latest developments are intriguing but I see many quite major annoyances remaining. I work almost full time with .NET though, simply because that's where my money is for now. But I'm not touching UWP either at work or at home. It's still also receiving a mixed response at best by developers, from what I've heard. It's like... "OK, so here's this limited UWP app compared to WPF on desktop, because we have to build it on an API with Windows Phone in mind which no one uses and then upload it to this single store out of our control, with the excellent company of scam apps tarnishing our trademark". I feel like this is Microsoft's unfortunate reality of where things stand which they should move from. "How do we best fix these concerns?"

Insanity 16 hours ago 1 reply      
I currently work as a Java Dev, but the first language I really learned to use was C#. I don't use it often anymore as I am on a Linux machine at home - though I did try running it with Mono some years ago.

C# remains a pleasant language to use, and feels like it is quite a few steps ahead of Java. I'd strongly recommend that people check it out if they haven't, if only to get a feel of where Java might be in a few years.

e.g: Local variable type inference. This is one of the "small" features that makes C# such a pleasure to use, and I am happy that it is proposed to be in a future JDK.

moolcool 10 hours ago 0 replies      
Now all the resumes that say '15 years of .NET experience' aren't necessarily lies
shaydoc 16 hours ago 0 replies      
Happy birthday .NET! I have known you since the start. C# had me straight away; I was so relieved to stop using C++ lol (just kidding, no I am serious). I remember the early DevelopMentor crash courses we took, before embarking on an "enterprise corporate banking ecommerce platform". Talk about jumping into a new technology at the deep end. We hit all the issues: DataSet serialization across .NET Remoting, Reflection costs, assembly size issues, remembering to strong-name the assembly (sn -k), and the GAC. Oh, and I will never forget the talk on how .NET Remoting, Web Services and XML were the future, but at least we could fall back to TCP binary when things got hairy performance-wise!

15 years later, it's a hell of a platform now!

pjmlp 18 hours ago 1 reply      
Happy birthday!

My only complaint back in the day, given Anders Hejlsberg experience, was that we only got NGEN and JIT, but not a full AOT compiler without requiring dependencies to be installed into the host system.

Well, at least .NET Native and xcopy install do finally exist.

6nf 17 hours ago 0 replies      
I remember my senior-year professor talking about this newfangled language from Microsoft called 'C sharp' while we were doing some Visual Basic 6 work. Soon after graduating I had the opportunity to choose a platform for a new web project at my first full-time job, and we chose .NET 1.1

Definitely one the best decisions I've ever made. Microsoft: Thank you for .NET.

wilsonfiifi 6 hours ago 0 replies      
IMHO C# and the .Net platform felt more approachable than the Java ecosystem. Unfortunately, when I shifted to Linux I had to leave it all behind. However, it's refreshing to see the effort being put into .Net Core, but way too much time has passed and I've invested so much into Python (and more recently Go) that I'm reluctant to switch back.

So congrats Microsoft for giving us this lovely alternative and thanks Jesse Liberty and CodeSmith!

sharpercoder 17 hours ago 0 replies      
I remember setting up 4 prototypes on 4 different platforms: sun jvm, adobe flash, dhtml and .net. The last won, mostly because ease of development and fast DirectDraw rendering of many datapoints. Never looked back, best decision in my career.
bananaboy 16 hours ago 0 replies      
I took a subject at university that included a C# component. It was 2002 so it must have been .NET 1.0. I liked it, it reminded me of Borland C++ for Windows, Delphi, and similar RAD tools that I had played around with. I remember being super confused by boxing at first!

It's really matured well. Great work, and Happy Birthday .NET!

mmcclellan 10 hours ago 0 replies      
I remember back in those early days seeing job requirements that listed 10+ years .NET. Nice to see some people actually have that now.
skc 18 hours ago 0 replies      
It's grown to become a very pleasant platform and ecosystem to dev in.

It's fairly fuss free for the most part. Good job MS.

fergie 13 hours ago 1 reply      
Why did Microsoft never target any platforms outside of Windows with .NET? (serious question)
alextooter 19 hours ago 5 replies      
Not happy at all! I love the old Visual Studio 6.0 IDE; the new devenv.exe gets slower with each version.

Go back to native, Microsoft!

cobbzilla 19 hours ago 2 replies      
so time to open source the whole thing? what a nice birthday present that would be! :)
the-dude 19 hours ago 3 replies      
Was .NET part of the anti-competitive ploy to run Java into the ground?


The world's deepest ocean trenches are packed with pollution economist.com
217 points by rglovejoy  1 day ago   79 comments top 9
pcrh 1 day ago 2 replies      
Given that polychlorinated biphenyls are so much more abundant in the deep sea trenches than even "In grossly polluted areas, like the Liao River in China" the scientist in me suspects that something other than pollution might be at play.

Perhaps deep sea organisms synthesize polychlorinated biphenyls as an adaptive response? (weirder things are known...) Or perhaps the chemical degradation of polychlorinated biphenyls is inhibited by the environment?

philipkglass 1 day ago 2 replies      
Precisely why the Mariana trench has such elevated levels of polychlorinated biphenyls remains unclear. Dr Jamieson suspects it has to do with the trench's proximity to the North Pacific Subtropical Gyre, a whirlpool hundreds of kilometres across that has amassed enormous quantities of plastics over the years, and which has the potential to send the pollutants that bind to those plastics deep into the ocean as the plastics degrade and descend.

I think that this guess is likely to be right. It would take a very long time for fluid convection and diffusion to transport these pollutants to such depths. But particles of plastic that are higher-density than water will collect a lot of these strongly hydrophobic pollutants on their surfaces and sink deeply much faster than convection/diffusion operate.

There is a "missing plastic" question in environmental science. We see a lot of plastic trash near the surface in oceans, but the visible amount is much less than the amount humans seem to be adding to the ocean each year.


Where is the "missing" plastic? It seems likely that some of it is sinking to the ocean floor, either because the plastic itself is denser than water or because it builds up denser-than-water growths on its surface. Finding polychlorinated biphenyls and brominated ethers concentrated at such depths is, IMO, pretty convincing evidence for plastics and the pollutants concentrated on their surfaces sinking into the benthic zone.

(Another part of the missing plastic may be gone due to colonization and digestion of plastics by natural hydrocarbon-eaters; see "Life in the Plastisphere: Microbial Communities on Plastic Marine Debris" for a really fascinating paper about this phenomenon.

https://www.researchgate.net/profile/Tracy_Mincer/publicatio... )

It's rather alarming to find such concentrated pollution so far away from its human sources. But at the risk of sounding callous, it's kind of good news for humans and our critical ecosystem services: these very deep ocean regions are relatively isolated from most seafood eaten by humans, and from the photic zone whose photosynthesis is an important part of the carbon cycle. If persistent pollution has to partition somewhere, partitioning into the deepest parts of the ocean is about the best case scenario for surface life.

leeoniya 1 day ago 2 replies      
A bit off-topic, but...

"If Mount Everest were flipped upside down into it, there would still be more than 2km of clear water between the mountains base and the top of the ocean"

This statement always bugs me: the elevation at Everest's base is already ~14,000 ft, and its prominence is not its full elevation. When you ask someone to imagine "flipping it upside down", they wouldn't typically think to include the surrounding terrain; they'd simply ignore the fact that the mountain already sits at great elevation.

rm_-rf_slash 1 day ago 5 replies      
I hope that the global conversation on ecological protection can evolve from climate change to tangible effects, like pollution and ocean acidification, just as climate change evolved from the use of the term global warming.

Ocean acidification in particular ought to be an issue even climate skeptics can acknowledge is a problem. Unlike climate change, which can be difficult to communicate due to its abstract nature (we had heat last summer and snow last winter so what's changing?), you can plainly test acidification with two cups of water - tap and sparkling - and a pair of litmus slips to show the difference. Then expand on how all the carbon in the atmosphere does that to the oceans, and then demonstrate what that does to life in the oceans, from the algae and plankton to the fish people eat.

Overall I think that focusing on precise tangible issues that people can observe for themselves is a better way to communicate the need for ecological protection than to be completely correct in a large and abstract assessment that people might have trouble following.

Only problem is it's hard to sex up the term "ocean acidification." For something like that we'd need an attention-grabbing shorthand, like "melty fishy death water."

Nomentatus 21 hours ago 0 replies      
Most of the plastic in the ocean comes from street litter in coastal cities. If you smoke, that includes cigarette butts (the filters are not natural cellulose anymore), as well as the plastic wrapped around the package that so many let fall to the sidewalk.
Apocryphon 1 day ago 1 reply      
Sounds like even more problems for a prospective future undersea colony beyond the crushing weight of pressure.
ridgeguy 1 day ago 1 reply      
The Economist summary says the amphipods that were found to contain PCBs were collected by baiting traps with mackerel.

I don't have access to the original research report, but I wonder if they analyzed the mackerel for those pollutants?

alkonaut 1 day ago 0 replies      
If this doesn't end badly for humans, it sure will when it's the plot of a sci-fi horror movie.
ge96 21 hours ago 0 replies      
Some day it would be awesome to build swarms of autonomous deep-sea robots. Ahhh... imagine if they ate tiny plastic, like plankton for whales, haha.

I know, just BS; not a novel idea, and until you actually do it you're just spewing smoke.

So much to learn still. I'm currently oriented toward web dev, not hardware programming, but I have a hardware friend and a math friend; the pieces will fit someday, perhaps.

The ocean is such a mystery, so entrancing.

Just imagine if you had a bunch of robots just out there doing their thing and you could "ssh" into them by satellite, haha. Would be nuts.

High-speed rail taking shape even as opponents seek to kill it sfchronicle.com
193 points by MilnerRoute  2 days ago   283 comments top 13
erentz 2 days ago 6 replies      
Huge supporter of rail, but sadly I believe a number of mistakes are being made. This is a classic example of how projects like this in the US end up monolithic, self-contained, and gold-plated to the hilt. The correct action would have been for CA to incrementally acquire and build railways along a number of corridors. Take the SF Bay for example. There should be one agency running Caltrain, the Capitol Corridor, and the ACE. It should electrify and improve all the routes. Extend reach to Monterey and other Central Valley locations. And do the same in Southern California.

The incompatibility of CAHSR with other rail strikes me as a repeat of the BART mistake.

datahack 2 days ago 11 replies      
If I had known that they were going to run this through the Central Valley instead of down the coast where it belongs, I never would have voted for it. It should have gone straight from LA to Oakland or Emeryville, then turned and gone straight to Sacramento. Instead we have a convoluted mess of rail connections that take people places they don't even want to go (no offense to Bakersfield, but it's hardly a holiday destination).

It's a farce because of how it's getting implemented, and nobody is under any illusion that this won't turn into another Bay Bridge budget monster for California.

Still a rail proponent, but bitterly disappointed in the implementation choices of this project.

jorblumesea 2 days ago 4 replies      
Forget high-speed rail; where are the commuter trains? It seems ridiculous that in most US cities public transit is mostly nonexistent. Traffic is bad in most major US cities; you'd think people would be jumping for this as urbanization increases.
niftich 2 days ago 1 reply      
Not sure why they decided to build an elaborate viaduct [1] to allow for a high-speed curve radius in Southeast Fresno, if Fresno will be a likely stop anyway. Just make all trains stop in Fresno, a place conveniently midway between SF and LA and desperate for improved connections with California's more prosperous areas. After all, isn't the whole point of HSR to improve connectivity between more than just the two termini, since those who want to go direct between SF and LA will always have a direct flight as an option?

[1] https://www.google.com/maps/d/viewer?mid=1NYS0Y3nyyYZowXFDtJ...

sand500 2 days ago 7 replies      
As I see it, the price of rail tickets will be pretty high. What's stopping the price of flights from being halved in the next couple of decades and killing this train service too?
bcheung 1 day ago 1 reply      
I would prefer to see the money spent on improving Caltrain. It really is not a viable option for a lot of people. South of SJ Diridon, the trains only run 3x in the morning and 3x in the evening and they require you to basically wake up at 5 or 6 AM depending on the train you want to catch. A lot of tech workers don't come into the office until after 10 AM.

The high speed rail won't really help people already living in Silicon Valley except for the rare few who live near downtown San Jose and plan to work in SF (or vice versa).

And yet, they are the ones who will bear the inconvenience. The proposed plans I have seen will have the train elevated 60 ft in the air, it's about 70-80 ft total from my bedroom window and the train will travel 150 mph several times an hour at around 100 dB. Additionally, they will be narrowing a major road that is 2 lanes in each direction to 1 lane in each direction to make room for the train. Which is weird, because the Caltrain tracks are just another 30 feet away so not sure why they don't just build it above that.

There was some talk of a proposal to reimburse home owners due to the loss of value to their homes if this goes through. Based on the proposal I was looking at they showed a $100K to $200K loss of property value if the train goes through.

If I actually wanted to take the train, it would take me 20-45 minutes to get to the train station in San Jose even though I am already in San Jose.

I'm really disappointed in how impractical this is for the majority of people who live and work in SF bay area and the level of cost and inconvenience it will cause.

Figs 1 day ago 0 replies      
I was against the project back when it was a ballot measure, but since then my opinion on it has changed -- I don't care if trains ever run on the damned thing, what really matters in this project is getting the right of way to the land, because they're going to put in a MASSIVE amount of fiber optic cable as they dig out the path.
jboggan 1 day ago 1 reply      
Probably should be spending that money on infrastructure (read: dam) maintenance.
klinquist 1 day ago 4 replies      
One thing that nobody has mentioned - autonomous cars. Close to the time this will be completed, I assume I'll be able to sit in my vehicle and be shuttled from SF to LA in 5 hrs in the comfort of my own vehicle.
intrasight 2 days ago 0 replies      
> America's biggest infrastructure project is both in limbo and full-speed ahead.

There is a logic to that for a project of this magnitude that may take dozens of years to complete.

arca_vorago 1 day ago 0 replies      
I loved Chomsky's response to this "Like the one I took in Japan... in the 1950's..."

For extra fun, realize that California had already begun about the same time as Japan to install more public transport, but the car companies moved muscle on them to stop it.


zlynx 2 days ago 0 replies      
"planning snafus" AKA politically motivated outright lies.
valuearb 2 days ago 3 replies      
$64B just in startup costs, for a slower trip than flying. Why this infatuation with a technology that hasn't been leading edge since the 1800s?
Internet firms' legal immunity is under threat economist.com
202 points by JumpCrisscross  1 day ago   129 comments top 17
tankenmate 1 day ago 8 replies      
And yet phone companies and postal services aren't liable for what people send using their services. The big difference of course is that digital services are much closer in effect to being a broadcast service, and the costs are much much lower than phones or snail mail.

So to say that these policies are exceptionalism is bending the truth; in fact they are a faulty, leaky middle ground between old-world information transmission systems: private (phones), semi-public (snail mail; think spam and political pamphlets), and public (mainstream media).

We live in an imperfect world and this is yet another example. The pendulum will nevertheless swing the other way for a while.

schoen 1 day ago 0 replies      
The article regards this change as almost a foregone conclusion, while mentioning that "Internet activists and the firms themselves may deplore" the loss of 230.

If you're reading this and you're in one of these categories, you can do something rather than just deplore the change. For example, get your company to write or sign on to amicus briefs in 230 cases explaining why not being liable for user activity is important to you.

Also, all different kinds of organizations can endorse or advocate for


rrggrr 1 day ago 5 replies      
The immunity must be limited to sites that are neutral, in the same way that non-political religious organizations are tax-exempt. Why? Because to claim that a site isn't responsible for a user's content is fine, until the site starts editing, censoring or weighting certain points of view. When the site loses its neutrality it ceases to be a conduit of content and instead becomes content. Indemnification from liability for a specific point-of-view is, I feel, an abridgment of free speech and I believe it is unconstitutional.
anigbrowl 1 day ago 1 reply      
Previous periods of overbearing law enforcement in the face of rapidly changing technology imho had a lot to do with the flourishing of open source and the multiplication and widespread adoption of federated protocols. Subsequent changes in the political environment (not least the collapse of Soviet-style communism and the end of the Cold War) led to a moderately happy marriage between convenience and consumerism in which the web flourished and provided wins for both consumers and capitalists.

I feel the emerging need now is for lean protocols and tools that allow us to effectively filter unwanted content and to view and manipulate metadata structures, both inherent and emergent. If you're looking for inspiration in places other than the commercial sphere, much interesting work on digital ontologies has been emerging from the EU in recent years.

chris0x00 1 day ago 3 replies      
I was briefly concerned before dismissing this all as absurd. Surely we aren't more likely to hold tech companies responsible for the actions of their users than we are to hold gun manufacturers responsible for theirs.
RangerScience 1 day ago 0 replies      
> The argument that they do not interfere in the kind of content that is shown was a key rationale for exempting them from liability.

It seems like this might be the "correct" point, at least when considering "who" is responsible for content: the degree to which you (the service provider) pick and choose content is the degree to which you are responsible for the effects of showing that content.

In an ideal implementation, such a link also correlates with organizational size: FB of today can both afford to be liable for the content shown, and can afford the work to be responsible about it. FB when it started could not.

A better example might be Tinder. When it started, would it have been reasonable for Tinder to police its users for asshole behavior? Now that Tinder is established, is it reasonable for them not to?

Keverw 17 hours ago 0 replies      
What is the real likelihood of the legal immunity being taken away? Wouldn't the community, non-profits and big internet companies lobby against this, calling up all their representatives and senators, just like how we stopped SOPA and PIPA?

Cases like this are the few where I'd approve of corporate lobbying (which I'm usually against personally). I do not like the idea of censorship myself. Is someone posting hateful stuff? Unfollow or block them. I feel censorship should be reserved for rare cases.

eternalban 1 day ago 4 replies      
> GOOGLE, Facebook and other online giants like to see their rapid rise as the product of their founders' brilliance. Others argue that their success is more a result of lucky timing and network effects: the economic forces that tend to make bigger firms even bigger.

(Take it easy with that down arrow button :) but yet others see their rapid rise as sponsored fronts for Intelligence.

[p.s. & I would be delighted to be presented with thoughtful replies that show /why/ the above view can not possibly be true.]

hyperion2010 1 day ago 1 reply      
Well this would only help incumbents, the little guys are unlikely to ever be able to police user content at scale or at cost.
shmerl 1 day ago 1 reply      
So this is pushed by DRM freaks, who dream about censoring everything, and making others pay for this abusive policing.
revelation 1 day ago 3 replies      
> it carried over to service platforms

No, it never did, Uber just made that up. The whole basis of this article is plain false, no other words for it.

(Ditto Amazon, who have brazenly been selling and even shipping electronic waste that passes no basic safety standards. No uttering of "marketplace seller" changes the legal reality.)

adventured 1 day ago 1 reply      
If they shred the legal immunity, the only platforms remaining will be the giants. It would be the ultimate moat for Facebook, Google, etc.

I've been waiting for two decades for the monsters in DC (and their many accomplices) to legislatively make it impossible to wake up in the morning with a normal business idea (not talking Napster here) and decide to just build it without having to go through an endless parade of legal/political/regulatory/licensing concerns. It doesn't appear to be far away now, the government monster is always hungry, always expansive, always looking to dig its claws into any bastions of free movement.

My suggestion to younger entrepreneurs out there: get it while you can. This glorious period of having so much freedom to create/build - no permission required - will probably seem like a distant fantasy in another decade. There is no scenario in which they aren't going to add more and more friction to the process, putting themselves in-between you and building things online as just another layer of control.

known 20 hours ago 0 replies      
What's the difference between the Communications Decency Act and https://en.wikipedia.org/wiki/Indemnity
awinder 1 day ago 3 replies      
Why do people put "fake news" in quotes like it's an invented problem? I guess there could be some disagreement on the scope of the problem, but inevitably some better-trained person will research this in the coming years and become a subject-matter expert on its effects and breadth.
SteveWatson 1 day ago 5 replies      
milesrout 1 day ago 0 replies      
You shouldn't be liable for what your users say and do.

You sure as hell should be liable when your buggy crappy software costs people money.

wang_li 1 day ago 2 replies      
There is some absurdity in the safe harbor provisions that cover people outside of US legal jurisdiction and also that provide protections even when the provider has no actual business relationship with the customer or even any idea who the customer actually is.

I've felt that an argument could be made that safe harbor provisions should only apply when the service provider can provide an actual identity associated with an account and that that person is within US legal jurisdiction.

Live slow, die old: Mounting evidence for caloric restriction in humans geroscience.com
193 points by discombobulate  6 hours ago   153 comments top 21
ellyagg 4 hours ago 3 replies      
There's a totally different way to live that also has mounting evidence, and it's a way that sounds a lot more satisfying to me.

For a while now, the "obesity paradox" has been a thing, where segments of the population who are a little heavier than one would expect actually have the best all-cause mortality rates.

Recently there's been some pushback on this "paradox", but the one I want to call attention to is here:


The problem with the obesity paradox is that it's been based on the flawed BMI. In this study, they actually did DEXA scans of elderly women's body fat percentage, and those with the highest BMI and lowest body fat percentage had the best all-cause mortality rates.

This suggests plenty of calories plus strength training is in the running for a longevity lifestyle. This makes sense intuitively and if one is familiar with the panoply of beneficial physiological effects from exercise on the human body. And, it would not come as a shock to find that increased strength from greater muscle improves, e.g., balance and coordination to prevent accidents in the elderly, nor that increased muscle mass provides a protective tissue reserve for fighting disease without the concomitant downsides of adipose.

crudbug 5 hours ago 2 replies      
The 2016 Nobel Prize was awarded to Yoshinori Ohsumi [0] for his work on autophagy [1]: cells breaking down and recycling their own components, a process that ramps up when there is a lack of calories/energy in the system.

Ancient traditions also recommend fasting two days a week. Autophagy may provide a scientific foundation for those claims.

[0] https://www.nobelprize.org/nobel_prizes/medicine/laureates/2...

[1] https://en.wikipedia.org/wiki/Autophagy

jaggederest 6 hours ago 4 replies      
As a counterexample, the Minnesota Starvation Experiment found a substantial variety of negative effects from calorie restriction:

> Among the conclusions from the study was the confirmation that prolonged semi-starvation produces significant increases in depression, hysteria and hypochondriasis as measured using the Minnesota Multiphasic Personality Inventory. Indeed, most of the subjects experienced periods of severe emotional distress and depression.[1]:161 There were extreme reactions to the psychological effects during the experiment including self-mutilation (one subject amputated three fingers of his hand with an axe, though the subject was unsure if he had done so intentionally or accidentally).[6] Participants exhibited a preoccupation with food, both during the starvation period and the rehabilitation phase. Sexual interest was drastically reduced, and the volunteers showed signs of social withdrawal and isolation.[1]:123124 The participants reported a decline in concentration, comprehension and judgment capabilities, although the standardized tests administered showed no actual signs of diminished capacity. This ought not, however, to be taken as an indication that capacity to work, study and learn will not be affected by starvation or intensive dieting. There were marked declines in physiological processes indicative of decreases in each subject's basal metabolic rate (the energy required by the body in a state of rest), reflected in reduced body temperature, respiration and heart rate.


leovonl 5 hours ago 1 reply      
There is mounting evidence that fasting and a few dietary approaches (like the ketogenic diet) are useful tools for dealing with a whole sort of issues, from autoimmune diseases and generally any excessive inflammatory responses (including CHD).

Calorie restriction OTOH is something that gives mixed results, and I suspect it may be related to the diet adopted and how it is implemented (ie, LCHF vs LFHC, fasting vs frequent small meals).

I suggest anyone interested in food/health to spend some time on google scholar searching about these approaches, it's enlightening.

wallace_f 38 minutes ago 1 reply      
Given the inconsistent, even capricious prescriptions coming from nutritional science and medicine, here are three reasons I don't worry too much about nutritional "science."

First, an anecdote:

1.) One year I decided to run a marathon, and joined a running group including some elite runners.

One was a former state champion. In that particular year he'd run an Iron Man, a marathon, and the Goofy Challenge. That is a big deal.

He was carved out of stone. At running clubs, women would surround him. They'd paw at him and giggle as they pointed at and touched this man's body of steel.

After witnessing this for months, once at a pub I saw him order, IIRC: 2 hotdogs, a hamburger meal (with its own sides), and sweet potato fries, which he washed down with several beers.

I was shocked. I asked if he always ate that way. He said he just eats what he feels like eating, with one exception: added sugar.

So that's what I do. I eat what I feel like, with three exceptions: small portions, no added sugar, and limits on artificial flavours. I've seen great results in energy, mind and physique. It appears my body is actually really well tuned to tell me what it needs without me having to do anything at all, as long as I follow those three rules.

2.) We need to be careful what we call actual science. If you can't rigidly follow the scientific method and test your theory to see if it matches with nature, it's not science.

3.) Nutrition is ridiculously important to our well being. It's more important than sex, shelter, and ego. Our bodies have been evolving for millions of years to learn how to tell us what they require. It's arrogant to declare yourself to know better here without having hard scientific evidence of it. That's what we're seeing now in the "scientific" literature. Butter is no longer a heart risk. Eggs are now healthy. All of the sudden sugar is the devil. Etc.

athenot 31 minutes ago 0 replies      
prodmerc 4 hours ago 4 replies      
Wait just a minute. What if I don't want to live longer? What if I want to live a shorter life full of energy?

Everyone is focusing on longevity when quality is probably more important. 120 years as a couch potato would be shit compared to 60 years of being active every day (whether physically or mentally).

People are against TRT because it shortens lifespan, while completely ignoring the quality of life improvements, which is just stupid.

Give me something to burn fast and bright, not slow and dim, thank you very much.

nacc 6 hours ago 4 replies      
I have a wild theory about longevity. For all the mammals, increased heart rate correlates with lower life span [0]. But heart rate itself may just be a measurement, and what's important is rate of metabolism - higher metabolism, higher heart rate, lower life span. And vice versa.

If we very boldly assume these relationships are causal and predictive to individuals, it follows lower energy consumption -> longer life. Caloric restriction might be one way to do this.

However, with lower metabolism, stuff in the body tends to age. That makes it fail more easily, so you are more likely to get sick, which will shorten your life. Therefore the optimal strategy to maximize life expectancy seems to be to control metabolism (by exercise / calorie intake etc.) down to the point where you are still unlikely to get sick, then stop.

[0] biology.stackexchange.com/questions/20489/is-there-any-relationship-between-heartbeat-rate-and-life-span-of-an-animal
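
The inverse heart-rate/lifespan correlation described above is often illustrated with a "fixed heartbeat budget" back-of-the-envelope calculation. A minimal sketch, assuming (purely for illustration) a lifetime budget of about one billion beats, a commonly quoted approximation rather than an established law:

```python
BEATS_PER_LIFETIME = 1e9  # assumed budget; illustrative only

def predicted_lifespan_years(resting_bpm: float) -> float:
    # Lifespan falls out as (total beat budget) / (beats per year)
    beats_per_year = resting_bpm * 60 * 24 * 365
    return BEATS_PER_LIFETIME / beats_per_year

# Under this toy model, halving the resting heart rate doubles lifespan
for animal, bpm in [("mouse", 600), ("cat", 150), ("human", 60)]:
    print(f"{animal} ({bpm} bpm): ~{predicted_lifespan_years(bpm):.1f} years")
```

Note that the toy model badly underestimates humans (who average roughly two to three billion beats over a lifetime), which is one reason the causal version of the theory is contested.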

coldcode 6 hours ago 19 replies      
Interesting but is it worth living being hungry all the time? Are the extra years (you might not get anyway due to illness or accident) on average worth it to you the individual?
anonu 6 hours ago 2 replies      
You might live longer on a calorie-restricted diet, but you might just die of boredom. Seriously, food is amazing. Why would you deny yourself so much of it?

I'd be a bigger advocate of intermittent fasting. This has also been shown to improve health and longevity.

memoryfab_com 5 hours ago 0 replies      
I actually believe starvation and caloric deficiency have more to do with inflammation. When you introduce foreign sustenance (food) into your body, it has to spend resources to work through it, absorb it and expel waste. The higher the calorie count, the more work the body has to do, and usually the more inflammation from working through it instead of working on recovery.

What studies (HMS, Johns Hopkins) have shown is that exercise (blood flow is vital to removing waste and, as a result, inflammation) and sleep (recovery) are as important as diet, if not more so.

These focus points are the same in fighting age-related diseases: Alzheimer's and sleeplessness.

fast_throwaway 4 hours ago 0 replies      
Dr. Jason Fung, a Canadian Nephrologist is an encyclopedic resource for the impacts of calorie restriction and fasting. While he has a couple (really good) books, the following presentation is a nice overview of his work, findings and results of therapeutic fasting:

'Therapeutic Fasting - Solving the Two-Compartment Problem': https://www.youtube.com/watch?v=tIuj-oMN-Fk

stefap2 5 hours ago 2 replies      

He existed and even thrived on a diet of "subrancid cheese and milk in every form, coarse and hard bread and small drink, generally sour whey," as William Harvey wrote. "On this sorry fare, but living in his home, free from care, did this poor man attain to such length of days."

Thomas Howard brought him to London to meet King Charles I.

Parr was treated as a spectacle in London, but the change in food and environment apparently led to his death.

sebringj 6 hours ago 0 replies      
See Roy Walford (http://www.walford.com/) for the origins of this. He was able to experiment on humans as part of the Biosphere 2 team.
thomk 5 hours ago 0 replies      
Eat to Live, by Dr. Joel Fuhrman, describes a low-calorie, high-nutrient diet, and it's a very good, easy read for anyone who is interested. I personally lost 70+ pounds on it: https://www.wikiwand.com/en/Eat_to_Live

Health = Nutrients/Calories!

grondilu 5 hours ago 0 replies      
Yet as far as I know, the longevity record holders among humans did not follow any particular diet, did they?

As caloric restriction is increasingly shown to be the only effective method of increasing longevity, I suppose more and more people will try it, and soon enough we'll start breaking records. Time will tell.

wcummings 5 hours ago 1 reply      
A low-calorie plant-based diet

>much of it from plant-based material like the Japanese sweet potato, their staple food, in contrast with the rice-heavy cuisine of the mainland.

hartator 5 hours ago 0 replies      
I wonder what's the best way to apply this to daily life: restriction at every meal vs. alternate-day restriction vs. fasting a few days every once in a while.
Hambonetasty 6 hours ago 2 replies      
Why the fuck would you want to do that? Being old sucks.
reasonattlm 5 hours ago 0 replies      
Some of the more interesting recent research, from the perspective of evidence for meaningful health benefits, and some degree of additional longevity in our long-lived species.

Caloric restriction improves health and survival of rhesus monkeys: https://dx.doi.org/10.1038/ncomms14063

Will calorie restriction work in humans? http://dx.doi.org/10.18632/aging.100581

reasonattlm 5 hours ago 0 replies      
Of note, Geroscience is a new popular-science online magazine about aging, supported by the Apollo Ventures investment fund, which is devoted to longevity-science startups. You should absolutely take a look at their site to see the sea change that has occurred in how the funding community perceives aging research:


They take a very Hallmarks-of-Aging view of the causes of aging, which I can quibble with around the edges, but it is good to see more people putting their money into the game of building ways to treat the causes of aging.

The principals at Apollo became involved in this space and raised a fund both because they are enthused by the field of therapeutics to treat aging and want to see it succeed, but also because they recognize the tremendous potential for profit here. The size of the market for enhancement biotechnologies such as rejuvenation treatments is half the human race, every adult individual.

Publishing a magazine on aging research is a way to help broaden their reach within the community, find more prospective investments, talk up their positions, and raise the profile of the field as a whole, all of which aligns fairly well with the broader goals of advocacy for longevity science. Many hands make light work, and we could certainly use more help to speed up the growth of this field of research and development.

Bill Binney: Things won't change until we put these people in jail repubblica.it
259 points by bootload  2 days ago   55 comments top 9
rdtsc 2 days ago 2 replies      
That was a pretty good interview. I read about Binney before Snowden and I think Snowden knew about him as well. That's why he had to get all that data out otherwise he would not be believed.

I remember Binney sort of being painted as a crazy conspiracy lunatic.

It is interesting how he stood up to them but was afraid for his life for a bit there. I wonder if he knew of any cases of people being suicided on US soil by the US govt, or if it was just a general precaution. Wonder if anyone from that dept. would leak anything...

equalarrow 2 days ago 1 reply      
Sadly, this will never happen; as with so many documentaries that bring government abuses to light, people will be outraged and then life will go on.

I wish the FBI or relevant body could get their shit together and say "hey, we really should look into this and do something about it."


webmaven 19 hours ago 0 replies      
Interesting conclusion:

"Would you advise young people to put their talents at the service of the NSA?"

"I am an advocate of infiltration: joining the ranks of those working and coming out through the ranks of the administration of that agency, whatever the agency may be: the CIA, the FBI, whatever. As long as you preserve your character and integrity, you do the right thing, and that is what we need: people doing the right thing. It's the only way to change things, in the end. The other way is to come from the outside and put them in jail".

cupantae 1 day ago 1 reply      
Anyone else read the title and think "these people" were bankers?
thora 1 day ago 0 replies      
An in-depth interview with Bill Binney is available here: https://www.youtube.com/watch?v=3owk7vEEOvs
CharlesW 2 days ago 9 replies      
> Tom Drake took the software we had for ThinThread, basically after the NSA cancelled our programme, and ran it against the entire NSA database in February 2002. We found that all the data about the attack was in there, where they were going, who they were connecting with, actually even the date of the attack: 9/11.

I'll bet I could design a system to reveal the exact dates of terrorist attacks after they've happened too.

3825 1 day ago 2 replies      
> I hold out very little hope for any of this to change, but we as citizens need to keep resisting anyways. Gut some government agencies, cut the budgets, stop the revolving door of lobbyists.

I don't like this idea but I respect it. I might even support it. The problem is that evil people will take this sentiment and turn it around, saying they want to cut taxes so these agencies have less money to spend, which leads to smaller government. Evil people like Ronald Reagan have argued exactly that with "starve the beast," and we know how that worked. Please do not support tax cuts in an effort to gut government agencies. If you want to gut government agencies, do so directly.

jonloldrup 2 days ago 2 replies      
When will open-minded inquiries into 9/11 become non-taboo?
Inferring Your Mobile Phone Password via WiFi Signals fermatslibrary.com
245 points by pogba101  9 hours ago   54 comments top 13
gefh 9 hours ago 7 replies      
Holy shit. From a brief scan it looks like the paper concentrates on recovering a numeric PIN, but these attacks never get worse, only better, so I assume full keyboard recovery is not too far off. What's the defense? Have your phone manage the passwords and unlock via fingerprint?
sounds 9 hours ago 0 replies      
For those who want more information on CSI (Channel State Information):


This allows you to use a custom firmware developed for the Intel 5300 wireless adapter and read the CSI values with each packet.

Every 802.11n implementation that I am aware of keeps a CSI vector (IQ values, typically as integers) within the wifi chip. Both the Wifi AP and STA do this. The CSI vector is updated with every packet, using the training data at the beginning of the packet. (802.11 is CSMA [2] so there is a fixed transmission to start the packet)

In other words, Intel has this nice tool for one of their (now somewhat dated) chips. But CSI is not restricted to Intel chips. Atheros chips have a decent but limited CSI readback method, not quite as nice as Intel's [3]. But CSI has been used for experiments on all major wifi chips out there.

With 802.11n this is used to determine the quality of signal likely to be received on each sub-carrier within the signal.

CSI is useful for many other things: RF experiments, indoor position sensing, and now apparently also password cracking.

[2] https://en.wikipedia.org/wiki/Carrier_sense_multiple_access_...

[3] http://pdcc.ntu.edu.sg/wands/Atheros/
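
To make the CSI discussion concrete, here is a minimal sketch of the kind of per-subcarrier amplitude/phase features this line of work extracts. The 30-subcarrier size matches what the Intel 5300 tool reports, but the IQ values below are random placeholders, not real measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
# One hypothetical CSI snapshot: a complex IQ value per subcarrier
csi = rng.normal(size=30) + 1j * rng.normal(size=30)

amplitude = np.abs(csi)           # per-subcarrier signal strength
phase = np.unwrap(np.angle(csi))  # per-subcarrier phase, unwrapped

# A hand moving near the antenna perturbs these values packet by packet;
# classifiers in this research map those time series to keystrokes/PINs.
print(amplitude.shape, phase.shape)  # (30,) (30,)
```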

kardos 9 hours ago 1 reply      
Direct link to PDF without the (infuriating) popups/overlays: http://delivery.acm.org/10.1145/2980000/2978397/p1068-li.pdf
andai 8 hours ago 1 reply      
See also: detecting and motion-tracking people behind walls, with the ability to recognise specific people (also using wifi).


Of particular interest: It can determine breathing patterns and heart rate.

user659 8 hours ago 0 replies      
This paper is available through Google scholar if you search for "CCS 16 password WiFi" or click here: https://www.a51.nl/sites/default/files/pdf/p1068-li.pdf

I've been a part of a similar paper that detected exact keystrokes. This one seems to build on a similar idea. The thing to keep in mind is that these systems need user- and environment-specific training. That is, if the user changes, or the user or something in the environment moves, the system needs to retrain.
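The "needs per-user training" point can be illustrated with a toy sketch (synthetic data and a simple nearest-centroid classifier, not the paper's actual pipeline): the model is just class centroids fit to one user's feature vectors, so a shift in user or environment shifts the feature distribution and invalidates the centroids.

```python
import numpy as np

def train_centroids(features, labels):
    """One centroid per keystroke class, from one user's training session."""
    return {k: features[labels == k].mean(axis=0) for k in np.unique(labels)}

def classify(centroids, sample):
    """Assign a feature vector to the nearest trained centroid."""
    return min(centroids, key=lambda k: np.linalg.norm(sample - centroids[k]))

# Synthetic stand-in: 3 keystroke classes, 20-dim CSI-like features.
rng = np.random.default_rng(1)
labels = np.repeat([0, 1, 2], 50)
features = rng.normal(size=(150, 20)) + labels[:, None] * 3.0

model = train_centroids(features, labels)
print(classify(model, features[0]))

# If the user or environment changes, the feature distribution moves and
# the stored centroids stop matching -- hence the retraining requirement.
```

Within the training session this trivially classifies well; the whole difficulty of deploying such an attack in the wild is that the centroids (or any richer model) are tied to one user, one phone position, and one room layout.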

freyr 1 hour ago 0 replies      
It looks like they're inferring the right 6-digit password about 20% of the time on their first try, presumably using the Xiaomi phone. But if they can try 20 candidates before getting locked out, they can guess the 6-digit password about 50% of the time.

With the Samsung phone, which has a much lower 1-digit recovery rate, it seems that it would be closer to 6% on the first try, and 20% by the twentieth try.
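The quoted rates can be sanity-checked with back-of-the-envelope arithmetic, assuming each digit is recovered independently (the per-digit rates below are reverse-engineered from the quoted percentages, not taken from the paper):

```python
# If each of the 6 digits is recovered independently with probability p,
# the full PIN is right on the first candidate with probability p**6.

# Xiaomi: ~20% first-try success implies a per-digit rate around 0.76.
xiaomi_first_try = 0.20
per_digit_xiaomi = xiaomi_first_try ** (1 / 6)
print(f"implied Xiaomi per-digit rate: {per_digit_xiaomi:.2f}")

# Samsung: a per-digit rate of ~0.63 gives the quoted ~6% first-try rate.
per_digit_samsung = 0.63
samsung_first_try = per_digit_samsung ** 6
print(f"implied Samsung first-try rate: {samsung_first_try:.3f}")

# Trying the top-k ranked candidates before lockout raises coverage further;
# per the comment above, 20 attempts reach ~50% (Xiaomi) / ~20% (Samsung).
```

The jump from 20% to 50% with 20 guesses also shows why lockout-after-few-attempts policies matter: the attack's candidate list is ranked, so most of its probability mass sits in the first handful of guesses.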

danielhooper 9 hours ago 1 reply      
Some weeks back I read a post here about detecting people in rooms by measuring how the physical body interferes with wifi signals. I wouldn't have imagined someone could extract useful information at this small a scale. Wow!
program_whiz 7 hours ago 3 replies      
Read the section "limitations". It only works on 10 users right now, must be trained for the pattern "per user", the phone must be sitting on a stable surface, and the gesture must be performed as close to "the same" as possible every time. This is just clickbait and "please fund our research" IMO.
adynatos 4 hours ago 0 replies      
LTE and HSDPA (and maybe older gens) have a Channel Quality Indicator, which afaik has the same role as CSI. So I wonder if the same trick can be achieved with LTE signalling? To pull that off you would need access to a BTS, but today, with open source stacks like OpenBTS or OpenAirInterface, you could roll your own.
saycheese 8 hours ago 0 replies      
RELATED: "Keystroke Recognition Using WiFi Signals"


leejoramo 6 hours ago 0 replies      
Also never enter a password in any location where a hidden video camera could be observing you. Or where a hidden microphone could be listening to your typing. Or where ruffians holding crowbars could be lurking in the next room.
baby 5 hours ago 0 replies      
I like this Fermat thing, but it would be cooler if it could add a date to the papers that, for some reason, do not have one.
Amorymeltzer 9 hours ago 2 replies      
Moral of this (and every other) story: Never, ever connect to a free, public wifi.

ETA: This was meant to be glib, given the frequency of such stories seen on HN, and the many children below are quite correctly pointing out that the real moral is https://news.ycombinator.com/item?id=13645694

       cached 15 February 2017 03:11:01 GMT